MiCA Learns More Knowledge Than LoRA and Full Fine-Tuning

arXiv:2604.01694v1 Announce Type: new Abstract: Minor Component Adaptation (MiCA) is a novel parameter-efficient fine-tuning method for large language models that focuses on adapting underutilized subspaces of model representations. Unlike conventional methods such as Low-Rank Adaptation (LoRA), which target dominant subspaces, MiCA leverages Singular Value Decomposition to identify subspaces related to minor singular vectors associated with the least significant singular values and constrains the update of parameters during fine-tuning to those directions. This strategy leads to up to 5.9x improvement in knowledge acquisition under optimized training hyperparameters and a minimal parameter footprint of 6-60% compared to LoRA. These results suggest that constraining adaptation to minor singular directions provides a more efficient and stable mechanism for integrating new knowledge into pre-trained language models.

Sten Rüdiger, Sebastian Raschka

Executive Summary

The article introduces Minor Component Adaptation (MiCA), a parameter-efficient fine-tuning method for large language models that uses Singular Value Decomposition to identify underutilized minor singular subspaces and constrains parameter updates to those directions. Unlike LoRA, which adapts dominant subspaces, MiCA achieves up to a 5.9x improvement in knowledge acquisition with a parameter footprint of only 6–60% of LoRA's. The authors argue that this constraint yields a more efficient and stable mechanism for integrating new knowledge into pre-trained models without retraining the full network.

Key Points

  • MiCA targets minor singular subspaces via SVD
  • Achieves up to 5.9x improvement in knowledge acquisition
  • Requires minimal parameter footprint compared to LoRA
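The core idea behind these points can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' implementation: it assumes MiCA fixes the minor singular vectors of a pretrained weight matrix as a basis and trains only a small coefficient matrix `A` over that basis (the function names and the `k x k` parameterization are illustrative assumptions).

```python
import numpy as np

def mica_basis(W, k):
    """Return the k minor singular directions of a pretrained weight W.

    NumPy's SVD sorts singular values in descending order, so the minor
    components are the LAST k columns/rows of U and Vt.
    """
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    U_min = U[:, -k:]       # d_out x k
    V_min = Vt[-k:, :].T    # d_in  x k
    return U_min, V_min

def mica_update(W, U_min, V_min, A):
    """Apply an update confined to the minor singular subspace.

    A (k x k) is the only trainable parameter in this sketch.
    """
    return W + U_min @ A @ V_min.T

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 32))       # stand-in for a pretrained weight
U_min, V_min = mica_basis(W, k=4)
A = np.zeros((4, 4))                    # zero init: adaptation starts at W
W_new = mica_update(W, U_min, V_min, A)
assert np.allclose(W_new, W)            # zero-initialized A changes nothing
```

Zero-initializing `A` mirrors the common LoRA practice of starting the adapter at an identity update, so fine-tuning begins exactly from the pretrained model.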

Merits

Efficiency

MiCA's small parameter footprint (6–60% of LoRA's) reduces training cost, while constraining updates to minor singular directions reportedly improves knowledge acquisition by up to 5.9x.
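To make the footprint claim concrete, here is a back-of-the-envelope comparison. The LoRA count `r * (d_in + d_out)` is standard; the MiCA count assumes, as in the sketch above, that only a `k x k` coefficient matrix over a fixed SVD basis is trained (an illustrative assumption, and the chosen `d`, `r`, and `k` are hypothetical):

```python
# Hypothetical parameter-count comparison for one projection matrix.
d_in = d_out = 4096              # a typical transformer projection size
r = 8                            # LoRA rank
k = 64                           # minor singular directions (assumed)

lora_params = r * (d_in + d_out)  # 65536: two trained low-rank factors
mica_params = k * k               # 4096: one k x k matrix, basis frozen

print(mica_params / lora_params)  # 0.0625, i.e. ~6% of LoRA's footprint
```

Under these assumptions the ratio lands near the low end of the 6–60% range reported in the abstract; larger `k` or smaller LoRA rank would move it toward the high end.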

Demerits

Scope Limitation

While effective for minor subspaces, MiCA’s methodology may not generalize equally well to complex, multi-modal, or hybrid architectures requiring broader adaptation.

Expert Commentary

MiCA represents a notable step in the evolution of fine-tuning methods for large language models. Using SVD to isolate and adapt minor singular directions is mathematically simple and, per the reported results, effective in practice. By shifting adaptation from dominant to underutilized directions, MiCA sidesteps the usual trade-off between efficiency and effectiveness that constrains conventional methods, and its small parameter footprint relative to LoRA points toward more targeted adaptation. The method could also be combined with other adaptive mechanisms in layered, multi-stage fine-tuning pipelines. Importantly, the headline results were obtained under optimized training hyperparameters, so real-world deployment will likely require careful tuning of its own. Overall, MiCA addresses a clear gap in the fine-tuning landscape and merits broader empirical validation.

Recommendations

  • Integrate MiCA into hybrid fine-tuning pipelines as a complementary component
  • Conduct comparative studies across diverse LLM architectures to assess scalability and generalizability

Sources

Original: arXiv - cs.LG