
LangFIR: Discovering Sparse Language-Specific Features from Monolingual Data for Language Steering


Sing Hieng Wong, Hassan Sajjad, A. B. Siddique

arXiv:2604.03532v1 Announce Type: new Abstract: Large language models (LLMs) show strong multilingual capabilities, yet reliably controlling the language of their outputs remains difficult. Representation-level steering addresses this by adding language-specific vectors to model activations at inference time, but identifying language-specific directions in the residual stream often relies on multilingual or parallel data that can be expensive to obtain. Sparse autoencoders (SAEs) decompose residual activations into interpretable, sparse feature directions and offer a natural basis for this search, yet existing SAE-based approaches face the same data constraint. We introduce LangFIR (Language Feature Identification via Random-token Filtering), a method that discovers language-specific SAE features using only a small amount of monolingual data and random-token sequences. Many SAE features consistently activated by target-language inputs do not encode language identity. Random-token sequences surface these language-agnostic features, allowing LangFIR to filter them out and isolate a sparse set of language-specific features. We show that these features are extremely sparse, highly selective for their target language, and causally important: directional ablation increases cross-entropy loss only for the corresponding language. Using these features to construct steering vectors for multilingual generation control, LangFIR achieves the best average accuracy and BLEU across three models (Gemma 3 1B, Gemma 3 4B, and Llama 3.1 8B), three datasets, and twelve target languages, outperforming the strongest monolingual baseline by up to and surpassing methods that rely on parallel data. Our results suggest that language identity in multilingual LLMs is localized in a sparse set of feature directions discoverable with monolingual data. Code is available at https://anonymous.4open.science/r/LangFIR-C0F5/.
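The filtering idea in the abstract can be sketched with toy numbers: measure how often each SAE feature fires on monolingual target-language text versus on random-token sequences, and keep only the features that are frequent on the former but not the latter. The feature frequencies and the 0.5 threshold below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Toy per-feature activation frequencies (fraction of tokens on which each
# SAE feature fires). These numbers are made up for illustration.
freq_target = np.array([0.90, 0.80, 0.05, 0.70, 0.02])  # monolingual target-language text
freq_random = np.array([0.85, 0.10, 0.04, 0.05, 0.01])  # random-token sequences

candidate = freq_target > 0.5            # fires often on target-language input...
language_agnostic = freq_random > 0.5    # ...but also fires on random tokens
language_specific = candidate & ~language_agnostic

print(np.flatnonzero(language_specific).tolist())  # → [1, 3]
```

Feature 0 fires on both kinds of input, so the random-token pass flags it as language-agnostic; only features 1 and 3 survive as language-specific.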

Executive Summary

This study introduces LangFIR, a method for discovering sparse language-specific features from monolingual data, addressing the challenge of language steering in multilingual large language models (LLMs). LangFIR combines sparse autoencoders (SAEs) with random-token filtering: SAE features that also activate on random-token sequences are treated as language-agnostic and filtered out, isolating a sparse set of language-specific features. LangFIR achieves the best average accuracy and BLEU across three models, three datasets, and twelve target languages, outperforming monolingual baselines and methods that rely on parallel data. The findings suggest that language identity in multilingual LLMs is localized in a sparse set of feature directions that can be discovered with monolingual data alone, which has clear implications for building more accurate and controllable multilingual language models.
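The causal check reported in the abstract, directional ablation, amounts to projecting the language direction out of every residual-stream activation and observing that cross-entropy loss rises only for the corresponding language. A minimal numpy sketch of the projection step, using random placeholder activations (in practice the direction `v` would come from the discovered SAE feature directions):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8

# Placeholder unit-norm "language direction"; in the paper this would be
# derived from the selected language-specific SAE features.
v = rng.standard_normal(d)
v /= np.linalg.norm(v)

def ablate(h, v=v):
    # Remove the component of each activation along v: h - (h . v) v
    return h - np.outer(h @ v, v)

h = rng.standard_normal((5, d))      # (tokens, d_model) toy activations
h_abl = ablate(h)
# After ablation, every activation is orthogonal to the language direction.
assert np.allclose(h_abl @ v, 0.0)
```

Running the model with ablated activations and comparing per-language loss against the unablated run is what makes the test causal rather than merely correlational.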

Key Points

  • LangFIR introduces a novel method for discovering sparse language-specific features from monolingual data.
  • LangFIR uses random-token sequences to surface language-agnostic SAE features, filters them out, and keeps the remaining sparse set of language-specific features.
  • LangFIR achieves the best average accuracy and BLEU across three models, three datasets, and twelve target languages.
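The steering step itself is simple once the language-specific features are in hand: build a vector from their directions and add it to residual-stream activations at inference time. A minimal sketch under stated assumptions (random placeholder feature directions, an illustrative strength `alpha`; the paper's actual construction and scaling may differ):

```python
import numpy as np

rng = np.random.default_rng(0)
d_model = 8

# Placeholder decoder directions for two selected language-specific features;
# in practice these would come from the trained SAE, not random data.
feature_dirs = rng.standard_normal((2, d_model))
v = feature_dirs.sum(axis=0)
v /= np.linalg.norm(v)   # unit-norm language steering vector

alpha = 4.0              # steering strength (illustrative hyperparameter)

def steer(h, v=v, alpha=alpha):
    # Add the language direction to every residual-stream activation.
    return h + alpha * v

h = rng.standard_normal((3, d_model))  # (tokens, d_model) toy activations
h_steered = steer(h)
```

In a real model this addition would be applied via a forward hook at a chosen layer, nudging generation toward the target language while leaving the rest of the computation untouched.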

Merits

Strength in Methodological Innovation

LangFIR's approach to language feature identification using sparse autoencoders and random-token filtering represents a significant methodological innovation in the field of multilingual language models.

Strength in Empirical Results

The study's empirical results demonstrate LangFIR's effectiveness in achieving state-of-the-art performance in language steering, outperforming monolingual baselines and parallel data-reliant methods.

Demerits

Limitation in Generalizability

The evaluation covers three models, three datasets, and twelve target languages; whether the results transfer to other architectures, other SAE configurations, or lower-resource languages remains untested, which limits the method's demonstrated applicability.

Limitation in Interpretability

The sparse language-specific features identified by LangFIR may be difficult to interpret, limiting the method's usefulness in applications requiring transparent language control.

Expert Commentary

This study demonstrates that sparse language-specific features can be discovered from monolingual data alone, a notable advance for multilingual language models. LangFIR's approach to language feature identification, combining sparse autoencoders with random-token filtering, has clear potential across a range of NLP tasks. However, the open questions of generalizability and the interpretability of the identified features must be addressed in future work. Even so, LangFIR's findings carry real implications for building more accurate and controllable multilingual language models, with plausible extensions to multimodal language understanding and transfer learning in NLP.

Recommendations

  • Future studies should investigate the generalizability of LangFIR to other language models and datasets, as well as its application in multimodal language understanding and transfer learning in NLP.
  • The development of more interpretable language-specific features identified by LangFIR is essential for applications requiring transparent language control.

Sources

Original: arXiv - cs.CL