
Extracting and Steering Emotion Representations in Small Language Models: A Methodological Comparison


Jihoon Jeong

arXiv:2604.04064v1

Abstract: Small language models (SLMs) in the 100M-10B parameter range increasingly power production systems, yet whether they possess the internal emotion representations recently discovered in frontier models remains unknown. We present the first comparative analysis of emotion vector extraction methods for SLMs, evaluating 9 models across 5 architectural families (GPT-2, Gemma, Qwen, Llama, Mistral) using 20 emotions and two extraction methods (generation-based and comprehension-based). Generation-based extraction produces statistically superior emotion separation (Mann-Whitney p = 0.007; Cohen's d = -107.5), with the advantage modulated by instruction tuning and architecture. Emotion representations localize at middle transformer layers (~50% depth), following a U-shaped curve that is architecture-invariant from 124M to 3B parameters. We validate these findings against representational anisotropy baselines across 4 models and confirm causal behavioral effects through steering experiments, independently verified by an external emotion classifier (92% success rate, 37/40 scenarios). Steering reveals three regimes -- surgical (coherent text transformation), repetitive collapse, and explosive (text degradation) -- quantified by perplexity ratios and separated by model architecture rather than scale. We document cross-lingual emotion entanglement in Qwen, where steering activates semantically aligned Chinese tokens that RLHF does not suppress, raising safety concerns for multilingual deployment. This work provides methodological guidelines for emotion research on open-weight models and contributes to the Model Medicine series by bridging external behavioral profiling with internal representational analysis.

Executive Summary

This study presents the first comparative analysis of emotion vector extraction methods for small language models (SLMs) in the 100M-10B parameter range. The researchers evaluate nine SLMs across five architectural families (GPT-2, Gemma, Qwen, Llama, Mistral) using 20 emotions and two extraction methods, generation-based and comprehension-based. Their findings indicate that generation-based extraction produces statistically superior emotion separation, with the advantage modulated by instruction tuning and architecture. The study also shows that emotion representations localize at middle transformer layers (~50% depth), following a U-shaped curve that is architecture-invariant from 124M to 3B parameters. The results are validated against representational anisotropy baselines and confirmed causally through steering experiments, which reveal three regimes: surgical (coherent text transformation), repetitive collapse, and explosive (text degradation). The study contributes to the Model Medicine series by bridging external behavioral profiling with internal representational analysis.
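Both extraction methods ultimately amount to finding a direction in activation space that separates emotional from neutral processing. The sketch below illustrates the idea with the common difference-of-means recipe from the steering-vector literature; the data is synthetic and the paper's actual extraction pipeline may differ in detail:

```python
import numpy as np

def extract_emotion_vector(emotion_acts, neutral_acts):
    """Difference-of-means emotion vector at one layer: mean activation
    on emotion-eliciting prompts minus mean on neutral prompts,
    normalized to unit length. (A standard steering-vector recipe,
    not necessarily the paper's exact procedure.)"""
    v = emotion_acts.mean(axis=0) - neutral_acts.mean(axis=0)
    return v / np.linalg.norm(v)

# Synthetic hidden states at one layer: 32 prompts x 512 dimensions.
rng = np.random.default_rng(0)
direction = rng.normal(size=512)          # latent "emotion" axis (toy)
neutral = rng.normal(size=(32, 512))
emotional = neutral + 2.0 * direction      # emotional prompts shifted along it

v = extract_emotion_vector(emotional, neutral)
# Projections onto v separate the two prompt sets.
sep = emotional @ v - neutral @ v
print(sep.mean() > 0)  # prints True
```

Comprehension-based extraction would swap the prompt sets (the model reads emotional text rather than generating it), but the vector arithmetic is the same.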

Key Points

  • The study presents the first comparative analysis of emotion vector extraction methods for SLMs, covering nine models across five architectural families.
  • Generation-based extraction produces statistically superior emotion separation (Mann-Whitney p = 0.007), with the advantage modulated by instruction tuning and architecture.
  • Emotion representations localize at middle transformer layers (~50% depth), following a U-shaped curve that is architecture-invariant from 124M to 3B parameters.
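The layer-localization finding comes from sweeping a separation score across depth and finding where it peaks. A toy version of that sweep (synthetic activations with a signal planted at mid-depth; the `separation` score here is a simple Cohen's d along the difference-of-means direction, an assumption rather than the paper's metric) looks like this:

```python
import numpy as np

def separation(emotion_acts, neutral_acts):
    """Cohen's d between projections onto the difference-of-means
    direction: one scalar separation score per layer."""
    v = emotion_acts.mean(axis=0) - neutral_acts.mean(axis=0)
    v /= np.linalg.norm(v)
    a, b = emotion_acts @ v, neutral_acts @ v
    pooled = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    return (a.mean() - b.mean()) / pooled

rng = np.random.default_rng(1)
n_layers, d = 12, 256
direction = rng.normal(size=d)
scores = []
for layer in range(n_layers):
    # Toy model: the emotion signal is strongest at ~50% depth.
    signal = np.sin(np.pi * layer / (n_layers - 1))
    neutral = rng.normal(size=(40, d))
    emotional = rng.normal(size=(40, d)) + signal * direction
    scores.append(separation(emotional, neutral))

best = int(np.argmax(scores))
print(best)  # layer index with strongest separation, near n_layers // 2
```

On real models the per-layer activations would come from forward hooks rather than synthetic draws, but the sweep-and-argmax logic is the same.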

Merits

Methodological Innovation

The study is the first to systematically compare generation-based and comprehension-based emotion vector extraction across SLM families, and it controls for representational anisotropy rather than taking raw separation scores at face value.

Cross-Lingual Emotion Entanglement Discovery

The study reveals cross-lingual emotion entanglement in Qwen: steering activates semantically aligned Chinese tokens that RLHF does not suppress. This raises safety concerns for multilingual deployment, since alignment applied in one language may not constrain emotionally steered outputs surfacing in another.

Causal Behavioral Effects Validation

The study validates its findings causally through steering experiments, with behavioral effects independently verified by an external emotion classifier (92% success rate, 37/40 scenarios). This makes it harder to dismiss the extracted vectors as merely correlational artifacts.
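Steering adds a scaled emotion vector into a layer's hidden states, and the paper sorts the outcomes into three regimes by perplexity ratio. The sketch below shows the standard activation-addition step and an illustrative regime classifier; the thresholds and the direction of the perplexity shifts are hypothetical assumptions, not values from the paper:

```python
import numpy as np

def steer_hidden(hidden, v, alpha):
    """Activation addition: inject a scaled, unit-normalized emotion
    vector into one layer's hidden states. (Hook placement and scaling
    in the paper may differ.)"""
    return hidden + alpha * (v / np.linalg.norm(v))

def classify_regime(ppl_steered, ppl_base, low=0.3, high=5.0):
    """Illustrative split of the three steering regimes by perplexity
    ratio; the thresholds here are hypothetical."""
    ratio = ppl_steered / ppl_base
    if ratio < low:
        return "repetitive collapse"  # looping text is highly predictable
    if ratio > high:
        return "explosive"            # degraded text, perplexity blows up
    return "surgical"                 # coherent emotional transformation

print(classify_regime(20.0, 18.0))   # -> surgical
print(classify_regime(2.0, 18.0))    # -> repetitive collapse
print(classify_regime(200.0, 18.0))  # -> explosive
```

In practice `steer_hidden` would run inside a forward hook at the chosen middle layer, and perplexity would be measured on the generated continuations with and without steering.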

Demerits

Limited Model Selection

The study focuses on a limited selection of SLMs, which may not be representative of the broader range of models in the 100M-10B parameter range.

Lack of Interpretability

The study identifies where emotion representations localize but offers limited mechanistic insight into how they arise, and its layer-localization results extend only to 3B parameters, which may limit how well the findings transfer to larger models.

Expert Commentary

The study makes a significant contribution to natural language processing by providing the first comprehensive comparison of emotion vector extraction methods for SLMs. The findings matter for building more robust and interpretable language models across applications ranging from customer-service chatbots to emotional-intelligence analytics. At the same time, the cross-lingual entanglement result raises pointed questions about the safety of deploying steerable models in multilingual contexts. Future research should aim at models that capture the nuances of human emotion while weighing the risks and benefits of real-world deployment.

Recommendations

  • Develop more robust and interpretable language models that can capture the nuances of human emotion.
  • Conduct further research on the safety and deployment of language models in multilingual contexts.

Sources

Original: arXiv - cs.CL