Facet-Level Persona Control by Trait-Activated Routing with Contrastive SAE for Role-Playing LLMs

Wenqiu Tang, Zhen Wan, Takahiro Komamizu, Ichiro Ide

arXiv:2602.19157v1

Abstract: Personality control in Role-Playing Agents (RPAs) is commonly achieved via training-free methods that inject persona descriptions and memory through prompts or retrieval-augmented generation, or via supervised fine-tuning (SFT) on persona-specific corpora. While SFT can be effective, it requires persona-labeled data and retraining for new roles, limiting flexibility. In contrast, prompt- and RAG-based signals are easy to apply but can be diluted in long dialogues, leading to drifting and sometimes inconsistent persona behavior. To address this, we propose a contrastive Sparse AutoEncoder (SAE) framework that learns facet-level personality control vectors aligned with the Big Five 30-facet model. A new 15,000-sample leakage-controlled corpus is constructed to provide balanced supervision for each facet. The learned vectors are integrated into the model's residual space and dynamically selected by a trait-activated routing module, enabling precise and interpretable personality steering. Experiments on Large Language Models (LLMs) show that the proposed method maintains stable character fidelity and output quality across contextualized settings, outperforming Contrastive Activation Addition (CAA) and prompt-only baselines. The combined SAE+Prompt configuration achieves the best overall performance, confirming that contrastively trained latent vectors can enhance persona control while preserving dialogue coherence.

Executive Summary

This article introduces a novel approach to personality control in Role-Playing Agents (RPAs) using a contrastive Sparse AutoEncoder (SAE) framework. The proposed method, facet-level persona control by trait-activated routing, learns control vectors aligned with the Big Five 30-facet model. The learned vectors are integrated into the model's residual space and dynamically selected by a trait-activated routing module. The approach is evaluated on Large Language Models (LLMs) and compared to existing methods, demonstrating stable character fidelity and output quality across contextualized settings. The combined SAE+Prompt configuration achieves the best overall performance, suggesting that contrastively trained latent vectors can enhance persona control while preserving dialogue coherence.
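The core idea of contrastively deriving a facet control vector can be sketched in a few lines. The paper does not publish its implementation, so the following is a minimal toy illustration, assuming a simple ReLU SAE with tied encoder/decoder weights and simulated residual-stream activations; the dimensions, the `sae_encode` helper, and the synthetic high/low facet data are all hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: residual-stream activations (d_model=16) for matched
# high-facet / low-facet responses, plus a toy tied-weight SAE (d_latent=64).
d_model, d_latent, n_pairs = 16, 64, 200
W_enc = rng.normal(size=(d_model, d_latent)) / np.sqrt(d_model)  # SAE encoder
W_dec = W_enc.T.copy()                                           # tied decoder

def sae_encode(x):
    """Sparse latent code: ReLU keeps only positively activated features."""
    return np.maximum(x @ W_enc, 0.0)

# Simulate activations: the "high" set carries a hidden facet direction.
facet_dir = rng.normal(size=d_model)
facet_dir /= np.linalg.norm(facet_dir)
base = rng.normal(size=(n_pairs, d_model))
acts_high = base + 2.0 * facet_dir   # e.g. high-Extraversion responses
acts_low  = base - 2.0 * facet_dir   # matched low-Extraversion responses

# Contrastive step: the difference of mean SAE codes isolates the latent
# features that separate high- from low-facet behavior.
z_diff = sae_encode(acts_high).mean(axis=0) - sae_encode(acts_low).mean(axis=0)

# Decode back to the residual space to obtain the facet control vector.
control_vec = z_diff @ W_dec
control_vec /= np.linalg.norm(control_vec)
```

On this synthetic data the recovered `control_vec` aligns with the planted `facet_dir`, which is the behavior a contrastive extraction step relies on; the actual method additionally trains the SAE on real activations and supervises each of the 30 facets with its leakage-controlled corpus.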

Key Points

  • The proposed method learns facet-level personality control vectors aligned with the Big Five 30-facet model.
  • The contrastive SAE framework is integrated with a trait-activated routing module for dynamic selection of control vectors.
  • The approach is evaluated on LLMs, outperforming Contrastive Activation Addition (CAA) and prompt-only baselines, with the combined SAE+Prompt configuration performing best overall.
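The routing step can also be illustrated compactly. The paper does not specify the router's internals, so this is a hedged sketch under simple assumptions: a bank of per-facet control vectors, a per-facet intensity profile standing in for the routing signal, and top-k selection followed by a weighted addition to a residual-stream activation. The names `facet_vectors`, `route_and_steer`, and the `alpha` scale are all illustrative, not the paper's API.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical facet bank: one control vector per Big Five facet
# (30 facets x d_model), as the router would select from.
d_model, n_facets = 16, 30
facet_vectors = rng.normal(size=(n_facets, d_model))
facet_vectors /= np.linalg.norm(facet_vectors, axis=1, keepdims=True)

def route_and_steer(hidden, profile, top_k=3, alpha=0.8):
    """Trait-activated routing sketch: keep the top-k facets by intensity,
    combine their control vectors with signed weights, and add the result
    to the residual-stream activation `hidden`."""
    top = np.argsort(np.abs(profile))[-top_k:]   # strongest facets win
    weights = np.zeros(n_facets)
    weights[top] = profile[top]                  # signed: high vs. low facet
    steer = weights @ facet_vectors              # combined steering direction
    return hidden + alpha * steer

# Example: steer toward high facet 3, low facet 11, mildly high facet 27.
profile = np.zeros(n_facets)
profile[[3, 11, 27]] = [0.9, -0.7, 0.5]
h = rng.normal(size=d_model)
h_steered = route_and_steer(h, profile)
```

Selecting only the top-k facets keeps the intervention sparse, which is what makes the steering both precise (unrelated facets are untouched) and interpretable (the active facet indices name the traits being controlled).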

Merits

Effective Persona Control

The proposed method maintains stable character fidelity and output quality across contextualized settings, outperforming Contrastive Activation Addition (CAA) and prompt-only baselines; the combined SAE+Prompt configuration performs best overall.

Interpretable and Precise Control

The trait-activated routing module enables precise and interpretable personality steering, allowing for more effective control over RPA behavior.

Demerits

Requirement for Large-Scale Training Data

The method requires a purpose-built, leakage-controlled training corpus (15,000 samples in this work) with balanced supervision for each facet, which may be difficult to construct in some domains or applications.

Potential Overfitting to Training Data

The method may be susceptible to overfitting to the training data, particularly if the corpus is not diverse or representative of the target domain.

Expert Commentary

The proposed method represents a significant advancement in the field of RPAs, offering a novel approach to personality control that is both effective and interpretable. The use of contrastive SAE and trait-activated routing demonstrates a deep understanding of the challenges facing RPAs and a commitment to developing solutions that are robust and reliable. However, the requirement for large-scale training data and potential susceptibility to overfitting are limitations that must be addressed in future work. Overall, this article is a valuable contribution to the field and has the potential to shape the future of RPAs and AI more broadly.

Recommendations

  • Future work should focus on addressing the limitations of the proposed method, including the requirement for large-scale training data and potential susceptibility to overfitting.
  • The method should be evaluated in a variety of applications, including customer service, education, and public services, to demonstrate its potential impact and practicality.