Hiding in Plain Text: Detecting Concealed Jailbreaks via Activation Disentanglement

arXiv:2602.19396v1 — Abstract: Large language models (LLMs) remain vulnerable to jailbreak prompts that are fluent and semantically coherent, and therefore difficult to detect with standard heuristics. A particularly challenging failure mode occurs when an attacker tries to hide the malicious goal of their request by manipulating its framing to induce compliance. Because these attacks maintain malicious intent through a flexible presentation, defenses that rely on structural artifacts or goal-specific signatures can fail. Motivated by this, we introduce a self-supervised framework for disentangling semantic factor pairs in LLM activations at inference. We instantiate the framework for goal and framing and construct GoalFrameBench, a corpus of prompts with controlled goal and framing variations, which we use to train a Representation Disentanglement on Activations (ReDAct) module to extract disentangled representations in a frozen LLM. We then propose FrameShield, an anomaly detector operating on the framing representations, which improves model-agnostic detection across multiple LLM families with minimal computational overhead. Theoretical guarantees for ReDAct and extensive empirical validation show that its disentanglement effectively powers FrameShield. Finally, we use disentanglement as an interpretability probe, revealing distinct profiles for goal and framing signals and positioning semantic disentanglement as a building block for both LLM safety and mechanistic interpretability.

Executive Summary

This article introduces a novel framework for detecting concealed jailbreaks in large language models (LLMs) via activation disentanglement. The proposed approach, Representation Disentanglement on Activations (ReDAct), enables the extraction of disentangled representations of goal and framing in LLMs, allowing for improved anomaly detection and interpretability. The authors demonstrate the effectiveness of their approach through extensive empirical validations and theoretical guarantees, positioning semantic disentanglement as a key building block for LLM safety and mechanistic interpretability.

Key Points

  • Introduction of a self-supervised framework for disentangling semantic factor pairs in LLM activations
  • Development of GoalFrameBench, a corpus of prompts with controlled goal and framing variations
  • Proposal of FrameShield, an anomaly detector operating on framing representations
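The paper does not publish its implementation, but the self-supervised idea behind the pipeline above can be illustrated with a toy sketch. The key observation is that two prompts sharing a goal but differing in framing yield activation differences that lie (approximately) in the framing subspace, so that subspace can be estimated from paired data alone. Everything below — the dimensions, the linear generative model, and the SVD-based estimator — is a hypothetical simplification for illustration, not the authors' ReDAct architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy generative model: each activation is a goal component plus a framing
# component in hidden space (all dimensions here are made up).
d, d_goal, d_frame, n_pairs = 32, 4, 4, 500
G = rng.normal(size=(d, d_goal))    # ground-truth goal directions
F = rng.normal(size=(d, d_frame))   # ground-truth framing directions

goals = rng.normal(size=(n_pairs, d_goal))
f_a = rng.normal(size=(n_pairs, d_frame))   # framing of variant A
f_b = rng.normal(size=(n_pairs, d_frame))   # framing of variant B
acts_a = goals @ G.T + f_a @ F.T            # same goal, framing A
acts_b = goals @ G.T + f_b @ F.T            # same goal, framing B

# Self-supervised signal: within a pair the goal is shared, so the
# activation difference cancels the goal and lies in the framing subspace.
diffs = acts_a - acts_b
_, _, vt = np.linalg.svd(diffs - diffs.mean(0), full_matrices=False)
framing_basis = vt[:d_frame]                # orthonormal framing directions

def framing_rep(acts):
    """Project activations onto the estimated framing subspace."""
    return acts @ framing_basis.T

# Sanity check: projecting the true framing directions onto the learned
# basis should preserve (almost) all of their Frobenius norm.
proj = F.T @ framing_basis.T
recovered = np.linalg.norm(proj) ** 2 / np.linalg.norm(F) ** 2
```

In this noiseless linear setting the estimated basis recovers the true framing subspace almost exactly; the actual method operates on frozen-LLM activations, where the factors are nonlinear and only approximately separable.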

Merits

Effective Detection

The proposed approach demonstrates improved model-agnostic detection of concealed jailbreaks across multiple LLM families with minimal computational overhead.
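The paper does not specify FrameShield's detector internals; one common and cheap way to realize anomaly detection on a fixed-dimensional representation, shown here purely as an assumed stand-in, is a Mahalanobis-distance score calibrated on benign data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical framing representations: benign prompts occupy a
# well-behaved region; an unusual framing drifts away from it.
d = 4
benign = rng.normal(size=(1000, d))              # calibration set
mu = benign.mean(0)
cov = np.cov(benign, rowvar=False) + 1e-6 * np.eye(d)  # regularized
cov_inv = np.linalg.inv(cov)

def anomaly_score(x):
    """Squared Mahalanobis distance to the benign framing distribution."""
    diff = x - mu
    return float(diff @ cov_inv @ diff)

# Threshold set on held-out benign data, e.g. its 99th-percentile score.
held_out = rng.normal(size=(500, d))
threshold = np.quantile([anomaly_score(x) for x in held_out], 0.99)

suspicious = anomaly_score(np.full(d, 5.0))      # far-off framing vector
```

The overhead matches the abstract's claim of being minimal: after calibration, scoring a prompt costs one d-dimensional quadratic form on top of the forward pass the model runs anyway.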

Interpretability

The use of disentanglement as an interpretability probe reveals distinct profiles for goal and framing signals, enhancing our understanding of LLMs.

Demerits

Limited Generalizability

The approach may not be directly applicable to other types of AI models or domains, requiring further research and adaptation.

Expert Commentary

The article presents a significant contribution to the field of LLM safety and interpretability. The proposed framework for disentangling semantic factor pairs in LLM activations offers a promising approach for detecting concealed jailbreaks and enhancing our understanding of LLM behavior. The use of disentanglement as an interpretability probe provides valuable insights into the inner workings of LLMs, which can inform future research and development in this area. However, further research is needed to fully explore this approach and to address open questions such as generalizability beyond the evaluated LLM families and domains.

Recommendations

  • Future research should investigate the application of the proposed framework to other types of AI models and domains.
  • The development of more comprehensive evaluation metrics and benchmarks is necessary to fully assess the effectiveness of the proposed approach.
