Fragile Thoughts: How Large Language Models Handle Chain-of-Thought Perturbations
arXiv:2603.03332v1 Announce Type: new Abstract: Chain-of-Thought (CoT) prompting has emerged as a foundational technique for eliciting reasoning from Large Language Models (LLMs), yet the robustness of this approach to corruptions in intermediate reasoning steps remains poorly understood. This paper presents...
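The abstract above only names the setting, so as a minimal sketch of what "corrupting an intermediate reasoning step" can mean in practice, the snippet below perturbs one step of a toy Chain-of-Thought prompt. The prompt text, perturbation modes, and the corrupt_step helper are illustrative assumptions, not the paper's protocol.

```python
# Minimal sketch of corrupting one intermediate step in a Chain-of-Thought
# prompt. Prompt text and perturbation types are illustrative assumptions.
cot_example = [
    "Q: A shop sells pens at $3 each. How much do 4 pens cost?",
    "Step 1: Each pen costs $3.",
    "Step 2: There are 4 pens, so the total is 4 * 3 = 12.",
    "Answer: $12",
]

def corrupt_step(steps, index, mode="numeric"):
    """Return a copy of the CoT with one intermediate step perturbed."""
    perturbed = list(steps)
    if mode == "numeric":
        # Swap a number to introduce an arithmetic error.
        perturbed[index] = perturbed[index].replace("12", "15")
    elif mode == "delete":
        # Drop the step entirely to break the reasoning chain.
        del perturbed[index]
    return perturbed

clean_prompt = "\n".join(cot_example[:-1])                    # model must supply the answer
corrupted_prompt = "\n".join(corrupt_step(cot_example, 2)[:-1])
# Comparing model answers on clean_prompt vs. corrupted_prompt probes
# robustness to corrupted intermediate reasoning.
print(corrupted_prompt)
```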
LEA: Label Enumeration Attack in Vertical Federated Learning
arXiv:2603.03777v1 Announce Type: new Abstract: A typical Vertical Federated Learning (VFL) scenario involves several participants collaboratively training a machine learning model, where each party has different features for the same samples, with labels held exclusively by one party. Since labels...
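To make the vertical partition described above concrete, here is a toy illustration: every party sees the same sample IDs, each holds a disjoint slice of the features, and only the "active" party holds labels. Party names and the feature split are illustrative assumptions, not details from the paper.

```python
# Toy illustration of a vertical data partition: shared sample IDs,
# disjoint feature subsets per party, labels only at the active party.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_features = 6, 8
X = rng.normal(size=(n_samples, n_features))
y = rng.integers(0, 2, size=n_samples)

sample_ids = np.arange(n_samples)                      # aligned across parties
party_a = {"ids": sample_ids, "X": X[:, :5]}           # passive party: features only
party_b = {"ids": sample_ids, "X": X[:, 5:], "y": y}   # active party: features + labels

assert party_a["X"].shape[1] + party_b["X"].shape[1] == n_features
# In VFL training, parties exchange intermediate outputs (e.g. embeddings
# or gradients) rather than raw features or labels.
```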
Structure-Aware Distributed Backdoor Attacks in Federated Learning
arXiv:2603.03865v1 Announce Type: new Abstract: While federated learning protects data privacy, it also makes the model update process vulnerable to long-term stealthy perturbations. Existing studies on backdoor attacks in federated learning mainly focus on trigger design or poisoning strategies, typically...
MUSE: A Run-Centric Platform for Multimodal Unified Safety Evaluation of Large Language Models
arXiv:2603.02482v1 Announce Type: cross Abstract: Safety evaluation and red-teaming of large language models remain predominantly text-centric, and existing frameworks lack the infrastructure to systematically test whether alignment generalizes to audio, image, and video inputs. We present MUSE (Multimodal Unified Safety...
BLUFF: Benchmarking the Detection of False and Synthetic Content across 58 Low-Resource Languages
arXiv:2603.00634v1 Announce Type: new Abstract: Multilingual falsehoods threaten information integrity worldwide, yet detection benchmarks remain confined to English or a few high-resource languages, leaving low-resource linguistic communities without robust defense tools. We introduce BLUFF, a comprehensive benchmark for detecting false...
Beyond Refusal: Probing the Limits of Agentic Self-Correction for Semantic Sensitive Information
arXiv:2602.21496v1 Announce Type: new Abstract: While defenses for structured PII are mature, Large Language Models (LLMs) pose a new threat: Semantic Sensitive Information (SemSI), where models infer sensitive identity attributes, generate reputation-harming content, or hallucinate false information. The capacity...

OpenAI’s Sam Altman announces Pentagon deal with ‘technical safeguards’
OpenAI's CEO says the company's new defense contract includes protections addressing the same issues that became a flashpoint for Anthropic.
Assessing Deanonymization Risks with Stylometry-Assisted LLM Agent
arXiv:2602.23079v1 Announce Type: new Abstract: The rapid advancement of large language models (LLMs) has enabled powerful authorship inference capabilities, raising growing concerns about unintended deanonymization risks in textual data such as news articles. In this work, we introduce an LLM...
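The agent's internals are not described in the excerpt above, so as a hedged illustration of the kind of signal stylometry-assisted deanonymization can exploit, the sketch below compares two texts by their function-word frequency profiles using cosine similarity. The word list and example texts are assumptions for illustration only.

```python
# Minimal stylometric baseline: compare two texts by relative frequencies
# of common function words, scored with cosine similarity.
import numpy as np

FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "is", "it", "for"]

def profile(text):
    tokens = text.lower().split()
    counts = np.array([tokens.count(w) for w in FUNCTION_WORDS], dtype=float)
    return counts / max(len(tokens), 1)

def similarity(text_a, text_b):
    pa, pb = profile(text_a), profile(text_b)
    denom = np.linalg.norm(pa) * np.linalg.norm(pb)
    return float(pa @ pb / denom) if denom else 0.0

known = "The report states that the committee is likely to vote in favor of the measure."
anonymous = "It is clear that the vote in the committee will go to the measure, for the most part."
print(round(similarity(known, anonymous), 3))
```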
Anthropic won’t budge as Pentagon escalates AI dispute
The Pentagon has given Anthropic until Friday to loosen AI guardrails or face potential penalties, escalating a high-stakes dispute that raises questions about government leverage, vendor dependence, and investor confidence in defense tech.
Asking Forever: Universal Activations Behind Turn Amplification in Conversational LLMs
arXiv:2602.17778v1 Announce Type: new Abstract: Multi-turn interaction length is a dominant factor in the operational costs of conversational LLMs. In this work, we present a new failure mode: turn amplification, in which a model consistently prolongs multi-turn...
AIDG: Evaluating Asymmetry Between Information Extraction and Containment in Multi-Turn Dialogue
arXiv:2602.17443v1 Announce Type: new Abstract: Evaluating the strategic reasoning capabilities of Large Language Models (LLMs) requires moving beyond static benchmarks to dynamic, multi-turn interactions. We introduce AIDG (Adversarial Information Deduction Game), a game-theoretic framework that probes the asymmetry between information...
Learning to Stay Safe: Adaptive Regularization Against Safety Degradation during Fine-Tuning
arXiv:2602.17546v1 Announce Type: new Abstract: Instruction-following language models are trained to be helpful and safe, yet their safety behavior can deteriorate under benign fine-tuning and worsen under adversarial updates. Existing defenses often offer limited protection or force a trade-off between...
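The excerpt above stops before describing the method, so as a generic illustration of the family of defenses it refers to, the sketch below adds an L2 anchor on parameter drift from the pre-fine-tuning (aligned) weights to the task loss. The paper's adaptive weighting scheme is not reproduced here; the function names are assumptions.

```python
# Generic illustration of regularized fine-tuning: the task loss is combined
# with a penalty on parameter drift from the original aligned weights.
import torch

def regularized_loss(model, ref_params, task_loss, lam=0.1):
    """Task loss plus an L2 anchor to the pre-fine-tuning parameters."""
    drift = sum(
        ((p - ref_params[name]) ** 2).sum()
        for name, p in model.named_parameters()
        if p.requires_grad
    )
    return task_loss + lam * drift

# Usage sketch: snapshot the aligned model once, then anchor every update.
# ref_params = {n: p.detach().clone() for n, p in model.named_parameters()}
# loss = regularized_loss(model, ref_params, cross_entropy(logits, labels))
# loss.backward()
```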
Hybrid Federated and Split Learning for Privacy Preserving Clinical Prediction and Treatment Optimization
arXiv:2602.15304v1 Announce Type: new Abstract: Collaborative clinical decision support is often constrained by governance and privacy rules that prevent pooling patient-level records across institutions. We present a hybrid privacy-preserving framework that combines Federated Learning (FL) and Split Learning (SL) to...
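The excerpt is cut off before the framework details, so as a minimal sketch of how FL and SL can compose, the snippet below runs each client's local "head" up to a cut layer, sends only the activations to a shared server "tail", and federated-averages the client heads. Layer sizes and the averaging step are illustrative assumptions, not the paper's design.

```python
# Minimal sketch of a split-learning forward pass combined with FedAvg of
# the client-side submodels. Sizes and schedule are illustrative.
import torch
import torch.nn as nn

client_head = nn.Sequential(nn.Linear(20, 32), nn.ReLU())   # stays at the institution
server_tail = nn.Sequential(nn.Linear(32, 2))                # stays at the server

def split_forward(x):
    smashed = client_head(x)          # only these activations leave the client
    return server_tail(smashed)

def fedavg(heads):
    """Average the state dicts of several client heads (plain FedAvg)."""
    avg = {k: torch.stack([h.state_dict()[k] for h in heads]).mean(0)
           for k in heads[0].state_dict()}
    for h in heads:
        h.load_state_dict(avg)

logits = split_forward(torch.randn(4, 20))
print(logits.shape)   # torch.Size([4, 2])
```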
Criminalising ‘Conversion Therapy’
An increasing number of jurisdictions have introduced legal bans on so-called ‘conversion therapy’ practices. Yet significant uncertainty and disagreement persist among legal scholars, policymakers and advocates about whether criminal law is an appropriate tool in this area and, if so,...