Why Is RLHF Alignment Shallow? A Gradient Analysis
arXiv:2603.04851v1 Announce Type: new Abstract: Why is safety alignment in LLMs shallow? We prove that gradient-based alignment inherently concentrates on positions where harm is decided and vanishes beyond. Using a martingale decomposition of sequence-level harm, we derive an exact characterization...
FedAFD: Multimodal Federated Learning via Adversarial Fusion and Distillation
arXiv:2603.04890v1 Announce Type: new Abstract: Multimodal Federated Learning (MFL) enables clients with heterogeneous data modalities to collaboratively train models without sharing raw data, offering a privacy-preserving framework that leverages complementary cross-modal information. However, existing methods often overlook personalized client performance...
EVMbench: Evaluating AI Agents on Smart Contract Security
arXiv:2603.04915v1 Announce Type: new Abstract: Smart contracts on public blockchains now manage large amounts of value, and vulnerabilities in these systems can lead to substantial losses. As AI agents become more capable at reading, writing, and running code, it is...
The Untold Story of the Proto-Smith Era: Justice O’Connor’s Papers and the Court’s Free Exercise Revolution
Justice O’Connor’s recently released Supreme Court papers reveal the untold story of how the Court systematically dismantled religious accommodation protections in the decade leading up to Employment Division v. Smith. While Smith’s abandonment of strict scrutiny for neutral, generally applicable...
SCOTUStoday for Friday, March 6
On this day in 1857, the Supreme Court released its opinion in Dred Scott v. Sandford, holding that Scott, an enslaved man who spent time in free territory, was not […] The post SCOTUStoday for Friday, March 6 appeared first on SCOTUSblog.
Meta sued over AI smart glasses’ privacy concerns, after workers reviewed nudity, sex, and other footage
Lawyers say Meta's marketing materials promised privacy and user control over sharing footage. But an investigation found that subcontractors are reviewing footage from customers' glasses.
AriadneMem: Threading the Maze of Lifelong Memory for LLM Agents
arXiv:2603.03290v1 Announce Type: cross Abstract: Long-horizon LLM agents require memory systems that remain accurate under fixed context budgets. However, existing systems struggle with two persistent challenges in long-term dialogue: (i) \textbf{disconnected evidence}, where multi-hop answers require linking facts distributed across...
Fine-Tuning and Evaluating Conversational AI for Agricultural Advisory
arXiv:2603.03294v1 Announce Type: cross Abstract: Large Language Models show promise for agricultural advisory, yet vanilla models exhibit unsupported recommendations, generic advice lacking specific, actionable detail, and communication styles misaligned with smallholder farmer needs. In high stakes agricultural contexts, where recommendation...
TATRA: Training-Free Instance-Adaptive Prompting Through Rephrasing and Aggregation
arXiv:2603.03298v1 Announce Type: cross Abstract: Large Language Models (LLMs) have improved substantially in alignment, yet their behavior remains highly sensitive to prompt phrasing. This brittleness has motivated automated prompt engineering, but most existing methods (i) require a task-specific training set, (ii)...
Developing an AI Assistant for Knowledge Management and Workforce Training in State DOTs
arXiv:2603.03302v1 Announce Type: cross Abstract: Effective knowledge management is critical for preserving institutional expertise and improving the efficiency of workforce training in state transportation agencies. Traditional approaches, such as static documentation, classroom-based instruction, and informal mentorship, often lead to fragmented...
Escaping the BLEU Trap: A Signal-Grounded Framework with Decoupled Semantic Guidance for EEG-to-Text Decoding
arXiv:2603.03312v1 Announce Type: cross Abstract: Decoding natural language from non-invasive EEG signals is a promising yet challenging task. However, current state-of-the-art models remain constrained by three fundamental limitations: Semantic Bias (mode collapse into generic templates), Signal Neglect (hallucination based on...
Towards Self-Robust LLMs: Intrinsic Prompt Noise Resistance via CoIPO
arXiv:2603.03314v1 Announce Type: cross Abstract: Large language models (LLMs) have demonstrated remarkable and steadily improving performance across a wide range of tasks. However, LLM performance may be highly sensitive to prompt variations, especially in scenarios with limited openness or strict...
M-QUEST -- Meme Question-Understanding Evaluation on Semantics and Toxicity
arXiv:2603.03315v1 Announce Type: cross Abstract: Internet memes are a powerful form of online communication, yet their nature and reliance on commonsense knowledge make toxicity detection challenging. Identifying key features for meme interpretation and understanding is a crucial task. Previous work...
Can Large Language Models Derive New Knowledge? A Dynamic Benchmark for Biological Knowledge Discovery
arXiv:2603.03322v1 Announce Type: cross Abstract: Recent advancements in Large Language Model (LLM) agents have demonstrated remarkable potential in automatic knowledge discovery. However, rigorously evaluating an AI's capacity for knowledge discovery remains a critical challenge. Existing benchmarks predominantly rely on static...
SE-Search: Self-Evolving Search Agent via Memory and Dense Reward
arXiv:2603.03293v1 Announce Type: new Abstract: Retrieval augmented generation (RAG) reduces hallucinations and factual errors in large language models (LLMs) by conditioning generation on retrieved external knowledge. Recent search agents further cast RAG as an autonomous, multi-turn information-seeking process. However, existing...
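The retrieval step this abstract describes, conditioning generation on retrieved external knowledge, can be sketched minimally. The bag-of-words cosine scorer below is a stand-in for a trained embedding model and is not the paper's method:

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words 'embedding'; a real RAG system would use a trained encoder."""
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, corpus, k=2):
    """Core RAG step: rank documents by similarity to the query and return the
    top-k, which would then be prepended to the LLM prompt as grounding context."""
    q = embed(query)
    return sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]
```

A search agent in the sense of the abstract would wrap this single-shot retrieval in a multi-turn loop, re-querying based on intermediate model output.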
Combating data scarcity in recommendation services: Integrating cognitive types of VARK and neural network technologies (LLM)
arXiv:2603.03309v1 Announce Type: new Abstract: Cold start scenarios present fundamental obstacles to effective recommendation generation, particularly when dealing with users lacking interaction history or items with sparse metadata. This research proposes an innovative hybrid framework that leverages Large Language Models...
StructLens: A Structural Lens for Language Models via Maximum Spanning Trees
arXiv:2603.03328v1 Announce Type: new Abstract: Language exhibits inherent structures, a property that explains both language acquisition and language change. Given this characteristic, we expect language models to manifest internal structures as well. While interpretability research has investigated the components of...
The CompMath-MCQ Dataset: Are LLMs Ready for Higher-Level Math?
arXiv:2603.03334v1 Announce Type: new Abstract: The evaluation of Large Language Models (LLMs) on mathematical reasoning has largely focused on elementary problems, competition-style questions, or formal theorem proving, leaving graduate-level and computational mathematics relatively underexplored. We introduce CompMath-MCQ, a new benchmark...
Compressed Sensing for Capability Localization in Large Language Models
arXiv:2603.03335v1 Announce Type: new Abstract: Large language models (LLMs) exhibit a wide range of capabilities, including mathematical reasoning, code generation, and linguistic behaviors. We show that many capabilities are highly localized to small subsets of attention heads within Transformer architectures....
AOI: Turning Failed Trajectories into Training Signals for Autonomous Cloud Diagnosis
arXiv:2603.03378v1 Announce Type: new Abstract: Large language model (LLM) agents offer a promising data-driven approach to automating Site Reliability Engineering (SRE), yet their enterprise deployment is constrained by three challenges: restricted access to proprietary data, unsafe action execution under permission-governed...
Directional Neural Collapse Explains Few-Shot Transfer in Self-Supervised Learning
arXiv:2603.03530v1 Announce Type: new Abstract: Frozen self-supervised representations often transfer well with only a few labels across many semantic tasks. We argue that a single geometric quantity, \emph{directional} CDNV (decision-axis variance), sits at the core of two favorable behaviors: strong...
Local Shapley: Model-Induced Locality and Optimal Reuse in Data Valuation
arXiv:2603.03672v1 Announce Type: new Abstract: The Shapley value provides a principled foundation for data valuation, but exact computation is #P-hard due to the exponential coalition space. Existing accelerations remain global and ignore a structural property of modern predictors: for a...
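For context, the exact computation the abstract calls #P-hard averages each player's marginal contribution over every coalition. A direct, exponential-cost sketch with a toy value function (this is the baseline the paper accelerates, not its method):

```python
from itertools import combinations
from math import factorial

def shapley_values(players, v):
    """Exact Shapley values by enumerating all coalitions of the other players.
    Cost is exponential in len(players), which is why exact computation is hard."""
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for k in range(n):
            for S in combinations(others, k):
                # Standard Shapley weight |S|! (n - |S| - 1)! / n!
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += w * (v(frozenset(S) | {i}) - v(frozenset(S)))
        phi[i] = total
    return phi

# Hypothetical "data value": coalition utility is its size squared.
v = lambda S: len(S) ** 2
phi = shapley_values(["a", "b", "c"], v)
```

By the efficiency axiom the values sum to v of the full coalition (here 9), and by symmetry each player receives 3.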
A Stein Identity for q-Gaussians with Bounded Support
arXiv:2603.03673v1 Announce Type: new Abstract: Stein's identity is a fundamental tool in machine learning with applications in generative models, stochastic optimization, and other problems involving gradients of expectations under Gaussian distributions. Less attention has been paid to problems with non-Gaussian...
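For reference, the classical Gaussian Stein identity the abstract alludes to reads, for suitably smooth $f$:

```latex
\mathbb{E}\left[ X f(X) \right] = \mathbb{E}\left[ f'(X) \right],
\qquad X \sim \mathcal{N}(0,1).
```

The paper's q-Gaussian analogue for bounded-support distributions is not reproduced here.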
Why Do Unlearnable Examples Work: A Novel Perspective of Mutual Information
arXiv:2603.03725v1 Announce Type: new Abstract: The volume of freely scraped data on the Internet has driven the tremendous success of deep learning. Along with this comes the growing concern about data privacy and security. Numerous methods for generating unlearnable examples...
MOOSE-Star: Unlocking Tractable Training for Scientific Discovery by Breaking the Complexity Barrier
arXiv:2603.03756v1 Announce Type: new Abstract: While large language models (LLMs) show promise in scientific discovery, existing research focuses on inference or feedback-driven training, leaving the direct modeling of the generative reasoning process, $P(\text{hypothesis}|\text{background})$ ($P(h|b)$), unexplored. We demonstrate that directly training...
LEA: Label Enumeration Attack in Vertical Federated Learning
arXiv:2603.03777v1 Announce Type: new Abstract: A typical Vertical Federated Learning (VFL) scenario involves several participants collaboratively training a machine learning model, where each party has different features for the same samples, with labels held exclusively by one party. Since labels...
Inverse Contextual Bandits without Rewards: Learning from a Non-Stationary Learner via Suffix Imitation
arXiv:2603.03778v1 Announce Type: new Abstract: We study the Inverse Contextual Bandit (ICB) problem, in which a learner seeks to optimize a policy while an observer, who cannot access the learner's rewards and only observes actions, aims to recover the underlying...
k-hop Fairness: Addressing Disparities in Graph Link Prediction Beyond First-Order Neighborhoods
arXiv:2603.03867v1 Announce Type: new Abstract: Link prediction (LP) plays a central role in graph-based applications, particularly in social recommendation. However, real-world graphs often reflect structural biases, most notably homophily, the tendency of nodes with similar attributes to connect. While this...
Graph-GRPO: Stabilizing Multi-Agent Topology Learning via Group Relative Policy Optimization
arXiv:2603.02701v1 Announce Type: new Abstract: Optimizing communication topology is fundamental to the efficiency and effectiveness of Large Language Model (LLM)-based Multi-Agent Systems (MAS). While recent approaches utilize reinforcement learning to dynamically construct task-specific graphs, they typically rely on single-sample policy...
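The abstract contrasts single-sample policy updates with Group Relative Policy Optimization. The generic group-relative advantage that GRPO-style methods build on can be sketched as follows; this is the standard formulation (normalize each sampled trajectory's reward against its own sampling group), not Graph-GRPO's specific objective:

```python
import statistics

def group_relative_advantages(rewards):
    """Group-relative advantage: z-score each reward against the mean and
    (population) std of the group of samples drawn for the same prompt/task.
    Removes the need for a learned value baseline."""
    mu = statistics.fmean(rewards)
    sd = statistics.pstdev(rewards) or 1.0  # guard against a degenerate group
    return [(r - mu) / sd for r in rewards]

adv = group_relative_advantages([1.0, 2.0, 3.0])
```

Each advantage then weights the policy-gradient term for its trajectory in place of a critic's estimate.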