Learning Unified Distance Metric for Heterogeneous Attribute Data Clustering
arXiv:2603.04458v1 Announce Type: new Abstract: Datasets composed of numerical and categorical attributes (also called mixed data hereinafter) are common in real clustering tasks. Differing from numerical attributes that indicate tendencies between two concepts (e.g., high and low temperature) with their...
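As background for the mixed-data setting (this is a generic Gower-style baseline, not the paper's learned unified metric): a common hand-crafted distance treats numerical attributes with range-normalized absolute differences and categorical attributes with simple 0/1 mismatch. The function and record names below are illustrative assumptions.

```python
def gower_distance(x, y, is_numeric, ranges):
    """Gower-style distance over mixed numerical/categorical records."""
    total = 0.0
    for i, (a, b) in enumerate(zip(x, y)):
        if is_numeric[i]:
            total += abs(a - b) / ranges[i]   # range-normalized numeric gap
        else:
            total += 0.0 if a == b else 1.0   # 0/1 categorical mismatch
    return total / len(x)

# Two toy records: (temperature, humidity, weather)
r1 = (30.0, 0.4, "sunny")
r2 = (18.0, 0.7, "rainy")
print(gower_distance(r1, r2, [True, True, False], [40.0, 1.0, None]))
```

A limitation of this fixed scheme, which motivates learning a unified metric instead, is that the 0/1 mismatch treats every pair of distinct categorical values as equally far apart.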
Standing on the Shoulders of Giants: Rethinking EEG Foundation Model Pretraining via Multi-Teacher Distillation
arXiv:2603.04478v1 Announce Type: new Abstract: Pretraining for electroencephalogram (EEG) foundation models has predominantly relied on self-supervised masked reconstruction, a paradigm largely adapted from and inspired by the success of vision and language foundation models. However, unlike images and text, EEG...
Invariant Causal Routing for Governing Social Norms in Online Market Economies
arXiv:2603.04534v1 Announce Type: new Abstract: Social norms are stable behavioral patterns that emerge endogenously within economic systems through repeated interactions among agents. In online market economies, such norms -- like fair exposure, sustained participation, and balanced reinvestment -- are critical...
PDE foundation model-accelerated inverse estimation of system parameters in inertial confinement fusion
arXiv:2603.04606v1 Announce Type: new Abstract: PDE foundation models are typically pretrained on large, diverse corpora of PDE datasets and can be adapted to new settings with limited task-specific data. However, most downstream evaluations focus on forward problems, such as autoregressive...
Probabilistic Dreaming for World Models
arXiv:2603.04715v1 Announce Type: new Abstract: "Dreaming" lets agents learn from imagined experiences, enabling more robust and sample-efficient learning of world models. In this work, we consider innovations to the state-of-the-art Dreamer model using probabilistic methods that enable: (1) the...
Why Is RLHF Alignment Shallow? A Gradient Analysis
arXiv:2603.04851v1 Announce Type: new Abstract: Why is safety alignment in LLMs shallow? We prove that gradient-based alignment inherently concentrates on positions where harm is decided and vanishes beyond. Using a martingale decomposition of sequence-level harm, we derive an exact characterization...
FedAFD: Multimodal Federated Learning via Adversarial Fusion and Distillation
arXiv:2603.04890v1 Announce Type: new Abstract: Multimodal Federated Learning (MFL) enables clients with heterogeneous data modalities to collaboratively train models without sharing raw data, offering a privacy-preserving framework that leverages complementary cross-modal information. However, existing methods often overlook personalized client performance...
EVMbench: Evaluating AI Agents on Smart Contract Security
arXiv:2603.04915v1 Announce Type: new Abstract: Smart contracts on public blockchains now manage large amounts of value, and vulnerabilities in these systems can lead to substantial losses. As AI agents become more capable at reading, writing, and running code, it is...
The Untold Story of the Proto-Smith Era: Justice O’Connor’s Papers and the Court’s Free Exercise Revolution
Justice O’Connor’s recently released Supreme Court papers reveal the untold story of how the Court systematically dismantled religious accommodation protections in the decade leading up to Employment Division v. Smith. While Smith’s abandonment of strict scrutiny for neutral, generally applicable...
SCOTUStoday for Friday, March 6
On this day in 1857, the Supreme Court released its opinion in Dred Scott v. Sandford, holding that Scott, an enslaved man who spent time in free territory, was not […] The post SCOTUStoday for Friday, March 6 appeared first on SCOTUSblog.
Meta sued over AI smart glasses’ privacy concerns, after workers reviewed nudity, sex, and other footage
Lawyers say Meta's marketing materials promised privacy and user control over sharing footage. But an investigation found that subcontractors are reviewing footage from customers' glasses.
AriadneMem: Threading the Maze of Lifelong Memory for LLM Agents
arXiv:2603.03290v1 Announce Type: cross Abstract: Long-horizon LLM agents require memory systems that remain accurate under fixed context budgets. However, existing systems struggle with two persistent challenges in long-term dialogue: (i) \textbf{disconnected evidence}, where multi-hop answers require linking facts distributed across...
Fine-Tuning and Evaluating Conversational AI for Agricultural Advisory
arXiv:2603.03294v1 Announce Type: cross Abstract: Large Language Models show promise for agricultural advisory, yet vanilla models exhibit unsupported recommendations, generic advice lacking specific, actionable detail, and communication styles misaligned with smallholder farmer needs. In high-stakes agricultural contexts, where recommendation...
TATRA: Training-Free Instance-Adaptive Prompting Through Rephrasing and Aggregation
arXiv:2603.03298v1 Announce Type: cross Abstract: Large Language Models (LLMs) have improved substantially in alignment, yet their behavior remains highly sensitive to prompt phrasing. This brittleness has motivated automated prompt engineering, but most existing methods (i) require a task-specific training set, (ii)...
Developing an AI Assistant for Knowledge Management and Workforce Training in State DOTs
arXiv:2603.03302v1 Announce Type: cross Abstract: Effective knowledge management is critical for preserving institutional expertise and improving the efficiency of workforce training in state transportation agencies. Traditional approaches, such as static documentation, classroom-based instruction, and informal mentorship, often lead to fragmented...
Escaping the BLEU Trap: A Signal-Grounded Framework with Decoupled Semantic Guidance for EEG-to-Text Decoding
arXiv:2603.03312v1 Announce Type: cross Abstract: Decoding natural language from non-invasive EEG signals is a promising yet challenging task. However, current state-of-the-art models remain constrained by three fundamental limitations: Semantic Bias (mode collapse into generic templates), Signal Neglect (hallucination based on...
Towards Self-Robust LLMs: Intrinsic Prompt Noise Resistance via CoIPO
arXiv:2603.03314v1 Announce Type: cross Abstract: Large language models (LLMs) have demonstrated remarkable and steadily improving performance across a wide range of tasks. However, LLM performance may be highly sensitive to prompt variations especially in scenarios with limited openness or strict...
M-QUEST -- Meme Question-Understanding Evaluation on Semantics and Toxicity
arXiv:2603.03315v1 Announce Type: cross Abstract: Internet memes are a powerful form of online communication, yet their nature and reliance on commonsense knowledge make toxicity detection challenging. Identifying key features for meme interpretation and understanding is a crucial task. Previous work...
Can Large Language Models Derive New Knowledge? A Dynamic Benchmark for Biological Knowledge Discovery
arXiv:2603.03322v1 Announce Type: cross Abstract: Recent advancements in Large Language Model (LLM) agents have demonstrated remarkable potential in automatic knowledge discovery. However, rigorously evaluating an AI's capacity for knowledge discovery remains a critical challenge. Existing benchmarks predominantly rely on static...
SE-Search: Self-Evolving Search Agent via Memory and Dense Reward
arXiv:2603.03293v1 Announce Type: new Abstract: Retrieval augmented generation (RAG) reduces hallucinations and factual errors in large language models (LLMs) by conditioning generation on retrieved external knowledge. Recent search agents further cast RAG as an autonomous, multi-turn information-seeking process. However, existing...
Combating data scarcity in recommendation services: Integrating cognitive types of VARK and neural network technologies (LLM)
arXiv:2603.03309v1 Announce Type: new Abstract: Cold start scenarios present fundamental obstacles to effective recommendation generation, particularly when dealing with users lacking interaction history or items with sparse metadata. This research proposes an innovative hybrid framework that leverages Large Language Models...
StructLens: A Structural Lens for Language Models via Maximum Spanning Trees
arXiv:2603.03328v1 Announce Type: new Abstract: Language exhibits inherent structures, a property that explains both language acquisition and language change. Given this characteristic, we expect language models to manifest internal structures as well. While interpretability research has investigated the components of...
The CompMath-MCQ Dataset: Are LLMs Ready for Higher-Level Math?
arXiv:2603.03334v1 Announce Type: new Abstract: The evaluation of Large Language Models (LLMs) on mathematical reasoning has largely focused on elementary problems, competition-style questions, or formal theorem proving, leaving graduate-level and computational mathematics relatively underexplored. We introduce CompMath-MCQ, a new benchmark...
Compressed Sensing for Capability Localization in Large Language Models
arXiv:2603.03335v1 Announce Type: new Abstract: Large language models (LLMs) exhibit a wide range of capabilities, including mathematical reasoning, code generation, and linguistic behaviors. We show that many capabilities are highly localized to small subsets of attention heads within Transformer architectures....
AOI: Turning Failed Trajectories into Training Signals for Autonomous Cloud Diagnosis
arXiv:2603.03378v1 Announce Type: new Abstract: Large language model (LLM) agents offer a promising data-driven approach to automating Site Reliability Engineering (SRE), yet their enterprise deployment is constrained by three challenges: restricted access to proprietary data, unsafe action execution under permission-governed...
Directional Neural Collapse Explains Few-Shot Transfer in Self-Supervised Learning
arXiv:2603.03530v1 Announce Type: new Abstract: Frozen self-supervised representations often transfer well with only a few labels across many semantic tasks. We argue that a single geometric quantity, \emph{directional} CDNV (decision-axis variance), sits at the core of two favorable behaviors: strong...
Local Shapley: Model-Induced Locality and Optimal Reuse in Data Valuation
arXiv:2603.03672v1 Announce Type: new Abstract: The Shapley value provides a principled foundation for data valuation, but exact computation is #P-hard due to the exponential coalition space. Existing accelerations remain global and ignore a structural property of modern predictors: for a...
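As background on why exact Shapley-based data valuation is intractable and how it is usually approximated (a standard Monte Carlo permutation estimator, not the locality-based acceleration this paper proposes; function names and the toy utility are illustrative assumptions): the Shapley value of a data point averages its marginal contribution over all orderings, and sampling permutations gives an unbiased estimate.

```python
import random

def shapley_monte_carlo(points, utility, n_samples=200, seed=0):
    """Unbiased Monte Carlo estimate of Shapley values via sampled permutations."""
    rng = random.Random(seed)
    n = len(points)
    est = [0.0] * n
    for _ in range(n_samples):
        perm = list(range(n))
        rng.shuffle(perm)          # one random ordering of the data points
        coalition = []
        prev = utility(coalition)  # utility of the empty coalition
        for i in perm:
            coalition.append(points[i])
            cur = utility(coalition)
            est[i] += cur - prev   # marginal contribution of point i
            prev = cur
    return [v / n_samples for v in est]

# Toy utility: coalition value = number of distinct labels covered.
data = [("a", 0), ("b", 0), ("c", 1)]
util = lambda coal: len({label for _, label in coal})
print(shapley_monte_carlo(data, util))
```

By the telescoping sum, each sampled permutation distributes exactly `utility(all) - utility(empty)` across the points, so the estimates always sum to the full-coalition utility; the cost is one utility evaluation per point per sampled permutation, which is what locality-based reuse schemes aim to cut down.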
A Stein Identity for q-Gaussians with Bounded Support
arXiv:2603.03673v1 Announce Type: new Abstract: Stein's identity is a fundamental tool in machine learning with applications in generative models, stochastic optimization, and other problems involving gradients of expectations under Gaussian distributions. Less attention has been paid to problems with non-Gaussian...
Why Do Unlearnable Examples Work: A Novel Perspective of Mutual Information
arXiv:2603.03725v1 Announce Type: new Abstract: The volume of freely scraped data on the Internet has driven the tremendous success of deep learning. Along with this comes the growing concern about data privacy and security. Numerous methods for generating unlearnable examples...