On Emergences of Non-Classical Statistical Characteristics in Classical Neural Networks
arXiv:2603.04451v1 Announce Type: new Abstract: Inspired by measurement incompatibility and Bell-family inequalities in quantum mechanics, we propose the Non-Classical Network (NCnet), a simple classical neural architecture that stably exhibits non-classical statistical behaviors under typical and interpretable experimental setups. We find...
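The excerpt names Bell-family inequalities but not the concrete test. For reference, a minimal sketch of the standard CHSH check such experiments build on, assuming binary ±1 outcomes: any classical (local hidden-variable) model obeys |S| ≤ 2, which a deterministic shared-randomness simulation can saturate but never exceed.

```python
import numpy as np

# Minimal CHSH sketch (illustrative; the paper's actual setup is not
# given in the excerpt). A deterministic response to a shared classical
# hidden variable can reach, but never exceed, |S| = 2.
rng = np.random.default_rng(0)
hidden = rng.uniform(0, 2 * np.pi, 100_000)  # shared classical randomness

def outcome(setting, hidden):
    # +/-1 outcome as a deterministic function of setting and hidden variable
    return np.sign(np.cos(setting - hidden))

def E(a, b):
    # empirical correlation E(a, b) between the two parties' outcomes
    return np.mean(outcome(a, hidden) * outcome(b, hidden))

a, ap, b, bp = 0.0, np.pi / 2, np.pi / 4, -np.pi / 4
S = E(a, b) + E(a, bp) + E(ap, b) - E(ap, bp)
print(f"S = {S:.3f}  (classical bound |S| <= 2; quantum allows 2*sqrt(2))")
```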
Activity Recognition from Smart Insole Sensor Data Using a Circular Dilated CNN
arXiv:2603.04477v1 Announce Type: new Abstract: Smart insoles equipped with pressure sensors, accelerometers, and gyroscopes offer a non-intrusive means of monitoring human gait and posture. We present an activity classification system based on a circular dilated convolutional neural network (CDCNN) that...
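The excerpt does not spell out what "circular dilated" means in the CDCNN; a plausible reading, sketched below in PyTorch, is a dilated 1-D convolution with circular padding (natural when the signal is periodic, as gait cycles are). Treat the block structure as an assumption.

```python
import torch
import torch.nn as nn

class CircularDilatedBlock(nn.Module):
    """One dilated 1-D convolution with circular padding.

    Circular padding wraps the sequence ends around, so the receptive
    field treats the input as a ring rather than a line; dilation widens
    the receptive field without extra parameters.
    """
    def __init__(self, channels, kernel_size=3, dilation=1):
        super().__init__()
        pad = dilation * (kernel_size - 1) // 2
        self.conv = nn.Conv1d(channels, channels, kernel_size,
                              padding=pad, padding_mode="circular",
                              dilation=dilation)
        self.act = nn.ReLU()

    def forward(self, x):                    # x: (batch, channels, time)
        return self.act(self.conv(x)) + x    # residual connection

# Stack blocks with exponentially growing dilation, as is common in TCNs.
model = nn.Sequential(*[CircularDilatedBlock(16, dilation=2**i) for i in range(4)])
out = model(torch.randn(8, 16, 128))  # e.g. 8 windows, 16 sensor channels
```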
Standing on the Shoulders of Giants: Rethinking EEG Foundation Model Pretraining via Multi-Teacher Distillation
arXiv:2603.04478v1 Announce Type: new Abstract: Pretraining for electroencephalogram (EEG) foundation models has predominantly relied on self-supervised masked reconstruction, a paradigm largely adapted from and inspired by the success of vision and language foundation models. However, unlike images and text, EEG...
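A minimal sketch of what a multi-teacher distillation objective can look like, assuming per-teacher projection heads and a uniform average of cosine losses; the paper's actual formulation is not given in the excerpt.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTeacherDistiller(nn.Module):
    """Distill one student into the feature spaces of several frozen teachers.

    Each teacher gets its own linear projection head from the student's
    embedding; the total loss is a (here uniform) average of per-teacher
    cosine-similarity losses. The weighting and loss choice are assumptions.
    """
    def __init__(self, student_dim, teacher_dims):
        super().__init__()
        self.heads = nn.ModuleList(
            nn.Linear(student_dim, d) for d in teacher_dims)

    def forward(self, student_emb, teacher_embs):
        losses = []
        for head, t_emb in zip(self.heads, teacher_embs):
            pred = head(student_emb)
            # 1 - cosine similarity, averaged over the batch
            losses.append(1 - F.cosine_similarity(pred, t_emb.detach(),
                                                  dim=-1).mean())
        return torch.stack(losses).mean()
```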
Invariant Causal Routing for Governing Social Norms in Online Market Economies
arXiv:2603.04534v1 Announce Type: new Abstract: Social norms are stable behavioral patterns that emerge endogenously within economic systems through repeated interactions among agents. In online market economies, such norms -- like fair exposure, sustained participation, and balanced reinvestment -- are critical...
Why Do Neural Networks Forget: A Study of Collapse in Continual Learning
arXiv:2603.04580v1 Announce Type: new Abstract: Catastrophic forgetting is a major problem in continual learning, and many approaches have been proposed to reduce it. However, most of them are evaluated through task accuracy, which ignores the internal model structure. Recent research suggests...
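One common way to make "collapse" concrete, following the neural-collapse literature, is the within- over between-class scatter ratio of penultimate-layer features; whether this is the paper's metric is an assumption.

```python
import numpy as np

def within_between_ratio(features, labels):
    """Tr(S_W) / Tr(S_B): within- over between-class scatter of features.

    Small values mean features of each class concentrate at the class
    mean (collapse); tracking this ratio across tasks is one way to
    quantify structural change during continual learning.
    """
    mu = features.mean(axis=0)
    sw, sb = 0.0, 0.0
    for c in np.unique(labels):
        fc = features[labels == c]
        mc = fc.mean(axis=0)
        sw += ((fc - mc) ** 2).sum()                # within-class scatter
        sb += len(fc) * ((mc - mu) ** 2).sum()      # between-class scatter
    return sw / sb
```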
Engineering Regression Without Real-Data Training: Domain Adaptation for Tabular Foundation Models Using Multi-Dataset Embeddings
arXiv:2603.04692v1 Announce Type: new Abstract: Predictive modeling in engineering applications has long been dominated by bespoke models and small, siloed tabular datasets, limiting the applicability of large-scale learning approaches. Despite recent progress in tabular foundation models, the resulting synthetic training...
Distributional Equivalence in Linear Non-Gaussian Latent-Variable Cyclic Causal Models: Characterization and Learning
arXiv:2603.04780v1 Announce Type: new Abstract: Causal discovery with latent variables is a fundamental task. Yet most existing methods rely on strong structural assumptions, such as enforcing specific indicator patterns for latents or restricting how they can interact with others. We...
Justices poised to adopt exceptions to federal criminal defendants’ appellate waivers
The Supreme Court heard oral argument on Tuesday in Hunter v. United States about what exceptions exist to federal defendants’ waivers of their right to appeal. The justices seemed poised […]
Birthright citizenship: the exceptions provide the rule
The battle over birthright citizenship is a battle over its exceptions. The 14th Amendment’s first sentence proudly proclaims that “[a]ll persons born . . . in the United States, and subject to the jurisdiction […]
Syrian nationals urge Supreme Court to keep ruling in place allowing them to stay in the United States
A group of Syrian nationals urged the Supreme Court on Thursday to leave in place a ruling by a federal judge in New York City that allows them to remain […]
AI Now Institute
The AI Now Institute produces diagnosis and actionable policy research on artificial intelligence.
Workers report watching Ray-Ban Meta-shot footage of people using the bathroom
Meta accused of "concealing the facts" about smart glasses users' privacy.
Musk testifies tweet that led to $44 billion lawsuit "may not have been my wisest"
Elon Musk takes the stand in a San Francisco courtroom.
From Conflict to Consensus: Boosting Medical Reasoning via Multi-Round Agentic RAG
arXiv:2603.03292v1 Announce Type: cross Abstract: Large Language Models (LLMs) exhibit high reasoning capacity in medical question-answering, but their tendency to produce hallucinations and outdated knowledge poses critical risks in healthcare fields. While Retrieval-Augmented Generation (RAG) mitigates these issues, existing methods...
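The excerpt stops before describing the method, but the title suggests a loop in which several agents answer over retrieved evidence and disagreement drives further retrieval. A purely illustrative sketch with hypothetical `retrieve` and `agent.answer` interfaces:

```python
from collections import Counter

def consensus_rag(question, agents, retrieve, max_rounds=3):
    """Hypothetical conflict-to-consensus loop (placeholder interfaces).

    Each round, every agent answers from the retrieved evidence; if the
    answers conflict, the disagreement is folded into the next query.
    `retrieve` and the agents' `answer` method are assumed, not the
    paper's actual API.
    """
    query = question
    for _ in range(max_rounds):
        docs = retrieve(query)
        answers = [agent.answer(question, docs) for agent in agents]
        winner, votes = Counter(answers).most_common(1)[0]
        if votes == len(answers):   # unanimous: consensus reached
            return winner
        # Conflict: ask for evidence that discriminates between answers.
        query = f"{question} Distinguish between: {', '.join(set(answers))}"
    return winner                   # fall back to the majority answer
```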
Language Model Goal Selection Differs from Humans' in an Open-Ended Task
arXiv:2603.03295v1 Announce Type: cross Abstract: As large language models (LLMs) are integrated into human decision-making, they increasingly choose goals autonomously rather than only completing human-defined ones, on the assumption that their choices will reflect human preferences. However, human-LLM similarity in goal selection remains...
Developing an AI Assistant for Knowledge Management and Workforce Training in State DOTs
arXiv:2603.03302v1 Announce Type: cross Abstract: Effective knowledge management is critical for preserving institutional expertise and improving the efficiency of workforce training in state transportation agencies. Traditional approaches, such as static documentation, classroom-based instruction, and informal mentorship, often lead to fragmented...
HumanLM: Simulating Users with State Alignment Beats Response Imitation
arXiv:2603.03303v1 Announce Type: cross Abstract: Large Language Models (LLMs) are increasingly used to simulate how specific users respond to a given context, enabling more user-centric applications that rely on user feedback. However, existing user simulators mostly imitate surface-level patterns and...
Quantum-Inspired Self-Attention in a Large Language Model
arXiv:2603.03318v1 Announce Type: cross Abstract: Recent advances in Natural Language Processing have been predominantly driven by transformer-based architectures, which rely heavily on self-attention mechanisms to model relationships between tokens in a sequence. Similarly, the field of Quantum Natural Language Processing,...
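For reference, the self-attention mechanism the abstract refers to, in plain NumPy; the excerpt does not say how the quantum-inspired variant modifies it.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Standard scaled dot-product self-attention.

    X: (tokens, d_model); Wq/Wk/Wv: (d_model, d_head). Each token
    attends to every other token via softmax-normalized dot products.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V
```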
Can Large Language Models Derive New Knowledge? A Dynamic Benchmark for Biological Knowledge Discovery
arXiv:2603.03322v1 Announce Type: cross Abstract: Recent advancements in Large Language Model (LLM) agents have demonstrated remarkable potential in automatic knowledge discovery. However, rigorously evaluating an AI's capacity for knowledge discovery remains a critical challenge. Existing benchmarks predominantly rely on static...
IntPro: A Proxy Agent for Context-Aware Intent Understanding via Retrieval-conditioned Inference
arXiv:2603.03325v1 Announce Type: cross Abstract: Large language models (LLMs) have become integral to modern Human-AI collaboration workflows, where accurately understanding user intent serves as a crucial step for generating satisfactory responses. Context-aware intent understanding, which involves inferring user intentions from...
SE-Search: Self-Evolving Search Agent via Memory and Dense Reward
arXiv:2603.03293v1 Announce Type: new Abstract: Retrieval augmented generation (RAG) reduces hallucinations and factual errors in large language models (LLMs) by conditioning generation on retrieved external knowledge. Recent search agents further cast RAG as an autonomous, multi-turn information-seeking process. However, existing...
Benchmarking Legal RAG: The Promise and Limits of AI Statutory Surveys
arXiv:2603.03300v1 Announce Type: new Abstract: Retrieval-augmented generation (RAG) offers significant potential for legal AI, yet systematic benchmarks are sparse. Prior work introduced LaborBench to benchmark RAG models based on ostensible ground truth from an exhaustive, multi-month, manual enumeration of all...
StructLens: A Structural Lens for Language Models via Maximum Spanning Trees
arXiv:2603.03328v1 Announce Type: new Abstract: Language exhibits inherent structures, a property that explains both language acquisition and language change. Given this characteristic, we expect language models to manifest internal structures as well. While interpretability research has investigated the components of...
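The title points at maximum spanning trees; one natural construction, sketched below, symmetrizes an attention matrix and keeps the tree of strongest pairwise links. Whether StructLens uses exactly this recipe is an assumption.

```python
import numpy as np
import networkx as nx

def attention_mst(attn):
    """Extract an undirected maximum spanning tree from an attention matrix.

    attn: (tokens, tokens) attention weights. Symmetrize, build a complete
    weighted graph over token positions, and keep the spanning tree with
    the largest total weight.
    """
    sym = (attn + attn.T) / 2
    n = sym.shape[0]
    G = nx.Graph()
    for i in range(n):
        for j in range(i + 1, n):
            G.add_edge(i, j, weight=float(sym[i, j]))
    return nx.maximum_spanning_tree(G)

tree = attention_mst(np.random.rand(6, 6))
print(sorted(tree.edges(data="weight")))
```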
Tracing Pharmacological Knowledge In Large Language Models
arXiv:2603.03407v1 Announce Type: new Abstract: Large language models (LLMs) have shown strong empirical performance across pharmacology and drug discovery tasks, yet the internal mechanisms by which they encode pharmacological knowledge remain poorly understood. In this work, we investigate how drug-group...
A theoretical model of dynamical grammatical gender shifting based on set-valued set function
arXiv:2603.03510v1 Announce Type: new Abstract: This study investigates the diverse characteristics of nouns, focusing on both semantic (e.g., countable/uncountable) and morphosyntactic (e.g., masculine/feminine) distinctions. We explore inter-word variations for gender markers in noun morphology. Grammatical gender shift is a widespread...
Trade-offs in Ensembling, Merging and Routing Among Parameter-Efficient Experts
arXiv:2603.03535v1 Announce Type: new Abstract: While large language models (LLMs) fine-tuned with lightweight adapters achieve strong performance across diverse tasks, their performance on individual tasks depends on the fine-tuning strategy. Fusing independently trained models with different strengths has shown promise...
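The three fusion strategies in the title can be stated in a few lines; the sketch below is illustrative and ignores the paper's specific adapter setups.

```python
import numpy as np

# Three ways to fuse parameter-efficient experts (illustrative only).

def ensemble(expert_logits):
    """Output-space fusion: average the experts' logits."""
    return np.mean(expert_logits, axis=0)

def merge(base_weights, expert_deltas):
    """Weight-space fusion: add the averaged adapter deltas to the base,
    yielding a single model with one forward pass."""
    return base_weights + np.mean(expert_deltas, axis=0)

def route(expert_logits, router_scores):
    """Routing: pick the single expert a router scores highest per input
    (soft variants weight experts by the scores instead)."""
    return expert_logits[int(np.argmax(router_scores))]
```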
Riemannian Optimization in Modular Systems
arXiv:2603.03610v1 Announce Type: new Abstract: Understanding how systems built out of modular components can be jointly optimized is an important problem in biology, engineering, and machine learning. The backpropagation algorithm is one such solution and has been instrumental in the...
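The basic primitive of Riemannian optimization, for concreteness: project the Euclidean gradient onto the tangent space, step, then retract onto the manifold. Shown on the unit sphere; how the paper composes such steps across modules is not given in the excerpt.

```python
import numpy as np

def riemannian_step_sphere(x, grad, lr=0.1):
    """One Riemannian gradient step on the unit sphere.

    Remove the component of the Euclidean gradient normal to the sphere
    (tangent projection), take a step, then retract by renormalizing.
    """
    tangent = grad - np.dot(grad, x) * x     # tangent-space projection
    x_new = x - lr * tangent
    return x_new / np.linalg.norm(x_new)     # retraction onto the sphere

# Example: minimize f(x) = x^T A x on the sphere; the iterates converge
# to the eigenvector of A with the smallest eigenvalue.
A = np.diag([3.0, 1.0, 0.5])
x = np.ones(3) / np.sqrt(3)
for _ in range(200):
    x = riemannian_step_sphere(x, 2 * A @ x)
print(x)   # approximately (0, 0, +/-1)
```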
MOOSE-Star: Unlocking Tractable Training for Scientific Discovery by Breaking the Complexity Barrier
arXiv:2603.03756v1 Announce Type: new Abstract: While large language models (LLMs) show promise in scientific discovery, existing research focuses on inference or feedback-driven training, leaving the direct modeling of the generative reasoning process, $P(\text{hypothesis}|\text{background})$ ($P(h|b)$), unexplored. We demonstrate that directly training...
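Directly training $P(h|b)$ amounts to maximizing the conditional log-likelihood of hypothesis tokens given a background prefix. A sketch for a causal LM (the HF-style `.logits` interface is an assumption):

```python
import torch
import torch.nn.functional as F

def log_p_hypothesis_given_background(model, background_ids, hypothesis_ids):
    """log P(hypothesis | background) under an autoregressive LM.

    Concatenate background and hypothesis, run the model once, and sum
    next-token log-probabilities over the hypothesis span only.
    Maximizing this is the objective the abstract denotes P(h|b).
    """
    input_ids = torch.cat([background_ids, hypothesis_ids], dim=-1).unsqueeze(0)
    logits = model(input_ids).logits[0, :-1]     # predictions for tokens 1..L-1
    targets = input_ids[0, 1:]
    logp = F.log_softmax(logits, dim=-1).gather(1, targets.unsqueeze(1)).squeeze(1)
    b = background_ids.numel()
    return logp[b - 1:].sum()    # keep only tokens belonging to the hypothesis
```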
Pretrained Vision-Language-Action Models are Surprisingly Resistant to Forgetting in Continual Learning
arXiv:2603.03818v1 Announce Type: new Abstract: Continual learning is a long-standing challenge in robot policy learning, where a policy must acquire new skills over time without catastrophically forgetting previously learned ones. While prior work has extensively studied continual learning in relatively...
Nodes Are Early, Edges Are Late: Probing Diagram Representations in Large Vision-Language Models
arXiv:2603.02865v1 Announce Type: new Abstract: Large vision-language models (LVLMs) demonstrate strong performance on diagram understanding benchmarks, yet they still struggle with understanding relationships between elements, particularly those represented by nodes and directed edges (e.g., arrows and lines). To investigate the...
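Probing which layers carry node versus edge information is typically done with layer-wise linear probes; a generic sketch (the paper's exact protocol is an assumption):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def probe_layers(layer_features, labels):
    """Layer-wise linear probing, a standard way to localize information.

    layer_features: list of (n_examples, dim) arrays, one per layer;
    labels: (n_examples,) property labels, e.g. node identity or edge
    presence. Returns cross-validated probe accuracy per layer; the
    layer where accuracy rises marks where the property becomes linearly
    decodable ("nodes early, edges late" would show as a later rise for
    edge labels).
    """
    return [cross_val_score(LogisticRegression(max_iter=1000), X, labels,
                            cv=5).mean()
            for X in layer_features]
```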