Matching Accuracy, Different Geometry: Evolution Strategies vs GRPO in LLM Post-Training
arXiv:2604.01499v1 Announce Type: new Abstract: Evolution Strategies (ES) have emerged as a scalable gradient-free alternative to reinforcement-learning-based LLM fine-tuning, but it remains unclear whether comparable task performance implies comparable solutions in parameter space. We compare ES and Group...
Sven: Singular Value Descent as a Computationally Efficient Natural Gradient Method
arXiv:2604.01279v1 Announce Type: new Abstract: We introduce Sven (Singular Value dEsceNt), a new optimization algorithm for neural networks that exploits the natural decomposition of loss functions into a sum over individual data points, rather than reducing the full loss to...
Improvisational Games as a Benchmark for Social Intelligence of AI Agents: The Case of Connections
arXiv:2604.00284v1 Announce Type: new Abstract: We formally introduce an improvisational wordplay game called Connections to explore the reasoning capabilities of AI agents. Playing Connections combines skills in knowledge retrieval, summarization and awareness of the cognitive states of other agents. We show how...
Supreme Court appears likely to side against Trump on birthright citizenship
Updated on April 1 at 10:10 p.m. On Jan. 20, 2025, President Donald Trump signed an executive order that would end birthright citizenship – the guarantee of U.S. citizenship to […]
Trump attends birthright citizenship argument
Updated on April 1 at 7:48 p.m. As soon as President Donald Trump last evening mentioned attending argument in the birthright citizenship case in Trump v. Barbara today, some Supreme […] The post "Trump attends birthright citizenship argument" appeared first on SCOTUSblog.
Birthright citizenship live blog for Wednesday, April 1
On Wednesday, April 1, we will be live blogging as the court hears argument in Trump v. Barbara, on the constitutionality of President Donald Trump’s executive order on birthright citizenship. […]
Beyond Symbolic Solving: Multi Chain-of-Thought Voting for Geometric Reasoning in Large Language Models
arXiv:2604.00890v1 Announce Type: new Abstract: Geometric Problem Solving (GPS) remains at the heart of enhancing mathematical reasoning in large language models because it requires the combination of diagrammatic understanding, symbolic manipulation and logical inference. In existing literature, researchers have chiefly...
DISCO-TAB: A Hierarchical Reinforcement Learning Framework for Privacy-Preserving Synthesis of Complex Clinical Data
arXiv:2604.01481v1 Announce Type: new Abstract: The development of robust clinical decision support systems is frequently impeded by the scarcity of high-fidelity, privacy-preserving biomedical data. While Generative Large Language Models (LLMs) offer a promising avenue for synthetic data generation, they often...
SCOTUStoday for Wednesday, April 1
This morning, the court will hear argument in the birthright citizenship case, Trump v. Barbara. We will be live blogging beginning at 9:30 a.m. EDT. For a great introduction to […]
Can LLMs Perceive Time? An Empirical Investigation
arXiv:2604.00010v1 Announce Type: cross Abstract: Large language models cannot estimate how long their own tasks take. We investigate this limitation through four experiments across 68 tasks and four model families. Pre-task estimates overshoot actual duration by 4--7$\times$ ($p < 0.001$),...
Can Large Language Models Self-Correct in Medical Question Answering? An Exploratory Study
arXiv:2604.00261v2 Announce Type: new Abstract: Large language models (LLMs) have achieved strong performance on medical question answering (medical QA), and chain-of-thought (CoT) prompting has further improved results by eliciting explicit intermediate reasoning; meanwhile, self-reflective (self-corrective) prompting has been widely claimed...
Therefore I am. I Think
arXiv:2604.01202v2 Announce Type: new Abstract: We consider the question: when a large language reasoning model makes a choice, did it think first and then decide, or decide first and then think? In this paper, we present evidence that detectable,...
TRIMS: Trajectory-Ranked Instruction Masked Supervision for Diffusion Language Models
arXiv:2604.00666v1 Announce Type: new Abstract: Diffusion language models (DLMs) offer a promising path toward low-latency generation through parallel decoding, but their practical efficiency depends heavily on the decoding trajectory. In practice, this advantage often fails to fully materialize because standard...
Criterion Validity of LLM-as-Judge for Business Outcomes in Conversational Commerce
arXiv:2604.00022v1 Announce Type: cross Abstract: Multi-dimensional rubric-based dialogue evaluation is widely used to assess conversational AI, yet its criterion validity -- whether quality scores are associated with the downstream outcomes they are meant to serve -- remains largely untested. We...
FourierMoE: Fourier Mixture-of-Experts Adaptation of Large Language Models
arXiv:2604.01762v1 Announce Type: new Abstract: Parameter-efficient fine-tuning (PEFT) has emerged as a crucial paradigm for adapting large language models (LLMs) under constrained computational budgets. However, standard PEFT methods often struggle in multi-task fine-tuning settings, where diverse optimization objectives induce task...
Koopman-Based Nonlinear Identification and Adaptive Control of a Turbofan Engine
arXiv:2604.01730v1 Announce Type: new Abstract: This paper investigates Koopman operator-based approaches for multivariable control of a two-spool turbofan engine. A physics-based component-level model is developed to generate training data and validate the controllers. A meta-heuristic extended dynamic mode decomposition is...
Think Twice Before You Write -- an Entropy-based Decoding Strategy to Enhance LLM Reasoning
arXiv:2604.00018v1 Announce Type: cross Abstract: Decoding strategies play a central role in shaping the reasoning ability of large language models (LLMs). Traditional methods such as greedy decoding and beam search often suffer from error propagation, while sampling-based approaches introduce randomness...
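The truncated abstract names the strategy but not its mechanism. As an illustration only, here is one plausible entropy-gated decoding rule: commit greedily when the next-token distribution has low entropy, and re-examine the step (sample several candidates, keep the most probable) when entropy is high. The threshold, candidate count, and selection heuristic are assumptions, not the paper's method.

```python
import numpy as np

def entropy(probs):
    """Shannon entropy (in nats) of a token distribution."""
    p = probs[probs > 0]
    return float(-(p * np.log(p)).sum())

def entropy_gated_step(logits, threshold=1.0, n_candidates=4, rng=None):
    """Pick the next token: greedy when confident (low entropy),
    otherwise 'think twice' by sampling candidates and keeping
    the most probable one."""
    rng = rng or np.random.default_rng(0)
    z = logits - logits.max()                 # numerically stable softmax
    probs = np.exp(z) / np.exp(z).sum()
    if entropy(probs) < threshold:
        return int(np.argmax(probs))          # confident: commit greedily
    cands = rng.choice(len(probs), size=n_candidates, p=probs)
    return int(max(cands, key=lambda t: probs[t]))  # uncertain: explore, then pick best
```

A sharply peaked distribution falls below the threshold and decodes greedily; a near-uniform one triggers the candidate-sampling branch.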
Training In-Context and In-Weights Mixtures Via Contrastive Context Sampling
arXiv:2604.01601v1 Announce Type: new Abstract: We investigate training strategies that co-develop in-context learning (ICL) and in-weights learning (IWL), and the ability to switch between them based on context relevance. Although current LLMs exhibit both modes, standard task-specific fine-tuning often erodes...
In harmony with gpt-oss
arXiv:2604.00362v1 Announce Type: new Abstract: No one has independently reproduced OpenAI's published scores for gpt-oss-20b with tools, because the original paper discloses neither the tools nor the agent harness. We reverse-engineered the model's in-distribution tools: when prompted without tool definitions,...
Retrospective on PAT x ICML 2026 AI Paper Assistant Program
Agent Q-Mix: Selecting the Right Action for LLM Multi-Agent Systems through Reinforcement Learning
arXiv:2604.00344v1 Announce Type: new Abstract: Large Language Models (LLMs) have shown remarkable performance in completing various tasks. However, solving complex problems often requires the coordination of multiple agents, raising a fundamental question: how to effectively select and interconnect these agents....
Detecting Multi-Agent Collusion Through Multi-Agent Interpretability
arXiv:2604.01151v1 Announce Type: new Abstract: As LLM agents are increasingly deployed in multi-agent systems, they introduce risks of covert coordination that may evade standard forms of human oversight. While linear probes on model activations have shown promise for detecting deception...
MiCA Learns More Knowledge Than LoRA and Full Fine-Tuning
arXiv:2604.01694v1 Announce Type: new Abstract: Minor Component Adaptation (MiCA) is a novel parameter-efficient fine-tuning method for large language models that focuses on adapting underutilized subspaces of model representations. Unlike conventional methods such as Low-Rank Adaptation (LoRA), which target dominant subspaces,...
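The abstract says MiCA adapts underutilized subspaces rather than the dominant ones targeted by LoRA. A minimal NumPy sketch of that idea, assuming the update is confined to the span of the k smallest singular directions of the weight matrix; the function name `mica_update` and the k-by-k adapter shape are illustrative guesses, not the paper's formulation.

```python
import numpy as np

def mica_update(W, delta, k):
    """Hypothetical minor-component update: apply a k x k adapter only
    in the span of W's k smallest singular directions, leaving the
    dominant subspace untouched."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    U_min, V_min = U[:, -k:], Vt[-k:, :]   # minor (smallest-singular-value) directions
    return W + U_min @ delta @ V_min

# Toy example: only the weakest direction (singular value 0.1) is adapted.
W = np.diag([5.0, 3.0, 0.1])
W2 = mica_update(W, np.array([[0.5]]), k=1)
```

On this toy matrix the update changes only the 0.1 direction (to 0.6), while the dominant 5.0 and 3.0 directions are untouched, which is the contrast with LoRA's dominant-subspace updates.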
Experience as a Compass: Multi-agent RAG with Evolving Orchestration and Agent Prompts
arXiv:2604.00901v1 Announce Type: new Abstract: Multi-agent Retrieval-Augmented Generation (RAG), wherein each agent takes on a specific role, supports hard queries that require multiple steps, multiple sources, or complex reasoning. Existing approaches, however, rely on static agent behaviors and fixed orchestration...
Bias, Fairness, and Inclusivity in Generative AI Systems: A Critical Examination of Algorithmic Bias, Representation Gaps, and the Challenges of Ensuring Equity in AI-Generated Outputs
Generative AI systems such as large language models (LLMs), image synthesizers, and multimodal frameworks have transformed content creation while also exposing and amplifying systemic biases that undermine fairness and inclusivity. This study critically examines algorithmic bias in model outputs, representation...
No Third Term: Rejecting the Nonconsecutive Loophole – Wisconsin Law Review – UW–Madison
The text of the Twenty-Second Amendment seems clear that a president cannot be elected to a third term: “No person shall be elected to the office of the President more than twice.” This Essay looks further to the history surrounding...
Volume 110 Headnotes: Spring Issue - Minnesota Law Review
No More IEEPA Tariffs? The Legal Bases of an Alternative Regime, by Lawrence J. Liu – Minnesota Law Review
About the Association for the Advancement of Artificial Intelligence (AAAI)
AAAI is an artificial intelligence organization dedicated to advancing the scientific understanding of AI.
The key arguments in the birthright citizenship case
On April 1, the Supreme Court will hear oral arguments in one of the highest-profile cases of the 2025-26 term – and indeed, one of the biggest cases in several […]