CircuitBuilder: From Polynomials to Circuits via Reinforcement Learning
arXiv:2603.17075v1 Announce Type: new Abstract: Motivated by auto-proof generation and Valiant's VP vs. VNP conjecture, we study the problem of discovering efficient arithmetic circuits to compute polynomials, using addition and multiplication gates. We formulate this problem as a single-player game,...
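To make the object of this search concrete: an arithmetic circuit is a DAG of input, addition, and multiplication gates, and a "smaller" circuit computes the same polynomial with fewer gates. The sketch below is only a minimal circuit evaluator for illustration; it is not the paper's reinforcement-learning method, and the gate encoding is an assumption.

```python
# Minimal arithmetic-circuit evaluator. Gates are ("input", name),
# ("add", i, j), or ("mul", i, j), where i and j index earlier gates.
# Illustrates the object being searched for, not the paper's RL method.
def eval_circuit(gates, inputs):
    vals = []
    for g in gates:
        if g[0] == "input":
            vals.append(inputs[g[1]])
        elif g[0] == "add":
            vals.append(vals[g[1]] + vals[g[2]])
        elif g[0] == "mul":
            vals.append(vals[g[1]] * vals[g[2]])
    return vals[-1]  # last gate is the circuit's output

# (x + y)^2 with one addition and one multiplication, versus the three
# multiplications needed to expand x^2 + 2xy + y^2 term by term.
gates = [("input", "x"), ("input", "y"), ("add", 0, 1), ("mul", 2, 2)]
print(eval_circuit(gates, {"x": 2, "y": 3}))  # 25
```

Circuit size (gate count) is exactly the quantity Valiant's VP vs. VNP question asks about, which is why discovering small circuits is framed as a search problem.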
Contextual Preference Distribution Learning
arXiv:2603.17139v1 Announce Type: new Abstract: Decision-making problems often feature uncertainty stemming from heterogeneous and context-dependent human preferences. To address this, we propose a sequential learning-and-optimization pipeline to learn preference distributions and leverage them to solve downstream problems, for example risk-averse...
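As a toy illustration of the downstream step: once a distribution over preference weights is learned, a risk-averse decision can optimize a tail criterion such as CVaR of utility instead of the mean. Everything below (the Dirichlet preference model, the options, the CVaR estimator) is an assumption for illustration, not the paper's pipeline.

```python
import numpy as np

# Hedged sketch: given sampled preference-weight vectors, pick the option
# maximizing the mean of the worst alpha-fraction of utilities (a simple
# empirical CVaR). The paper's learning-and-optimization pipeline differs.
def cvar_best(options, weight_samples, alpha=0.2):
    util = weight_samples @ options.T             # (samples, options)
    k = max(1, int(alpha * len(weight_samples)))
    tail = np.sort(util, axis=0)[:k]              # worst alpha-fraction
    return int(tail.mean(axis=0).argmax())

rng = np.random.default_rng(0)
weights = rng.dirichlet([2.0, 2.0], size=500)     # heterogeneous preferences
options = np.array([[1.0, 0.0],                   # strong on criterion 1 only
                    [0.6, 0.6]])                  # balanced
print(cvar_best(options, weights))  # 1: balanced option wins under risk aversion
```

Under the mean criterion the two options are close, but the risk-averse tail criterion penalizes the option that collapses for users who weight criterion 2 heavily.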
Noise-Response Calibration: A Causal Intervention Protocol for LLM-Judges
arXiv:2603.17172v1 Announce Type: new Abstract: Large language models (LLMs) are increasingly used as automated judges and synthetic labelers, especially in low-label settings. Yet these systems are stochastic and often overconfident, which makes deployment decisions difficult when external ground truth is...
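One way to probe such a judge without ground truth is to intervene with input noise and measure how often its verdict flips. The sketch below is a hedged stand-in: `judge` and `perturb` are hypothetical placeholders for an LLM call and a noise intervention, and the paper's actual calibration protocol is not reproduced here.

```python
import random

# Hedged sketch: estimate a judge's stability by perturbing the input n
# times and counting label flips relative to the unperturbed verdict.
# `judge` stands in for an LLM call; the real protocol may differ.
def flip_rate(judge, text, perturb, n=100, seed=0):
    rng = random.Random(seed)
    base = judge(text)
    flips = sum(judge(perturb(text, rng)) != base for _ in range(n))
    return flips / n

# Toy judge labels by length; the perturbation appends noise characters.
judge = lambda t: "long" if len(t) > 10 else "short"
perturb = lambda t, rng: t + "x" * rng.randint(0, 4)
print(flip_rate(judge, "hello wor", perturb))
```

A high flip rate under semantically irrelevant noise is evidence the judge's confidence should be discounted, which is the kind of signal a calibration protocol can exploit.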
Domain-informed explainable boosting machines for trustworthy lateral spread predictions
arXiv:2603.17175v1 Announce Type: new Abstract: Explainable Boosting Machines (EBMs) provide transparent predictions through additive shape functions, enabling direct inspection of feature contributions. However, EBMs can learn non-physical relationships that reduce their reliability in natural hazard applications. This study presents a...
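A concrete example of the kind of domain constraint at stake: when physics says a hazard response should not decrease with a driver variable, a learned shape function can be projected onto monotone functions post hoc via pool-adjacent-violators. This is only one plausible mechanism, sketched under that assumption; the paper's domain-informed constraint may be built into training instead.

```python
# Hedged sketch: pool-adjacent-violators (PAV) projects a shape function's
# bin values onto the nearest non-decreasing sequence (in least squares),
# enforcing a monotone physical relationship after fitting.
def pav(y):
    vals, weights = [], []
    for v in y:
        vals.append(float(v)); weights.append(1)
        # merge adjacent blocks while they violate monotonicity
        while len(vals) > 1 and vals[-2] > vals[-1]:
            w = weights[-2] + weights[-1]
            m = (vals[-2] * weights[-2] + vals[-1] * weights[-1]) / w
            vals[-2:] = [m]; weights[-2:] = [w]
    out = []
    for v, w in zip(vals, weights):
        out.extend([v] * w)
    return out

# A shape function with a non-physical dip at the third bin gets smoothed.
print(pav([1, 3, 2, 4]))  # [1.0, 2.5, 2.5, 4.0]
```

The projection preserves the additive-model structure, so the corrected shape function remains directly inspectable.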
Self-Conditioned Denoising for Atomistic Representation Learning
arXiv:2603.17196v1 Announce Type: new Abstract: The success of large-scale pretraining in NLP and computer vision has catalyzed growing efforts to develop analogous foundation models for the physical sciences. However, pretraining strategies using atomistic data remain underexplored. To date, large-scale supervised...
On the Cone Effect and Modality Gap in Medical Vision-Language Embeddings
arXiv:2603.17246v1 Announce Type: new Abstract: Vision-Language Models (VLMs) exhibit a characteristic "cone effect" in which nonlinear encoders map embeddings into highly concentrated regions of the representation space, contributing to cross-modal separation known as the modality gap. While this phenomenon has...
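The modality gap itself is easy to measure: a common diagnostic is the distance between the centroids of L2-normalized image and text embeddings. The sketch below uses synthetic "cones" of random embeddings as a stand-in for a real VLM's encoders; the metric choice is an assumption, not necessarily the paper's.

```python
import numpy as np

# Hedged sketch: quantify the modality gap as the Euclidean distance
# between centroids of L2-normalized embeddings from each modality.
def modality_gap(img_emb, txt_emb):
    norm = lambda e: e / np.linalg.norm(e, axis=1, keepdims=True)
    return float(np.linalg.norm(norm(img_emb).mean(0) - norm(txt_emb).mean(0)))

rng = np.random.default_rng(0)
img = rng.normal(loc=1.0, size=(64, 16))   # concentrated "image cone"
txt = rng.normal(loc=-1.0, size=(64, 16))  # concentrated "text cone"
print(modality_gap(img, txt))  # large: the two cones occupy disjoint regions
```

Because each modality's embeddings concentrate in a narrow cone, the two centroids sit far apart on the unit sphere, which is exactly the separation the abstract describes.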
Learning Permutation Distributions via Reflected Diffusion on Ranks
arXiv:2603.17353v1 Announce Type: new Abstract: The finite symmetric group S_n provides a natural domain for permutations, yet learning probability distributions on S_n is challenging due to its factorially growing size and discrete, non-Euclidean structure. Recent permutation diffusion methods define forward...
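To make the rank representation concrete: a permutation can be embedded as a normalized rank vector in the unit box, noised continuously, reflected back into the box, and decoded to the nearest permutation by rank order. This sketch only illustrates that rank/reflection idea under assumed details; the paper's forward and reverse diffusion processes are not reproduced here.

```python
import numpy as np

# Reflect values into [0, 1] by folding at the boundaries.
def reflect01(x):
    x = np.mod(x, 2.0)
    return np.where(x > 1.0, 2.0 - x, x)

# Hedged sketch: permutation -> normalized ranks in (0,1), add Gaussian
# noise, reflect into the box, decode a permutation via double argsort.
def noise_and_decode(perm, sigma, rng):
    ranks = (np.argsort(np.argsort(perm)) + 0.5) / len(perm)
    noisy = reflect01(ranks + sigma * rng.normal(size=len(perm)))
    return np.argsort(np.argsort(noisy))  # nearest permutation by rank order

rng = np.random.default_rng(0)
perm = np.array([2, 0, 3, 1])
print(noise_and_decode(perm, 0.01, rng))  # small noise: permutation survives
```

The reflection keeps the noised state inside the box so the decoding step is always well defined, which is the appeal of reflected rather than unconstrained diffusion on this representation.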
SCALE: Scalable Conditional Atlas-Level Endpoint transport for virtual cell perturbation prediction
arXiv:2603.17380v1 Announce Type: new Abstract: Virtual cell models aim to enable in silico experimentation by predicting how cells respond to genetic, chemical, or cytokine perturbations from single-cell measurements. In practice, however, large-scale perturbation prediction remains constrained by three coupled bottlenecks:...
Large-Scale 3D Ground-Motion Synthesis with Physics-Inspired Latent Operator Flow Matching
arXiv:2603.17403v1 Announce Type: new Abstract: Earthquake hazard analysis and design of spatially distributed infrastructure, such as power grids and energy pipeline networks, require scenario-specific ground-motion time histories with realistic frequency content and spatiotemporal coherence. However, producing the large ensembles needed...
Causal Representation Learning on High-Dimensional Data: Benchmarks, Reproducibility, and Evaluation Metrics
arXiv:2603.17405v1 Announce Type: new Abstract: Causal representation learning (CRL) models aim to transform high-dimensional data into a latent space, enabling interventions to generate counterfactual samples or modify existing data based on the causal relationships among latent variables. To facilitate the...
The Phasor Transformer: Resolving Attention Bottlenecks on the Unit Circle
arXiv:2603.17433v1 Announce Type: new Abstract: Transformer models have redefined sequence learning, yet dot-product self-attention introduces a quadratic token-mixing bottleneck for long-context time series. We introduce the Phasor Transformer block, a phase-native alternative representing sequence states on the unit-circle manifold S^1. Each...
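A minimal sketch of what "phase-native" mixing on S^1 can look like: lift angles to unit phasors e^{i*theta}, pool them causally, and read the resulting angle back. This toy mixer is an assumption built only from the abstract; the actual Phasor Transformer block is more elaborate.

```python
import numpy as np

# Hedged sketch of a phase-native token mixer: states are angles on S^1,
# mixed by summing unit phasors and reading back the resulting angle.
# Linear in sequence length, unlike quadratic dot-product attention.
def phasor_mix(theta):                 # theta: (seq_len,) angles
    z = np.exp(1j * theta)             # lift to the unit circle
    pooled = z.cumsum() / np.arange(1, len(z) + 1)  # causal running mean
    return np.angle(pooled)            # project back to S^1

theta = np.array([0.1, 0.2, 0.3, 0.4])
print(phasor_mix(theta))
```

The circular mean of phasors is well defined even where averaging raw angles would wrap incorrectly, which is one reason to keep states on the circle rather than on the real line.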
TimeAPN: Adaptive Amplitude-Phase Non-Stationarity Normalization for Time Series Forecasting
arXiv:2603.17436v1 Announce Type: new Abstract: Non-stationarity is a fundamental challenge in multivariate long-term time series forecasting, often manifested as rapid changes in amplitude and phase. These variations lead to severe distribution shifts and consequently degrade predictive performance. Existing normalization-based methods...
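The amplitude/phase decomposition the title refers to can be made concrete with an rFFT: split a window into per-frequency amplitudes and phases, normalize the amplitudes, and invert. This is only one plausible building block, sketched under that assumption; the paper's adaptive normalization scheme is more involved.

```python
import numpy as np

# Hedged sketch: rFFT-based amplitude/phase decomposition with amplitude
# normalization. Returns the normalized window and the scale needed to
# invert the transform exactly.
def amp_phase_normalize(x):
    spec = np.fft.rfft(x)
    amp, phase = np.abs(spec), np.angle(spec)
    scale = max(float(amp.max()), 1e-12)
    x_norm = np.fft.irfft((amp / scale) * np.exp(1j * phase), n=len(x))
    return x_norm, scale

x = np.sin(np.linspace(0, 4 * np.pi, 64)) * 5.0
x_norm, scale = amp_phase_normalize(x)
x_back = np.fft.irfft(np.fft.rfft(x_norm) * scale, n=len(x))
print(np.allclose(x, x_back))  # True: the normalization is invertible
```

Invertibility matters because a forecaster trained on the normalized series must map its predictions back to the original amplitude regime at inference time.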
Baguan-TS: A Sequence-Native In-Context Learning Model for Time Series Forecasting with Covariates
arXiv:2603.17439v1 Announce Type: new Abstract: Transformers enable in-context learning (ICL) for rapid, gradient-free adaptation in time series forecasting, yet most ICL-style approaches rely on tabularized, hand-crafted features, while end-to-end sequence models lack inference-time adaptation. We bridge this gap with a...
QuantFL: Sustainable Federated Learning for Edge IoT via Pre-Trained Model Quantisation
arXiv:2603.17507v1 Announce Type: new Abstract: Federated Learning (FL) enables privacy-preserving intelligence on Internet of Things (IoT) devices but incurs a significant carbon footprint due to the high energy cost of frequent uplink transmission. While pre-trained models are increasingly available on...
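As a baseline for the uplink-cost argument: symmetric int8 quantisation shrinks a float32 update fourfold before transmission. The sketch below is a generic quantise/dequantise pair under that assumption; QuantFL's actual pre-trained-model quantisation scheme is not specified in the abstract.

```python
import numpy as np

# Hedged sketch: symmetric int8 quantisation of a weight tensor before
# uplink (4x smaller than float32), with server-side dequantisation.
def quantize_int8(w):
    scale = max(float(np.abs(w).max()) / 127.0, 1e-12)
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.default_rng(0).normal(size=1000).astype(np.float32)
q, s = quantize_int8(w)
err = np.abs(dequantize(q, s) - w).max()
print(q.nbytes, w.nbytes, err)  # 1000 bytes vs 4000 bytes, error <= scale/2
```

The per-element error is bounded by half the quantisation step, so the energy savings come at a predictable and tunable accuracy cost.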
SCOTUStoday for Wednesday, March 18
Should the White House look more like the Supreme Court Building? The chairman of the Commission of Fine Arts, Rodney Mims Cook, Jr., has suggested swapping the White House’s “graceful […] The post SCOTUStoday for Wednesday, March 18 appeared first on SCOTUSblog.
Meta is having trouble with rogue AI agents
A rogue AI agent inadvertently exposed Meta company and user data to engineers who didn't have permission to see it.
Sam Altman’s thank-you to coders draws the memes
Altman expresses gratitude for the people who knew how to write code from scratch. The internet replies with salty jokes.
Nothing CEO Carl Pei says smartphone apps will disappear as AI agents take their place
Nothing CEO Carl Pei says AI agents will eventually replace apps, shifting smartphones toward systems that understand intent and act on a user's behalf.
Patreon CEO calls AI companies’ fair use argument ‘bogus,’ says creators should be paid
Patreon CEO Jack Conte says AI companies should pay creators for training data, arguing their fair use defense falls apart when they license content from major publishers.
Rebel Audio is a new AI podcasting tool aimed at first-time creators
Rebel Audio is a new all-in-one podcasting tool that allows creators to record podcasts, edit, clip content for social, and publish episodes, all without ever leaving the platform.
The Gemini-powered features in Google Workspace that are worth using
From summarizing emails and drafting content to organizing data and tracking meetings, here are the best Gemini features in Google Workspace.
This startup wants to make enterprise software look more like a prompt
The company has raised $12 million in seed funding to build an AI operating system for enterprise.
Sequen snags $16M to bring TikTok-style personalization tech to any consumer company
With its Series A, Sequen is bringing its proprietary AI ranking and personalization technology to large consumer businesses.
Microsoft hires the team of Sequoia-backed AI collaboration platform, Cove
AI collaboration startup Cove is shutting down after its team joined Microsoft, with service ending April 1 and customer data set for deletion.
DOD says Anthropic’s ‘red lines’ make it an ‘unacceptable risk to national security’
The Defense Department said concerns that Anthropic might "attempt to disable its technology" during "warfighting operations" validate its decision to label the AI firm a supply-chain risk.
NLP Occupational Emergence Analysis: How Occupations Form and Evolve in Real Time -- A Zero-Assumption Method Demonstrated on AI in the US Technology Workforce, 2022-2026
arXiv:2603.15998v1 Announce Type: new Abstract: Occupations form and evolve faster than classification systems can track. We propose that a genuine occupation is a self-reinforcing structure (a bipartite co-attractor) in which a shared professional vocabulary makes practitioners cohesive as a group,...
Theoretical Foundations of Latent Posterior Factors: Formal Guarantees for Multi-Evidence Reasoning
arXiv:2603.15674v1 Announce Type: new Abstract: We present a complete theoretical characterization of Latent Posterior Factors (LPF), a principled framework for aggregating multiple heterogeneous evidence items in probabilistic prediction tasks. Multi-evidence reasoning arises pervasively in high-stakes domains including healthcare diagnosis, financial...
Neural-Symbolic Logic Query Answering in Non-Euclidean Space
arXiv:2603.15633v1 Announce Type: new Abstract: Answering complex first-order logic (FOL) queries on knowledge graphs is essential for reasoning. Symbolic methods offer interpretability but struggle with incomplete graphs, while neural approaches generalize better but lack transparency. Neural-symbolic models aim to integrate...
Compiled Memory: Not More Information, but More Precise Instructions for Language Agents
arXiv:2603.15666v1 Announce Type: new Abstract: Existing memory systems for language agents address memory management: how to retrieve and page more information within a context budget. We address a complementary problem -- memory utility: what experience is worth keeping, and how...
Survey of Various Fuzzy and Uncertain Decision-Making Methods
arXiv:2603.15709v1 Announce Type: new Abstract: Decision-making in real applications is often affected by vagueness, incomplete information, heterogeneous data, and conflicting expert opinions. This survey reviews uncertainty-aware multi-criteria decision-making (MCDM) and organizes the field into a concise, task-oriented taxonomy. We summarize...