PaperVoyager: Building Interactive Web with Visual Language Models
arXiv:2603.22999v1 Announce Type: new Abstract: Recent advances in visual language models have enabled autonomous agents for complex reasoning, tool use, and document understanding. However, existing document agents mainly transform papers into static artifacts such as summaries, webpages, or slides, which...
Knowledge Access Beats Model Size: Memory Augmented Routing for Persistent AI Agents
arXiv:2603.23013v1 Announce Type: new Abstract: Production AI agents frequently receive user-specific queries that are highly repetitive, with up to 47% being semantically similar to prior interactions, yet each query is typically processed with the same computational cost. We argue that...
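The core idea the abstract gestures at can be illustrated with a small cache that routes semantically similar queries to stored responses instead of re-invoking the model. This is a minimal sketch, not the paper's system: the cosine threshold, the 64-dimensional embed() stub, and the linear scan are all illustrative assumptions.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Stand-in embedder; a real system would use a sentence encoder."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(64)
    return v / np.linalg.norm(v)

class MemoryRouter:
    def __init__(self, threshold: float = 0.9):
        self.threshold = threshold
        self.keys: list[np.ndarray] = []   # embeddings of past queries
        self.values: list[str] = []        # cached responses

    def route(self, query: str, expensive_model) -> str:
        q = embed(query)
        if self.keys:
            sims = np.stack(self.keys) @ q      # cosine similarity (unit vectors)
            best = int(np.argmax(sims))
            if sims[best] >= self.threshold:
                return self.values[best]        # cache hit: no model call
        answer = expensive_model(query)         # cache miss: pay full cost
        self.keys.append(q)
        self.values.append(answer)
        return answer

router = MemoryRouter()
print(router.route("where is my parcel?", lambda q: f"answer({q})"))
print(router.route("where is my parcel?", lambda q: f"answer({q})"))  # served from memory
```

A production version would replace the deterministic stub with a trained encoder and the linear scan with an approximate nearest-neighbor index.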
AuthorMix: Modular Authorship Style Transfer via Layer-wise Adapter Mixing
arXiv:2603.23069v1 Announce Type: new Abstract: The task of authorship style transfer involves rewriting text in the style of a target author while preserving the meaning of the original text. Existing style transfer methods train a single model on large corpora...
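From the title, the mechanism is per-layer mixing of style adapters. Below is a hedged sketch of what layer-wise adapter mixing can look like: each layer holds several low-rank adapters and a learned gate that blends their outputs. The LoRA parameterization and softmax gate are assumptions for illustration, not AuthorMix's published architecture.

```python
import torch
import torch.nn as nn

class LoRAAdapter(nn.Module):
    def __init__(self, dim: int, rank: int = 4):
        super().__init__()
        self.down = nn.Linear(dim, rank, bias=False)
        self.up = nn.Linear(rank, dim, bias=False)

    def forward(self, x):
        return self.up(self.down(x))

class MixedAdapterLayer(nn.Module):
    def __init__(self, dim: int, num_styles: int):
        super().__init__()
        self.base = nn.Linear(dim, dim)    # base projection (frozen in practice)
        self.adapters = nn.ModuleList(LoRAAdapter(dim) for _ in range(num_styles))
        self.mix_logits = nn.Parameter(torch.zeros(num_styles))  # this layer's gate

    def forward(self, x):
        w = torch.softmax(self.mix_logits, dim=0)
        delta = sum(wi * a(x) for wi, a in zip(w, self.adapters))
        return self.base(x) + delta

layer = MixedAdapterLayer(dim=32, num_styles=3)
print(layer(torch.randn(2, 32)).shape)  # torch.Size([2, 32])
```

Because each layer owns its own mix_logits, different depths can lean on different style adapters, which is the modularity the title advertises.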
From Synthetic to Native: Benchmarking Multilingual Intent Classification in Logistics Customer Service
arXiv:2603.23172v1 Announce Type: new Abstract: Multilingual intent classification is central to customer-service systems on global logistics platforms, where models must process noisy user queries across languages and hierarchical label spaces. Yet most existing multilingual benchmarks rely on machine-translated text, which...
ImplicitRM: Unbiased Reward Modeling from Implicit Preference Data for LLM Alignment
arXiv:2603.23184v1 Announce Type: new Abstract: Reward modeling represents a long-standing challenge in reinforcement learning from human feedback (RLHF) for aligning language models. Current reward modeling depends heavily on experimental feedback data with high collection costs. In this work, we...
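For context, the standard pairwise reward-model objective that explicit-feedback pipelines optimize is the Bradley-Terry loss sketched below; how ImplicitRM extracts an unbiased signal from implicit preferences is beyond the truncated abstract, so this only fixes the baseline being improved upon.

```python
import torch
import torch.nn.functional as F

def bradley_terry_loss(r_chosen, r_rejected):
    """Standard pairwise reward-model loss: -log sigmoid(r_chosen - r_rejected).
    r_chosen / r_rejected are scalar rewards the model assigns to the
    preferred and dispreferred responses of each pair."""
    return -F.logsigmoid(r_chosen - r_rejected).mean()

r_chosen = torch.randn(8, requires_grad=True)   # rewards for preferred responses
r_rejected = torch.randn(8)                     # rewards for rejected responses
loss = bradley_terry_loss(r_chosen, r_rejected)
loss.backward()
print(float(loss))
```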
Efficient Embedding-based Synthetic Data Generation for Complex Reasoning Tasks
arXiv:2603.22294v1 Announce Type: new Abstract: Synthetic Data Generation (SDG), leveraging Large Language Models (LLMs), has recently been recognized and broadly adopted as an effective approach to improving the performance of smaller but more resource- and compute-efficient LLMs through fine-tuning....
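One common embedding-based ingredient in SDG pipelines is diversity-aware seed selection. The sketch below uses greedy farthest-point selection in embedding space to pick seeds before prompting a generator; the heuristic is an illustrative assumption rather than this paper's method, and the generator call itself is omitted.

```python
import numpy as np

def farthest_point_select(embs: np.ndarray, k: int) -> list[int]:
    """Greedily pick k mutually distant rows of embs (seed diversity)."""
    chosen = [0]
    d = np.linalg.norm(embs - embs[0], axis=1)      # distance to chosen set
    for _ in range(k - 1):
        nxt = int(np.argmax(d))                     # farthest from all chosen
        chosen.append(nxt)
        d = np.minimum(d, np.linalg.norm(embs - embs[nxt], axis=1))
    return chosen

rng = np.random.default_rng(0)
embs = rng.standard_normal((1000, 32))   # stand-in for seed-example embeddings
print(farthest_point_select(embs, k=5))  # indices of diverse seeds to prompt with
```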
Mitigating Premature Discretization with Progressive Quantization for Robust Vector Tokenization
arXiv:2603.22304v1 Announce Type: new Abstract: Vector Quantization (VQ) has become the cornerstone of tokenization for many multimodal Large Language Models and diffusion synthesis. However, existing VQ paradigms suffer from a fundamental conflict: they enforce discretization before the encoder has captured...
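The conflict the abstract describes, discretization imposed before the encoder is ready, suggests annealing between continuous and quantized latents. Here is a minimal sketch of that general idea with a straight-through gradient; the linear blend schedule is an assumption, not the paper's exact mechanism.

```python
import torch

def progressive_quantize(z, codebook, alpha):
    """Blend continuous latents with their nearest codes.

    alpha=0 keeps the encoder output continuous; alpha=1 is full VQ.
    Annealing alpha upward over training delays hard discretization."""
    d = torch.cdist(z, codebook)                 # (batch, codebook_size) distances
    q = codebook[d.argmin(dim=1)]                # nearest code per latent
    q = z + (q - z).detach()                     # straight-through estimator
    return (1 - alpha) * z + alpha * q

codebook = torch.randn(512, 64)
z = torch.randn(8, 64, requires_grad=True)
out = progressive_quantize(z, codebook, alpha=0.3)   # early training: mostly continuous
out.sum().backward()                                 # gradients still reach the encoder
print(z.grad.shape)
```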
Full-waveform inversion method based on diffusion models
arXiv:2603.22307v1 Announce Type: new Abstract: Seismic full-waveform inversion is a core technology for obtaining high-resolution subsurface model parameters. However, its highly nonlinear characteristics and strong dependence on the initial model often lead to the inversion process getting trapped in local...
ST-GDance++: A Scalable Spatial-Temporal Diffusion for Long-Duration Group Choreography
arXiv:2603.22316v1 Announce Type: new Abstract: Group dance generation from music requires synchronizing multiple dancers while maintaining spatial coordination, making it highly relevant to applications such as film production, gaming, and animation. Recent group dance generation models have achieved promising generation...
Geometric Mixture-of-Experts with Curvature-Guided Adaptive Routing for Graph Representation Learning
arXiv:2603.22317v1 Announce Type: new Abstract: Graph-structured data typically exhibits complex topological heterogeneity, making it difficult to model accurately within a single Riemannian manifold. While emerging mixed-curvature methods attempt to capture such diversity, they often rely on implicit, task-driven routing that...
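A hedged sketch of curvature-guided expert routing: the router sees a per-node curvature scalar (e.g., an Ollivier-Ricci estimate computed upstream) alongside node features, so differently curved regions can be dispatched to different experts. The plain-MLP experts below stand in for the paper's mixed-curvature manifolds, which are not reproduced here.

```python
import torch
import torch.nn as nn

class CurvatureGuidedMoE(nn.Module):
    """Toy mixture-of-experts whose gate conditions on node curvature."""
    def __init__(self, dim: int, num_experts: int = 3):
        super().__init__()
        self.experts = nn.ModuleList(nn.Linear(dim, dim) for _ in range(num_experts))
        self.router = nn.Linear(dim + 1, num_experts)   # +1 for curvature scalar

    def forward(self, x, curvature):
        gate_in = torch.cat([x, curvature[:, None]], dim=1)
        gate = torch.softmax(self.router(gate_in), dim=1)        # (N, E)
        outs = torch.stack([e(x) for e in self.experts], dim=1)  # (N, E, dim)
        return (gate[:, :, None] * outs).sum(dim=1)

moe = CurvatureGuidedMoE(dim=16)
x, curv = torch.randn(10, 16), torch.randn(10)   # curv: precomputed per-node curvature
print(moe(x, curv).shape)                        # torch.Size([10, 16])
```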
DAQ: Delta-Aware Quantization for Post-Training LLM Weight Compression
arXiv:2603.22324v1 Announce Type: new Abstract: We introduce Delta-Aware Quantization (DAQ), a data-free post-training quantization framework that preserves the knowledge acquired during post-training. Standard quantization objectives minimize reconstruction error but are agnostic to the base model, allowing quantization noise to disproportionately...
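Reading the title and abstract together, the key move is to make quantization error sensitive to the post-training delta. The sketch below grid-searches a per-tensor scale under a delta-weighted reconstruction objective; both the uniform quantizer and the weighting scheme are illustrative assumptions, not DAQ's published procedure.

```python
import numpy as np

def quantize(w, scale, bits=4):
    qmax = 2 ** (bits - 1) - 1
    return np.clip(np.round(w / scale), -qmax - 1, qmax) * scale

def delta_aware_scale(w_post, w_base, bits=4, grid=100):
    """Pick a scale that weights reconstruction error by |post - base|,
    so weights changed most by post-training are preserved best."""
    delta = np.abs(w_post - w_base) + 1e-8       # per-weight importance
    qmax = 2 ** (bits - 1) - 1
    hi = np.abs(w_post).max() / qmax
    candidates = np.linspace(0.2 * hi, hi, grid)
    errs = [np.sum(delta * (w_post - quantize(w_post, s, bits)) ** 2)
            for s in candidates]
    return candidates[int(np.argmin(errs))]

rng = np.random.default_rng(0)
w_base = rng.standard_normal(1024)
w_post = w_base + 0.05 * rng.standard_normal(1024)   # post-training delta
s = delta_aware_scale(w_post, w_base)
print(s, np.abs(w_post - quantize(w_post, s)).mean())
```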
Beyond the Mean: Distribution-Aware Loss Functions for Bimodal Regression
arXiv:2603.22328v1 Announce Type: new Abstract: Despite the strong predictive performance achieved by machine learning models across many application domains, assessing their trustworthiness through reliable estimates of predictive confidence remains a critical challenge. This issue arises in scenarios where the likelihood...
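A standard distribution-aware alternative to mean regression on bimodal targets is to fit a two-component Gaussian mixture and minimize its negative log-likelihood, as sketched below. The two-component parameterization is an illustrative choice; the paper's specific loss may differ.

```python
import math
import torch

def bimodal_nll(params, y):
    """Negative log-likelihood of a two-component Gaussian mixture.
    params: (batch, 5) -> mixing logit, two means, two log-stds."""
    logit, mu1, mu2, ls1, ls2 = params.unbind(dim=1)
    log_w = torch.nn.functional.log_softmax(
        torch.stack([logit, torch.zeros_like(logit)], dim=1), dim=1)

    def comp_logpdf(mu, log_std):
        return (-0.5 * ((y - mu) / log_std.exp()) ** 2
                - log_std - 0.5 * math.log(2 * math.pi))

    log_p = torch.stack([comp_logpdf(mu1, ls1), comp_logpdf(mu2, ls2)], dim=1)
    return -torch.logsumexp(log_w + log_p, dim=1).mean()

params = torch.randn(16, 5, requires_grad=True)  # predicted by a network head
loss = bimodal_nll(params, torch.randn(16))
loss.backward()
print(float(loss))
```

Unlike a squared-error fit, this loss can place probability mass on both modes instead of collapsing onto their (possibly implausible) average.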
Cloud-Edge Collaborative Large Models for Robust Photovoltaic Power Forecasting
arXiv:2603.22343v1 Announce Type: new Abstract: Photovoltaic (PV) power forecasting in edge-enabled grids requires balancing forecasting accuracy, robustness under weather-driven distribution shifts, and strict latency constraints. Local specialized models are efficient for routine conditions but often degrade under rare ramp events...
Three Creates All: You Only Sample 3 Steps
arXiv:2603.22375v1 Announce Type: new Abstract: Diffusion models deliver high-fidelity generation but remain slow at inference time due to many sequential network evaluations. We find that standard timestep conditioning becomes a key bottleneck for few-step sampling. Motivated by layer-dependent denoising dynamics,...
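For background, the few-step regime the abstract targets looks like the generic deterministic (DDIM-style) sampler below, where generation costs exactly three network evaluations. The paper's layer-dependent timestep-conditioning fix is not reproduced here.

```python
import torch

@torch.no_grad()
def few_step_sample(eps_model, x, alphas_bar, steps=(999, 666, 333)):
    """Deterministic DDIM-style sampling with only len(steps) network calls."""
    ts = list(steps) + [0]
    for t_cur, t_next in zip(ts[:-1], ts[1:]):
        a_cur, a_next = alphas_bar[t_cur], alphas_bar[t_next]
        eps = eps_model(x, t_cur)                           # one network call
        x0 = (x - (1 - a_cur).sqrt() * eps) / a_cur.sqrt()  # predict clean sample
        x = a_next.sqrt() * x0 + (1 - a_next).sqrt() * eps  # deterministic jump
    return x

betas = torch.linspace(1e-4, 0.02, 1000)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)
dummy_model = lambda x, t: 0.1 * x                          # placeholder network
print(few_step_sample(dummy_model, torch.randn(4, 8), alphas_bar).shape)
```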
Instruction-Tuned, but Not More Verifiable Instruction-Following: A Cross-Task Diagnosis for LoRA Adapters
arXiv:2603.22379v1 Announce Type: new Abstract: Adapters are often selected and deployed based on nominal labels (e.g., instruction-tuned), which implicitly suggest what capability improves after adaptation. We test whether nominal training objectives reliably align with realized cross-task capability gains by evaluating...
Learning When to Act: Interval-Aware Reinforcement Learning with Predictive Temporal Structure
arXiv:2603.22384v1 Announce Type: new Abstract: Autonomous agents operating in continuous environments must decide not only what to do, but when to act. We introduce a lightweight adaptive temporal control system that learns the optimal interval between cognitive ticks from experience,...
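As a toy illustration of learning when to act, the bandit below trades a fixed per-tick cost against drift accumulated while idle, and estimates the reward rate of each candidate interval. The environment and reward shaping are invented for illustration and are far simpler than the paper's setting.

```python
import numpy as np

rng = np.random.default_rng(0)

def step_env(state, action, dt):
    """Toy env: error drifts while the agent is idle; each tick costs 0.05."""
    drift = abs(state) * dt
    new_state = state + rng.normal(0, 0.1) * dt - action * dt
    return new_state, -drift - 0.05

intervals = [0.1, 0.5, 1.0, 2.0]        # candidate gaps between cognitive ticks
q = np.zeros(len(intervals))            # estimated reward rate per interval
counts = np.zeros(len(intervals))
state = 1.0
for _ in range(5000):
    i = rng.integers(len(intervals)) if rng.random() < 0.1 else int(np.argmax(q))
    state, r = step_env(state, action=np.sign(state), dt=intervals[i])
    counts[i] += 1
    q[i] += (r / intervals[i] - q[i]) / counts[i]   # reward per unit time
print(dict(zip(intervals, q.round(3))))
```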
Model Predictive Control with Differentiable World Models for Offline Reinforcement Learning
arXiv:2603.22430v1 Announce Type: new Abstract: Offline Reinforcement Learning (RL) aims to learn optimal policies from fixed offline datasets, without further interactions with the environment. Such methods train an offline policy (or value function), and apply it at inference time without...
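The planning loop the abstract names is standard enough to sketch: unroll a differentiable learned dynamics model over a short horizon, optimize the action sequence by gradient descent, and execute only the first action. The tiny MLP dynamics and quadratic cost below are stand-ins for a world model trained on the offline dataset.

```python
import torch

# (state, action) -> next state; stands in for a model fit to offline data
model = torch.nn.Sequential(torch.nn.Linear(4 + 2, 64), torch.nn.Tanh(),
                            torch.nn.Linear(64, 4))
for p in model.parameters():            # planning only: freeze the world model
    p.requires_grad_(False)

def plan(state, horizon=5, iters=50, lr=0.1):
    actions = torch.zeros(horizon, 2, requires_grad=True)
    opt = torch.optim.Adam([actions], lr=lr)
    for _ in range(iters):
        opt.zero_grad()
        s, cost = state, 0.0
        for a in actions:                               # differentiable rollout
            s = model(torch.cat([s, a]))
            cost = cost + (s ** 2).sum() + 0.01 * (a ** 2).sum()
        cost.backward()                                 # gradients through dynamics
        opt.step()
    return actions.detach()[0]                          # MPC: execute first action

print(plan(torch.zeros(4)))
```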
SkillRouter: Retrieve-and-Rerank Skill Selection for LLM Agents at Scale
arXiv:2603.22455v1 Announce Type: new Abstract: As LLM agent ecosystems grow, the number of available skills (tools, plugins) has reached tens of thousands, making it infeasible to inject all skills into an agent's context. This creates a need for skill routing...
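The retrieve-and-rerank pattern in the title has a conventional two-stage shape: cheap dense retrieval narrows tens of thousands of skills to a handful of candidates, then a costlier scorer reranks them. In the sketch below both stages are stubs over random vectors, illustrating the pipeline rather than SkillRouter's actual models.

```python
import numpy as np

rng = np.random.default_rng(0)
NUM_SKILLS, DIM, K = 50_000, 128, 32
skill_vecs = rng.standard_normal((NUM_SKILLS, DIM))
skill_vecs /= np.linalg.norm(skill_vecs, axis=1, keepdims=True)

def cross_encoder_score(query_vec, skill_id):
    """Stand-in for a cross-encoder scoring (query, skill description)."""
    return float(skill_vecs[skill_id] @ query_vec) + 0.01 * rng.standard_normal()

def route(query_vec):
    sims = skill_vecs @ query_vec                  # stage 1: dense retrieval
    candidates = np.argpartition(-sims, K)[:K]     # top-K without a full sort
    scores = [cross_encoder_score(query_vec, s) for s in candidates]  # stage 2
    return candidates[int(np.argmax(scores))]

q = rng.standard_normal(DIM)
q /= np.linalg.norm(q)
print("selected skill:", route(q))
```

Only the K reranker calls are expensive; the stage-1 scan (or an ANN index replacing it) keeps routing feasible at the tens-of-thousands scale the abstract describes.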
A Foundation Model for Instruction-Conditioned In-Context Time Series Tasks
arXiv:2603.22586v1 Announce Type: new Abstract: In-context learning (ICL) allows a model to adapt at inference time by conditioning on examples rather than updating parameters. Existing time-series foundation models use implicit positional context, retrieval, or task-specific objectives, but rarely explicit instruction-conditioned...
Electronic Frontier Foundation to swap leaders as AI, ICE fights escalate
Public interest in government tech abuses is peaking. EFF's new leader plans to build on that.
Arm is releasing the first in-house chip in its 35-year history
Arm is producing its own CPU for the first time. It developed the CPU with Meta, which is also the chip's first customer.
Talat’s AI meeting notes stay on your machine, not in the cloud
The subscription-free AI meeting notes app is a local-first twist on notetaking tools like Granola.
KLDrive: Fine-Grained 3D Scene Reasoning for Autonomous Driving based on Knowledge Graph
arXiv:2603.21029v1 Announce Type: new Abstract: Autonomous driving requires reliable reasoning over fine-grained 3D scene facts. Fine-grained question answering over multi-modal driving observations provides a natural way to evaluate this capability, yet existing perception pipelines and driving-oriented large language model (LLM)...
Do LLM-Driven Agents Exhibit Engagement Mechanisms? Controlled Tests of Information Load, Descriptive Norms, and Popularity Cues
arXiv:2603.20911v1 Announce Type: new Abstract: Large language models make agent-based simulation more behaviorally expressive, but they also sharpen a basic methodological tension: fluent, human-like output is not, by itself, evidence for theory. We evaluate what an LLM-driven simulation can credibly...
The Intelligent Disobedience Game: Formulating Disobedience in Stackelberg Games and Markov Decision Processes
arXiv:2603.20994v1 Announce Type: new Abstract: In shared autonomy, a critical tension arises when an automated assistant must choose between obeying a human's instruction and deliberately overriding it to prevent harm. This safety-critical behavior is known as intelligent disobedience. To formalize...
Seed1.8 Model Card: Towards Generalized Real-World Agency
arXiv:2603.20633v1 Announce Type: new Abstract: We present Seed1.8, a foundation model aimed at generalized real-world agency: going beyond single-turn prediction to multi-turn interaction, tool use, and multi-step execution. Seed1.8 keeps strong LLM and vision-language performance while supporting a unified agentic...
AutoMOOSE: An Agentic AI for Autonomous Phase-Field Simulation
arXiv:2603.20986v1 Announce Type: new Abstract: Multiphysics simulation frameworks such as MOOSE provide rigorous engines for phase-field materials modeling, yet adoption is constrained by the expertise required to construct valid input files, coordinate parameter sweeps, diagnose failures, and extract quantitative results....
Thinking into the Future: Latent Lookahead Training for Transformers
arXiv:2603.20219v1 Announce Type: new Abstract: Autoregressive language models trained with next-token prediction generate text by sampling one discrete token at a time. Although very scalable, this objective forces the model to commit at every step, preventing it from exploring or...
ORACLE: Optimizing Reasoning Abilities of Large Language Models via Constraint-Led Synthetic Data Elicitation
arXiv:2603.21140v1 Announce Type: new Abstract: Training large language models (LLMs) with synthetic reasoning data has become a popular approach to enhancing their reasoning capabilities, while a key factor influencing the effectiveness of this paradigm is the quality of the generated...
Multi-Agent Debate with Memory Masking
arXiv:2603.20215v1 Announce Type: new Abstract: Large language models (LLMs) have recently demonstrated impressive capabilities in reasoning tasks. Currently, mainstream LLM reasoning frameworks predominantly focus on scaling up inference-time sampling to enhance performance. In particular, among all LLM reasoning frameworks, multi-agent...
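A toy sketch of debate with memory masking: each round, an agent sees only a filtered view of the transcript, here the other agents' messages from the previous round, with its own history masked out. The masking rule and the agent stub are assumptions; the paper's actual policy is cut off by the truncated abstract.

```python
def agent_reply(name, visible_messages, question):
    """Stand-in for an LLM call; a real agent would condition on the prompt."""
    return f"{name} refines answer to {question!r} given {len(visible_messages)} messages"

def debate(agents, question, rounds=3):
    transcript = []  # list of (round, agent, message)
    for r in range(rounds):
        for name in agents:
            visible = [m for (rr, a, m) in transcript
                       if a != name and rr == r - 1]   # memory mask
            transcript.append((r, name, agent_reply(name, visible, question)))
    return transcript

for entry in debate(["A", "B", "C"], "What is 17 * 24?"):
    print(entry)
```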