Towards Explainable Deep Learning for Ship Trajectory Prediction in Inland Waterways
arXiv:2603.04472v1 Announce Type: new Abstract: Accurate prediction of ship trajectories in crowded environments is essential to ensure safety in inland waterway traffic. Recent advances in deep learning promise increased accuracy even for complex scenarios. While the challenge of ship-to-ship awareness...
Augmenting representations with scientific papers
arXiv:2603.04516v1 Announce Type: new Abstract: Astronomers have acquired vast repositories of multimodal data, including images, spectra, and time series, complemented by decades of literature that analyzes astrophysical sources. Still, these data sources are rarely systematically integrated. This work introduces a...
Oracle-efficient Hybrid Learning with Constrained Adversaries
arXiv:2603.04546v1 Announce Type: new Abstract: The Hybrid Online Learning Problem, where features are drawn i.i.d. from an unknown distribution but labels are generated adversarially, is a well-motivated setting positioned between statistical and fully-adversarial online learning. Prior work has presented a...
Latent Particle World Models: Self-supervised Object-centric Stochastic Dynamics Modeling
arXiv:2603.04553v1 Announce Type: new Abstract: We introduce Latent Particle World Model (LPWM), a self-supervised object-centric world model scaled to real-world multi-object datasets and applicable in decision-making. LPWM autonomously discovers keypoints, bounding boxes, and object masks directly from video data, enabling...
When Sensors Fail: Temporal Sequence Models for Robust PPO under Sensor Drift
arXiv:2603.04648v1 Announce Type: new Abstract: Real-world reinforcement learning systems must operate under distributional drift in their observation streams, yet most policy architectures implicitly assume fully observed and noise-free states. We study robustness of Proximal Policy Optimization (PPO) under temporally persistent...
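The abstract above studies PPO under temporally persistent observational drift. A minimal sketch of one common way to model such persistence is an AR(1) drift process added to sensor readings; this is an illustrative assumption, not necessarily the paper's corruption model, and the function name `drifted_observations` is hypothetical:

```python
import random

def drifted_observations(obs_stream, rho=0.99, sigma=0.05, seed=0):
    """Corrupt scalar sensor readings with temporally persistent
    (AR(1)) drift: d_t = rho * d_{t-1} + noise, returning x_t + d_t."""
    rng = random.Random(seed)
    drift = 0.0
    corrupted = []
    for x in obs_stream:
        # High rho makes the drift slowly varying rather than i.i.d. noise.
        drift = rho * drift + rng.gauss(0.0, sigma)
        corrupted.append(x + drift)
    return corrupted
```

Because the drift is autocorrelated rather than white noise, a memoryless policy cannot average it out step by step, which is precisely what motivates temporal sequence models in this setting.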
Engineering Regression Without Real-Data Training: Domain Adaptation for Tabular Foundation Models Using Multi-Dataset Embeddings
arXiv:2603.04692v1 Announce Type: new Abstract: Predictive modeling in engineering applications has long been dominated by bespoke models and small, siloed tabular datasets, limiting the applicability of large-scale learning approaches. Despite recent progress in tabular foundation models, the resulting synthetic training...
Probabilistic Dreaming for World Models
arXiv:2603.04715v1 Announce Type: new Abstract: "Dreaming" enables agents to learn from imagined experiences, allowing more robust and sample-efficient learning of world models. In this work, we consider innovations to the state-of-the-art Dreamer model using probabilistic methods that enable: (1) the...
When Priors Backfire: On the Vulnerability of Unlearnable Examples to Pretraining
arXiv:2603.04731v1 Announce Type: new Abstract: Unlearnable Examples (UEs) serve as a data protection strategy that generates imperceptible perturbations to mislead models into learning spurious correlations instead of underlying semantics. In this paper, we uncover a fundamental vulnerability of UEs that...
Distribution-Conditioned Transport
arXiv:2603.04736v1 Announce Type: new Abstract: Learning a transport model that maps a source distribution to a target distribution is a canonical problem in machine learning, but scientific applications increasingly require models that can generalize to source and target distributions unseen...
ConTSG-Bench: A Unified Benchmark for Conditional Time Series Generation
arXiv:2603.04767v1 Announce Type: new Abstract: Conditional time series generation plays a critical role in addressing data scarcity and enabling causal analysis in real-world applications. Despite its increasing importance, the field lacks a standardized and systematic benchmarking framework for evaluating generative...
Distributional Reinforcement Learning with Information Bottleneck for Uncertainty-Aware DRAM Equalization
arXiv:2603.04768v1 Announce Type: new Abstract: Equalizer parameter optimization is critical for signal integrity in high-speed memory systems operating at multi-gigabit data rates. However, existing methods suffer from computationally expensive eye diagram evaluation, optimization of expected rather than worst-case performance, and...
Distributional Equivalence in Linear Non-Gaussian Latent-Variable Cyclic Causal Models: Characterization and Learning
arXiv:2603.04780v1 Announce Type: new Abstract: Causal discovery with latent variables is a fundamental task. Yet most existing methods rely on strong structural assumptions, such as enforcing specific indicator patterns for latents or restricting how they can interact with others. We...
Diffusion Policy through Conditional Proximal Policy Optimization
arXiv:2603.04790v1 Announce Type: new Abstract: Reinforcement learning (RL) has been extensively employed in a wide range of decision-making problems, such as games and robotics. Recently, diffusion policies have shown strong potential in modeling multi-modal behaviors, enabling more diverse and flexible...
Missingness Bias Calibration in Feature Attribution Explanations
arXiv:2603.04831v1 Announce Type: new Abstract: Popular explanation methods often produce unreliable feature importance scores due to missingness bias, a systematic distortion that arises when models are probed with ablated, out-of-distribution inputs. Existing solutions treat this as a deep representational flaw...
Why Is RLHF Alignment Shallow? A Gradient Analysis
arXiv:2603.04851v1 Announce Type: new Abstract: Why is safety alignment in LLMs shallow? We prove that gradient-based alignment inherently concentrates on positions where harm is decided and vanishes beyond. Using a martingale decomposition of sequence-level harm, we derive an exact characterization...
BandPO: Bridging Trust Regions and Ratio Clipping via Probability-Aware Bounds for LLM Reinforcement Learning
arXiv:2603.04918v1 Announce Type: new Abstract: Proximal constraints are fundamental to the stability of Large Language Model reinforcement learning. While the canonical clipping mechanism in PPO serves as an efficient surrogate for trust regions, we identify a critical bottleneck: fixed...
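The abstract above refers to the canonical clipping mechanism in PPO as a surrogate for trust regions. A minimal sketch of that standard clipped objective (the baseline being modified, not the BandPO method itself):

```python
def ppo_clip_objective(ratio, advantage, eps=0.2):
    """Per-token PPO surrogate: min(r*A, clip(r, 1-eps, 1+eps)*A),
    where r is the new/old policy probability ratio and A the advantage."""
    clipped_ratio = max(1.0 - eps, min(ratio, 1.0 + eps))
    # Taking the min makes the bound pessimistic: the objective stops
    # rewarding ratio moves beyond the fixed [1-eps, 1+eps] band.
    return min(ratio * advantage, clipped_ratio * advantage)
```

The fixed width `eps` applies uniformly regardless of the token's probability, which is the kind of rigidity a probability-aware bound would aim to relax.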
Generative AI in legal education: a two-year experiment with ChatGPT
Trump gets data center companies to pledge to pay for power generation
With no enforcement and questionable economics, it may not make a difference.
DiligenceSquared uses AI, voice agents to make M&A research affordable
Instead of relying on expensive management consultants, the startup uses AI voice agents to conduct interviews with customers of the companies the PE firms are considering buying.
A Dual-Helix Governance Approach Towards Reliable Agentic AI for WebGIS Development
arXiv:2603.04390v1 Announce Type: new Abstract: WebGIS development requires rigor, yet agentic AI frequently fails due to five large language model (LLM) limitations: context constraints, cross-session forgetting, stochasticity, instruction failure, and adaptation rigidity. We propose a dual-helix governance framework reframing these...
One Bias After Another: Mechanistic Reward Shaping and Persistent Biases in Language Reward Models
arXiv:2603.03291v1 Announce Type: cross Abstract: Reward Models (RMs) are crucial for online alignment of language models (LMs) with human preferences. However, RM-based preference-tuning is vulnerable to reward hacking, whereby LM policies learn undesirable behaviors from flawed RMs. By systematically measuring...
Language Model Goal Selection Differs from Humans' in an Open-Ended Task
arXiv:2603.03295v1 Announce Type: cross Abstract: As large language models (LLMs) get integrated into human decision-making, they are increasingly choosing goals autonomously rather than only completing human-defined ones, assuming they will reflect human preferences. However, human-LLM similarity in goal selection remains...
TATRA: Training-Free Instance-Adaptive Prompting Through Rephrasing and Aggregation
arXiv:2603.03298v1 Announce Type: cross Abstract: Large Language Models (LLMs) have improved substantially in alignment, yet their behavior remains highly sensitive to prompt phrasing. This brittleness has motivated automated prompt engineering, but most existing methods (i) require a task-specific training set, (ii)...
HumanLM: Simulating Users with State Alignment Beats Response Imitation
arXiv:2603.03303v1 Announce Type: cross Abstract: Large Language Models (LLMs) are increasingly used to simulate how specific users respond to a given context, enabling more user-centric applications that rely on user feedback. However, existing user simulators mostly imitate surface-level patterns and...
Token-Oriented Object Notation vs JSON: A Benchmark of Plain and Constrained Decoding Generation
arXiv:2603.03306v1 Announce Type: cross Abstract: The recently presented Token-Oriented Object Notation (TOON) aims to replace JSON as a serialization format for passing structured data to LLMs with significantly reduced token usage. While TOON shows solid accuracy in LLM comprehension, there is a...
How does fine-tuning improve sensorimotor representations in large language models?
arXiv:2603.03313v1 Announce Type: cross Abstract: Large Language Models (LLMs) exhibit a significant "embodiment gap", where their text-based representations fail to align with human sensorimotor experiences. This study systematically investigates whether and how task-specific fine-tuning can bridge this gap. Utilizing Representational...
Can Large Language Models Derive New Knowledge? A Dynamic Benchmark for Biological Knowledge Discovery
arXiv:2603.03322v1 Announce Type: cross Abstract: Recent advancements in Large Language Model (LLM) agents have demonstrated remarkable potential in automatic knowledge discovery. However, rigorously evaluating an AI's capacity for knowledge discovery remains a critical challenge. Existing benchmarks predominantly rely on static...
Discern Truth from Falsehood: Reducing Over-Refusal via Contrastive Refinement
arXiv:2603.03323v1 Announce Type: cross Abstract: Large language models (LLMs) aligned for safety often suffer from over-refusal, the tendency to reject benign prompts that merely appear toxic by misclassifying them as harmful. This behavior undermines models' helpfulness and restricts usability in sensitive...
Controlling Chat Style in Language Models via Single-Direction Editing
arXiv:2603.03324v1 Announce Type: cross Abstract: Controlling stylistic attributes in large language models (LLMs) remains challenging, with existing approaches relying on either prompt engineering or post-training alignment. This paper investigates this challenge through the lens of representation engineering, testing the hypothesis...
IntPro: A Proxy Agent for Context-Aware Intent Understanding via Retrieval-conditioned Inference
arXiv:2603.03325v1 Announce Type: cross Abstract: Large language models (LLMs) have become integral to modern Human-AI collaboration workflows, where accurately understanding user intent serves as a crucial step for generating satisfactory responses. Context-aware intent understanding, which involves inferring user intentions from...