The DSA's Blind Spot: Algorithmic Audit of Advertising and Minor Profiling on TikTok
arXiv:2603.05653v1 Announce Type: cross Abstract: Adolescents spend an increasing amount of their time in digital environments where their still-developing cognitive capacities leave them unable to recognize or resist commercial persuasion. Article 28(2) of the Digital Services Act (DSA) responds to...
When Rubrics Fail: Error Enumeration as Reward in Reference-Free RL Post-Training for Virtual Try-On
arXiv:2603.05659v1 Announce Type: cross Abstract: Reinforcement learning with verifiable rewards (RLVR) and Rubrics as Rewards (RaR) have driven strong gains in domains with clear correctness signals and even in subjective domains by synthesizing evaluation criteria from ideal reference answers. But...
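The contrast this abstract sets up can be sketched in toy form: a verifiable reward checks exact correctness against a reference, while an error-enumeration reward scores an output by counting failed checks and needs no reference answer. The specific checks below are illustrative assumptions, not the paper's criteria.

```python
def verifiable_reward(pred, target):
    """RLVR-style reward: 1 iff the checkable answer is exactly right."""
    return 1.0 if pred == target else 0.0

def error_enumeration_reward(output, error_checks):
    """Reference-free reward in the spirit of the title: enumerate
    concrete errors and penalize each one detected."""
    errors = sum(1 for check in error_checks if check(output))
    return -float(errors)

# toy checks standing in for domain-specific error detectors
checks = [
    lambda s: "teh" in s,    # spelling error present
    lambda s: len(s) == 0,   # empty output
]
print(error_enumeration_reward("a red shirt on teh person", checks))  # -1.0
```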
The Rise of AI in Weather and Climate Information and its Impact on Global Inequality
arXiv:2603.05710v1 Announce Type: cross Abstract: The rapid adoption of AI in Earth system science promises unprecedented speed and fidelity in the generation of climate information. However, this technological prowess rests on a fragile and unequal foundation: the current trajectory of...
Verify as You Go: An LLM-Powered Browser Extension for Fake News Detection
arXiv:2603.05519v1 Announce Type: new Abstract: The rampant spread of fake news in the digital age poses serious risks to public trust and democratic institutions, underscoring the need for effective, transparent, and user-centered detection tools. Existing browser extensions often fall short...
Attention Meets Reachability: Structural Equivalence and Efficiency in Grammar-Constrained LLM Decoding
arXiv:2603.05540v1 Announce Type: new Abstract: We study grammar-constrained decoding (GCD) as a coupling between an autoregressive next-token distribution and a reachability oracle over a pushdown system compiled from a context-free grammar (CFG). We prove an oracle invariance theorem: language-equivalent grammars...
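The coupling described here can be illustrated with a toy oracle for balanced parentheses, standing in for the compiled pushdown system: at each step the next-token distribution is masked to tokens from which a grammatical completion remains reachable, then renormalized. This is a generic GCD sketch under those assumptions, not the paper's construction.

```python
import math

def allowed(prefix, depth_limit=3):
    """Toy reachability oracle for the CFG S -> ( S ) S | empty:
    a token is permitted iff some grammatical completion exists."""
    depth = prefix.count("(") - prefix.count(")")
    ok = []
    if depth < depth_limit:
        ok.append("(")
    if depth > 0:
        ok.append(")")
    if depth == 0:
        ok.append("<eos>")
    return ok

def gcd_step(logits, prefix):
    """Mask the next-token distribution to oracle-approved tokens,
    then renormalize -- the coupling the abstract describes."""
    mask = allowed(prefix)
    masked = {t: math.exp(l) for t, l in logits.items() if t in mask}
    z = sum(masked.values())
    return {t: p / z for t, p in masked.items()}

logits = {"(": 1.0, ")": 2.0, "<eos>": 0.5}
d0 = gcd_step(logits, [])     # ")" gets zero mass: no grammatical prefix
d1 = gcd_step(logits, ["("])  # "<eos>" gets zero mass: unbalanced
print(sorted(d0), sorted(d1))
```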
PVminerLLM: Structured Extraction of Patient Voice from Patient-Generated Text using Large Language Models
arXiv:2603.05776v1 Announce Type: new Abstract: Motivation: Patient-generated text contains critical information about patients' lived experiences, social circumstances, and engagement in care, including factors that strongly influence adherence, care coordination, and health equity. However, these patient voice signals are rarely available...
Tutor Move Taxonomy: A Theory-Aligned Framework for Analyzing Instructional Moves in Tutoring
arXiv:2603.05778v1 Announce Type: new Abstract: Understanding what makes tutoring effective requires methods for systematically analyzing tutors' instructional actions during learning interactions. This paper presents a tutor move taxonomy designed to support large-scale analysis of tutoring dialogue within the National Tutoring...
RouteGoT: Node-Adaptive Routing for Cost-Efficient Graph of Thoughts Reasoning
arXiv:2603.05818v1 Announce Type: new Abstract: Large Language Models (LLMs) excel at multi-step reasoning, yet increasing the structural complexity of inference does not consistently improve system-level returns. Methods such as Tree of Thoughts (ToT), Graph of Thoughts (GoT), and Adaptive Graph...
Lost in Stories: Consistency Bugs in Long Story Generation by LLMs
arXiv:2603.05890v1 Announce Type: new Abstract: What happens when a storyteller forgets its own story? Large Language Models (LLMs) can now generate narratives spanning tens of thousands of words, but they often fail to maintain consistency throughout. When generating long-form narratives,...
Learning Next Action Predictors from Human-Computer Interaction
arXiv:2603.05923v1 Announce Type: new Abstract: Truly proactive AI systems must anticipate what we will do next. This foresight demands far richer information than the sparse signals we type into our prompts -- it demands reasoning over the entire context of...
Implicit Style Conditioning: A Structured Style-Rewrite Framework for Low-Resource Character Modeling
arXiv:2603.05933v1 Announce Type: new Abstract: Large Language Models (LLMs) have demonstrated impressive capabilities in role-playing (RP); however, achieving highly stylized personas with small Language Models (SLMs) remains a challenge due to data scarcity and the complexity of style disentanglement. Standard Supervised...
Evaluating Austrian A-Level German Essays with Large Language Models for Automated Essay Scoring
arXiv:2603.06066v1 Announce Type: new Abstract: Automated Essay Scoring (AES) has been explored for decades with the goal of supporting teachers by reducing grading workload and mitigating subjective biases. While early systems relied on handcrafted features and statistical models, recent advances...
Experiences Build Characters: The Linguistic Origins and Functional Impact of LLM Personality
arXiv:2603.06088v1 Announce Type: new Abstract: Human problem-solving is enriched by a diversity of styles and personality traits, yet the development of Large Language Models (LLMs) has largely prioritized uniform performance benchmarks that favour specific behavioural tendencies such as assertiveness. To...
A Causal Graph Approach to Oppositional Narrative Analysis
arXiv:2603.06135v1 Announce Type: new Abstract: Current methods for textual analysis rely on data annotated within predefined ontologies, often embedding human bias within black-box models. Despite achieving near-perfect performance, these approaches exploit unstructured, linear pattern recognition rather than modeling the structured...
MAPO: Mixed Advantage Policy Optimization for Long-Horizon Multi-Turn Dialogue
arXiv:2603.06194v1 Announce Type: new Abstract: Subjective multi-turn dialogue tasks, such as emotional support, require conversational policies that adapt to evolving user states and optimize long-horizon interaction quality. However, reinforcement learning (RL) for such settings remains challenging due to the absence...
LIT-RAGBench: Benchmarking Generator Capabilities of Large Language Models in Retrieval-Augmented Generation
arXiv:2603.06198v1 Announce Type: new Abstract: Retrieval-Augmented Generation (RAG) is a framework in which a Generator, such as a Large Language Model (LLM), produces answers by retrieving documents from an external collection using a Retriever. In practice, Generators must integrate evidence...
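The Retriever-plus-Generator framework defined here can be sketched minimally: retrieve the top-k documents by similarity, then hand them to the Generator as evidence. The bag-of-words cosine retriever and the stub generator below are illustrative stand-ins, not the benchmark's components.

```python
import math
from collections import Counter

DOCS = [
    "the capital of france is paris",
    "python is a programming language",
    "the eiffel tower is in paris",
]

def cosine(a, b):
    """Bag-of-words cosine similarity between two strings."""
    ca, cb = Counter(a.split()), Counter(b.split())
    dot = sum(ca[w] * cb[w] for w in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb)

def retrieve(query, k=2):
    """Retriever: top-k documents from the external collection."""
    return sorted(DOCS, key=lambda d: cosine(query, d), reverse=True)[:k]

def generate(query, context):
    """Stand-in for the LLM Generator: just surfaces the evidence."""
    return f"Q: {query} | evidence: {' / '.join(context)}"

ctx = retrieve("what is the capital of france")
print(generate("what is the capital of france", ctx))
```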
Transparent AI for Mathematics: Transformer-Based Large Language Models for Mathematical Entity Relationship Extraction with XAI
arXiv:2603.06348v1 Announce Type: new Abstract: Mathematical text understanding is a challenging task due to the presence of specialized entities and complex relationships between them. This study formulates mathematical problem interpretation as a Mathematical Entity Relation Extraction (MERE) task, where operands...
Evaluation of Deontic Conditional Reasoning in Large Language Models: The Case of Wason's Selection Task
arXiv:2603.06416v1 Announce Type: new Abstract: As large language models (LLMs) advance in linguistic competence, their reasoning abilities are gaining increasing attention. In humans, reasoning often performs well in domain-specific settings, particularly in normative rather than purely formal contexts. Although...
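For readers unfamiliar with the task: in the deontic variant of Wason's selection task, a conditional rule such as "if a person is drinking beer, they must be over 18" is checked against cards showing a drink on one side and an age on the other. The normatively correct move is to turn only the cards whose hidden side could violate the rule, which a few lines make concrete:

```python
# Deontic rule: "if a person is drinking beer, they must be over 18".
# Each card shows a drink on one side and an age on the other.
cards = ["beer", "coke", "22", "16"]

def must_check(card):
    """Turn a card iff its hidden side could violate the rule."""
    if card == "beer":                      # hidden age might be under 18
        return True
    if card.isdigit() and int(card) < 18:   # hidden drink might be beer
        return True
    return False                            # "coke" and "22" are safe

print([c for c in cards if must_check(c)])  # ['beer', '16']
```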
Speak in Context: Multilingual ASR with Speech Context Alignment via Contrastive Learning
arXiv:2603.06505v1 Announce Type: new Abstract: Automatic speech recognition (ASR) has benefited from advances in pretrained speech and language models, yet most systems remain constrained to monolingual settings and short, isolated utterances. While recent efforts in context-aware ASR show promise, two...
FuseDiff: Symmetry-Preserving Joint Diffusion for Dual-Target Structure-Based Drug Design
arXiv:2603.05567v1 Announce Type: new Abstract: Dual-target structure-based drug design aims to generate a single ligand together with two pocket-specific binding poses, each compatible with a corresponding target pocket, enabling polypharmacological therapies with improved efficacy and reduced resistance. Existing approaches typically...
Reinforcement Learning for Power-Flow Network Analysis
arXiv:2603.05673v1 Announce Type: new Abstract: The power flow equations are non-linear multivariate equations that describe the relationship between power injections and bus voltages of electric power networks. Given a network topology, we are interested in finding network parameters with many...
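The non-linear relationship the abstract refers to is the standard AC power flow equations, which a direct implementation makes concrete. The two-bus example below (a lossless line of reactance 0.1 pu) is an illustrative assumption, not a network from the paper.

```python
import math

def power_injections(V, theta, G, B):
    """AC power flow equations, with th_ij = theta_i - theta_j and
    Y = G + jB the bus admittance matrix:
      P_i = sum_j |V_i||V_j| (G_ij cos th_ij + B_ij sin th_ij)
      Q_i = sum_j |V_i||V_j| (G_ij sin th_ij - B_ij cos th_ij)
    """
    n = len(V)
    P, Q = [0.0] * n, [0.0] * n
    for i in range(n):
        for j in range(n):
            d = theta[i] - theta[j]
            P[i] += V[i] * V[j] * (G[i][j] * math.cos(d) + B[i][j] * math.sin(d))
            Q[i] += V[i] * V[j] * (G[i][j] * math.sin(d) - B[i][j] * math.cos(d))
    return P, Q

# two buses joined by a lossless line of reactance 0.1 pu (y = -10j)
G = [[0.0, 0.0], [0.0, 0.0]]
B = [[-10.0, 10.0], [10.0, -10.0]]
P, Q = power_injections([1.0, 1.0], [0.1, 0.0], G, B)
print(P[0])  # ~0.998 pu of real power flows from bus 1 to bus 2
```

A flat voltage profile (equal magnitudes, zero angles) gives zero injections on this line, which is a quick sanity check on any implementation.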
Bridging Domains through Subspace-Aware Model Merging
arXiv:2603.05768v1 Announce Type: new Abstract: Model merging integrates multiple task-specific models into a single consolidated one. Recent research has made progress in improving merging performance for in-distribution or multi-task scenarios, but domain generalization in model merging remains underexplored. We investigate...
Sparse Crosscoders for diffing MoEs and Dense models
arXiv:2603.05805v1 Announce Type: new Abstract: Mixture of Experts (MoE) models achieve parameter-efficient scaling through sparse expert routing, yet their internal representations remain poorly understood compared to dense models. We present a systematic comparison of MoE and dense model internals using crosscoders,...
Reference-guided Policy Optimization for Molecular Optimization via LLM Reasoning
arXiv:2603.05900v1 Announce Type: new Abstract: Large language models (LLMs) benefit substantially from supervised fine-tuning (SFT) and reinforcement learning with verifiable rewards (RLVR) in reasoning tasks. However, these recipes perform poorly in instruction-based molecular optimization, where each data point typically provides...
Omni-Masked Gradient Descent: Memory-Efficient Optimization via Mask Traversal with Improved Convergence
arXiv:2603.05960v1 Announce Type: new Abstract: Memory-efficient optimization methods have recently gained increasing attention for scaling full-parameter training of large language models under the GPU-memory bottleneck. Existing approaches either lack clear convergence guarantees, or only achieve the standard ${\mathcal{O}}(\epsilon^{-4})$ iteration complexity...
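The general masked-update idea behind memory-efficient optimizers can be illustrated with a block-cyclic toy: at each step only one block of coordinates is updated, so optimizer state for the rest need not be kept resident. This is a generic sketch of mask traversal; the paper's actual traversal scheme and its convergence analysis are beyond this excerpt.

```python
def masked_gd(grad, theta, lr=0.1, blocks=2, steps=20):
    """Toy block-cyclic masked gradient descent: at step t only the
    coordinates in block (t mod blocks) receive the gradient update."""
    n = len(theta)
    for t in range(steps):
        active = [i for i in range(n) if i % blocks == t % blocks]
        g = grad(theta)
        for i in active:
            theta[i] -= lr * g[i]
    return theta

# minimize f(x) = sum(x_i^2); the gradient is 2*x
theta = masked_gd(lambda x: [2 * xi for xi in x], [1.0, -2.0, 3.0, -4.0])
print(theta)  # every coordinate shrunk by 0.8 per active step
```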
EvoESAP: Non-Uniform Expert Pruning for Sparse MoE
arXiv:2603.06003v1 Announce Type: new Abstract: Sparse Mixture-of-Experts (SMoE) language models achieve strong capability at low per-token compute, yet deployment remains memory- and throughput-bound because the full expert pool must be stored and served. Post-training expert pruning reduces this cost, but...
Preventing Learning Stagnation in PPO by Scaling to 1 Million Parallel Environments
arXiv:2603.06009v1 Announce Type: new Abstract: Plateaus, where an agent's performance stagnates at a suboptimal level, are a common problem in deep on-policy RL. Focusing on PPO due to its widespread adoption, we show that plateaus in certain regimes arise not...
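As a reference point for the discussion of plateaus, PPO's standard clipped surrogate is worth recalling: outside the clipping band the objective is flat in the probability ratio, one well-known way per-sample gradients can vanish. The specific regime the paper analyzes is truncated in this excerpt; the sketch below is just the textbook objective.

```python
import math

def ppo_clip_loss(logp_new, logp_old, advantage, eps=0.2):
    """PPO-Clip surrogate for one sample:
    L = -min(r * A, clip(r, 1 - eps, 1 + eps) * A),  r = pi_new / pi_old."""
    r = math.exp(logp_new - logp_old)
    clipped = max(min(r, 1 + eps), 1 - eps)
    return -min(r * advantage, clipped * advantage)

# once the ratio exceeds 1 + eps for a positive advantage, the loss
# saturates at -(1 + eps) * A and stops changing with r
print(ppo_clip_loss(0.5, 0.0, advantage=1.0))  # -1.2
```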
Latent Diffusion-Based 3D Molecular Recovery from Vibrational Spectra
arXiv:2603.06113v1 Announce Type: new Abstract: Infrared (IR) spectroscopy, a type of vibrational spectroscopy, is widely used for molecular structure determination and provides critical structural information for chemists. However, existing approaches for recovering molecular structures from IR spectra typically rely on...
Dynamic Momentum Recalibration in Online Gradient Learning
arXiv:2603.06120v1 Announce Type: new Abstract: Stochastic Gradient Descent (SGD) and its momentum variants form the backbone of deep learning optimization, yet the underlying dynamics of their gradient behavior remain insufficiently understood. In this work, we reinterpret gradient updates through the...
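The classical momentum update this abstract reinterprets is short enough to state directly: the velocity is an exponentially weighted sum of past gradients. The quadratic test problem below is an illustrative assumption, not the paper's setup.

```python
def sgd_momentum(grad, theta, lr=0.1, beta=0.9, steps=50):
    """Classical (heavy-ball) momentum:
    v <- beta * v + g;  theta <- theta - lr * v."""
    v = [0.0] * len(theta)
    for _ in range(steps):
        g = grad(theta)
        v = [beta * vi + gi for vi, gi in zip(v, g)]
        theta = [ti - lr * vi for ti, vi in zip(theta, v)]
    return theta

# minimize f(x) = sum(x_i^2); momentum overshoots, then oscillations decay
theta = sgd_momentum(lambda x: [2 * xi for xi in x], [5.0, -3.0])
print(theta)
```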
Partial Policy Gradients for RL in LLMs
arXiv:2603.06138v1 Announce Type: new Abstract: Reinforcement learning is a framework for learning to act sequentially in an unknown environment. We propose a natural approach for modeling policy structure in policy gradients. The key idea is to optimize for a subset...