Action-Graph Policies: Learning Action Co-dependencies in Multi-Agent Reinforcement Learning
arXiv:2602.17009v1 Announce Type: new Abstract: Coordinating actions is the most fundamental form of cooperation in multi-agent reinforcement learning (MARL). Successful decentralized decision-making often depends not only on good individual actions, but on selecting compatible actions across agents to synchronize behavior,...
Malliavin Calculus as Stochastic Backpropagation
arXiv:2602.17013v1 Announce Type: new Abstract: We establish a rigorous connection between pathwise (reparameterization) and score-function (Malliavin) gradient estimators by showing that both arise from the Malliavin integration-by-parts identity. Building on this equivalence, we introduce a unified and variance-aware hybrid estimator...
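As background for the two estimator families the abstract connects, here is a minimal sketch (not the paper's unified hybrid estimator) of the classic pathwise (reparameterization) and score-function gradient estimators for d/dμ E[f(x)] with x ~ N(μ, 1); the objective f(x) = x² and all function names are illustrative assumptions.

```python
import random

def f(x):
    return x ** 2  # toy objective; the true gradient of E[f] w.r.t. mu is 2*mu

def pathwise_grad(mu, n=100_000, seed=0):
    # Reparameterization: x = mu + eps with eps ~ N(0, 1),
    # so d f(x) / d mu = f'(x) = 2x. Typically low variance.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = mu + rng.gauss(0.0, 1.0)
        total += 2 * x
    return total / n

def score_function_grad(mu, n=100_000, seed=0):
    # Score-function (REINFORCE): E[f(x) * d log p(x; mu) / d mu],
    # where the Gaussian score is (x - mu). Typically higher variance.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = mu + rng.gauss(0.0, 1.0)
        total += f(x) * (x - mu)
    return total / n
```

Both estimators converge to the same gradient (here 2μ), which is the equivalence the abstract grounds in Malliavin integration by parts; the variance gap between them is what motivates a variance-aware hybrid.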
WS-GRPO: Weakly-Supervised Group-Relative Policy Optimization for Rollout-Efficient Reasoning
arXiv:2602.17025v1 Announce Type: new Abstract: Group Relative Policy Optimization (GRPO) is effective for training language models on complex reasoning. However, since the objective is defined relative to a group of sampled trajectories, extended deliberation can create more chances to realize...
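For readers unfamiliar with the group-relative objective the abstract refers to, here is a hedged sketch of standard GRPO advantage normalization (general background, not the paper's weakly-supervised WS-GRPO variant): each sampled trajectory's reward is normalized against the statistics of its own rollout group.

```python
import statistics

def group_relative_advantages(rewards, eps=1e-8):
    # Normalize each trajectory's reward against its group's mean and
    # (population) standard deviation; eps guards against zero spread.
    mean = statistics.fmean(rewards)
    std = statistics.pstdev(rewards)
    return [(r - mean) / (std + eps) for r in rewards]
```

Because the advantage of any one trajectory depends on the whole sampled group, the number and length of rollouts directly shapes the optimization signal, which is the lever a rollout-efficient method targets.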
Forecasting Anomaly Precursors via Uncertainty-Aware Time-Series Ensembles
arXiv:2602.17028v1 Announce Type: new Abstract: Detecting anomalies in time-series data is critical in domains such as industrial operations, finance, and cybersecurity, where early identification of abnormal patterns is essential for ensuring system reliability and enabling preventive maintenance. However, most existing...
MeGU: Machine-Guided Unlearning with Target Feature Disentanglement
arXiv:2602.17088v1 Announce Type: new Abstract: The growing concern over training data privacy has elevated the "Right to be Forgotten" into a critical requirement, thereby raising the demand for effective Machine Unlearning. However, existing unlearning approaches commonly suffer from a fundamental...
FLoRG: Federated Fine-tuning with Low-rank Gram Matrices and Procrustes Alignment
arXiv:2602.17095v1 Announce Type: new Abstract: Parameter-efficient fine-tuning techniques such as low-rank adaptation (LoRA) enable large language models (LLMs) to adapt to downstream tasks efficiently. Federated learning (FL) further facilitates this process by enabling collaborative fine-tuning across distributed clients without sharing...
FCC asks stations for "pro-America" programming, like daily Pledge of Allegiance
Brendan Carr wants "patriotic" shows for Trump's yearlong America 250 celebration.
Wikipedia blacklists Archive.today, starts removing 695,000 archive links
If DDoSing a blog wasn't bad enough, archive site also tampered with web snapshots.
Supreme Court blocks Trump's emergency tariffs, billions in refunds may be owed
Economists estimated more than $175 billion may need to be refunded.
UAE’s G42 teams up with Cerebras to deploy 8 exaflops of compute in India
Abu Dhabi-based tech company G42 has partnered with U.S.-based chipmaker Cerebras to deploy 8 exaflops of compute through a new system in India.
KD4MT: A Survey of Knowledge Distillation for Machine Translation
arXiv:2602.15845v1 Announce Type: new Abstract: Knowledge Distillation (KD) has gained considerable traction in recent years as a compression technique for addressing the challenges posed by ever-larger models in NLP. Remarkably, Machine Translation (MT) offers a...
Gated Tree Cross-attention for Checkpoint-Compatible Syntax Injection in Decoder-Only LLMs
arXiv:2602.15846v1 Announce Type: new Abstract: Decoder-only large language models achieve strong broad performance but are brittle to minor grammatical perturbations, undermining reliability for downstream reasoning. However, directly injecting explicit syntactic structure into an existing checkpoint can interfere with its pretrained...
Multi-source Heterogeneous Public Opinion Analysis via Collaborative Reasoning and Adaptive Fusion: A Systematically Integrated Approach
arXiv:2602.15857v1 Announce Type: new Abstract: The analysis of public opinion from multiple heterogeneous sources presents significant challenges due to structural differences, semantic variations, and platform-specific biases. This paper introduces a novel Collaborative Reasoning and Adaptive Fusion (CRAF) framework that systematically...
From Transcripts to AI Agents: Knowledge Extraction, RAG Integration, and Robust Evaluation of Conversational AI Assistants
arXiv:2602.15859v1 Announce Type: new Abstract: Building reliable conversational AI assistants for customer-facing industries remains challenging due to noisy conversational data, fragmented knowledge, and the requirement for accurate human hand-off - particularly in domains that depend heavily on real-time information. This...
CheckIfExist: Detecting Citation Hallucinations in the Era of AI-Generated Content
arXiv:2602.15871v1 Announce Type: new Abstract: The proliferation of large language models (LLMs) in academic workflows has introduced unprecedented challenges to bibliographic integrity, particularly through reference hallucination -- the generation of plausible but non-existent citations. Recent investigations have documented the presence...
P-RAG: Prompt-Enhanced Parametric RAG with LoRA and Selective CoT for Biomedical and Multi-Hop QA
arXiv:2602.15874v1 Announce Type: new Abstract: Large Language Models (LLMs) demonstrate remarkable capabilities but remain limited by their reliance on static training data. Retrieval-Augmented Generation (RAG) addresses this constraint by retrieving external knowledge during inference, though it still depends heavily on...
Every Little Helps: Building Knowledge Graph Foundation Model with Fine-grained Transferable Multi-modal Tokens
arXiv:2602.15896v1 Announce Type: new Abstract: Multi-modal knowledge graph reasoning (MMKGR) aims to predict the missing links by exploiting both graph structure information and multi-modal entity contents. Most existing works are designed for a transductive setting, which learns dataset-specific embeddings and...
MultiCube-RAG for Multi-hop Question Answering
arXiv:2602.15898v1 Announce Type: new Abstract: Multi-hop question answering (QA) necessitates multi-step reasoning and retrieval across interconnected subjects, attributes, and relations. Existing retrieval-augmented generation (RAG) methods struggle to capture these structural semantics accurately, resulting in suboptimal performance. Graph-based RAGs structure such...
A Curious Class of Adpositional Multiword Expressions in Korean
arXiv:2602.16023v1 Announce Type: new Abstract: Multiword expressions (MWEs) have been widely studied in cross-lingual annotation frameworks such as PARSEME. However, Korean MWEs remain underrepresented in these efforts. In particular, Korean multiword adpositions lack systematic analysis, annotated resources, and integration into...
Language Statistics and False Belief Reasoning: Evidence from 41 Open-Weight LMs
arXiv:2602.16085v1 Announce Type: new Abstract: Research on mental state reasoning in language models (LMs) has the potential to inform theories of human social cognition--such as the theory that mental state reasoning emerges in part from language exposure--and our understanding of...
Missing-by-Design: Certifiable Modality Deletion for Revocable Multimodal Sentiment Analysis
arXiv:2602.16144v1 Announce Type: new Abstract: As multimodal systems increasingly process sensitive personal data, the ability to selectively revoke specific data modalities has become a critical requirement for privacy compliance and user autonomy. We present Missing-by-Design (MBD), a unified framework for...
Balancing Faithfulness and Performance in Reasoning via Multi-Listener Soft Execution
arXiv:2602.16154v1 Announce Type: new Abstract: Chain-of-thought (CoT) reasoning sometimes fails to faithfully reflect the true computation of a large language model (LLM), hampering its utility in explaining how LLMs arrive at their answers. Moreover, optimizing for faithfulness and interpretability in...
Beyond Learning: A Training-Free Alternative to Model Adaptation
arXiv:2602.16189v1 Announce Type: new Abstract: Despite the continuous research and evolution of language models, they sometimes underperform previous versions. Existing approaches to overcome these challenges are resource-intensive, highlighting the need for alternatives that enable immediate action. We assume that each...
Aladdin-FTI @ AMIYA Three Wishes for Arabic NLP: Fidelity, Diglossia, and Multidialectal Generation
arXiv:2602.16290v1 Announce Type: new Abstract: Arabic dialects have long been under-represented in Natural Language Processing (NLP) research due to their non-standardization and high variability, which pose challenges for computational modeling. Recent advances in the field, such as Large Language Models...
Helpful to a Fault: Measuring Illicit Assistance in Multi-Turn, Multilingual LLM Agents
arXiv:2602.16346v1 Announce Type: new Abstract: LLM-based agents execute real-world workflows via tools and memory. These same affordances allow ill-intended adversaries to use such agents to carry out complex misuse scenarios. Existing agent misuse benchmarks largely test single-prompt instructions, leaving a...
Distributed physics-informed neural networks via domain decomposition for fast flow reconstruction
arXiv:2602.15883v1 Announce Type: new Abstract: Physics-Informed Neural Networks (PINNs) offer a powerful paradigm for flow reconstruction, seamlessly integrating sparse velocity measurements with the governing Navier-Stokes equations to recover complete velocity and latent pressure fields. However, scaling such models to large...
Adaptive Semi-Supervised Training of P300 ERP-BCI Speller System with Minimum Calibration Effort
arXiv:2602.15955v1 Announce Type: new Abstract: A P300 ERP-based Brain-Computer Interface (BCI) speller is an assistive communication tool. It searches for the P300 event-related potential (ERP) elicited by target stimuli, distinguishing it from the neural responses to non-target stimuli embedded in...
Verifier-Constrained Flow Expansion for Discovery Beyond the Data
arXiv:2602.15984v1 Announce Type: new Abstract: Flow and diffusion models are typically pre-trained on limited available data (e.g., molecular samples), covering only a fraction of the valid design space (e.g., the full molecular space). As a consequence, they tend to generate...
AI-CARE: Carbon-Aware Reporting Evaluation Metric for AI Models
arXiv:2602.16042v1 Announce Type: new Abstract: As machine learning (ML) continues its rapid expansion, the environmental cost of model training and inference has become a critical societal concern. Existing benchmarks overwhelmingly focus on standard performance metrics such as accuracy, BLEU, or...
MoE-Spec: Expert Budgeting for Efficient Speculative Decoding
arXiv:2602.16052v1 Announce Type: new Abstract: Speculative decoding accelerates Large Language Model (LLM) inference by verifying multiple drafted tokens in parallel. However, for Mixture-of-Experts (MoE) models, this parallelism introduces a severe bottleneck: large draft trees activate many unique experts, significantly increasing...
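As background for the verification step the abstract describes, here is a hedged sketch of the standard speculative-decoding acceptance rule (general technique, not the paper's MoE-Spec expert-budgeting scheme): a token drafted under distribution q is accepted against the target distribution p with probability min(1, p/q).

```python
import random

def accept_drafted_token(p_target, p_draft, token, rng=None):
    # Standard rejection-sampling acceptance test: accept the drafted
    # token with probability min(1, p_target(token) / p_draft(token)).
    # p_target and p_draft map tokens to probabilities (illustrative).
    rng = rng or random.Random()
    ratio = p_target[token] / p_draft[token]
    return rng.random() < min(1.0, ratio)
```

Verifying a whole draft tree means running the target model on many speculative tokens at once; in an MoE model each such token can route to different experts, which is the activation blow-up the paper targets.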