Prescriptive Scaling Reveals the Evolution of Language Model Capabilities
arXiv:2602.15327v1 Announce Type: cross Abstract: For deploying foundation models, practitioners increasingly need prescriptive scaling laws: given a pre-training compute budget, what downstream accuracy is attainable with contemporary post-training practice, and how stable is that mapping as the field...
Learning Representations from Incomplete EHR Data with Dual-Masked Autoencoding
arXiv:2602.15159v1 Announce Type: new Abstract: Learning from electronic health records (EHRs) time series is challenging due to irregular sampling, heterogeneous missingness, and the resulting sparsity of observations. Prior self-supervised methods either impute before learning, represent missingness through a...
Size Transferability of Graph Transformers with Convolutional Positional Encodings
arXiv:2602.15239v1 Announce Type: new Abstract: Transformers have achieved remarkable success across domains, motivating the rise of Graph Transformers (GTs) as attention-based architectures for graph-structured data. A key design choice in GTs is the use of Graph Neural Network (GNN)-based positional...
Fast and Effective On-policy Distillation from Reasoning Prefixes
arXiv:2602.15260v1 Announce Type: new Abstract: On-policy distillation (OPD), which samples trajectories from the student model and supervises them with a teacher at the token level, avoids relying solely on verifiable terminal rewards and can yield better generalization than off-policy distillation....
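The abstract describes the generic on-policy distillation setup: trajectories are sampled from the student and supervised token-by-token against the teacher's distribution. A minimal sketch of that token-level objective (forward KL here; the paper's exact loss and its prefix mechanism are not shown in the truncated abstract):

```python
import numpy as np

def softmax(logits, axis=-1):
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def opd_token_loss(student_logits, teacher_logits):
    """Per-token forward KL(teacher || student), averaged over positions.

    Both arrays have shape (seq_len, vocab); the sequence itself is
    assumed to have been sampled from the student (on-policy)."""
    p_t = softmax(teacher_logits)
    log_p_s = np.log(softmax(student_logits))
    log_p_t = np.log(p_t)
    kl = (p_t * (log_p_t - log_p_s)).sum(axis=-1)
    return kl.mean()

rng = np.random.default_rng(0)
s = rng.normal(size=(5, 10))
loss_self = opd_token_loss(s, s)                         # identical models -> 0
loss_diff = opd_token_loss(s, s + rng.normal(size=(5, 10)))
```

Because supervision is dense (every token gets a target distribution), the signal is much richer than a single verifiable terminal reward.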
Complex-Valued Unitary Representations as Classification Heads for Improved Uncertainty Quantification in Deep Neural Networks
arXiv:2602.15283v1 Announce Type: new Abstract: Modern deep neural networks achieve high predictive accuracy but remain poorly calibrated: their confidence scores do not reliably reflect the true probability of correctness. We propose a quantum-inspired classification head architecture that projects backbone features...
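A quantum-inspired head of the kind the abstract gestures at can be sketched as follows: project real backbone features to complex class amplitudes, L2-normalize the resulting state, and read probabilities as squared magnitudes (the Born rule). This is a hedged illustration of the general idea, not the paper's exact architecture; `W_re` and `W_im` are hypothetical learned weights:

```python
import numpy as np

def born_rule_head(features, W_re, W_im):
    """Sketch of a complex-valued classification head: complex linear
    projection, unit-norm state, Born-rule probabilities."""
    psi = features @ (W_re + 1j * W_im)                    # complex amplitudes
    psi = psi / np.linalg.norm(psi, axis=-1, keepdims=True)
    return np.abs(psi) ** 2                                # rows sum to 1

rng = np.random.default_rng(4)
x = rng.normal(size=(3, 16))                               # 3 samples, 16-d features
p = born_rule_head(x, rng.normal(size=(16, 5)), rng.normal(size=(16, 5)))
```

The normalization makes the output a valid probability vector by construction, which is one route to better-behaved confidence scores.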
On Surprising Effectiveness of Masking Updates in Adaptive Optimizers
arXiv:2602.15322v1 Announce Type: new Abstract: Training large language models (LLMs) relies almost exclusively on dense adaptive optimizers with increasingly sophisticated preconditioners. We challenge this by showing that randomly masking parameter updates can be highly effective, with a masked variant of...
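The core idea in the abstract — randomly masking parameter updates inside an adaptive optimizer — can be sketched with standard Adam and a hypothetical Bernoulli mask (the paper's actual masking scheme is not visible in the truncated abstract):

```python
import numpy as np

def masked_adam_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999,
                     eps=1e-8, keep_prob=0.1, rng=None):
    """One Adam step in which each coordinate's update is applied with
    probability keep_prob; the moment estimates are still updated densely."""
    rng = rng or np.random.default_rng()
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)                 # bias correction
    v_hat = v / (1 - b2 ** t)
    update = lr * m_hat / (np.sqrt(v_hat) + eps)
    mask = rng.random(theta.shape) < keep_prob
    return theta - mask * update, m, v
```

With `keep_prob=0` the parameters are untouched; with `keep_prob=1` this reduces to plain Adam.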
Directional Reasoning Trajectory Change (DRTC): Identifying Critical Trace Segments in Reasoning Models
arXiv:2602.15332v1 Announce Type: new Abstract: Understanding how language models carry out long-horizon reasoning remains an open challenge. Existing interpretability methods often highlight tokens or spans correlated with an answer, but they rarely reveal where the model makes consequential reasoning turns,...
FedPSA: Modeling Behavioral Staleness in Asynchronous Federated Learning
arXiv:2602.15337v1 Announce Type: new Abstract: Asynchronous Federated Learning (AFL) has emerged as a significant research area in recent years. By not waiting for slower clients and instead executing training concurrently, it trains faster than traditional federated...
CDRL: A Reinforcement Learning Framework Inspired by Cerebellar Circuits and Dendritic Computational Strategies
arXiv:2602.15367v1 Announce Type: new Abstract: Reinforcement learning (RL) has achieved notable performance in high-dimensional sequential decision-making tasks, yet remains limited by low sample efficiency, sensitivity to noise, and weak generalization under partial observability. Most existing approaches address these issues primarily...
Logit Distance Bounds Representational Similarity
arXiv:2602.15438v1 Announce Type: new Abstract: For a broad family of discriminative models that includes autoregressive language models, identifiability results imply that if two models induce the same conditional distributions, then their internal representations agree up to an invertible linear transformation....
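The identifiability premise in the abstract — equal conditional distributions imply representations that agree up to an invertible linear map — can be illustrated with a toy construction: two "models" whose representations differ by an invertible matrix `A`, with output heads chosen so the logits coincide; `A` is then recoverable by least squares. All names here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
d, k, n = 8, 5, 100
H1 = rng.normal(size=(n, d))          # model-1 representations
A = rng.normal(size=(d, d))           # invertible (almost surely) linear map
H2 = H1 @ A                           # model-2 representations
W1 = rng.normal(size=(d, k))          # model-1 output head
W2 = np.linalg.solve(A, W1)           # model-2 head: H2 @ W2 == H1 @ W1
logit_gap = np.abs(H1 @ W1 - H2 @ W2).max()
A_hat, *_ = np.linalg.lstsq(H1, H2, rcond=None)   # recover the linear map
map_err = np.abs(A_hat - A).max()
```

Zero logit distance forces the representations into the same linear equivalence class, which is the direction the paper's bound quantifies.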
ExLipBaB: Exact Lipschitz Constant Computation for Piecewise Linear Neural Networks
arXiv:2602.15499v1 Announce Type: new Abstract: It has been shown that a neural network's Lipschitz constant can be leveraged to derive robustness guarantees, to improve generalizability via regularization or even to construct invertible networks. Therefore, a number of methods varying in...
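For context on what "exact" buys over cheap bounds: the standard product-of-spectral-norms estimate can be compared against a brute-force maximum over ReLU activation patterns on a tiny two-layer net. This is only a sketch of the bounding landscape, not the paper's algorithm (exact methods must additionally restrict to feasible patterns):

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(2)
W1 = rng.normal(size=(4, 3))   # 3 inputs -> 4 hidden ReLU units
W2 = rng.normal(size=(2, 4))   # 4 hidden -> 2 outputs

# Naive upper bound on the L2 Lipschitz constant.
naive = np.linalg.norm(W2, 2) * np.linalg.norm(W1, 2)

# Tighter bound: maximize the Jacobian's spectral norm over all
# 2^4 ReLU on/off patterns (diag(d) zeroes out inactive units).
pattern_bound = max(np.linalg.norm(W2 @ np.diag(d) @ W1, 2)
                    for d in product([0.0, 1.0], repeat=4))
```

Since `||diag(d)|| <= 1`, the pattern-wise bound is never larger than the norm product; exact computation tightens it further by discarding infeasible patterns, at combinatorial cost.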
CEPAE: Conditional Entropy-Penalized Autoencoders for Time Series Counterfactuals
arXiv:2602.15546v1 Announce Type: new Abstract: The ability to accurately perform counterfactual inference on time series is crucial for decision-making in fields like finance, healthcare, and marketing, as it allows us to understand the impact of events or treatments on outcomes...
Certified Per-Instance Unlearning Using Individual Sensitivity Bounds
arXiv:2602.15602v1 Announce Type: new Abstract: Certified machine unlearning can be achieved via noise injection leading to differential privacy guarantees, where noise is calibrated to worst-case sensitivity. Such conservative calibration often results in performance degradation, limiting practical applicability. In this work,...
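The abstract's starting point — noise calibrated to sensitivity for a DP-style guarantee — follows the classical Gaussian mechanism, where the noise scale is proportional to the sensitivity bound. A sketch of why per-instance bounds help (the 0.05 sensitivity value below is a hypothetical per-instance bound, not a result from the paper):

```python
import numpy as np

def gaussian_noise_scale(sensitivity, epsilon, delta):
    """Classical Gaussian-mechanism calibration (valid for epsilon < 1):
    std deviation sufficient for (epsilon, delta)-indistinguishability
    given an L2 sensitivity bound."""
    return sensitivity * np.sqrt(2 * np.log(1.25 / delta)) / epsilon

worst_case = gaussian_noise_scale(1.0, epsilon=1.0, delta=1e-5)
per_instance = gaussian_noise_scale(0.05, epsilon=1.0, delta=1e-5)
```

Because the scale is linear in sensitivity, replacing the worst-case constant with a tighter per-instance bound shrinks the injected noise proportionally, which is the mechanism behind the claimed utility gain.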
From Scarcity to Scale: A Release-Level Analysis of the Pashto Common Voice Dataset
arXiv:2602.14062v1 Announce Type: new Abstract: Large, openly licensed speech datasets are essential for building automatic speech recognition (ASR) systems, yet many widely spoken languages remain underrepresented in public resources. Pashto, spoken by more than 60 million people, has historically lacked...
Character-aware Transformers Learn an Irregular Morphological Pattern Yet None Generalize Like Humans
arXiv:2602.14100v1 Announce Type: new Abstract: Whether neural networks can serve as cognitive models of morphological learning remains an open question. Recent work has shown that encoder-decoder models can acquire irregular patterns, but evidence that they generalize these patterns like humans...
Knowing When Not to Answer: Abstention-Aware Scientific Reasoning
arXiv:2602.14189v1 Announce Type: new Abstract: Large language models are increasingly used to answer and verify scientific claims, yet existing evaluations typically assume that a model must always produce a definitive answer. In scientific settings, however, unsupported or uncertain conclusions can...
Detecting LLM Hallucinations via Embedding Cluster Geometry: A Three-Type Taxonomy with Measurable Signatures
arXiv:2602.14259v1 Announce Type: new Abstract: We propose a geometric taxonomy of large language model hallucinations based on observable signatures in token embedding cluster structure. By analyzing the static embedding spaces of 11 transformer models spanning encoder (BERT, RoBERTa, ELECTRA, DeBERTa,...
Does Socialization Emerge in AI Agent Society? A Case Study of Moltbook
arXiv:2602.14299v1 Announce Type: new Abstract: As large language model agents increasingly populate networked environments, a fundamental question arises: do artificial intelligence (AI) agent societies undergo convergence dynamics similar to human social systems? Lately, Moltbook approximates a plausible future scenario in...
Exploring the Performance of ML/DL Architectures on the MNIST-1D Dataset
arXiv:2602.13348v1 Announce Type: new Abstract: Small datasets like MNIST have historically been instrumental in advancing machine learning research by providing a controlled environment for rapid experimentation and model evaluation. However, their simplicity often limits their utility for distinguishing between advanced...
Accelerated Discovery of Cryoprotectant Cocktails via Multi-Objective Bayesian Optimization
arXiv:2602.13398v1 Announce Type: new Abstract: Designing cryoprotectant agent (CPA) cocktails for vitrification is challenging because formulations must be concentrated enough to suppress ice formation yet non-toxic enough to preserve cell viability. This tradeoff creates a large, multi-objective design space in...
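The tradeoff the abstract describes — ice suppression versus toxicity — is a multi-objective problem whose output is a Pareto front. A minimal sketch of extracting that front from already-evaluated formulations (the Bayesian optimization loop itself, which proposes new cocktails, is beyond this snippet; both objectives are oriented so larger is better):

```python
import numpy as np

def pareto_front(scores):
    """Indices of non-dominated rows, maximizing every column.
    A row is dominated if another row is >= everywhere and > somewhere."""
    keep = []
    for s in scores:
        dominated = any(np.all(t >= s) and np.any(t > s) for t in scores)
        keep.append(not dominated)
    return np.flatnonzero(keep)

# Hypothetical (ice-suppression score, viability score) per cocktail.
scores = np.array([[1.0, 1.0], [2.0, 0.0], [0.0, 2.0], [0.5, 0.5]])
front = pareto_front(scores)
```

Here the last formulation is dominated by the first, so only the first three lie on the front.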
Text Has Curvature
arXiv:2602.13418v1 Announce Type: new Abstract: Does text have an intrinsic curvature? Language is increasingly modeled in curved geometries - hyperbolic spaces for hierarchy, mixed-curvature manifolds for compositional structure - yet a basic scientific question remains unresolved: what does curvature mean...
Finding Highly Interpretable Prompt-Specific Circuits in Language Models
arXiv:2602.13483v1 Announce Type: new Abstract: Understanding the internal circuits that language models use to solve tasks remains a central challenge in mechanistic interpretability. Most prior work identifies circuits at the task level by averaging across many prompts, implicitly assuming a...
Federated Learning of Nonlinear Temporal Dynamics with Graph Attention-based Cross-Client Interpretability
arXiv:2602.13485v1 Announce Type: new Abstract: Networks of modern industrial systems are increasingly monitored by distributed sensors, where each system comprises multiple subsystems generating high-dimensional time series data. These subsystems are often interdependent, making it important to understand how temporal...
Preventing Rank Collapse in Federated Low-Rank Adaptation with Client Heterogeneity
arXiv:2602.13486v1 Announce Type: new Abstract: Federated low-rank adaptation (FedLoRA) has facilitated communication-efficient and privacy-preserving fine-tuning of foundation models for downstream tasks. In practical federated learning scenarios, client heterogeneity in system resources and data distributions motivates heterogeneous LoRA ranks across clients....
Zero-Order Optimization for LLM Fine-Tuning via Learnable Direction Sampling
arXiv:2602.13659v1 Announce Type: new Abstract: Fine-tuning large pretrained language models (LLMs) is a cornerstone of modern NLP, yet its growing memory demands (driven by backpropagation and large optimizer states) limit deployment in resource-constrained settings. Zero-order (ZO) methods bypass backpropagation by...
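The building block ZO fine-tuning methods share is the two-point gradient estimate along a sampled direction; the paper's contribution is learning the direction distribution, which is not sketched here. A minimal version of the base estimator:

```python
import numpy as np

def zo_grad_estimate(loss_fn, theta, direction, eps=1e-3):
    """Two-point zeroth-order gradient estimate along one direction:
    only two forward passes, no backpropagation."""
    delta = (loss_fn(theta + eps * direction) -
             loss_fn(theta - eps * direction)) / (2 * eps)
    return delta * direction

rng = np.random.default_rng(3)
theta = rng.normal(size=6)
loss = lambda w: 0.5 * np.sum(w ** 2)     # true gradient is w itself
u = rng.normal(size=6)
g = zo_grad_estimate(loss, theta, u)      # equals (theta . u) * u here
```

For this quadratic loss the finite difference is exact, so the estimate is the true gradient projected onto the sampled direction; averaging over directions recovers it in expectation, which is why the choice of sampling distribution matters.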