TARAZ: Persian Short-Answer Question Benchmark for Cultural Evaluation of Language Models
arXiv:2602.22827v1 Announce Type: new Abstract: This paper presents a comprehensive evaluation framework for assessing the cultural competence of large language models (LLMs) in Persian. Existing Persian cultural benchmarks rely predominantly on multiple-choice formats and English-centric metrics that fail to capture...
Where Vision Becomes Text: Locating the OCR Routing Bottleneck in Vision-Language Models
arXiv:2602.22918v1 Announce Type: new Abstract: Vision-language models (VLMs) can read text from images, but where does this optical character recognition (OCR) information enter the language processing stream? We investigate the OCR routing mechanism across three architecture families (Qwen3-VL, Phi-4, InternVL3.5)...
CiteLLM: An Agentic Platform for Trustworthy Scientific Reference Discovery
arXiv:2602.23075v1 Announce Type: new Abstract: Large language models (LLMs) have created new opportunities to enhance the efficiency of scholarly activities; however, challenges persist in the ethical deployment of AI assistance, including (1) the trustworthiness of AI-generated content, (2) preservation of...
MTRAG-UN: A Benchmark for Open Challenges in Multi-Turn RAG Conversations
arXiv:2602.23184v1 Announce Type: new Abstract: We present MTRAG-UN, a benchmark for exploring open challenges in multi-turn retrieval augmented generation, a popular use of large language models. We release a benchmark of 666 tasks containing over 2,800 conversation turns across 6...
SPARTA: Scalable and Principled Benchmark of Tree-Structured Multi-hop QA over Text and Tables
arXiv:2602.23286v1 Announce Type: new Abstract: Real-world Table-Text question answering (QA) tasks require models that can reason across long text and source tables, traversing multiple hops and executing complex operations such as aggregation. Yet existing benchmarks are small, manually curated -...
Revisiting Chebyshev Polynomial and Anisotropic RBF Models for Tabular Regression
arXiv:2602.22422v1 Announce Type: new Abstract: Smooth-basis models such as Chebyshev polynomial regressors and radial basis function (RBF) networks are well established in numerical analysis. Their continuously differentiable prediction surfaces suit surrogate optimisation, sensitivity analysis, and other settings where the response...
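
For reference, the Chebyshev polynomial regressor named in the title is a few lines with NumPy. This is a minimal sketch on toy data, not the paper's implementation; the degree and the synthetic target are illustrative assumptions.

    import numpy as np

    # Toy 1-D regression data (illustrative only).
    rng = np.random.default_rng(0)
    x = np.linspace(-1.0, 1.0, 200)
    y = np.sin(3 * x) + 0.1 * rng.standard_normal(x.size)

    # Least-squares fit of a degree-8 Chebyshev polynomial; the fitted
    # model has a continuously differentiable prediction surface, which
    # is the property the abstract highlights for surrogate use.
    coefs = np.polynomial.chebyshev.chebfit(x, y, deg=8)
    y_hat = np.polynomial.chebyshev.chebval(x, coefs)

    print("train RMSE:", np.sqrt(np.mean((y - y_hat) ** 2)))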
Calibrated Test-Time Guidance for Bayesian Inference
arXiv:2602.22428v1 Announce Type: new Abstract: Test-time guidance is a widely used mechanism for steering pretrained diffusion models toward outcomes specified by a reward function. Existing approaches, however, focus on maximizing reward rather than sampling from the true Bayesian posterior, leading...
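
For orientation, the textbook reward-guidance construction the abstract alludes to (not this paper's calibrated variant) tilts the prior by the reward and approximates the intractable guidance term with a denoised estimate:

    p(x_0 \mid r) \propto p_0(x_0)\, r(x_0), \qquad
    \nabla_{x_t} \log p_t(x_t \mid r)
      = \nabla_{x_t} \log p_t(x_t) + \nabla_{x_t} \log \mathbb{E}[\, r(x_0) \mid x_t \,]
      \approx \nabla_{x_t} \log p_t(x_t) + \nabla_{x_t} \log r(\hat{x}_0(x_t)).

The Jensen-gap approximation in the last step is one standard source of deviation from the exact posterior, plausibly the kind of miscalibration the abstract is pointing at.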
Beyond performance-wise Contribution Evaluation in Federated Learning
arXiv:2602.22470v1 Announce Type: new Abstract: Federated learning offers a privacy-friendly collaborative learning framework, yet its success, like any joint venture, hinges on the contributions of its participants. Existing client evaluation methods predominantly focus on model performance, such as accuracy or...
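
To make the "performance-wise" baseline concrete, here is a minimal leave-one-out contribution sketch; the train_and_eval callable is a hypothetical stand-in, and this accuracy-only scoring is precisely the kind of evaluation the abstract argues is insufficient on its own.

    def leave_one_out_contributions(clients, train_and_eval):
        """Score each client by the metric drop when it is excluded.

        `clients` is a list of client ids; `train_and_eval` is a hypothetical
        callable that federates over a subset of clients and returns a scalar
        metric such as validation accuracy.
        """
        full_score = train_and_eval(clients)
        return {
            c: full_score - train_and_eval([o for o in clients if o != c])
            for c in clients
        }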
Coarse-to-Fine Learning of Dynamic Causal Structures
arXiv:2602.22532v1 Announce Type: new Abstract: Learning the dynamic causal structure of time series is a challenging problem. Most existing approaches rely on distributional or structural invariance to uncover underlying causal dynamics, assuming stationary or partially stationary causality. However, these assumptions...
The legal protection of artificial intelligence-generated work: The argument for sui generis over copyright
Artificial intelligence (AI) is the simulation of human intelligence processes by machines, especially computer systems. Like other parts of society, the modern economy has become increasingly reliant on AI, suggesting its potentially great influence on innovation. Many...
Structured Prompt Language: Declarative Context Management for LLMs
arXiv:2602.21257v1 Announce Type: new Abstract: We present SPL (Structured Prompt Language), a declarative SQL-inspired language that treats large language models as generative knowledge bases and their context windows as constrained resources. SPL provides explicit WITH BUDGET/LIMIT token management, an automatic...
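
The WITH BUDGET/LIMIT keywords suggest explicit token accounting over context fragments. The sketch below is my own Python illustration of that budgeting idea, not SPL's actual syntax or semantics; the fragment scores and the tokenizer-backed length function are assumptions.

    def assemble_context(fragments, budget_tokens, count_tokens):
        """Greedily pack scored text fragments into a fixed token budget.

        `fragments` is a list of (score, text) pairs; `count_tokens` is any
        tokenizer-backed length function. Higher-scored fragments win slots
        until the budget is exhausted.
        """
        out, used = [], 0
        for score, text in sorted(fragments, key=lambda p: -p[0]):
            n = count_tokens(text)
            if used + n <= budget_tokens:
                out.append(text)
                used += n
        return "\n".join(out)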
Scalable Multilingual Multimodal Machine Translation with Speech-Text Fusion
arXiv:2602.21646v1 Announce Type: new Abstract: Multimodal Large Language Models (MLLMs) have achieved notable success in enhancing translation performance by integrating multimodal information. However, existing research primarily focuses on image-guided methods, whose applicability is constrained by the scarcity of multilingual image-text...
CxMP: A Linguistic Minimal-Pair Benchmark for Evaluating Constructional Understanding in Language Models
arXiv:2602.21978v1 Announce Type: new Abstract: Recent work has examined language models from a linguistic perspective to better understand how they acquire language. Most existing benchmarks focus on judging grammatical acceptability, whereas the ability to interpret meanings conveyed by grammatical forms...
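
Minimal-pair benchmarks of this kind are typically scored by comparing a model's total log-probability on the two variants. Below is a generic sketch with Hugging Face transformers, not the CxMP harness itself; the model choice and example pair are illustrative.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    MODEL = "gpt2"  # illustrative choice, not a model evaluated in the paper
    tok = AutoTokenizer.from_pretrained(MODEL)
    model = AutoModelForCausalLM.from_pretrained(MODEL).eval()

    def sentence_logprob(text):
        # Total log-probability of the token sequence under the causal LM.
        ids = tok(text, return_tensors="pt").input_ids
        with torch.no_grad():
            logits = model(ids).logits
        logps = torch.log_softmax(logits[:, :-1], dim=-1)
        return logps.gather(2, ids[:, 1:, None]).sum().item()

    good, bad = "She gave him the book.", "She gave he the book."
    print(sentence_logprob(good) > sentence_logprob(bad))  # expect True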
Archetypal Graph Generative Models: Explainable and Identifiable Communities via Anchor-Dominant Convex Hulls
arXiv:2602.21342v1 Announce Type: new Abstract: Representation learning has been essential for graph machine learning tasks such as link prediction, community detection, and network visualization. Despite recent advances in achieving high performance on these downstream tasks, little progress has been made...
Generative Bayesian Computation as a Scalable Alternative to Gaussian Process Surrogates
arXiv:2602.21408v1 Announce Type: new Abstract: Gaussian process (GP) surrogates are the default tool for emulating expensive computer experiments, but cubic cost, stationarity assumptions, and Gaussian predictive distributions limit their reach. We propose Generative Bayesian Computation (GBC) via Implicit Quantile Networks...
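
An implicit quantile network is trained with the pinball (quantile) loss at randomly drawn quantile levels. The PyTorch sketch below shows only that generic mechanism on toy data, under assumed shapes; it is not the paper's full GBC procedure.

    import torch
    import torch.nn as nn

    def pinball_loss(y, y_hat, tau):
        # Quantile (pinball) loss: penalizes under- and over-prediction
        # asymmetrically so the minimizer is the tau-quantile of y.
        diff = y - y_hat
        return torch.mean(torch.maximum(tau * diff, (tau - 1.0) * diff))

    # Tiny implicit quantile regressor: conditions on both x and tau.
    net = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)

    x = torch.randn(256, 1)
    y = x + 0.5 * torch.randn(256, 1)      # illustrative noisy data
    for _ in range(200):
        tau = torch.rand(256, 1)           # random quantile levels
        loss = pinball_loss(y, net(torch.cat([x, tau], dim=1)), tau)
        opt.zero_grad()
        loss.backward()
        opt.step()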
D-Flow SGLD: Source-Space Posterior Sampling for Scientific Inverse Problems with Flow Matching
arXiv:2602.21469v1 Announce Type: new Abstract: Data assimilation and scientific inverse problems require reconstructing high-dimensional physical states from sparse and noisy observations, ideally with uncertainty-aware posterior samples that remain faithful to learned priors and governing physics. While training-free conditional generation is...
Semantic Novelty at Scale: Narrative Shape Taxonomy and Readership Prediction in 28,606 Books
arXiv:2602.20647v1 Announce Type: new Abstract: I introduce semantic novelty--cosine distance between each paragraph's sentence embedding and the running centroid of all preceding paragraphs--as an information-theoretic measure of narrative structure at corpus scale. Applying it to 28,606 books in PG19 (pre-1920...
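
The measure as defined is easy to state in code: each paragraph's embedding is compared against the running mean of all preceding ones. A minimal NumPy sketch, with embeddings assumed precomputed by any sentence encoder:

    import numpy as np

    def semantic_novelty(embeddings):
        """Cosine distance of each paragraph embedding to the running
        centroid of all preceding paragraphs. `embeddings` is (n, d);
        the first paragraph has no predecessors and gets no score."""
        scores = []
        centroid = np.zeros(embeddings.shape[1])
        for i, e in enumerate(embeddings):
            if i > 0:
                c = centroid / i  # mean of the first i embeddings
                cos = e @ c / (np.linalg.norm(e) * np.linalg.norm(c) + 1e-12)
                scores.append(1.0 - cos)
            centroid += e
        return np.array(scores)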
Blackbird Language Matrices: A Framework to Investigate the Linguistic Competence of Language Models
arXiv:2602.20966v1 Announce Type: new Abstract: This article describes a novel language task, the Blackbird Language Matrices (BLM) task, inspired by intelligence tests, and illustrates the BLM datasets, their construction and benchmarking, and targeted experiments on chunking and systematicity. BLMs are...
On Data Engineering for Scaling LLM Terminal Capabilities
arXiv:2602.21193v1 Announce Type: new Abstract: Despite rapid recent progress in the terminal capabilities of large language models, the training data strategies behind state-of-the-art terminal agents remain largely undisclosed. We address this gap through a systematic study of data engineering practices...
Protein Language Models Diverge from Natural Language: Comparative Analysis and Improved Inference
arXiv:2602.20449v1 Announce Type: cross Abstract: Modern Protein Language Models (PLMs) apply transformer-based model architectures from natural language processing to biological sequences, predicting a variety of protein functions and properties. However, protein language has key differences from natural language, such as...
Learning to Solve Complex Problems via Dataset Decomposition
arXiv:2602.20296v1 Announce Type: new Abstract: Curriculum learning is a class of training strategies that orders the data a model is exposed to by difficulty, progressing gradually from simpler to more complex examples. This research explores a reverse curriculum generation approach that...
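
A standard curriculum of the kind described sorts examples by a difficulty score and feeds them to the model in stages. The sketch below shows that schedule, with a flag for the hard-to-easy (reverse) direction the abstract explores; the difficulty function itself is an assumption, e.g. input length or a model-based loss.

    def curriculum_stages(examples, difficulty, n_stages=3, reverse=False):
        """Split `examples` into training stages ordered by `difficulty`.

        `difficulty` is any scoring function (an assumption here). Set
        reverse=True for a hard-to-easy (reverse) curriculum.
        """
        ordered = sorted(examples, key=difficulty, reverse=reverse)
        k = max(1, len(ordered) // n_stages)
        return [ordered[i:i + k] for i in range(0, len(ordered), k)]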
Hierarchical Molecular Representation Learning via Fragment-Based Self-Supervised Embedding Prediction
arXiv:2602.20344v1 Announce Type: new Abstract: Graph self-supervised learning (GSSL) has demonstrated strong potential for generating expressive graph embeddings without the need for human annotations, making it particularly valuable in domains with high labeling costs such as molecular graph analysis. However,...
Three Concrete Challenges and Two Hopes for the Safety of Unsupervised Elicitation
arXiv:2602.20400v1 Announce Type: new Abstract: To steer language models towards truthful outputs on tasks which are beyond human capability, previous work has suggested training models on easy tasks to steer them on harder ones (easy-to-hard generalization), or using unsupervised training...
$\kappa$-Explorer: A Unified Framework for Active Model Estimation in MDPs
arXiv:2602.20404v1 Announce Type: new Abstract: In tabular Markov decision processes (MDPs) with perfect state observability, each trajectory provides active samples from the transition distributions conditioned on state-action pairs. Consequently, accurate model estimation depends on how the exploration policy allocates visitation...
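
In a tabular MDP the model estimate in question is simply the empirical transition frequency per state-action pair, so estimation accuracy hinges on how the exploration policy spreads visitation counts. A minimal NumPy sketch, with the exploration policy itself left abstract:

    import numpy as np

    def empirical_model(transitions, n_states, n_actions):
        """Maximum-likelihood transition model from observed (s, a, s') triples.

        Returns P_hat of shape (S, A, S); rows for unvisited (s, a) pairs
        stay all-zero so unexplored regions remain visible.
        """
        counts = np.zeros((n_states, n_actions, n_states))
        for s, a, s_next in transitions:
            counts[s, a, s_next] += 1
        visits = counts.sum(axis=-1, keepdims=True)
        return np.divide(counts, visits, out=np.zeros_like(counts),
                         where=visits > 0)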
Benchmarking GNN Models on Molecular Regression Tasks with CKA-Based Representation Analysis
arXiv:2602.20573v1 Announce Type: new Abstract: Molecules are commonly represented as SMILES strings, which can be readily converted to fixed-size molecular fingerprints. These fingerprints serve as feature vectors to train ML/DL models for molecular property prediction tasks in the field of...
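
The representation analysis named in the title rests on centered kernel alignment; the linear form is a two-line computation. A sketch of the standard formula, not the paper's code:

    import numpy as np

    def linear_cka(X, Y):
        """Linear CKA between representation matrices of shape (n, d1), (n, d2).

        Both are column-centered first; the result lies in [0, 1], with 1
        meaning the representations agree up to rotation and isotropic scaling.
        """
        X = X - X.mean(axis=0)
        Y = Y - Y.mean(axis=0)
        num = np.linalg.norm(X.T @ Y, "fro") ** 2
        den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
        return num / den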
Liability for damages caused by artificial intelligence
TriTopic: Tri-Modal Graph-Based Topic Modeling with Iterative Refinement and Archetypes
arXiv:2602.19079v1 Announce Type: new Abstract: Topic modeling extracts latent themes from large text collections, but leading approaches like BERTopic face critical limitations: stochastic instability, loss of lexical precision ("Embedding Blur"), and reliance on a single data perspective. We present TriTopic,...
Anatomy of Unlearning: The Dual Impact of Fact Salience and Model Fine-Tuning
arXiv:2602.19612v1 Announce Type: new Abstract: Machine Unlearning (MU) enables Large Language Models (LLMs) to remove unsafe or outdated information. However, existing work assumes that all facts are equally forgettable and largely ignores whether the forgotten knowledge originates from pretraining or...
The Geometry of Multi-Task Grokking: Transverse Instability, Superposition, and Weight Decay Phase Structure
arXiv:2602.18523v1 Announce Type: new Abstract: Grokking -- the abrupt transition from memorization to generalization long after near-zero training loss -- has been studied mainly in single-task settings. We extend geometric analysis to multi-task modular arithmetic, training shared-trunk Transformers on dual-task...
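
Dual-task modular arithmetic training data of the kind described can be generated in a few lines. In this sketch the modulus and the task-token encoding are illustrative assumptions, not the paper's exact setup:

    import itertools

    P = 97  # illustrative prime modulus

    def dual_task_dataset(p=P):
        """All (task, a, b) -> answer examples for two modular tasks.

        Task 0 is addition mod p, task 1 is multiplication mod p; a
        shared-trunk model sees the task id as an extra input token.
        """
        data = []
        for a, b in itertools.product(range(p), repeat=2):
            data.append(((0, a, b), (a + b) % p))
            data.append(((1, a, b), (a * b) % p))
        return data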
Diagnosing LLM Reranker Behavior Under Fixed Evidence Pools
arXiv:2602.18613v1 Announce Type: new Abstract: Standard reranking evaluations study how a reranker orders candidates returned by an upstream retriever. This setup couples ranking behavior with retrieval quality, so differences in output cannot be attributed to the ranking policy alone. We...
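
With a fixed evidence pool, reranker output can be scored directly against graded labels, for instance with nDCG over the pool. A minimal sketch, with the ranking and relevance labels as assumed inputs:

    import math

    def ndcg(ranked_ids, relevance, k=10):
        """nDCG@k for one query over a fixed candidate pool.

        `ranked_ids` is the reranker's output order; `relevance` maps each
        candidate id in the pool to a graded label (0 = irrelevant).
        """
        def dcg(ids):
            return sum(relevance.get(d, 0) / math.log2(i + 2)
                       for i, d in enumerate(ids[:k]))
        ideal = sorted(relevance, key=relevance.get, reverse=True)
        best = dcg(ideal)
        return dcg(ranked_ids) / best if best > 0 else 0.0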