Decoder-based Sense Knowledge Distillation
arXiv:2602.22351v1 Announce Type: new Abstract: Large language models (LLMs) learn contextual embeddings that capture rich semantic information, yet they often overlook structured lexical knowledge such as word senses and relationships. Prior work has shown that incorporating sense dictionaries can improve...
Scaling In, Not Up? Testing Thick Citation Context Analysis with GPT-5 and Fragile Prompts
arXiv:2602.22359v1 Announce Type: new Abstract: This paper tests whether large language models (LLMs) can support interpretative citation context analysis (CCA) by scaling in thick, text-grounded readings of a single hard case rather than scaling up typological labels. It foregrounds prompt-sensitivity...
Causality $\neq$ Invariance: Function and Concept Vectors in LLMs
arXiv:2602.22424v1 Announce Type: new Abstract: Do large language models (LLMs) represent concepts abstractly, i.e., independent of input format? We revisit Function Vectors (FVs), compact representations of in-context learning (ICL) tasks that causally drive task performance. Across multiple LLMs, we show...
A Fusion of context-aware based BanglaBERT and Two-Layer Stacked LSTM Framework for Multi-Label Cyberbullying Detection
arXiv:2602.22449v1 Announce Type: new Abstract: Cyberbullying has become a serious and growing concern in today's virtual world. Left unchecked, it can have adverse consequences for social and mental health. Researchers have explored various types of cyberbullying, but most approaches...
Bridging Latent Reasoning and Target-Language Generation via Retrieval-Transition Heads
arXiv:2602.22453v1 Announce Type: new Abstract: Recent work has identified a subset of attention heads in Transformers as retrieval heads, which are responsible for retrieving information from the context. In this work, we first investigate retrieval heads in multilingual contexts. In...
Mind the Gap in Cultural Alignment: Task-Aware Culture Management for Large Language Models
arXiv:2602.22475v1 Announce Type: new Abstract: Large language models (LLMs) are increasingly deployed in culturally sensitive real-world tasks. However, existing cultural alignment approaches fail to align LLMs' broad cultural values with the specific goals of downstream tasks and suffer from cross-culture...
Iterative Prompt Refinement for Dyslexia-Friendly Text Summarization Using GPT-4o
arXiv:2602.22524v1 Announce Type: new Abstract: Dyslexia affects approximately 10% of the global population and presents persistent challenges in reading fluency and text comprehension. While existing assistive technologies address visual presentation, linguistic complexity remains a substantial barrier to equitable access. This...
Search-P1: Path-Centric Reward Shaping for Stable and Efficient Agentic RAG Training
arXiv:2602.22576v1 Announce Type: new Abstract: Retrieval-Augmented Generation (RAG) enhances large language models (LLMs) by incorporating external knowledge, yet traditional single-round retrieval struggles with complex multi-step reasoning. Agentic RAG addresses this by enabling LLMs to dynamically decide when and what to...
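The abstract above describes agentic RAG, in which the model itself decides when and what to retrieve across multiple rounds. As a minimal sketch of that control loop (all function names and the stub policy are hypothetical, not the paper's method):

```python
# Minimal agentic-RAG control loop (hypothetical interfaces).
# The "LLM" is a stub policy that either issues another search
# query or answers from the evidence gathered so far.

def toy_policy(question, evidence):
    """Stand-in for an LLM: keep retrieving until two snippets exist."""
    if len(evidence) < 2:
        return ("SEARCH", f"{question} detail {len(evidence) + 1}")
    return ("ANSWER", " ".join(evidence))

def toy_retriever(query):
    """Stand-in for a search tool."""
    return f"[snippet for: {query}]"

def agentic_rag(question, policy, retriever, max_rounds=5):
    evidence = []
    for _ in range(max_rounds):
        action, payload = policy(question, evidence)
        if action == "ANSWER":
            return payload, len(evidence)
        evidence.append(retriever(payload))
    return " ".join(evidence), len(evidence)

answer, rounds = agentic_rag("why is the sky blue", toy_policy, toy_retriever)
```

The interesting design question, which the paper's reward shaping targets, is how to train the policy so the retrieval path itself (not just the final answer) is rewarded.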
Enhancing Persuasive Dialogue Agents by Synthesizing Cross-Disciplinary Communication Strategies
arXiv:2602.22696v1 Announce Type: new Abstract: Current approaches to developing persuasive dialogue agents often rely on a limited set of predefined persuasive strategies that fail to capture the complexity of real-world interactions. We applied a cross-disciplinary approach to develop a framework...
The Poly Problem in Zoning: Redefining “Family” for a Changing Society (Minnesota Law Review)
By ARIC SHORT & TANYA PIERCE. Full Text. Single-family zoning has long dictated not only where people may live but also with whom. Although extensively critiqued for perpetuating racial and economic exclusion, these laws also privilege relationships defined by blood,...
The Innocence Trap (Minnesota Law Review)
By CAITLIN GLASS & JULIAN GREEN. Full Text. What makes a conviction wrongful? Developments in DNA science have led to a wave of exonerations over the past thirty years, revealing sources of error in the criminal legal process. Innocence organizations...
The Skidmore Compromise: Interpreting Skidmore as a Tiebreaker to Preserve Judicial Wisdom in the Era of Loper Bright (Minnesota Law Review)
By MITCHELL ZAIC. Full Text. 'Law must be stable, and yet it cannot stand still.' Here is the great antinomy confronting us at every turn. Rest and motion, unrelieved and unchecked, are equally destructive. The law, like human kind, if...
The Crisis in U.S. Cancer Care: Law, Markets, and Privatization (Minnesota Law Review)
By DANIEL G. AARON. Full Text. Cancer is surging among youth and young adults in the United States, yet, instead of public regulation addressing its root causes, we have outsourced the management of cancer to the private sector. A suite...
Reinforcing Real-world Service Agents: Balancing Utility and Cost in Task-oriented Dialogue
arXiv:2602.22697v1 Announce Type: new Abstract: The rapid evolution of Large Language Models (LLMs) has accelerated the transition from conversational chatbots to general agents. However, effectively balancing empathetic communication with budget-aware decision-making remains an open challenge. Since existing methods fail to...
AuditBench: Evaluating Alignment Auditing Techniques on Models with Hidden Behaviors
arXiv:2602.22755v1 Announce Type: new Abstract: We introduce AuditBench, an alignment auditing benchmark. AuditBench consists of 56 language models with implanted hidden behaviors. Each model has one of 14 concerning behaviors--such as sycophantic deference, opposition to AI regulation, or secret geopolitical...
Towards Better RL Training Data Utilization via Second-Order Rollout
arXiv:2602.22765v1 Announce Type: new Abstract: Reinforcement Learning (RL) has empowered Large Language Models (LLMs) with strong reasoning capabilities, but vanilla RL mainly focuses on generation capability improvement by training with only first-order rollout (generating multiple responses for a question), and...
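For context on the baseline this abstract critiques: "first-order rollout" means sampling several responses per question and scoring each independently, with advantages computed against the group mean. A loose illustration of that vanilla scheme (the paper's second-order extension is not shown here; the sampler and reward are toy stand-ins):

```python
import random

def first_order_rollout(question, sample_fn, reward_fn, k=4):
    """Vanilla scheme: sample k responses for one question, score
    each independently, and center rewards on the group baseline."""
    responses = [sample_fn(question) for _ in range(k)]
    rewards = [reward_fn(r) for r in responses]
    baseline = sum(rewards) / len(rewards)
    advantages = [r - baseline for r in rewards]
    return responses, advantages

random.seed(0)
sample = lambda q: f"{q}-ans-{random.randint(0, 9)}"   # toy sampler
reward = lambda r: float(r.endswith(("6", "8")))       # toy verifier
resps, advs = first_order_rollout("q1", sample, reward)
```

By construction the advantages sum to zero; the abstract's point is that this uses each question's rollouts only for generation-side training signal.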
Probing for Knowledge Attribution in Large Language Models
arXiv:2602.22787v1 Announce Type: new Abstract: Large language models (LLMs) often generate fluent but unfounded claims, or hallucinations, which fall into two types: (i) faithfulness violations - misusing user context - and (ii) factuality violations - errors from internal knowledge. Proper...
Test-Time Scaling with Diffusion Language Models via Reward-Guided Stitching
arXiv:2602.22871v1 Announce Type: new Abstract: Reasoning with large language models often benefits from generating multiple chains-of-thought, but existing aggregation strategies are typically trajectory-level (e.g., selecting the best trace or voting on the final answer), discarding useful intermediate work from partial...
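The contrast drawn above is between trajectory-level aggregation (keep one whole chain, or vote on final answers) and reusing intermediate work from partial chains. One crude way to picture step-level stitching, purely as an assumption about the idea and not the paper's algorithm, is to pick the highest-reward step at each reasoning depth across all candidate chains:

```python
def stitch_chains(chains, step_reward):
    """Instead of keeping one whole chain, pick the best-scoring
    step at every depth across all (possibly partial) chains."""
    depth = max(len(c) for c in chains)
    stitched = []
    for d in range(depth):
        candidates = [c[d] for c in chains if d < len(c)]
        stitched.append(max(candidates, key=step_reward))
    return stitched

# Three sampled chains of unequal length; a toy step-level reward.
chains = [["a1", "b1", "c1"], ["a2", "b2"], ["a3", "b3", "c3"]]
reward = lambda step: {"a2": 3, "b1": 5, "c3": 2}.get(step, 1)
best = stitch_chains(chains, reward)
```

Note the stitched trace mixes steps from all three chains, which trajectory-level selection would discard.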
Where Vision Becomes Text: Locating the OCR Routing Bottleneck in Vision-Language Models
arXiv:2602.22918v1 Announce Type: new Abstract: Vision-language models (VLMs) can read text from images, but where does this optical character recognition (OCR) information enter the language processing stream? We investigate the OCR routing mechanism across three architecture families (Qwen3-VL, Phi-4, InternVL3.5)...
Quantity Convergence, Quality Divergence: Disentangling Fluency and Accuracy in L2 Mandarin Prosody
arXiv:2602.23071v1 Announce Type: new Abstract: While second language (L2) learners may acquire target syntactic word order, mapping this syntax onto appropriate prosodic structures remains a persistent challenge. This study investigates the fossilization and stability of the L2 syntax-prosody interface by...
CiteLLM: An Agentic Platform for Trustworthy Scientific Reference Discovery
arXiv:2602.23075v1 Announce Type: new Abstract: Large language models (LLMs) have created new opportunities to enhance the efficiency of scholarly activities; however, challenges persist in the ethical deployment of AI assistance, including (1) the trustworthiness of AI-generated content, (2) preservation of...
Assessing Deanonymization Risks with Stylometry-Assisted LLM Agent
arXiv:2602.23079v1 Announce Type: new Abstract: The rapid advancement of large language models (LLMs) has enabled powerful authorship inference capabilities, raising growing concerns about unintended deanonymization risks in textual data such as news articles. In this work, we introduce an LLM...
MTRAG-UN: A Benchmark for Open Challenges in Multi-Turn RAG Conversations
arXiv:2602.23184v1 Announce Type: new Abstract: We present MTRAG-UN, a benchmark for exploring open challenges in multi-turn retrieval augmented generation, a popular use of large language models. We release a benchmark of 666 tasks containing over 2,800 conversation turns across 6...
Discourse-Aware Dual-Track Streaming Response for Low-Latency Spoken Dialogue Systems
arXiv:2602.23266v1 Announce Type: new Abstract: Achieving human-like responsiveness is a critical yet challenging goal for cascaded spoken dialogue systems. Conventional ASR-LLM-TTS pipelines follow a strictly sequential paradigm, requiring complete transcription and full reasoning before speech synthesis can begin, which results...
A Mixture-of-Experts Model for Multimodal Emotion Recognition in Conversations
arXiv:2602.23300v1 Announce Type: new Abstract: Emotion Recognition in Conversations (ERC) presents unique challenges, requiring models to capture the temporal flow of multi-turn dialogues and to effectively integrate cues from multiple modalities. We propose Mixture of Speech-Text Experts for Recognition of...
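The mixture-of-experts framing in this abstract amounts to gating per-modality expert predictions and fusing them. A self-contained sketch of that fusion step (the gating and expert scores are invented here for illustration, not taken from the paper):

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def moe_fuse(expert_logits, gate_scores):
    """Weighted mixture of per-modality expert predictions.
    expert_logits: {modality: per-class scores};
    gate_scores: {modality: scalar gating score}."""
    gates = softmax([gate_scores[m] for m in expert_logits])
    n_classes = len(next(iter(expert_logits.values())))
    fused = [0.0] * n_classes
    for g, logits in zip(gates, expert_logits.values()):
        for i, v in enumerate(logits):
            fused[i] += g * v
    return fused

# Toy 2-class ERC example: the text expert is gated up for this turn.
experts = {"speech": [2.0, 0.5], "text": [0.0, 3.0]}
gates = {"speech": 0.2, "text": 1.5}
scores = moe_fuse(experts, gates)
label = max(range(len(scores)), key=lambda i: scores[i])
```

In a real system the gate would itself be a learned function of the dialogue context, which is where the temporal modeling the abstract mentions comes in.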
Scale Can't Overcome Pragmatics: The Impact of Reporting Bias on Vision-Language Reasoning
arXiv:2602.23351v1 Announce Type: new Abstract: The lack of reasoning capabilities in Vision-Language Models (VLMs) has remained at the forefront of research discourse. We posit that this behavior stems from a reporting bias in their training data. That is, how people...
Enriching Taxonomies Using Large Language Models
arXiv:2602.22213v1 Announce Type: cross Abstract: Taxonomies play a vital role in structuring and categorizing information across domains. However, many existing taxonomies suffer from limited coverage and outdated or ambiguous nodes, reducing their effectiveness in knowledge retrieval. To address this, we...
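At its simplest, the enrichment task this abstract describes is: given a candidate concept, attach it under the existing node it belongs to. A minimal sketch, with a crude character-overlap stand-in for what would really be an LLM relatedness judgment (all names here are hypothetical):

```python
def enrich_taxonomy(taxonomy, candidate, similarity):
    """Attach a new concept under the best-matching existing node.
    taxonomy: {parent: [children]}; similarity: stand-in for an
    LLM-based relatedness score between two concept names."""
    best_parent = max(taxonomy, key=lambda node: similarity(node, candidate))
    taxonomy[best_parent].append(candidate)
    return best_parent

tax = {"animal": ["dog", "cat"], "vehicle": ["car"]}
sim = lambda a, b: len(set(a) & set(b))  # crude character overlap
parent = enrich_taxonomy(tax, "jeep", sim)
```

The hard parts the abstract gestures at (ambiguous nodes, outdated coverage) live inside the `similarity` judgment and in deciding when a candidate warrants a new node rather than a new leaf.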
To Deceive is to Teach? Forging Perceptual Robustness via Adversarial Reinforcement Learning
arXiv:2602.22227v1 Announce Type: new Abstract: Despite their impressive capabilities, Multimodal Large Language Models (MLLMs) exhibit perceptual fragility when confronted with visually complex scenes. This weakness stems from a reliance on finite training datasets, which are prohibitively expensive to scale and...