LOW Academic United States

Bidirectional Curriculum Generation: A Multi-Agent Framework for Data-Efficient Mathematical Reasoning

arXiv:2603.05120v1 Announce Type: new Abstract: Enhancing mathematical reasoning in Large Language Models typically demands massive datasets, yet data efficiency remains a critical bottleneck. While Curriculum Learning attempts to structure this process, standard unidirectional approaches (simple-to-complex) suffer from inefficient sample utilization:...

LOW Academic United Kingdom

Unpacking Human Preference for LLMs: Demographically Aware Evaluation with the HUMAINE Framework

arXiv:2603.04409v1 Announce Type: new Abstract: The evaluation of large language models faces significant challenges. Technical benchmarks often lack real-world relevance, while existing human preference evaluations suffer from unrepresentative sampling, superficial assessment depth, and single-metric reductionism. To address these issues, we...

LOW Academic European Union

Multiclass Hate Speech Detection with RoBERTa-OTA: Integrating Transformer Attention and Graph Convolutional Networks

arXiv:2603.04414v1 Announce Type: new Abstract: Multiclass hate speech detection across demographic categories remains computationally challenging due to implicit targeting strategies and linguistic variability in social media content. Existing approaches rely solely on learned representations from training data, without explicitly incorporating...

LOW Academic International

The Thinking Boundary: Quantifying Reasoning Suitability of Multimodal Tasks via Dual Tuning

arXiv:2603.04415v1 Announce Type: new Abstract: While reasoning-enhanced Large Language Models (LLMs) have demonstrated remarkable advances in complex tasks such as mathematics and coding, their effectiveness across universal multimodal scenarios remains uncertain. The trend of releasing parallel "Instruct" and "Thinking" models...

LOW Academic International

Same Input, Different Scores: A Multi-Model Study on the Inconsistency of LLM Judges

arXiv:2603.04417v1 Announce Type: new Abstract: Large language models are increasingly used as automated evaluators in research and enterprise settings, a practice known as LLM-as-a-judge. While prior work has examined accuracy, bias, and alignment with human preferences, far less attention has...

LOW Academic European Union

Generating Realistic, Protocol-Compliant Maritime Radio Dialogues using Self-Instruct and Low-Rank Adaptation

arXiv:2603.04423v1 Announce Type: new Abstract: VHF radio miscommunication remains a major safety risk in maritime operations, with human factors accounting for over 58% of recorded incidents in Europe between 2014 and 2023. Despite decades of operational use, VHF radio communications...

LOW Academic International

A unified foundational framework for knowledge injection and evaluation of Large Language Models in Combustion Science

arXiv:2603.04452v1 Announce Type: new Abstract: To advance foundation Large Language Models (LLMs) for combustion science, this study presents the first end-to-end framework for developing domain-specialized models for the combustion community. The framework comprises an AI-ready multimodal knowledge base at the...

LOW Academic International

Induced Numerical Instability: Hidden Costs in Multimodal Large Language Models

arXiv:2603.04453v1 Announce Type: new Abstract: The use of multimodal large language models has become widespread, and as such the study of these models and their failure points has become of utmost importance. We study a novel mode of failure that...

LOW Academic International

Query Disambiguation via Answer-Free Context: Doubling Performance on Humanity's Last Exam

arXiv:2603.04454v1 Announce Type: new Abstract: How carefully and unambiguously a question is phrased has a profound impact on the quality of the response, for Language Models (LMs) as well as people. While model capabilities continue to advance, the interplay between...

LOW Academic International

From Static Inference to Dynamic Interaction: Navigating the Landscape of Streaming Large Language Models

arXiv:2603.04592v1 Announce Type: new Abstract: Standard Large Language Models (LLMs) are predominantly designed for static inference with pre-defined inputs, which limits their applicability in dynamic, real-time scenarios. To address this gap, the streaming LLM paradigm has emerged. However, existing definitions...

LOW Academic International

Non-Zipfian Distribution of Stopwords and Subset Selection Models

arXiv:2603.04691v1 Announce Type: new Abstract: Stopwords are words that carry little information about the content or meaning of a text. Most stopwords are function words, but they can also be common verbs, adjectives, and adverbs. In contrast to...

LOW Academic United States

AI-Assisted Moot Courts: Simulating Justice-Specific Questioning in Oral Arguments

arXiv:2603.04718v1 Announce Type: new Abstract: In oral arguments, judges probe attorneys with questions about the factual record, legal claims, and the strength of their arguments. To prepare for this questioning, both law schools and practicing attorneys rely on moot courts:...

LOW Academic International

IF-RewardBench: Benchmarking Judge Models for Instruction-Following Evaluation

arXiv:2603.04738v1 Announce Type: new Abstract: Instruction-following is a foundational capability of large language models (LLMs), with its improvement hinging on scalable and accurate feedback from judge models. However, the reliability of current judge models in instruction-following remains underexplored due to...

LOW Academic International

Beyond the Context Window: A Cost-Performance Analysis of Fact-Based Memory vs. Long-Context LLMs for Persistent Agents

arXiv:2603.04814v1 Announce Type: new Abstract: Persistent conversational AI systems face a choice between passing full conversation histories to a long-context large language model (LLM) and maintaining a dedicated memory system that extracts and retrieves structured facts. We compare a fact-based...

LOW Academic International

SinhaLegal: A Benchmark Corpus for Information Extraction and Analysis in Sinhala Legislative Texts

arXiv:2603.04854v1 Announce Type: new Abstract: SinhaLegal introduces a Sinhala legislative text corpus containing approximately 2 million words across 1,206 legal documents. The dataset includes two types of legal documents: 1,065 Acts dated from 1981 to 2014 and 141 Bills from...

LOW Academic European Union

HACHIMI: Scalable and Controllable Student Persona Generation via Orchestrated Agents

arXiv:2603.04855v1 Announce Type: new Abstract: Student Personas (SPs) are emerging as infrastructure for educational LLMs, yet prior work often relies on ad-hoc prompting or hand-crafted profiles with limited control over educational theory and population distributions. We formalize this as Theory-Aligned...

LOW Academic International

AILS-NTUA at SemEval-2026 Task 10: Agentic LLMs for Psycholinguistic Marker Extraction and Conspiracy Endorsement Detection

arXiv:2603.04921v1 Announce Type: new Abstract: This paper presents a novel agentic LLM pipeline for SemEval-2026 Task 10 that jointly extracts psycholinguistic conspiracy markers and detects conspiracy endorsement. Unlike traditional classifiers that conflate semantic reasoning with structural localization, our decoupled design...

LOW Academic International

When Weak LLMs Speak with Confidence, Preference Alignment Gets Stronger

arXiv:2603.04968v1 Announce Type: new Abstract: Preference alignment is an essential step in adapting large language models (LLMs) to human values, but existing approaches typically depend on costly human annotations or large-scale API-based models. We explore whether a weak LLM can...

LOW Academic International

MPCEval: A Benchmark for Multi-Party Conversation Generation

arXiv:2603.04969v1 Announce Type: new Abstract: Multi-party conversation generation, such as smart reply and collaborative assistants, is an increasingly important capability of generative AI, yet its evaluation remains a critical bottleneck. Compared to two-party dialogue, multi-party settings introduce distinct challenges, including...

LOW Academic United States

FedEMA-Distill: Exponential Moving Average Guided Knowledge Distillation for Robust Federated Learning

arXiv:2603.04422v1 Announce Type: new Abstract: Federated learning (FL) often degrades when clients hold heterogeneous non-Independent and Identically Distributed (non-IID) data and when some clients behave adversarially, leading to client drift, slow convergence, and high communication overhead. This paper proposes FedEMA-Distill,...

LOW Academic International

Thin Keys, Full Values: Reducing KV Cache via Low-Dimensional Attention Selection

arXiv:2603.04427v1 Announce Type: new Abstract: Standard transformer attention uses identical dimensionality for queries, keys, and values ($d_q = d_k = d_v = d_{\mathrm{model}}$). Our insight is that these components serve fundamentally different roles, and this symmetry is unnecessary. Queries and...
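The core idea, low-dimensional queries and keys for attention selection while values keep full width, can be sketched in a few lines. This is a toy single-head example with invented sizes, not the paper's actual architecture:

```python
import numpy as np

# Hypothetical sketch: scores are computed in a reduced dimension d_k,
# while values stay at the full model width, so the cached K is smaller.
rng = np.random.default_rng(0)

d_model, d_k, seq = 64, 8, 5                     # d_k << d_model
x = rng.normal(size=(seq, d_model))              # token representations

W_q = rng.normal(size=(d_model, d_k)) / np.sqrt(d_model)
W_k = rng.normal(size=(d_model, d_k)) / np.sqrt(d_model)
W_v = rng.normal(size=(d_model, d_model)) / np.sqrt(d_model)

Q, K, V = x @ W_q, x @ W_k, x @ W_v              # K is seq x d_k: 8x thinner

scores = Q @ K.T / np.sqrt(d_k)                  # selection in low dimension
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys

out = weights @ V                                # full-dimensional values
print(out.shape)  # (5, 64)
```

Only K (and Q) shrink; the output retains full expressivity through V, which is what lets the KV cache for keys drop by d_model/d_k.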

LOW Academic United States

Agent Memory Below the Prompt: Persistent Q4 KV Cache for Multi-Agent LLM Inference on Edge Devices

arXiv:2603.04428v1 Announce Type: new Abstract: Multi-agent LLM systems on edge devices face a memory management problem: device RAM is too small to hold every agent's KV cache simultaneously. On Apple M4 Pro with 10.2 GB of cache budget, only 3...
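A persistent Q4 cache implies 4-bit storage of cached keys and values. A minimal symmetric per-row quantization sketch follows; the paper's exact scheme is not given in the snippet, so this is illustrative only:

```python
import numpy as np

# Illustrative Q4 (4-bit) symmetric quantization of a KV-cache block,
# with one scale per row; values map to the signed range [-7, 7].
def quantize_q4(x):
    scale = np.abs(x).max(axis=-1, keepdims=True) / 7.0
    q = np.clip(np.round(x / scale), -7, 7).astype(np.int8)
    return q, scale

def dequantize_q4(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
kv = rng.normal(size=(4, 16)).astype(np.float32)  # toy KV block

q, s = quantize_q4(kv)
kv_hat = dequantize_q4(q, s)

# ~8x smaller than fp32 storage, at a bounded per-element error of scale/2.
print(q.shape, float(np.abs(kv - kv_hat).max()))
```

Packing two 4-bit codes per byte (not shown) is what realizes the memory saving that lets more agents' caches stay resident in a fixed RAM budget.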

LOW Academic European Union

Flowers: A Warp Drive for Neural PDE Solvers

arXiv:2603.04430v1 Announce Type: new Abstract: We introduce Flowers, a neural architecture for learning PDE solution operators built entirely from multihead warps. Aside from pointwise channel mixing and a multiscale scaffold, Flowers use no Fourier multipliers, no dot-product attention, and no...

LOW Academic United Kingdom

ZorBA: Zeroth-order Federated Fine-tuning of LLMs with Heterogeneous Block Activation

arXiv:2603.04436v1 Announce Type: new Abstract: Federated fine-tuning of large language models (LLMs) enables collaborative tuning across distributed clients. However, due to the large size of LLMs, local updates in federated learning (FL) may incur substantial video random-access memory (VRAM) usage....

LOW Academic European Union

On Emergences of Non-Classical Statistical Characteristics in Classical Neural Networks

arXiv:2603.04451v1 Announce Type: new Abstract: Inspired by measurement incompatibility and Bell-family inequalities in quantum mechanics, we propose the Non-Classical Network (NCnet), a simple classical neural architecture that stably exhibits non-classical statistical behaviors under typical and interpretable experimental setups. We find...

LOW Academic International

VSPrefill: Vertical-Slash Sparse Attention with Lightweight Indexing for Long-Context Prefilling

arXiv:2603.04460v1 Announce Type: new Abstract: The quadratic complexity of self-attention during the prefill phase impedes long-context inference in large language models. Existing sparse attention methods face a trade-off among context adaptivity, sampling overhead, and fine-tuning costs. We propose VSPrefill, a...
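A "vertical-slash" sparsity pattern can be illustrated by building a causal mask from a few always-attended columns (verticals) plus a few diagonals (slashes). The index choices here are invented for illustration and are not VSPrefill's actual selection procedure:

```python
import numpy as np

# Toy vertical-slash attention mask: attend to selected global columns
# and selected diagonal offsets, restricted to the causal lower triangle.
def vertical_slash_mask(seq, vertical_idx, slash_offsets):
    mask = np.zeros((seq, seq), dtype=bool)
    mask[:, vertical_idx] = True          # vertical lines (global tokens)
    for off in slash_offsets:             # slash lines (diagonal bands)
        i = np.arange(off, seq)
        mask[i, i - off] = True
    return np.tril(mask)                  # enforce causality

m = vertical_slash_mask(8, vertical_idx=[0, 1], slash_offsets=[0, 1])
print(m.sum(), "of", m.size, "entries attended")
```

Because the attended set grows roughly linearly in sequence length (a few columns plus a few diagonals), scoring only these entries sidesteps the quadratic prefill cost the abstract describes.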

LOW Academic European Union

MAD-SmaAt-GNet: A Multimodal Advection-Guided Neural Network for Precipitation Nowcasting

arXiv:2603.04461v1 Announce Type: new Abstract: Precipitation nowcasting (short-term forecasting) is still often performed using numerical solvers for physical equations, which are computationally expensive and make limited use of the large volumes of available weather data. Deep learning models have shown...

LOW Academic International

Understanding the Dynamics of Demonstration Conflict in In-Context Learning

arXiv:2603.04464v1 Announce Type: new Abstract: In-context learning enables large language models to perform novel tasks through few-shot demonstrations. However, demonstrations per se can naturally contain noise and conflicting examples, making this capability vulnerable. To understand how models process such conflicts,...

LOW Academic European Union

An LLM-Guided Query-Aware Inference System for GNN Models on Large Knowledge Graphs

arXiv:2603.04545v1 Announce Type: new Abstract: Efficient inference for graph neural networks (GNNs) on large knowledge graphs (KGs) is essential for many real-world applications. GNN inference queries are computationally expensive and vary in complexity, as each involves a different number of...

LOW Academic United States

Why Do Neural Networks Forget: A Study of Collapse in Continual Learning

arXiv:2603.04580v1 Announce Type: new Abstract: Catastrophic forgetting is a major problem in continual learning, and many approaches have been proposed to mitigate it. However, most of them are evaluated through task accuracy, which ignores the internal model structure. Recent research suggests...

Page 53 of 71

Impact Distribution

Critical 0
High 0
Medium 7
Low 2110