Immigration Law
LOW Academic United States

LLM-Driven Heuristic Synthesis for Industrial Process Control: Lessons from Hot Steel Rolling

arXiv:2603.20537v1 Announce Type: new Abstract: Industrial process control demands policies that are interpretable and auditable, requirements that black-box neural policies struggle to meet. We study an LLM-driven heuristic synthesis framework for hot steel rolling, in which a language model iteratively...

1 min 3 weeks, 4 days ago
ead
LOW Academic United States

The Library Theorem: How External Organization Governs Agentic Reasoning Capacity

arXiv:2603.21272v1 Announce Type: new Abstract: Externalized reasoning is already exploited by transformer-based agents through chain-of-thought, but structured retrieval -- indexing over one's own reasoning state -- remains underexplored. We formalize the transformer context window as an I/O page and prove...

1 min 3 weeks, 4 days ago
ead
LOW Academic United States

FinReflectKG -- HalluBench: GraphRAG Hallucination Benchmark for Financial Question Answering Systems

arXiv:2603.20252v1 Announce Type: new Abstract: As organizations increasingly integrate AI-powered question-answering systems into financial information systems for compliance, risk assessment, and decision support, ensuring the factual accuracy of AI-generated outputs becomes a critical engineering challenge. Current Knowledge Graph (KG)-augmented QA...

1 min 3 weeks, 4 days ago
ead
LOW Academic United States

ARYA: A Physics-Constrained Composable & Deterministic World Model Architecture

arXiv:2603.21340v1 Announce Type: new Abstract: This paper presents ARYA, a composable, physics-constrained, deterministic world model architecture built on five foundational principles: nano models, composability, causal reasoning, determinism, and architectural AI safety. We demonstrate that ARYA satisfies all canonical world model...

1 min 3 weeks, 4 days ago
ead
LOW Conference United States

Reaffirming our Code of Conduct

1 min 3 weeks, 4 days ago
removal
LOW Conference United States

NeurIPS 2026 Evaluations & Datasets FAQ

7 min 3 weeks, 4 days ago
ead
LOW Academic United States

RLVR Training of LLMs Does Not Improve Thinking Ability for General QA: Evaluation Method and a Simple Solution

arXiv:2603.20799v1 Announce Type: new Abstract: Reinforcement learning from verifiable rewards (RLVR) stimulates the thinking processes of large language models (LLMs), substantially enhancing their reasoning abilities on verifiable tasks. It is often assumed that similar gains should transfer to general question...

1 min 3 weeks, 4 days ago
ead
LOW News United States

Court appears ready to overturn state law allowing for late-arriving mail-in ballots

The Supreme Court on Monday appeared ready to overturn a Mississippi law that allows mail-in ballots to be counted as long as they are postmarked by, and then received within […]

1 min 3 weeks, 4 days ago
ead
LOW News United States

SCOTUStoday for Monday, March 23

Good morning, and welcome to the March argument session, which includes the argument on birthright citizenship on Wednesday, April 1. This Thursday, March 26, SCOTUSblog is teaming up with Briefly […]

1 min 3 weeks, 4 days ago
citizenship
LOW Academic United States

A Subgoal-driven Framework for Improving Long-Horizon LLM Agents

arXiv:2603.19685v1 Announce Type: new Abstract: Large language model (LLM)-based agents have emerged as powerful autonomous controllers for digital environments, including mobile interfaces, operating systems, and web browsers. Web navigation, for example, requires handling dynamic content and long sequences of actions,...

1 min 3 weeks, 5 days ago
ead
LOW Academic United States

Probing to Refine: Reinforcement Distillation of LLMs via Explanatory Inversion

arXiv:2603.19266v1 Announce Type: cross Abstract: Distilling robust reasoning capabilities from large language models (LLMs) into smaller, computationally efficient student models remains an unresolved challenge. Despite recent advances, distilled models frequently suffer from superficial pattern memorization and subpar generalization. To overcome...

1 min 3 weeks, 5 days ago
tps
LOW Academic United States

Improving Automatic Summarization of Radiology Reports through Mid-Training of Large Language Models

arXiv:2603.19275v1 Announce Type: cross Abstract: Automatic summarization of radiology reports is an essential application to reduce the burden on physicians. Previous studies have widely used the "pre-training, fine-tuning" strategy to adapt large language models (LLMs) for summarization. This study proposed...

1 min 3 weeks, 5 days ago
ead
LOW Academic United States

Joint Return and Risk Modeling with Deep Neural Networks for Portfolio Construction

arXiv:2603.19288v1 Announce Type: cross Abstract: Portfolio construction traditionally relies on separately estimating expected returns and covariance matrices using historical statistics, often leading to suboptimal allocation under time-varying market conditions. This paper proposes a joint return and risk modeling framework based...

1 min 3 weeks, 5 days ago
ead
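The abstract above contrasts its joint model with the classical two-step baseline: estimate expected returns and the covariance matrix separately from historical data, then solve for weights. That baseline (not the paper's method) can be sketched in a few lines; the synthetic return data and asset count here are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
# 250 days of synthetic daily returns for 3 assets (illustrative only)
returns = rng.normal(0.001, 0.02, size=(250, 3))

mu = returns.mean(axis=0)               # expected returns, estimated separately
sigma = np.cov(returns, rowvar=False)   # covariance, estimated separately

# Unconstrained mean-variance weights are proportional to sigma^{-1} mu;
# normalize to a fully-invested portfolio.
w = np.linalg.solve(sigma, mu)
w /= w.sum()
assert np.isclose(w.sum(), 1.0)
print(w)
```

Because mu and sigma are plug-in historical estimates, the resulting weights inherit their estimation error under time-varying markets, which is exactly the failure mode the paper's joint framework targets.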
LOW Academic United States

Scalable Prompt Routing via Fine-Grained Latent Task Discovery

arXiv:2603.19415v1 Announce Type: new Abstract: Prompt routing dynamically selects the most appropriate large language model from a pool of candidates for each query, optimizing performance while managing costs. As model pools scale to include dozens of frontier models with narrow...

1 min 3 weeks, 5 days ago
ead
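The core routing trade-off the abstract describes, picking a model that is good enough for the query at the lowest cost, can be sketched as a tiny cost-aware router. The model names, costs, and quality scores below are invented for illustration; a real router (as in the paper) would predict per-query quality rather than use fixed scores:

```python
# Hypothetical model pool: name -> relative cost and predicted quality.
POOL = {
    "small":  {"cost": 1.0,  "quality": 0.70},
    "medium": {"cost": 4.0,  "quality": 0.85},
    "large":  {"cost": 20.0, "quality": 0.95},
}

def route(required_quality):
    """Return the cheapest model whose predicted quality clears the bar."""
    ok = [(name, m["cost"]) for name, m in POOL.items()
          if m["quality"] >= required_quality]
    if not ok:
        return "large"  # fall back to the strongest model
    return min(ok, key=lambda t: t[1])[0]

assert route(0.60) == "small"   # easy query -> cheap model
assert route(0.90) == "large"   # hard query -> frontier model
print(route(0.80))
```

Scaling this to dozens of models is what makes per-query quality prediction (e.g. via the latent task discovery the title refers to) the hard part.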
LOW Academic United States

CLaRE-ty Amid Chaos: Quantifying Representational Entanglement to Predict Ripple Effects in LLM Editing

arXiv:2603.19297v1 Announce Type: new Abstract: The static knowledge representations of large language models (LLMs) inevitably become outdated or incorrect over time. While model-editing techniques offer a promising solution by modifying a model's factual associations, they often produce unpredictable ripple effects,...

1 min 3 weeks, 5 days ago
tps
LOW Academic United States

Scalable Cross-Facility Federated Learning for Scientific Foundation Models on Multiple Supercomputers

arXiv:2603.19544v1 Announce Type: new Abstract: Artificial Intelligence for scientific applications increasingly requires training large models on data that cannot be centralized due to privacy constraints, data sovereignty, or the sheer volume of data generated. Federated learning (FL) addresses this by...

1 min 3 weeks, 5 days ago
ead
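The federated setup the abstract motivates, training across facilities whose data cannot be centralized, is usually grounded in weighted parameter averaging in the style of FedAvg; the paper's cross-facility scheme may differ, so this is only a baseline sketch with made-up facility sizes:

```python
import numpy as np

def fedavg(updates, sizes):
    """Weighted average of locally trained parameters; weights are
    proportional to local dataset size. Only parameters are shared --
    raw data never leaves a facility."""
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(updates, sizes))

# Toy example: three facilities with different data volumes.
local = [np.array([1.0, 2.0]), np.array([3.0, 0.0]), np.array([0.0, 6.0])]
counts = [100, 300, 600]

global_model = fedavg(local, counts)
print(global_model)  # pulled toward the larger facilities
```

At supercomputer scale the interesting engineering is in communication and scheduling, but the aggregation contract above is what preserves data sovereignty.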
LOW News United States

Delve accused of misleading customers with ‘fake compliance’

An anonymous Substack post accuses compliance startup Delve of “falsely” convincing “hundreds of customers they were compliant” with privacy and security regulations.

1 min 3 weeks, 6 days ago
ead
LOW News United States

Oral argument live blog for Wednesday, April 1

On Wednesday, April 1, we will be live blogging as the court hears argument in Trump v. Barbara, on the constitutionality of President Donald Trump’s executive order on birthright citizenship. […]

1 min 4 weeks ago
citizenship
LOW Academic United States

The Validity Gap in Health AI Evaluation: A Cross-Sectional Analysis of Benchmark Composition

arXiv:2603.18294v1 Announce Type: new Abstract: Background: Clinical trials rely on transparent inclusion criteria to ensure generalizability. In contrast, benchmarks validating health-related large language models (LLMs) rarely characterize the "patient" or "query" populations they contain. Without defined composition, aggregate performance metrics...

1 min 4 weeks, 1 day ago
ead
LOW Academic United States

MemMA: Coordinating the Memory Cycle through Multi-Agent Reasoning and In-Situ Self-Evolution

arXiv:2603.18718v1 Announce Type: new Abstract: Memory-augmented LLM agents maintain external memory banks to support long-horizon interaction, yet most existing systems treat construction, retrieval, and utilization as isolated subroutines. This creates two coupled challenges: strategic blindness on the forward path of...

1 min 4 weeks, 1 day ago
tps
LOW Academic United States

Balanced Thinking: Improving Chain of Thought Training in Vision Language Models

arXiv:2603.18656v1 Announce Type: new Abstract: Multimodal reasoning in vision-language models (VLMs) typically relies on a two-stage process: supervised fine-tuning (SFT) and reinforcement learning (RL). In standard SFT, all tokens contribute equally to the loss, even though reasoning data are inherently...

1 min 4 weeks, 1 day ago
ead
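The abstract's starting observation, that standard SFT gives every token equal weight in the loss, is easy to make concrete. The general remedy class is a per-token weight on the cross-entropy; the logits, targets, and weights below are illustrative, not the paper's scheme:

```python
import numpy as np

def weighted_token_ce(logits, targets, weights):
    """Cross-entropy where each target token carries its own weight;
    uniform weights recover the standard SFT loss."""
    logits = logits - logits.max(axis=-1, keepdims=True)   # stable softmax
    logp = logits - np.log(np.exp(logits).sum(axis=-1, keepdims=True))
    tok_losses = -logp[np.arange(len(targets)), targets]
    return (weights * tok_losses).sum() / weights.sum()

# Toy sequence of 3 tokens over a 3-word vocabulary.
logits = np.array([[2.0, 0.1, -1.0], [0.3, 1.5, 0.2], [0.0, 0.0, 3.0]])
targets = np.array([0, 1, 2])

uniform = weighted_token_ce(logits, targets, np.ones(3))
# Up-weighting selected tokens (here, the last one) shifts the loss:
skewed = weighted_token_ce(logits, targets, np.array([0.5, 0.5, 2.0]))
print(uniform, skewed)
```

Which tokens deserve extra weight, e.g. reasoning-bearing tokens in chain-of-thought traces, is the design question such methods answer.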
LOW Academic United States

Reflection in the Dark: Exposing and Escaping the Black Box in Reflective Prompt Optimization

arXiv:2603.18388v1 Announce Type: new Abstract: Automatic prompt optimization (APO) has emerged as a powerful paradigm for improving LLM performance without manual prompt engineering. Reflective APO methods such as GEPA iteratively refine prompts by diagnosing failure cases, but the optimization process...

1 min 4 weeks, 1 day ago
ead
LOW Academic United States

AlignMamba-2: Enhancing Multimodal Fusion and Sentiment Analysis with Modality-Aware Mamba

arXiv:2603.18462v1 Announce Type: new Abstract: In the era of large-scale pre-trained models, effectively adapting general knowledge to specific affective computing tasks remains a challenge, particularly regarding computational efficiency and multimodal heterogeneity. While Transformer-based methods have excelled at modeling inter-modal dependencies,...

1 min 4 weeks, 1 day ago
ead
LOW Academic United States

Analysis Of Linguistic Stereotypes in Single and Multi-Agent Generative AI Architectures

arXiv:2603.18729v1 Announce Type: new Abstract: Many works in the literature show that LLM outputs exhibit discriminatory behaviour, triggering stereotype-based inferences based on the dialect in which the inputs are written. This bias has been shown to be particularly pronounced when...

1 min 4 weeks, 1 day ago
ead
LOW Academic United States

Consumer-to-Clinical Language Shifts in Ambient AI Draft Notes and Clinician-Finalized Documentation: A Multi-level Analysis

arXiv:2603.18327v1 Announce Type: new Abstract: Ambient AI generates draft clinical notes from patient-clinician conversations, often using lay or consumer-oriented phrasing to support patient understanding instead of standardized clinical terminology. How clinicians revise these drafts for professional documentation conventions remains unclear....

1 min 4 weeks, 1 day ago
ead
LOW Academic United States

Frayed RoPE and Long Inputs: A Geometric Perspective

arXiv:2603.18017v1 Announce Type: new Abstract: Rotary Positional Embedding (RoPE) is a widely adopted technique for encoding position in language models, which, while effective, causes performance breakdown when input length exceeds training length. Prior analyses assert (rightly) that long inputs cause...

1 min 4 weeks, 1 day ago
ead
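RoPE's defining property, useful context for the breakdown the abstract analyzes, is that it rotates query/key pairs by a position-dependent angle so attention scores depend only on relative offset. A minimal single-frequency sketch (real RoPE uses many frequencies across the head dimension; the frequency here is arbitrary):

```python
import numpy as np

def rope(vec2, pos, freq=0.1):
    """Apply the RoPE rotation for position `pos` to one 2-D q/k pair."""
    c, s = np.cos(pos * freq), np.sin(pos * freq)
    x, y = vec2
    return np.array([c * x - s * y, s * x + c * y])

q = np.array([1.0, 0.0])
k = np.array([0.5, 0.5])

# Dot products depend only on the relative offset (here 2),
# not on the absolute positions.
d1 = rope(q, 3) @ rope(k, 1)
d2 = rope(q, 10) @ rope(k, 8)
assert np.allclose(d1, d2)
```

The failure mode the paper examines arises when inference positions push these rotation angles outside the range the model ever saw during training.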
LOW Academic United States

Engineering Verifiable Modularity in Transformers via Per-Layer Supervision

arXiv:2603.18029v1 Announce Type: new Abstract: Transformers resist surgical control. Ablating an attention head identified as critical for capitalization produces minimal behavioral change because distributed redundancy compensates for damage. This Hydra effect renders interpretability illusory: we may identify components through correlation,...

1 min 4 weeks, 1 day ago
ead
LOW Academic United States

Path-Constrained Mixture-of-Experts

arXiv:2603.18297v1 Announce Type: new Abstract: Sparse Mixture-of-Experts (MoE) architectures enable efficient scaling by activating only a subset of parameters for each input. However, conventional MoE routing selects each layer's experts independently, creating N^L possible expert paths -- for N experts...

1 min 4 weeks, 1 day ago
ead
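The N^L path count the abstract cites follows directly from independence: if each of L layers picks one of N experts on its own, every combination is a distinct path. A quick check with toy sizes (the constraint shown, keeping only "monotone" paths, is an invented example of a path restriction, not the paper's rule):

```python
import itertools

N, L = 4, 3  # experts per layer, number of layers (illustrative sizes)

# Independent per-layer top-1 routing: the path space is the full
# Cartesian product of per-layer choices.
paths = list(itertools.product(range(N), repeat=L))
assert len(paths) == N ** L  # 4^3 = 64 distinct expert paths

# A path constraint shrinks this space; as a toy example, allow only
# paths whose expert indices never decrease across layers.
constrained = [p for p in paths if all(a <= b for a, b in zip(p, p[1:]))]
print(len(paths), len(constrained))
```

At realistic scales (say N=64 experts over L=32 layers) the unconstrained path space is astronomically large, which is why constraining it can matter.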
LOW Academic United States

Seeking Universal Shot Language Understanding Solutions

arXiv:2603.18448v1 Announce Type: new Abstract: Shot language understanding (SLU) is crucial for cinematic analysis but remains challenging due to its diverse cinematographic dimensions and subjective expert judgment. While vision-language models (VLMs) have shown strong ability in general visual understanding, recent...

1 min 4 weeks, 1 day ago
ead
LOW Academic United States

AIMER: Calibration-Free Task-Agnostic MoE Pruning

arXiv:2603.18492v1 Announce Type: new Abstract: Mixture-of-Experts (MoE) language models increase parameter capacity without proportional per-token compute, but the deployment still requires storing all experts, making expert pruning important for reducing memory and serving overhead. Existing task-agnostic expert pruning methods are...

1 min 4 weeks, 1 day ago
ead
Page 6 of 19

Impact Distribution

Critical 0
High 0
Medium 7
Low 2110