All Practice Areas

Labor & Employment


LOW News United States

What oral arguments and opinion authorships can actually tell us

Empirical SCOTUS is a recurring series by Adam Feldman that looks at Supreme Court data, primarily in the form of opinions and oral arguments, to provide insights into the justices’ decision making and [...]

1 min 1 week, 3 days ago
ada
LOW Law Review International

First Ideas

2 min 1 week, 3 days ago
termination
LOW Academic International

BWTA: Accurate and Efficient Binarized Transformer by Algorithm-Hardware Co-design

arXiv:2604.03957v1 Announce Type: new Abstract: Ultra low-bit quantization brings substantial efficiency gains to Transformer-based models, but accuracy degradation and limited GPU support hinder its wide adoption. In this paper, we analyze zero-point distortion in binarization and propose a Binary Weights...

1 min 1 week, 3 days ago
ada
LOW Academic International

VERT: Reliable LLM Judges for Radiology Report Evaluation

arXiv:2604.03376v1 Announce Type: new Abstract: Current literature on radiology report evaluation has focused primarily on designing LLM-based metrics and fine-tuning small models for chest X-rays. However, it remains unclear whether these approaches are robust when applied to reports from other...

1 min 1 week, 3 days ago
ada
LOW Academic European Union

DARE: Diffusion Large Language Models Alignment and Reinforcement Executor

arXiv:2604.04215v1 Announce Type: new Abstract: Diffusion large language models (dLLMs) are emerging as a compelling alternative to dominant autoregressive models, replacing strictly sequential token generation with iterative denoising and parallel generation dynamics. However, their open-source ecosystem remains fragmented across model...

1 min 1 week, 3 days ago
ada
LOW Academic International

CAWN: Continuous Acoustic Wave Networks for Autoregressive Language Modeling

arXiv:2604.04250v1 Announce Type: new Abstract: Modern Large Language Models (LLMs) rely on Transformer self-attention, which scales quadratically with sequence length. Recent linear-time alternatives, like State Space Models (SSMs), often suffer from signal degradation over extended contexts. We introduce the Continuous...

1 min 1 week, 3 days ago
ada
LOW Academic International

Scaling DPPs for RAG: Density Meets Diversity

arXiv:2604.03240v1 Announce Type: new Abstract: Retrieval-Augmented Generation (RAG) enhances Large Language Models (LLMs) by grounding generation in external knowledge, yielding relevant responses that are aligned with factual evidence and evolving corpora. Standard RAG pipelines construct context through relevance ranking, performing...

1 min 1 week, 3 days ago
ada
LOW Academic United States

Embedding Enhancement via Fine-Tuned Language Models for Learner-Item Cognitive Modeling

arXiv:2604.04088v1 Announce Type: new Abstract: Learner-item cognitive modeling plays a central role in web-based intelligent education systems by enabling cognitive diagnosis (CD) across diverse online educational scenarios. Although ID embedding remains the mainstream approach in cognitive modeling due...

1 min 1 week, 3 days ago
ada
LOW Academic International

CoALFake: Collaborative Active Learning with Human-LLM Co-Annotation for Cross-Domain Fake News Detection

arXiv:2604.04174v1 Announce Type: new Abstract: The proliferation of fake news across diverse domains highlights critical limitations in current detection systems, which often exhibit narrow domain specificity and poor generalization. Existing cross-domain approaches face two key challenges: (1) reliance on labelled...

1 min 1 week, 3 days ago
labor
LOW Academic International

LightThinker++: From Reasoning Compression to Memory Management

arXiv:2604.03679v1 Announce Type: new Abstract: Large language models (LLMs) excel at complex reasoning, yet their efficiency is limited by the surging cognitive overhead of long thought traces. In this paper, we propose LightThinker, a method that enables LLMs to dynamically...

1 min 1 week, 3 days ago
ada
LOW Academic United States

The Format Tax

arXiv:2604.03616v1 Announce Type: new Abstract: Asking a large language model to respond in JSON should be a formatting choice, not a capability tax. Yet we find that structured output requirements -- JSON, XML, LaTeX, Markdown -- substantially degrade reasoning and...

1 min 1 week, 3 days ago
ada
LOW Academic United States

Solar-VLM: Multimodal Vision-Language Models for Augmented Solar Power Forecasting

arXiv:2604.04145v1 Announce Type: new Abstract: Photovoltaic (PV) power forecasting plays a critical role in power system dispatch and market participation. Because PV generation is highly sensitive to weather conditions and cloud motion, accurate forecasting requires effective modeling of complex spatiotemporal...

1 min 1 week, 3 days ago
ada
LOW Academic United States

Profile-Then-Reason: Bounded Semantic Complexity for Tool-Augmented Language Agents

arXiv:2604.04131v1 Announce Type: new Abstract: Large language model agents that use external tools are often implemented through reactive execution, in which reasoning is repeatedly recomputed after each observation, increasing latency and sensitivity to error propagation. This work introduces Profile-Then-Reason (PTR),...

1 min 1 week, 3 days ago
ada
LOW Academic European Union

Neural Operators for Multi-Task Control and Adaptation

arXiv:2604.03449v1 Announce Type: new Abstract: Neural operator methods have emerged as powerful tools for learning mappings between infinite-dimensional function spaces, yet their potential in optimal control remains largely unexplored. We focus on multi-task control problems, whose solution is a mapping...

1 min 1 week, 3 days ago
ada
LOW Academic United States

Readable Minds: Emergent Theory-of-Mind-Like Behavior in LLM Poker Agents

arXiv:2604.04157v1 Announce Type: new Abstract: Theory of Mind (ToM) -- the ability to model others' mental states -- is fundamental to human social cognition. Whether large language models (LLMs) can develop ToM has been tested exclusively through static vignettes, leaving...

1 min 1 week, 3 days ago
ada
LOW Academic United States

Investigating Data Interventions for Subgroup Fairness: An ICU Case Study

arXiv:2604.03478v1 Announce Type: new Abstract: In high-stakes settings where machine learning models are used to automate decision-making about individuals, the presence of algorithmic bias can exacerbate systemic harm to certain subgroups of people. These biases often stem from the underlying...

1 min 1 week, 3 days ago
labor
LOW Academic International

Toward Full Autonomous Laboratory Instrumentation Control with Large Language Models

arXiv:2604.03286v1 Announce Type: new Abstract: The control of complex laboratory instrumentation often requires significant programming expertise, creating a barrier for researchers lacking computational skills. This work explores the potential of large language models (LLMs), such as ChatGPT, and LLM-based artificial...

1 min 1 week, 3 days ago
labor
LOW Academic International

AdaptFuse: Training-Free Sequential Preference Learning via Externalized Bayesian Inference

arXiv:2604.03925v1 Announce Type: new Abstract: Large language models struggle to accumulate evidence across multiple rounds of user interaction, failing to update their beliefs in a manner consistent with Bayesian inference. Existing solutions require fine-tuning on sensitive user interaction data, limiting...

1 min 1 week, 3 days ago
ada
LOW Academic International

Why Attend to Everything? Focus is the Key

arXiv:2604.03260v1 Announce Type: new Abstract: We introduce Focus, a method that learns which token pairs matter rather than approximating all of them. Learnable centroids assign tokens to groups; distant attention is restricted to same-group pairs while local attention operates at...

1 min 1 week, 3 days ago
ada
LOW Academic International

Unveiling Language Routing Isolation in Multilingual MoE Models for Interpretable Subnetwork Adaptation

arXiv:2604.03592v1 Announce Type: new Abstract: Mixture-of-Experts (MoE) models exhibit striking performance disparities across languages, yet the internal mechanisms driving these gaps remain poorly understood. In this work, we conduct a systematic analysis of expert routing patterns in MoE models, revealing...

1 min 1 week, 3 days ago
ada
LOW Academic International

Towards the AI Historian: Agentic Information Extraction from Primary Sources

arXiv:2604.03553v1 Announce Type: new Abstract: AI is supporting, accelerating, and automating scientific discovery across a diverse set of fields. However, AI adoption in historical research remains limited due to the lack of solutions designed for historians. In this technical progress...

1 min 1 week, 3 days ago
ada
LOW Academic European Union

Multirate Stein Variational Gradient Descent for Efficient Bayesian Sampling

arXiv:2604.03981v1 Announce Type: new Abstract: Many particle-based Bayesian inference methods use a single global step size for all parts of the update. In Stein variational gradient descent (SVGD), however, each update combines two qualitatively different effects: attraction toward high-posterior regions...

1 min 1 week, 3 days ago
ada
LOW Academic International

Apparent Age Estimation: Challenges and Outcomes

arXiv:2604.03335v1 Announce Type: new Abstract: Apparent age estimation is a valuable tool for business personalization, yet current models frequently exhibit demographic biases. We review prior works on the DEX method by applying distribution learning techniques such as Mean-Variance Loss (MVL)...

1 min 1 week, 3 days ago
ada
LOW Academic International

Don't Blink: Evidence Collapse during Multimodal Reasoning

arXiv:2604.04207v1 Announce Type: new Abstract: Reasoning VLMs can become more accurate while progressively losing visual grounding as they think. This creates task-conditional danger zones where low-entropy predictions are confident but ungrounded, a failure mode text-only monitoring cannot detect. Evaluating three...

1 min 1 week, 3 days ago
ada
LOW Academic International

Shorter, but Still Trustworthy? An Empirical Study of Chain-of-Thought Compression

arXiv:2604.04120v1 Announce Type: new Abstract: Long chain-of-thought (Long-CoT) reasoning models have motivated a growing body of work on compressing reasoning traces to reduce inference cost, yet existing evaluations focus almost exclusively on task accuracy and token savings. Trustworthiness properties, whether...

1 min 1 week, 3 days ago
ada
LOW Academic International

Automated Attention Pattern Discovery at Scale in Large Language Models

arXiv:2604.03764v1 Announce Type: new Abstract: Large language models have found success by scaling up capabilities to work in general settings. The same can unfortunately not be said for interpretability methods. The current trend in mechanistic interpretability is to provide precise...

1 min 1 week, 3 days ago
ada
LOW Academic International

Improving Model Performance by Adapting the KGE Metric to Account for System Non-Stationarity

arXiv:2604.03906v1 Announce Type: new Abstract: Geoscientific systems tend to be characterized by pronounced temporal non-stationarity, arising from seasonal and climatic variability in hydrometeorological drivers, and from natural and anthropogenic changes to land use and cover. As has been pointed out,...

1 min 1 week, 3 days ago
ada
LOW Academic International

Where to Steer: Input-Dependent Layer Selection for Steering Improves LLM Alignment

arXiv:2604.03867v1 Announce Type: new Abstract: Steering vectors have emerged as a lightweight and effective approach for aligning large language models (LLMs) at inference time, enabling modulation over model behaviors by shifting LLM representations towards a target behavior. However, existing methods...

1 min 1 week, 3 days ago
ada
LOW Academic International

Scalable Variational Bayesian Fine-Tuning of LLMs via Orthogonalized Low-Rank Adapters

arXiv:2604.03388v1 Announce Type: new Abstract: When deploying large language models (LLMs) to safety-critical applications, uncertainty quantification (UQ) is of utmost importance to self-assess the reliability of the LLM-based decisions. However, such decisions typically suffer from overconfidence, particularly after parameter-efficient fine-tuning...

1 min 1 week, 3 days ago
ada
LOW Academic International

Diagonal-Tiled Mixed-Precision Attention for Efficient Low-Bit MXFP Inference

arXiv:2604.03950v1 Announce Type: new Abstract: Transformer-based large language models (LLMs) have demonstrated remarkable performance across a wide range of real-world tasks, but their inference cost remains prohibitively high due to the quadratic complexity of attention and the memory bandwidth limitations...

1 min 1 week, 3 days ago
ada
Page 7 of 52

Impact Distribution

Critical 0
High 1
Medium 4
Low 1553