
Labor & Employment



TSEmbed: Unlocking Task Scaling in Universal Multimodal Embeddings

arXiv:2603.04772v1 Announce Type: new Abstract: Despite the exceptional reasoning capabilities of Multimodal Large Language Models (MLLMs), their adaptation into universal embedding models is significantly impeded by task conflict. To address this, we propose TSEmbed, a universal multimodal embedding framework that...


Autoscoring Anticlimax: A Meta-analytic Understanding of AI's Short-answer Shortcomings and Wording Weaknesses

arXiv:2603.04820v1 Announce Type: new Abstract: Automated short-answer scoring lags other LLM applications. We meta-analyze 890 culminating results across a systematic review of LLM short-answer scoring studies, modeling the traditional effect size of Quadratic Weighted Kappa (QWK) with mixed effects metaregression....
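The headline metric here, Quadratic Weighted Kappa (QWK), measures rater agreement while penalizing disagreements by the squared distance between score categories. A minimal pure-Python sketch of the standard QWK computation (the function name and inputs are illustrative, not taken from the paper):

```python
from collections import Counter

def quadratic_weighted_kappa(rater_a, rater_b, min_rating, max_rating):
    """Cohen's kappa with quadratic weights (QWK), the agreement metric
    commonly reported in automated short-answer scoring studies."""
    n = max_rating - min_rating + 1
    # Observed co-occurrence matrix of the two raters' scores
    O = [[0.0] * n for _ in range(n)]
    for a, b in zip(rater_a, rater_b):
        O[a - min_rating][b - min_rating] += 1
    total = len(rater_a)
    # Expected matrix from the raters' marginal histograms
    hist_a = Counter(a - min_rating for a in rater_a)
    hist_b = Counter(b - min_rating for b in rater_b)
    num = den = 0.0
    for i in range(n):
        for j in range(n):
            w = (i - j) ** 2 / (n - 1) ** 2   # quadratic disagreement weight
            E = hist_a[i] * hist_b[j] / total
            num += w * O[i][j]
            den += w * E
    return 1.0 - num / den

print(quadratic_weighted_kappa([0, 1, 2, 3], [0, 1, 2, 3], 0, 3))  # 1.0
```

Perfect agreement yields 1.0, chance-level agreement 0.0, and systematic disagreement goes negative.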


SinhaLegal: A Benchmark Corpus for Information Extraction and Analysis in Sinhala Legislative Texts

arXiv:2603.04854v1 Announce Type: new Abstract: SinhaLegal introduces a Sinhala legislative text corpus containing approximately 2 million words across 1,206 legal documents. The dataset includes two types of legal documents: 1,065 Acts dated from 1981 to 2014 and 141 Bills from...


Free Lunch for Pass@$k$? Low Cost Diverse Sampling for Diffusion Language Models

arXiv:2603.04893v1 Announce Type: new Abstract: Diverse outputs in text generation are necessary for effective exploration in complex reasoning tasks, such as code generation and mathematical problem solving. Such Pass@$k$ problems benefit from distinct candidates covering the solution space. However, traditional...
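Pass@$k$ is conventionally estimated with the unbiased combinatorial form $1 - \binom{n-c}{k}/\binom{n}{k}$, where $n$ drawn samples contain $c$ correct solutions. A small sketch of that standard estimator (names are illustrative):

```python
from math import comb

def pass_at_k(n, c, k):
    """Unbiased Pass@k estimator: probability that at least one of k
    samples, drawn without replacement from n candidates of which c are
    correct, solves the task."""
    if n - c < k:
        return 1.0  # too few incorrect candidates to fill all k slots
    return 1.0 - comb(n - c, k) / comb(n, k)

print(pass_at_k(10, 2, 5))  # ≈ 0.778
```

The estimator makes the abstract's point concrete: with a fixed correct-rate $c/n$, more *distinct* correct candidates raise Pass@$k$ faster than repeated near-duplicates.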


AILS-NTUA at SemEval-2026 Task 3: Efficient Dimensional Aspect-Based Sentiment Analysis

arXiv:2603.04933v1 Announce Type: new Abstract: In this paper, we present the AILS-NTUA system for Track-A of SemEval-2026 Task 3 on Dimensional Aspect-Based Sentiment Analysis (DimABSA), which encompasses three complementary problems: Dimensional Aspect Sentiment Regression (DimASR), Dimensional Aspect Sentiment Triplet Extraction (DimASTE),...


When Weak LLMs Speak with Confidence, Preference Alignment Gets Stronger

arXiv:2603.04968v1 Announce Type: new Abstract: Preference alignment is an essential step in adapting large language models (LLMs) to human values, but existing approaches typically depend on costly human annotations or large-scale API-based models. We explore whether a weak LLM can...


MPCEval: A Benchmark for Multi-Party Conversation Generation

arXiv:2603.04969v1 Announce Type: new Abstract: Multi-party conversation generation, such as smart reply and collaborative assistants, is an increasingly important capability of generative AI, yet its evaluation remains a critical bottleneck. Compared to two-party dialogue, multi-party settings introduce distinct challenges, including...


Thin Keys, Full Values: Reducing KV Cache via Low-Dimensional Attention Selection

arXiv:2603.04427v1 Announce Type: new Abstract: Standard transformer attention uses identical dimensionality for queries, keys, and values ($d_q = d_k = d_v = d_{\text{model}}$). Our insight is that these components serve fundamentally different roles, and this symmetry is unnecessary. Queries and...
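The symmetry the abstract questions can be broken by scoring attention in a low-dimensional query/key space while keeping values at full model width. A toy sketch of that idea with random weights (the sizes and projection layout are illustrative assumptions, not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_k, seq = 64, 16, 8   # illustrative sizes: d_k << d_model

# Queries and keys are projected into a thin dimension used only for
# scoring; values keep the full model width ("thin keys, full values").
W_q = rng.standard_normal((d_model, d_k))
W_k = rng.standard_normal((d_model, d_k))
W_v = rng.standard_normal((d_model, d_model))

x = rng.standard_normal((seq, d_model))
q, k, v = x @ W_q, x @ W_k, x @ W_v

scores = q @ k.T / np.sqrt(d_k)                 # (seq, seq) logits
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
out = weights @ v                               # (seq, d_model) output

print(out.shape)  # (8, 64)
```

The cached keys shrink from `d_model` to `d_k` floats per token, which is where the KV-cache saving would come from.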


VSPrefill: Vertical-Slash Sparse Attention with Lightweight Indexing for Long-Context Prefilling

arXiv:2603.04460v1 Announce Type: new Abstract: The quadratic complexity of self-attention during the prefill phase impedes long-context inference in large language models. Existing sparse attention methods face a trade-off among context adaptivity, sampling overhead, and fine-tuning costs. We propose VSPrefill, a...


Understanding the Dynamics of Demonstration Conflict in In-Context Learning

arXiv:2603.04464v1 Announce Type: new Abstract: In-context learning enables large language models to perform novel tasks through few-shot demonstrations. However, demonstrations per se can naturally contain noise and conflicting examples, making this capability vulnerable. To understand how models process such conflicts,...


PDE foundation model-accelerated inverse estimation of system parameters in inertial confinement fusion

arXiv:2603.04606v1 Announce Type: new Abstract: PDE foundation models are typically pretrained on large, diverse corpora of PDE datasets and can be adapted to new settings with limited task-specific data. However, most downstream evaluations focus on forward problems, such as autoregressive...


When Sensors Fail: Temporal Sequence Models for Robust PPO under Sensor Drift

arXiv:2603.04648v1 Announce Type: new Abstract: Real-world reinforcement learning systems must operate under distributional drift in their observation streams, yet most policy architectures implicitly assume fully observed and noise-free states. We study robustness of Proximal Policy Optimization (PPO) under temporally persistent...


Engineering Regression Without Real-Data Training: Domain Adaptation for Tabular Foundation Models Using Multi-Dataset Embeddings

arXiv:2603.04692v1 Announce Type: new Abstract: Predictive modeling in engineering applications has long been dominated by bespoke models and small, siloed tabular datasets, limiting the applicability of large-scale learning approaches. Despite recent progress in tabular foundation models, the resulting synthetic training...


How 1,000+ customer calls shaped a breakout enterprise AI startup

On this episode of Build Mode, David Park joins Isabelle Johannessen to discuss how he and his team are intentionally iterating, fundraising, and scaling Narada.


A Dual-Helix Governance Approach Towards Reliable Agentic AI for WebGIS Development

arXiv:2603.04390v1 Announce Type: new Abstract: WebGIS development requires rigor, yet agentic AI frequently fails due to five large language model (LLM) limitations: context constraints, cross-session forgetting, stochasticity, instruction failure, and adaptation rigidity. We propose a dual-helix governance framework reframing these...


From Conflict to Consensus: Boosting Medical Reasoning via Multi-Round Agentic RAG

arXiv:2603.03292v1 Announce Type: cross Abstract: Large Language Models (LLMs) exhibit high reasoning capacity in medical question-answering, but their tendency to produce hallucinations and outdated knowledge poses critical risks in healthcare fields. While Retrieval-Augmented Generation (RAG) mitigates these issues, existing methods...


TATRA: Training-Free Instance-Adaptive Prompting Through Rephrasing and Aggregation

arXiv:2603.03298v1 Announce Type: cross Abstract: Large Language Models (LLMs) have improved substantially in alignment, yet their behavior remains highly sensitive to prompt phrasing. This brittleness has motivated automated prompt engineering, but most existing methods (i) require a task-specific training set, (ii)...


Token-Oriented Object Notation vs JSON: A Benchmark of Plain and Constrained Decoding Generation

arXiv:2603.03306v1 Announce Type: cross Abstract: Recently presented Token-Oriented Object Notation (TOON) aims to replace JSON as a serialization format for passing structured data to LLMs with significantly reduced token usage. While showing solid accuracy in LLM comprehension, there is a...


Discern Truth from Falsehood: Reducing Over-Refusal via Contrastive Refinement

arXiv:2603.03323v1 Announce Type: cross Abstract: Large language models (LLMs) aligned for safety often suffer from over-refusal, the tendency to reject seemingly toxic but benign prompts by misclassifying them as toxic. This behavior undermines models' helpfulness and restricts usability in sensitive...


Certainty robustness: Evaluating LLM stability under self-challenging prompts

arXiv:2603.03330v1 Announce Type: new Abstract: Large language models (LLMs) often present answers with high apparent confidence despite lacking an explicit mechanism for reasoning about certainty or truth. While existing benchmarks primarily evaluate single-turn accuracy, truthfulness or confidence calibration, they do...


Fragile Thoughts: How Large Language Models Handle Chain-of-Thought Perturbations

arXiv:2603.03332v1 Announce Type: new Abstract: Chain-of-Thought (CoT) prompting has emerged as a foundational technique for eliciting reasoning from Large Language Models (LLMs), yet the robustness of this approach to corruptions in intermediate reasoning steps remains poorly understood. This paper presents...


Training-free Dropout Sampling for Semantic Token Acceptance in Speculative Decoding

arXiv:2603.03333v1 Announce Type: new Abstract: Speculative decoding accelerates large language model inference by proposing tokens with a lightweight draft model and selectively accepting them using a target model. This work introduces DropMatch, a novel approach that matches draft tokens to...
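For context, vanilla speculative decoding accepts a draft token only while it agrees with the target model, then substitutes the target's token at the first mismatch. A simplified greedy-verification sketch of that baseline (the paper's DropMatch semantic acceptance is not reproduced here; names are illustrative):

```python
def speculative_accept(draft_tokens, target_greedy):
    """Greedy speculative-decoding verification: keep draft tokens while
    they match the target model's greedy choice, then take the target's
    token at the first mismatch and stop."""
    accepted = []
    for d, t in zip(draft_tokens, target_greedy):
        if d == t:
            accepted.append(d)          # draft token verified
        else:
            accepted.append(t)          # correction from the target model
            break
    return accepted

print(speculative_accept([5, 9, 9, 4], [5, 9, 7, 4]))  # [5, 9, 7]
```

Relaxing the exact-match test, e.g. to accept semantically equivalent draft tokens, is what raises the acceptance rate and hence the speedup.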


Farther the Shift, Sparser the Representation: Analyzing OOD Mechanisms in LLMs

arXiv:2603.03415v1 Announce Type: new Abstract: In this work, we investigate how Large Language Models (LLMs) adapt their internal representations when encountering inputs of increasing difficulty, quantified as the degree of out-of-distribution (OOD) shift. We reveal a consistent and quantifiable phenomenon:...


[Re] FairDICE: A Gap Between Theory And Practice

arXiv:2603.03454v1 Announce Type: new Abstract: Offline Reinforcement Learning (RL) is an emerging field of RL in which policies are learned solely from demonstrations. Within offline RL, some environments involve balancing multiple objectives, but existing multi-objective offline RL algorithms do not...


Test-Time Meta-Adaptation with Self-Synthesis

arXiv:2603.03524v1 Announce Type: new Abstract: As strong general reasoners, large language models (LLMs) encounter diverse domains and tasks, where the ability to adapt and self-improve at test time is valuable. We introduce MASS, a meta-learning framework that enables LLMs to...


Trade-offs in Ensembling, Merging and Routing Among Parameter-Efficient Experts

arXiv:2603.03535v1 Announce Type: new Abstract: While large language models (LLMs) fine-tuned with lightweight adapters achieve strong performance across diverse tasks, their performance on individual tasks depends on the fine-tuning strategy. Fusing independently trained models with different strengths has shown promise...
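Of the three fusion strategies in the title, merging is the simplest: parameter-wise averaging of independently trained experts. A toy sketch with dict-of-lists "adapters" (parameter names and shapes are purely illustrative):

```python
def merge_adapters(adapters, weights=None):
    """Uniform (or weighted) parameter averaging across adapter
    checkpoints. Each adapter is a dict mapping parameter name to a
    flat list of floats; all adapters share the same keys and shapes."""
    if weights is None:
        weights = [1.0 / len(adapters)] * len(adapters)
    merged = {}
    for name in adapters[0]:
        merged[name] = [
            sum(w * a[name][i] for a, w in zip(adapters, weights))
            for i in range(len(adapters[0][name]))
        ]
    return merged

a = {"lora_A": [1.0, 2.0]}
b = {"lora_A": [3.0, 4.0]}
print(merge_adapters([a, b]))  # {'lora_A': [2.0, 3.0]}
```

Ensembling instead averages the experts' *outputs* at inference time, and routing picks one expert per input; the trade-off the paper studies is between this kind of cheap static fusion and those costlier dynamic schemes.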


Transport Clustering: Solving Low-Rank Optimal Transport via Clustering

arXiv:2603.03578v1 Announce Type: new Abstract: Optimal transport (OT) finds a least cost transport plan between two probability distributions using a cost matrix defined on pairs of points. Unlike standard OT, which infers unstructured pointwise mappings, low-rank optimal transport explicitly constrains...


NuMuon: Nuclear-Norm-Constrained Muon for Compressible LLM Training

arXiv:2603.03597v1 Announce Type: new Abstract: The rapid progress of large language models (LLMs) is increasingly constrained by memory and deployment costs, motivating compression methods for practical deployment. Many state-of-the-art compression pipelines leverage the low-rank structure of trained weight matrices, a...


LEA: Label Enumeration Attack in Vertical Federated Learning

arXiv:2603.03777v1 Announce Type: new Abstract: A typical Vertical Federated Learning (VFL) scenario involves several participants collaboratively training a machine learning model, where each party has different features for the same samples, with labels held exclusively by one party. Since labels...


When and Where to Reset Matters for Long-Term Test-Time Adaptation

arXiv:2603.03796v1 Announce Type: new Abstract: When continual test-time adaptation (TTA) persists over the long term, errors accumulate in the model and further cause it to predict only a few classes for all inputs, a phenomenon known as model collapse. Recent...


Impact Distribution

Critical 0
High 1
Medium 4
Low 1553