Labor & Employment
LOW News International

How 1,000+ customer calls shaped a breakout enterprise AI startup

On this episode of Build Mode, David Park joins Isabelle Johannessen to discuss how he and his team are intentionally iterating, fundraising, and scaling Narada.

1 min 1 month, 2 weeks ago
ada
LOW Academic International

A Dual-Helix Governance Approach Towards Reliable Agentic AI for WebGIS Development

arXiv:2603.04390v1 Announce Type: new Abstract: WebGIS development requires rigor, yet agentic AI frequently fails due to five large language model (LLM) limitations: context constraints, cross-session forgetting, stochasticity, instruction failure, and adaptation rigidity. We propose a dual-helix governance framework reframing these...

ada
LOW Academic International

From Conflict to Consensus: Boosting Medical Reasoning via Multi-Round Agentic RAG

arXiv:2603.03292v1 Announce Type: cross Abstract: Large Language Models (LLMs) exhibit high reasoning capacity in medical question-answering, but their tendency to produce hallucinations and outdated knowledge poses critical risks in healthcare fields. While Retrieval-Augmented Generation (RAG) mitigates these issues, existing methods...

ada
LOW Academic European Union

TTSR: Test-Time Self-Reflection for Continual Reasoning Improvement

arXiv:2603.03297v1 Announce Type: cross Abstract: Test-time Training enables model adaptation using only test questions and offers a promising paradigm for improving the reasoning ability of large language models (LLMs). However, it faces two major challenges: test questions are often highly...

ada
LOW Academic International

TATRA: Training-Free Instance-Adaptive Prompting Through Rephrasing and Aggregation

arXiv:2603.03298v1 Announce Type: cross Abstract: Large Language Models (LLMs) have improved substantially in alignment, yet their behavior remains highly sensitive to prompt phrasing. This brittleness has motivated automated prompt engineering, but most existing methods (i) require a task-specific training set, (ii)...

ada
LOW Academic International

Token-Oriented Object Notation vs JSON: A Benchmark of Plain and Constrained Decoding Generation

arXiv:2603.03306v1 Announce Type: cross Abstract: Recently presented Token-Oriented Object Notation (TOON) aims to replace JSON as a serialization format for passing structured data to LLMs with significantly reduced token usage. While showing solid accuracy in LLM comprehension, there is a...

ada
LOW Academic International

Discern Truth from Falsehood: Reducing Over-Refusal via Contrastive Refinement

arXiv:2603.03323v1 Announce Type: cross Abstract: Large language models (LLMs) aligned for safety often suffer from over-refusal, the tendency to reject seemingly toxic but actually benign prompts by misclassifying them as toxic. This behavior undermines models' helpfulness and restricts usability in sensitive...

ada
LOW Academic European Union

Controllable and explainable personality sliders for LLMs at inference time

arXiv:2603.03326v1 Announce Type: cross Abstract: Aligning Large Language Models (LLMs) with specific personas typically relies on expensive and monolithic Supervised Fine-Tuning (SFT) or RLHF. While effective, these methods require training distinct models for every target personality profile. Inference-time activation steering...

ada
LOW Academic European Union

Combating data scarcity in recommendation services: Integrating cognitive types of VARK and neural network technologies (LLM)

arXiv:2603.03309v1 Announce Type: new Abstract: Cold start scenarios present fundamental obstacles to effective recommendation generation, particularly when dealing with users lacking interaction history or items with sparse metadata. This research proposes an innovative hybrid framework that leverages Large Language Models...

ada
LOW Academic European Union

Entropic-Time Inference: Self-Organizing Large Language Model Decoding Beyond Attention

arXiv:2603.03310v1 Announce Type: new Abstract: Modern large language model (LLM) inference engines optimize throughput and latency under fixed decoding rules, treating generation as a linear progression in token time. We propose a fundamentally different paradigm: entropic-time inference, where decoding is...

ada
LOW Academic International

Certainty robustness: Evaluating LLM stability under self-challenging prompts

arXiv:2603.03330v1 Announce Type: new Abstract: Large language models (LLMs) often present answers with high apparent confidence despite lacking an explicit mechanism for reasoning about certainty or truth. While existing benchmarks primarily evaluate single-turn accuracy, truthfulness or confidence calibration, they do...

ada
LOW Academic United States

PulseLM: A Foundation Dataset and Benchmark for PPG-Text Learning

arXiv:2603.03331v1 Announce Type: new Abstract: Photoplethysmography (PPG) is a widely used non-invasive sensing modality for continuous cardiovascular and physiological monitoring across clinical, laboratory, and wearable settings. While existing PPG datasets support a broad range of downstream tasks, they typically provide...

labor
LOW Academic International

Fragile Thoughts: How Large Language Models Handle Chain-of-Thought Perturbations

arXiv:2603.03332v1 Announce Type: new Abstract: Chain-of-Thought (CoT) prompting has emerged as a foundational technique for eliciting reasoning from Large Language Models (LLMs), yet the robustness of this approach to corruptions in intermediate reasoning steps remains poorly understood. This paper presents...

ada
LOW Academic International

Training-free Dropout Sampling for Semantic Token Acceptance in Speculative Decoding

arXiv:2603.03333v1 Announce Type: new Abstract: Speculative decoding accelerates large language model inference by proposing tokens with a lightweight draft model and selectively accepting them using a target model. This work introduces DropMatch, a novel approach that matches draft tokens to...
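The draft-and-verify loop described in this abstract can be illustrated with a minimal sketch. The toy `draft_model` and `target_model` distributions and the 8-token vocabulary below are made up for illustration and are not from the paper; on rejection this sketch simply resamples from the target distribution, omitting the full residual-distribution correction used in exact speculative sampling.

```python
import random

random.seed(0)
VOCAB = list(range(8))

def draft_model(prefix):
    # Cheap proposal distribution: uniform over the toy vocabulary.
    return {t: 1.0 / len(VOCAB) for t in VOCAB}

def target_model(prefix):
    # Expensive "verifier" distribution: heavily biased toward token 0.
    probs = {t: 0.05 for t in VOCAB}
    probs[0] = 1.0 - 0.05 * (len(VOCAB) - 1)
    return probs

def speculative_step(prefix, k=4):
    """Propose up to k draft tokens, verifying each against the target.

    A draft token t is accepted with probability min(1, p_target/p_draft);
    on the first rejection we sample the target instead and stop early.
    """
    out = list(prefix)
    for _ in range(k):
        q = draft_model(out)
        t = random.choices(VOCAB, weights=[q[v] for v in VOCAB])[0]
        p = target_model(out)
        if random.random() < min(1.0, p[t] / q[t]):
            out.append(t)  # draft token verified, keep going
        else:
            # Rejected: fall back to a sample from the target and stop.
            out.append(random.choices(VOCAB, weights=[p[v] for v in VOCAB])[0])
            break
    return out

seq = speculative_step([3], k=4)
print(seq)
```

Because acceptance is checked token by token, a single cheap draft pass can yield several verified tokens per expensive target evaluation, which is the source of the speedup.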

ada
LOW Academic International

Farther the Shift, Sparser the Representation: Analyzing OOD Mechanisms in LLMs

arXiv:2603.03415v1 Announce Type: new Abstract: In this work, we investigate how Large Language Models (LLMs) adapt their internal representations when encountering inputs of increasing difficulty, quantified as the degree of out-of-distribution (OOD) shift. We reveal a consistent and quantifiable phenomenon:...

ada
LOW Academic European Union

RADAR: Learning to Route with Asymmetry-aware DistAnce Representations

arXiv:2603.03388v1 Announce Type: new Abstract: Recent neural solvers have achieved strong performance on vehicle routing problems (VRPs), yet they mainly assume symmetric Euclidean distances, restricting applicability to real-world scenarios. A core challenge is encoding the relational features in asymmetric distance...

ada
LOW Academic European Union

Towards Improved Sentence Representations using Token Graphs

arXiv:2603.03389v1 Announce Type: new Abstract: Obtaining a single-vector representation from a Large Language Model's (LLM) token-level outputs is a critical step for nearly all sentence-level tasks. However, standard pooling methods like mean or max aggregation treat tokens as an independent...
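The independence the abstract criticizes is easy to see in a minimal mean-pooling sketch (the token vectors below are made up for illustration): each output dimension is an average over tokens, so swapping or decoupling tokens changes nothing.

```python
# Three toy token vectors from a hypothetical LLM, dimension 3.
tokens = [
    [1.0, 2.0, 3.0],
    [3.0, 0.0, 1.0],
    [2.0, 4.0, 2.0],
]

def mean_pool(vectors):
    # Average each dimension across tokens; every token contributes
    # independently, with no notion of inter-token structure.
    dim = len(vectors[0])
    return [sum(v[i] for v in vectors) / len(vectors) for i in range(dim)]

print(mean_pool(tokens))  # -> [2.0, 2.0, 2.0]
```

Any permutation of `tokens` yields the same pooled vector, which is exactly the order- and structure-blindness that graph-based aggregation aims to address.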

ada
LOW Academic International

[Re] FairDICE: A Gap Between Theory And Practice

arXiv:2603.03454v1 Announce Type: new Abstract: Offline Reinforcement Learning (RL) is an emerging field of RL in which policies are learned solely from demonstrations. Within offline RL, some environments involve balancing multiple objectives, but existing multi-objective offline RL algorithms do not...

ada
LOW Academic European Union

When Small Variations Become Big Failures: Reliability Challenges in Compute-in-Memory Neural Accelerators

arXiv:2603.03491v1 Announce Type: new Abstract: Compute-in-memory (CiM) architectures promise significant improvements in energy efficiency and throughput for deep neural network acceleration by alleviating the von Neumann bottleneck. However, their reliance on emerging non-volatile memory devices introduces device-level non-idealities, such as write...

ada
LOW Academic International

Test-Time Meta-Adaptation with Self-Synthesis

arXiv:2603.03524v1 Announce Type: new Abstract: As strong general reasoners, large language models (LLMs) encounter diverse domains and tasks, where the ability to adapt and self-improve at test time is valuable. We introduce MASS, a meta-learning framework that enables LLMs to...

ada
LOW Academic European Union

mlx-snn: Spiking Neural Networks on Apple Silicon via MLX

arXiv:2603.03529v1 Announce Type: new Abstract: We introduce mlx-snn, the first spiking neural network (SNN) library built natively on Apple's MLX framework. As SNN research grows rapidly, all major libraries -- snnTorch, Norse, SpikingJelly, Lava -- target PyTorch or custom backends,...

ada
LOW Academic United States

Role-Aware Conditional Inference for Spatiotemporal Ecosystem Carbon Flux Prediction

arXiv:2603.03531v1 Announce Type: new Abstract: Accurate prediction of terrestrial ecosystem carbon fluxes (e.g., CO₂, GPP, and CH₄) is essential for understanding the global carbon cycle and managing its impacts. However, prediction remains challenging due to strong spatiotemporal heterogeneity: ecosystem flux...

ada
LOW Academic International

Trade-offs in Ensembling, Merging and Routing Among Parameter-Efficient Experts

arXiv:2603.03535v1 Announce Type: new Abstract: While large language models (LLMs) fine-tuned with lightweight adapters achieve strong performance across diverse tasks, their performance on individual tasks depends on the fine-tuning strategy. Fusing independently trained models with different strengths has shown promise...

ada
LOW Academic International

Transport Clustering: Solving Low-Rank Optimal Transport via Clustering

arXiv:2603.03578v1 Announce Type: new Abstract: Optimal transport (OT) finds a least cost transport plan between two probability distributions using a cost matrix defined on pairs of points. Unlike standard OT, which infers unstructured pointwise mappings, low-rank optimal transport explicitly constrains...
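The least-cost-plan objective the abstract defines can be sketched on the smallest nontrivial case. The 2x2 cost matrix and uniform marginals below are made up for illustration; for two 2-point distributions the feasible plans form a one-parameter family, which the sketch simply scans.

```python
# Toy discrete optimal transport: move mass between two 2-point
# distributions at least total cost. Values are illustrative.
cost = [[0.0, 2.0],
        [1.0, 0.5]]
a = [0.5, 0.5]  # source marginal
b = [0.5, 0.5]  # target marginal

best_plan, best_cost = None, float("inf")
steps = 1000
for s in range(steps + 1):
    t = 0.5 * s / steps  # mass sent from source point 0 to target point 0
    # Remaining entries are forced by the marginal constraints:
    plan = [[t, a[0] - t],
            [b[0] - t, a[1] - (b[0] - t)]]
    c = sum(plan[i][j] * cost[i][j] for i in range(2) for j in range(2))
    if c < best_cost:
        best_plan, best_cost = plan, c

print(best_cost)  # -> 0.25
```

Here the diagonal plan wins because the off-diagonal routes are expensive; low-rank OT, as the abstract notes, additionally constrains the structure of `plan` rather than leaving it an unstructured pointwise mapping.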

ada
LOW Academic United States

Hybrid Belief Reinforcement Learning for Efficient Coordinated Spatial Exploration

arXiv:2603.03595v1 Announce Type: new Abstract: Coordinating multiple autonomous agents to explore and serve spatially heterogeneous demand requires jointly learning unknown spatial patterns and planning trajectories that maximize task performance. Pure model-based approaches provide structured uncertainty estimates but lack adaptive policy...

ada
LOW Academic International

NuMuon: Nuclear-Norm-Constrained Muon for Compressible LLM Training

arXiv:2603.03597v1 Announce Type: new Abstract: The rapid progress of large language models (LLMs) is increasingly constrained by memory and deployment costs, motivating compression methods for practical deployment. Many state-of-the-art compression pipelines leverage the low-rank structure of trained weight matrices, a...

ada
LOW Academic European Union

Adaptive Sensing of Continuous Physical Systems for Machine Learning

arXiv:2603.03650v1 Announce Type: new Abstract: Physical dynamical systems can be viewed as natural information processors: their dynamics preserve, transform, and disperse input information. This perspective motivates learning not only from data generated by such systems, but also how to measure...

ada
LOW Academic European Union

Graph Negative Feedback Bias Correction Framework for Adaptive Heterophily Modeling

arXiv:2603.03662v1 Announce Type: new Abstract: Graph Neural Networks (GNNs) have emerged as a powerful framework for processing graph-structured data. However, conventional GNNs and their variants are inherently limited by the homophily assumption, leading to degradation in performance on heterophilic graphs....

ada
LOW Academic International

LEA: Label Enumeration Attack in Vertical Federated Learning

arXiv:2603.03777v1 Announce Type: new Abstract: A typical Vertical Federated Learning (VFL) scenario involves several participants collaboratively training a machine learning model, where each party has different features for the same samples, with labels held exclusively by one party. Since labels...

labor
LOW Academic International

When and Where to Reset Matters for Long-Term Test-Time Adaptation

arXiv:2603.03796v1 Announce Type: new Abstract: When continual test-time adaptation (TTA) persists over the long term, errors accumulate in the model and further cause it to predict only a few classes for all inputs, a phenomenon known as model collapse. Recent...

ada
Page 40 of 52

Impact Distribution

Critical: 0
High: 1
Medium: 4
Low: 1553