Labor & Employment
LOW Academic International

PDE foundation model-accelerated inverse estimation of system parameters in inertial confinement fusion

arXiv:2603.04606v1 Announce Type: new Abstract: PDE foundation models are typically pretrained on large, diverse corpora of PDE datasets and can be adapted to new settings with limited task-specific data. However, most downstream evaluations focus on forward problems, such as autoregressive...

1 min 1 month, 1 week ago
ada
LOW Academic International

When Sensors Fail: Temporal Sequence Models for Robust PPO under Sensor Drift

arXiv:2603.04648v1 Announce Type: new Abstract: Real-world reinforcement learning systems must operate under distributional drift in their observation streams, yet most policy architectures implicitly assume fully observed and noise-free states. We study robustness of Proximal Policy Optimization (PPO) under temporally persistent...
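
The abstract's "temporally persistent" drift can be pictured as a per-dimension random-walk bias added to each observation. A minimal sketch of that corruption model (an illustrative assumption; the paper's exact drift process is not shown in the teaser):

```python
import random

def drifting_observations(obs_stream, drift_scale=0.01, seed=0):
    """Yield observations corrupted by temporally persistent sensor drift,
    modeled here as a per-dimension random-walk bias."""
    rng = random.Random(seed)
    bias = None
    for obs in obs_stream:
        if bias is None:
            bias = [0.0] * len(obs)
        # Random-walk update: the bias persists and accumulates over time,
        # unlike i.i.d. noise that resets at every step.
        bias = [b + rng.gauss(0.0, drift_scale) for b in bias]
        yield [o + b for o, b in zip(obs, bias)]
```

Wrapping a policy's observation stream this way is one common harness for the robustness evaluation the paper describes.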

1 min 1 month, 1 week ago
ada
LOW Academic International

Engineering Regression Without Real-Data Training: Domain Adaptation for Tabular Foundation Models Using Multi-Dataset Embeddings

arXiv:2603.04692v1 Announce Type: new Abstract: Predictive modeling in engineering applications has long been dominated by bespoke models and small, siloed tabular datasets, limiting the applicability of large-scale learning approaches. Despite recent progress in tabular foundation models, the resulting synthetic training...

1 min 1 month, 1 week ago
ada
LOW Law Review United States

The Untold Story of the Proto-Smith Era: Justice O’Connor’s Papers and the Court’s Free Exercise Revolution

Justice O’Connor’s recently released Supreme Court papers reveal the untold story of how the Court systematically dismantled religious accommodation protections in the decade leading up to Employment Division v. Smith. While Smith’s abandonment of strict scrutiny for neutral, generally applicable...

1 min 1 month, 1 week ago
employment
LOW Think Tank United States

Partner & Partners

3 min 1 month, 1 week ago
labor
LOW News International

How 1,000+ customer calls shaped a breakout enterprise AI startup

On this episode of Build Mode, David Park joins Isabelle Johannessen to discuss how he and his team are intentionally iterating, fundraising, and scaling Narada.

1 min 1 month, 1 week ago
ada
LOW Academic International

A Dual-Helix Governance Approach Towards Reliable Agentic AI for WebGIS Development

arXiv:2603.04390v1 Announce Type: new Abstract: WebGIS development requires rigor, yet agentic AI frequently fails due to five large language model (LLM) limitations: context constraints, cross-session forgetting, stochasticity, instruction failure, and adaptation rigidity. We propose a dual-helix governance framework reframing these...

1 min 1 month, 2 weeks ago
ada
LOW Academic International

From Conflict to Consensus: Boosting Medical Reasoning via Multi-Round Agentic RAG

arXiv:2603.03292v1 Announce Type: cross Abstract: Large Language Models (LLMs) exhibit high reasoning capacity in medical question-answering, but their tendency to produce hallucinations and outdated knowledge poses critical risks in healthcare fields. While Retrieval-Augmented Generation (RAG) mitigates these issues, existing methods...

1 min 1 month, 2 weeks ago
ada
LOW Academic European Union

TTSR: Test-Time Self-Reflection for Continual Reasoning Improvement

arXiv:2603.03297v1 Announce Type: cross Abstract: Test-time Training enables model adaptation using only test questions and offers a promising paradigm for improving the reasoning ability of large language models (LLMs). However, it faces two major challenges: test questions are often highly...

1 min 1 month, 2 weeks ago
ada
LOW Academic International

TATRA: Training-Free Instance-Adaptive Prompting Through Rephrasing and Aggregation

arXiv:2603.03298v1 Announce Type: cross Abstract: Large Language Models (LLMs) have substantially improved in alignment, yet their behavior remains highly sensitive to prompt phrasing. This brittleness has motivated automated prompt engineering, but most existing methods (i) require a task-specific training set, (ii)...

1 min 1 month, 2 weeks ago
ada
LOW Academic International

Token-Oriented Object Notation vs JSON: A Benchmark of Plain and Constrained Decoding Generation

arXiv:2603.03306v1 Announce Type: cross Abstract: Recently presented Token-Oriented Object Notation (TOON) aims to replace JSON as a serialization format for passing structured data to LLMs with significantly reduced token usage. While showing solid accuracy in LLM comprehension, there is a...
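
The token savings TOON targets come from not repeating keys for every record. The encoding below is a hypothetical tabular serialization in that spirit, not actual TOON syntax (the abstract does not show the format), compared against plain JSON by character length as a rough proxy for token count:

```python
import json

records = [
    {"id": 1, "name": "ada", "score": 0.91},
    {"id": 2, "name": "grace", "score": 0.87},
]

# Baseline: standard JSON, which repeats every key in every record.
as_json = json.dumps(records)

# Hypothetical header-plus-rows encoding: keys appear once, then one
# comma-separated row per record.
keys = list(records[0])
as_tabular = "\n".join(
    [",".join(keys)] + [",".join(str(r[k]) for k in keys) for r in records]
)

print(len(as_json), len(as_tabular))  # the tabular form is shorter
```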

1 min 1 month, 2 weeks ago
ada
LOW Academic International

Discern Truth from Falsehood: Reducing Over-Refusal via Contrastive Refinement

arXiv:2603.03323v1 Announce Type: cross Abstract: Large language models (LLMs) aligned for safety often suffer from over-refusal, the tendency to reject benign but seemingly toxic prompts by misclassifying them as toxic. This behavior undermines models' helpfulness and restricts usability in sensitive...

1 min 1 month, 2 weeks ago
ada
LOW Academic European Union

Controllable and explainable personality sliders for LLMs at inference time

arXiv:2603.03326v1 Announce Type: cross Abstract: Aligning Large Language Models (LLMs) with specific personas typically relies on expensive and monolithic Supervised Fine-Tuning (SFT) or RLHF. While effective, these methods require training distinct models for every target personality profile. Inference-time activation steering...
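
The core of inference-time activation steering, which the abstract contrasts with SFT and RLHF, is adding a scaled trait direction to a hidden state. A toy sketch (the trait vector here is a stand-in; real steering directions are typically extracted from contrastive prompt pairs, which is an assumption about the paper's setup):

```python
def steer(hidden, direction, alpha):
    """Return hidden + alpha * direction, the core steering update.
    alpha acts as the slider: 0 leaves the state unchanged, larger
    values push the representation further along the trait direction."""
    return [h + alpha * d for h, d in zip(hidden, direction)]

hidden = [0.2, -0.5, 1.0]
extroversion = [0.1, 0.3, -0.2]  # hypothetical trait direction
low = steer(hidden, extroversion, 0.0)   # slider off: unchanged
high = steer(hidden, extroversion, 2.0)  # slider up
```

One model serves every personality profile because only the scalar slider and direction change, not the weights.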

1 min 1 month, 2 weeks ago
ada
LOW Academic European Union

Combating data scarcity in recommendation services: Integrating cognitive types of VARK and neural network technologies (LLM)

arXiv:2603.03309v1 Announce Type: new Abstract: Cold start scenarios present fundamental obstacles to effective recommendation generation, particularly when dealing with users lacking interaction history or items with sparse metadata. This research proposes an innovative hybrid framework that leverages Large Language Models...

1 min 1 month, 2 weeks ago
ada
LOW Academic European Union

Entropic-Time Inference: Self-Organizing Large Language Model Decoding Beyond Attention

arXiv:2603.03310v1 Announce Type: new Abstract: Modern large language model (LLM) inference engines optimize throughput and latency under fixed decoding rules, treating generation as a linear progression in token time. We propose a fundamentally different paradigm: entropic-time inference, where decoding is...

1 min 1 month, 2 weeks ago
ada
LOW Academic International

Certainty robustness: Evaluating LLM stability under self-challenging prompts

arXiv:2603.03330v1 Announce Type: new Abstract: Large language models (LLMs) often present answers with high apparent confidence despite lacking an explicit mechanism for reasoning about certainty or truth. While existing benchmarks primarily evaluate single-turn accuracy, truthfulness or confidence calibration, they do...

1 min 1 month, 2 weeks ago
ada
LOW Academic United States

PulseLM: A Foundation Dataset and Benchmark for PPG-Text Learning

arXiv:2603.03331v1 Announce Type: new Abstract: Photoplethysmography (PPG) is a widely used non-invasive sensing modality for continuous cardiovascular and physiological monitoring across clinical, laboratory, and wearable settings. While existing PPG datasets support a broad range of downstream tasks, they typically provide...

1 min 1 month, 2 weeks ago
labor
LOW Academic International

Fragile Thoughts: How Large Language Models Handle Chain-of-Thought Perturbations

arXiv:2603.03332v1 Announce Type: new Abstract: Chain-of-Thought (CoT) prompting has emerged as a foundational technique for eliciting reasoning from Large Language Models (LLMs), yet the robustness of this approach to corruptions in intermediate reasoning steps remains poorly understood. This paper presents...

1 min 1 month, 2 weeks ago
ada
LOW Academic International

Training-free Dropout Sampling for Semantic Token Acceptance in Speculative Decoding

arXiv:2603.03333v1 Announce Type: new Abstract: Speculative decoding accelerates large language model inference by proposing tokens with a lightweight draft model and selectively accepting them using a target model. This work introduces DropMatch, a novel approach that matches draft tokens to...
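
The control flow the abstract builds on: a draft model proposes a run of tokens and the target model keeps the longest verified prefix. The sketch below uses greedy exact-match verification for simplicity; DropMatch's semantic acceptance rule is not reproduced here.

```python
def verify_prefix(draft_tokens, target_next_token):
    """Accept draft tokens one by one while the target model agrees.

    target_next_token(prefix) -> token models a single target-model step;
    the accepted prefix is returned, and generation would resume from the
    first rejected position."""
    accepted = []
    for tok in draft_tokens:
        if target_next_token(accepted) == tok:
            accepted.append(tok)
        else:
            break
    return accepted
```

With a toy target that always predicts `len(prefix)`, `verify_prefix([0, 1, 5], target)` accepts `[0, 1]` and rejects the third draft token.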

1 min 1 month, 2 weeks ago
ada
LOW Academic International

Farther the Shift, Sparser the Representation: Analyzing OOD Mechanisms in LLMs

arXiv:2603.03415v1 Announce Type: new Abstract: In this work, we investigate how Large Language Models (LLMs) adapt their internal representations when encountering inputs of increasing difficulty, quantified as the degree of out-of-distribution (OOD) shift. We reveal a consistent and quantifiable phenomenon:...

1 min 1 month, 2 weeks ago
ada
LOW Academic European Union

RADAR: Learning to Route with Asymmetry-aware DistAnce Representations

arXiv:2603.03388v1 Announce Type: new Abstract: Recent neural solvers have achieved strong performance on vehicle routing problems (VRPs), yet they mainly assume symmetric Euclidean distances, restricting applicability to real-world scenarios. A core challenge is encoding the relational features in asymmetric distance...

1 min 1 month, 2 weeks ago
ada
LOW Academic European Union

Towards Improved Sentence Representations using Token Graphs

arXiv:2603.03389v1 Announce Type: new Abstract: Obtaining a single-vector representation from a Large Language Model's (LLM) token-level outputs is a critical step for nearly all sentence-level tasks. However, standard pooling methods like mean or max aggregation treat tokens as an independent...
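
To make the contrast concrete: mean pooling weights every token equally, while a graph view can weight tokens by how connected they are to the rest of the sentence. The graph pooling below (degree-weighted average over a similarity graph) is an illustrative stand-in, not the paper's actual token-graph construction:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def mean_pool(token_vecs):
    """Standard baseline: unweighted average of token vectors."""
    n = len(token_vecs)
    return [sum(v[i] for v in token_vecs) / n for i in range(len(token_vecs[0]))]

def graph_pool(token_vecs):
    """Weight each token by its degree in a similarity graph over tokens."""
    degrees = [sum(dot(u, v) for v in token_vecs) for u in token_vecs]
    total = sum(degrees)
    return [
        sum(w * v[i] for w, v in zip(degrees, token_vecs)) / total
        for i in range(len(token_vecs[0]))
    ]
```

Unlike mean pooling, the graph variant lets a token's contribution depend on the other tokens, which is the dependence the abstract says standard pooling ignores.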

1 min 1 month, 2 weeks ago
ada
LOW Academic International

[Re] FairDICE: A Gap Between Theory And Practice

arXiv:2603.03454v1 Announce Type: new Abstract: Offline Reinforcement Learning (RL) is an emerging field of RL in which policies are learned solely from demonstrations. Within offline RL, some environments involve balancing multiple objectives, but existing multi-objective offline RL algorithms do not...

1 min 1 month, 2 weeks ago
ada
LOW Academic European Union

When Small Variations Become Big Failures: Reliability Challenges in Compute-in-Memory Neural Accelerators

arXiv:2603.03491v1 Announce Type: new Abstract: Compute-in-memory (CiM) architectures promise significant improvements in energy efficiency and throughput for deep neural network acceleration by alleviating the von Neumann bottleneck. However, their reliance on emerging non-volatile memory devices introduces device-level non-idealities, such as write...

1 min 1 month, 2 weeks ago
ada
LOW Academic International

Test-Time Meta-Adaptation with Self-Synthesis

arXiv:2603.03524v1 Announce Type: new Abstract: As strong general reasoners, large language models (LLMs) encounter diverse domains and tasks, where the ability to adapt and self-improve at test time is valuable. We introduce MASS, a meta-learning framework that enables LLMs to...

1 min 1 month, 2 weeks ago
ada
LOW Academic European Union

mlx-snn: Spiking Neural Networks on Apple Silicon via MLX

arXiv:2603.03529v1 Announce Type: new Abstract: We introduce mlx-snn, the first spiking neural network (SNN) library built natively on Apple's MLX framework. As SNN research grows rapidly, all major libraries (snnTorch, Norse, SpikingJelly, Lava) target PyTorch or custom backends,...
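
As background for what such a library computes, here is a minimal leaky integrate-and-fire (LIF) neuron in plain Python; the abstract does not show mlx-snn's API, so nothing below uses it:

```python
def lif_run(inputs, decay=0.9, threshold=1.0):
    """Simulate one LIF neuron over an input sequence.

    The membrane potential leaks by `decay` each step and integrates the
    input; crossing `threshold` emits a spike (1) and resets the potential."""
    v, spikes = 0.0, []
    for x in inputs:
        v = decay * v + x
        if v >= threshold:
            spikes.append(1)
            v = 0.0
        else:
            spikes.append(0)
    return spikes
```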

1 min 1 month, 2 weeks ago
ada
LOW Academic United States

Role-Aware Conditional Inference for Spatiotemporal Ecosystem Carbon Flux Prediction

arXiv:2603.03531v1 Announce Type: new Abstract: Accurate prediction of terrestrial ecosystem carbon fluxes (e.g., CO₂, GPP, and CH₄) is essential for understanding the global carbon cycle and managing its impacts. However, prediction remains challenging due to strong spatiotemporal heterogeneity: ecosystem flux...

1 min 1 month, 2 weeks ago
ada
LOW Academic International

Trade-offs in Ensembling, Merging and Routing Among Parameter-Efficient Experts

arXiv:2603.03535v1 Announce Type: new Abstract: While large language models (LLMs) fine-tuned with lightweight adapters achieve strong performance across diverse tasks, their performance on individual tasks depends on the fine-tuning strategy. Fusing independently trained models with different strengths has shown promise...
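
The three fusion strategies in the title can be sketched on toy linear experts (uniform weights assumed; the paper's actual strategies are not reproduced here):

```python
def predict(w, x):
    """A toy linear expert: dot product of its weights with the input."""
    return sum(a * b for a, b in zip(w, x))

def ensemble(weights_list, x):
    """Ensembling: average the experts' outputs."""
    return sum(predict(w, x) for w in weights_list) / len(weights_list)

def merge(weights_list):
    """Merging: average the experts' parameters into one model."""
    n = len(weights_list)
    return [sum(ws) / n for ws in zip(*weights_list)]

def route(weights_list, x, gate):
    """Routing: a gate picks exactly one expert per input."""
    return predict(weights_list[gate(x)], x)
```

For linear experts, merging and ensembling coincide (`predict(merge(ws), x) == ensemble(ws, x)`); the trade-offs the paper studies arise precisely because real adapters are nonlinear and this equivalence breaks.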

1 min 1 month, 2 weeks ago
ada
LOW Academic International

Transport Clustering: Solving Low-Rank Optimal Transport via Clustering

arXiv:2603.03578v1 Announce Type: new Abstract: Optimal transport (OT) finds a least cost transport plan between two probability distributions using a cost matrix defined on pairs of points. Unlike standard OT, which infers unstructured pointwise mappings, low-rank optimal transport explicitly constrains...
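
The clustering intuition behind low-rank OT is to route mass through a small set of anchors instead of an unstructured n-by-m plan. The greedy sketch below (nearest-anchor assignment by squared Euclidean cost) is a crude stand-in for the paper's solver, shown only to illustrate the factorization plan = (sources → anchors) × (anchors → targets):

```python
def nearest(point, candidates):
    """Index of the candidate minimizing squared Euclidean distance."""
    cost = lambda c: sum((p - q) ** 2 for p, q in zip(point, c))
    return min(range(len(candidates)), key=lambda i: cost(candidates[i]))

def low_rank_route(sources, targets, anchors):
    """Map each source to a target index via its nearest anchor.

    Every source passes through one of len(anchors) intermediate points,
    so the induced transport plan has rank at most len(anchors)."""
    anchor_to_target = [nearest(a, targets) for a in anchors]
    return [anchor_to_target[nearest(s, anchors)] for s in sources]
```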

1 min 1 month, 2 weeks ago
ada
LOW Academic United States

Hybrid Belief Reinforcement Learning for Efficient Coordinated Spatial Exploration

arXiv:2603.03595v1 Announce Type: new Abstract: Coordinating multiple autonomous agents to explore and serve spatially heterogeneous demand requires jointly learning unknown spatial patterns and planning trajectories that maximize task performance. Pure model-based approaches provide structured uncertainty estimates but lack adaptive policy...

1 min 1 month, 2 weeks ago
ada
Page 40 of 52

Impact Distribution

Critical: 0
High: 1
Medium: 4
Low: 1553