AI & Technology Law

LOW Academic International

HYVE: Hybrid Views for LLM Context Engineering over Machine Data

arXiv:2604.05400v1 Announce Type: new Abstract: Machine data is central to observability and diagnosis in modern computing systems, appearing in logs, metrics, telemetry traces, and configuration snapshots. When provided to large language models (LLMs), this data typically arrives as a mixture...

1 min · 1 week, 3 days ago
ai llm
LOW Academic International

CODESTRUCT: Code Agents over Structured Action Spaces

arXiv:2604.05407v1 Announce Type: new Abstract: LLM-based code agents treat repositories as unstructured text, applying edits through brittle string matching that frequently fails due to formatting drift or ambiguous patterns. We propose reframing the codebase as a structured action space where...

1 min · 1 week, 3 days ago
ai llm
LOW Academic International

Cross-fitted Proximal Learning for Model-Based Reinforcement Learning

arXiv:2604.05185v1 Announce Type: new Abstract: Model-based reinforcement learning is attractive for sequential decision-making because it explicitly estimates reward and transition models and then supports planning through simulated rollouts. In offline settings with hidden confounding, however, models learned directly from observational...

1 min · 1 week, 3 days ago
ai bias
LOW Academic International

Do Domain-specific Experts exist in MoE-based LLMs?

arXiv:2604.05267v1 Announce Type: new Abstract: In the era of Large Language Models (LLMs), the Mixture of Experts (MoE) architecture has emerged as an effective approach for training extremely large models with improved computational efficiency. This success builds upon extensive prior...

1 min · 1 week, 3 days ago
ai llm
LOW Academic International

Breakthrough the Suboptimal Stable Point in Value-Factorization-Based Multi-Agent Reinforcement Learning

arXiv:2604.05297v1 Announce Type: new Abstract: Value factorization, a popular paradigm in MARL, faces significant theoretical and algorithmic bottlenecks: its tendency to converge to suboptimal solutions remains poorly understood and unsolved. Theoretically, existing analyses fail to explain this due to their...

1 min · 1 week, 3 days ago
ai algorithm
LOW Academic International

Context-Agent: Dynamic Discourse Trees for Non-Linear Dialogue

arXiv:2604.05552v1 Announce Type: new Abstract: Large Language Models demonstrate outstanding performance in many language tasks but still face fundamental challenges in managing the non-linear flow of human conversation. The prevalent approach of treating dialogue history as a flat, linear sequence...

1 min · 1 week, 3 days ago
ai llm
LOW Academic International

TFRBench: A Reasoning Benchmark for Evaluating Forecasting Systems

arXiv:2604.05364v1 Announce Type: new Abstract: We introduce TFRBench, the first benchmark designed to evaluate the reasoning capabilities of forecasting systems. Traditionally, time-series forecasting has been evaluated solely on numerical accuracy, treating foundation models as "black boxes." Unlike existing benchmarks, TFRBench...

1 min · 1 week, 3 days ago
ai llm
LOW Academic International

Learning to Edit Knowledge via Instruction-based Chain-of-Thought Prompting

arXiv:2604.05540v1 Announce Type: new Abstract: Large language models (LLMs) can effectively handle outdated information through knowledge editing. However, current approaches face two key limitations: (I) Poor generalization: Most approaches rigidly inject new knowledge without ensuring that the model can use...

1 min · 1 week, 3 days ago
ai llm
LOW Academic International

See the Forest for the Trees: Loosely Speculative Decoding via Visual-Semantic Guidance for Efficient Inference of Video LLMs

arXiv:2604.05650v1 Announce Type: new Abstract: Video Large Language Models (Video-LLMs) excel in video understanding but suffer from high inference latency during autoregressive generation. Speculative Decoding (SD) mitigates this by applying a draft-and-verify paradigm, yet existing methods are constrained by rigid...

1 min · 1 week, 3 days ago
ai llm
LOW Academic International

Efficient Inference for Large Vision-Language Models: Bottlenecks, Techniques, and Prospects

arXiv:2604.05546v1 Announce Type: new Abstract: Large Vision-Language Models (LVLMs) enable sophisticated reasoning over images and videos, yet their inference is hindered by a systemic efficiency barrier known as visual token dominance. This overhead is driven by a multi-regime interplay between...

1 min · 1 week, 3 days ago
ai algorithm
LOW Academic International

Not All Turns Are Equally Hard: Adaptive Thinking Budgets For Efficient Multi-Turn Reasoning

arXiv:2604.05164v1 Announce Type: new Abstract: As LLM reasoning performance plateaus, improving inference-time compute efficiency is crucial to mitigate overthinking and long thinking traces even for simple queries. Prior approaches including length regularization, adaptive routing, and difficulty-based budget allocation primarily focus...

1 min · 1 week, 3 days ago
ai llm
LOW Academic International

Bypassing the CSI Bottleneck: MARL-Driven Spatial Control for Reflector Arrays

arXiv:2604.05162v1 Announce Type: new Abstract: Reconfigurable Intelligent Surfaces (RIS) are pivotal for next-generation smart radio environments, yet their practical deployment is severely bottlenecked by the intractable computational overhead of Channel State Information (CSI) estimation. To bypass this fundamental physical-layer barrier,...

1 min · 1 week, 3 days ago
ai autonomous
LOW Academic International

ActivityEditor: Learning to Synthesize Physically Valid Human Mobility

arXiv:2604.05529v1 Announce Type: new Abstract: Human mobility modeling is indispensable for diverse urban applications. However, existing data-driven methods often suffer from data scarcity, limiting their applicability in regions where historical trajectories are unavailable or restricted. To bridge this gap, we...

1 min · 1 week, 3 days ago
ai llm
LOW Academic International

Improving Sparse Memory Finetuning

arXiv:2604.05248v1 Announce Type: new Abstract: Large Language Models (LLMs) are typically static after training, yet real-world applications require continual adaptation to new knowledge without degrading existing capabilities. Standard approaches to updating models, like full finetuning or parameter-efficient methods (e.g., LoRA),...

1 min · 1 week, 3 days ago
ai llm
LOW Academic International

RAG or Learning? Understanding the Limits of LLM Adaptation under Continuous Knowledge Drift in the Real World

arXiv:2604.05096v1 Announce Type: new Abstract: Large language models (LLMs) acquire most of their knowledge during pretraining, which ties them to a fixed snapshot of the world and makes adaptation to continuously evolving knowledge challenging. As facts, entities, and events change...

1 min · 1 week, 3 days ago
ai llm
LOW Academic International

Affording Process Auditability with QualAnalyzer: An Atomistic LLM Analysis Tool for Qualitative Research

arXiv:2604.03820v1 Announce Type: new Abstract: Large language models are increasingly used for qualitative data analysis, but many workflows obscure how analytic conclusions are produced. We present QualAnalyzer, an open-source Chrome extension for Google Workspace that supports atomistic LLM analysis by...

1 min · 1 week, 4 days ago
ai llm
LOW Academic International

When Adaptive Rewards Hurt: Causal Probing and the Switching-Stability Dilemma in LLM-Guided LEO Satellite Scheduling

arXiv:2604.03562v1 Announce Type: new Abstract: Adaptive reward design for deep reinforcement learning (DRL) in multi-beam LEO satellite scheduling is motivated by the intuition that regime-aware reward weights should outperform static ones. We systematically test this intuition and uncover a switching-stability...

1 min · 1 week, 4 days ago
ai llm
LOW Academic International

Selective Forgetting for Large Reasoning Models

arXiv:2604.03571v1 Announce Type: new Abstract: Large Reasoning Models (LRMs) generate structured chains of thought (CoTs) before producing final answers, making them especially vulnerable to knowledge leakage through intermediate reasoning steps. Yet, the memorization of sensitive information in the training data...

1 min · 1 week, 4 days ago
ai llm
LOW Academic International

CoALFake: Collaborative Active Learning with Human-LLM Co-Annotation for Cross-Domain Fake News Detection

arXiv:2604.04174v1 Announce Type: new Abstract: The proliferation of fake news across diverse domains highlights critical limitations in current detection systems, which often exhibit narrow domain specificity and poor generalization. Existing cross-domain approaches face two key challenges: (1) reliance on labelled...

1 min · 1 week, 4 days ago
ai llm
LOW Academic International

Resource-Conscious Modeling for Next-Day Discharge Prediction Using Clinical Notes

arXiv:2604.03498v1 Announce Type: new Abstract: Timely discharge prediction is essential for optimizing bed turnover and resource allocation in elective spine surgery units. This study evaluates the feasibility of lightweight, fine-tuned large language models (LLMs) and traditional text-based models for predicting...

1 min · 1 week, 4 days ago
ai llm
LOW Academic International

I-CALM: Incentivizing Confidence-Aware Abstention for LLM Hallucination Mitigation

arXiv:2604.03904v1 Announce Type: new Abstract: Large language models (LLMs) frequently produce confident but incorrect answers, partly because common binary scoring conventions reward answering over honestly expressing uncertainty. We study whether prompt-only interventions -- explicitly announcing reward schemes for answer-versus-abstain decisions...
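
The answer-versus-abstain incentive described above reduces to simple expected-value arithmetic. The threshold form below is a standard derivation under an assumed reward scheme (+1 correct, -c wrong, 0 abstain), not necessarily the paper's exact scheme: answering beats abstaining only when confidence p satisfies p - c(1 - p) > 0, i.e. p > c/(1 + c).

```python
def should_answer(p: float, wrong_penalty: float) -> bool:
    """Decide answer vs. abstain under an announced reward scheme.

    Expected reward of answering = p * 1 - (1 - p) * wrong_penalty;
    abstaining always scores 0, so answer only when the expectation
    is positive. Larger penalties raise the confidence threshold.
    """
    return p * 1.0 - (1.0 - p) * wrong_penalty > 0.0

# With a symmetric penalty (c = 1) the threshold is p > 0.5;
# with c = 3 it rises to p > 0.75, so a 60%-confident model abstains.
print(should_answer(0.6, 1))  # True
print(should_answer(0.6, 3))  # False
```

Under binary scoring (c = 0) the expectation is p > 0 for any nonzero confidence, which is why such schemes reward guessing over honest abstention.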

1 min · 1 week, 4 days ago
ai llm
LOW Academic International

The limits of bio-molecular modeling with large language models: a cross-scale evaluation

arXiv:2604.03361v1 Announce Type: new Abstract: The modeling of bio-molecular systems across molecular scales remains a central challenge in scientific research. Large language models (LLMs) are increasingly applied to bio-molecular discovery, yet systematic evaluation across multi-scale biological problems and rigorous assessment...

1 min · 1 week, 4 days ago
ai llm
LOW Academic International

Comparative reversal learning reveals rigid adaptation in LLMs under non-stationary uncertainty

arXiv:2604.04182v1 Announce Type: new Abstract: Non-stationary environments require agents to revise previously learned action values when contingencies change. We treat large language models (LLMs) as sequential decision policies in a two-option probabilistic reversal-learning task with three latent states and switch...

1 min · 1 week, 4 days ago
ai llm
LOW Academic International

Unmasking Hallucinations: A Causal Graph-Attention Perspective on Factual Reliability in Large Language Models

arXiv:2604.04020v1 Announce Type: new Abstract: This paper focuses on hallucinations produced by AI language models (LLMs). LLMs have shown extraordinary language understanding and generation capabilities, yet they suffer from a major disadvantage: hallucinations, which yield outputs that are factually incorrect...

1 min · 1 week, 4 days ago
ai llm
LOW Academic International

CAWN: Continuous Acoustic Wave Networks for Autoregressive Language Modeling

arXiv:2604.04250v1 Announce Type: new Abstract: Modern Large Language Models (LLMs) rely on Transformer self-attention, which scales quadratically with sequence length. Recent linear-time alternatives, like State Space Models (SSMs), often suffer from signal degradation over extended contexts. We introduce the Continuous...

1 min · 1 week, 4 days ago
ai llm
LOW Academic International

Researchers waste 80% of LLM annotation costs by classifying one text at a time

arXiv:2604.03684v1 Announce Type: new Abstract: Large language models (LLMs) are increasingly being used for text classification across the social sciences, yet researchers overwhelmingly classify one text per variable per prompt. Coding 100,000 texts on four variables requires 400,000 API calls....
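
The cost claim above is straightforward batching arithmetic: one call per text per variable means 100,000 × 4 = 400,000 calls, while packing many texts and all variables into each prompt divides the call count by the batch size. A minimal sketch, assuming an illustrative prompt format and batch size (not the paper's):

```python
def build_batch_prompts(texts, variables, batch_size=25):
    """Pack batches of texts plus every classification variable into
    single prompts, instead of one API call per text per variable."""
    prompts = []
    for i in range(0, len(texts), batch_size):
        batch = texts[i:i + batch_size]
        numbered = "\n".join(f"{j + 1}. {t}" for j, t in enumerate(batch))
        prompts.append(
            f"Classify each numbered text on: {', '.join(variables)}.\n"
            f"Return one JSON object per text.\n\n{numbered}"
        )
    return prompts

# 100,000 texts, 4 variables: one-at-a-time needs 400,000 calls;
# batching 25 texts with all 4 variables per call needs only 4,000.
prompts = build_batch_prompts([f"text {k}" for k in range(100_000)],
                              ["topic", "tone", "stance", "frame"])
print(len(prompts))  # 4000
```

The per-variable savings compound with the per-batch savings, which is where the headline waste figure comes from.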

1 min · 1 week, 4 days ago
ai llm
LOW Academic International

Cultural Authenticity: Comparing LLM Cultural Representations to Native Human Expectations

arXiv:2604.03493v1 Announce Type: new Abstract: Cultural representation in Large Language Model (LLM) outputs has primarily been evaluated through the proxies of cultural diversity and factual accuracy. However, a crucial gap remains in assessing cultural alignment: the degree to which generated...

1 min · 1 week, 4 days ago
ai llm
LOW Academic International

POEMetric: The Last Stanza of Humanity

arXiv:2604.03695v1 Announce Type: new Abstract: Large Language Models (LLMs) can compose poetry, but how far are they from human poets? In this paper, we introduce POEMetric, the first comprehensive framework for poetry evaluation, examining 1) basic instruction-following abilities in generating...

1 min · 1 week, 4 days ago
ai llm
LOW News International

Startup Battlefield 200 applications open: a chance for VC access, TechCrunch coverage, and $100K

Nominate your startup, or one you know that deserves the spotlight, and complete the process by applying. The 200 selected startups get a chance at VC access, TechCrunch coverage, and $100K through Startup Battlefield 200. Applications close on May 27.

1 min · 1 week, 4 days ago
ai robotics
LOW Academic International

Automated Conjecture Resolution with Formal Verification

arXiv:2604.03789v1 Announce Type: new Abstract: Recent advances in large language models have significantly improved their ability to perform mathematical reasoning, extending from elementary problem solving to increasingly capable performance on research-level problems. However, reliably solving and verifying such problems remains...

1 min · 1 week, 4 days ago
ai autonomous
Page 19 of 118

Impact Distribution

Critical 0
High 57
Medium 938
Low 4987