
AI & Technology Law


LOW · Academic · International

ValueGround: Evaluating Culture-Conditioned Visual Value Grounding in MLLMs

arXiv:2604.06484v1 Announce Type: new Abstract: Cultural values are expressed not only through language but also through visual scenes and everyday social practices. Yet existing evaluations of cultural values in language models are almost entirely text-only, making it unclear whether models...

1 min · 1 week, 2 days ago
ai llm
LOW · Academic · International

SensorPersona: An LLM-Empowered System for Continual Persona Extraction from Longitudinal Mobile Sensor Streams

arXiv:2604.06204v1 Announce Type: new Abstract: Personalization is essential for Large Language Model (LLM)-based agents to adapt to users' preferences and improve response quality and task performance. However, most existing approaches infer personas from chat histories, which capture only self-disclosed information...

1 min · 1 week, 2 days ago
ai llm
LOW · News · International

Atlassian launches visual AI tools and third-party agents in Confluence

Confluence users can now create visual assets directly within the software, alongside new third-party agents that work with Lovable, Replit, and Gamma.

1 min · 1 week, 2 days ago
ai artificial intelligence
LOW · Academic · International

TelcoAgent-Bench: A Multilingual Benchmark for Telecom AI Agents

arXiv:2604.06209v1 Announce Type: new Abstract: The integration of large language model (LLM) agents into telecom networks introduces new challenges, related to intent recognition, tool execution, and resolution generation, while taking into consideration different operational constraints. In this paper, we introduce...

1 min · 1 week, 2 days ago
ai llm
LOW · Academic · International

TalkLoRA: Communication-Aware Mixture of Low-Rank Adaptation for Large Language Models

arXiv:2604.06291v1 Announce Type: new Abstract: Low-Rank Adaptation (LoRA) enables parameter-efficient fine-tuning of Large Language Models (LLMs), and recent Mixture-of-Experts (MoE) extensions further enhance flexibility by dynamically combining multiple LoRA experts. However, existing MoE-augmented LoRA methods assume that experts operate independently,...

1 min · 1 week, 2 days ago
ai llm
LOW · Academic · International

FMI@SU ToxHabits: Evaluating LLMs Performance on Toxic Habit Extraction in Spanish Clinical Texts

arXiv:2604.06403v1 Announce Type: new Abstract: The paper presents an approach for the recognition of toxic habits named entities in Spanish clinical texts. The approach was developed for the ToxHabits Shared Task. Our team participated in subtask 1, which aims to...

1 min · 1 week, 2 days ago
ai llm
LOW · Academic · International

Discrete Flow Matching Policy Optimization

arXiv:2604.06491v1 Announce Type: new Abstract: We introduce Discrete flow Matching policy Optimization (DoMinO), a unified framework for Reinforcement Learning (RL) fine-tuning Discrete Flow Matching (DFM) models under a broad class of policy gradient methods. Our key idea is to view...

1 min · 1 week, 2 days ago
ai bias
LOW · Academic · International

SHAPE: Stage-aware Hierarchical Advantage via Potential Estimation for LLM Reasoning

arXiv:2604.06636v1 Announce Type: new Abstract: Process supervision has emerged as a promising approach for enhancing LLM reasoning, yet existing methods fail to distinguish meaningful progress from mere verbosity, leading to limited reasoning capabilities and unresolved token inefficiency. To address this,...

1 min · 1 week, 2 days ago
ai llm
LOW · Academic · International

Improving Robustness In Sparse Autoencoders via Masked Regularization

arXiv:2604.06495v1 Announce Type: new Abstract: Sparse autoencoders (SAEs) are widely used in mechanistic interpretability to project LLM activations onto sparse latent spaces. However, sparsity alone is an imperfect proxy for interpretability, and current training objectives often result in brittle latent...

1 min · 1 week, 2 days ago
ai llm
LOW · Academic · International

Scientific Knowledge-driven Decoding Constraints Improving the Reliability of LLMs

arXiv:2604.06603v1 Announce Type: new Abstract: Large language models (LLMs) have shown strong knowledge reserves and task-solving capabilities, but still face the challenge of severe hallucination, hindering their practical application. Though scientific theories and rules can efficiently direct the behaviors of...

1 min · 1 week, 2 days ago
ai llm
LOW · Academic · International

Does a Global Perspective Help Prune Sparse MoEs Elegantly?

arXiv:2604.06542v1 Announce Type: new Abstract: Empirical scaling laws for language models have encouraged the development of ever-larger LLMs, despite their growing computational and memory costs. Sparse Mixture-of-Experts (MoEs) offer a promising alternative by activating only a subset of experts per...

1 min · 1 week, 2 days ago
ai llm
LOW · News · International

How our digital devices are putting our right to privacy at risk

Law professor Andrew Guthrie Ferguson chats with Ars about his new book, Your Data Will Be Used Against You.

1 min · 1 week, 2 days ago
ai surveillance
LOW · Academic · International

Illocutionary Explanation Planning for Source-Faithful Explanations in Retrieval-Augmented Language Models

arXiv:2604.06211v1 Announce Type: new Abstract: Natural language explanations produced by large language models (LLMs) are often persuasive, but not necessarily scrutable: users cannot easily verify whether the claims in an explanation are supported by evidence. In XAI, this motivates a...

1 min · 1 week, 2 days ago
ai llm
LOW · Academic · International

State-of-the-Art Arabic Language Modeling with Sparse MoE Fine-Tuning and Chain-of-Thought Distillation

arXiv:2604.06421v1 Announce Type: new Abstract: This paper introduces Arabic-DeepSeek-R1, an application-driven open-source Arabic LLM that leverages a sparse MoE backbone to address the digital equity gap for under-represented languages, and establishes a new SOTA across the entire Open Arabic LLM...

1 min · 1 week, 2 days ago
ai llm
LOW · Academic · International

LLM-Augmented Knowledge Base Construction For Root Cause Analysis

arXiv:2604.06171v1 Announce Type: new Abstract: Communications networks now form the backbone of our digital world, with fast and reliable connectivity. However, even with appropriate redundancy and failover mechanisms, it is difficult to guarantee "five 9s" (99.999 %) reliability, requiring rapid...

1 min · 1 week, 2 days ago
ai llm
LOW · News · International

To beat Altman in court, Musk offers to give all damages to OpenAI nonprofit

Musk won’t seek a “single dollar” in OpenAI suit after asking to pocket up to $134 billion.

1 min · 1 week, 2 days ago
ai artificial intelligence
LOW · Academic · International

SubFLOT: Submodel Extraction for Efficient and Personalized Federated Learning via Optimal Transport

arXiv:2604.06631v1 Announce Type: new Abstract: Federated Learning (FL) enables collaborative model training while preserving data privacy, but its practical deployment is hampered by system and statistical heterogeneity. While federated network pruning offers a path to mitigate these issues, existing methods...

1 min · 1 week, 2 days ago
ai data privacy
LOW · Academic · International

The Stepwise Informativeness Assumption: Why are Entropy Dynamics and Reasoning Correlated in LLMs?

arXiv:2604.06192v1 Announce Type: new Abstract: Recent work uses entropy-based signals at multiple representation levels to study reasoning in large language models, but the field remains largely empirical. A central unresolved puzzle is why internal entropy dynamics, defined under the predictive...

1 min · 1 week, 2 days ago
ai llm
LOW · Academic · International

Distributional Open-Ended Evaluation of LLM Cultural Value Alignment Based on Value Codebook

arXiv:2604.06210v1 Announce Type: new Abstract: As LLMs are globally deployed, aligning their cultural value orientations is critical for safety and user engagement. However, existing benchmarks face the Construct-Composition-Context ($C^3$) challenge: relying on discriminative, multiple-choice formats that probe value knowledge rather...

1 min · 1 week, 2 days ago
ai llm
LOW · Academic · International

FLeX: Fourier-based Low-rank EXpansion for multilingual transfer

arXiv:2604.06253v1 Announce Type: new Abstract: Cross-lingual code generation is critical in enterprise environments where multiple programming languages coexist. However, fine-tuning large language models (LLMs) individually for each language is computationally prohibitive. This paper investigates whether parameter-efficient fine-tuning methods and optimizer...

1 min · 1 week, 2 days ago
ai llm
LOW · Academic · International

Extracting Breast Cancer Phenotypes from Clinical Notes: Comparing LLMs with Classical Ontology Methods

arXiv:2604.06208v1 Announce Type: new Abstract: A significant amount of data held in Oncology Electronic Medical Records (EMRs) is contained in unstructured provider notes -- including but not limited to the chemotherapy (or cancer treatment) outcome, different biomarkers, the tumor's location,...

1 min · 1 week, 2 days ago
ai llm
LOW · Academic · International

Cross-Lingual Transfer and Parameter-Efficient Adaptation in the Turkic Language Family: A Theoretical Framework for Low-Resource Language Models

arXiv:2604.06202v1 Announce Type: new Abstract: Large language models (LLMs) have transformed natural language processing, yet their capabilities remain uneven across languages. Most multilingual models are trained primarily on high-resource languages, leaving many languages with large speaker populations underrepresented in both...

1 min · 1 week, 2 days ago
ai llm
LOW · Academic · International

DiffuMask: Diffusion Language Model for Token-level Prompt Pruning

arXiv:2604.06627v1 Announce Type: new Abstract: In-Context Learning and Chain-of-Thought prompting improve reasoning in large language models (LLMs). These typically come at the cost of longer, more expensive prompts that may contain redundant information. Prompt compression based on pruning offers a...

1 min · 1 week, 2 days ago
ai llm
LOW · Academic · International

AE-ViT: Stable Long-Horizon Parametric Partial Differential Equations Modeling

arXiv:2604.06475v1 Announce Type: new Abstract: Deep Learning Reduced Order Models (ROMs) are becoming increasingly popular as surrogate models for parametric partial differential equations (PDEs) due to their ability to handle high-dimensional data, approximate highly nonlinear mappings, and utilize GPUs. Existing...

1 min · 1 week, 2 days ago
ai deep learning
LOW · Academic · International

Distributed Interpretability and Control for Large Language Models

arXiv:2604.06483v1 Announce Type: new Abstract: Large language models that require multiple GPU cards to host are usually the most capable models. It is necessary to understand and steer these models, but the current technologies do not support the interpretability and...

1 min · 1 week, 2 days ago
ai llm
LOW · Academic · International

RAGEN-2: Reasoning Collapse in Agentic RL

arXiv:2604.06268v1 Announce Type: new Abstract: RL training of multi-turn LLM agents is inherently unstable, and reasoning quality directly determines task performance. Entropy is widely used to track reasoning stability. However, entropy only measures diversity within the same input, and cannot...

1 min · 1 week, 2 days ago
ai llm
LOW · News · International

OpenAI releases a new safety blueprint to address the rise in child sexual exploitation

OpenAI's new Child Safety Blueprint aims to tackle the alarming rise in child sexual exploitation linked to advancements in AI.

1 min · 1 week, 2 days ago
ai chatgpt
LOW · Academic · International

Limits of Difficulty Scaling: Hard Samples Yield Diminishing Returns in GRPO-Tuned SLMs

arXiv:2604.06298v1 Announce Type: new Abstract: Recent alignment work on Large Language Models (LLMs) suggests preference optimization can improve reasoning by shifting probability mass toward better solutions. We test this claim in a resource-constrained setting by applying GRPO with LoRA to...

1 min · 1 week, 2 days ago
ai llm
LOW · Academic · International

The Depth Ceiling: On the Limits of Large Language Models in Discovering Latent Planning

arXiv:2604.06427v1 Announce Type: new Abstract: The viability of chain-of-thought (CoT) monitoring hinges on models being unable to reason effectively in their latent representations. Yet little is known about the limits of such latent reasoning in LLMs. We test these limits...

1 min · 1 week, 2 days ago
ai llm
LOW · Academic · International

Tool-MCoT: Tool Augmented Multimodal Chain-of-Thought for Content Safety Moderation

arXiv:2604.06205v1 Announce Type: new Abstract: The growth of online platforms and user content requires strong content moderation systems that can handle complex inputs from various media types. While large language models (LLMs) are effective, their high computational cost and latency...

1 min · 1 week, 2 days ago
ai llm
Page 17 of 118

Impact Distribution

Critical 0
High 57
Medium 938
Low 4987