All Practice Areas

AI & Technology Law

LOW News International

OpenAI adds open source tools to help developers build for teen safety

Rather than working from scratch to figure out how to make AI safer for teens, developers can use these policies to fortify what they build.

1 min 4 weeks ago
ai chatgpt
LOW News International

Agile Robots becomes the latest robotics company to partner with Google DeepMind

Agile Robots will incorporate Google DeepMind's robotics foundation models into its bots while collecting data for the AI research lab.

1 min 4 weeks ago
ai robotics
LOW Academic European Union

Domain-Specialized Tree of Thought through Plug-and-Play Predictors

arXiv:2603.20267v1 Announce Type: new Abstract: While Large Language Models (LLMs) have advanced complex reasoning, prominent methods like the Tree of Thoughts (ToT) framework face a critical trade-off between exploration depth and computational efficiency. Existing ToT implementations often rely on heavyweight...
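The trade-off the abstract describes comes from how ToT-style methods score intermediate thoughts. As a rough illustration only (the expansion and scoring functions below are toy stand-ins, not the paper's plug-and-play predictors), a beam-limited tree search with a swappable value function looks like this:

```python
# Minimal sketch of a Tree-of-Thoughts-style beam search with a pluggable
# ("plug-and-play") value predictor. The expand/value functions are toy
# stand-ins for illustration, not the paper's components.
def tree_of_thought(root, expand, value, depth=3, beam=2):
    # At each level, expand every kept thought and retain the `beam` best.
    frontier = [root]
    for _ in range(depth):
        candidates = [child for t in frontier for child in expand(t)]
        frontier = sorted(candidates, key=value, reverse=True)[:beam]
    return max(frontier, key=value)

# Toy task: grow the largest 3-digit string by appending digits.
expand = lambda s: [s + d for d in "123"]
value = lambda s: int(s or "0")
print(tree_of_thought("", expand, value))  # "333"
```

Swapping `value` for a cheaper or domain-specialized predictor is exactly where the exploration-depth vs. compute trade-off is controlled.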

1 min 4 weeks, 1 day ago
ai llm
LOW Academic European Union

Grounded Chess Reasoning in Language Models via Master Distillation

arXiv:2603.20510v1 Announce Type: new Abstract: Language models often lack grounded reasoning capabilities in specialized domains where training data is scarce but bespoke systems excel. We introduce a general framework for distilling expert system reasoning into natural language chain-of-thought explanations, enabling...

1 min 4 weeks, 1 day ago
ai llm
LOW Academic European Union

Graph of States: Solving Abductive Tasks with Large Language Models

arXiv:2603.21250v1 Announce Type: new Abstract: Logical reasoning encompasses deduction, induction, and abduction. However, while Large Language Models (LLMs) have effectively mastered the former two, abductive reasoning remains significantly underexplored. Existing frameworks, predominantly designed for static deductive tasks, fail to generalize...

1 min 4 weeks, 1 day ago
ai llm
LOW Academic United States

Enhancing Safety of Large Language Models via Embedding Space Separation

arXiv:2603.20206v1 Announce Type: new Abstract: Large language models (LLMs) have achieved impressive capabilities, yet ensuring their safety against harmful prompts remains a critical challenge. Recent work has revealed that the latent representations (embeddings) of harmful and safe queries in LLMs...

1 min 4 weeks, 1 day ago
ai llm
LOW Academic International

The Intelligent Disobedience Game: Formulating Disobedience in Stackelberg Games and Markov Decision Processes

arXiv:2603.20994v1 Announce Type: new Abstract: In shared autonomy, a critical tension arises when an automated assistant must choose between obeying a human's instruction and deliberately overriding it to prevent harm. This safety-critical behavior is known as intelligent disobedience. To formalize...
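The core decision the abstract formalizes — obey or override — can be caricatured as a one-step expected-harm rule. This is a hedged toy sketch, not the paper's Stackelberg/MDP formulation; the harm probabilities and costs are invented for illustration:

```python
# Toy sketch of "intelligent disobedience": the assistant obeys an instruction
# unless the expected harm of complying exceeds a fixed override threshold.
# Harm model and numbers are illustrative assumptions, not the paper's model.
def choose_action(instruction, harm_prob, harm_cost, override_threshold=1.0):
    # Expected harm of complying vs. a fixed cost of disobeying.
    expected_harm = harm_prob * harm_cost
    return "override" if expected_harm > override_threshold else instruction

print(choose_action("cross_street", harm_prob=0.6, harm_cost=10.0))   # override
print(choose_action("cross_street", harm_prob=0.05, harm_cost=10.0))  # cross_street
```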

1 min 4 weeks, 1 day ago
ai algorithm
LOW Academic International

Position: Multi-Agent Algorithmic Care Systems Demand Contestability for Trustworthy AI

arXiv:2603.20595v1 Announce Type: new Abstract: Multi-agent systems (MAS) are increasingly used in healthcare to support complex decision-making through collaboration among specialized agents. Because these systems act as collective decision-makers, they raise challenges for trust, accountability, and human oversight. Existing approaches...

1 min 4 weeks, 1 day ago
ai algorithm
LOW Conference United States

NeurIPS Datasets & Benchmarks Track: From Art to Science in AI Evaluations

5 min 4 weeks, 1 day ago
ai algorithm
LOW Academic International

Children's Intelligence Tests Pose Challenges for MLLMs? KidGym: A 2D Grid-Based Reasoning Benchmark for MLLMs

arXiv:2603.20209v1 Announce Type: new Abstract: Multimodal Large Language Models (MLLMs) combine the linguistic strengths of LLMs with the ability to process multimodal data, enabling them to address a broader range of visual tasks. Because MLLMs aim at more general, human-like...

1 min 4 weeks, 1 day ago
ai llm
LOW Academic European Union

ConsRoute: Consistency-Aware Adaptive Query Routing for Cloud-Edge-Device Large Language Models

arXiv:2603.21237v1 Announce Type: new Abstract: Large language models (LLMs) deliver impressive capabilities but incur substantial inference latency and cost, which hinders their deployment in latency-sensitive and resource-constrained scenarios. Cloud-edge-device collaborative inference has emerged as a promising paradigm by dynamically routing...

1 min 4 weeks, 1 day ago
ai llm
LOW Academic United States

Where can AI be used? Insights from a deep ontology of work activities

arXiv:2603.20619v1 Announce Type: new Abstract: Artificial intelligence (AI) is poised to profoundly reshape how work is executed and organized, but we do not yet have deep frameworks for understanding where AI can be used. Here we provide a comprehensive ontology...

1 min 4 weeks, 1 day ago
ai artificial intelligence
LOW Academic European Union

AgenticGEO: A Self-Evolving Agentic System for Generative Engine Optimization

arXiv:2603.20213v1 Announce Type: new Abstract: Generative search engines represent a transition from traditional ranking-based retrieval to Large Language Model (LLM)-based synthesis, transforming optimization goals from ranking prominence towards content inclusion. Generative Engine Optimization (GEO), specifically, aims to maximize visibility and...

News Monitor (1_14_4)

For AI & Technology Law practice area relevance, this article discusses the development of AgenticGEO, a self-evolving agentic framework for Generative Engine Optimization (GEO), which aims to maximize visibility and attribution in black-box summarized outputs by strategically manipulating source content. The research highlights the limitations of existing methods, which rely on static heuristics and are prone to overfitting, and proposes a novel approach that can adapt to diverse content and changing engine behaviors. This development has implications for the regulation of generative search engines and the optimization of content in AI-driven systems.

Key legal developments include:

* The increasing use of Large Language Models (LLMs) in search engines, which transforms optimization goals from ranking prominence to content inclusion.
* The need for more flexible and adaptive optimization strategies to address the unpredictable behaviors of black-box engines.
* The potential for self-evolving agentic frameworks like AgenticGEO to improve content quality and robustness in AI-driven systems.

Research findings highlight the limitations of existing methods, including:

* The reliance on static heuristics and single-prompt optimization, which are prone to overfitting.
* The impractical amount of interaction feedback required from engines to optimize strategies.
* The need for more efficient and effective optimization methods to mitigate interaction costs.

Policy signals include:

* The potential for regulatory frameworks to address the optimization of content in AI-driven systems, particularly in the context of generative search engines.
* The need for more nuanced approaches to regulating AI-driven

Commentary Writer (1_14_6)

**Jurisdictional Comparison and Commentary: AgenticGEO's Impact on AI & Technology Law Practice**

The emergence of AgenticGEO, a self-evolving agentic framework for Generative Engine Optimization (GEO), highlights the need for regulatory frameworks to address the complexities of AI-driven content manipulation. In the US, the Federal Trade Commission (FTC) is likely to scrutinize AgenticGEO's potential to manipulate search engine results, potentially violating Section 5 of the FTC Act, which prohibits unfair or deceptive acts or practices. In contrast, Korea's Personal Information Protection Act (PIPA) may not directly address the implications of AgenticGEO, but its provisions on data protection and algorithmic transparency may be relevant in regulating AI-driven content manipulation.

Internationally, the European Union's General Data Protection Regulation (GDPR) and the European Commission's AI White Paper may provide a framework for regulating AgenticGEO's use of personal data and AI-driven decision-making processes. However, the lack of harmonized regulations across jurisdictions may create challenges in ensuring consistent enforcement and accountability for AI-driven content manipulation. As AgenticGEO's capabilities continue to evolve, regulatory frameworks must adapt to address the complex issues of AI-driven content manipulation, data protection, and algorithmic transparency.

**Implications Analysis:**

1. **Data Protection:** AgenticGEO's reliance on personal data and AI-driven decision-making processes raises concerns about data protection and the potential for biased or manipulated content. Regulatory

AI Liability Expert (1_14_9)

As the AI Liability & Autonomous Systems Expert, I'll provide domain-specific expert analysis of the article's implications for practitioners, noting relevant case law, statutory, and regulatory connections.

**Implications for Practitioners:**

1. **Emerging AI Liability Concerns:** The development of self-evolving agentic systems like AgenticGEO raises concerns about liability for AI-generated content, particularly in cases where the system manipulates source content to maximize visibility and attribution. This may lead to increased scrutiny of AI-generated content and potential liability for its accuracy, completeness, or potential harm.
2. **Regulatory Hurdles:** The use of self-evolving agentic systems may require compliance with existing regulations, such as the General Data Protection Regulation (GDPR) or the California Consumer Privacy Act (CCPA), which govern the use of AI and machine learning in data processing and decision-making.
3. **Intellectual Property Concerns:** The strategic manipulation of source content to maximize visibility and attribution may raise concerns about copyright infringement, trademark infringement, or other intellectual property (IP) issues.

**Relevant Case Law, Statutory, and Regulatory Connections:**

1. **Federal Trade Commission (FTC) Guidance on AI and Machine Learning:** The FTC has issued guidance on the use of AI and machine learning in advertising and marketing, emphasizing the importance of transparency and accountability in AI-driven decision-making (FTC, 2019).
2. **Section 230 of the Communications Decency Act:** This

Statutes: CCPA
1 min 4 weeks, 1 day ago
ai llm
LOW Academic International

CRoCoDiL: Continuous and Robust Conditioned Diffusion for Language

arXiv:2603.20210v1 Announce Type: new Abstract: Masked Diffusion Models (MDMs) provide an efficient non-causal alternative to autoregressive generation but often struggle with token dependencies and semantic incoherence due to their reliance on discrete marginal distributions. We address these limitations by shifting...

1 min 4 weeks, 1 day ago
ai algorithm
LOW Academic International

AgentComm-Bench: Stress-Testing Cooperative Embodied AI Under Latency, Packet Loss, and Bandwidth Collapse

arXiv:2603.20285v1 Announce Type: new Abstract: Cooperative multi-agent methods for embodied AI are almost universally evaluated under idealized communication: zero latency, no packet loss, and unlimited bandwidth. Real-world deployment on robots with wireless links, autonomous vehicles on congested networks, or drone...

News Monitor (1_14_4)

Analysis of the article for AI & Technology Law practice area relevance: The article "AgentComm-Bench: Stress-Testing Cooperative Embodied AI Under Latency, Packet Loss, and Bandwidth Collapse" highlights the importance of evaluating AI systems in real-world scenarios, rather than idealized conditions. The research findings demonstrate that AI systems can be significantly impacted by communication impairments, such as latency, packet loss, and bandwidth collapse, which can result in catastrophic performance drops. This article is relevant to AI & Technology Law practice areas, particularly in the context of liability and accountability, as it underscores the need for robust evaluation protocols and communication strategies to mitigate the risks associated with AI system failures.

Key legal developments, research findings, and policy signals include:

1. **Real-world evaluation of AI systems**: The article emphasizes the importance of evaluating AI systems in real-world scenarios, rather than idealized conditions, which can lead to more accurate assessments of their performance and limitations.
2. **Communication impairments and AI system failures**: The research findings demonstrate that AI systems can be significantly impacted by communication impairments, which can result in catastrophic performance drops, highlighting the need for robust evaluation protocols and communication strategies.
3. **Liability and accountability**: The article's focus on the risks associated with AI system failures underscores the need for legal frameworks that address liability and accountability in the development and deployment of AI systems.

Policy signals and implications for AI & Technology Law practice areas include:

1. **Developing robust evaluation protocols**: The

Commentary Writer (1_14_6)

**Jurisdictional Comparison and Analytical Commentary**

The introduction of AgentComm-Bench, a benchmark suite and evaluation protocol for cooperative embodied AI, has significant implications for the development and deployment of AI systems in various jurisdictions. In the US, the Federal Trade Commission (FTC) has emphasized the importance of testing AI systems under real-world conditions to ensure their safety and reliability. Similarly, in Korea, the Ministry of Science and ICT has implemented regulations to ensure the safe development and deployment of AI systems, including those used in robotics and autonomous vehicles. Internationally, the European Union's General Data Protection Regulation (GDPR) and the International Organization for Standardization (ISO) have established guidelines for AI system testing and evaluation.

**Comparison of Approaches**

In the US, the FTC's approach to AI testing and evaluation focuses on ensuring that AI systems are transparent, explainable, and fair. In contrast, Korea's approach emphasizes the importance of testing AI systems under real-world conditions, including those with communication impairments. Internationally, the GDPR and ISO guidelines emphasize the importance of testing AI systems for data protection and security.

**Implications Analysis**

The introduction of AgentComm-Bench has significant implications for the development and deployment of AI systems in various jurisdictions. The benchmark suite and evaluation protocol provide a systematic way to stress-test cooperative embodied AI under real-world communication conditions, which is essential for ensuring the safety and reliability of AI systems. The results of the experiments reveal that communication-dependent tasks degrade catastrophically

AI Liability Expert (1_14_9)

As an AI Liability & Autonomous Systems Expert, I analyze the implications of this article for practitioners in the field of AI and autonomous systems. The introduction of AgentComm-Bench, a benchmark suite and evaluation protocol, highlights the need for stress-testing cooperative embodied AI under real-world communication impairments. This is particularly relevant in the context of liability frameworks, where the performance of autonomous systems is often evaluated under idealized conditions.

In the United States, the Federal Aviation Administration (FAA) has established guidelines for the evaluation of autonomous systems, including those related to communication and sensor data (14 CFR Part 91.113). The article's findings on the catastrophic degradation of performance under communication impairments are consistent with the FAA's emphasis on the importance of robustness and fault tolerance in autonomous systems.

The article's discussion of the interaction between impairment type and task design is also relevant to the concept of "design defect" in product liability law. Under the Restatement (Second) of Torts § 402A, a product can be considered defective if it fails to perform as intended due to a flaw in its design or manufacture. In the context of autonomous systems, the article's findings on the vulnerability of perception fusion to corrupted data may be seen as a design defect, particularly if the system is not designed to mitigate such vulnerabilities.

In terms of regulatory connections, the article's focus on communication impairments is also relevant to the European Union's General Safety Regulation for drones (EU Regulation 2019/945),

Statutes: § 402, art 91
1 min 4 weeks, 1 day ago
ai autonomous
LOW Academic International

Expected Reward Prediction, with Applications to Model Routing

arXiv:2603.20217v1 Announce Type: new Abstract: Reward models are a standard tool to score responses from LLMs. Reward models are built to rank responses to a fixed prompt sampled from a single model, for example to choose the best of n...
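The "best of n" setup the abstract mentions is simple to make concrete: score each of n candidate responses with a reward model and keep the highest-scoring one. A minimal sketch, where `reward` is a hypothetical word-overlap heuristic standing in for a trained reward model:

```python
# Minimal sketch of best-of-n selection with a reward model.
# `reward` is a hypothetical stand-in for a trained reward model's scorer,
# not the paper's expected-reward predictor.
def reward(prompt: str, response: str) -> float:
    # Placeholder heuristic: reward prompt-word overlap, lightly favor length.
    overlap = len(set(prompt.split()) & set(response.split()))
    return overlap + 0.01 * len(response)

def best_of_n(prompt: str, responses: list[str]) -> str:
    # Score every candidate and return the highest-reward response.
    return max(responses, key=lambda r: reward(prompt, r))

candidates = ["Paris.", "The capital of France is Paris.", "I don't know."]
print(best_of_n("What is the capital of France?", candidates))
```

The paper's angle — predicting expected reward across prompts and models for routing — generalizes this per-prompt ranking step.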

1 min 4 weeks, 1 day ago
ai llm
LOW Academic United States

Profit is the Red Team: Stress-Testing Agents in Strategic Economic Interactions

arXiv:2603.20925v1 Announce Type: new Abstract: As agentic systems move into real-world deployments, their decisions increasingly depend on external inputs such as retrieved content, tool outputs, and information provided by other actors. When these inputs can be strategically shaped by adversaries,...

1 min 4 weeks, 1 day ago
ai llm
LOW Academic United States

The Library Theorem: How External Organization Governs Agentic Reasoning Capacity

arXiv:2603.21272v1 Announce Type: new Abstract: Externalized reasoning is already exploited by transformer-based agents through chain-of-thought, but structured retrieval -- indexing over one's own reasoning state -- remains underexplored. We formalize the transformer context window as an I/O page and prove...

1 min 4 weeks, 1 day ago
ai algorithm
LOW Academic International

Knowledge Boundary Discovery for Large Language Models

arXiv:2603.21022v1 Announce Type: new Abstract: We propose Knowledge Boundary Discovery (KBD), a reinforcement learning based framework to explore the knowledge boundaries of the Large Language Models (LLMs). We define the knowledge boundary by automatically generating two types of questions: (i)...

1 min 4 weeks, 1 day ago
ai llm
LOW Academic International

FactorSmith: Agentic Simulation Generation via Markov Decision Process Decomposition with Planner-Designer-Critic Refinement

arXiv:2603.20270v1 Announce Type: new Abstract: Generating executable simulations from natural language specifications remains a challenging problem due to the limited reasoning capacity of large language models (LLMs) when confronted with large, interconnected codebases. This paper presents FactorSmith, a framework that...

News Monitor (1_14_4)

Analysis of the academic article for AI & Technology Law practice area relevance: The article presents FactorSmith, a framework for generating executable simulations from natural language specifications, which has implications for AI-generated content and intellectual property law. The research findings suggest that FactorSmith's ability to decompose complex tasks into modular steps and iterate through quality refinement could be relevant to the development of AI systems that create original works, potentially raising questions about authorship, copyright, and liability. The article's focus on combining different AI approaches to achieve a specific goal also highlights the need for regulatory frameworks to address the integration of multiple AI technologies.

Key legal developments, research findings, and policy signals:

1. **AI-generated content and intellectual property law**: The article's focus on generating executable simulations from natural language specifications raises questions about authorship, copyright, and liability in the context of AI-generated content.
2. **Modular AI systems and liability**: The FactorSmith framework's ability to decompose complex tasks into modular steps could lead to new liability concerns, as different components of the AI system may be responsible for different aspects of the generated content.
3. **Regulatory frameworks for integrated AI systems**: The article highlights the need for regulatory frameworks to address the integration of multiple AI technologies, such as the agentic trio architecture used in FactorSmith.

Commentary Writer (1_14_6)

**Jurisdictional Comparison and Analytical Commentary**

The FactorSmith framework's impact on AI & Technology Law practice is multifaceted, with implications for jurisdictions such as the US, Korea, and internationally. In the US, this development may raise concerns about intellectual property protection for AI-generated simulations, as well as liability for AI-driven decision-making processes. In contrast, Korea's AI industry-focused policies may view FactorSmith as a promising innovation, potentially leading to increased investment in AI research and development.

Internationally, the FactorSmith framework may contribute to the ongoing debate on AI governance, particularly regarding the use of AI in high-stakes decision-making contexts. The European Union's AI regulations, for instance, emphasize transparency, accountability, and human oversight, which may influence how FactorSmith is implemented and regulated in EU member states. In jurisdictions like Singapore, which has established a regulatory framework for AI, FactorSmith may be seen as a valuable tool for enhancing AI decision-making processes, while also raising questions about data protection and cybersecurity.

**Key Takeaways**

1. **Intellectual Property Protection**: The US may need to revisit its intellectual property laws to address the increasing use of AI-generated simulations, including those created using FactorSmith.
2. **Liability and Accountability**: As AI-driven decision-making processes become more prevalent, jurisdictions will need to establish clear guidelines for liability and accountability in AI-generated simulations.
3. **Regulatory Frameworks**: Internationally, regulatory frameworks will need to adapt to the

AI Liability Expert (1_14_9)

As an AI Liability & Autonomous Systems Expert, I'll provide domain-specific expert analysis of this article's implications for practitioners. The FactorSmith framework presents a novel approach to generating executable simulations from natural language specifications, leveraging factored POMDP decomposition and a hierarchical planner-designer-critic agentic workflow. This development has implications for the design and deployment of autonomous systems, particularly in the context of AI-generated simulations.

From a liability perspective, the use of FactorSmith could raise questions about the responsibility for errors or inaccuracies in the generated simulations. For instance, if a simulation generated using FactorSmith causes harm or damage, who would be liable: the developer of FactorSmith, the user who input the simulation specification, or the AI model itself?

In terms of statutory and regulatory connections, the development of autonomous systems like FactorSmith may be subject to existing regulations such as the European Union's General Data Protection Regulation (GDPR) and the United States' Federal Trade Commission (FTC) guidelines on AI. For example, the FTC's guidance on AI emphasizes the importance of transparency and accountability in AI decision-making processes.

Case law connections may include precedents related to AI-generated content and liability, such as the 2019 case of _Warner/Chappell Music, Inc. v. ReDigi Inc._, which involved a dispute over AI-generated music. However, it's essential to note that the specific laws and regulations governing AI-generated simulations are still evolving and may not be directly applicable

1 min 4 weeks, 1 day ago
ai llm
LOW Academic United States

FinReflectKG -- HalluBench: GraphRAG Hallucination Benchmark for Financial Question Answering Systems

arXiv:2603.20252v1 Announce Type: new Abstract: As organizations increasingly integrate AI-powered question-answering systems into financial information systems for compliance, risk assessment, and decision support, ensuring the factual accuracy of AI-generated outputs becomes a critical engineering challenge. Current Knowledge Graph (KG)-augmented QA...

1 min 4 weeks, 1 day ago
ai llm
LOW Academic International

Towards Intelligent Geospatial Data Discovery: a knowledge graph-driven multi-agent framework powered by large language models

arXiv:2603.20670v1 Announce Type: new Abstract: The rapid growth in the volume, variety, and velocity of geospatial data has created data ecosystems that are highly distributed, heterogeneous, and semantically inconsistent. Existing data catalogs, portals, and infrastructures still rely largely on keyword-based...

1 min 4 weeks, 1 day ago
ai autonomous
LOW Academic European Union

LLM-Enhanced Energy Contrastive Learning for Out-of-Distribution Detection in Text-Attributed Graphs

arXiv:2603.20293v1 Announce Type: new Abstract: Text-attributed graphs, where nodes are enriched with textual attributes, have become a powerful tool for modeling real-world networks such as citation, social, and transaction networks. However, existing methods for learning from these graphs often assume...

1 min 4 weeks, 1 day ago
ai llm
LOW Academic International

Do LLM-Driven Agents Exhibit Engagement Mechanisms? Controlled Tests of Information Load, Descriptive Norms, and Popularity Cues

arXiv:2603.20911v1 Announce Type: new Abstract: Large language models make agent-based simulation more behaviorally expressive, but they also sharpen a basic methodological tension: fluent, human-like output is not, by itself, evidence for theory. We evaluate what an LLM-driven simulation can credibly...

1 min 4 weeks, 1 day ago
ai llm
LOW Academic International

Seed1.8 Model Card: Towards Generalized Real-World Agency

arXiv:2603.20633v1 Announce Type: new Abstract: We present Seed1.8, a foundation model aimed at generalized real-world agency: going beyond single-turn prediction to multi-turn interaction, tool use, and multi-step execution. Seed1.8 keeps strong LLM and vision-language performance while supporting a unified agentic...

1 min 4 weeks, 1 day ago
ai llm
LOW Academic International

Multi-Agent Debate with Memory Masking

arXiv:2603.20215v1 Announce Type: new Abstract: Large language models (LLMs) have recently demonstrated impressive capabilities in reasoning tasks. Currently, mainstream LLM reasoning frameworks predominantly focus on scaling up inference-time sampling to enhance performance. In particular, among all LLM reasoning frameworks, multi-agent...

1 min 4 weeks, 1 day ago
ai llm
LOW Academic International

Modeling Epistemic Uncertainty in Social Perception via Rashomon Set Agents

arXiv:2603.20750v1 Announce Type: new Abstract: We present an LLM-driven multi-agent probabilistic modeling framework that demonstrates how differences in students' subjective social perceptions arise and evolve in real-world classroom settings, under constraints from an observed social network and limited questionnaire data....

1 min 4 weeks, 1 day ago
ai llm
LOW Academic International

ProMAS: Proactive Error Forecasting for Multi-Agent Systems Using Markov Transition Dynamics

arXiv:2603.20260v1 Announce Type: new Abstract: The integration of Large Language Models into Multi-Agent Systems (MAS) has enabled the solution of complex, long-horizon tasks through collaborative reasoning. However, this collective intelligence is inherently fragile, as a single logical fallacy can rapidly...
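The fragility the abstract describes — one error cascading through agent handoffs — is naturally modeled with Markov transition dynamics. A hedged sketch under invented numbers (the states and transition probabilities below are illustrative, not the paper's):

```python
# Hedged sketch, not the paper's method: error propagation across agent
# handoffs as a Markov chain with an absorbing failure state.
STATES = ("ok", "minor_error", "cascading_failure")
# T[i][j] = P(next state j | current state i); probabilities are illustrative.
T = [
    [0.90, 0.08, 0.02],
    [0.30, 0.50, 0.20],
    [0.00, 0.00, 1.00],  # cascading failure is absorbing
]

def step(dist):
    # One agent handoff: push the state distribution through T.
    return [sum(dist[i] * T[i][j] for i in range(3)) for j in range(3)]

def failure_prob(handoffs, dist=(1.0, 0.0, 0.0)):
    # Probability the pipeline has entered cascading failure after `handoffs` steps.
    d = list(dist)
    for _ in range(handoffs):
        d = step(d)
    return d[2]

print(round(failure_prob(5), 3))
```

Forecasting like this is what makes *proactive* intervention possible: if the predicted failure probability crosses a threshold, the system can intervene before the cascade completes.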

1 min 4 weeks, 1 day ago
ai autonomous
LOW Academic International

AutoMOOSE: An Agentic AI for Autonomous Phase-Field Simulation

arXiv:2603.20986v1 Announce Type: new Abstract: Multiphysics simulation frameworks such as MOOSE provide rigorous engines for phase-field materials modeling, yet adoption is constrained by the expertise required to construct valid input files, coordinate parameter sweeps, diagnose failures, and extract quantitative results....

1 min 4 weeks, 1 day ago
ai autonomous
LOW Academic International

Beyond Test-Time Compute Strategies: Advocating Energy-per-Token in LLM Inference

arXiv:2603.20224v1 Announce Type: new Abstract: Large Language Models (LLMs) demonstrate exceptional performance across diverse tasks but come with substantial energy and computational costs, particularly in request-heavy scenarios. In many real-world applications, the full scale and capabilities of LLMs are often...
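The energy-per-token metric the title advocates is straightforward to compute: integrate measured power over the inference window, then divide by tokens generated. A minimal sketch with made-up sample values (real measurements would come from a power meter or GPU telemetry):

```python
# Illustrative sketch of an energy-per-token metric: energy consumed over an
# inference window divided by tokens generated. The power readings and token
# count below are made-up sample values, not measurements from the paper.
def energy_per_token(power_samples_w, sample_interval_s, tokens_generated):
    # Energy (J) = sum of power samples (W) times the sampling interval (s).
    energy_j = sum(p * sample_interval_s for p in power_samples_w)
    return energy_j / tokens_generated

# e.g. GPU power sampled once per second over a 4-second, 80-token generation
samples = [250.0, 310.0, 305.0, 295.0]  # watts
print(energy_per_token(samples, 1.0, 80))  # 14.5 joules per token
```

Reporting joules per token rather than latency or FLOPs makes energy cost comparable across models and serving configurations.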

1 min 4 weeks, 1 day ago
ai llm

Impact Distribution

Critical 0
High 57
Medium 938
Low 4987