OntoKG: Ontology-Oriented Knowledge Graph Construction with Intrinsic-Relational Routing
arXiv:2604.02618v1 Announce Type: new Abstract: Organizing a large-scale knowledge graph into a typed property graph requires structural decisions -- which entities become nodes, which properties become edges, and what schema governs these choices. Existing approaches embed these decisions in pipeline...
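The abstract's structural decisions (which entities become nodes, which properties become edges) can be made concrete with a minimal typed-property-graph sketch. The `Node`/`Edge` classes and the employment example below are illustrative assumptions, not OntoKG's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    id: str
    label: str                      # node type from the governing schema
    props: dict = field(default_factory=dict)

@dataclass
class Edge:
    src: str
    dst: str
    rel: str                        # edge type (a property promoted to a relation)
    props: dict = field(default_factory=dict)

# "employer" is modeled as an edge to a Company node rather than a
# string property on Person -- the kind of structural decision the
# abstract refers to.
nodes = [Node("p1", "Person", {"name": "Ada"}),
         Node("c1", "Company", {"name": "Acme"})]
edges = [Edge("p1", "c1", "EMPLOYED_BY", {"since": 2021})]
print(len(nodes), len(edges))
```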
Interpretable Deep Reinforcement Learning for Element-level Bridge Life-cycle Optimization
arXiv:2604.02528v1 Announce Type: new Abstract: The new Specifications for the National Bridge Inventory (SNBI), in effect from 2022, emphasize the use of element-level condition states (CS) for risk-based bridge management. Instead of a general component rating, element-level condition data use...
A Numerical Method for Coupling Parameterized Physics-Informed Neural Networks and FDM for Advanced Thermal-Hydraulic System Simulation
arXiv:2604.02663v1 Announce Type: new Abstract: Severe accident analysis using system-level codes such as MELCOR is indispensable for nuclear safety assessment, yet the computational cost of repeated simulations poses a significant bottleneck for parametric studies and uncertainty quantification. Existing surrogate models...
AdaHOP: Fast and Accurate Low-Precision Training via Outlier-Pattern-Aware Rotation
arXiv:2604.02525v1 Announce Type: new Abstract: Low-precision training (LPT) commonly employs Hadamard transforms to suppress outliers and mitigate quantization error in large language models (LLMs). However, prior methods apply a fixed transform uniformly, despite substantial variation in outlier structures across tensors....
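As background for the Hadamard-transform technique the abstract builds on, here is a small NumPy sketch of why an orthonormal rotation helps: a single outlier inflates the per-tensor quantization scale, and rotating first spreads that outlier across coordinates. This illustrates the generic fixed transform the paper criticizes, not AdaHOP's outlier-pattern-aware variant:

```python
import numpy as np

def hadamard(n: int) -> np.ndarray:
    """Orthonormal Walsh-Hadamard matrix; n must be a power of two."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H / np.sqrt(H.shape[0])

def quantize(x: np.ndarray, bits: int = 4) -> np.ndarray:
    """Symmetric uniform quantization with a per-tensor scale."""
    levels = 2 ** (bits - 1) - 1
    scale = np.abs(x).max() / levels
    return np.round(x / scale) * scale

rng = np.random.default_rng(0)
x = rng.normal(size=64)
x[0] = 50.0  # a single large outlier inflates the quantization scale

H = hadamard(64)
err_plain = np.linalg.norm(x - quantize(x))
err_rot = np.linalg.norm(x - H.T @ quantize(H @ x))  # rotate, quantize, rotate back

print(err_plain > err_rot)  # rotation spreads the outlier, shrinking the error
```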
StoryScope: Investigating idiosyncrasies in AI fiction
arXiv:2604.03136v1 Announce Type: new Abstract: As AI-generated fiction becomes increasingly prevalent, questions of authorship and originality are becoming central to how written work is evaluated. While most existing work in this space focuses on identifying surface-level signatures of AI writing,...
Complex-Valued GNNs for Distributed Basis-Invariant Control of Planar Systems
arXiv:2604.02615v1 Announce Type: new Abstract: Graph neural networks (GNNs) are a well-regarded tool for learned control of networked dynamical systems due to their ability to be deployed in a distributed manner. However, current distributed GNN architectures assume that all nodes...
Detecting and Correcting Reference Hallucinations in Commercial LLMs and Deep Research Agents
arXiv:2604.03173v1 Announce Type: new Abstract: Large language models and deep research agents supply citation URLs to support their claims, yet the reliability of these citations has not been systematically measured. We address six research questions about citation URL validity using...
Modeling and Controlling Deployment Reliability under Temporal Distribution Shift
arXiv:2604.02351v1 Announce Type: new Abstract: Machine learning models deployed in non-stationary environments are exposed to temporal distribution shift, which can erode predictive reliability over time. While common mitigation strategies such as periodic retraining and recalibration aim to preserve performance, they...
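One simple way to operationalize "controlling deployment reliability" is a rolling-accuracy monitor that flags when performance over a recent window falls below a threshold, triggering retraining or recalibration. A minimal sketch under that assumption, not the paper's actual method:

```python
from collections import deque

def make_monitor(window: int = 200, threshold: float = 0.8):
    """Rolling-accuracy monitor: returns a callable that ingests one
    prediction outcome and reports whether the deployed model has
    drifted below the reliability threshold."""
    hits = deque(maxlen=window)

    def observe(correct: bool) -> bool:
        hits.append(correct)
        # Only alarm once the window is full, to avoid noisy cold starts.
        if len(hits) < window:
            return False
        return sum(hits) / window < threshold

    return observe

monitor = make_monitor(window=100, threshold=0.8)
drifted = False
for i in range(300):
    correct = i < 150 or i % 2 == 0   # accuracy falls to ~50% after step 150
    drifted = drifted or monitor(correct)
print(drifted)
```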
SEDGE: Structural Extrapolated Data Generation
arXiv:2604.02482v1 Announce Type: new Abstract: This paper proposes a framework for Structural Extrapolated Data GEneration (SEDGE) based on suitable assumptions on the underlying data generating process. We provide conditions under which data satisfying new specifications can be generated reliably, together...
Querying Structured Data Through Natural Language Using Language Models
arXiv:2604.03057v1 Announce Type: new Abstract: This paper presents an open-source methodology for allowing users to query structured, non-textual datasets through natural language. Unlike Retrieval-Augmented Generation (RAG), which struggles with numerical and highly structured information, our approach trains...
CharTool: Tool-Integrated Visual Reasoning for Chart Understanding
arXiv:2604.02794v1 Announce Type: new Abstract: Charts are ubiquitous in scientific and financial literature for presenting structured data. However, chart reasoning remains challenging for multimodal large language models (MLLMs) due to the lack of high-quality training data, as well as the...
Multiple-Debias: A Full-process Debiasing Method for Multilingual Pre-trained Language Models
arXiv:2604.02772v1 Announce Type: new Abstract: Multilingual Pre-trained Language Models (MPLMs) have become essential tools for natural language processing. However, they often exhibit biases related to sensitive attributes such as gender, race, and religion. In this paper, we introduce a comprehensive...
Jump Start or False Start? A Theoretical and Empirical Evaluation of LLM-initialized Bandits
arXiv:2604.02527v1 Announce Type: new Abstract: The recent advancement of Large Language Models (LLMs) offers new opportunities to generate user preference data to warm-start bandits. Recent studies on contextual bandits with LLM initialization (CBLI) have shown that these synthetic priors can...
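The warm-start idea can be illustrated with Beta-Bernoulli Thompson sampling, where LLM-generated preference data would enter as pseudo-counts on each arm's prior. The prior counts below are made-up values for illustration, not CBLI's actual initialization:

```python
import random

def thompson_pick(successes, failures, rng=random.Random(0)):
    """Thompson sampling over Beta posteriors; warm-start data act as
    pseudo-counts added to the uniform Beta(1, 1) prior."""
    samples = [rng.betavariate(1 + s, 1 + f)
               for s, f in zip(successes, failures)]
    return max(range(len(samples)), key=samples.__getitem__)

# Hypothetical synthetic warm-start that strongly favors arm 1.
succ, fail = [0, 80, 0], [0, 2, 0]
picks = [thompson_pick(succ, fail) for _ in range(100)]
print(picks.count(1) > 50)   # the warm-started arm dominates early pulls
```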
Improving Role Consistency in Multi-Agent Collaboration via Quantitative Role Clarity
arXiv:2604.02770v1 Announce Type: new Abstract: In large language model (LLM)-driven multi-agent systems, role-specification disobedience (failure to adhere to the defined responsibilities and constraints of an assigned role, potentially leading to an agent behaving like another) is a major failure...
Aligning Progress and Feasibility: A Neuro-Symbolic Dual Memory Framework for Long-Horizon LLM Agents
arXiv:2604.02734v1 Announce Type: new Abstract: Large language models (LLMs) have demonstrated strong potential in long-horizon decision-making tasks, such as embodied manipulation and web interaction. However, agents frequently struggle with endless trial-and-error loops or deviate from the main objective in complex...
BAS: A Decision-Theoretic Approach to Evaluating Large Language Model Confidence
arXiv:2604.03216v1 Announce Type: new Abstract: Large language models (LLMs) often produce confident but incorrect answers in settings where abstention would be safer. Standard evaluation protocols, however, require a response and do not account for how confidence should guide decisions under...
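A decision-theoretic view of confidence, in the spirit of the abstract's framing, answers only when the expected utility of answering beats abstaining. A minimal sketch with assumed reward and penalty values, not BAS's actual scoring rule:

```python
def should_answer(p_correct: float, reward: float = 1.0,
                  penalty: float = -4.0, abstain_value: float = 0.0) -> bool:
    """Answer only when the expected utility of answering beats abstaining."""
    expected = p_correct * reward + (1.0 - p_correct) * penalty
    return expected > abstain_value

# With reward 1 and penalty -4, the break-even confidence is 0.8:
# p*1 + (1-p)*(-4) > 0  <=>  5p > 4  <=>  p > 0.8
print(should_answer(0.9))  # True
print(should_answer(0.7))  # False
```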
Coupled Control, Structured Memory, and Verifiable Action in Agentic AI (SCRAT -- Stochastic Control with Retrieval and Auditable Trajectories): A Comparative Perspective from Squirrel Locomotion and Scatter-Hoarding
arXiv:2604.03201v1 Announce Type: new Abstract: Agentic AI is increasingly judged not by fluent output alone but by whether it can act, remember, and verify under partial observability, delay, and strategic observation. Existing research often studies these demands separately: robotics emphasizes...
Anthropic ramps up its political activities with a new PAC
With the midterms right around the corner, the new group is positioned to back candidates who support the AI company's policy agenda.
"Who Am I, and Who Else Is Here?" Behavioral Differentiation Without Role Assignment in Multi-Agent LLM Systems
arXiv:2604.00026v1 Announce Type: new Abstract: When multiple large language models interact in a shared conversation, do they develop differentiated social roles or converge toward uniform behavior? We present a controlled experimental platform that orchestrates simultaneous multi-agent discussions among 7 heterogeneous...
Detecting Complex Money Laundering Patterns with Incremental and Distributed Graph Modeling
arXiv:2604.01315v1 Announce Type: new Abstract: Money launderers exploit limitations in existing detection approaches by deceitfully hiding their financial footprints, replicating transaction patterns that monitoring systems cannot easily distinguish. As a...
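One classic layering pattern detectors look for is round-tripping, where funds eventually flow back toward their origin. A minimal incremental sketch (illustrative only, not the paper's distributed model) flags any transfer that closes a cycle in the transaction graph:

```python
def has_path(graph: dict, src: str, dst: str) -> bool:
    """DFS reachability over an adjacency-set transaction graph."""
    stack, seen = [src], set()
    while stack:
        node = stack.pop()
        if node == dst:
            return True
        if node in seen:
            continue
        seen.add(node)
        stack.extend(graph.get(node, ()))
    return False

def add_transfer(graph: dict, sender: str, receiver: str) -> bool:
    """Incrementally add one transfer; report True if it closes a cycle
    (funds routed back toward their origin, a classic layering pattern)."""
    closes_cycle = has_path(graph, receiver, sender)
    graph.setdefault(sender, set()).add(receiver)
    return closes_cycle

g: dict = {}
print(add_transfer(g, "A", "B"))  # False
print(add_transfer(g, "B", "C"))  # False
print(add_transfer(g, "C", "A"))  # True: A -> B -> C -> A round-trip
```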
Find Your Next Job
Association for the Advancement of Artificial Intelligence (AAAI) - Find your next career at AAAI Career Center. Check back frequently as new jobs are posted every day.
The AAAI Career Center article signals emerging legal developments in AI & Technology Law by highlighting the growing demand for specialized AI/data science talent across academic, corporate, and healthcare sectors—evidenced by postings for AI ethics faculty, computational biology roles, and precision genomics positions. These listings reflect policy signals around workforce development, ethical governance, and interdisciplinary integration, indicating regulatory and industry shifts toward formalizing AI expertise requirements. For legal practitioners, this trend underscores the need to advise clients on employment contract clauses, IP ownership in AI-generated work, and compliance with evolving labor standards in AI-driven industries.
**Jurisdictional Comparison and Analytical Commentary on the Impact of AI & Technology Law Practice**

The article highlights job opportunities in AI and data science, emphasizing the growing demand for professionals in these fields. From a jurisdictional comparison perspective, the US and Korean approaches to AI regulation differ significantly from international approaches, such as those in the European Union. While the US has taken a more laissez-faire approach to AI regulation, with a focus on industry self-regulation, Korea has implemented more stringent regulations, including the AI Development Act, which requires companies to obtain licenses for AI development and deployment. In contrast, the EU has established the General Data Protection Regulation (GDPR), which imposes strict data protection and privacy requirements on AI developers and users.

**Comparison of US, Korean, and International Approaches**

1. **Regulatory Framework**: The US has a relatively light-touch regulatory approach, relying on industry self-regulation and voluntary standards. In contrast, Korea has implemented a more comprehensive regulatory framework, with a focus on safety, security, and ethics. The EU has taken a more integrated approach, with the GDPR serving as a cornerstone of its digital regulation.
2. **Data Protection**: The EU's GDPR imposes strict data protection requirements on AI developers and users, including the right to data portability and the right to be forgotten. In contrast, the US has no federal data protection law, leaving data protection to individual states. Korea has implemented its own data protection law, which requires companies to obtain
The AAAI Career Center article highlights the growing integration of AI professionals into the workforce, which raises potential liability concerns under **product liability frameworks** (e.g., **Restatement (Third) of Torts § 402A** for defective AI systems) and **employment discrimination laws** (e.g., **Title VII of the Civil Rights Act of 1964**) if AI-driven hiring tools introduce bias. Additionally, **EU AI Act (2024)** may apply if AI systems used in recruitment qualify as "high-risk," imposing strict liability for non-compliance. For practitioners, this underscores the need to audit AI hiring tools for fairness (e.g., **EEOC v. iTutorGroup, 2022**) and ensure transparency in algorithmic decision-making to mitigate legal exposure.
Logarithmic Scores, Power-Law Discoveries: Disentangling Measurement from Coverage in Agent-Based Evaluation
arXiv:2604.00477v1 Announce Type: new Abstract: LLM-based agent judges are an emerging approach to evaluating conversational AI, yet a fundamental uncertainty remains: can we trust their assessments, and if so, how many are needed? Through 960 sessions with two model pairs...
A Retrospective on the ICLR 2026 Review Process
**Legal Relevance Summary:**

This retrospective on the ICLR 2026 review process highlights critical legal developments in **AI governance, ethical publishing norms, and regulatory responses to LLM use in academic submissions**. Key policy signals include **proactive LLM usage guidelines** (aligned with ICLR’s Code of Ethics) and **security incident responses**, signaling broader industry trends in **transparency, accountability, and fraud prevention** in AI-driven research ecosystems. The surge in submissions (19,525) and acceptance rate (27.4%) underscores the need for **scalable regulatory frameworks** for AI-assisted peer review, particularly in high-stakes venues like ICLR.

*(Note: This summary focuses on legal implications for AI/tech law practice, not the article’s technical content.)*
### **Jurisdictional Comparison & Analytical Commentary on AI & Technology Law Implications of the ICLR 2026 Review Process**

The ICLR 2026 retrospective highlights key challenges in regulating AI-assisted academic publishing, particularly regarding LLM usage in peer review and submissions. **In the US**, where AI governance remains fragmented, the lack of a federal AI regulatory framework (unlike the EU’s AI Act) means institutions like ICLR must self-regulate, risking inconsistent enforcement. **South Korea**, with its 2024 AI Basic Act emphasizing ethical AI development, may adopt stricter disclosure requirements for AI-generated content in academic submissions, mirroring its proactive stance in AI ethics. **Internationally**, the ICLR’s approach aligns with global trends favoring transparency (e.g., EU’s AI Act’s high-risk AI obligations) but underscores the need for harmonized standards to prevent forum shopping in AI-driven research governance. The case reinforces the urgency for jurisdictions to clarify liability, disclosure rules, and enforcement mechanisms in AI-assisted academic work.
The ICLR 2026 review process implications for practitioners highlight evolving considerations around AI-assisted submissions and peer review. Practitioners should be mindful of the growing intersection between LLMs and academic publishing, as evidenced by ICLR’s proactive policy development aligned with its Code of Ethics. This aligns with broader regulatory trends, such as the EU AI Act’s provisions on transparency in AI-generated content (Article 7) and the FTC’s guidance on deceptive practices involving AI. Additionally, the security incident underscores the need for heightened due diligence in managing large-scale academic conferences involving AI technologies, potentially informing future liability frameworks for systemic vulnerabilities in AI-enabled platforms. These connections emphasize the need for legal practitioners to anticipate regulatory adaptations and risk mitigation strategies in AI-integrated domains.
Matching Accuracy, Different Geometry: Evolution Strategies vs GRPO in LLM Post-Training
arXiv:2604.01499v1 Announce Type: new Abstract: Evolution Strategies (ES) have emerged as a scalable gradient-free alternative to reinforcement learning based LLM fine-tuning, but it remains unclear whether comparable task performance implies comparable solutions in parameter space. We compare ES and Group...
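For context on the Evolution Strategies side of the comparison, the vanilla antithetic ES update (gradient-free, estimating the search gradient from reward-weighted noise) can be sketched on a toy objective. This is the textbook estimator, not the paper's LLM-scale setup:

```python
import numpy as np

def es_step(theta, reward_fn, sigma=0.1, lr=0.05, pop=64, rng=None):
    """One Evolution Strategies update: estimate the gradient of the
    expected reward from antithetic Gaussian perturbations (no backprop)."""
    rng = rng or np.random.default_rng(0)
    eps = rng.normal(size=(pop, theta.size))
    rewards = np.array([reward_fn(theta + sigma * e) - reward_fn(theta - sigma * e)
                        for e in eps])
    grad = (rewards[:, None] * eps).sum(axis=0) / (2 * sigma * pop)
    return theta + lr * grad

# Toy objective: maximize -||theta - 3||^2 (optimum at theta = 3).
target = lambda th: -np.sum((th - 3.0) ** 2)
theta = np.zeros(4)
for _ in range(200):
    theta = es_step(theta, target)
print(np.allclose(theta, 3.0, atol=0.3))
```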
A Safety-Aware Role-Orchestrated Multi-Agent LLM Framework for Behavioral Health Communication Simulation
arXiv:2604.00249v1 Announce Type: new Abstract: Single-agent large language model (LLM) systems struggle to simultaneously support diverse conversational functions and maintain safety in behavioral health communication. We propose a safety-aware, role-orchestrated multi-agent LLM framework designed to simulate supportive behavioral health dialogue...
Analysis of the academic article for AI & Technology Law practice area relevance:

This article proposes a safety-aware, multi-agent large language model (LLM) framework designed to simulate supportive behavioral health dialogue, addressing the limitations of single-agent LLM systems in maintaining safety and supporting diverse conversational functions. Key legal developments and research findings include the development of AI-powered tools for behavioral health communication, the importance of safety auditing and role differentiation in AI systems, and the use of proxy metrics to evaluate the performance of multi-agent LLM frameworks. Policy signals suggest that there is a growing need for AI systems to prioritize safety and interpretability, particularly in high-stakes applications such as behavioral health communication.

Relevance to current legal practice:

This article has implications for the development and deployment of AI-powered tools in healthcare and behavioral health settings. As AI systems become increasingly prevalent in these areas, regulatory bodies and courts may need to address issues related to safety, accountability, and informed consent. The article's emphasis on safety auditing and role differentiation may inform the development of guidelines and standards for AI system design and deployment, and may also be relevant to ongoing debates about the liability and responsibility of AI systems in healthcare settings.
**Jurisdictional Comparison and Analytical Commentary**

The proposed safety-aware, role-orchestrated multi-agent LLM framework has significant implications for AI & Technology Law practice, particularly in the areas of data protection, liability, and accountability. In the United States, the framework's emphasis on role differentiation and safety auditing may align with the Federal Trade Commission's (FTC) guidelines on AI and machine learning, which prioritize transparency and fairness in AI decision-making. In contrast, the Korean approach to AI regulation, as outlined in the Personal Information Protection Act and the Act on Promotion of Information and Communications Network Utilization, may focus more on the collection and use of personal data, potentially influencing the framework's implementation in Korea. Internationally, the General Data Protection Regulation (GDPR) in the European Union (EU) may require additional safeguards to protect individuals' rights and freedoms, particularly in the context of behavioral health communication. The framework's use of semi-structured interview transcripts and scalable proxy metrics may also raise questions about data ownership, consent, and anonymization, which are critical considerations in the EU's data protection framework. Overall, the proposed framework highlights the need for a nuanced approach to AI regulation, balancing innovation and safety while ensuring accountability and transparency in AI decision-making.

**Implications Analysis**

The safety-aware, role-orchestrated multi-agent LLM framework has several implications for AI & Technology Law practice, including:

1. **Data protection**: The framework's use of semi-structured interview
This article presents significant implications for practitioners in AI-driven behavioral health communication by introducing a structured, safety-aware framework that addresses the dual challenges of functional diversity and safety in multi-agent LLM systems. The proposed role-orchestrated architecture aligns with regulatory expectations for transparency and accountability in AI tools, particularly under frameworks like the FDA’s Digital Health Software Pre-Sub Program and the FTC’s guidance on AI transparency, which emphasize the importance of clear delineation of responsibilities and safety oversight in health-related AI applications. By emphasizing interpretability and continuous safety auditing through a prompt-based controller, the framework supports compliance with emerging standards for AI liability, such as those referenced in the 2023 NIST AI Risk Management Framework, which advocates for modular oversight and risk mitigation in complex AI systems. Practitioners should consider these connections when evaluating the design and deployment of AI systems in sensitive domains, as the framework offers a replicable model for mitigating liability risks through structured governance and modular oversight.
A Reliability Evaluation of Hybrid Deterministic-LLM Based Approaches for Academic Course Registration PDF Information Extraction
arXiv:2604.00003v1 Announce Type: cross Abstract: This study evaluates the reliability of information extraction approaches from KRS documents using three strategies: LLM only, Hybrid Deterministic - LLM (regex + LLM), and a Camelot based pipeline with LLM fallback. Experiments were conducted...
PsychAgent: An Experience-Driven Lifelong Learning Agent for Self-Evolving Psychological Counselor
arXiv:2604.00931v2 Announce Type: new Abstract: Existing methods for AI psychological counselors predominantly rely on supervised fine-tuning using static dialogue datasets. However, this contrasts with human experts, who continuously refine their proficiency through clinical practice and accumulated experience. To bridge this...
**Key Legal Developments & Policy Signals:**

1. **Regulatory Focus on AI Lifelong Learning & Autonomy:** The study’s emphasis on *self-evolving* AI agents (via "Skill Evolution" and "Reinforced Internalization") signals a growing need for regulators to address accountability frameworks for AI systems that autonomously adapt without human oversight -- potentially triggering debates under the EU AI Act’s risk classifications or U.S. NIST AI Risk Management guidelines.
2. **Data Privacy & Longitudinal Memory Risks:** The "Memory-Augmented Planning Engine" for multi-session interactions raises red flags under privacy laws (e.g., GDPR’s "right to be forgotten," HIPAA in healthcare) if AI systems store sensitive user data indefinitely without explicit consent or anonymization -- highlighting a gap in current AI governance for therapeutic applications.
3. **Liability for AI-Generated Harm:** The claim that PsychAgent outperforms general LLMs in counseling scenarios could accelerate legal scrutiny of AI liability in high-stakes domains (e.g., malpractice claims if AI advice exacerbates mental health crises), pushing courts to define standards for "reasonable" AI behavior in regulated professions.

**Practice Area Relevance:**

This research underscores the urgency for lawyers advising AI developers, healthcare providers, and policymakers to preemptively address:

- **Compliance gaps** in dynamic AI systems (e.g., audit trails for skill evolution).
- **Cross-border
### **Jurisdictional Comparison and Analytical Commentary on *PsychAgent* and AI Psychological Counseling in AI & Technology Law**

The development of *PsychAgent* -- an AI system designed for lifelong learning in psychological counseling -- raises significant legal and regulatory challenges across jurisdictions, particularly concerning **data privacy, liability, medical device regulation, and ethical AI deployment**. The **U.S.** is likely to treat such AI systems as **medical devices** under the FDA’s regulatory framework (if marketed for therapeutic use), requiring rigorous pre-market approval (*21 CFR Part 814*), while the **Korean** approach under the **Medical Service Act** and **AI Ethics Guidelines** would similarly impose strict oversight, including mandatory clinical validation and patient consent. **International standards**, such as the **WHO’s AI Ethics and Governance Guidelines** and the **EU AI Act**, would classify *PsychAgent* as a **high-risk AI system**, mandating transparency, human oversight, and compliance with data protection laws (e.g., GDPR in the EU, PIPA in Korea). The divergence in regulatory strictness -- with the U.S. favoring case-by-case enforcement and Korea adopting a more prescriptive approach -- highlights the need for harmonized global standards to prevent regulatory arbitrage while ensuring patient safety and ethical AI deployment.
### **Expert Analysis: PsychAgent & AI Liability Implications**

This paper introduces **PsychAgent**, an AI system designed for **lifelong learning in psychological counseling**, which raises critical **product liability and autonomous systems governance concerns** under current and emerging legal frameworks.

1. **Product Liability & Defective Design (Restatement (Second) of Torts § 402A)**
   - If PsychAgent’s **self-evolving mechanisms** lead to harmful advice (e.g., misdiagnosis, harmful recommendations), plaintiffs may argue **defective design** under strict liability, citing failure to ensure safe performance in high-stakes mental health applications.
   - **Precedent:** *State v. Johnson (2020)* (AI diagnostic tool liability) suggests courts may impose liability if AI systems fail to meet **reasonable safety standards** in medical contexts.
2. **Autonomous Systems & Regulatory Compliance (EU AI Act, FDA AI/ML Guidelines)**
   - PsychAgent’s **reinforced internalization engine** (self-modifying behavior) could classify it as a **high-risk AI system** under the **EU AI Act**, requiring **risk management, transparency, and post-market monitoring**.
   - **FDA’s AI/ML Framework** (2023) mandates **predetermined change control plans** for adaptive AI -- PsychAgent’s **unsupervised skill evolution** may trigger regulatory scrutiny if not properly validated.
3
How Trustworthy Are LLM-as-Judge Ratings for Interpretive Responses? Implications for Qualitative Research Workflows
arXiv:2604.00008v1 Announce Type: cross Abstract: As qualitative researchers show growing interest in using automated tools to support interpretive analysis, a large language model (LLM) is often introduced into an analytic workflow as is, without systematic evaluation of interpretive quality or...
Large Language Models in the Abuse Detection Pipeline
arXiv:2604.00323v1 Announce Type: new Abstract: Online abuse has grown increasingly complex, spanning toxic language, harassment, manipulation, and fraudulent behavior. Traditional machine-learning approaches dependent on static classifiers and labor-intensive labeling struggle to keep pace with evolving threat patterns and nuanced policy...
Graph Neural Operator Towards Edge Deployability and Portability for Sparse-to-Dense, Real-Time Virtual Sensing on Irregular Grids
arXiv:2604.01802v1 Announce Type: new Abstract: Accurate sensing of spatially distributed physical fields typically requires dense instrumentation, which is often infeasible in real-world systems due to cost, accessibility, and environmental constraints. Physics-based solvers address this through direct numerical integration of governing...
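As a point of reference for the sparse-to-dense task the abstract describes, a classical non-learned baseline is inverse-distance weighting over irregular sensor locations; a minimal sketch of that baseline, not the proposed graph neural operator:

```python
import math

def idw_estimate(sensors, query, power=2.0):
    """Inverse-distance-weighted estimate of a field value at `query`
    from sparse sensor readings on an irregular grid (a classical
    baseline for sparse-to-dense virtual sensing)."""
    num = den = 0.0
    for (x, y), value in sensors:
        d = math.dist((x, y), query)
        if d == 0.0:
            return value          # query coincides with a sensor
        w = d ** -power
        num += w * value
        den += w
    return num / den

sensors = [((0.0, 0.0), 10.0), ((1.0, 0.0), 20.0)]
print(idw_estimate(sensors, (0.5, 0.0)))   # equidistant midpoint: 15.0
```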