Learning the Signature of Memorization in Autoregressive Language Models
arXiv:2604.03199v1 Announce Type: new Abstract: All prior membership inference attacks for fine-tuned language models use hand-crafted heuristics (e.g., loss thresholding, Min-K\%, reference calibration), each bounded by the designer's intuition. We introduce the first transferable learned attack, enabled by the observation...
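For context, the hand-crafted heuristics the abstract contrasts against are simple to state. Below is a minimal sketch of two of them, loss thresholding and Min-K%, assuming per-token log-probabilities from the target model are available; function names are illustrative, not from the paper.

```python
import numpy as np

def min_k_percent_score(token_logprobs: np.ndarray, k: float = 0.2) -> float:
    """Min-K% Prob heuristic: average the k% lowest token log-probabilities.

    Higher (less negative) scores suggest the candidate text was memorized.
    `token_logprobs` is assumed to hold one log-probability per token from
    a single forward pass of the target model over the candidate text.
    """
    n = max(1, int(len(token_logprobs) * k))
    lowest = np.sort(token_logprobs)[:n]  # the k% least-likely tokens
    return float(lowest.mean())

def loss_threshold_member(token_logprobs: np.ndarray, tau: float) -> bool:
    """Loss thresholding: flag as a training member if the mean NLL is below tau."""
    return -token_logprobs.mean() < tau
```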
Cross-subject Muscle Fatigue Detection via Adversarial and Supervised Contrastive Learning with Inception-Attention Network
arXiv:2604.02670v1 Announce Type: new Abstract: Muscle fatigue detection plays an important role in physical rehabilitation. Previous research has demonstrated that sEMG offers superior sensitivity in detecting muscle fatigue compared to other biological signals. However, features extracted from sEMG may vary...
Convolutional Surrogate for 3D Discrete Fracture-Matrix Tensor Upscaling
arXiv:2604.02335v1 Announce Type: new Abstract: Modeling groundwater flow in three-dimensional fractured crystalline media requires accounting for strong spatial heterogeneity induced by fractures. Fine-scale discrete fracture-matrix (DFM) simulations can capture this complexity but are computationally expensive, especially when repeated evaluations are...
PRISM: Policy Reuse via Interpretable Strategy Mapping in Reinforcement Learning
arXiv:2604.02353v1 Announce Type: cross Abstract: We present PRISM (Policy Reuse via Interpretable Strategy Mapping), a framework that grounds reinforcement learning agents' decisions in discrete, causally validated concepts and uses those concepts as a zero-shot transfer interface between agents trained with...
Analytic Drift Resister for Non-Exemplar Continual Graph Learning
arXiv:2604.02633v1 Announce Type: new Abstract: Non-Exemplar Continual Graph Learning (NECGL) seeks to eliminate the privacy risks intrinsic to rehearsal-based paradigms by retaining only class-level prototype representations, rather than raw graph examples, to mitigate catastrophic forgetting. However, this design choice inevitably...
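The class-level prototypes the abstract mentions are typically per-class mean embeddings. A minimal sketch of how such prototypes might be stored and queried; the names are illustrative and this is not the paper's method.

```python
import torch

def class_prototypes(embeddings: torch.Tensor, labels: torch.Tensor) -> dict[int, torch.Tensor]:
    """One prototype per class: the mean embedding of that class's nodes.

    Only these per-class means are retained across tasks, so no raw graph
    data (and hence no per-example private information) is stored.
    """
    return {int(c): embeddings[labels == c].mean(dim=0) for c in labels.unique()}

def nearest_prototype(x: torch.Tensor, protos: dict[int, torch.Tensor]) -> int:
    """Classify a query embedding by its nearest class prototype."""
    return min(protos, key=lambda c: torch.norm(x - protos[c]).item())
```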
Differentiable Symbolic Planning: A Neural Architecture for Constraint Reasoning with Learned Feasibility
arXiv:2604.02350v1 Announce Type: cross Abstract: Neural networks excel at pattern recognition but struggle with constraint reasoning -- determining whether configurations satisfy logical or physical constraints. We introduce Differentiable Symbolic Planning (DSP), a neural architecture that performs discrete symbolic reasoning while...
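The abstract does not spell out DSP's mechanism, but a standard route to differentiable constraint reasoning is to relax hard Boolean checks into soft, sigmoid-based indicators so gradients can flow to the configuration. A generic sketch under that assumption, not the paper's architecture:

```python
import torch

def soft_leq(lhs: torch.Tensor, rhs: torch.Tensor, temp: float = 10.0) -> torch.Tensor:
    """Soft indicator for lhs <= rhs, differentiable via a sigmoid."""
    return torch.sigmoid(temp * (rhs - lhs))

def soft_and(sat: torch.Tensor) -> torch.Tensor:
    """Soft conjunction: product of per-constraint satisfaction scores in [0, 1]."""
    return sat.prod(dim=-1)

# Example: is a 2D configuration inside the unit box? Gradients flow to x.
x = torch.tensor([0.3, 1.2], requires_grad=True)
upper_ok = soft_leq(x, torch.ones(2)).prod()   # x <= 1 elementwise
lower_ok = soft_leq(torch.zeros(2), x).prod()  # 0 <= x elementwise
sat = soft_and(torch.stack([upper_ok, lower_ok]))
sat.backward()  # d(satisfaction)/dx tells a planner how to repair x
```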
Skeleton-based Coherence Modeling in Narratives
arXiv:2604.02451v1 Announce Type: new Abstract: Modeling coherence in text is a task that has long interested NLP researchers. It has applications in detecting incoherent structures and helping the author fix them. There has been recent work...
TR-ICRL: Test-Time Rethinking for In-Context Reinforcement Learning
arXiv:2604.00438v1 Announce Type: new Abstract: In-Context Reinforcement Learning (ICRL) enables Large Language Models (LLMs) to learn online from external rewards directly within the context window. However, a central challenge in ICRL is reward estimation, as models typically lack access to...
One Panel Does Not Fit All: Case-Adaptive Multi-Agent Deliberation for Clinical Prediction
arXiv:2604.00085v1 Announce Type: new Abstract: Large language models applied to clinical prediction exhibit case-level heterogeneity: simple cases yield consistent outputs, while complex cases produce divergent predictions under minor prompt changes. Existing single-agent strategies sample from one role-conditioned distribution, and multi-agent...
Towards Reliable Truth-Aligned Uncertainty Estimation in Large Language Models
arXiv:2604.00445v1 Announce Type: new Abstract: Uncertainty estimation (UE) aims to detect hallucinated outputs of large language models (LLMs) to improve their reliability. However, UE metrics often exhibit unstable performance across configurations, which significantly limits their applicability. In this work, we...
UQ-SHRED: uncertainty quantification of shallow recurrent decoder networks for sparse sensing via engression
arXiv:2604.01305v1 Announce Type: new Abstract: Reconstructing high-dimensional spatiotemporal fields from sparse sensor measurements is critical in a wide range of scientific applications. The SHallow REcurrent Decoder (SHRED) architecture is a recent state-of-the-art architecture that reconstructs high-quality spatial fields from hyper-sparse...
Ontology-Constrained Neural Reasoning in Enterprise Agentic Systems: A Neurosymbolic Architecture for Domain-Grounded AI Agents
arXiv:2604.00555v1 Announce Type: new Abstract: Enterprise adoption of Large Language Models (LLMs) is constrained by hallucination, domain drift, and the inability to enforce regulatory compliance at the reasoning level. We present a neurosymbolic architecture implemented within the Foundation AgenticOS (FAOS)...
This academic article is highly relevant to **AI & Technology Law practice**, particularly in **enterprise AI governance, regulatory compliance, and risk mitigation**. The proposed **neurosymbolic architecture** directly addresses critical legal challenges such as **hallucination risks, domain drift, and enforceable compliance mechanisms**—key concerns under frameworks like the **EU AI Act, GDPR, and sector-specific regulations** (e.g., financial services, healthcare). The study's empirical validation across **five industries, including Vietnamese banking and insurance**, signals a growing need for **localized, ontology-driven AI governance models** to ensure regulatory alignment in emerging markets.
### **Jurisdictional Comparison & Analytical Commentary on Ontology-Constrained Neural Reasoning in Enterprise Agentic Systems**

This paper introduces a neurosymbolic architecture that could significantly influence **AI governance, compliance, and liability frameworks** across jurisdictions, particularly in how regulatory oversight interacts with AI-driven decision-making. The **U.S.** may adopt a sectoral approach, leveraging ontology-based constraints to enhance **NIST AI Risk Management Framework (AI RMF)** compliance, while **South Korea** could integrate this model into its **AI Act-aligned regulatory sandbox** to ensure domain-specific adherence. Internationally, the **EU AI Act**'s risk-based classification may benefit from such architectures in high-stakes sectors (e.g., healthcare), though concerns about **algorithmic opacity** and **jurisdictional enforcement** remain unresolved. The paper's emphasis on **asymmetric neurosymbolic coupling** could reshape legal debates on **AI accountability**, particularly in cross-border enterprise deployments where multiple regulatory regimes intersect.
### **Expert Analysis of "Ontology-Constrained Neural Reasoning in Enterprise Agentic Systems" for AI Liability & Autonomous Systems Practitioners**

This paper presents a **neurosymbolic architecture** that mitigates key enterprise AI risks—hallucinations, domain drift, and regulatory non-compliance—by enforcing **ontology-constrained reasoning** in LLM-based agents. From a **product liability and regulatory compliance** perspective, this approach aligns with **EU AI Act (2024) risk-based obligations** (e.g., transparency, human oversight, and risk mitigation for high-risk AI systems) and **U.S. FDA guidance on AI/ML in medical devices** (21 CFR Part 820), where formal validation of reasoning paths is critical for safety-critical applications. The **asymmetric neurosymbolic coupling** model directly addresses **AI liability concerns** by:

1. **Constraining inputs** (e.g., tool discovery, context assembly) to reduce harmful outputs—akin to **negligence-based product liability** under *Restatement (Third) of Torts § 2* (failure to exercise reasonable care in design).
2. **Validating outputs** (e.g., compliance checking, reasoning verification) to meet **regulatory expectations** (e.g., **GDPR Art. 22**, requiring safeguards against automated decisions with legal effects).
NeurIPS 2026 Call for Position Papers
The **NeurIPS 2026 Call for Position Papers** signals a growing emphasis on **proactive legal and policy discourse within AI research**, particularly in shaping future regulatory frameworks. By inviting interdisciplinary arguments—spanning technical, ethical, and legal perspectives—it underscores the need for **early-stage policy engagement** from legal practitioners to influence AI governance debates. The track’s focus on **novelty, rigor, and contemporary relevance** suggests that legal scholars should prioritize forward-looking analyses (e.g., liability for generative AI, cross-border data regimes) to align with evolving AI ethics and compliance standards.
### **Jurisdictional Comparison & Analytical Commentary on NeurIPS 2026 Position Papers in AI & Technology Law**

The **NeurIPS 2026 Call for Position Papers** underscores the growing institutionalization of AI governance debates within technical research communities, reflecting a shift toward **proactive, interdisciplinary policy discourse** rather than purely technical advancement. While the **U.S.** tends to prioritize **self-regulation and industry-led standards** (e.g., NIST AI Risk Management Framework), **South Korea** emphasizes **state-driven governance** (e.g., the *AI Basic Act*), and **international bodies** (e.g., OECD, UNESCO) seek harmonized frameworks—NeurIPS's inclusion of policy-oriented submissions signals a **convergence of technical and legal perspectives**, particularly in areas like **AI ethics, liability, and regulatory compliance**. This development could influence **jurisdictional approaches** by legitimizing **technical experts as stakeholders in legal policymaking**, potentially accelerating **evidence-based regulation** in AI governance. *(Balanced, non-advisory commentary; jurisdictional comparisons are generalized for analytical purposes.)*
### **Expert Analysis on NeurIPS 2026 Position Papers & AI Liability Implications**

The **NeurIPS 2026 Call for Position Papers** underscores the growing need for **interdisciplinary discourse** on AI governance, particularly in **liability frameworks** for autonomous systems. Position papers in this domain can shape future **regulatory and statutory developments**, such as the **EU AI Liability Directive (AILD)** and **U.S. state-level AI laws**, by advocating for **risk-based liability models** (e.g., strict liability for high-risk AI systems under the **EU AI Act**).

**Key Legal Connections:**

1. **EU AI Act (2024)** – Position papers could argue for **harmonized liability rules** for AI-induced harms, aligning with the Act's risk-tiered approach.
2. **Product Liability Directive (PLD) Reform (2022)** – Discussions may influence **strict liability expansions** for defective AI systems.
3. **U.S. State Laws (e.g., California's SB 1047)** – Position papers could advocate for **developer accountability standards**, mirroring emerging **algorithmic harm statutes**.

Practitioners should monitor these submissions for **emerging liability theories**.
Optimsyn: Influence-Guided Rubrics Optimization for Synthetic Data Generation
arXiv:2604.00536v1 Announce Type: new Abstract: Large language models (LLMs) achieve strong downstream performance largely due to abundant supervised fine-tuning (SFT) data. However, high-quality SFT data in knowledge-intensive domains such as humanities, social sciences, medicine, law, and finance is scarce because...
Pseudo-Quantized Actor-Critic Algorithm for Robustness to Noisy Temporal Difference Error
arXiv:2604.01613v1 Announce Type: new Abstract: In reinforcement learning (RL), temporal difference (TD) errors are widely adopted for optimizing value and policy functions. However, since the TD error is defined by a bootstrap method, its computation tends to be noisy and...
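For reference, the standard one-step TD error the abstract refers to, with $V_\theta$ the learned value function, is

$$\delta_t = r_{t+1} + \gamma\, V_\theta(s_{t+1}) - V_\theta(s_t),$$

where the bootstrap target $r_{t+1} + \gamma V_\theta(s_{t+1})$ is itself an estimate, which is exactly why the computed error tends to be noisy.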
This academic article presents a novel reinforcement learning (RL) algorithm designed to mitigate noisy temporal difference (TD) errors, a common challenge in AI/ML systems. While the paper focuses on technical improvements—such as pseudo-quantization of TD errors and the use of divergences for robustness—its implications for **AI & Technology Law** are indirect but noteworthy. The research underscores the need for **regulatory frameworks addressing AI robustness and reliability**, particularly in high-stakes applications (e.g., autonomous systems, healthcare). Policymakers may leverage such findings to justify stricter **AI safety standards** or **certification requirements** for RL-based systems, aligning with emerging global AI governance trends (e.g., EU AI Act, NIST AI Risk Management Framework). For legal practitioners, the paper signals potential **liability risks** in AI deployments where noisy TD errors could lead to failures, reinforcing the importance of **documenting algorithmic safeguards** in compliance strategies.
### **Jurisdictional Comparison & Analytical Commentary on AI & Technology Law Implications of *Pseudo-Quantized Actor-Critic Algorithm for Robustness to Noisy Temporal Difference Error***

The paper's advancement in **robust reinforcement learning (RL) algorithms**—particularly its method for mitigating noisy temporal difference (TD) errors—has significant implications for **AI safety regulation, liability frameworks, and intellectual property (IP) in autonomous systems**, though jurisdictional responses vary in emphasis. In the **U.S.**, where AI governance is increasingly **sector-specific** (e.g., NIST AI Risk Management Framework, FDA's AI/ML medical device guidance), this research could inform **regulatory sandboxes** for autonomous systems, with agencies like the **SEC or FAA** potentially requiring robustness testing for high-stakes RL applications (e.g., trading algorithms, drones). Meanwhile, **Korea's AI Act (proposed under the *Framework Act on AI*)** aligns with the EU's risk-based approach but may prioritize **mandatory explainability standards** for RL systems, given Korea's focus on **transparency in AI decision-making** (e.g., *Act on the Promotion of AI Industry*). **Internationally**, the **OECD AI Principles** and **UNESCO Recommendation on AI Ethics** would likely frame this work under **safety-by-design** principles, though enforcement remains soft law.
### **Expert Analysis: Implications for AI Liability & Autonomous Systems Practitioners**

This paper introduces a **pseudo-quantized actor-critic algorithm** designed to mitigate noisy temporal difference (TD) errors in reinforcement learning (RL), which could have significant implications for **AI liability frameworks**—particularly in autonomous systems where safety-critical decisions depend on stable RL policies.

#### **Key Legal & Regulatory Connections:**

1. **Product Liability & Defective AI Systems (Restatement (Third) of Torts § 2):** If an autonomous system (e.g., a self-driving car or industrial robot) relies on RL with unstable TD errors leading to a harmful decision, **negligence or strict product liability** could apply if the algorithm fails to meet **reasonable safety standards** (e.g., ISO 26262 for automotive AI). Courts may assess whether the developer took **reasonable precautions** (e.g., robustness checks) to prevent foreseeable failures.
2. **EU AI Act & Risk-Based Liability (Proposal for AI Liability Directive):** Under the **EU AI Act**, high-risk AI systems (e.g., autonomous vehicles) must ensure **transparency, robustness, and error mitigation**. This paper's **pseudo-quantization approach** could be argued as a **state-of-the-art safety measure** to reduce liability exposure if a system's RL policy causes harm due to noisy TD errors.
Signals: Trajectory Sampling and Triage for Agentic Interactions
arXiv:2604.00356v1 Announce Type: new Abstract: Agentic applications based on large language models increasingly rely on multi-step interaction loops involving planning, action execution, and environment feedback. While such systems are now deployed at scale, improving them post-deployment remains challenging. Agent trajectories...
This academic article introduces a **lightweight signal-based triage framework** for large language model (LLM) agentic interactions, addressing the scalability and cost challenges of post-deployment improvement in AI systems. The proposed taxonomy of signals (interaction, execution, environment) offers a structured approach to filtering and prioritizing agent trajectories for review, potentially influencing **AI governance and compliance frameworks** by enabling more efficient auditing of AI behavior. The findings suggest **policy relevance** in areas such as AI safety monitoring, risk-based regulatory compliance, and the development of standardized evaluation metrics for AI systems in high-stakes applications.
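The paper's full schema is not shown in the excerpt; as a rough illustration, the three signal families (interaction, execution, environment) could be operationalized as predicates over logged trajectories. All names and fields below are hypothetical, not the paper's taxonomy.

```python
from dataclasses import dataclass, field

@dataclass
class Trajectory:
    # Each step records the agent's action and whether its tool call errored.
    steps: list[dict] = field(default_factory=list)

def stagnation(t: Trajectory, window: int = 3) -> bool:
    """Interaction signal: the agent repeated the same action `window` times."""
    acts = [s["action"] for s in t.steps[-window:]]
    return len(acts) == window and len(set(acts)) == 1

def execution_failure_rate(t: Trajectory) -> float:
    """Execution signal: fraction of steps whose tool call errored."""
    return sum(s.get("error", False) for s in t.steps) / max(1, len(t.steps))

def triage(trajs: list[Trajectory], k: int = 100) -> list[Trajectory]:
    """Surface the k trajectories with the strongest failure signals for review."""
    return sorted(trajs,
                  key=lambda t: (stagnation(t), execution_failure_rate(t)),
                  reverse=True)[:k]
```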
### **Analytical Commentary: *Signals: Trajectory Sampling and Triage for Agentic Interactions* in AI & Technology Law**

The paper's *signal-based triage framework* for agentic AI interactions introduces efficiency gains in post-deployment monitoring—a critical legal and operational concern. **In the U.S.**, where AI governance emphasizes risk-based regulation (e.g., the NIST AI Risk Management Framework and emerging sector-specific rules), this method could mitigate compliance burdens by prioritizing high-risk trajectories for review, aligning with the transparency emphasis of the Biden administration's *Executive Order on AI*. **South Korea's approach**, under the proposed *AI Act* and the *Personal Information Protection Act (PIPA)*, would likely scrutinize the framework's data minimization and purpose limitation—especially if signals involve personal data—while appreciating its role in reducing human review costs in high-stakes sectors like finance. **Internationally**, the framework resonates with the *OECD AI Principles* (transparency, accountability) and the *G7 Hiroshima AI Process*, though jurisdictions like the EU may demand stricter auditing standards under the *AI Act's* high-risk classification. The paper's taxonomy of signals (e.g., "misalignment," "stagnation") could also inform *algorithmic accountability laws* (e.g., NYC Local Law 144), where failure detection is legally salient.
### **Expert Analysis: Implications for AI Liability & Autonomous Systems Practitioners**

This paper introduces a **trajectory triage framework** that could significantly impact AI liability frameworks by improving post-deployment monitoring and accountability for autonomous agentic systems. The proposed "signal-based" approach (e.g., detecting misalignment, stagnation, or failure loops) aligns with **negligence-based liability standards** (e.g., *Restatement (Third) of Torts § 3*) by enabling proactive risk mitigation. If deployed in safety-critical domains (e.g., healthcare, finance, or robotics), this method could help satisfy **duty-of-care obligations** under product liability law (e.g., *Restatement (Third) of Products Liability § 1*) by demonstrating reasonable post-market surveillance. Additionally, the taxonomy of failure modes (e.g., stagnation, exhaustion) mirrors **regulatory expectations** in AI governance, such as the EU AI Act's emphasis on **continuous monitoring (Art. 61)** and **risk management (Annex III)**. Practitioners should consider whether such triage systems could serve as **evidence of due diligence** in litigation, particularly in cases involving AI-driven decision-making where failure to detect harmful trajectories could lead to liability under **strict product liability** or **premises liability** doctrines.
Artificial Intelligence and International Law: Legal Implications of AI Development and Global Regulation
This paper examines the legal implications of artificial intelligence (AI) development within the framework of public international law. Employing a doctrinal and comparative legal methodology, it surveys the principal international and regional regulatory instruments currently governing AI — including the...
Large Language Models Unpack Complex Political Opinions through Target-Stance Extraction
arXiv:2603.23531v1 Announce Type: new Abstract: Political polarization emerges from a complex interplay of beliefs about policies, figures, and issues. However, most computational analyses reduce discourse to coarse partisan labels, overlooking how these beliefs interact. This is especially evident in online...
Thinking with Tables: Enhancing Multi-Modal Tabular Understanding via Neuro-Symbolic Reasoning
arXiv:2603.24004v1 Announce Type: new Abstract: Multimodal Large Language Models (MLLMs) have demonstrated remarkable reasoning capabilities across modalities such as images and text. However, tabular data, despite being a critical real-world modality, remains relatively underexplored in multimodal learning. In this paper,...
Beyond Accuracy: Introducing a Symbolic-Mechanistic Approach to Interpretable Evaluation
arXiv:2603.23517v1 Announce Type: new Abstract: Accuracy-based evaluation cannot reliably distinguish genuine generalization from shortcuts like memorization, leakage, or brittle heuristics, especially in small-data regimes. In this position paper, we argue for mechanism-aware evaluation that combines task-relevant symbolic rules with mechanistic...
Residual Attention Physics-Informed Neural Networks for Robust Multiphysics Simulation of Steady-State Electrothermal Energy Systems
arXiv:2603.23578v1 Announce Type: new Abstract: Efficient thermal management and precise field prediction are critical for the design of advanced energy systems, including electrohydrodynamic transport, microfluidic energy harvesters, and electrically driven thermal regulators. However, the steady-state simulation of these electrothermal coupled...
LineMVGNN: Anti-Money Laundering with Line-Graph-Assisted Multi-View Graph Neural Networks
arXiv:2603.23584v1 Announce Type: new Abstract: Anti-money laundering (AML) systems are important for protecting the global economy. However, conventional rule-based methods rely on domain knowledge, leading to suboptimal accuracy and a lack of scalability. Graph neural networks (GNNs) for digraphs (directed...
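The line-graph transform in the title is standard graph machinery: every directed edge (a transaction) becomes a node, and two transactions are linked when one's receiver is the other's sender. A minimal networkx illustration, independent of the paper's GNN:

```python
import networkx as nx

# Toy transaction digraph: accounts as nodes, transfers as directed edges.
G = nx.DiGraph()
G.add_edges_from([("A", "B"), ("B", "C"), ("B", "D"), ("C", "A")])

# In the directed line graph, each transfer becomes a node; an arc
# (u, v) -> (v, w) links transfers that chain through account v --
# exactly the money-flow paths AML models try to score.
L = nx.line_graph(G)
print(sorted(L.edges()))
# e.g. (('A', 'B'), ('B', 'C')) captures funds moving A -> B -> C
```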
Steering Code LLMs with Activation Directions for Language and Library Control
arXiv:2603.23629v1 Announce Type: new Abstract: Code LLMs often default to particular programming languages and libraries under neutral prompts. We investigate whether these preferences are encoded as approximately linear directions in activation space that can be manipulated at inference time. Using...
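The intervention the abstract describes, adding an approximately linear direction to activations at inference time, is commonly implemented as a forward hook. A generic PyTorch sketch; the layer path and the `py_dir` vector in the usage comment are purely illustrative, not the paper's code.

```python
import torch

def make_steering_hook(direction: torch.Tensor, alpha: float = 4.0):
    """Add alpha * (unit direction) to every hidden state the hooked module emits."""
    unit = direction / direction.norm()
    def hook(module, inputs, output):
        # output: (batch, seq_len, hidden); broadcasting adds the direction
        # to every position, steering generation toward the target concept.
        return output + alpha * unit
    return hook

# Usage (illustrative): steer a mid-layer MLP of a decoder-only code model
# toward a "Python" direction estimated from contrastive prompt activations.
# handle = model.transformer.h[20].mlp.register_forward_hook(make_steering_hook(py_dir))
# ...generate...
# handle.remove()
```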
The Luna Bound Propagator for Formal Analysis of Neural Networks
arXiv:2603.23878v1 Announce Type: new Abstract: The parameterized CROWN analysis, a.k.a. alpha-CROWN, has emerged as a practically successful bound propagation method for neural network verification. However, existing implementations of alpha-CROWN are limited to Python, which complicates integration into existing DNN verifiers
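The abstract names CROWN-style bound propagation; its simplest relative, interval bound propagation (IBP), conveys the core idea of pushing boxes through a network. A minimal sketch of IBP, not of alpha-CROWN or Luna's actual API:

```python
import numpy as np

def affine_interval(W: np.ndarray, b: np.ndarray, l: np.ndarray, u: np.ndarray):
    """Propagate an input box [l, u] through the affine layer y = W x + b.

    Splitting W into positive and negative parts gives tight elementwise
    bounds: positive weights pull from the same endpoint, negative weights
    from the opposite one. CROWN tightens this further with linear
    relaxations of the activations.
    """
    W_pos, W_neg = np.clip(W, 0, None), np.clip(W, None, 0)
    lower = W_pos @ l + W_neg @ u + b
    upper = W_pos @ u + W_neg @ l + b
    return lower, upper

def relu_interval(l: np.ndarray, u: np.ndarray):
    """ReLU is monotone, so the box simply clips at zero."""
    return np.maximum(l, 0), np.maximum(u, 0)
```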
This article highlights a significant technical advancement in neural network verification with the introduction of "Luna," a C++ implementation of bound propagation methods. For AI & Technology Law, this signals a growing emphasis on **verifiability and explainability of AI systems**, particularly in high-stakes applications. Improved tools like Luna could become critical for demonstrating compliance with future AI regulations requiring robust safety, reliability, and transparency, impacting legal due diligence and liability assessments for AI developers and deployers.
The development of Luna, a C++-based bound propagator for neural network verification, offers significant implications for AI & Technology Law, particularly concerning the burgeoning regulatory focus on AI safety, robustness, and explainability. Its enhanced integration capabilities and efficiency could become a critical tool for demonstrating compliance in various jurisdictions.

In the **United States**, the emphasis on "responsible AI" frameworks from NIST and executive orders suggests that tools like Luna could be pivotal for companies seeking to prove the safety and reliability of their AI systems, especially in high-risk applications like autonomous vehicles or medical devices. The ability to formally analyze neural networks for robustness against adversarial attacks or out-of-distribution inputs directly addresses concerns raised by agencies like the FDA or NHTSA regarding AI-driven product liability and consumer protection. The C++ implementation's potential for production-level integration makes it particularly attractive for enterprises navigating evolving product liability standards where robust verification evidence could mitigate legal risk.

**South Korea**, with its robust regulatory push in AI, particularly through the AI Act and data protection laws, would likely view Luna as a valuable asset for fostering trustworthy AI. Korean regulators often prioritize transparency and accountability, and a formal verification tool that can demonstrate the bounds of a neural network's behavior aligns well with these objectives. For industries like finance or smart city infrastructure, where AI adoption is high and regulatory scrutiny is increasing, Luna could provide the necessary technical assurances to meet compliance requirements and build public trust.
This article on "Luna" presents a significant development for practitioners in AI liability. By offering a C++ implementation of advanced bound propagation methods (CROWN, alpha-CROWN), Luna facilitates more robust formal verification of neural networks. This directly addresses the "black box" problem in AI and strengthens a developer's defense against claims of negligence or design defect under product liability theories, as it provides a more robust means to demonstrate due diligence in validating AI system behavior, potentially aligning with emerging AI Act requirements for risk management systems.
Stochastic Dimension-Free Zeroth-Order Estimator for High-Dimensional and High-Order PINNs
arXiv:2603.24002v1 Announce Type: new Abstract: Physics-Informed Neural Networks (PINNs) for high-dimensional and high-order partial differential equations (PDEs) are primarily constrained by the $\mathcal{O}(d^k)$ spatial derivative complexity and the $\mathcal{O}(P)$ memory overhead of backpropagation (BP). While randomized spatial estimators successfully reduce...
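As background, zeroth-order (randomized) estimators of the kind the abstract builds on replace backpropagated derivatives with function evaluations along random directions, so the $\mathcal{O}(P)$ memory overhead of reverse mode never appears. A minimal Gaussian-smoothing sketch, not the paper's dimension-free estimator:

```python
import numpy as np

def zo_gradient(f, x: np.ndarray, eps: float = 1e-3, n_samples: int = 32) -> np.ndarray:
    """Estimate grad f(x) from function values only, via central differences
    along random Gaussian directions; unbiased since E[v v^T] = I."""
    g = np.zeros_like(x)
    for _ in range(n_samples):
        v = np.random.randn(*x.shape)
        g += (f(x + eps * v) - f(x - eps * v)) / (2 * eps) * v
    return g / n_samples

# Sanity check on f(x) = ||x||^2, whose gradient is 2x.
x = np.array([1.0, -2.0, 0.5])
print(zo_gradient(lambda z: float(z @ z), x, n_samples=2000))  # approx. [2, -4, 1]
```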
Beyond Preset Identities: How Agents Form Stances and Boundaries in Generative Societies
arXiv:2603.23406v1 Announce Type: new Abstract: While large language models simulate social behaviors, their capacity for stable stance formation and identity negotiation during complex interventions remains unclear. To overcome the limitations of static evaluations, this paper proposes a novel mixed-methods framework...
Whether, Not Which: Mechanistic Interpretability Reveals Dissociable Affect Reception and Emotion Categorization in LLMs
arXiv:2603.22295v1 Announce Type: new Abstract: Large language models appear to develop internal representations of emotion -- "emotion circuits," "emotion neurons," and structured emotional manifolds have been reported across multiple model families. But every study making these claims uses stimuli signalled...
Dynamic Fusion-Aware Graph Convolutional Neural Network for Multimodal Emotion Recognition in Conversations
arXiv:2603.22345v1 Announce Type: new Abstract: Multimodal emotion recognition in conversations (MERC) aims to identify and understand the emotions expressed by speakers during utterance interaction from multiple modalities (e.g., text, audio, and images). Existing studies have shown that GCNs can improve...
HyFI: Hyperbolic Feature Interpolation for Brain-Vision Alignment
arXiv:2603.22721v1 Announce Type: new Abstract: Recent progress in artificial intelligence has encouraged numerous attempts to understand and decode the human visual system from brain signals. These prior works typically align neural activity independently with semantic and perceptual features extracted from images...
When Language Models Lose Their Mind: The Consequences of Brain Misalignment
arXiv:2603.23091v1 Announce Type: new Abstract: While brain-aligned large language models (LLMs) have garnered attention for their potential as cognitive models and for their promise of enhanced safety and trustworthiness in AI, the role of this brain alignment for linguistic competence remains...