AI & Technology Law

LOW Academic International

A Theory of LLM Information Susceptibility

arXiv:2603.23626v1 Announce Type: new Abstract: Large language models (LLMs) are increasingly deployed as optimization modules in agentic systems, yet the fundamental limits of such LLM-mediated improvement remain poorly understood. Here we propose a theory of LLM information susceptibility, centred on...

1 min 3 weeks, 6 days ago
ai llm
LOW Academic European Union

Steering Code LLMs with Activation Directions for Language and Library Control

arXiv:2603.23629v1 Announce Type: new Abstract: Code LLMs often default to particular programming languages and libraries under neutral prompts. We investigate whether these preferences are encoded as approximately linear directions in activation space that can be manipulated at inference time. Using...
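The abstract's core idea, that a language or library preference behaves like an approximately linear direction in activation space which can be added or subtracted at inference time, is easy to sketch. The toy layer, the difference-in-means direction, and the steering strength below are illustrative stand-ins, not the paper's setup.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
hidden = 64

# Stand-in for one transformer block; a real experiment would hook a layer of
# a pretrained code LLM instead.
layer = nn.Linear(hidden, hidden)

# Hypothetical steering direction, e.g. the difference in mean activations
# between prompts answered in Python and prompts answered in Rust.
direction = torch.randn(hidden)
direction = direction / direction.norm()
alpha = 4.0  # steering strength; the sign flips the preference

def steer(module, inputs, output):
    # Shift every position's hidden state along the direction.
    return output + alpha * direction

handle = layer.register_forward_hook(steer)
x = torch.randn(2, 10, hidden)            # (batch, seq, hidden)
steered = layer(x)
handle.remove()
print((steered - layer(x)).norm(dim=-1).mean())  # ~alpha for a unit direction
```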

1 min 3 weeks, 6 days ago
ai llm
LOW Academic United States

An Invariant Compiler for Neural ODEs in AI-Accelerated Scientific Simulation

arXiv:2603.23861v1 Announce Type: new Abstract: Neural ODEs are increasingly used as continuous-time models for scientific and sensor data, but unconstrained neural ODEs can drift and violate domain invariants (e.g., conservation laws), yielding physically implausible solutions. In turn, this can compound...

News Monitor (1_14_4)

This article highlights the development of an "invariant compiler" that uses an LLM-driven workflow to ensure Neural Ordinary Differential Equations (NODEs) adhere to physical laws, preventing "physically implausible solutions." For AI & Technology Law, this signals a growing emphasis on **AI reliability, trustworthiness, and explainability**, particularly in high-stakes scientific and industrial applications. The concept of "invariance by construction" could become a crucial technical safeguard against AI errors, potentially influencing future **regulatory requirements for AI safety and robustness**, especially in sectors like autonomous systems, healthcare, and critical infrastructure where verifiable adherence to physical laws is paramount.
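The snippet doesn't spell out the compiler's construction, but one standard way to get "invariance by construction" rather than a soft penalty is to project the learned vector field onto the tangent space of the invariant's level set, so that dH/dt = ⟨∇H, f(x)⟩ = 0 identically. The sketch below uses that projection trick with an illustrative energy-like invariant; it is not necessarily the paper's mechanism.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

def H(x):
    # Illustrative invariant: quadratic "energy" of a 2-D state.
    return 0.5 * (x ** 2).sum(-1)

raw_field = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 2))

def f(x):
    # Projected dynamics: remove the component of the raw field along grad H,
    # so <grad H, f(x)> = 0 and H is conserved exactly along the flow.
    x = x.detach().clone().requires_grad_(True)
    g = torch.autograd.grad(H(x).sum(), x)[0]
    v = raw_field(x)
    coef = (v * g).sum(-1, keepdim=True) / (g * g).sum(-1, keepdim=True)
    return (v - coef * g).detach()

# Forward-Euler rollout: H stays near its initial value up to integrator error,
# by construction rather than by penalty.
x = torch.tensor([[1.0, 0.0]])
print("H at t=0:", H(x).item())
for _ in range(100):
    x = x + 0.01 * f(x)
print("H after 100 steps:", H(x).item())
```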

Commentary Writer (1_14_6)

## Analytical Commentary: The Invariant Compiler and its Impact on AI & Technology Law

The "invariant compiler" for Neural ODEs, as described in arXiv:2603.23861v1, is a development with significant implications for AI & Technology Law, particularly in the realm of AI safety, reliability, and accountability. By enforcing domain invariants (e.g., conservation laws) by construction rather than through soft penalties, this framework directly addresses a core challenge in deploying AI in high-stakes scientific and engineering applications: ensuring physically plausible and reliable outcomes. This shift from probabilistic enforcement to structural guarantee has profound legal ramifications across jurisdictions.

### Jurisdictional Comparison and Implications Analysis

The invariant compiler's emphasis on guaranteed adherence to fundamental principles resonates differently across legal frameworks, though the underlying push for reliable AI is universal. In the **US**, the focus on "reasonable care" and "foreseeability" in product liability and negligence claims would be significantly affected. Current legal standards grapple with the black-box nature of AI and the difficulty of proving that a specific design flaw led to an error; a system that *guarantees* adherence to invariants by design offers a more robust defense against claims of negligent design or failure to warn. Conversely, if a system *fails* despite using such a compiler, the plaintiff's burden of proof might shift to demonstrating a flaw in the invariant specification itself or in the compiler's implementation, rather than the model's general unpredictability...

AI Liability Expert (1_14_9)

This article introduces the "invariant compiler," a framework that enforces physical invariants in Neural ODEs by construction, preventing physically implausible solutions in AI-accelerated scientific simulations. For practitioners, this development significantly mitigates a key liability risk: the generation of erroneous or "drifting" outputs from AI models used in critical applications like engineering design or medical diagnostics. By guaranteeing adherence to conservation laws and other domain invariants, the invariant compiler could bolster defenses against product liability claims under theories such as negligent design (e.g., Restatement (Third) of Torts: Products Liability § 2) or breach of implied warranty of fitness for a particular purpose, as it directly addresses a known vulnerability that could lead to system failure or unsafe outcomes. Furthermore, it aligns with emerging AI regulatory principles, such as those in the EU AI Act, emphasizing robustness, accuracy, and control over AI systems to prevent harmful biases or errors.

Statutes: EU AI Act; Restatement (Third) of Torts: Products Liability § 2
1 min 3 weeks, 6 days ago
ai llm
LOW Academic European Union

The Luna Bound Propagator for Formal Analysis of Neural Networks

arXiv:2603.23878v1 Announce Type: new Abstract: The parameterized CROWN analysis, a.k.a., alpha-CROWN, has emerged as a practically successful bound propagation method for neural network verification. However, existing implementations of alpha-CROWN are limited to Python, which complicates integration into existing DNN verifiers...
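For readers unfamiliar with bound propagation, the toy interval version below (a much looser cousin of CROWN/alpha-CROWN, which propagate linear rather than constant bounds) shows what such a verifier certifies: output ranges that hold for every input inside a perturbation box. The network and perturbation radius are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(16, 4)), rng.normal(size=16)
W2, b2 = rng.normal(size=(2, 16)), rng.normal(size=2)

def affine_bounds(W, b, lo, hi):
    # Split W by sign so each output bound takes the worst-case input corner.
    Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

x, eps = rng.normal(size=4), 0.1
lo, hi = x - eps, x + eps                       # input perturbation box

lo, hi = affine_bounds(W1, b1, lo, hi)
lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)   # ReLU is monotone
lo, hi = affine_bounds(W2, b2, lo, hi)
print(lo, hi)  # sound (if loose) bounds on both outputs for the whole box
```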

News Monitor (1_14_4)

This article highlights a significant technical advancement in neural network verification with the introduction of "Luna," a C++ implementation of bound propagation methods. For AI & Technology Law, this signals a growing emphasis on **verifiability and explainability of AI systems**, particularly in high-stakes applications. Improved tools like Luna could become critical for demonstrating compliance with future AI regulations requiring robust safety, reliability, and transparency, impacting legal due diligence and liability assessments for AI developers and deployers.

Commentary Writer (1_14_6)

The development of Luna, a C++-based bound propagator for neural network verification, offers significant implications for AI & Technology Law, particularly concerning the burgeoning regulatory focus on AI safety, robustness, and explainability. Its enhanced integration capabilities and efficiency could become a critical tool for demonstrating compliance in various jurisdictions.

In the **United States**, the emphasis on "responsible AI" frameworks from NIST and executive orders suggests that tools like Luna could be pivotal for companies seeking to prove the safety and reliability of their AI systems, especially in high-risk applications like autonomous vehicles or medical devices. The ability to formally analyze neural networks for robustness against adversarial attacks or out-of-distribution inputs directly addresses concerns raised by agencies like the FDA or NHTSA regarding AI-driven product liability and consumer protection. The C++ implementation's potential for production-level integration makes it particularly attractive for enterprises navigating evolving product liability standards where robust verification evidence could mitigate legal risk.

**South Korea**, with its robust regulatory push in AI, particularly through the AI Act and data protection laws, would likely view Luna as a valuable asset for fostering trustworthy AI. Korean regulators often prioritize transparency and accountability, and a formal verification tool that can demonstrate the bounds of a neural network's behavior aligns well with these objectives. For industries like finance or smart city infrastructure, where AI adoption is high and regulatory scrutiny is increasing, Luna could provide the necessary technical assurances to meet compliance requirements and build public trust. Furthermore, given Korea's strong emphasis on...

AI Liability Expert (1_14_9)

This article on "Luna" presents a significant development for practitioners in AI liability. By offering a C++ implementation of advanced bound propagation methods (CROWN, alpha-CROWN), Luna facilitates more robust formal verification of neural networks. This directly addresses the "black box" problem in AI and strengthens a developer's defense against claims of negligence or design defect under product liability theories, as it provides a concrete means to demonstrate due diligence in validating AI system behavior, potentially aligning with emerging EU AI Act requirements for risk management systems.

1 min 3 weeks, 6 days ago
ai neural network
LOW Academic International

Diet Your LLM: Dimension-wise Global Pruning of LLMs via Merging Task-specific Importance Score

arXiv:2603.23985v1 Announce Type: new Abstract: Large language models (LLMs) have demonstrated remarkable capabilities, but their massive scale poses significant challenges for practical deployment. Structured pruning offers a promising solution by removing entire dimensions or layers, yet existing methods face critical...

News Monitor (1_14_4)

This article introduces "DIET," a novel training-free method for structured pruning of LLMs, significantly reducing their size and deployment costs while maintaining or improving performance. For AI & Technology Law, this research signals a trend towards more efficient and accessible AI, which could impact regulatory discussions around compute intensity, environmental sustainability of AI, and the democratization of advanced AI models. It also highlights the ongoing technical challenges and solutions in optimizing LLM deployment, which may influence future policy on AI development and responsible innovation.
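The snippet leaves DIET's exact recipe out, so the sketch below only illustrates the general shape of the idea: score each hidden dimension per task from a small calibration set, merge the task-specific scores, and drop the globally least important dimensions without retraining. The scoring rule and the max-merge are assumptions, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8                                   # hidden dimensions of one layer
W = rng.normal(size=(d, d))             # toy weight matrix

# Small per-task calibration activations (the paper mentions ~100 samples/task).
acts = {"task_a": rng.normal(size=(100, d)), "task_b": rng.normal(size=(100, d))}

def importance(W, A):
    # Assumed salience rule: weight mass on a dimension times its typical
    # activation magnitude on the task's calibration data.
    return np.abs(W).sum(axis=0) * np.abs(A).mean(axis=0)

# Merge task-specific scores; max keeps any dimension some task relies on.
merged = np.maximum.reduce([importance(W, A) for A in acts.values()])

keep = np.sort(np.argsort(merged)[d // 2:])   # keep the top half globally
W_pruned = W[:, keep]                         # remove entire input dimensions
print(W.shape, "->", W_pruned.shape)
```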

Commentary Writer (1_14_6)

The "Diet Your LLM" paper, introducing DIET for efficient LLM pruning, has significant implications for AI & Technology Law by potentially lowering the barrier to entry for LLM deployment and customization. This advancement could accelerate the adoption of specialized AI across various sectors, necessitating a re-evaluation of regulatory frameworks concerning AI development, deployment, and accountability. **Jurisdictional Comparison and Implications Analysis:** The DIET methodology, by reducing computational and training costs for specialized LLMs, could significantly impact the legal landscape across jurisdictions, albeit with differing emphasis. * **United States:** In the US, where innovation and market competition are highly valued, DIET could fuel a surge in specialized AI applications, particularly in regulated industries like healthcare and finance. This would intensify existing debates around data privacy (e.g., HIPAA, state privacy laws), algorithmic bias (given the potential for more tailored, and thus potentially more biased, models if not carefully constructed), and product liability for AI systems. The focus would likely be on how to foster innovation while ensuring consumer protection and responsible AI development through existing tort law and sector-specific regulations, rather than broad, prescriptive AI legislation. The "training-free" aspect of DIET might also reduce some of the data governance burdens associated with extensive retraining, shifting focus to the quality and representativeness of the initial "100 samples per task." * **South Korea:** South Korea, with its strong emphasis on data protection (Personal Information Protection Act

AI Liability Expert (1_14_9)

This article on DIET, a training-free structured pruning method for LLMs, has significant implications for practitioners in AI liability. By enabling more efficient and adaptable LLM deployment, DIET could reduce the "black box" problem associated with massive models, potentially mitigating claims under product liability theories like design defect (e.g., Restatement (Third) of Torts: Products Liability § 2). The ability to create task-specific, yet globally optimized, models via pruning may also strengthen arguments for reasonable care in development and deployment, which is crucial in negligence claims, particularly as regulatory bodies like the NIST AI Risk Management Framework emphasize explainability and transparency.

Statutes: Restatement (Third) of Torts: Products Liability § 2
1 min 3 weeks, 6 days ago
ai llm
LOW Academic International

Can we generate portable representations for clinical time series data using LLMs?

arXiv:2603.23987v1 Announce Type: new Abstract: Deploying clinical ML is slow and brittle: models that work at one hospital often degrade under distribution shifts at the next. In this work, we study a simple question -- can large language models (LLMs)...

1 min 3 weeks, 6 days ago
ai llm
LOW Academic International

Understanding the Challenges in Iterative Generative Optimization with LLMs

arXiv:2603.23994v1 Announce Type: new Abstract: Generative optimization uses large language models (LLMs) to iteratively improve artifacts (such as code, workflows or prompts) using execution feedback. It is a promising approach to building self-improving agents, yet in practice remains brittle: despite...
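The loop the abstract describes is simple to sketch, and the brittleness it studies usually lives in the two stubbed-out parts below: how execution feedback is summarized and whether the model actually uses it. `propose` is a stand-in for an LLM call, not any particular system.

```python
def propose(task: str, feedback: str | None) -> str:
    # Stub for an LLM call conditioned on the task, the previous attempt,
    # and its execution feedback.
    if feedback is None:
        return "def add(a, b):\n    return a - b"   # first (buggy) attempt
    return "def add(a, b):\n    return a + b"       # "repaired" attempt

def execute(code: str) -> str | None:
    # Run the candidate against a check; return an error message or None.
    ns: dict = {}
    try:
        exec(code, ns)
        result = ns["add"](2, 3)
        assert result == 5, f"add(2, 3) returned {result}"
        return None
    except Exception as e:
        return f"{type(e).__name__}: {e}"

feedback = None
for step in range(5):
    candidate = propose("implement add(a, b)", feedback)
    feedback = execute(candidate)
    print(f"step {step}:", "ok" if feedback is None else feedback)
    if feedback is None:
        break
```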

1 min 3 weeks, 6 days ago
ai llm
LOW Academic European Union

Stochastic Dimension-Free Zeroth-Order Estimator for High-Dimensional and High-Order PINNs

arXiv:2603.24002v1 Announce Type: new Abstract: Physics-Informed Neural Networks (PINNs) for high-dimensional and high-order partial differential equations (PDEs) are primarily constrained by the $\mathcal{O}(d^k)$ spatial derivative complexity and the $\mathcal{O}(P)$ memory overhead of backpropagation (BP). While randomized spatial estimators successfully reduce...
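As a flavor of what zeroth-order estimation buys for high-order operators: with Gaussian probes v ~ N(0, I), the average of (u(x + eps*v) + u(x - eps*v) - 2*u(x)) / eps^2 converges to tr(∇²u) = Δu, so a Laplacian can be estimated from function evaluations alone, with no backpropagation and no assembly of d² second derivatives. This is the generic estimator for illustration, not the paper's (which adds more machinery).

```python
import numpy as np

rng = np.random.default_rng(0)

def u(x):
    # Test function with a known Laplacian: u = sum(x_i^2)  =>  Δu = 2d.
    return (x ** 2).sum(axis=-1)

d, eps, samples = 1000, 1e-3, 4096
x = rng.normal(size=d)

# Randomized central differences along Gaussian directions.
v = rng.normal(size=(samples, d))
est = ((u(x + eps * v) + u(x - eps * v) - 2 * u(x)) / eps**2).mean()
print("estimated Laplacian:", est, "| exact:", 2 * d)
```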

1 min 3 weeks, 6 days ago
ai neural network
LOW News United Kingdom

Google unveils TurboQuant, a new AI memory compression algorithm — and yes, the internet is calling it ‘Pied Piper’

Google’s TurboQuant has the internet joking about Pied Piper from HBO's "Silicon Valley." The compression algorithm promises to shrink AI’s “working memory” by up to 6x, but it’s still just a lab experiment for now.
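TurboQuant's internals haven't been published, so the only honest illustration is generic: the sketch below shows plain 4-bit uniform quantization of a float32 cache block, the kind of precision-for-memory trade that "working memory" compression schemes make. Nothing here reflects Google's actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
kv = rng.normal(size=(8, 128)).astype(np.float32)   # toy cache block

scale = np.abs(kv).max() / 7                        # 4-bit signed range: -8..7
q = np.clip(np.round(kv / scale), -8, 7).astype(np.int8)
deq = q.astype(np.float32) * scale

# Storing 4-bit codes in int8 already halves memory vs float16; packing two
# codes per byte (not shown) reaches 8x vs float32, at some accuracy cost.
print("max abs reconstruction error:", np.abs(kv - deq).max())
```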

News Monitor (1_14_4)

This article has limited direct relevance to the AI & Technology Law practice area, but it tracks a development worth monitoring. Key legal developments: none directly mentioned, though AI compression algorithms like TurboQuant may raise future questions about data ownership, usage, and liability. Research findings: Google's new AI memory compression algorithm, TurboQuant, promises to shrink AI's "working memory" by up to 6x. Policy signals: none explicit, but such algorithms may influence future regulatory discussions around AI development and deployment.

Commentary Writer (1_14_6)

Google’s TurboQuant introduces a novel dimension to AI & Technology Law by potentially redefining efficiency benchmarks in AI infrastructure, specifically through memory compression. While the algorithm remains experimental, its implications for scalability, cost structures, and IP ownership of foundational AI tools warrant jurisdictional scrutiny. In the US, regulatory frameworks such as the FTC’s AI guidance and evolving patent doctrines may intersect with TurboQuant’s commercialization, particularly if claims of performance gains influence consumer or enterprise licensing terms. South Korea’s approach, via the Korea Intellectual Property Office’s (KIPO) proactive classification of AI-related inventions under “technical effect” criteria, may offer a more agile pathway for patent eligibility, contrasting with the US’s more litigation-driven validation process. Internationally, the EU AI Act’s risk-based classification could impose additional compliance burdens if TurboQuant’s deployment extends beyond research to commercial applications, creating a tripartite regulatory landscape: US enforcement-centric, Korean innovation-facilitating, and EU precautionary. Practitioners should monitor these divergent pathways, as the evolution of TurboQuant from lab experiment to deployable technology may catalyze differing legal precedents on IP, liability, and consumer protection across jurisdictions.

AI Liability Expert (1_14_9)

From an AI liability and autonomous systems perspective, the implications of Google’s TurboQuant remain primarily speculative at this stage, given that it is still a lab experiment. Practitioners should monitor potential downstream effects on AI scalability, energy efficiency, or deployment costs, as significant compression gains could influence product liability frameworks, particularly under product defect theories tied to performance or reliability (e.g., Restatement (Third) of Torts: Products Liability § 2). No case law yet links algorithmic compression to liability, but courts may extend liability to indirect consequences of algorithmic optimizations if they materially affect user safety or expectations. Regulatory bodies like the FTC or NIST may also expand guidance on AI transparency obligations as experimental compression technologies move toward commercialization.

Statutes: Restatement (Third) of Torts: Products Liability § 2
1 min 3 weeks, 6 days ago
ai algorithm
LOW News International

Melania Trump wants a robot to homeschool your child

The first lady sees AI and robotics playing a prominent role in the future of American education.

News Monitor (1_14_4)

This article has limited relevance to the AI & Technology Law practice area. However, it may signal a policy shift toward integrating AI and robotics in education, which could prompt future regulatory discussions or legislation on issues such as data protection, liability, and accessibility.

Commentary Writer (1_14_6)

The article’s framing of AI in education—specifically via Melania Trump’s advocacy—illustrates a broader cultural and policy convergence between technology-driven pedagogy and public perception, a theme gaining traction globally. In the U.S., regulatory engagement remains fragmented, with federal oversight largely deferring to state-level experimentation, creating a patchwork of standards for AI in K-12. South Korea, by contrast, integrates AI into national education curricula through centralized policy mandates and public-private partnerships, emphasizing scalability and equity. Internationally, UNESCO’s 2023 AI in Education Guidelines provide a normative benchmark, urging member states to balance innovation with ethical safeguards, thereby influencing domestic legislative trajectories in both the U.S. and Korea. Thus, while the article signals a symbolic shift toward AI-enabled education in the U.S., its practical impact hinges on the divergent regulatory architectures that govern implementation—ranging from decentralized innovation to centralized governance—with international frameworks acting as both a catalyst and a constraint.

AI Liability Expert (1_14_9)

The article’s implications for practitioners hinge on evolving legal frameworks governing AI in education. Practitioners should anticipate heightened scrutiny under existing product liability doctrine, such as § 402A of the Restatement (Second) of Torts, where AI systems cause harm through defective design or inadequate warnings. Courts are also increasingly applying traditional product liability principles, by analogy, to autonomous educational tools, which may inform liability for algorithmic bias or pedagogical failures. Compliance with anticipatory regulatory guidance and risk mitigation through transparent algorithmic governance therefore becomes critical.

Statutes: Restatement (Second) of Torts § 402A
1 min 3 weeks, 6 days ago
ai robotics
LOW News International

Meta turns to AI to make shopping easier on Instagram and Facebook

Meta is using generative AI to provide more product and brand information to consumers when they're shopping in its apps.

News Monitor (1_14_4)

The article highlights a key development at the intersection of AI and consumer protection law, as Meta leverages generative AI to enhance shopping experiences within its platforms. This move raises questions about data privacy, transparency, and potential biases in AI-driven product information. The use of generative AI in e-commerce also signals a growing trend in the tech industry, underscoring the need for regulators and lawmakers to address the implications of AI for consumer rights and online commerce.

Commentary Writer (1_14_6)

Meta’s deployment of generative AI to enhance shopping experiences on Instagram and Facebook intersects with evolving regulatory landscapes across jurisdictions. In the U.S., the FTC’s scrutiny of algorithmic transparency and consumer protection principles—particularly around deceptive content—creates a regulatory lens through which Meta’s AI-driven marketing must be evaluated. In South Korea, the Personal Information Protection Act and the Fair Trade Commission’s active enforcement of digital platform accountability impose stricter obligations on data usage and algorithmic influence, demanding heightened disclosure and consumer consent mechanisms. Internationally, the EU’s AI Act imposes a risk-based framework that categorizes generative AI applications as limited or high-risk, potentially restricting deployment without compliance certifications, thereby creating a divergent compliance burden. Collectively, these approaches underscore a growing trend: AI’s integration into commercial platforms triggers jurisdictional regulatory divergence, obligating multinational operators to adopt layered compliance strategies tailored to local consumer protection, data governance, and algorithmic accountability norms.

AI Liability Expert (1_14_9)

The increasing use of generative AI in e-commerce platforms like Meta's Instagram and Facebook raises concerns about AI liability and product liability. In the United States, the Consumer Product Safety Act (CPSA) and the Magnuson-Moss Warranty Act impose liability on manufacturers for defects and misrepresentations in products. Notably, in Seely v. White Motor Co. (1965), the California Supreme Court drew the line between warranty recovery for purely economic loss and strict tort liability for physical harm, a distinction likely to matter where AI-generated product information causes only economic injury. This development also highlights the need for clear guidelines on AI-generated content, in the spirit of the European Union's General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA). As generative AI becomes more prevalent, practitioners must weigh the risks and liabilities associated with AI-generated product information, including its accuracy, reliability, and potential for misrepresentation.

Statutes: CPSA; Magnuson-Moss Warranty Act; GDPR; CCPA
Cases: Seely v. White Motor Co. (Cal. 1965)
1 min 3 weeks, 6 days ago
ai generative ai
LOW Academic International

Sparse but Critical: A Token-Level Analysis of Distributional Shifts in RLVR Fine-Tuning of LLMs

arXiv:2603.22446v1 Announce Type: new Abstract: Reinforcement learning with verifiable rewards (RLVR) has significantly improved reasoning in large language models (LLMs), yet the token-level mechanisms underlying these improvements remain unclear. We present a systematic empirical study of RLVR's distributional effects organized...

1 min 4 weeks ago
ai llm
LOW Academic International

The Efficiency Attenuation Phenomenon: A Computational Challenge to the Language of Thought Hypothesis

arXiv:2603.22312v1 Announce Type: new Abstract: This paper computationally investigates whether thought requires a language-like format, as posited by the Language of Thought (LoT) hypothesis. We introduce the ``AI Private Language'' thought experiment: if two artificial agents develop an efficient, inscrutable...

1 min 4 weeks ago
ai ai ethics
LOW Academic International

Explanation Generation for Contradiction Reconciliation with LLMs

arXiv:2603.22735v1 Announce Type: new Abstract: Existing NLP work commonly treats contradictions as errors to be resolved by choosing which statements to accept or discard. Yet a key aspect of human reasoning in social interactions and professional domains is the ability...

1 min 4 weeks ago
ai llm
LOW Academic International

Ran Score: a LLM-based Evaluation Score for Radiology Report Generation

arXiv:2603.22935v1 Announce Type: new Abstract: Chest X-ray report generation and automated evaluation are limited by poor recognition of low-prevalence abnormalities and inadequate handling of clinically important language, including negation and ambiguity. We develop a clinician-guided framework combining human expertise and...

1 min 4 weeks ago
ai llm
LOW Academic European Union

Whether, Not Which: Mechanistic Interpretability Reveals Dissociable Affect Reception and Emotion Categorization in LLMs

arXiv:2603.22295v1 Announce Type: new Abstract: Large language models appear to develop internal representations of emotion -- "emotion circuits," "emotion neurons," and structured emotional manifolds have been reported across multiple model families. But every study making these claims uses stimuli signalled...

1 min 4 weeks ago
ai llm
LOW Academic European Union

HyFI: Hyperbolic Feature Interpolation for Brain-Vision Alignment

arXiv:2603.22721v1 Announce Type: new Abstract: Recent progress in artificial intelligence has encouraged numerous attempts to understand and decode human visual system from brain signals. These prior works typically align neural activity independently with semantic and perceptual features extracted from images...

1 min 4 weeks ago
ai artificial intelligence
LOW Academic International

JFTA-Bench: Evaluate LLM's Ability of Tracking and Analyzing Malfunctions Using Fault Trees

arXiv:2603.22978v1 Announce Type: new Abstract: In the maintenance of complex systems, fault trees are used to locate problems and provide targeted solutions. To enable fault trees stored as images to be directly processed by large language models, which can assist...

1 min 4 weeks ago
ai llm
LOW Academic International

PersonalQ: Select, Quantize, and Serve Personalized Diffusion Models for Efficient Inference

arXiv:2603.22943v1 Announce Type: new Abstract: Personalized text-to-image generation lets users fine-tune diffusion models into repositories of concept-specific checkpoints, but serving these repositories efficiently is difficult for two reasons: natural-language requests are often ambiguous and can be misrouted to visually similar...

1 min 4 weeks ago
ai llm
LOW Academic International

Can LLM Agents Generate Real-World Evidence? Evaluating Observational Studies in Medical Databases

arXiv:2603.22767v1 Announce Type: new Abstract: Observational studies can yield clinically actionable evidence at scale, but executing them on real-world databases is open-ended and requires coherent decisions across cohort construction, analysis, and reporting. Prior evaluations of LLM agents emphasize isolated steps...

1 min 4 weeks ago
ai llm
LOW Academic International

Evaluating Prompting Strategies for Chart Question Answering with Large Language Models

arXiv:2603.22288v1 Announce Type: new Abstract: Prompting strategies affect LLM reasoning performance, but their role in chart-based QA remains underexplored. We present a systematic evaluation of four widely used prompting paradigms (Zero-Shot, Few-Shot, Zero-Shot Chain-of-Thought, and Few-Shot Chain-of-Thought) across GPT-3.5, GPT-4,...
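For readers outside NLP, the four paradigms differ only in what the prompt contains. Generic stand-in templates (not the paper's exact wording, which the snippet doesn't give) look like this:

```python
# Illustrative templates for the four prompting paradigms the paper compares.
QUESTION = "Which month has the highest revenue in the chart?"

prompts = {
    "zero_shot": f"{QUESTION}\nAnswer:",
    "few_shot": (
        "Q: Which quarter had the lowest cost?\nA: Q3\n\n"
        f"Q: {QUESTION}\nA:"
    ),
    "zero_shot_cot": f"{QUESTION}\nLet's think step by step.",
    "few_shot_cot": (
        "Q: Which quarter had the lowest cost?\n"
        "A: Comparing the four bars, Q3 is the shortest, so Q3.\n\n"
        f"Q: {QUESTION}\nA:"
    ),
}
for name, p in prompts.items():
    print(f"--- {name} ---\n{p}\n")
```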

1 min 4 weeks ago
ai llm
LOW Academic International

Improving Safety Alignment via Balanced Direct Preference Optimization

arXiv:2603.22829v1 Announce Type: new Abstract: With the rapid development and widespread application of Large Language Models (LLMs), their potential safety risks have attracted widespread attention. Reinforcement Learning from Human Feedback (RLHF) has been adopted to enhance the safety performance of...
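The snippet doesn't show the paper's balancing term, so the sketch below gives only the standard DPO objective it builds on: increase the policy's log-probability margin, measured against a frozen reference model, for the human-preferred response over the rejected one.

```python
import torch
import torch.nn.functional as F

def dpo_loss(logp_chosen, logp_rejected, ref_chosen, ref_rejected, beta=0.1):
    # Log-prob ratios of the policy vs the frozen reference for each response;
    # the loss pushes the chosen-minus-rejected margin up.
    margin = (logp_chosen - ref_chosen) - (logp_rejected - ref_rejected)
    return -F.logsigmoid(beta * margin).mean()

# Toy batch of sequence log-probabilities (policy and reference).
lp_c, lp_r = torch.tensor([-12.0, -9.5]), torch.tensor([-11.0, -13.0])
rf_c, rf_r = torch.tensor([-12.5, -10.0]), torch.tensor([-10.5, -12.0])
print(dpo_loss(lp_c, lp_r, rf_c, rf_r))
```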

1 min 4 weeks ago
ai llm
LOW Academic United Kingdom

On the use of Aggregation Operators to improve Human Identification using Dental Records

arXiv:2603.23003v1 Announce Type: new Abstract: The comparison of dental records is a standardized technique in forensic dentistry used to speed up the identification of individuals in multiple-comparison scenarios. Specifically, the odontogram comparison is a procedure to compute criteria that will...

1 min 4 weeks ago
ai machine learning
LOW Academic International

Understanding LLM Performance Degradation in Multi-Instance Processing: The Roles of Instance Count and Context Length

arXiv:2603.22608v1 Announce Type: new Abstract: Users often rely on Large Language Models (LLMs) for processing multiple documents or performing analysis over a number of instances. For example, analysing the overall sentiment of a number of movie reviews requires an LLM...

1 min 4 weeks ago
ai llm
LOW Academic International

Optimizing Small Language Models for NL2SQL via Chain-of-Thought Fine-Tuning

arXiv:2603.22942v1 Announce Type: new Abstract: Translating Natural Language to SQL (NL2SQL) remains a critical bottleneck for democratization of data in enterprises. Although Large Language Models (LLMs) like Gemini 2.5 and other LLMs have demonstrated impressive zero-shot capabilities, their high inference...
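Chain-of-thought fine-tuning for NL2SQL typically means supervising the small model on a short reasoning trace before the SQL, roughly like the record below. The field names and serialization are illustrative assumptions, not the paper's format.

```python
# Hypothetical fine-tuning record for CoT-style NL2SQL supervision.
record = {
    "schema": "CREATE TABLE orders(id INT, customer TEXT, total REAL, day DATE);",
    "question": "What is the total revenue per customer?",
    "cot": "Revenue per customer means grouping orders by customer and summing total.",
    "sql": "SELECT customer, SUM(total) FROM orders GROUP BY customer;",
}

# Serialize into a single training prompt/target pair.
prompt = f"{record['schema']}\n-- {record['question']}\nReasoning:"
target = f" {record['cot']}\nSQL: {record['sql']}"
print(prompt + target)
```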

1 min 4 weeks ago
ai llm
LOW Academic International

Synthetic or Authentic? Building Mental Patient Simulators from Longitudinal Evidence

arXiv:2603.22704v1 Announce Type: new Abstract: Patient simulation is essential for developing and evaluating mental health dialogue systems. As most existing approaches rely on snapshot-style prompts with limited profile information, homogeneous behaviors and incoherent disease progression in multi-turn interactions have become...

1 min 4 weeks ago
ai llm
LOW Academic International

MedCausalX: Adaptive Causal Reasoning with Self-Reflection for Trustworthy Medical Vision-Language Models

arXiv:2603.23085v1 Announce Type: new Abstract: Vision-Language Models (VLMs) have enabled interpretable medical diagnosis by integrating visual perception with linguistic reasoning. Yet, existing medical chain-of-thought (CoT) models lack explicit mechanisms to represent and enforce causal reasoning, leaving them vulnerable to spurious...

1 min 4 weeks ago
ai autonomous
LOW Academic International

Detecting Non-Membership in LLM Training Data via Rank Correlations

arXiv:2603.22707v1 Announce Type: new Abstract: As large language models (LLMs) are trained on increasingly vast and opaque text corpora, determining which data contributed to training has become essential for copyright enforcement, compliance auditing, and user trust. While prior work focuses...
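The snippet names rank correlations as the signal but not the construction, so this sketch only shows the raw ingredient: Spearman's rho between the per-token losses two models assign to a candidate document. How that statistic is calibrated into a non-membership decision is the paper's contribution and is not reproduced here; the numbers below are synthetic.

```python
import numpy as np

def spearman(a, b):
    # Spearman's rho = Pearson correlation of the rank vectors.
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    ra, rb = ra - ra.mean(), rb - rb.mean()
    return float((ra * rb).sum() / np.sqrt((ra ** 2).sum() * (rb ** 2).sum()))

rng = np.random.default_rng(0)
# Synthetic per-token losses a target and a reference model assign to one doc.
target_losses = rng.gamma(2.0, size=200)
reference_losses = 0.6 * target_losses + 0.4 * rng.gamma(2.0, size=200)

print("Spearman rho:", round(spearman(target_losses, reference_losses), 3))
# A detector would threshold a statistic like rho, calibrated on documents
# known to be outside the corpus, to claim non-membership.
```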

1 min 4 weeks ago
ai llm
LOW Academic European Union

Beyond Preset Identities: How Agents Form Stances and Boundaries in Generative Societies

arXiv:2603.23406v1 Announce Type: new Abstract: While large language models simulate social behaviors, their capacity for stable stance formation and identity negotiation during complex interventions remains unclear. To overcome the limitations of static evaluations, this paper proposes a novel mixed-methods framework...

1 min 4 weeks ago
ai bias
LOW Academic International

Who Spoke What When? Evaluating Spoken Language Models for Conversational ASR with Semantic and Overlap-Aware Metrics

arXiv:2603.22709v1 Announce Type: new Abstract: Conversational automatic speech recognition remains challenging due to overlapping speech, far-field noise, and varying speaker counts. While recent LLM-based systems perform well on single-speaker benchmarks, their robustness in multi-speaker settings is unclear. We systematically compare...

1 min 4 weeks ago
ai llm

Impact Distribution

Critical: 0
High: 57
Medium: 938
Low: 4987