Cognitively Layered Data Synthesis for Domain Adaptation of LLMs to Space Situational Awareness
arXiv:2603.09231v1 Announce Type: new Abstract: Large language models (LLMs) demonstrate exceptional performance on general-purpose tasks. However, transferring them to complex engineering domains such as space situational awareness (SSA) remains challenging owing to insufficient structural alignment with mission chains, the absence...
The Confidence Gate Theorem: When Should Ranked Decision Systems Abstain?
arXiv:2603.09947v1 Announce Type: new Abstract: Ranked decision systems -- recommenders, ad auctions, clinical triage queues -- must decide when to intervene in ranked outputs and when to abstain. We study when confidence-based abstention monotonically improves decision quality, and when it...
Model Merging in the Era of Large Language Models: Methods, Applications, and Future Directions
arXiv:2603.09938v1 Announce Type: new Abstract: Model merging has emerged as a transformative paradigm for combining the capabilities of multiple neural networks into a single unified model without additional training. With the rapid proliferation of fine-tuned large language models~(LLMs), merging techniques...
BiCLIP: Domain Canonicalization via Structured Geometric Transformation
arXiv:2603.08942v1 Announce Type: cross Abstract: Recent advances in vision-language models (VLMs) have demonstrated remarkable zero-shot capabilities, yet adapting these models to specialized domains remains a significant challenge. Building on recent theoretical insights suggesting that independently trained VLMs are related by...
MAPLE: Elevating Medical Reasoning from Statistical Consensus to Process-Led Alignment
arXiv:2603.08987v1 Announce Type: new Abstract: Recent advances in medical large language models have explored Test-Time Reinforcement Learning (TTRL) to enhance reasoning. However, standard TTRL often relies on majority voting (MV) as a heuristic supervision signal, which can be unreliable in...
Democratising Clinical AI through Dataset Condensation for Classical Clinical Models
arXiv:2603.09356v1 Announce Type: new Abstract: Dataset condensation (DC) learns a compact synthetic dataset that enables models to match the performance of full-data training, prioritising utility over distributional fidelity. While typically explored for computational efficiency, DC also holds promise for healthcare...
Lying to Win: Assessing LLM Deception through Human-AI Games and Parallel-World Probing
arXiv:2603.07202v1 Announce Type: new Abstract: As Large Language Models (LLMs) transition into autonomous agentic roles, the risk of deception (defined behaviorally as the systematic provision of false information to satisfy external incentives) poses a significant challenge to AI safety. Existing benchmarks often...
RILEC: Detection and Generation of L1 Russian Interference Errors in English Learner Texts
arXiv:2603.07366v1 Announce Type: new Abstract: Many errors in student essays can be explained by influence from the native language (L1). L1 interference refers to errors influenced by a speaker's first language, such as using stadion instead of stadium, reflecting lexical...
Switchable Activation Networks
arXiv:2603.06601v1 Announce Type: new Abstract: Deep neural networks, and more recently large-scale generative models such as large language models (LLMs) and large vision-action models (LVAs), achieve remarkable performance across diverse domains, yet their prohibitive computational cost hinders deployment in resource-constrained...
Orion: Characterizing and Programming Apple's Neural Engine for LLM Training and Inference
arXiv:2603.06728v1 Announce Type: new Abstract: Over two billion Apple devices ship with a Neural Processing Unit (NPU) - the Apple Neural Engine (ANE) - yet this accelerator remains largely unused for large language model workloads. CoreML, Apple's public ML framework,...
Don't Freeze, Don't Crash: Extending the Safe Operating Range of Neural Navigation in Dense Crowds
arXiv:2603.06729v1 Announce Type: new Abstract: Navigating safely through dense crowds requires collision avoidance that generalizes beyond the densities seen during training. Learning-based crowd navigation can break under out-of-distribution crowd sizes due to density-sensitive observation normalization and social-cost scaling, while analytical...
NOTAI.AI: Explainable Detection of Machine-Generated Text via Curvature and Feature Attribution
arXiv:2603.05617v1 Announce Type: new Abstract: We present NOTAI.AI, an explainable framework for machine-generated text detection that extends Fast-DetectGPT by integrating curvature-based signals with neural and stylometric features in a supervised setting. The system combines 17 interpretable features, including Conditional Probability...
Governing artificial intelligence: ethical, legal and technical opportunities and challenges
This paper is the introduction to the special issue entitled: ‘Governing artificial intelligence: ethical, legal and technical opportunities and challenges'. Artificial intelligence (AI) increasingly permeates every aspect of our society, from the critical, like urban infrastructure, law enforcement, banking, healthcare...
Artificial Intelligence Crime: An Interdisciplinary Analysis of Foreseeable Threats and Solutions
Artificial intelligence (AI) research and regulation seek to balance the benefits of innovation against any potential harms and disruption. However, one unintended consequence of the recent surge in AI research is the potential re-orientation of AI technologies to facilitate criminal...
A Right to Reasonable Inferences: Re-Thinking Data Protection Law in the Age of Big Data and AI
Big Data analytics and artificial intelligence (AI) draw non-intuitive and unverifiable inferences and predictions about the behaviors, preferences, and private lives of individuals. These inferences draw on highly diverse and feature-rich data of unpredictable value, and create new opportunities for...
Teaching fairness to artificial intelligence: Existing and novel strategies against algorithmic discrimination under EU law
Empirical evidence is mounting that artificial intelligence applications threaten to discriminate against legally protected groups. This raises intricate questions for EU law. The existing categories of EU anti-discrimination law do not provide an easy fit for algorithmic decision making. Furthermore,...
AI copyright policy considerations for Botswana and South Africa – Compensation for starving artists feeding generative AI
The balancing act which domestic intellectual property policy is now challenged to strike is between fostering growth in technological innovation and incentivising creative labour. Ordinarily, these two considerations should not be mutually exclusive, but generative artificial intelligence (Gen AI) has...
Generative AI in fashion design creation: a copyright analysis of AI-assisted designs
Abstract The growing use of generative artificial intelligence (gen-AI) technology in design creation offers a valuable tool for increasing efficiency and for widening the creative perspectives of fashion designers. However, adopting AI tools in the fashion design process raises important...
The International Regulation of Artificial Intelligence Influence on the Information Law of Ukraine
The article is devoted to the influence of international regulation of artificial intelligence on the Information Law of Ukraine. It was noted that the principles of regulation of artificial intelligence should be reflected in the Information Law of Ukraine. Based on...
Natural Language, Legal Hurdles: Navigating the Complexities in Natural Language Processing Development and Application
This article delves into the legal challenges faced in developing and deploying Natural Language Processing (NLP) technologies, focusing particularly on the European Union’s legal framework, especially the DSM Directive, the InfoSoc Directive, and the Artificial Intelligence Act. It addresses the...
Copyright and AI training data—transparency to the rescue?
Abstract Generative Artificial Intelligence (AI) models must be trained on vast quantities of data, much of which is composed of copyrighted material. However, AI developers frequently use such content without seeking permission from rightsholders, leading to calls for requirements to...
In Defence of Principlism in AI Ethics and Governance
Artificial Intelligence in Business Law: Navigating Regulation, Ethics, and Governance
Abstract: This chapter examines the transformative role of artificial intelligence (AI) in business law, focusing on the regulatory, ethical, and governance challenges it presents. As AI applications in legal processes grow—ranging from compliance automation and contract management to risk assessment...
Putting AI Ethics into Practice: The Hourglass Model of Organizational AI Governance
The organizational use of artificial intelligence (AI) has rapidly spread across various sectors. Alongside the awareness of the benefits brought by AI, there is a growing consensus on the necessity of tackling the risks and potential harms, such as bias...
AI Governance by Human Rights-Centred Design, Deliberation and Oversight: An End to Ethics Washing
Balancing Privacy and Progress: A Review of Privacy Challenges, Systemic Oversight, and Patient Perceptions in AI-Driven Healthcare
Integrating Artificial Intelligence (AI) in healthcare represents a transformative shift with substantial potential for enhancing patient care. This paper critically examines this integration, confronting significant ethical, legal, and technological challenges, particularly in patient privacy, decision-making autonomy, and data integrity. A...
TDM copyright for AI in Europe: a view from Portugal
Abstract The development of artificial intelligence (AI) has justified the introduction at the level of the European Union (EU) of a new copyright exception regarding text and data mining (TDM) for purposes of scientific research conducted by research organizations and entities...
Bias Preservation in Machine Learning: The Legality of Fairness Metrics Under EU Non-Discrimination Law
Four Responsibility Gaps with Artificial Intelligence: Why they Matter and How to Address them
Abstract The notion of "responsibility gap" with artificial intelligence (AI) was originally introduced in the philosophical debate to indicate the concern that "learning automata" may make it more difficult or impossible to attribute moral culpability to persons for untoward events. Building...