Artificial Intelligence and International Law: Legal Implications of AI Development and Global Regulation

Narziev Oybek Elbek Ugli

This paper examines the legal implications of artificial intelligence (AI) development within the framework of public international law. Employing a doctrinal and comparative legal methodology, it surveys the principal international and regional regulatory instruments currently governing AI — including the European Union Artificial Intelligence Act, the OECD Principles on Artificial Intelligence, and UNESCO's Recommendation on the Ethics of Artificial Intelligence — alongside national frameworks in Uzbekistan, the United States, and China. The paper further analyses the unresolved question of AI's legal personhood and liability, using the landmark Fraley v. Facebook (2015) biometric data case as illustrative jurisprudence. Findings indicate that no jurisdiction currently recognises AI as an independent legal subject; liability continues to rest with human developers, operators, and users. The paper concludes that a single binding multilateral instrument is necessary to address jurisdictional fragmentation, protect digital human rights, and anticipate future technological developments.

Executive Summary

The article provides a thorough doctrinal and comparative analysis of the legal implications of AI within international law, examining key regulatory instruments such as the EU AI Act, the OECD AI Principles, and UNESCO's Recommendation on the Ethics of Artificial Intelligence, alongside national frameworks in Uzbekistan, the U.S., and China. It effectively identifies a critical gap in the recognition of AI as a legal person and confirms that liability continues to rest with human actors across jurisdictions. The comparative methodology enhances analytical depth, while the proposal of a binding multilateral instrument offers a constructive path forward. The use of Fraley v. Facebook as a jurisprudential reference is apt, though limited by its biometric data context.

Key Points

  • Identification of regulatory fragmentation across international and national frameworks
  • Analysis of unresolved legal personhood and liability issues
  • Recommendation for a binding multilateral instrument to address jurisdictional gaps

Merits

Comprehensive Comparative Analysis

The paper effectively maps out the interplay between international instruments and national laws, offering a nuanced synthesis of legal positions.

Demerits

Limited Jurisprudential Scope

The reliance on Fraley v. Facebook, while illustrative, constrains the depth of analysis due to its specific factual parameters and lack of direct AI legal personhood precedent.

Expert Commentary

This article contributes meaningfully to the discourse on AI governance by offering a structured, comparative lens on the current legal landscape. The absence of recognized legal personhood for AI is a profound legal lacuna that warrants urgent scholarly and legislative attention. While the author rightly identifies the need for a multilateral instrument, the paper could have further enriched its analysis by engaging with emerging academic debates on algorithmic accountability and the potential for sui generis legal entities, such as the 'electronic person' status proposed in the European Parliament's 2017 resolution on civil law rules on robotics. Moreover, the paper's assumption that human liability is universally unchallenged overlooks recent scholarly proposals in the U.S. and EU concerning vicarious liability for autonomous systems. These omissions represent a missed opportunity to deepen the debate. Nevertheless, the work stands as a substantive contribution to the field, offering a clear roadmap for future research and policy reform.

Recommendations

  1. Encourage academic institutions to establish interdisciplinary AI governance research hubs to monitor evolving legal precedents globally.
  2. Advocate for the establishment of a UN working group dedicated to drafting a preliminary framework for a binding multilateral AI governance instrument, with stakeholder input from civil society, academia, and industry.
