
Connecting the dots in trustworthy Artificial Intelligence: From AI principles, ethics, and key requirements to responsible AI systems and regulation


Natalia Díaz-Rodríguez

Trustworthy Artificial Intelligence (AI) is based on seven technical requirements sustained over three main pillars that should be met throughout the system's entire life cycle: it should be (1) lawful, (2) ethical, and (3) robust, both from a technical and a social perspective. However, attaining truly trustworthy AI concerns a wider vision that comprises the trustworthiness of all processes and actors that are part of the system's life cycle, and considers the previous aspects through different lenses. A more holistic vision contemplates four essential axes: the global principles for the ethical use and development of AI-based systems, a philosophical take on AI ethics, a risk-based approach to AI regulation, and the aforementioned pillars and requirements. The seven requirements (human agency and oversight; robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental wellbeing; and accountability) are analyzed from a triple perspective: what each requirement for trustworthy AI is, why it is needed, and how it can be implemented in practice. In addition, a practical approach to implementing trustworthy AI systems allows defining, through a given auditing process, the responsibility of AI-based systems in the face of the law. A responsible AI system is therefore the notion we introduce in this work: a concept of utmost necessity that can be realized through auditing processes, subject to the challenges posed by the use of regulatory sandboxes. Our multidisciplinary vision of trustworthy AI culminates in a debate on the diverging views recently published about the future of AI. Our reflections conclude that regulation is key to reaching a consensus among these views, and that trustworthy and responsible AI systems will be crucial for the present and future of our society.

Executive Summary

The article discusses the concept of Trustworthy Artificial Intelligence (AI) and its seven technical requirements, which should be met throughout the system's life cycle. It emphasizes the importance of a holistic vision that considers the trustworthiness of all processes and actors involved in the system's life cycle. The article also introduces the concept of responsible AI systems, which can be realized through auditing processes and regulatory sandboxes. The authors conclude that regulation is key to reaching a consensus on the future of AI and that trustworthy and responsible AI systems are crucial for the present and future of society.

Key Points

  • Trustworthy AI is based on seven technical requirements: human agency and oversight; robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental wellbeing; and accountability
  • A holistic vision of trustworthy AI considers the trustworthiness of all processes and actors involved in the system's life cycle
  • The concept of responsible AI systems can be realized through auditing processes and regulatory sandboxes
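The auditing idea in the points above can be pictured as a simple coverage check over the seven requirements. The sketch below is purely illustrative: the data structure, function, and evidence strings are hypothetical, not taken from the article or any real auditing standard.

```python
# Illustrative sketch only: a minimal audit checklist over the seven
# trustworthy-AI requirements named in the article. All names below
# (audit, evidence entries) are hypothetical, not from the paper.

REQUIREMENTS = [
    "human agency and oversight",
    "robustness and safety",
    "privacy and data governance",
    "transparency",
    "diversity, non-discrimination and fairness",
    "societal and environmental wellbeing",
    "accountability",
]

def audit(evidence: dict[str, list[str]]) -> dict[str, bool]:
    """Mark each requirement as covered if at least one piece of
    documented evidence is recorded for it."""
    return {req: bool(evidence.get(req)) for req in REQUIREMENTS}

# Example: a system documenting only two requirements leaves five uncovered.
report = audit({
    "transparency": ["model cards published"],
    "accountability": ["incident-response owner assigned"],
})
uncovered = [req for req, covered in report.items() if not covered]
```

A real audit would of course weigh qualitative evidence rather than a boolean flag per requirement; the point is only that the seven requirements give an auditor a concrete, enumerable checklist.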

Merits

Comprehensive Framework

The article provides a comprehensive framework for understanding trustworthy AI, including its technical requirements and the importance of a holistic vision.

Demerits

Lack of Concrete Examples

The article could benefit from more concrete examples of how the technical requirements can be implemented in practice.

Expert Commentary

The article provides a thoughtful and comprehensive analysis of the concept of trustworthy AI and its technical requirements. The introduction of the concept of responsible AI systems is a significant contribution to the field, and the authors' emphasis on the importance of regulation is well-taken. However, the article could benefit from more concrete examples and case studies to illustrate the implementation of the technical requirements in practice. Overall, the article is a valuable addition to the literature on trustworthy AI and has significant implications for both practitioners and policymakers.

Recommendations

  • Policymakers and regulatory bodies should prioritize the development of regulations that promote trustworthy and responsible AI systems
  • Developers and deployers of AI systems should prioritize the implementation of the technical requirements for trustworthy AI, including human agency and oversight, robustness and safety, and transparency
