Defining Explainable AI for Requirements Analysis
arXiv:2602.19071v1 Announce Type: new Abstract: Explainable Artificial Intelligence (XAI) has become popular in the last few years. The Artificial Intelligence (AI) community in general, and the Machine Learning (ML) community in particular, is coming to the realisation that in many applications, for AI to be trusted, it must not only demonstrate good performance in its decision-making, but it also must explain these decisions and convince us that it is making the decisions for the right reasons. However, different applications have different requirements on the information required of the underlying AI system in order to convince us that it is worthy of our trust. How do we define these requirements? In this paper, we present three dimensions for categorising the explanatory requirements of different applications. These are Source, Depth and Scope. We focus on the problem of matching up the explanatory requirements of different applications with the capabilities of underlying ML techniques to provide them. We deliberately avoid including aspects of explanation that are already well-covered by the existing literature and we focus our discussion on ML although the principles apply to AI more broadly.
Executive Summary
This article proposes a framework for categorizing explainable AI (XAI) requirements, arguing for a systematic way to match application-specific explanatory needs with the capabilities of machine learning (ML) techniques. The authors introduce three dimensions for defining these requirements: Source, Depth, and Scope. The paper concentrates on ML, though the authors note that the principles apply to AI more broadly. The framework offers a structured way to identify and address XAI needs, potentially improving trust in AI decision-making. However, because the discussion is grounded in ML, how well the framework transfers to non-ML approaches to AI remains to be demonstrated. The proposed dimensions nonetheless provide a useful starting point for further research and development in XAI.
Key Points
- ▸ The paper proposes a framework for categorizing XAI requirements.
- ▸ The authors introduce three dimensions: Source, Depth, and Scope.
- ▸ The framework focuses on ML, but its principles can be applied to AI more broadly.
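The paper itself gives no formal encoding of these dimensions. As a purely illustrative sketch (the value sets chosen for Source, Depth, and Scope below are assumptions for demonstration, not definitions taken from the paper), the categorization and the requirement-matching problem could be modelled as:

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical value sets for each dimension; the paper names the
# dimensions but these concrete values are illustrative assumptions.
class Source(Enum):
    INTRINSIC = "intrinsic"   # explanation produced by the model itself
    POST_HOC = "post_hoc"     # explanation produced by an external method

class Depth(Enum):
    SHALLOW = "shallow"       # e.g. feature attributions
    DEEP = "deep"             # e.g. causal or mechanistic accounts

class Scope(Enum):
    LOCAL = "local"           # explains a single decision
    GLOBAL = "global"         # explains overall model behaviour

@dataclass(frozen=True)
class ExplanationProfile:
    """A point in the Source x Depth x Scope space, describing either
    an application's requirement or an ML technique's capability."""
    source: Source
    depth: Depth
    scope: Scope

def satisfies(capability: ExplanationProfile,
              requirement: ExplanationProfile) -> bool:
    """Naive exact-match check between what a technique can provide
    and what an application requires."""
    return capability == requirement

# Example: a post-hoc, shallow, local explainer (e.g. a saliency method)
# matched against an application needing local decision explanations.
explainer = ExplanationProfile(Source.POST_HOC, Depth.SHALLOW, Scope.LOCAL)
requirement = ExplanationProfile(Source.POST_HOC, Depth.SHALLOW, Scope.LOCAL)
print(satisfies(explainer, requirement))  # True
```

A real matching scheme would likely be a partial order rather than exact equality (a deep explanation can satisfy a shallow requirement, for instance), but the sketch shows how the three dimensions frame the problem as a comparison in a small requirement space.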
Merits
Strength in Systematic Approach
The proposed framework offers a structured method for identifying and addressing XAI needs, which can improve trust in AI decision-making.
Practical Application
The framework can be used to guide the development of XAI solutions in various industries and applications.
Demerits
Limited Scope
The article's focus on ML may limit the framework's applicability to non-ML approaches to AI, such as symbolic reasoning or planning systems.
Need for Further Research
The proposed dimensions require further development and validation to ensure their effectiveness in a broader range of applications.
Expert Commentary
The article makes a significant contribution to the field of XAI by providing a systematic approach to categorizing explanatory requirements. However, the framework's limitations, notably its grounding in ML, need to be addressed through further research. The proposed dimensions offer a useful starting point for exploring XAI requirements, but their effectiveness across a broader range of applications requires empirical validation. If validated, the framework could inform both industry practice and regulatory policy on AI transparency.
Recommendations
- ✓ Further research is needed to develop and validate the proposed dimensions in a broader range of applications.
- ✓ The framework should be applied beyond ML, for example to symbolic or hybrid AI systems, to confirm its broader applicability and effectiveness.