Owen-based Semantics and Hierarchy-Aware Explanation (O-Shap)
arXiv:2602.17107v1 · Announce Type: new

Abstract: Shapley value-based methods have become foundational in explainable artificial intelligence (XAI), offering theoretically grounded feature attributions through cooperative game theory. However, in practice, particularly in vision tasks, the assumption of feature independence breaks down, as features (i.e., pixels) often exhibit strong spatial and semantic dependencies. To address this, modern SHAP implementations now include the Owen value, a hierarchical generalization of the Shapley value that supports group attributions. While the Owen value preserves the foundations of Shapley values, its effectiveness critically depends on how feature groups are defined. We show that commonly used segmentations (e.g., axis-aligned or SLIC) violate key consistency properties, and propose a new segmentation approach that satisfies the $T$-property to ensure semantic alignment across hierarchy levels. This hierarchy enables computational pruning while improving attribution accuracy and interpretability. Experiments on image and tabular datasets demonstrate that O-Shap outperforms baseline SHAP variants in attribution precision, semantic coherence, and runtime efficiency, especially when structure matters.
Executive Summary
The article introduces O-Shap, a hierarchy-aware extension of Shapley value-based methods for explainable artificial intelligence (XAI). By attributing importance to feature groups via the Owen value, O-Shap addresses a key limitation of traditional SHAP methods: the assumption of feature independence. A segmentation approach satisfying the $T$-property ensures semantic alignment across hierarchy levels, yielding improved attribution accuracy, interpretability, and runtime efficiency. Experiments show that O-Shap outperforms baseline SHAP variants on image and tabular datasets.
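To make the Owen value concrete: it computes a Shapley-style attribution in two stages, first treating each feature group (union) as a single player, then distributing each group's share among its members. The brute-force sketch below is purely illustrative (exponential in the number of players, and not the paper's implementation); the function and variable names are our own, not from the source.

```python
from itertools import combinations
from math import factorial

def owen_value(unions, v):
    """Brute-force Owen value of a cooperative game.

    unions : list of lists, a partition of the player set into groups
    v      : characteristic function mapping a frozenset of players to a payoff

    Illustrative only: exponential cost, unlike the pruned hierarchy in O-Shap.
    """
    m = len(unions)
    phi = {}
    for k, union in enumerate(unions):
        others = [u for j, u in enumerate(unions) if j != k]
        b = len(union)
        for i in union:
            mates = [p for p in union if p != i]
            total = 0.0
            # Outer sum: subsets R of the *other* unions (unions enter whole).
            for r in range(m):
                for R in combinations(others, r):
                    Q = frozenset(p for u in R for p in u)
                    w_out = factorial(r) * factorial(m - r - 1) / factorial(m)
                    # Inner sum: subsets T of player i's own union-mates.
                    for t in range(b):
                        for T in combinations(mates, t):
                            w_in = factorial(t) * factorial(b - t - 1) / factorial(b)
                            S = Q | frozenset(T)
                            total += w_out * w_in * (v(S | {i}) - v(S))
            phi[i] = total
    return phi
```

On an additive game the Owen value recovers each player's own weight, and on a synergy game it splits the joint payoff within the responsible group, which is exactly the group-respecting behavior the abstract describes.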
Key Points
- ▸ O-Shap extends SHAP methods by incorporating hierarchy-aware explanations using the Owen value
- ▸ The approach addresses the limitations of traditional SHAP methods by accounting for feature dependencies
- ▸ The proposed segmentation approach satisfies the $T$-property, ensuring semantic alignment across hierarchy levels
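The abstract notes that the hierarchy "enables computational pruning." The general idea can be sketched as a top-down pass over the feature hierarchy: score a group as a whole, and only expand its children when the group-level attribution is non-negligible. This is a minimal sketch of that pruning idea under our own assumptions (a toy nested-dict tree and a stand-in `group_score` callable), not the paper's algorithm.

```python
def attribute_with_pruning(tree, group_score, eps=1e-9, visited=None):
    """Top-down group attribution over a feature hierarchy.

    tree        : nested dict {"features": [...], "children": [...]};
                  leaf groups omit the "children" key (hypothetical schema)
    group_score : stand-in for a group-level attribution (e.g. an
                  Owen-style group value); any callable on a feature list
    Subtrees whose group score is negligible get zero attributions
    without ever being expanded -- the computational saving.
    """
    if visited is not None:
        visited.append(tuple(tree["features"]))  # record expanded groups
    s = group_score(tree["features"])
    if abs(s) < eps:
        # Prune: the whole subtree is attributed zero, children untouched.
        return {f: 0.0 for f in tree["features"]}
    if "children" not in tree:
        # Leaf group: split the score evenly as a crude fallback.
        n = len(tree["features"])
        return {f: s / n for f in tree["features"]}
    out = {}
    for child in tree["children"]:
        out.update(attribute_with_pruning(child, group_score, eps, visited))
    return out
```

The saving depends on the segmentation being semantically consistent across levels: a near-zero group score should genuinely imply near-zero scores for its members, which is the kind of cross-level alignment the $T$-property is meant to guarantee.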
Merits
Improved Attribution Accuracy
O-Shap's hierarchy-aware approach enables more accurate feature attributions, particularly in vision tasks with strong spatial and semantic dependencies
Demerits
Complexity of Hierarchy Definition
The effectiveness of O-Shap critically depends on the definition of feature groups, which can be complex and challenging to determine
Expert Commentary
The introduction of O-Shap marks a significant advancement in the field of XAI, as it addresses the long-standing issue of feature dependencies in SHAP methods. By incorporating hierarchy-aware explanations, O-Shap provides a more nuanced understanding of feature attributions, which is essential for high-stakes applications. However, the complexity of defining feature groups and hierarchies remains a challenge that requires further research and development. Overall, O-Shap has the potential to become a cornerstone of XAI research, enabling more accurate, interpretable, and transparent AI models.
Recommendations
- ✓ Further research is needed to develop more efficient and effective methods for defining feature groups and hierarchies
- ✓ O-Shap should be applied to a broader range of domains and applications to demonstrate its generalizability and effectiveness