Transparent AI for Mathematics: Transformer-Based Large Language Models for Mathematical Entity Relationship Extraction with XAI
arXiv:2603.06348v1 Announce Type: new Abstract: Mathematical text understanding is a challenging task due to the presence of specialized entities and complex relationships between them. This study formulates mathematical problem interpretation as a Mathematical Entity Relation Extraction (MERE) task, where operands are treated as entities and operators as their relationships. Transformer-based models are applied to automatically extract these relations from mathematical text, with Bidirectional Encoder Representations from Transformers (BERT) achieving the best performance, reaching an accuracy of 99.39%. To enhance transparency and trust in the model's predictions, Explainable Artificial Intelligence (XAI) is incorporated using Shapley Additive Explanations (SHAP). The explainability analysis reveals how specific textual and mathematical features influence relation prediction, providing insights into feature importance and model behavior. By combining transformer-based learning, a task-specific dataset, and explainable modeling, this work offers an effective and interpretable framework for MERE, supporting future applications in automated problem solving, knowledge graph construction, and intelligent educational systems.
Executive Summary
This paper formulates mathematical problem interpretation as a Mathematical Entity Relation Extraction (MERE) task, in which operands are treated as entities and operators as the relations between them, and applies transformer-based models to extract these relations, with BERT achieving the best accuracy of 99.39%. The incorporation of Explainable Artificial Intelligence (XAI) via Shapley Additive Explanations (SHAP) enhances transparency and trust in the model's predictions by showing how specific textual and mathematical features influence relation prediction. This framework has significant implications for automated problem solving, knowledge graph construction, and intelligent educational systems.
Key Points
- ▸ Transformer-based models for mathematical entity relationship extraction
- ▸ Incorporation of Explainable Artificial Intelligence (XAI) for transparency and trust
- ▸ Achievement of 99.39% accuracy with BERT
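The MERE formulation summarized above can be illustrated with a toy sketch: operands are extracted as entities and the connecting operator is predicted as their relation. The cue words and function names here are hypothetical illustrations, not the paper's method; the paper trains a fine-tuned BERT classifier rather than the rule-based stand-in shown.

```python
import re

# Hypothetical cue-word table standing in for a learned relation classifier.
OPERATOR_CUES = {
    "addition": ["more", "together", "total", "gains"],
    "subtraction": ["fewer", "left", "loses", "gives away"],
    "multiplication": ["each", "times", "per"],
}

def extract_relation(problem_text):
    """Return (entities, relation) for a one-step math word problem.

    Entities are the operands (numbers) in the text; the relation is the
    operator connecting them, mirroring the MERE task shape.
    """
    entities = [int(n) for n in re.findall(r"\d+", problem_text)]
    text = problem_text.lower()
    for relation, cues in OPERATOR_CUES.items():
        if any(cue in text for cue in cues):
            return entities, relation
    return entities, "unknown"

print(extract_relation("John has 5 apples and buys 3 more."))
```

In the full system, the rule lookup above is replaced by a transformer encoder that classifies the relation from the whole sentence context, which is what allows the reported 99.39% accuracy on ambiguous phrasings a cue list would miss.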
Merits
Effective Framework
The proposed framework combines transformer-based learning, a task-specific dataset, and explainable modeling, offering an effective and interpretable approach to mathematical entity relationship extraction.
High Accuracy
The model achieves a high accuracy of 99.39%, demonstrating its potential for real-world applications.
Demerits
Limited Generalizability
The model's performance may not generalize to other domains or tasks, requiring further testing and validation.
Dependence on High-Quality Data
The model's accuracy relies on high-quality, task-specific data, which may be challenging to obtain or create.
Expert Commentary
The proposed framework represents a significant advancement in mathematical entity relationship extraction, leveraging the strengths of transformer-based models and XAI. The high accuracy achieved by the model demonstrates its potential for real-world applications, particularly in automated problem solving and intelligent educational systems. However, further research is needed to address the limitations of the model, including its dependence on high-quality data and limited generalizability. The incorporation of XAI via SHAP provides valuable insights into feature importance and model behavior, contributing to the development of more transparent and trustworthy AI systems.
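The attribution principle behind SHAP can be sketched with an exact Shapley computation over a tiny hypothetical model: each feature's value is its weighted marginal contribution across all coalitions of the other features. The feature names and toy model below are illustrative assumptions; the paper applies SHAP to its BERT relation classifier, not to this model.

```python
from itertools import combinations
from math import factorial

def shapley_values(features, model):
    """Exact Shapley values: each feature's average marginal contribution
    to the model output, over all orderings of the remaining features."""
    names = list(features)
    n = len(names)

    def value(coalition):
        # Evaluate the model with only the coalition's features "present";
        # absent features are passed as None and contribute nothing.
        return model({k: (features[k] if k in coalition else None) for k in names})

    phi = {}
    for f in names:
        others = [k for k in names if k != f]
        total = 0.0
        for r in range(len(others) + 1):
            for subset in combinations(others, r):
                weight = factorial(r) * factorial(n - r - 1) / factorial(n)
                total += weight * (value(set(subset) | {f}) - value(set(subset)))
        phi[f] = total
    return phi

# Hypothetical "relation score": operand gap and an operator cue word both
# push the prediction toward "addition" in this additive toy model.
def toy_model(x):
    score = 0.0
    if x["operand_gap"] is not None:
        score += 0.3 * x["operand_gap"]
    if x["cue_more"] is not None:
        score += 2.0 * x["cue_more"]
    return score

phi = shapley_values({"operand_gap": 2.0, "cue_more": 1.0}, toy_model)
```

Because the toy model is additive, each Shapley value equals that feature's own term (0.6 for `operand_gap`, 2.0 for `cue_more`), which is the kind of per-feature attribution the paper uses to reveal which textual and mathematical features drive a relation prediction.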
Recommendations
- ✓ Further testing and validation of the model to ensure generalizability
- ✓ Development of more robust and adaptable frameworks for mathematical entity relationship extraction