Cross-Modal Rationale Transfer for Explainable Humanitarian Classification on Social Media
arXiv:2603.18611v1 Announce Type: new Abstract: Advances in social media data dissemination enable the provision of real-time information during a crisis. The information falls into different classes, such as infrastructure damage, persons missing or stranded in the affected zone, etc. Existing methods have attempted to classify text and images into various humanitarian categories, but their decision-making process remains largely opaque, which hinders their deployment in real-life applications. Recent work has sought to improve transparency by extracting textual rationales from tweets to explain predicted classes. However, such explainable classification methods have mostly focused on text, rather than crisis-related images. In this paper, we propose an interpretable-by-design multimodal classification framework. Our method first learns the joint representation of text and image using a visual language transformer model and extracts text rationales. Next, it extracts the image rationales via the mapping with text rationales. Our approach demonstrates how to learn rationales in one modality from another through cross-modal rationale transfer, which saves annotation effort. Finally, tweets are classified based on extracted rationales. Experiments are conducted on the CrisisMMD benchmark dataset, and results show that our proposed method boosts the classification Macro-F1 by 2-35% while extracting accurate text tokens and image patches as rationales. Human evaluation also supports the claim that our proposed method is able to retrieve better image rationale patches (12%) that help to identify humanitarian classes. Our method adapts well to new, unseen datasets in zero-shot mode, achieving an accuracy of 80%.
Executive Summary
The article proposes a novel approach to explainable humanitarian classification on social media by introducing an interpretable-by-design multimodal classification framework. The method leverages a visual language transformer model to learn joint representations of text and images, extracting text rationales directly and obtaining image rationales through cross-modal rationale transfer, which saves annotation effort. Experiments show Macro-F1 gains of 2-35% on the CrisisMMD benchmark, and human evaluation supports the quality of the extracted image rationale patches. The approach also adapts well to new, unseen datasets, reaching 80% accuracy in zero-shot mode. This work contributes to the development of more transparent and effective humanitarian classification systems, with potential applications in crisis management and social media monitoring.
Key Points
- ▸ The proposed framework integrates text and image modalities to enhance explainability in humanitarian classification.
- ▸ Cross-modal rationale transfer is used to extract accurate text tokens and image patches as rationales, reducing annotation effort.
- ▸ The method boosts classification Macro-F1 by 2-35% on CrisisMMD and adapts to new, unseen datasets in zero-shot mode with 80% accuracy.
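The core idea of cross-modal rationale transfer can be illustrated with a minimal sketch: once text tokens are flagged as rationales, the cross-attention weights of a vision-language model can route that evidence to the image patches the tokens attend to most. This is an illustrative toy, not the authors' implementation; the function name `transfer_rationales`, the sum-then-top-k aggregation, and the hand-written attention matrix are all assumptions for demonstration.

```python
import numpy as np

def transfer_rationales(cross_attn, text_rationale_idx, top_k=2):
    """Map text rationale tokens to image patch rationales.

    cross_attn: (num_text_tokens, num_patches) text-to-image attention weights.
    text_rationale_idx: indices of text tokens flagged as rationales.
    Returns the indices of the top_k patches receiving the most
    attention mass from the rationale tokens.
    """
    # Aggregate the attention that rationale tokens place on each patch.
    patch_scores = cross_attn[text_rationale_idx].sum(axis=0)
    # The most-attended patches become the image rationales.
    return np.argsort(patch_scores)[::-1][:top_k]

# Toy example: 3 text tokens attending over 4 image patches.
attn = np.array([
    [0.1, 0.6, 0.2, 0.1],   # token 0 ("flood")  -> mostly patch 1
    [0.7, 0.1, 0.1, 0.1],   # token 1 ("the")    -> not a rationale
    [0.1, 0.5, 0.3, 0.1],   # token 2 ("damage") -> mostly patch 1
])
patches = transfer_rationales(attn, text_rationale_idx=[0, 2], top_k=2)
print(patches)  # -> [1 2]
```

The appeal of this style of transfer, as the key points note, is that image rationales come for free from text rationale supervision: no patch-level annotation is required.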
Merits
Strength in Explainability
The framework's ability to extract accurate rationales from both text and image modalities enhances transparency in humanitarian classification decision-making processes.
Improved Accuracy
The method boosts classification Macro-F1 by 2-35% on the CrisisMMD benchmark dataset while still extracting accurate text tokens and image patches as rationales.
Demerits
Limited Datasets
The proposed method is primarily evaluated on a single benchmark dataset, limiting its generalizability to diverse crisis scenarios and social media platforms.
Computational Resource Requirements
The use of advanced transformer models may pose computational challenges, particularly when dealing with large volumes of social media data.
Expert Commentary
The article makes a compelling case for the importance of explainability in humanitarian classification on social media. The proposed framework's ability to integrate text and image modalities through cross-modal rationale transfer is a significant contribution to the field. However, the method's limitations, such as its reliance on a single benchmark dataset and potential computational resource requirements, should be carefully considered. Further research is needed to evaluate the framework's generalizability and scalability. Nevertheless, this work has the potential to positively impact crisis management and social media monitoring, making it a valuable contribution to the field.
Recommendations
- ✓ Future research should focus on evaluating the proposed method on diverse crisis scenarios and social media platforms to improve its generalizability.
- ✓ The authors should investigate techniques to mitigate computational resource requirements and improve scalability for large-scale social media data.