PerSoMed: A Large-Scale Balanced Dataset for Persian Social Media Text Classification
arXiv:2602.19333v1 Announce Type: new

Abstract: This research introduces the first large-scale, well-balanced Persian social media text classification dataset, specifically designed to address the lack of comprehensive resources in this domain. The dataset comprises 36,000 posts across nine categories (Economic, Artistic, Sports, Political, Social, Health, Psychological, Historical, and Science & Technology), each containing 4,000 samples to ensure balanced class distribution. Data collection involved 60,000 raw posts from various Persian social media platforms, followed by rigorous preprocessing and hybrid annotation combining ChatGPT-based few-shot prompting with human verification. To mitigate class imbalance, we employed undersampling with semantic redundancy removal and advanced data augmentation strategies integrating lexical replacement and generative prompting. We benchmarked several models, including BiLSTM, XLM-RoBERTa (with LoRA and AdaLoRA adaptations), FaBERT, SBERT-based architectures, and the Persian-specific TookaBERT (Base and Large). Experimental results show that transformer-based models consistently outperform traditional neural networks, with TookaBERT-Large achieving the best performance (Precision: 0.9622, Recall: 0.9621, F1-score: 0.9621). Class-wise evaluation further confirms robust performance across all categories, though social and political texts exhibited slightly lower scores due to inherent ambiguity. This research presents a new high-quality dataset and provides comprehensive evaluations of cutting-edge models, establishing a solid foundation for further developments in Persian NLP, including trend analysis, social behavior modeling, and user classification. The dataset is publicly available to support future research endeavors.
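The class-wise evaluation the abstract reports reduces to computing precision, recall, and F1 for each of the nine categories separately. A minimal sketch of that computation (the toy labels and predictions below are invented for illustration, not the paper's data):

```python
from collections import defaultdict

def per_class_scores(y_true, y_pred):
    """Compute (precision, recall, F1) for each label appearing in the data."""
    tp, fp, fn = defaultdict(int), defaultdict(int), defaultdict(int)
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1
        else:
            fp[p] += 1  # predicted p, but p was wrong
            fn[t] += 1  # true label t was missed
    scores = {}
    for label in set(y_true) | set(y_pred):
        prec = tp[label] / (tp[label] + fp[label]) if tp[label] + fp[label] else 0.0
        rec = tp[label] / (tp[label] + fn[label]) if tp[label] + fn[label] else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        scores[label] = (prec, rec, f1)
    return scores

# Toy example using three of the nine PerSoMed categories:
truth = ["Sports", "Political", "Sports", "Economic", "Political"]
preds = ["Sports", "Political", "Sports", "Political", "Economic"]
print(per_class_scores(truth, preds)["Sports"])  # → (1.0, 1.0, 1.0)
```

The abstract's slightly lower scores for social and political posts would show up here as depressed per-label entries, which a macro-average over all nine categories would otherwise mask.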
Executive Summary
This article presents PerSoMed, a large-scale and balanced Persian social media text classification dataset designed to address the lack of comprehensive resources in this domain. The dataset comprises 36,000 posts across nine categories, with rigorous preprocessing and hybrid annotation combining ChatGPT-based few-shot prompting with human verification. The study benchmarks several models, including transformer-based architectures, and finds that TookaBERT-Large achieves the best performance. The dataset is publicly available to support future research endeavors. This study provides a solid foundation for developments in Persian NLP, including trend analysis, social behavior modeling, and user classification.
Key Points
- PerSoMed is a large-scale and balanced Persian social media text classification dataset.
- The dataset comprises 36,000 posts across nine categories with rigorous preprocessing and hybrid annotation.
- Transformer-based models, particularly TookaBERT-Large, outperform traditional neural networks in text classification tasks.
Merits
Significant Contribution
The PerSoMed dataset addresses the lack of comprehensive resources in Persian social media text classification, providing a valuable resource for future research.
State-of-the-Art Methods
The study employs cutting-edge models, including transformer-based architectures, to demonstrate their effectiveness in text classification tasks.
Publicly Available Dataset
The PerSoMed dataset is publicly available, enabling researchers to access and build upon this resource for future studies.
Demerits
Class Imbalance Mitigation
While undersampling with semantic redundancy removal and data augmentation strategies are employed, the study does not provide a detailed analysis of their effectiveness in addressing class imbalance.
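For context on what that mitigation step involves: semantic redundancy removal typically drops a post when it is too similar to one already kept. A minimal sketch, with bag-of-words cosine similarity standing in for whatever sentence embeddings the authors actually used, and with a similarity threshold that is purely an assumption:

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def deduplicate(posts, threshold=0.9):
    """Keep a post only if it is below the similarity threshold to every kept post."""
    kept, vectors = [], []
    for post in posts:
        vec = Counter(post.lower().split())
        if all(cosine(vec, v) < threshold for v in vectors):
            kept.append(post)
            vectors.append(vec)
    return kept

posts = [
    "the national team won the match",
    "the national team won the match today",   # near-duplicate, gets dropped
    "central bank raises interest rates",
]
print(deduplicate(posts))
```

Analyzing the effectiveness of this step, as the critique above asks for, would mean reporting how many posts each class lost at a given threshold and how the classifier's per-class scores changed as a result.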
Limited Evaluation of Human Verification
The study primarily relies on ChatGPT-based few-shot prompting for annotation, with human verification playing a secondary role; a more comprehensive evaluation of human verification's impact on dataset quality would be beneficial.
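One concrete way to run the evaluation this critique calls for is to measure chance-corrected agreement between the LLM-proposed labels and the human-verified labels on an audited subset, e.g. with Cohen's kappa. A sketch (the six-post audit below is invented for illustration):

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two annotators over the same items."""
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[l] * freq_b[l] for l in freq_a) / (n * n)
    if expected == 1.0:  # both annotators used a single identical label throughout
        return 1.0
    return (observed - expected) / (1 - expected)

# Hypothetical audit: ChatGPT labels vs. human-verified labels on six posts
llm   = ["Sports", "Political", "Health", "Sports", "Social", "Economic"]
human = ["Sports", "Social",    "Health", "Sports", "Social", "Economic"]
print(round(cohens_kappa(llm, human), 3))  # → 0.786
```

A low kappa on ambiguous classes such as Social and Political, where the paper already reports weaker scores, would indicate that human verification is doing substantial corrective work there and deserves a larger role.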
Expert Commentary
The PerSoMed dataset and its associated methods demonstrate the importance of developing culturally and linguistically diverse resources for NLP research. While the study makes significant contributions to Persian NLP, the limitations mentioned above highlight areas for further improvement. The public release of the dataset, together with its stated downstream uses in trend analysis, social behavior modeling, and user classification, makes this study a valuable addition to the NLP community.
Recommendations
- Future studies should investigate the impact of human verification on dataset quality and explore alternative methods for addressing class imbalance.
- The development of more diverse and comprehensive datasets, including those from other languages and cultures, can further advance NLP research and applications.