SozKZ: Training Efficient Small Language Models for Kazakh from Scratch
Abstract
Kazakh, a Turkic language spoken by over 22 million people, remains underserved by existing multilingual language models, which allocate minimal capacity to low-resource languages and employ tokenizers ill-suited to agglutinative morphology. We present SozKZ, a family of Llama-architecture language models (50M-600M parameters) trained entirely from scratch on 9 billion tokens of Kazakh text with a dedicated 50K BPE tokenizer. We evaluate all models on three Kazakh benchmarks -- multiple-choice cultural QA, reading comprehension (Belebele), and topic classification (SIB-200) -- alongside five multilingual baselines ranging from 500M to 3B parameters. Our 600M model achieves 30.3% accuracy on Kazakh cultural QA, approaching the 32.0% of Llama-3.2-1B (2x larger), and 25.5% on SIB-200 topic classification, surpassing all evaluated multilingual models up to 2B parameters. We observe consistent scaling from 50M to 600M, with MC QA accuracy rising from 22.8% to 30.3%, suggesting that further scaling remains beneficial. These results demonstrate that small, dedicated models trained from scratch with a language-appropriate tokenizer offer a viable path for low-resource language technology, achieving competitive performance at a fraction of the computational cost. All models and the tokenizer are released under open licenses.
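To make the tokenizer claim concrete, the sketch below shows how a dedicated 50K-vocabulary BPE tokenizer of the kind the abstract describes could be trained with the Hugging Face tokenizers library. The corpus path, special tokens, and byte-level pre-tokenization are illustrative assumptions, not details taken from the paper.

```python
# Hedged sketch: training a 50K-vocabulary BPE tokenizer on Kazakh text.
# The corpus path and special tokens are assumptions, not from the paper.
from tokenizers import Tokenizer, models, pre_tokenizers, trainers

tokenizer = Tokenizer(models.BPE())
tokenizer.pre_tokenizer = pre_tokenizers.ByteLevel(add_prefix_space=True)

trainer = trainers.BpeTrainer(
    vocab_size=50_000,                        # the paper's 50K vocabulary
    special_tokens=["<s>", "</s>", "<unk>"],  # assumed special tokens
    initial_alphabet=pre_tokenizers.ByteLevel.alphabet(),
)

# "kazakh_corpus.txt" is a hypothetical stand-in for the 9B-token corpus.
tokenizer.train(files=["kazakh_corpus.txt"], trainer=trainer)
tokenizer.save("sozkz_tokenizer.json")
```

A language-specific vocabulary like this tends to split agglutinative word forms into fewer, more morphologically coherent pieces than a multilingual tokenizer, which is the motivation the abstract gives for training one from scratch.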
Executive Summary
This article presents SozKZ, a family of Llama-architecture language models (50M-600M parameters) trained from scratch on Kazakh text with a dedicated 50K BPE tokenizer, achieving competitive performance on Kazakh benchmarks at a fraction of the computational cost of larger multilingual models. The 600M model surpasses all evaluated multilingual baselines up to 2B parameters on SIB-200 topic classification and approaches the cultural-QA accuracy of Llama-3.2-1B, a model twice its size. The results demonstrate the viability of small, dedicated models for low-resource language technology. The models and tokenizer are released under open licenses.
Key Points
- ▸ SozKZ is a family of Llama-architecture language models (50M-600M parameters) trained on 9 billion tokens of Kazakh text (see the configuration sketch after this list)
- ▸ The models employ a dedicated 50K BPE tokenizer suited to Kazakh's agglutinative morphology
- ▸ The 600M model achieves competitive performance on Kazakh benchmarks at a fraction of the computational cost
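Since the paper describes the models as Llama-architecture, the following sketch instantiates a Llama-style model in roughly the 600M-parameter range with Hugging Face transformers. The widths, depth, and context length are illustrative assumptions; the paper's exact hyperparameters are not reproduced here.

```python
# Hedged sketch: a Llama-style model near the 600M-parameter range.
# All hyperparameters below are assumptions for illustration only.
from transformers import LlamaConfig, LlamaForCausalLM

config = LlamaConfig(
    vocab_size=50_000,             # matches the dedicated 50K BPE tokenizer
    hidden_size=1536,              # assumed width
    intermediate_size=4096,        # assumed feed-forward width
    num_hidden_layers=16,          # assumed depth
    num_attention_heads=16,        # assumed head count
    max_position_embeddings=2048,  # assumed context length
)
model = LlamaForCausalLM(config)
print(f"{sum(p.numel() for p in model.parameters()) / 1e6:.0f}M parameters")
```

Under these assumed settings the model lands near 600M parameters: 16 transformer blocks of roughly 28M weights each, plus about 154M in the untied input embedding and output projection over the 50K vocabulary.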
Merits
Strength in Low-Resource Language Technology
SozKZ demonstrates the potential of small, dedicated models for low-resource languages, offering a viable path for technology development at a lower computational cost.
Competitive Performance
The 600M model achieves competitive performance on Kazakh benchmarks, approaching the accuracy of larger multilingual models.
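As an illustration of how accuracy on multiple-choice benchmarks such as the cultural QA set is commonly measured, the sketch below scores each answer option by the log-likelihood the model assigns to its tokens and picks the highest-scoring option. The model identifier is hypothetical, and the paper's actual evaluation harness may use a different protocol.

```python
# Hedged sketch: multiple-choice scoring by answer log-likelihood.
# "sozkz/sozkz-600m" is a hypothetical model id, not a published one.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("sozkz/sozkz-600m")
model = AutoModelForCausalLM.from_pretrained("sozkz/sozkz-600m").eval()

def option_logprob(question: str, option: str) -> float:
    """Sum of log-probabilities the model assigns to the option's tokens."""
    # Assumes the question's tokenization is a prefix of the full string's,
    # which typically holds for BPE tokenizers at whitespace boundaries.
    prompt_len = tok(question, return_tensors="pt").input_ids.shape[1]
    full_ids = tok(question + " " + option, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    # Logits at position p predict token p+1, hence the one-position shift.
    logprobs = torch.log_softmax(logits[0, :-1], dim=-1)
    total = 0.0
    for pos in range(prompt_len - 1, full_ids.shape[1] - 1):
        total += logprobs[pos, full_ids[0, pos + 1]].item()
    return total

def predict(question: str, options: list[str]) -> int:
    """Index of the option with the highest model log-likelihood."""
    return max(range(len(options)),
               key=lambda i: option_logprob(question, options[i]))
```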
Demerits
Limited Generalizability
The models are trained and evaluated only on the Kazakh language, and their performance may not transfer directly to other agglutinative languages.
Tokenizer Customization
The dedicated tokenizer is specific to Kazakh; applying the approach to other languages with similar morphology would require retraining it on language-specific corpora.
Expert Commentary
The article's findings on SozKZ's performance and scalability are significant, offering new insights into the development of language models for low-resource languages. However, the limitations of the models' generalizability and tokenizer customization requirements should be carefully considered. The open licensing of the models and tokenizer is a welcome development, promoting the adoption of more accessible and inclusive language technology solutions. Future research should focus on extending SozKZ's approach to other agglutinative languages and exploring its potential applications in practical and policy contexts.
Recommendations
- ✓ Future research should extend SozKZ's from-scratch approach to other agglutinative languages and explore its applications in practical and policy contexts.
- ✓ Developers adapting the approach to other languages with similar morphology should retrain the tokenizer on language-specific corpora rather than reusing the Kazakh one.
Sources
Original: arXiv:2603.20854v1 (cs.CL)