ExpLang: Improved Exploration and Exploitation in LLM Reasoning with On-Policy Thinking Language Selection
arXiv:2602.21887v1 Announce Type: new

Abstract: Current large reasoning models (LRMs) have shown strong ability on challenging tasks after reinforcement learning (RL) based post-training. However, previous work mainly focuses on English reasoning in expectation of the strongest performance, despite the demonstrated potential advantage of multilingual thinking, as well as the requirement for native thinking traces by global users. In this paper, we propose ExpLang, a novel LLM post-training pipeline that enables on-policy thinking language selection to improve exploration and exploitation during RL with the use of multiple languages. The results show that our method steadily outperforms English-only training with the same training budget, while showing high thinking language compliance for both seen and unseen languages. Analysis shows that, by enabling on-policy thinking language selection as an action during RL, ExpLang effectively extends the RL exploration space with diversified language preference and improves the RL exploitation outcome with leveraged non-English advantage. The method is orthogonal to most RL algorithms and opens up a new perspective on using multilinguality to improve LRMs.
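The abstract does not spell out the paper's formal setup, but the core idea, treating the thinking language as an on-policy action, can be sketched as follows. Everything here (the language pool, the `policy_generate` interface, and the reward weights) is an illustrative assumption, not the paper's implementation:

```python
from dataclasses import dataclass

# Assumed pool of "seen" thinking languages; the paper's actual set is not
# given in this excerpt.
LANGUAGES = ["en", "zh", "fr", "de", "ja"]

@dataclass
class Rollout:
    language: str  # thinking language the policy chose (the extra action)
    trace: str     # reasoning trace written in that language
    answer: str    # final answer extracted from the trace

def sample_rollout(policy_generate, question: str) -> Rollout:
    """One on-policy rollout: the model first emits a language choice, then
    reasons in that language. `policy_generate` is a hypothetical stand-in
    for the LLM's sampling call."""
    # Because the language choice is sampled from the policy itself (not
    # fixed to English), it receives gradient signal like any other action.
    language, trace, answer = policy_generate(question, LANGUAGES)
    return Rollout(language, trace, answer)

def reward(r: Rollout, gold_answer: str, trace_language: str) -> float:
    # Illustrative reward shape: task correctness plus a small bonus when the
    # trace actually uses the selected language. The 0.1 weight is made up.
    r_task = 1.0 if r.answer == gold_answer else 0.0
    r_lang = 1.0 if trace_language == r.language else 0.0
    return r_task + 0.1 * r_lang
```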
Executive Summary
This paper proposes ExpLang, a post-training pipeline for large reasoning models (LRMs) that lets the model select its thinking language on-policy, improving both exploration and exploitation during reinforcement learning (RL) across multiple languages. ExpLang outperforms English-only training under the same training budget while maintaining high thinking language compliance for both seen and unseen languages. By diversifying language preference, the method extends the RL exploration space, and by leveraging the advantage of non-English reasoning, it improves the RL exploitation outcome. The work opens a new perspective on using multilinguality to improve LRMs.
Key Points
- ▸ ExpLang treats thinking language selection as an on-policy action to improve exploration and exploitation during RL with multiple languages.
- ▸ The method outperforms English-only training under the same training budget and shows high thinking language compliance for both seen and unseen languages.
- ▸ ExpLang extends the RL exploration space through diversified language preference and improves the RL exploitation outcome by leveraging the advantage of non-English reasoning (see the sketch after this list).
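To make the exploration claim concrete: if rollouts in different thinking languages are scored within the same group, a group-relative scheme naturally shifts probability mass toward whichever language solved the problem. A GRPO-style normalization is assumed here purely for illustration; the abstract says only that ExpLang is orthogonal to most RL algorithms:

```python
from statistics import mean, pstdev

def group_advantages(rewards: list[float]) -> list[float]:
    """Group-relative advantage normalization (a GRPO-style choice assumed
    for illustration; the paper's actual RL algorithm is not specified in
    this excerpt). Rollouts in different thinking languages share one group,
    so a non-English trace that solves the problem earns a positive
    advantage and pushes probability toward that language."""
    mu = mean(rewards)
    sigma = pstdev(rewards) or 1.0  # guard against flat groups (std = 0)
    return [(r - mu) / sigma for r in rewards]

# Hypothetical example: three rollouts on one question, one per language,
# where only the Chinese-language trace answered correctly (made-up rewards).
rewards = [0.1, 1.1, 0.1]        # en, zh, fr
print(group_advantages(rewards)) # the zh rollout gets the only positive advantage
```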
Merits
Improves Exploration and Exploitation
ExpLang enables on-policy thinking language selection, which improves exploration and exploitation during RL with multiple languages.
Enhances Multilinguality
The method leverages the advantage of non-English reasoning and maintains high thinking language compliance for both seen and unseen languages.
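A minimal sketch of what a compliance check could look like, assuming "compliance" means the reasoning trace is actually written in the language the policy selected. The `langdetect` package is one off-the-shelf way to verify this; the paper's actual compliance metric is not given in the excerpt:

```python
# pip install langdetect
from langdetect import detect

def is_compliant(thinking_trace: str, target_lang: str) -> bool:
    try:
        # detect() returns an ISO 639-1 code such as "en" or "fr"
        return detect(thinking_trace) == target_lang
    except Exception:  # very short or ambiguous traces can fail detection
        return False

print(is_compliant("Réfléchissons étape par étape sur ce problème.", "fr"))
```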
Orthogonal to Most RL Algorithms
ExpLang is orthogonal to most RL algorithms: because language selection is simply an additional action in the rollout, the pipeline can be layered on top of existing RL post-training methods.
Demerits
Limited Evaluation
The evaluation covers a specific set of tasks and languages, so the results may not generalize to all domains and languages.
Dependence on High-Quality Training Data
ExpLang requires high-quality training data to achieve optimal performance, which may be a challenge in certain domains.
Expert Commentary
The article makes a significant contribution to the field of large reasoning models by proposing a post-training pipeline that enables on-policy thinking language selection. The results demonstrate that ExpLang improves exploration and exploitation during RL with multiple languages. However, the evaluation is limited in scope, and the method's dependence on high-quality training data is a concern. Nevertheless, the implications are significant: ExpLang has the potential to improve the performance of LRMs in real-world applications. Future research should generalize the method to more domains and languages and examine concerns about cultural bias in AI.
Recommendations
- ✓ Further evaluation of ExpLang on a more diverse set of tasks and languages to generalize the results.
- ✓ Investigation into the method's dependence on high-quality training data and the development of strategies to mitigate this dependence.