DeReason: A Difficulty-Aware Curriculum Improves Decoupled SFT-then-RL Training for General Reasoning

arXiv:2603.11193v1 Announce Type: new Abstract: Reinforcement learning with Verifiable Rewards (RLVR) has emerged as a powerful paradigm for eliciting reasoning capabilities in large language models, particularly in mathematics and coding. While recent efforts have extended this paradigm to broader general scientific (STEM) domains, the complex interplay between supervised fine-tuning (SFT) and RL in these contexts remains underexplored. In this paper, we conduct controlled experiments revealing a critical challenge: for general STEM domains, RL applied directly to base models is highly sample-inefficient and is consistently surpassed by supervised fine-tuning (SFT) on moderate-quality responses. Yet sequential SFT followed by RL can further improve performance, suggesting that the two stages play complementary roles, and that how training data is allocated between them matters. Therefore, we propose DeReason, a difficulty-based data decoupling strategy for general reasoning. DeReason partitions training data by reasoning intensity estimated via LLM-based scoring into reasoning-intensive and non-reasoning-intensive subsets. It allocates broad-coverage, non-reasoning-intensive problems to SFT to establish foundational domain knowledge, and reserves a focused subset of difficult problems for RL to cultivate complex reasoning. We demonstrate that this principled decoupling yields better performance than randomly splitting the data for sequential SFT and RL. Extensive experiments on general STEM and mathematical benchmarks demonstrate that our decoupled curriculum training significantly outperforms SFT-only, RL-only, and random-split baselines. Our work provides a systematic study of the interplay between SFT and RL for general reasoning, offering a highly effective and generalized post-training recipe.
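The decoupling step described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: `score_reasoning_intensity` is a hypothetical stand-in for the paper's LLM-based scorer (here crudely approximated by problem length), and the threshold value is an assumption.

```python
def score_reasoning_intensity(problem: str) -> float:
    """Placeholder for the paper's LLM-based reasoning-intensity scorer.
    Returns a score in [0, 1]; approximated here by problem length purely
    for illustration."""
    return min(len(problem) / 200.0, 1.0)


def decouple(dataset, threshold=0.5):
    """Partition problems into a broad, non-reasoning-intensive pool for SFT
    and a focused, reasoning-intensive pool for RL, per the abstract."""
    sft_pool, rl_pool = [], []
    for problem in dataset:
        if score_reasoning_intensity(problem) >= threshold:
            rl_pool.append(problem)   # difficult problems reserved for RL
        else:
            sft_pool.append(problem)  # broad-coverage problems go to SFT
    return sft_pool, rl_pool


problems = [
    "What is 2+2?",
    "Prove that the set of primes is infinite using Euclid's argument "
    "and discuss why the construction yields a new prime factor.",
]
sft_data, rl_data = decouple(problems)
print(len(sft_data), len(rl_data))  # 1 1
```

The key design point is that the split is by estimated difficulty rather than at random; the paper reports that this principled allocation is what makes the sequential SFT-then-RL pipeline outperform a random split.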

Executive Summary

The article proposes DeReason, a difficulty-aware curriculum that improves decoupled SFT-then-RL training for general reasoning in large language models. It partitions training data, via LLM-based scoring of reasoning intensity, into reasoning-intensive and non-reasoning-intensive subsets, allocating broad-coverage problems to SFT and reserving difficult problems for RL. In extensive experiments on general STEM and mathematical benchmarks, this principled decoupling outperforms SFT-only, RL-only, and random-split baselines.

Key Points

  • DeReason is a difficulty-based data decoupling strategy for general reasoning
  • It partitions training data into reasoning-intensive and non-reasoning-intensive subsets
  • The approach outperforms SFT-only, RL-only, and random-split baselines in STEM and mathematical benchmarks

Merits

Effective Decoupling

DeReason's decoupling strategy allows for more efficient allocation of training data, leading to improved performance in general reasoning tasks.

Demerits

Limited Generalizability

The approach may not generalize well to other domains or tasks, and the effectiveness of DeReason in non-STEM contexts is unclear.

Expert Commentary

The article provides a systematic study of the interplay between SFT and RL for general reasoning and offers a practical, generalized post-training recipe. The proposed DeReason approach demonstrates the importance of careful data allocation between the two stages: broad-coverage data builds foundational domain knowledge during SFT, while difficult problems drive complex reasoning during RL. However, further research is needed to fully understand the limitations and potential applications of this approach, particularly in non-STEM contexts.

Recommendations

  • Further experimentation to evaluate the generalizability of DeReason to other domains and tasks
  • Investigation into the explainability and transparency of DeReason's LLM-based scoring approach

Sources