
RLVR Training of LLMs Does Not Improve Thinking Ability for General QA: Evaluation Method and a Simple Solution

Kaiyuan Li, Jing-Cheng Pang, Yang Yu

arXiv:2603.20799v1. Abstract: Reinforcement learning from verifiable rewards (RLVR) stimulates the thinking processes of large language models (LLMs), substantially enhancing their reasoning abilities on verifiable tasks. It is often assumed that similar gains should transfer to general question answering (GQA), but this assumption has not been thoroughly validated. To assess whether RLVR automatically improves LLM performance on GQA, we propose a Cross-Generation evaluation framework that measures the quality of intermediate reasoning by feeding the generated thinking context into LLMs of varying capabilities. Our evaluation leads to a discouraging finding: the efficacy of the thinking process on GQA tasks is markedly lower than on verifiable tasks, suggesting that explicit training on GQA remains necessary in addition to training on verifiable tasks. We further observe that direct RL training on GQA is less effective than RLVR. Our hypothesis is that, whereas verifiable tasks demand robust logical chains to obtain high rewards, GQA tasks often admit shortcuts to high rewards without cultivating high-quality thinking. To avoid possible shortcuts, we introduce a simple method, Separated Thinking And Response Training (START), which first trains only the thinking process, using rewards defined on the final answer. We show that START improves both the quality of thinking and the final answer across several GQA benchmarks and RL algorithms.

Executive Summary

This article evaluates whether reinforcement learning from verifiable rewards (RLVR) improves the thinking abilities of large language models (LLMs) on general question answering (GQA) tasks. The authors propose a novel evaluation framework, Cross-Generation, and show that RLVR does not automatically improve LLM performance on GQA. They then introduce a simple method, Separated Thinking And Response Training (START), which first trains only the thinking process, with rewards defined on the final answer, and report improvements in both thinking quality and final answers across several GQA benchmarks and RL algorithms.

Key Points

  • RLVR training does not automatically improve LLM performance on GQA tasks.
  • The efficacy of the thinking process on GQA tasks is lower than on verifiable tasks.
  • Direct RL training on GQA is less effective than RLVR.
  • START improves both the quality of thinking and the final answer on GQA tasks.

Merits

Strength of the Evaluation Framework

Rather than judging only final answers, the proposed Cross-Generation framework measures the quality of intermediate reasoning directly: the thinking context generated by one model is fed into LLMs of varying capabilities, and their resulting answer quality serves as a proxy for the quality of that reasoning.
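As a concrete illustration, the core loop of such a cross-model evaluation might look like the sketch below. The abstract does not specify the exact protocol, so the prompt template, the evaluator stand-ins, and the scoring function here are all illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of a Cross-Generation evaluation loop: a thinking
# trace produced by one model is handed to evaluator models of varying
# capability, and their answer accuracy, conditioned on that shared trace,
# is taken as a measure of the trace's quality.

def cross_generation_score(question, thinking, evaluators, score):
    """Average score of evaluator models answering `question`
    when conditioned on the shared `thinking` context."""
    prompt = f"{question}\n<think>{thinking}</think>\nAnswer:"
    answers = [evaluate(prompt) for evaluate in evaluators]
    return sum(score(a) for a in answers) / len(answers)

# Toy stand-ins for real model calls: one evaluator exploits a hint
# present in the thinking trace, the other ignores the trace entirely.
def strong_eval(prompt):
    return "4" if "2+2" in prompt else "unknown"

def weak_eval(prompt):
    return "unknown"

score = lambda ans: 1.0 if ans == "4" else 0.0
q = "What is 2 + 2?"
good_trace = "The sum 2+2 equals 4."
print(cross_generation_score(q, good_trace, [strong_eval, weak_eval], score))
```

A trace that actually carries the reasoning lifts the evaluators that can use it; an uninformative trace leaves all evaluators at baseline, which is exactly the signal the framework aggregates.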

Novelty of the START Method

Separated Thinking And Response Training (START) is a simple remedy for the shortcut problem the authors hypothesize: by first training only the thinking process, with rewards still defined on the final answer, it blocks routes to high reward that bypass high-quality thinking, improving LLM performance on GQA tasks.
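The credit-assignment idea behind START, as I read the abstract, can be sketched as a REINFORCE-style surrogate in which only the thinking tokens receive gradient while the reward is still computed from the final answer. The function names (`policy_logprob`, `answer_reward`) and the plain baseline are illustrative assumptions, not the paper's API or its exact algorithm.

```python
# Hedged sketch of START-style credit assignment: the policy update weights
# only the log-probabilities of the thinking segment, but the scalar reward
# is defined on the final answer that follows the thinking.

def start_policy_gradient(thinking_tokens, answer,
                          policy_logprob, answer_reward, baseline=0.0):
    """REINFORCE-style surrogate for the thinking segment only:
    (reward - baseline) * sum of thinking-token log-probs.
    Answer tokens contribute to the reward but receive no gradient."""
    r = answer_reward(answer)
    logp = sum(policy_logprob(t) for t in thinking_tokens)
    return (r - baseline) * logp
```

In a real trainer this surrogate would be maximized per sampled rollout; the point of the sketch is only that the answer enters through `answer_reward` while the gradient path runs exclusively through the thinking tokens.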

Demerits

Assumption of Verifiable Tasks

The assumption that verifiable tasks require robust logical chains to obtain high rewards may not generalize to all GQA tasks.

Limited Generalizability

The effectiveness of the START method may be limited to specific GQA benchmarks and RL algorithms.

Expert Commentary

The article presents a thorough evaluation of whether RLVR enhances the thinking abilities of LLMs on GQA tasks. Both the Cross-Generation evaluation framework and the START method are useful contributions to the field. However, the reward-shortcut hypothesis and the method's untested generalizability beyond the reported benchmarks and RL algorithms are notable limitations. The findings nonetheless carry practical implications for LLM post-training pipelines and motivate further research in this area.

Recommendations

  • Future research should investigate the effectiveness of the START method on a broader range of GQA benchmarks and RL algorithms.
  • The development of more comprehensive evaluation frameworks for LLMs is necessary to fully understand their thinking abilities and potential biases.

Sources

Original: arXiv - cs.CL