Test-Time Scaling with Diffusion Language Models via Reward-Guided Stitching

arXiv:2602.22871v1 Announce Type: new Abstract: Reasoning with large language models often benefits from generating multiple chains-of-thought, but existing aggregation strategies are typically trajectory-level (e.g., selecting the best trace or voting on the final answer), discarding useful intermediate work from partial or "nearly correct" attempts. We propose Stitching Noisy Diffusion Thoughts, a self-consistency framework that turns cheap diffusion-sampled reasoning into a reusable pool of step-level candidates. Given a problem, we (i) sample many diverse, low-cost reasoning trajectories using a masked diffusion language model, (ii) score every intermediate step with an off-the-shelf process reward model (PRM), and (iii) stitch these highest-quality steps across trajectories into a composite rationale. This rationale then conditions an autoregressive (AR) model (solver) to recompute only the final answer. This modular pipeline separates exploration (diffusion) from evaluation and solution synthesis, avoiding monolithic unified hybrids while preserving broad search. Across math reasoning benchmarks, we find that step-level recombination is most beneficial on harder problems, and ablations highlight the importance of the final AR solver in converting stitched but imperfect rationales into accurate answers. Using low-confidence diffusion sampling with parallel, independent rollouts, our training-free framework improves average accuracy by up to 23.8% across six math and coding tasks. At the same time, it achieves up to a 1.8x latency reduction relative to both traditional diffusion models (e.g., Dream, LLaDA) and unified architectures (e.g., TiDAR). Code is available at https://github.com/roymiles/diffusion-stitching.

Executive Summary

This article presents Stitching Noisy Diffusion Thoughts, a self-consistency framework that leverages a masked diffusion language model to cheaply generate many diverse chains-of-thought. Every intermediate step of these trajectories is scored with an off-the-shelf process reward model, and the highest-scoring steps are stitched across trajectories into a composite rationale. An autoregressive solver, conditioned on this rationale, then recomputes only the final answer. The authors report up to a 23.8% average accuracy improvement and up to a 1.8x latency reduction across six math and coding tasks, making the framework a promising approach for scalable, training-free reasoning. The modular pipeline separates exploration from evaluation and solution synthesis, avoiding the limitations of monolithic unified hybrids. However, the framework's reliance on off-the-shelf reward models, and the risk that imperfect stitched rationales mislead the final answer, are notable considerations.
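The stitch-then-solve pipeline described above can be sketched as a greedy, position-wise argmax over process-reward scores. This is a minimal illustration of one plausible reading of the method, assuming step-aligned trajectories; the class and function names are hypothetical, not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class Step:
    text: str
    score: float  # PRM score for this intermediate reasoning step

def stitch(trajectories: list[list[Step]]) -> list[str]:
    """Pick the highest-PRM step at each position across parallel
    diffusion rollouts, forming a composite rationale (a sketch)."""
    depth = max(len(t) for t in trajectories)
    rationale = []
    for i in range(depth):
        # Candidates: step i from every rollout long enough to have one.
        candidates = [t[i] for t in trajectories if i < len(t)]
        best = max(candidates, key=lambda s: s.score)
        rationale.append(best.text)
    return rationale

# Toy example: two rollouts with made-up PRM scores. The composite
# rationale mixes steps from both, and would then be prepended to the
# prompt for an autoregressive solver to produce the final answer.
t1 = [Step("a1", 0.9), Step("b1", 0.2), Step("c1", 0.7)]
t2 = [Step("a2", 0.4), Step("b2", 0.8), Step("c2", 0.6)]
print(stitch([t1, t2]))  # → ['a1', 'b2', 'c1']
```

The key design point is that stitching is purely selection over already-sampled steps, so the diffusion sampler, the reward model, and the solver remain independent, swappable components.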

Key Points

  • Stitching Noisy Diffusion Thoughts framework aggregates intermediate steps from diverse reasoning trajectories
  • An autoregressive solver, conditioned on the composite rationale, recomputes only the final answer
  • Up to 23.8% average accuracy improvement and up to 1.8x latency reduction across six math and coding tasks

Merits

Strength in Modularity

The framework's modular pipeline separates exploration (diffusion sampling) from evaluation (PRM scoring) and solution synthesis (AR solving), so each component can be scaled or swapped independently, avoiding the limitations of monolithic unified hybrids.

Demerits

Limitation in Reward Model Reliance

The framework relies on off-the-shelf reward models, which may not always accurately capture the nuances of reasoning and problem-solving.

Risk of Imperfect Rationales

Stitched rationales can combine locally high-scoring but globally inconsistent steps; if the autoregressive solver cannot correct these imperfections, accuracy and reliability suffer.

Expert Commentary

The Stitching Noisy Diffusion Thoughts framework presents a compelling approach to test-time scaling, pairing cheap diffusion-based exploration with autoregressive solution synthesis. Its chief open questions concern the off-the-shelf process reward model, whose step scores drive the stitching, and the coherence of rationales assembled from different trajectories. Further research is needed to develop stronger reward models and evaluation metrics for composite rationales. The modular design, which cleanly separates exploration from evaluation, makes the framework well suited to large-scale reasoning applications.

Recommendations

  • Develop more advanced reward models that can accurately capture the nuances of reasoning and problem-solving.
  • Investigate the use of more robust evaluation metrics to address the potential for imperfect rationales to influence the final answer.
