Preference Optimization for Review Question Generation Improves Writing Quality

arXiv:2602.15849v1 Announce Type: cross Abstract: Peer review relies on substantive, evidence-based questions, yet existing LLM-based approaches often generate surface-level queries, drawing over 50% of their question tokens from a paper's first page. To bridge this gap, we develop IntelliReward, a novel reward model built from a frozen autoregressive LLM with trainable multi-head transformers over the final 50 token states, which outperforms API-based SFT baselines in predicting expert-level human preferences. By applying Decoupled Clip and Dynamic Sampling Policy Optimization (DAPO) with IntelliReward, we train IntelliAsk, a question-generation model aligned with human standards of effort, evidence, and grounding. We find consistent improvements on reasoning and writing benchmarks, suggesting reviewer-question quality correlates with broader capabilities. Compared to the Qwen3-32B base model, IntelliAsk shows measurable gains across diverse benchmarks, specifically improving performance on reasoning tasks like MuSR (68.3 vs 64.7 Acc) and complex writing evaluations such as WritingBench (8.31 vs 8.07). We release our implementation, expert preference annotations, and the IntelliReward model to provide an automatic evaluation benchmark for grounding, effort, and evidence in LLM-generated review questions.

Executive Summary

The article presents IntelliReward, a novel reward model, and IntelliAsk, a question-generation model trained via preference optimization to produce substantive, evidence-based review questions. The models outperform existing LLM-based approaches and API-based SFT baselines at predicting expert-level human preferences, and training against the reward model yields consistent improvements on reasoning and writing benchmarks, suggesting that reviewer-question quality correlates with broader model capabilities.

Key Points

  • IntelliReward is a novel reward model built from a frozen autoregressive LLM with trainable multi-head transformers over its final 50 token states
  • IntelliAsk is a question-generation model aligned with human standards of effort, evidence, and grounding
  • The models demonstrate measurable gains across diverse benchmarks, including reasoning tasks and complex writing evaluations
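The architecture in the first key point (a frozen LLM backbone with a small trainable transformer reading the last 50 hidden states) can be sketched roughly as follows. This is an illustrative assumption, not the paper's exact configuration: layer counts, dimensions, and the mean-pooling choice are placeholders, and random tensors stand in for the frozen backbone's hidden states.

```python
import torch
import torch.nn as nn

class RewardHead(nn.Module):
    """Hypothetical IntelliReward-style head: a small trainable transformer
    reads the final K hidden states of a frozen LM and emits a scalar
    preference score. All sizes here are illustrative."""
    def __init__(self, hidden_dim=64, num_heads=4, num_layers=2, k_last=50):
        super().__init__()
        self.k_last = k_last
        layer = nn.TransformerEncoderLayer(
            d_model=hidden_dim, nhead=num_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.score = nn.Linear(hidden_dim, 1)

    def forward(self, hidden_states):
        # hidden_states: (batch, seq_len, hidden_dim) from the frozen LM
        x = hidden_states[:, -self.k_last:, :]   # keep only the final K states
        x = self.encoder(x)                      # trainable attention over them
        return self.score(x.mean(dim=1)).squeeze(-1)  # pooled scalar reward

# Stand-in for the frozen backbone: random hidden states, no gradients.
with torch.no_grad():
    states = torch.randn(2, 128, 64)  # batch of 2, 128 tokens, dim 64
reward = RewardHead()(states)
```

Only the head's parameters receive gradients, which is what makes this design cheap to train relative to full fine-tuning of the backbone.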

Merits

Improved Performance

The models show measurable gains over API-based SFT baselines in predicting expert-level human preferences and in generating well-grounded, high-effort review questions
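Reward models of this kind are typically fit to pairwise expert annotations with a Bradley-Terry-style loss that pushes the preferred question's score above the rejected one's. A minimal sketch of that standard recipe (the paper's exact objective may differ):

```python
import torch
import torch.nn.functional as F

def preference_loss(score_chosen, score_rejected):
    """Bradley-Terry pairwise loss: maximize the log-probability that the
    expert-preferred question scores higher than the rejected one.
    (Standard reward-modeling technique; illustrative here.)"""
    return -F.logsigmoid(score_chosen - score_rejected).mean()

chosen = torch.tensor([2.0, 1.5])    # scores for expert-preferred questions
rejected = torch.tensor([0.5, 1.0])  # scores for the alternatives
loss = preference_loss(chosen, rejected)
```

The loss shrinks as the score margin between preferred and rejected questions grows, so minimizing it teaches the reward model to rank questions the way the experts did.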

Novel Approach

The introduction of IntelliReward and IntelliAsk offers a novel approach to preference optimization for review question generation

Demerits

Limited Context

The article focuses on a specific application of LLM-based approaches, which may limit the generalizability of the results

Dependence on Expert Annotations

The models rely on expert preference annotations, which may be time-consuming and costly to obtain

Expert Commentary

The article presents a significant contribution to the field of natural language processing, as it addresses the long-standing challenge of generating high-quality review questions. The introduction of IntelliReward and IntelliAsk offers a novel approach to preference optimization for review question generation, with the potential to improve writing quality and enhance human-AI collaboration. However, further research is needed to explore the limitations and generalizability of the models, as well as their potential applications in other domains.

Recommendations

  • Future studies should investigate the applicability of IntelliReward and IntelliAsk to different domains and tasks
  • Researchers should explore methods to reduce the dependence on expert annotations and improve the explainability and transparency of the models
