ReflexiCoder: Teaching Large Language Models to Self-Reflect on Generated Code and Self-Correct It via Reinforcement Learning
arXiv:2603.05863v1 Announce Type: new Abstract: While Large Language Models (LLMs) have revolutionized code generation, standard "System 1" approaches, which generate solutions in a single forward pass, often hit a performance ceiling on complex algorithmic tasks. Existing iterative refinement strategies attempt to bridge this gap at inference time, yet they predominantly rely on external oracles, execution feedback, or computationally expensive prompt-response cycles. In this work, we propose ReflexiCoder, a novel reinforcement learning (RL) framework that internalizes the structured reasoning trajectory, encompassing initial generation, bug- and optimization-aware reflection, and self-correction, directly into the model's weights. Unlike prior methods, ReflexiCoder shifts the paradigm from externally dependent refinement to intrinsic, fully autonomous self-reflection and self-correction at inference time. We utilize an RL-zero training paradigm with granular reward functions to optimize the entire reflection-correction trajectory, teaching the model how to debug without relying on ground-truth feedback or execution engines at inference time. Extensive experiments across seven benchmarks demonstrate that our ReflexiCoder-8B establishes a new state-of-the-art (SOTA) among leading open-source models in the 1.5B-14B range, achieving 94.51% (87.20%) on HumanEval (Plus), 81.80% (78.57%) on MBPP (Plus), 35.00% on BigCodeBench, 52.21% on LiveCodeBench, and 37.34% on CodeForces in a single-attempt setting, rivaling or surpassing proprietary models like GPT-5.1. Notably, our framework is significantly more token-efficient than base models, reducing inference-time compute overhead by approximately 40% through disciplined, high-speed reasoning and reflection patterns. Source code is available at https://github.com/juyongjiang/ReflexiCoder.
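The granular reward over the generation-reflection-correction trajectory can be sketched roughly as follows. This is a hypothetical illustration, not the paper's actual reward: the stage weights, the `Trajectory` fields, and the well-formedness signal are all illustrative assumptions. In this style of training, test-based signals score rollouts only during RL; at inference time the model relies on its internalized habits.

```python
# Hypothetical sketch of a granular reward over a generate-reflect-correct
# trajectory, in the spirit of ReflexiCoder's RL-zero training.
# All weights and field names here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Trajectory:
    initial_code: str    # first-pass "System 1" solution
    reflection: str      # the model's own critique of that solution
    corrected_code: str  # revised solution produced after reflection

def granular_reward(traj: Trajectory,
                    passed_initial: bool,
                    passed_corrected: bool,
                    reflection_is_wellformed: bool) -> float:
    """Combine per-stage signals into one scalar reward for RL."""
    reward = 0.0
    if passed_corrected:
        reward += 1.0        # final correctness dominates
    if reflection_is_wellformed:
        reward += 0.2        # reward a structured, on-topic critique
    if not passed_initial and passed_corrected:
        reward += 0.5        # bonus for a genuine self-repair
    # small length penalty to encourage token-efficient trajectories
    length_penalty = 1e-4 * (len(traj.reflection) + len(traj.corrected_code))
    return reward - length_penalty

# Example rollout: a buggy first pass that the model repairs itself.
traj = Trajectory("def add(a, b): return a - b",
                  "Bug: subtraction used instead of addition.",
                  "def add(a, b): return a + b")
score = granular_reward(traj, passed_initial=False,
                        passed_corrected=True,
                        reflection_is_wellformed=True)
```

The key design point the abstract describes is that this reward shapes the *whole* trajectory, not just the final answer, so the reflection and correction stages are optimized jointly rather than bolted on at inference time.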
Executive Summary
The article introduces ReflexiCoder, a novel reinforcement learning framework that enables large language models to self-reflect on and self-correct generated code without relying on external oracles or execution feedback. ReflexiCoder achieves state-of-the-art performance on several benchmarks, including HumanEval and MBPP, while also improving token efficiency, cutting inference-time compute overhead by roughly 40% relative to base models. These autonomous self-reflection and self-correction capabilities make it a promising approach for complex code generation tasks.
Key Points
- ▸ ReflexiCoder uses reinforcement learning to internalize the structured reasoning trajectory for code generation
- ▸ The framework achieves state-of-the-art performance on several benchmarks, including HumanEval and MBPP
- ▸ ReflexiCoder reduces inference-time compute overhead by approximately 40% through disciplined, high-speed reasoning and reflection patterns
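Because the trajectory is internalized, inference is a single autonomous pass that emits generation, reflection, and correction in one structured output. A minimal sketch of how such an output might be parsed is shown below; the tag names (`<code>`, `<reflect>`, `<correct>`) and the sample output are illustrative assumptions, not the paper's actual format:

```python
# Hypothetical single-pass output format for a ReflexiCoder-style model:
# draft, self-critique, and corrected code in one generation, with no
# external execution feedback. Tags and parsing logic are assumptions.
import re

SAMPLE_OUTPUT = """\
<code>
def fib(n):
    return fib(n - 1) + fib(n - 2)
</code>
<reflect>
Missing base case: fib(0) and fib(1) recurse forever.
</reflect>
<correct>
def fib(n):
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)
</correct>
"""

def final_solution(model_output: str) -> str:
    """Return the self-corrected code if present, else the first draft."""
    for tag in ("correct", "code"):  # prefer the corrected section
        m = re.search(rf"<{tag}>\n(.*?)\n</{tag}>", model_output, re.DOTALL)
        if m:
            return m.group(1)
    raise ValueError("no code section found")

solution = final_solution(SAMPLE_OUTPUT)
```

Extracting only the final corrected section is what makes the approach a "single-attempt" method in the benchmark sense: one sampled trajectory yields one submitted solution.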
Merits
Improved Accuracy
ReflexiCoder's self-reflection and self-correction capabilities lead to improved accuracy in code generation tasks
Increased Efficiency
Disciplined, internalized reasoning and reflection patterns cut inference-time compute overhead by roughly 40% relative to base models
Demerits
Limited Generalizability
ReflexiCoder's performance may not generalize to all code generation tasks or domains
Computational Requirements
The framework's reinforcement learning approach may require significant computational resources for training
Expert Commentary
ReflexiCoder represents a significant advancement in code generation, demonstrating that large language models can self-reflect and self-correct without relying on external oracles or execution feedback. Its combined gains in accuracy and token efficiency make it a promising direction for practical deployment. However, further research is needed to address its limitations and potential risks, including limited generalizability across tasks and domains and possible vulnerability to adversarial inputs.
Recommendations
- ✓ Further research is needed to improve the generalizability and robustness of ReflexiCoder
- ✓ The development of regulatory and ethical frameworks is necessary to address the potential risks and concerns associated with the use of AI in code generation