Execution-Verified Reinforcement Learning for Optimization Modeling
arXiv:2604.00442v1 Announce Type: new Abstract: Automating optimization modeling with LLMs is a promising path toward scalable decision intelligence, but existing approaches either rely on agentic pipelines built on closed-source LLMs with high inference latency, or fine-tune smaller LLMs using costly process supervision that often overfits to a single solver API. Inspired by reinforcement learning with verifiable rewards, we propose Execution-Verified Optimization Modeling (EVOM), an execution-verified learning framework that treats a mathematical programming solver as a deterministic, interactive verifier. Given a natural-language problem and a target solver, EVOM generates solver-specific code, executes it in a sandboxed harness, and converts execution outcomes into scalar rewards, optimized with GRPO and DAPO in a closed-loop generate-execute-feedback-update process. This outcome-only formulation removes the need for process-level supervision, and enables cross-solver generalization by switching the verification environment rather than reconstructing solver-specific datasets. Experiments on NL4OPT, MAMO, IndustryOR, and OptiBench across Gurobi, OR-Tools, and COPT show that EVOM matches or outperforms process-supervised SFT, supports zero-shot solver transfer, and achieves effective low-cost solver adaptation by continuing training under the target solver backend.
Executive Summary
This article summarizes a novel approach to automated optimization modeling with large language models (LLMs) called Execution-Verified Optimization Modeling (EVOM). EVOM applies reinforcement learning with verifiable rewards, treating a mathematical programming solver as a deterministic verifier: the model generates solver-specific code, the code is executed in a sandbox, and the execution outcome becomes a scalar reward for training. This eliminates the need for process-level supervision and enables cross-solver generalization by swapping the verification environment rather than rebuilding solver-specific datasets. Experiments show EVOM matches or outperforms process-supervised fine-tuning, supports zero-shot solver transfer, and achieves low-cost adaptation to a new solver by continuing training under that solver's backend. The framework offers a scalable, flexible, and efficient path toward automated decision intelligence.
Key Points
- ▸ EVOM treats a mathematical programming solver as a deterministic, interactive verifier in a reinforcement learning framework.
- ▸ The approach eliminates the need for process-level supervision and enables cross-solver generalization.
- ▸ EVOM matches or outperforms process-supervised SFT in experiments and supports zero-shot solver transfer.
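The generate-execute-feedback-update loop at the core of EVOM can be sketched as a reward function that runs candidate solver code in a sandboxed subprocess and maps the execution outcome to a scalar. This is a minimal illustration only; the function name, reward values, and output convention are assumptions, not the paper's actual implementation:

```python
import subprocess
import sys

def execution_reward(code: str, timeout_s: float = 5.0) -> float:
    """Run candidate solver code in a subprocess and convert the
    execution outcome into a scalar reward.

    Illustrative reward scheme (not the paper's exact values):
      1.0 -> code runs and prints a parseable "objective: <number>" line
      0.2 -> code runs cleanly but produces no parseable objective
      0.0 -> crash, non-zero exit, or timeout
    """
    try:
        result = subprocess.run(
            [sys.executable, "-c", code],
            capture_output=True,
            text=True,
            timeout=timeout_s,
        )
    except subprocess.TimeoutExpired:
        return 0.0  # hung or too slow: treated as failure
    if result.returncode != 0:
        return 0.0  # runtime error or solver failure
    # Look for an objective value in stdout.
    for line in result.stdout.splitlines():
        if line.lower().startswith("objective:"):
            try:
                float(line.split(":", 1)[1])
                return 1.0  # executed and reported a numeric objective
            except ValueError:
                pass
    return 0.2  # ran, but no verifiable objective to score
```

In the full framework, this scalar would feed a policy-gradient update (the paper uses GRPO and DAPO); switching solvers only changes what the sandboxed code imports and runs, not the reward interface.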
Merits
Scalability
EVOM's ability to leverage LLMs and reinforcement learning enables scalable optimization modeling, making it a promising solution for complex decision-making problems.
Flexibility
The proposed framework allows for switching the verification environment, enabling cross-solver generalization and reducing the need for solver-specific datasets.
Efficiency
EVOM's closed-loop generate-execute-feedback-update process trains smaller open models directly, avoiding the high inference latency of agentic pipelines built on closed-source LLMs.
Demerits
Dependence on LLMs
EVOM's performance relies heavily on the quality and availability of LLMs, which may introduce bias and variability in optimization results.
Complexity
The proposed framework may require significant computational resources and expertise to implement and fine-tune.
Expert Commentary
EVOM's innovative approach to optimization modeling demonstrates the potential of combining LLMs and reinforcement learning to achieve scalable, flexible, and efficient decision intelligence. While there are concerns regarding dependence on LLMs and complexity, the proposed framework has significant implications for various industries and policy-making. To further develop EVOM, researchers should focus on improving the quality and availability of LLMs, as well as reducing the complexity of the framework. Additionally, exploring the use of EVOM in real-world applications and evaluating its performance in different domains will be essential for its widespread adoption.
Recommendations
- ✓ Develop and refine the quality and availability of LLMs to reduce dependence on proprietary models.
- ✓ Implement and fine-tune EVOM in various domains to evaluate its performance and scalability.
Sources
Original: arXiv - cs.AI