
Toward Scalable Verifiable Reward: Proxy State-Based Evaluation for Multi-turn Tool-Calling LLM Agents

arXiv:2602.16246v1 Announce Type: new Abstract: Interactive large language model (LLM) agents operating via multi-turn dialogue and multi-step tool calling are increasingly used in production. Benchmarks for these agents must both reliably compare models and yield on-policy training data. Prior agentic benchmarks (e.g., tau-bench, tau2-bench, AppWorld) rely on fully deterministic backends, which are costly to build and iterate. We propose Proxy State-Based Evaluation, an LLM-driven simulation framework that preserves final state-based evaluation without a deterministic database. Specifically, a scenario specifies the user goal, user/system facts, expected final state, and expected agent behavior, and an LLM state tracker infers a structured proxy state from the full interaction trace. LLM judges then verify goal completion and detect tool/user hallucinations against scenario constraints. Empirically, our benchmark produces stable, model-differentiating rankings across families and inference-time reasoning efforts, and its on-/off-policy rollouts provide supervision that transfers to unseen scenarios. Careful scenario specification yields near-zero simulator hallucination rates as supported by ablation studies. The framework also supports sensitivity analyses over user personas. Human-LLM judge agreement exceeds 90%, indicating reliable automated evaluation. Overall, proxy state-based evaluation offers a practical, scalable alternative to deterministic agentic benchmarks for industrial LLM agents.
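The pipeline the abstract describes — a scenario specification, an LLM state tracker that distills a proxy state from the interaction trace, and judges that check the proxy state against the expected final state — can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the field names, the rule-based stand-ins for the LLM tracker and judge, and the toy scenario are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    """One evaluation scenario, mirroring the components named in the paper:
    user goal, user/system facts, expected final state, expected behavior."""
    user_goal: str
    user_facts: dict
    system_facts: dict
    expected_final_state: dict
    expected_behavior: list

def infer_proxy_state(trace: list) -> dict:
    """Stand-in for the LLM state tracker. In the paper an LLM reads the
    full interaction trace; here a trivial rule accumulates the declared
    effects of tool calls so the sketch runs without a model backend."""
    state = {}
    for event in trace:
        if event.get("type") == "tool_call":
            state.update(event.get("effects", {}))
    return state

def judge(scenario: Scenario, proxy_state: dict) -> dict:
    """Stand-in for the LLM judge, reduced to an exact-match check:
    the goal counts as completed if the proxy state entails every
    key/value in the scenario's expected final state."""
    completed = all(proxy_state.get(k) == v
                    for k, v in scenario.expected_final_state.items())
    return {"goal_completed": completed}

# Toy run: one tool call whose effects satisfy the expected final state.
scenario = Scenario(
    user_goal="cancel order 42",
    user_facts={"order_id": 42},
    system_facts={},
    expected_final_state={"order_42_status": "cancelled"},
    expected_behavior=["confirm with the user before cancelling"],
)
trace = [{"type": "tool_call", "name": "cancel_order",
          "effects": {"order_42_status": "cancelled"}}]
print(judge(scenario, infer_proxy_state(trace))["goal_completed"])  # True
```

The design point the sketch preserves is that evaluation never queries a deterministic backend: correctness is decided entirely from the trace-derived proxy state against the scenario's declared expectations.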

Executive Summary

This article proposes a novel evaluation framework for large language model (LLM) agents, namely Proxy State-Based Evaluation. The framework simulates multi-turn dialogue and multi-step tool-calling scenarios, leveraging an LLM to infer a structured proxy state from the full interaction trace rather than relying on a deterministic backend database. This approach enables scalable and reliable evaluation, with human-LLM judge agreement exceeding 90%. The framework also supports sensitivity analyses over user personas and produces stable, model-differentiating rankings. While offering a practical alternative to deterministic agentic benchmarks, the approach depends on careful scenario specification to keep simulator hallucination rates near zero. The article's empirical results demonstrate the effectiveness of Proxy State-Based Evaluation, with implications for industrial LLM agent development and deployment.

Key Points

  • Proxy State-Based Evaluation simulates multi-turn dialogue and multi-step tool calling scenarios using an LLM state tracker.
  • The framework enables scalable and reliable evaluation, with human-LLM judge agreement exceeding 90%.
  • Proxy State-Based Evaluation supports sensitivity analyses over user personas and produces stable, model-differentiating rankings.

Merits

Strength in Scalability

The proposed framework offers a scalable alternative to deterministic agentic benchmarks, enabling reliable evaluation of industrial LLM agents.

Demerits

Potential Simulator Hallucination

Because the simulator is itself LLM-driven, it can hallucinate facts not present in the scenario. Ablation studies indicate that careful scenario specification drives simulator hallucination rates to near zero, but this specification effort remains a prerequisite rather than a given.

Expert Commentary

The proposed Proxy State-Based Evaluation framework offers a significant advancement in the evaluation of large language model agents. By leveraging an LLM to infer a structured proxy state from the full interaction trace, the framework enables scalable and reliable evaluation. The results demonstrate the effectiveness of the framework, with human-LLM judge agreement exceeding 90%. However, careful scenario specification and mitigation of potential simulator hallucination rates are essential to fully realize the benefits of this approach. The implications of Proxy State-Based Evaluation are far-reaching, with significant potential for industrial LLM agent development and deployment. Nevertheless, policymakers and industry stakeholders must address concerns about accountability, transparency, and bias.

Recommendations

  • Further research is needed to fully understand the impact of Proxy State-Based Evaluation on industrial LLM agent development and deployment.
  • Policymakers and industry stakeholders should prioritize the development of guidelines and best practices for the deployment of Proxy State-Based Evaluation and similar frameworks.
