Advancing Multimodal Judge Models through a Capability-Oriented Benchmark and MCTS-Driven Data Generation
arXiv:2603.00546v1. Abstract: Using Multimodal Large Language Models (MLLMs) as judges to achieve precise and consistent evaluations has become an emerging paradigm across various domains. Evaluating the capability and reliability of MLLM-as-a-judge systems is therefore essential for ensuring trustworthy assessment. Existing judge benchmarks categorize samples by task type but fail to capture the fundamental judgment capabilities required for reliable evaluation. In this work, we introduce M-JudgeBench, a ten-dimensional, capability-oriented benchmark designed to comprehensively assess the judgment abilities of MLLMs. The benchmark decomposes evaluation into pairwise Chain-of-Thought (CoT) comparison, length-bias avoidance, and process error detection tasks, jointly covering ten fine-grained subtasks. This design enables diagnosis of model reliability across reasoning styles, response lengths, and cross-model variations. Systematic evaluation uncovers systematic weaknesses in existing MLLM-as-a-judge systems. To address these weaknesses, we further propose Judge-MCTS, a data construction framework that generates pairwise reasoning trajectories of varying correctness and length. Using Judge-MCTS, we construct an MCTS-augmented dataset and train M-Judger, a series of strong judge models. Extensive experiments demonstrate the superiority of M-Judger on existing judge benchmarks as well as on M-JudgeBench. Overall, our work establishes a more principled foundation for evaluating MLLM-as-a-judge systems through the M-JudgeBench benchmark and the Judge-MCTS framework, paving the way for future research on judge model evaluation and capability-driven judge training.
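To make the pairwise-comparison setting concrete, the sketch below shows how a judge's verdicts over pairs of candidate responses can be scored for accuracy against ground-truth preferences and for consistency under response-order swapping, a simple proxy for positional and length bias. This is a minimal illustration, not the paper's evaluation code; the `length_biased_judge` stand-in and the toy samples are assumptions made for the example.

```python
# Illustrative sketch of pairwise judge evaluation: score a judge's
# verdicts for accuracy and for consistency under order swapping.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class PairwiseSample:
    question: str
    response_a: str
    response_b: str
    gold: str  # "A" or "B": which response is actually better


def length_biased_judge(question: str, a: str, b: str) -> str:
    """Toy judge that always prefers the longer response."""
    return "A" if len(a) >= len(b) else "B"


def evaluate_judge(judge: Callable[[str, str, str], str],
                   samples: List[PairwiseSample]) -> dict:
    correct = 0
    consistent = 0
    for s in samples:
        verdict = judge(s.question, s.response_a, s.response_b)
        # Swap presentation order; a position-robust judge should flip
        # its label accordingly ("A" <-> "B").
        swapped = judge(s.question, s.response_b, s.response_a)
        if verdict == s.gold:
            correct += 1
        if {verdict, swapped} == {"A", "B"}:
            consistent += 1
    n = len(samples)
    return {"accuracy": correct / n, "swap_consistency": consistent / n}


if __name__ == "__main__":
    data = [
        PairwiseSample("2+2=?", "4", "The answer is probably 5.", gold="A"),
        PairwiseSample("Capital of France?", "Paris.",
                       "It is Paris, a large city in France.", gold="A"),
    ]
    print(evaluate_judge(length_biased_judge, data))
```

Running this with the toy length-biased judge yields low accuracy despite perfect swap consistency, which illustrates why accuracy and bias-avoidance need to be measured as separate capabilities.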
Executive Summary
This article presents a new approach to evaluating Multimodal Large Language Models (MLLMs) as judges across domains. The authors introduce M-JudgeBench, a ten-dimensional, capability-oriented benchmark that assesses the judgment abilities of MLLMs, and Judge-MCTS, a data construction framework that generates pairwise reasoning trajectories of varying correctness and length. The benchmark enables diagnosis of model reliability across reasoning styles, response lengths, and cross-model variations. Extensive experiments demonstrate the superiority of M-Judger, a series of judge models trained on the MCTS-augmented data, on existing judge benchmarks as well as on M-JudgeBench. The work establishes a more principled foundation for evaluating MLLM-as-a-judge systems, paving the way for future research on judge model evaluation and capability-driven judge training.
Key Points
- ▸ The authors introduce M-JudgeBench, a ten-dimensional benchmark for evaluating MLLM-as-a-judge systems.
- ▸ The benchmark decomposes evaluation into pairwise Chain-of-Thought (CoT) comparison, length-bias avoidance, and process error detection, jointly covering ten fine-grained subtasks.
- ▸ Judge-MCTS is a data construction framework that generates pairwise reasoning trajectories of varying correctness and length (a rough sketch of the idea follows this list).
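As a rough illustration of how an MCTS-style data constructor could produce such trajectories, the sketch below expands a tree of reasoning steps, scores completed trajectories with a verifier, and pairs a correct trajectory with a flawed one as a pairwise training example. Every component here (the `expand_step` and `verify` placeholders, the branching factor, the reward scheme) is an assumption for illustration, not the authors' Judge-MCTS implementation.

```python
# Hedged sketch of MCTS-style construction of pairwise reasoning data.
import math
import random
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Node:
    steps: List[str]                       # question + reasoning steps so far
    parent: Optional["Node"] = None
    children: List["Node"] = field(default_factory=list)
    visits: int = 0
    value: float = 0.0

    def ucb(self, c: float = 1.4) -> float:
        if self.visits == 0:
            return float("inf")
        return self.value / self.visits + c * math.sqrt(
            math.log(self.parent.visits) / self.visits)


def expand_step(steps: List[str]) -> str:
    """Placeholder for sampling one more reasoning step from an MLLM."""
    return f"step {len(steps)} ({random.choice(['sound', 'flawed'])})"


def verify(steps: List[str]) -> float:
    """Placeholder verifier: 1.0 if no flawed step, else 0.0."""
    return 0.0 if any("flawed" in s for s in steps) else 1.0


def mcts_trajectories(question: str, n_rollouts: int = 32,
                      max_depth: int = 4, branch: int = 3):
    root = Node(steps=[question])
    finished = []
    for _ in range(n_rollouts):
        # Selection: descend by UCB while the node is fully expanded.
        node = root
        while len(node.children) >= branch:
            node = max(node.children, key=Node.ucb)
        # Expansion: add one new reasoning step if depth allows.
        if len(node.steps) <= max_depth:
            child = Node(steps=node.steps + [expand_step(node.steps)], parent=node)
            node.children.append(child)
            node = child
        # Simulation: complete the trajectory by sampling steps to max depth.
        rollout = list(node.steps)
        while len(rollout) < max_depth + 1:
            rollout.append(expand_step(rollout))
        reward = verify(rollout[1:])  # exclude the question itself
        finished.append((rollout, reward))
        # Backpropagation: update statistics along the selected path.
        while node is not None:
            node.visits += 1
            node.value += reward
            node = node.parent
    return finished


def make_pairwise_example(trajectories):
    """Pair one verified-correct trajectory with one flawed trajectory."""
    good = [t for t, r in trajectories if r == 1.0]
    bad = [t for t, r in trajectories if r < 1.0]
    if good and bad:
        return {"chosen": random.choice(good), "rejected": random.choice(bad)}
    return None


if __name__ == "__main__":
    trajs = mcts_trajectories("What is shown in the image?")
    print(make_pairwise_example(trajs))
```

Because the tree search keeps both high- and low-reward branches, trajectories of different depths and correctness fall out of the same run, which is the property the paper attributes to its MCTS-augmented training data.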
Merits
Strength
The benchmark provides a comprehensive assessment of MLLM judgment capabilities, enabling diagnosis of model reliability across reasoning styles, response lengths, and cross-model variations.
Originality
The introduction of M-JudgeBench and Judge-MCTS presents a novel approach to evaluating MLLM-as-a-judge systems.
Impact
The work establishes a more principled foundation for evaluating MLLM-as-a-judge systems, paving the way for future research on judge model evaluation and capability-driven judge training.
Demerits
Limitation
The proposed framework may not be universally applicable, as the evaluation tasks and metrics may need to be tailored to specific domains or use cases.
Scalability
The complexity of M-JudgeBench and Judge-MCTS may limit how well they scale to large-scale MLLM-as-a-judge deployments.
Expert Commentary
This article presents a significant contribution to the field of MLLM-as-a-judge evaluation, offering a comprehensive framework for assessing the judgment capabilities of these models. The introduction of M-JudgeBench and Judge-MCTS provides a more principled approach to evaluating MLLM-as-a-judge systems, enabling diagnosis of model reliability across various dimensions. While the proposed framework may have limitations in terms of scalability and universality, it has the potential to significantly impact the development and deployment of MLLM-as-a-judge systems in various domains. The work also highlights the need for more research on judge model evaluation and capability-driven judge training, which could inform policy decisions related to AI adoption and deployment.
Recommendations
- ✓ Recommendation 1: Future research should focus on extending the proposed framework to evaluate the robustness of MLLMs against adversarial attacks and developing more scalable and universal evaluation methods.
- ✓ Recommendation 2: The development of more principled foundations for evaluating MLLM-as-a-judge systems should be prioritized, with a focus on informing policy decisions related to AI adoption and deployment.