DeepFact: Co-Evolving Benchmarks and Agents for Deep Research Factuality
arXiv:2603.05912v1. Abstract: Search-augmented LLM agents can produce deep research reports (DRRs), but verifying claim-level factuality remains challenging. Existing fact-checkers are primarily designed for general-domain, factoid-style atomic claims, and there is no benchmark to test whether such verifiers transfer to DRRs. Yet building such a benchmark is itself difficult. We first show that static expert-labeled benchmarks are brittle in this setting: in a controlled study with PhD-level specialists, unassisted experts achieve only 60.8% accuracy on a hidden micro-gold set of verifiable claims. We propose Evolving Benchmarking via Audit-then-Score (AtS), where benchmark labels and rationales are explicitly revisable: when a verifier disagrees with the current benchmark, it must submit evidence; an auditor adjudicates the dispute; and accepted revisions update the benchmark before models are scored. Across four AtS rounds, expert micro-gold accuracy rises to 90.9%, indicating experts are substantially more reliable as auditors than as one-shot labelers. We instantiate AtS as DeepFact-Bench, a versioned DRR factuality benchmark with auditable rationales, and DeepFact-Eval, a document-level verification agent (with a grouped lite variant) that outperforms existing verifiers on DeepFact-Bench and transfers well to external factuality datasets.
Executive Summary
This article proposes a new approach to benchmarking and evaluating the factuality of deep research reports (DRRs). Through a controlled study with PhD-level specialists, the authors show that static expert-labeled benchmarks are brittle in this setting, and they introduce Evolving Benchmarking via Audit-then-Score (AtS), a dynamic framework in which benchmark labels and rationales are explicitly revisable: a verifier that disagrees with the current label must submit evidence, an auditor adjudicates the dispute, and accepted revisions update the benchmark before models are scored. The authors instantiate AtS as DeepFact-Bench, a versioned DRR factuality benchmark with auditable rationales, paired with DeepFact-Eval, a document-level verification agent that outperforms existing verifiers on DeepFact-Bench and transfers well to external factuality datasets. The framework has the potential to substantially improve claim-level fact-checking for deep research reports.
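To make the AtS adjudication loop concrete, here is a minimal Python sketch of a single round under the protocol described above. The `BenchmarkEntry` dataclass and the `verifier.verify` / `auditor.adjudicate` interfaces are illustrative assumptions, not the paper's actual API.

```python
from dataclasses import dataclass

@dataclass
class BenchmarkEntry:
    """A verifiable claim extracted from a deep research report."""
    claim: str
    label: bool        # True = supported by evidence, False = not supported
    rationale: str

def audit_then_score_round(benchmark, verifier, auditor):
    """One hypothetical Audit-then-Score round.

    Phase 1 (audit): when the verifier disagrees with the current label, it
    must submit evidence; the auditor adjudicates, and accepted revisions
    update the benchmark.
    Phase 2 (score): only then is the verifier scored against the (possibly
    revised) labels.
    """
    # Phase 1: adjudicate disputes and apply accepted revisions.
    for entry in benchmark:
        predicted, evidence = verifier.verify(entry.claim)            # assumed interface
        if predicted != entry.label:
            verdict = auditor.adjudicate(entry, predicted, evidence)  # assumed interface
            if verdict.accept_revision:
                entry.label = predicted
                entry.rationale = verdict.rationale

    # Phase 2: score the verifier against the updated benchmark.
    correct = sum(verifier.verify(e.claim)[0] == e.label for e in benchmark)
    return correct / len(benchmark)
```

The key ordering is that disputes are resolved before any accuracy is computed, so a verifier is never penalized for disagreeing with a label that the auditor ultimately overturns.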
Key Points
- ▸ Static expert-labeled benchmarks are brittle for evaluating deep research reports: unassisted PhD-level experts reach only 60.8% accuracy on a hidden micro-gold set of verifiable claims.
- ▸ Evolving Benchmarking via Audit-then-Score (AtS) is proposed as a dynamic benchmarking framework in which labels and rationales are revised through evidence-backed disputes and expert adjudication.
- ▸ DeepFact-Bench and DeepFact-Eval are instantiated as a versioned DRR factuality benchmark and a document-level verification agent, respectively.
Merits
Strength in Dynamic Benchmarking
By allowing benchmark labels and rationales to be revised through evidence-backed disputes and expert adjudication, the framework adapts to the long, specialized claims found in deep research reports instead of locking in early labeling errors.
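One plausible way to realize this revisability while keeping results reproducible is to treat each claim as an append-only log of labeled versions, so any reported score can be pinned to the exact benchmark version it was computed against. The sketch below is a hypothetical illustration; the paper does not describe DeepFact-Bench's actual storage format.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Revision:
    """An immutable snapshot of a claim's label and rationale at one version."""
    version: int
    label: bool
    rationale: str
    evidence: Optional[str] = None   # evidence submitted by the disputing verifier, if any

class VersionedClaim:
    """Append-only revision log: old labels stay queryable, so earlier scores
    remain reproducible even after the benchmark evolves."""

    def __init__(self, claim: str, label: bool, rationale: str):
        self.claim = claim
        self._revisions = [Revision(version=1, label=label, rationale=rationale)]

    def revise(self, label: bool, rationale: str, evidence: str) -> Revision:
        """Record an auditor-accepted revision as a new version."""
        rev = Revision(len(self._revisions) + 1, label, rationale, evidence)
        self._revisions.append(rev)
        return rev

    def at_version(self, version: int) -> Revision:
        return self._revisions[version - 1]

    @property
    def current(self) -> Revision:
        return self._revisions[-1]
```

Under this scheme, `claim.revise(...)` adds a new version without discarding the old one, and `claim.at_version(1)` still returns the original label, which is what makes the benchmark "versioned" and its rationales auditable in the sense described above.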
Improved Accuracy
Across four AtS rounds, expert micro-gold accuracy rises from 60.8% (unassisted one-shot labeling) to 90.9%, suggesting that experts are far more reliable as auditors than as one-shot labelers.
Demerits
Complexity and Resource-Intensiveness
Running AtS is resource-intensive: it depends on PhD-level specialists serving as auditors across multiple dispute-and-adjudication rounds, which demands more time and expertise than one-shot labeling.
Expert Commentary
The proposed framework is a significant step forward in evaluating deep research reports, though it will be costly to implement and maintain. Its potential to improve label accuracy is substantial, but it also raises questions about how human auditors fit into the verification loop and whether a benchmark that evolves alongside the agents it scores can avoid drifting toward those agents' own errors. These implications deserve careful consideration before the framework is applied broadly.
Recommendations
- ✓ Further research is necessary to explore the scalability and generalizability of the proposed framework.
- ✓ The development of more efficient and cost-effective methods for implementing and maintaining the framework is essential.