Decomposing Physician Disagreement in HealthBench
arXiv:2602.22758v1

Abstract: We decompose physician disagreement in the HealthBench medical AI evaluation dataset to understand where variance resides and what observable features can explain it. Rubric identity accounts for 15.8% of met/not-met label variance but only 3.6-6.9% of disagreement variance; physician identity accounts for just 2.4%. The dominant 81.8% case-level residual is not reduced by HealthBench's metadata labels (z = -0.22, p = 0.83), normative rubric language (pseudo R^2 = 1.2%), medical specialty (0/300 Tukey pairs significant), surface-feature triage (AUC = 0.58), or embeddings (AUC = 0.485). Disagreement follows an inverted-U with completion quality (AUC = 0.689), confirming physicians agree on clearly good or bad outputs but split on borderline cases. Physician-validated uncertainty categories reveal that reducible uncertainty (missing context, ambiguous phrasing) more than doubles disagreement odds (OR = 2.55, p < 10^(-24)), while irreducible uncertainty (genuine medical ambiguity) has no effect (OR = 1.01, p = 0.90), though even the former explains only ~3% of total variance. The agreement ceiling in medical AI evaluation is thus largely structural, but the reducible/irreducible dissociation suggests that closing information gaps in evaluation scenarios could lower disagreement where inherent clinical ambiguity does not, pointing toward actionable evaluation design improvements.
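The abstract's headline percentages come from a variance decomposition over grader judgments. As a rough illustration of the idea only (not the paper's actual model, which presumably fits crossed random effects), here is a minimal eta-squared-style sketch; the long-format table and its column names (`met`, `rubric_id`, `physician_id`) are assumptions for illustration:

```python
# Eta-squared-style variance decomposition over grader labels, assuming a
# long-format DataFrame `labels` with one row per (case, rubric, physician)
# judgment and a binary `met` column. Column names are illustrative.
import pandas as pd

def variance_share(labels: pd.DataFrame, factor: str, outcome: str = "met") -> float:
    """Fraction of outcome variance explained by the group means of `factor`."""
    total_var = labels[outcome].var(ddof=0)
    # Per-row group mean; its variance is the between-group component.
    group_means = labels.groupby(factor)[outcome].transform("mean")
    return group_means.var(ddof=0) / total_var

# Example usage (estimates will differ from the paper's mixed-model figures):
# variance_share(labels, "rubric_id")     -> share attributable to rubric identity
# variance_share(labels, "physician_id")  -> share attributable to physician identity
# One minus the explained shares approximates the case-level residual.
```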
Executive Summary
This study decomposes physician disagreement in the HealthBench medical AI evaluation dataset to identify where variance resides and which observable features explain it. Physician identity accounts for just 2.4% of disagreement variance and rubric identity for 3.6-6.9%; the dominant 81.8% case-level residual resists every tested explanatory feature, including metadata labels, rubric language, medical specialty, surface features, and embeddings. Physician-validated uncertainty categories show that reducible uncertainty (missing context, ambiguous phrasing) more than doubles disagreement odds (OR = 2.55), yet explains only ~3% of total variance, while irreducible clinical ambiguity has no effect. The findings indicate that the agreement ceiling is largely structural, but that closing information gaps in evaluation scenarios could lower disagreement, pointing toward actionable evaluation design improvements.
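To make the inverted-U finding concrete, a minimal sketch on simulated data: score each completion by its closeness to mid-range quality and check how well that predicts disagreement via AUC. The variable names and the data-generating process are assumptions for illustration, not the paper's pipeline:

```python
# Simulated check of an inverted-U relationship between completion quality
# and physician disagreement, evaluated with AUC.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
quality = rng.uniform(0, 1, size=1000)             # stand-in quality scores
# Inverted-U: borderline quality -> higher probability of disagreement.
p_disagree = 0.6 - 1.6 * (quality - 0.5) ** 2
disagreed = rng.random(1000) < p_disagree

# Predictor: closeness to the midpoint of the quality scale.
borderline_score = -np.abs(quality - 0.5)
print("AUC:", roc_auc_score(disagreed, borderline_score))
```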
Key Points
- ▸ Physician identity explains just 2.4% of disagreement variance; rubric identity explains 3.6-6.9%.
- ▸ An unexplained case-level residual (81.8%) dominates disagreement variance and is not reduced by any tested feature.
- ▸ Reducible uncertainty more than doubles disagreement odds (OR = 2.55) yet explains only ~3% of total variance; see the sketch after this list.
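As a hedged sketch of the odds-ratio analysis behind the last point, a logistic regression of a per-case disagreement flag on binary uncertainty-category indicators; the DataFrame `cases` and its columns (`disagreed`, `reducible`, `irreducible`) are illustrative, not the paper's schema:

```python
# Logistic regression of disagreement on uncertainty-category flags; the
# exponentiated coefficients are odds ratios (the paper reports OR = 2.55
# for reducible and OR = 1.01 for irreducible uncertainty).
import numpy as np
import statsmodels.api as sm

def uncertainty_odds_ratios(cases):
    X = sm.add_constant(cases[["reducible", "irreducible"]].astype(float))
    model = sm.Logit(cases["disagreed"].astype(float), X).fit(disp=0)
    return np.exp(model.params), model.pvalues
```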
Merits
Methodological Rigor
The study employs a rigorous variance-component design on the HealthBench dataset, tests a broad battery of explanatory features (metadata labels, rubric language, medical specialty, surface features, embeddings), and grounds its uncertainty taxonomy in physician validation.
Contributions to the Field
By dissociating reducible from irreducible uncertainty, the study identifies where evaluation design can actually lower physician disagreement, offering actionable guidance for constructing medical AI benchmarks.
Demerits
Limited Generalizability
The findings derive from a single benchmark (HealthBench) and may not transfer to other medical AI evaluation datasets or clinical contexts.
Insufficient Exploration of Irreducible Uncertainty
The study establishes that irreducible uncertainty has no measurable main effect (OR = 1.01, p = 0.90) but probes it no further, so its full role in physician disagreement may be under-characterized.
Expert Commentary
The central contribution is the decomposition itself: the agreement ceiling in medical AI evaluation is largely structural, residing in an 81.8% case-level residual that resists every tested explanatory feature. The reducible/irreducible dissociation is the actionable core, since information gaps such as missing context and ambiguous phrasing can be closed by better scenario design, whereas genuine clinical ambiguity cannot be engineered away. That said, the single-benchmark scope and the thin treatment of irreducible uncertainty limit how far these conclusions travel; replication on other datasets and a deeper characterization of inherent clinical ambiguity are natural next steps.
Recommendations
- ✓ Future studies should characterize irreducible uncertainty more fully, beyond the null main effect reported here.
- ✓ Evaluation designers should prioritize closing information gaps (missing context, ambiguous phrasing) in evaluation scenarios to lower physician disagreement.