CURE: A Multimodal Benchmark for Clinical Understanding and Retrieval Evaluation
Abstract (arXiv:2603.19274v1): Multimodal large language models (MLLMs) demonstrate considerable potential in clinical diagnostics, a domain that inherently requires synthesizing complex visual and textual data alongside consulting authoritative medical literature. However, existing benchmarks primarily evaluate MLLMs in end-to-end answering scenarios. This limits the ability to disentangle a model's foundational multimodal reasoning from its proficiency in evidence retrieval and application. We introduce the Clinical Understanding and Retrieval Evaluation (CURE) benchmark. Comprising 500 multimodal clinical cases mapped to physician-cited reference literature, CURE evaluates reasoning and retrieval under controlled evidence settings to disentangle their respective contributions. We evaluate state-of-the-art MLLMs across distinct evidence-gathering paradigms in both closed-ended and open-ended diagnosis tasks. Evaluations reveal a stark dichotomy: while advanced models demonstrate clinical reasoning proficiency when supplied with physician reference evidence (achieving up to 73.4% accuracy on differential diagnosis), their performance substantially declines (as low as 25.4%) when reliant on independent retrieval mechanisms. This disparity highlights the dual challenges of effectively integrating multimodal clinical evidence and retrieving precise supporting literature. CURE is publicly available at https://github.com/yanniangu/CURE.
Executive Summary
This article introduces the Clinical Understanding and Retrieval Evaluation (CURE) benchmark, a multimodal evaluation framework for assessing the clinical reasoning and evidence retrieval capabilities of multimodal large language models (MLLMs) in clinical diagnostics. CURE comprises 500 multimodal clinical cases mapped to physician-cited reference literature and evaluates models under controlled evidence settings that separate reasoning from retrieval. The evaluations reveal a stark dichotomy: state-of-the-art models reach up to 73.4% accuracy on differential diagnosis when supplied with physician reference evidence, but drop to as low as 25.4% when they must retrieve supporting literature independently. This disparity highlights the dual challenges of integrating multimodal clinical evidence and retrieving precise supporting literature. The benchmark is publicly available and has direct implications for how MLLMs are developed and evaluated for clinical diagnostics.
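The contrast between the two evidence-gathering paradigms can be made concrete with a short sketch. This is a hypothetical illustration, not the released CURE code: the case schema (fields such as `history`, `cited_reference_passages`, `images`), the prompt wording, and the retriever interface are all assumptions for the purpose of the example.

```python
# Hypothetical sketch of the two evidence-gathering paradigms described above.
# Field names and the retriever interface are assumptions, not the CURE API.

def build_prompt(case, evidence_passages):
    """Assemble a diagnosis prompt from a clinical case plus supporting evidence text."""
    evidence_block = "\n\n".join(evidence_passages) if evidence_passages else "None provided."
    return (
        f"Patient history: {case['history']}\n"
        "Imaging findings are attached as images.\n"
        f"Supporting literature:\n{evidence_block}\n\n"
        "Give a ranked differential diagnosis."
    )

def oracle_evidence(case):
    """Reference setting: supply the physician-cited reference passages directly."""
    return case["cited_reference_passages"]

def retrieved_evidence(case, retriever, k=5):
    """Retrieval setting: the system must find its own supporting literature."""
    query = case["history"]  # simple text-only query; real systems may also use the images
    return retriever.search(query, top_k=k)  # hypothetical retriever interface
```

The point of the separation is that the same model, case, and prompt template are used in both settings; only the source of the evidence passages changes.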
Key Points
- ▸ CURE is a multimodal benchmark for assessing the clinical reasoning and evidence retrieval capabilities of MLLMs in clinical diagnostics.
- ▸ Evaluations reveal a stark dichotomy: up to 73.4% differential-diagnosis accuracy when physician-cited reference evidence is supplied, versus as low as 25.4% when models rely on independent retrieval.
- ▸ This gap highlights the dual challenges of integrating multimodal clinical evidence and retrieving precise supporting literature.
Merits
Strength
By mapping each of its 500 cases to physician-cited reference literature and controlling the evidence supplied to the model, CURE separates a model's multimodal clinical reasoning from its retrieval ability, two capabilities that end-to-end diagnostic benchmarks conflate.
Demerits
Limitation
CURE evaluates models under controlled evidence settings on a curated set of 500 cases, which may not reflect real-world clinical workflows, where evidence is noisier, less complete, and not neatly attributable to cited literature.
Expert Commentary
CURE is a useful contribution to the evaluation of MLLMs for clinical diagnostics because it isolates two capabilities that end-to-end benchmarks conflate: multimodal clinical reasoning and evidence retrieval. The reported gap between performance with physician-cited evidence (up to 73.4%) and with independent retrieval (as low as 25.4%) suggests that retrieval and evidence integration, rather than core reasoning, are currently the weaker links. The main caveat is that the controlled evidence settings and curated case set may not capture the messier evidence conditions of real clinical practice. Even so, the benchmark gives developers, evaluators, and policymakers a concrete way to measure retrieval and integration separately, which is where improvement efforts and oversight should focus.
Recommendations
- ✓ Use CURE to evaluate MLLMs' clinical reasoning and evidence retrieval separately, comparing performance with physician-cited evidence against performance with independent retrieval (a minimal evaluation sketch follows this list).
- ✓ In policymaking and clinical deployment decisions, weigh not only a model's diagnostic reasoning but also how reliably it retrieves and integrates supporting literature.
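As a companion to the first recommendation, the sketch below shows how the two evidence settings might be compared on the same case set, reusing the helper functions from the earlier sketch. The model interface (`model.diagnose`), the case loader, and the exact-match scoring rule are placeholders rather than the benchmark's actual evaluation protocol, which also covers open-ended diagnosis and ranked differential-diagnosis scoring.

```python
# Minimal sketch of an evaluation loop over the two evidence settings,
# reusing build_prompt, oracle_evidence, and retrieved_evidence from the
# earlier sketch. The model interface and case schema are placeholders.

def top1_accuracy(model, cases, evidence_fn):
    """Fraction of cases whose top prediction matches the reference diagnosis."""
    correct = 0
    for case in cases:
        prompt = build_prompt(case, evidence_fn(case))
        prediction = model.diagnose(prompt, images=case["images"])  # hypothetical call
        correct += int(prediction.strip().lower() == case["diagnosis"].strip().lower())
    return correct / len(cases)

def compare_evidence_settings(model, cases, retriever, k=5):
    """Report accuracy with physician-cited evidence vs. independent retrieval."""
    acc_oracle = top1_accuracy(model, cases, oracle_evidence)
    acc_retrieval = top1_accuracy(
        model, cases, lambda c: retrieved_evidence(c, retriever, k=k)
    )
    return {"oracle_evidence": acc_oracle, "independent_retrieval": acc_retrieval}
```

Reporting both numbers side by side, as the paper does, is what exposes whether a failure comes from reasoning over the supplied evidence or from finding that evidence in the first place.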
Sources
Original: arXiv - cs.AI