Contradiction to Consensus: Dual-Perspective, Multi-Source Retrieval-Based Claim Verification with Source-Level Disagreement Using LLMs
arXiv:2602.18693v1 Announce Type: new Abstract: The spread of misinformation across digital platforms can pose significant societal risks. Claim verification, a.k.a. fact-checking, systems can help identify potential misinformation. However, their efficacy is limited by the knowledge sources they rely on. Most automated claim verification systems depend on a single knowledge source and use only the supporting evidence from that source, ignoring disagreements between that source and others. This limits their knowledge coverage and transparency. To address these limitations, we present a novel system for open-domain claim verification (ODCV) that leverages large language models (LLMs), multi-perspective evidence retrieval, and cross-source disagreement analysis. Our approach introduces a novel retrieval strategy that collects evidence for both the original and the negated forms of a claim, enabling the system to capture supporting and contradicting information from diverse sources: Wikipedia, PubMed, and Google. These evidence sets are filtered, deduplicated, and aggregated across sources to form a unified and enriched knowledge base that better reflects the complexity of real-world information. This aggregated evidence is then used for claim verification using LLMs. We further enhance interpretability by analyzing model confidence scores to quantify and visualize inter-source disagreement. Through extensive evaluation on four benchmark datasets with five LLMs, we show that knowledge aggregation not only improves claim verification but also reveals differences in source-specific reasoning. Our findings underscore the importance of embracing diversity, contradiction, and aggregation in evidence for building reliable and transparent claim verification systems.
Executive Summary
This article presents a novel approach to open-domain claim verification (ODCV) that leverages large language models (LLMs), multi-perspective evidence retrieval, and cross-source disagreement analysis. The proposed system collects evidence for both the original and negated forms of a claim, enabling it to capture supporting and contradicting information from diverse sources; the aggregated evidence is then used for claim verification with LLMs. The authors demonstrate the efficacy of their approach through extensive evaluation on four benchmark datasets with five LLMs, showing that knowledge aggregation improves claim verification and reveals differences in source-specific reasoning.
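The retrieval-and-aggregation step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the `negate` and `retrieve` helpers, the toy in-memory index, and all passage strings are hypothetical stand-ins for the paper's LLM-based claim negation and its Wikipedia/PubMed/Google retrievers.

```python
# Sketch of dual-perspective, multi-source evidence aggregation.
# All helpers and data below are illustrative assumptions.

def negate(claim: str) -> str:
    """Toy negation; the paper presumably uses an LLM for this step."""
    return f"It is not the case that {claim}"

def retrieve(query: str, source: str) -> list[str]:
    """Stand-in for a real retriever over Wikipedia, PubMed, or Google."""
    toy_index = {
        "wikipedia": ["Evidence A", "Evidence B"],
        "pubmed": ["Evidence B", "Evidence C"],
        "google": ["Evidence C", "Evidence D"],
    }
    return toy_index.get(source, [])

def aggregate_evidence(claim: str, sources: list[str]) -> list[str]:
    """Collect evidence for both the claim and its negation from every
    source, then deduplicate into one unified evidence pool."""
    seen, merged = set(), []
    for query in (claim, negate(claim)):
        for source in sources:
            for passage in retrieve(query, source):
                if passage not in seen:  # dedupe across sources and queries
                    seen.add(passage)
                    merged.append(passage)
    return merged

evidence = aggregate_evidence("X causes Y", ["wikipedia", "pubmed", "google"])
print(evidence)  # unified, deduplicated evidence from all three sources
```

The deduplicated pool would then be passed to an LLM verifier as context; the paper additionally filters evidence for relevance before aggregation, which this sketch omits.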
Key Points
- ▸ The proposed system addresses limitations of existing claim verification systems by leveraging multi-perspective evidence retrieval and cross-source disagreement analysis.
- ▸ Retrieving evidence for both the original and negated forms of a claim enables the system to capture supporting and contradicting information from diverse sources (Wikipedia, PubMed, and Google).
- ▸ The authors demonstrate the efficacy of their approach through extensive evaluation on four benchmark datasets with five LLMs.
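The cross-source disagreement analysis mentioned above could be quantified as in the sketch below. The metric here (mean absolute pairwise difference of per-source confidence scores) is an illustrative assumption; the paper's exact disagreement measure is not specified in the abstract.

```python
from itertools import combinations

def disagreement(confidences: dict[str, float]) -> float:
    """Mean absolute pairwise difference between per-source confidence
    scores (each in [0, 1]) that a claim is supported. 0.0 means all
    sources agree; larger values indicate more inter-source conflict."""
    pairs = list(combinations(confidences.values(), 2))
    return sum(abs(a - b) for a, b in pairs) / len(pairs)

# Hypothetical per-source confidence scores for one claim.
scores = {"wikipedia": 0.9, "pubmed": 0.4, "google": 0.8}
print(round(disagreement(scores), 3))
```

Per-claim scores like this could feed the kind of disagreement visualization the authors describe, flagging claims on which sources conflict.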
Merits
Strength
The proposed system provides a novel approach to ODCV that addresses limitations of existing systems by leveraging multi-perspective evidence retrieval and cross-source disagreement analysis.
Strength
Retrieving evidence for both the original and negated forms of a claim enables the system to capture supporting and contradicting information from diverse sources, improving the accuracy and reliability of claim verification.
Demerits
Limitation
The proposed system relies on the availability of diverse sources, which may not always be feasible in real-world scenarios.
Limitation
The evaluation of the proposed system is limited to four benchmark datasets, which may not be representative of all possible scenarios.
Expert Commentary
The proposed system presents a novel and promising approach to ODCV, leveraging the strengths of LLMs and multi-perspective evidence retrieval. However, the system's reliance on diverse sources and the limited evaluation on benchmark datasets are notable limitations. To further develop this approach, it would be beneficial to investigate the use of additional sources and to evaluate the system on a broader range of datasets. Additionally, the application of this approach in real-world scenarios, such as fact-checking and claim verification in social media and online news sources, would be a valuable next step.
Recommendations
- ✓ Investigate the use of additional sources to further improve the accuracy and reliability of claim verification.
- ✓ Evaluate the proposed system on a broader range of datasets to ensure its efficacy in diverse scenarios.