
ICE: Intervention-Consistent Explanation Evaluation with Statistical Grounding for LLMs


Abhinaba Basu, Pavan Chakraborty

arXiv:2603.18579v1

Abstract: Evaluating whether explanations faithfully reflect a model's reasoning remains an open problem. Existing benchmarks use single interventions without statistical testing, making it impossible to distinguish genuine faithfulness from chance-level performance. We introduce ICE (Intervention-Consistent Explanation), a framework that compares explanations against matched random baselines via randomization tests under multiple intervention operators, yielding win rates with confidence intervals. Evaluating 7 LLMs across 4 English tasks, 6 non-English languages, and 2 attribution methods, we find that faithfulness is operator-dependent: operator gaps reach up to 44 percentage points, with deletion typically inflating estimates on short text but the pattern reversing on long text, suggesting that faithfulness should be interpreted comparatively across intervention operators rather than as a single score. Randomized baselines reveal anti-faithfulness in one-third of configurations, and faithfulness shows zero correlation with human plausibility (|r| < 0.04). Multilingual evaluation reveals dramatic model-language interactions not explained by tokenization alone. We release the ICE framework and ICEBench benchmark.

Executive Summary

This study, ICE: Intervention-Consistent Explanation Evaluation with Statistical Grounding for LLMs, introduces a framework for evaluating how faithfully large language model (LLM) explanations reflect a model's actual reasoning. ICE compares explanations against matched random baselines via randomization tests under multiple intervention operators, yielding win rates with confidence intervals rather than single unanchored scores. The authors find that faithfulness is operator-dependent, with gaps of up to 44 percentage points between operators, and that one-third of configurations are anti-faithful, performing worse than random baselines. Faithfulness also shows essentially zero correlation with human plausibility (|r| < 0.04), and multilingual evaluation reveals strong model-language interactions that tokenization alone does not explain. The released ICE framework and ICEBench benchmark enable more statistically grounded evaluation of LLM explanations.
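The core comparison described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' released API: the names (`score`, `win_rate`) and the toy scoring function are assumptions, and a real evaluation would query a model's confidence instead. Each example's explanation-guided deletion is compared against size-matched random deletions, and a "win" is counted when the attributed tokens hurt the score more than random tokens do on average.

```python
# Hedged sketch of an intervention-consistency check (illustrative names).
import random

def score(tokens):
    # Stand-in for a model's confidence in its original prediction.
    # Toy model: relies entirely on the tokens "good" and "great".
    return sum(t in {"good", "great"} for t in tokens) / max(len(tokens), 1)

def delete(tokens, idxs):
    # Deletion operator: drop the tokens at the given positions.
    return [t for i, t in enumerate(tokens) if i not in idxs]

def win_rate(examples, attributions, k=2, trials=20, seed=0):
    """Fraction of examples where deleting the top-k attributed tokens
    hurts the score more than a size-matched random deletion does."""
    rng = random.Random(seed)
    wins = 0
    for tokens, attr in zip(examples, attributions):
        top_k = set(sorted(range(len(tokens)), key=lambda i: -attr[i])[:k])
        drop_attr = score(tokens) - score(delete(tokens, top_k))
        # Matched random baseline: same deletion budget, random positions.
        rand_drops = [
            score(tokens) - score(delete(tokens, set(rng.sample(range(len(tokens)), k))))
            for _ in range(trials)
        ]
        wins += drop_attr > sum(rand_drops) / trials
    return wins / len(examples)
```

Under this sketch, an attribution that correctly highlights the tokens the toy model depends on achieves a high win rate, while one that highlights irrelevant tokens can fall below chance, which is the anti-faithfulness pattern the paper reports.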

Key Points

  • ICE introduces a novel framework for evaluating LLM explanations
  • Faithfulness is operator-dependent and anti-faithfulness is prevalent in one-third of configurations
  • Model-language interactions are significant and not explained by tokenization alone

Merits

Strengths in Methodology

The use of randomization tests and multiple intervention operators yields statistically grounded faithfulness estimates rather than single-intervention point scores, making it possible to distinguish genuine faithfulness from chance-level performance. The release of the ICE framework and ICEBench benchmark is a further valuable contribution to the field.
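As a concrete example of the statistical grounding, one standard way to attach a confidence interval to a win rate is the Wilson score interval for a binomial proportion. The paper does not specify its exact interval construction, so this is an assumed, illustrative choice:

```python
# Wilson score interval for a binomial win rate (z=1.96 gives ~95% coverage).
import math

def wilson_ci(wins, n, z=1.96):
    p = wins / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return center - half, center + half
```

A win rate whose lower bound stays above 0.5 indicates above-chance faithfulness; an upper bound below 0.5 would flag anti-faithfulness.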

Demerits

Limitations in Generalizability

The results may not generalize beyond the seven LLMs, four English tasks, six non-English languages, and two attribution methods evaluated, and the operator-dependent findings could shift under other intervention operators or text lengths.

Expert Commentary

The introduction of the ICE framework and ICEBench benchmark is a meaningful contribution, replacing single-intervention point estimates with statistically grounded comparisons against matched random baselines. The finding that faithfulness is uncorrelated with human plausibility (|r| < 0.04) is particularly consequential: plausible-looking explanations cannot be assumed faithful. While the specific set of models and attribution methods limits how far the findings extend, the operator-dependence result implies that any single faithfulness score should be treated with caution, and the multilingual results underscore the need for evaluation methods that account for model-language interactions rather than tokenization effects alone.

Recommendations

  • Researchers and developers should adopt the ICE framework and ICEBench benchmark for evaluating LLM explanations.
  • Evaluation methods for LLMs should account for operator-dependent faithfulness and test for anti-faithfulness against random baselines, rather than reporting a single score, to improve the transparency and accountability of AI systems.
