
LLM-Assisted Causal Structure Disambiguation and Factor Extraction for Legal Judgment Prediction


Yuzhi Liang, Lixiang Ma, Xinrong Zhu


March 16, 2026

arXiv:2603.11446v1. Abstract: Mainstream methods for Legal Judgment Prediction (LJP) based on Pre-trained Language Models (PLMs) heavily rely on the statistical correlation between case facts and judgment results. This paradigm lacks explicit modeling of legal constituent elements and underlying causal logic, making models prone to learning spurious correlations and suffering from poor robustness. While introducing causal inference can mitigate this issue, existing causal LJP methods face two critical bottlenecks in real-world legal texts: inaccurate legal factor extraction with severe noise, and significant uncertainty in causal structure discovery due to Markov equivalence under sparse features. To address these challenges, we propose an enhanced causal inference framework that integrates Large Language Model (LLM) priors with statistical causal discovery. First, we design a coarse-to-fine hybrid extraction mechanism combining statistical sampling and LLM semantic reasoning to accurately identify and purify standard legal constituent elements. Second, to resolve structural uncertainty, we introduce an LLM-assisted causal structure disambiguation mechanism. By utilizing the LLM as a constrained prior knowledge base, we conduct probabilistic evaluation and pruning on ambiguous causal directions to generate legally compliant candidate causal graphs. Finally, a causal-aware judgment prediction model is constructed by explicitly constraining text attention intensity via the generated causal graphs. Extensive experiments on multiple benchmark datasets, including LEVEN, QA, and CAIL, demonstrate that our proposed method significantly outperforms state-of-the-art baselines in both predictive accuracy and robustness, particularly in distinguishing confusing charges.
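
To make the disambiguation step concrete, here is a minimal sketch of how an LLM prior could orient the ambiguous (undirected) edges left over from a Markov equivalence class. The paper does not publish an implementation; `llm_direction_prob` below is a hypothetical, hard-coded stand-in for querying the constrained LLM prior, and the factor names and threshold are purely illustrative.

```python
# Sketch: pruning ambiguous edge directions in a Markov equivalence class
# using an external LLM prior. All names and values are illustrative.

def llm_direction_prob(cause: str, effect: str) -> float:
    """Hypothetical stand-in for the LLM prior: probability that `cause`
    legally precedes/produces `effect`. A real system would prompt an LLM
    with both factors and legal context."""
    # Toy prior: constituent elements cause charges, never the reverse.
    elements = {"illegal_possession", "intent_to_sell"}
    return 0.9 if cause in elements else 0.1

def disambiguate(undirected_edges, threshold=0.7):
    """Orient each ambiguous edge (u, v) whose LLM-scored direction clears
    `threshold`; edges the prior cannot resolve are pruned from the
    candidate causal graph."""
    oriented = []
    for u, v in undirected_edges:
        p_uv = llm_direction_prob(u, v)
        p_vu = llm_direction_prob(v, u)
        if p_uv >= threshold and p_uv > p_vu:
            oriented.append((u, v))
        elif p_vu >= threshold and p_vu > p_uv:
            oriented.append((v, u))
        # else: direction stays unresolved and the edge is dropped
    return oriented

edges = [("drug_sale_charge", "intent_to_sell"),
         ("illegal_possession", "possession_charge")]
print(disambiguate(edges))
```

The key design point, as described in the abstract, is that the LLM acts only as a constrained prior for scoring and pruning candidate directions, not as an unconstrained graph generator, which keeps the resulting candidate graphs legally compliant.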

Executive Summary

The paper proposes an enhanced causal inference framework for Legal Judgment Prediction (LJP) that combines Large Language Model (LLM) priors with statistical causal discovery. The framework targets two critical bottlenecks in real-world legal texts: noisy, inaccurate legal factor extraction, and uncertainty in causal structure discovery arising from Markov equivalence under sparse features. The proposed method outperforms state-of-the-art baselines in predictive accuracy and robustness, particularly in distinguishing confusing charges. The framework's design and experimental results demonstrate its potential to improve the reliability and effectiveness of LJP models.

Key Points

  • Integration of LLM priors with statistical causal discovery for LJP
  • Coarse-to-fine hybrid extraction mechanism for accurate legal factor extraction
  • LLM-assisted causal structure disambiguation mechanism for resolving structural uncertainty
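
The first key point, coarse-to-fine extraction, can be sketched as a two-stage pipeline: a cheap statistical pass mines recurring candidate phrases, and an LLM pass filters out noise. This is a rough illustration under stated assumptions, not the paper's implementation; `llm_is_legal_element` is a hypothetical placeholder for the LLM semantic-reasoning stage, and the phrase-per-clause input format is invented for the example.

```python
from collections import Counter

# Sketch of a coarse-to-fine legal factor extraction pass.
# The coarse stage is plain document-frequency mining; the fine stage
# stands in for the paper's LLM-based purification of candidates.

def coarse_candidates(case_texts, min_df=2):
    """Coarse stage: keep phrases that recur across cases
    (a simple form of statistical sampling)."""
    df = Counter()
    for text in case_texts:
        df.update({p.strip() for p in text.split(";")})
    return [p for p, c in df.items() if c >= min_df]

def llm_is_legal_element(phrase: str) -> bool:
    """Stub for the fine-grained LLM check; a real system would prompt an
    LLM to verify the phrase is a standard legal constituent element."""
    noise = {"defendant wore a red jacket"}  # frequent but legally irrelevant
    return phrase not in noise

def extract_factors(case_texts):
    return sorted(p for p in coarse_candidates(case_texts)
                  if llm_is_legal_element(p))

cases = [
    "sold narcotics; defendant wore a red jacket; intent to profit",
    "sold narcotics; intent to profit",
    "defendant wore a red jacket; possessed narcotics",
]
print(extract_factors(cases))
```

Note how the frequent-but-spurious phrase survives the statistical pass and is only removed by the semantic filter, which is exactly the failure mode (severe noise in factor extraction) the hybrid mechanism is meant to fix.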

Merits

Improved Predictive Accuracy

The proposed method demonstrates significant improvements in predictive accuracy and robustness over state-of-the-art baselines on the LEVEN, QA, and CAIL benchmarks, particularly in distinguishing confusing charges.

Enhanced Causal Inference

The framework's integration of LLM priors and statistical causal discovery enables more accurate modeling of legal constituent elements and underlying causal logic.
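
The abstract states that the final prediction model "explicitly constrains text attention intensity via the generated causal graphs." A minimal sketch of one way such a constraint could work is below, assuming per-token causal relevance weights derived from the graph; the log-space bonus, the `strength` parameter, and all numbers are illustrative assumptions, not the paper's formulation.

```python
import math

# Sketch: constraining attention intensity with causal relevance weights.
# `relevance` would come from the generated causal graph (e.g. 1.0 for
# tokens matching a causal parent of the charge, lower otherwise).

def softmax(xs):
    m = max(xs)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def causal_attention(scores, relevance, strength=2.0):
    """Add a log-space bonus proportional to causal relevance before the
    softmax, so causally grounded tokens receive more attention mass."""
    return softmax([s + strength * math.log(r)
                    for s, r in zip(scores, relevance)])

scores = [1.0, 1.0, 1.0]        # raw attention logits (uniform here)
relevance = [1.0, 0.5, 0.25]    # causal-graph-derived weights
print([round(w, 3) for w in causal_attention(scores, relevance)])
```

With uniform raw logits, the attention mass shifts toward the causally relevant token, which is the intended effect: the model is discouraged from attending to spuriously correlated but causally irrelevant text.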

Demerits

Computational Complexity

The method's reliance on LLM queries during both factor extraction and causal structure disambiguation, on top of statistical causal discovery, may increase computational cost and require significant resources.

Dependence on LLM Quality

The framework's performance is contingent upon the quality and accuracy of the LLM priors, which may be a limitation in certain scenarios.

Expert Commentary

The proposed framework represents a significant advancement in LJP research, as it addresses critical limitations in existing methods. The integration of LLM priors and statistical causal discovery enables more accurate modeling of legal constituent elements and underlying causal logic. However, further research is needed to evaluate the framework in diverse legal contexts and to address its limitations, such as computational cost and dependence on the quality of the LLM priors. The implications are far-reaching, with potential to improve the reliability and effectiveness of LJP models and to contribute to more transparent and explainable AI systems in the legal domain.

Recommendations

  • Further evaluation of the framework's performance in diverse legal contexts and datasets
  • Investigation into the potential applications of the proposed method in other domains, such as healthcare and finance
