Prediction, persuasion, and the jurisprudence of behaviourism
There is a growing literature critiquing the unreflective application of big data, predictive analytics, artificial intelligence, and machine-learning techniques to social problems. Such methods may reflect biases rather than reasoned decision making. They may also leave those affected by automated sorting and categorizing unable to understand the basis of the decisions affecting them. Despite these problems, machine-learning experts are feeding judicial opinions to algorithms to predict how future cases will be decided. We call the use of such predictive analytics in judicial contexts a jurisprudence of behaviourism, as it rests on a fundamentally Skinnerian model of cognition as a black-boxed transformation of inputs into outputs. In this model, persuasion is passé; what matters is prediction. After describing and critiquing a recent study that has advanced this jurisprudence of behaviourism, we question the value of such research. Widespread deployment of prediction models not based on the meaning of important precedents and facts may endanger core rule-of-law values.
Executive Summary
The article critiques the application of big data, predictive analytics, AI, and machine learning in legal contexts, particularly in predicting judicial outcomes. It argues that such methods, whose use in judicial contexts the authors term a 'jurisprudence of behaviourism,' risk undermining the rule of law by prioritizing prediction over reasoned decision-making and transparency. The authors analyze a recent study advancing this approach, questioning its value and its potential dangers to core legal values.
Key Points
- Critique of the unreflective application of predictive technologies in legal contexts.
- Introduction of the concept of a 'jurisprudence of behaviourism,' based on Skinnerian models of cognition.
- Analysis of a recent study that uses algorithms to predict judicial outcomes.
- Concerns about the impact on rule-of-law values and transparency.
- Questioning of the value and potential dangers of such predictive models.
Merits
Critical Analysis
The article provides a rigorous critique of the use of predictive technologies in legal contexts, highlighting potential biases and lack of transparency.
Conceptual Framework
The article introduces the concept of a 'jurisprudence of behaviourism,' a novel and insightful framework for understanding the implications of predictive analytics in law.
Practical Relevance
The discussion on the potential impact on rule-of-law values is highly relevant to current debates about the use of AI in legal decision-making.
Demerits
Lack of Empirical Evidence
The critique is largely theoretical and would benefit from empirical evidence or case studies to support the arguments.
Broad Generalizations
The article makes broad generalizations about the use of predictive technologies without fully acknowledging potential benefits or nuanced applications.
Limited Scope
The focus is primarily on judicial contexts, which may limit the applicability of the critique to other areas where predictive technologies are used.
Expert Commentary
The article presents a timely and critical analysis of the growing trend of applying predictive technologies in legal contexts. The concept of a 'jurisprudence of behaviourism' is particularly insightful, as it highlights the dangers of prioritizing prediction over reasoned decision-making. The critique is well reasoned and aligns with broader concerns about bias and transparency in AI. However, the article would benefit from more empirical evidence to support its arguments, and while the focus on judicial contexts is valuable, a broader discussion of the implications for other areas of law would enhance its relevance. Overall, the article makes a significant contribution to the debate on the use of predictive technologies in law and underscores the need for careful consideration of their potential impacts on the rule of law.
Recommendations
- Conduct empirical research to provide concrete evidence of the biases and lack of transparency in predictive legal technologies.
- Expand the analysis to areas of law beyond judicial contexts to provide a more comprehensive critique.