A systematic literature review of machine learning methods in predicting court decisions
Envisaging legal cases’ outcomes can assist the judicial decision-making process. Prediction is possible in various cases, such as predicting the outcome of construction litigation, crime-related cases, parental rights, worker types, divorces, and tax law. The machine learning methods can function...
Analysis of the academic article for Family Law practice area relevance: The article highlights the potential of machine learning methods in predicting court decisions in various areas, including parental rights, divorces, and worker types. Key legal developments include the increasing use of artificial intelligence in the judicial decision-making process and the potential for machine learning methods to function as support tools in the legal system. Research findings suggest that various machine learning methods can achieve acceptable accuracy rates (over 70%) in predicting court decisions, but improvements can be made in predicting different types of judicial decisions. Policy signals: The article implies that the use of machine learning methods in predicting court decisions may become more prevalent in the legal system, potentially changing the way judges and lawyers approach decision-making. However, the article also highlights the need for further research and development to improve the accuracy and applicability of these methods in different areas of law, including family law.
The article's findings on machine learning in court-decision prediction have significant implications for Family Law practice, particularly in jurisdictions where technology is increasingly integrated into the judicial process. In the United States, AI-powered tools in family law are still nascent, but courts are beginning to explore their potential in areas such as child custody determinations and divorce settlement predictions. South Korea, by contrast, has been at the forefront of using AI in the legal system, with some courts utilizing machine learning algorithms to predict outcomes in family law cases. Other jurisdictions, including the United Kingdom and Australia, are incorporating AI-powered tools into their family law systems, while many are taking a more cautious approach. Concerns around bias, transparency, and accountability in AI decision-making, however, remain a challenge across jurisdictions. The finding that machine learning methods can exceed 70% accuracy in predicting court decisions highlights the potential benefits of AI in family law, but also underscores the need for further research and development to address these concerns.
As a Child Custody & Parental Rights Expert, I analyze the article's implications for family law practitioners. The study suggests that machine learning methods can predict court decisions, including those related to parental rights, with acceptable accuracy (over 70%). This finding matters for practitioners because such methods could support decision-making in complex custody cases. Notably, the findings align with the "best-interest-of-the-child" standard, a cornerstone of child custody law (In re Marriage of Buzzard, 147 Cal. App. 3d 684, 195 Cal. Rptr. 323 (1983)): machine learning could help inform that standard by analyzing large datasets and identifying patterns relevant to decision-making. For custody arrangements, the findings suggest that machine learning could support more effective and efficient custody evaluation tools, for example by analyzing large datasets of custody cases to identify the factors most predictive of positive outcomes for children. On the regulatory side, the findings may bear on new regulations or guidelines for the use of artificial intelligence in the legal system; the American Bar Association's (ABA) Model Rules of Professional Conduct, for instance, may need to be updated to address the use of machine learning methods in the practice of law.
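The pattern-identification idea discussed above can be sketched as a toy bag-of-words logistic regression over case summaries. Everything below is hypothetical: the vocabulary, the four "case" strings, and the granted/denied labels are invented for illustration, not drawn from any real custody dataset or from the reviewed study.

```python
import numpy as np

# Toy sketch only: vocabulary, case summaries, and labels are invented.
VOCAB = ["stable", "employed", "caregiver", "neglect", "abuse", "supervised"]

def featurize(text):
    # Bag-of-words counts over the toy vocabulary.
    words = text.split()
    return np.array([words.count(w) for w in VOCAB], dtype=float)

cases = [
    ("father employed stable housing", 1),         # 1 = petition granted
    ("history of neglect supervised contact", 0),  # 0 = petition denied
    ("mother primary caregiver stable income", 1),
    ("substance abuse supervised visitation", 0),
]
X = np.stack([featurize(text) for text, _ in cases])
y = np.array([label for _, label in cases], dtype=float)

# Plain gradient-descent logistic regression; the learned weights show
# which (toy) features push a prediction toward "granted" vs "denied".
w = np.zeros(len(VOCAB))
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.5 * X.T @ (p - y) / len(y)

pred = 1.0 / (1.0 + np.exp(-featurize("employed stable caregiver") @ w))
```

In a real study, the interest would lie less in the prediction itself than in inspecting `w` to see which factors the model associates with each outcome, which is the "identifying patterns" step the analysis refers to.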
Improving Clinical Trial Recruitment using Clinical Narratives and Large Language Models
arXiv:2604.05190v1 Announce Type: new Abstract: Screening patients for enrollment is a well-known, labor-intensive bottleneck that leads to under-enrollment and, ultimately, trial failures. Recent breakthroughs in large language models (LLMs) offer a promising opportunity to use artificial intelligence to improve screening....
Towards the AI Historian: Agentic Information Extraction from Primary Sources
arXiv:2604.03553v1 Announce Type: new Abstract: AI is supporting, accelerating, and automating scientific discovery across a diverse set of fields. However, AI adoption in historical research remains limited due to the lack of solutions designed for historians. In this technical progress...
Improving Model Performance by Adapting the KGE Metric to Account for System Non-Stationarity
arXiv:2604.03906v1 Announce Type: new Abstract: Geoscientific systems tend to be characterized by pronounced temporal non-stationarity, arising from seasonal and climatic variability in hydrometeorological drivers, and from natural and anthropogenic changes to land use and cover. As has been pointed out,...
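The KGE in the title is the Kling-Gupta Efficiency (Gupta et al., 2009), a standard hydrological skill score combining correlation, variability ratio, and bias ratio; the paper's adaptation for non-stationarity is not shown here, only the baseline metric as a minimal sketch:

```python
import numpy as np

def kge(sim, obs):
    """Kling-Gupta Efficiency: 1 - sqrt((r-1)^2 + (alpha-1)^2 + (beta-1)^2)."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    r = np.corrcoef(sim, obs)[0, 1]    # linear correlation
    alpha = np.std(sim) / np.std(obs)  # variability ratio
    beta = np.mean(sim) / np.mean(obs) # bias ratio
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)
```

A perfect simulation scores 1; a simulation with the right timing but doubled magnitude (r = 1, alpha = beta = 2) scores 1 − √2 ≈ −0.41, illustrating why the three components are reported separately in practice.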
Understanding the Challenges in Iterative Generative Optimization with LLMs
arXiv:2603.23994v1 Announce Type: new Abstract: Generative optimization uses large language models (LLMs) to iteratively improve artifacts (such as code, workflows or prompts) using execution feedback. It is a promising approach to building self-improving agents, yet in practice remains brittle: despite...
Empirical Comparison of Agent Communication Protocols for Task Orchestration
arXiv:2603.22823v1 Announce Type: new Abstract: Context. Nowadays, artificial intelligence agent systems are transforming from single-tool interactions to complex multi-agent orchestrations. As a result, two competing communication protocols have emerged: a tool integration protocol that standardizes how agents invoke external tools,...
Why AI-Generated Text Detection Fails: Evidence from Explainable AI Beyond Benchmark Accuracy
arXiv:2603.23146v1 Announce Type: new Abstract: The widespread adoption of Large Language Models (LLMs) has made the detection of AI-Generated text a pressing and complex challenge. Although many detection systems report high benchmark accuracy, their reliability in real-world settings remains uncertain,...
A Multi-Modal CNN-LSTM Framework with Multi-Head Attention and Focal Loss for Real-Time Elderly Fall Detection
arXiv:2603.22313v1 Announce Type: new Abstract: The increasing global aging population has intensified the demand for reliable health monitoring systems, particularly those capable of detecting critical events such as falls among elderly individuals. Traditional fall detection approaches relying on single-modality acceleration...
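Focal loss, named in the title, addresses class imbalance (falls are rare relative to normal activity) by down-weighting well-classified examples. A minimal binary sketch; the `alpha` and `gamma` values are the common defaults from the focal loss literature, not taken from this paper:

```python
import numpy as np

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Binary focal loss: -alpha_t * (1 - p_t)^gamma * log(p_t).

    p: predicted probability of the positive class; y: 0/1 label.
    """
    p_t = np.where(y == 1, p, 1 - p)          # prob. of the true class
    a_t = np.where(y == 1, alpha, 1 - alpha)  # class-balancing weight
    return -a_t * (1 - p_t) ** gamma * np.log(p_t)

easy = focal_loss(0.9, 1)  # confident correct prediction -> tiny loss
hard = focal_loss(0.1, 1)  # confident wrong prediction -> large loss
```

The `(1 - p_t)^gamma` factor is what lets training focus on the rare, hard fall events instead of the abundant easy negatives.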
AutoMOOSE: An Agentic AI for Autonomous Phase-Field Simulation
arXiv:2603.20986v1 Announce Type: new Abstract: Multiphysics simulation frameworks such as MOOSE provide rigorous engines for phase-field materials modeling, yet adoption is constrained by the expertise required to construct valid input files, coordinate parameter sweeps, diagnose failures, and extract quantitative results....
gUFO: A Gentle Foundational Ontology for Semantic Web Knowledge Graphs
arXiv:2603.20948v1 Announce Type: new Abstract: gUFO is a lightweight implementation of the Unified Foundational Ontology (UFO) suitable for Semantic Web OWL 2 DL applications. UFO is a mature foundational ontology with a rich axiomatization and that has been employed in...
MANAR: Memory-augmented Attention with Navigational Abstract Conceptual Representation
arXiv:2603.18676v1 Announce Type: new Abstract: MANAR (Memory-augmented Attention with Navigational Abstract Conceptual Representation) is a contextualization layer that generalizes standard multi-head attention (MHA) by instantiating the principles of Global Workspace Theory (GWT). While MHA enables unconstrained all-to-all communication, it lacks the functional bottleneck...
How Confident Is the First Token? An Uncertainty-Calibrated Prompt Optimization Framework for Large Language Model Classification and Understanding
arXiv:2603.18009v1 Announce Type: new Abstract: With the widespread adoption of large language models (LLMs) in natural language processing, prompt engineering and retrieval-augmented generation (RAG) have become mainstream to enhance LLMs' performance on complex tasks. However, LLMs generate outputs autoregressively, leading...
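The first-token confidence the title asks about can be read directly off the softmax of the first generated token's logits, which is a common uncertainty proxy for LLM classification. A minimal sketch; the logits array is illustrative, not from the paper:

```python
import numpy as np

def first_token_confidence(logits):
    """Max softmax probability over the vocabulary for the first token."""
    z = logits - logits.max()          # shift for numerical stability
    p = np.exp(z) / np.exp(z).sum()
    return p.max()                     # mass on the argmax token

peaked = first_token_confidence(np.array([10.0, 0.0, 0.0]))  # near 1.0
flat = first_token_confidence(np.array([1.0, 1.0, 1.0]))     # 1/3
```

A calibration-aware prompt optimizer of the kind the abstract describes would compare such scores against empirical accuracy and prefer prompts whose confidence tracks correctness.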
Implicit Grading Bias in Large Language Models: How Writing Style Affects Automated Assessment Across Math, Programming, and Essay Tasks
arXiv:2603.18765v1 Announce Type: new Abstract: As large language models (LLMs) are increasingly deployed as automated graders in educational settings, concerns about fairness and bias in their evaluations have become critical. This study investigates whether LLMs exhibit implicit grading bias based...
DreamReader: An Interpretability Toolkit for Text-to-Image Models
arXiv:2603.13299v1 Announce Type: new Abstract: Despite the rapid adoption of text-to-image (T2I) diffusion models, causal and representation-level analysis remains fragmented and largely limited to isolated probing techniques. To address this gap, we introduce DreamReader: a unified framework that formalizes diffusion...
Evaluating Large Language Models for Gait Classification Using Text-Encoded Kinematic Waveforms
arXiv:2603.13317v1 Announce Type: new Abstract: Background: Machine learning (ML) enhances gait analysis but often lacks the level of interpretability desired for clinical adoption. Large Language Models (LLMs) may offer explanatory capabilities and confidence-aware outputs when applied to structured kinematic data....
Maximum Entropy Exploration Without the Rollouts
arXiv:2603.12325v1 Announce Type: cross Abstract: Efficient exploration remains a central challenge in reinforcement learning, serving as a useful pretraining objective for data collection, particularly when an external reward function is unavailable. A principled formulation of the exploration problem is to...
Long-form RewardBench: Evaluating Reward Models for Long-form Generation
arXiv:2603.12963v1 Announce Type: new Abstract: The widespread adoption of reinforcement learning-based alignment highlights the growing importance of reward models. Various benchmarks have been built to evaluate reward models in various domains and scenarios. However, a significant gap remains in assessing...
Generalist Large Language Models for Molecular Property Prediction: Distilling Knowledge from Specialist Models
arXiv:2603.12344v1 Announce Type: new Abstract: Molecular Property Prediction (MPP) is a central task in drug discovery. While Large Language Models (LLMs) show promise as generalist models for MPP, their current performance remains below the threshold for practical adoption. We propose...
LLM-Augmented Digital Twin for Policy Evaluation in Short-Video Platforms
arXiv:2603.11333v1 Announce Type: new Abstract: Short-video platforms are closed-loop, human-in-the-loop ecosystems where platform policy, creator incentives, and user behavior co-evolve. This feedback structure makes counterfactual policy evaluation difficult in production, especially for long-horizon and distributional outcomes. The challenge is amplified...
The Unlearning Mirage: A Dynamic Framework for Evaluating LLM Unlearning
arXiv:2603.11266v1 Announce Type: new Abstract: Unlearning in Large Language Models (LLMs) aims to enhance safety, mitigate biases, and comply with legal mandates, such as the right to be forgotten. However, existing unlearning methods are brittle: minor query modifications, such as...
Examining Users' Behavioural Intention to Use OpenClaw Through the Cognition–Affect–Conation Framework
arXiv:2603.11455v1 Announce Type: new Abstract: This study examines users' behavioural intention to use OpenClaw through the Cognition–Affect–Conation (CAC) framework. The research investigates how cognitive perceptions of the system influence affective responses and subsequently shape behavioural intention. Enabling factors include perceived...
DT-BEHRT: Disease Trajectory-aware Transformer for Interpretable Patient Representation Learning
arXiv:2603.10180v1 Announce Type: new Abstract: The growing adoption of electronic health record (EHR) systems has provided unprecedented opportunities for predictive modeling to guide clinical decision making. Structured EHRs contain longitudinal observations of patients across hospital visits, where each visit is...
MASEval: Extending Multi-Agent Evaluation from Models to Systems
arXiv:2603.08835v1 Announce Type: new Abstract: The rapid adoption of LLM-based agentic systems has produced a rich ecosystem of frameworks (smolagents, LangGraph, AutoGen, CAMEL, LlamaIndex, i.a.). Yet existing benchmarks are model-centric: they fix the agentic setup and do not compare other...
Khatri-Rao Clustering for Data Summarization
arXiv:2603.06602v1 Announce Type: new Abstract: As datasets continue to grow in size and complexity, finding succinct yet accurate data summaries poses a key challenge. Centroid-based clustering, a widely adopted approach to address this challenge, finds informative summaries of datasets in...
One step further with Monte-Carlo sampler to guide diffusion better
arXiv:2603.06685v1 Announce Type: new Abstract: Stochastic differential equation (SDE)-based generative models have achieved substantial progress in conditional generation via training-free differentiable loss-guided approaches. However, existing methodologies utilizing posterior sampling typically confront a substantial estimation error, which results in inaccu-...
The Rise of AI in Weather and Climate Information and its Impact on Global Inequality
arXiv:2603.05710v1 Announce Type: cross Abstract: The rapid adoption of AI in Earth system science promises unprecedented speed and fidelity in the generation of climate information. However, this technological prowess rests on a fragile and unequal foundation: the current trajectory of...
Preventing Learning Stagnation in PPO by Scaling to 1 Million Parallel Environments
arXiv:2603.06009v1 Announce Type: new Abstract: Plateaus, where an agent's performance stagnates at a suboptimal level, are a common problem in deep on-policy RL. Focusing on PPO due to its widespread adoption, we show that plateaus in certain regimes arise not...
Governance in Ethical, Trustworthy AI Systems: Extension of the ECCOLA Method for AI Ethics Governance Using GARP
Background: The continuous development of artificial intelligence (AI) and increasing rate of adoption by software startups calls for governance measures to be implemented at the design and development stages to help mitigate AI governance concerns. Most AI ethical design and...
The Role Of Standards In The Regulation Of Artificial Intelligence In Uzbekistan
The article addresses the issues of artificial intelligence standardization in the Republic of Uzbekistan within the framework of the national Strategy for the Development of AI Technologies until 2030. The relevance of the topic is driven by the implementation of...
Insurers as Contract Influencers - Minnesota Law Review
By DAVID A. HOFFMAN & RICK SWEDLOFF. Full Text. Contract boilerplate degrading consumers' litigation options is omnipresent, but a little mysterious. And that's not just because no one reads it. We know that terms mandating arbitration, exculpating liability, requiring individualized...