
CARE: An Explainable Computational Framework for Assessing Client-Perceived Therapeutic Alliance Using Large Language Models


Anqi Li, Chenxiao Wang, Yu Lu, Renjun Xu, Lizhi Ma, Zhenzhong Lan

arXiv:2602.20648v1 Announce Type: new Abstract: Client perceptions of the therapeutic alliance are critical for counseling effectiveness. Accurately capturing these perceptions remains challenging, as traditional post-session questionnaires are burdensome and often delayed, while existing computational approaches produce coarse scores, lack interpretable rationales, and fail to model holistic session context. We present CARE, an LLM-based framework to automatically predict multi-dimensional alliance scores and generate interpretable rationales from counseling transcripts. Built on the CounselingWAI dataset and enriched with 9,516 expert-curated rationales, CARE is fine-tuned using rationale-augmented supervision with the LLaMA-3.1-8B-Instruct backbone. Experiments show that CARE outperforms leading LLMs and substantially reduces the gap between counselor evaluations and client-perceived alliance, achieving over 70% higher Pearson correlation with client ratings. Rationale-augmented supervision further improves predictive accuracy. CARE also produces high-quality, contextually grounded rationales, validated by both automatic and human evaluations. Applied to real-world Chinese online counseling sessions, CARE uncovers common alliance-building challenges, illustrates how interaction patterns shape alliance development, and provides actionable insights, demonstrating its potential as an AI-assisted tool for supporting mental health care.

Executive Summary

This study presents CARE, an explainable computational framework that leverages large language models (LLMs) to assess client-perceived therapeutic alliance from counseling transcripts. CARE outperforms leading LLMs, achieving over 70% higher Pearson correlation with client ratings, and substantially narrows the gap between counselor evaluations and clients' own perceptions of the alliance. The framework also generates interpretable rationales, validated by both automatic and human evaluations. Applied to real-world Chinese online counseling sessions, CARE surfaces common alliance-building challenges and shows how interaction patterns shape alliance development, offering actionable insights and demonstrating its potential as an AI-assisted tool for mental health care.

Key Points

  • CARE is an LLM-based framework for assessing client-perceived therapeutic alliance using counseling transcripts
  • CARE outperforms existing LLMs in predicting multi-dimensional alliance scores and generating interpretable rationales
  • Rationale-augmented supervision enhances predictive accuracy and improves rationale quality
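The headline evaluation metric above is Pearson correlation between predicted alliance scores and client self-report ratings. As an illustrative sketch (with made-up scores, not the paper's data), the comparison between a model's predictions and a baseline such as counselor self-ratings can be computed like this:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-session alliance scores on a 1-7 WAI-style scale.
# These numbers are illustrative only, not from the CARE study.
client_ratings = [5.2, 4.1, 6.3, 3.8, 5.9]   # clients' post-session ratings
model_scores   = [5.0, 4.4, 6.1, 4.0, 5.5]   # a model's predictions
counselor_eval = [4.0, 5.1, 4.8, 4.9, 4.2]   # counselors' own evaluations

r_model = pearson(client_ratings, model_scores)
r_counselor = pearson(client_ratings, counselor_eval)
print(f"model r = {r_model:.3f}, counselor r = {r_counselor:.3f}")
```

A higher r for the model than for counselor evaluations is what the paper's "reduces the gap between counselor evaluations and client-perceived alliance" claim amounts to in metric terms.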

Merits

Advancements in Explainability

CARE's ability to generate interpretable rationales for client-perceived alliance predictions marks a significant leap in explainability, making it a valuable tool for mental health care professionals.

Improved Accuracy

CARE's enhanced predictive accuracy, particularly under rationale-augmented supervision, demonstrates its potential to make alliance assessment more reliable and closer to clients' actual perceptions.

Demerits

Data Availability Limitations

The study relies on a specific dataset (CounselingWAI) and may not generalize to other counseling contexts or populations.

Dependence on LLMs

CARE's performance is contingent on the capabilities and limitations of its underlying LLM backbone (LLaMA-3.1-8B-Instruct), which may introduce bias and variability.

Expert Commentary

While CARE demonstrates remarkable promise in assessing client-perceived therapeutic alliance, its reliance on LLMs and specific dataset limitations must be carefully considered. Furthermore, the study's focus on online counseling sessions raises questions about the broader applicability of CARE in diverse counseling contexts. Nevertheless, the framework's ability to generate interpretable rationales and improve predictive accuracy marks a significant advancement in explainable AI for mental health care. As the field continues to evolve, it is essential to prioritize the development of robust, explainable AI tools like CARE, which have the potential to enhance the effectiveness of therapeutic alliances and improve mental health care outcomes.

Recommendations

  • Future research should investigate the generalizability of CARE across diverse counseling contexts and populations.
  • The development of more robust, explainable AI frameworks like CARE requires careful consideration of potential biases and variability, as well as the integration of multiple data sources and perspectives.
