AI as Teammate or Tool? A Review of Human-AI Interaction in Decision Support

arXiv:2602.15865v1 Announce Type: cross Abstract: The integration of Artificial Intelligence (AI) necessitates determining whether systems function as tools or collaborative teammates. In this study, by synthesizing Human-AI Interaction (HAI) literature, we analyze this distinction across four dimensions: interaction design, trust calibration, collaborative frameworks and healthcare applications. Our analysis reveals that static interfaces and miscalibrated trust limit AI efficacy. Performance hinges on aligning transparency with cognitive workflows, yet a fluency trap often inflates trust without improving decision-making. Consequently, an overemphasis on explainability leaves systems largely passive. Our findings show that current AI systems remain largely passive due to an overreliance on explainability-centric designs and that transitioning AI to an active teammate requires adaptive, context-aware interactions that support shared mental models and the dynamic negotiation of authority between humans and AI.

Executive Summary

The article 'AI as Teammate or Tool? A Review of Human-AI Interaction in Decision Support' explores the evolving role of AI in decision-making processes, focusing on whether AI systems function as tools or collaborative teammates. The study synthesizes Human-AI Interaction (HAI) literature across four dimensions: interaction design, trust calibration, collaborative frameworks, and healthcare applications. It highlights that static interfaces and miscalibrated trust limit AI efficacy, emphasizing the need for adaptive, context-aware interactions that support shared mental models and dynamic authority negotiation between humans and AI. The findings suggest that current AI systems are largely passive due to an overreliance on explainability-centric designs, and advocate for a shift towards more active, collaborative roles for AI.

Key Points

  • AI systems are often passive due to static interfaces and miscalibrated trust.
  • Overemphasis on explainability limits AI's potential as a collaborative teammate.
  • Adaptive, context-aware interactions are crucial for effective Human-AI collaboration.
  • Dynamic negotiation of authority between humans and AI is essential for improved decision-making.
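The idea of dynamically negotiating authority can be made concrete with a small sketch. The policy below is purely illustrative (the `Decision` type, thresholds, and the three routing modes are assumptions, not from the article): the AI defers, suggests, or acts depending on its stated confidence and the stakes of the decision, rather than behaving as a uniformly passive explainer.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    action: str        # the action the AI proposes
    confidence: float  # model confidence in [0, 1]

def negotiate_authority(decision: Decision, stakes: str) -> str:
    """Hypothetical routing policy for human-AI authority.

    High-stakes or low-confidence decisions are deferred to the human;
    mid-confidence decisions become suggestions the human confirms;
    only high-confidence, low-stakes decisions are acted on directly.
    """
    if stakes == "high" or decision.confidence < 0.6:
        return "defer"    # human decides; AI supplies evidence
    if decision.confidence < 0.9:
        return "suggest"  # AI recommends; human confirms
    return "act"          # AI acts; human retains override

# A confident call is still deferred when the stakes are high
print(negotiate_authority(Decision("approve", 0.95), "high"))  # defer
```

In a real system the thresholds would themselves be calibrated against observed reliability, which is exactly the trust-calibration problem the review highlights.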

Merits

Comprehensive Literature Review

The article provides a thorough synthesis of HAI literature, covering multiple dimensions of Human-AI interaction, which offers a holistic view of the current state of AI in decision support.

Practical Insights

The study offers actionable insights into improving AI systems by emphasizing the need for adaptive interactions and dynamic authority negotiation, which can be directly applied to real-world scenarios.

Interdisciplinary Approach

The analysis spans various fields, including healthcare applications, demonstrating the broad relevance of the findings across different domains.

Demerits

Limited Empirical Data

While the article synthesizes existing literature, it lacks original empirical data, which could strengthen the validity and applicability of the findings.

Generalization Challenges

The findings may not be universally applicable, as the effectiveness of adaptive interactions and dynamic authority negotiation could vary across different contexts and industries.

Theoretical Focus

The article is heavily theoretical, which may limit its immediate practical impact, as it does not provide specific guidelines or frameworks for implementing the suggested changes.

Expert Commentary

The article 'AI as Teammate or Tool?' provides a timely and insightful exploration of the evolving role of AI in decision support. The distinction between AI as a tool and as a teammate is crucial, as it directly impacts the effectiveness and acceptance of AI systems in various fields. The study's emphasis on adaptive interactions and dynamic authority negotiation is particularly noteworthy, as it addresses a significant gap in current AI design.

However, the lack of empirical data and the theoretical nature of the analysis limit the immediate practical applicability of the findings. Future research should focus on conducting empirical studies to validate the proposed frameworks and provide specific guidelines for implementing adaptive, context-aware AI systems. Additionally, the article's interdisciplinary approach highlights the need for collaboration between AI researchers, ethicists, and practitioners to address the complex challenges of Human-AI interaction.

Overall, the article makes a valuable contribution to the ongoing discourse on AI's role in decision-making and sets a solid foundation for further exploration in this critical area.

Recommendations

  • Conduct empirical studies to validate the proposed frameworks for adaptive, context-aware interactions and dynamic authority negotiation.
  • Develop specific guidelines and best practices for implementing the suggested changes in AI design, tailored to different industries and contexts.