
AI Hallucination from Students' Perspective: A Thematic Analysis

Abdulhadi Shoufan, Ahmad Azmi Abdelhamid Esmaeil

arXiv:2602.17671v1 · Announce Type: cross

Abstract: As students increasingly rely on large language models, hallucinations pose a growing threat to learning. To mitigate this, AI literacy must expand beyond prompt engineering to address how students should detect and respond to LLM hallucinations. To support this, we need to understand how students experience hallucinations, how they detect them, and why they believe they occur. To investigate these questions, we asked university students three open-ended questions about their experiences with AI hallucinations, their detection strategies, and their mental models of why hallucinations occur. Sixty-three students responded to the survey. Thematic analysis of their responses revealed that reported hallucination issues primarily relate to incorrect or fabricated citations, false information, overconfident but misleading responses, poor adherence to prompts, persistence in incorrect answers, and sycophancy. To detect hallucinations, students rely either on intuitive judgment or on active verification strategies, such as cross-checking with external sources or re-prompting the model. Students' explanations for why hallucinations occur reflected several mental models, including notable misconceptions. Many described AI as a research engine that fabricates information when it cannot locate an answer in its "database." Others attributed hallucinations to issues with training data, inadequate prompting, or the model's inability to understand or verify information. These findings illuminate vulnerabilities in AI-supported learning and highlight the need for explicit instruction in verification protocols, accurate mental models of generative AI, and awareness of behaviors such as sycophancy and confident delivery that obscure inaccuracy. The study contributes empirical evidence for integrating hallucination awareness and mitigation into AI literacy curricula.

Executive Summary

This study examines university students' experiences with AI hallucinations, the strategies they use to detect them, and their mental models of why hallucinations occur. Thematic analysis of 63 open-ended survey responses shows that students encounter several kinds of hallucination, most notably incorrect or fabricated citations and overconfident but misleading responses, and that they detect them through either intuitive judgment or active verification strategies. The findings highlight the need for explicit instruction in verification protocols, accurate mental models of generative AI, and awareness of behaviors that obscure inaccuracy.
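As a concrete illustration of the cross-checking strategy (not taken from the paper), the minimal Python sketch below looks an AI-supplied citation's DOI up in the public Crossref REST API (https://api.crossref.org/works/<doi>). The function name verify_doi and the sample DOI are assumptions made for this example; a DOI that fails to resolve is a warning sign rather than proof of fabrication, since some valid DOIs are registered with agencies other than Crossref.

    import json
    import urllib.error
    import urllib.parse
    import urllib.request

    CROSSREF_API = "https://api.crossref.org/works/"

    def verify_doi(doi: str):
        """Look a DOI up on Crossref; return basic metadata, or None if unknown."""
        url = CROSSREF_API + urllib.parse.quote(doi)  # '/' in DOIs is kept as-is
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                record = json.load(resp)["message"]
        except urllib.error.HTTPError:
            # A 404 from Crossref is a red flag for a fabricated citation,
            # though not proof: some valid DOIs use other registration agencies.
            return None
        return {
            "title": (record.get("title") or ["<untitled>"])[0],
            "year": record.get("issued", {}).get("date-parts", [[None]])[0][0],
            "authors": [a.get("family", "?") for a in record.get("author", [])],
        }

    if __name__ == "__main__":
        # Hypothetical DOI of the kind an LLM might cite; substitute the real one.
        doi = "10.1234/example.2024.001"
        meta = verify_doi(doi)
        print(meta if meta else f"{doi}: not found on Crossref -- verify before citing")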

Key Points

  • Students encounter various types of AI hallucinations, including incorrect citations and overconfident responses
  • Students detect hallucinations through intuitive judgment or active verification strategies, such as cross-checking with external sources or re-prompting the model (see the sketch after this list)
  • Students' mental models of AI hallucinations often reflect misconceptions about how AI works
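The re-prompting strategy students describe can be framed as a simple self-consistency check: ask the same question several times and treat disagreement as a cue to verify externally. A minimal sketch, assuming a hypothetical ask_model callable that stands in for whatever chat interface a student uses:

    from collections import Counter
    from typing import Callable

    def consistent(ask_model: Callable[[str], str], question: str, n: int = 3) -> bool:
        """Re-ask the same question n times; divergent answers suggest the model
        is guessing rather than recalling a stable fact."""
        answers = [ask_model(question).strip().lower() for _ in range(n)]
        _, count = Counter(answers).most_common(1)[0]
        return count == n  # False means: verify against an external source

    if __name__ == "__main__":
        # Toy stand-in for a real chat-model call, for demonstration only.
        canned = iter(["Paris", "Paris", "paris"])
        print(consistent(lambda q: next(canned), "What is the capital of France?"))

Agreement across runs does not guarantee correctness, since a model can be consistently wrong; this is why the study's emphasis on external verification matters.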

Merits

Empirical Evidence

The study provides first-hand empirical evidence, drawn from students' own accounts, for integrating hallucination awareness and mitigation into AI literacy curricula.

Demerits

Limited Sample Size

The study's sample size of 63 students may not be representative of the broader student population, which could limit the generalizability of the findings.

Expert Commentary

This study contributes to our understanding of the issues surrounding AI hallucinations in educational settings. The findings underscore the need for a multifaceted response: AI literacy curricula that address hallucinations explicitly, training for educators, and support from policymakers. By examining students' experiences and mental models, the study offers valuable insight into the cognitive and social factors that shape AI-supported learning. Further research with larger and more diverse samples is needed to confirm these findings and to develop effective mitigation strategies.

Recommendations

  • Develop and implement AI literacy curricula that prioritize hallucination awareness and mitigation
  • Provide educators with training and resources to effectively address AI hallucinations in their teaching practices
