
Human-in-the-Loop Control of Objective Drift in LLM-Assisted Computer Science Education


Mark Dranias, Adam Whitley

arXiv:2604.00281v1 · Announce Type: new

Abstract: Large language models (LLMs) are increasingly embedded in computer science education through AI-assisted programming tools, yet such workflows often exhibit objective drift, in which locally plausible outputs diverge from stated task specifications. Existing instructional responses frequently emphasize tool-specific prompting practices, limiting durability as AI platforms evolve. This paper adopts a human-centered stance, treating human-in-the-loop (HITL) control as a stable educational problem rather than a transitional step toward AI autonomy. Drawing on systems engineering and control-theoretic concepts, we frame objectives and world models as operational artifacts that students configure to stabilize AI-assisted work. We propose a pilot undergraduate CS laboratory curriculum that explicitly separates planning from execution and trains students to specify acceptance criteria and architectural constraints prior to code generation. In selected labs, the curriculum also introduces deliberate, concept-aligned drift to support diagnosis and recovery from specification violations. We report a sensitivity power analysis for a three-arm pilot design comparing unstructured AI use, structured planning, and structured planning with injected drift, establishing detectable effect sizes under realistic section-level constraints. The contribution is a theory-driven, methodologically explicit foundation for HITL pedagogy that renders control competencies teachable across evolving AI tools.

Executive Summary

This article proposes a human-centered approach to mitigating objective drift in large language model (LLM)-assisted computer science education, where locally plausible model outputs diverge from stated task specifications. The authors develop a theory-driven, methodologically explicit foundation for human-in-the-loop (HITL) pedagogy so that control competencies remain teachable as AI tools evolve. A pilot undergraduate CS laboratory curriculum separates planning from execution and trains students to specify acceptance criteria and architectural constraints before code generation; selected labs also inject deliberate, concept-aligned drift so students practice diagnosing and recovering from specification violations. The study reports a sensitivity power analysis for a three-arm pilot design comparing unstructured AI use, structured planning, and structured planning with injected drift, establishing the effect sizes detectable under realistic section-level constraints. The approach treats human control and agency as a durable educational goal in AI-driven environments rather than a transitional step toward AI autonomy.
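The paper's exact power-analysis procedure is not reproduced here, but a sensitivity analysis of this kind can be sketched with a short Monte Carlo simulation: for a three-arm one-way comparison, sweep Cohen's f and ask which effect sizes reach 80% power at an assumed per-arm section size. The arm size of 40 and the use of a simulated (rather than analytic) null distribution are illustrative assumptions, not details from the paper.

```python
import random
import statistics

def f_statistic(groups):
    """One-way ANOVA F statistic for a list of samples."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    means = [statistics.fmean(g) for g in groups]
    grand = sum(x for g in groups for x in g) / n
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

def simulated_power(effect_f, n_per_arm, sims=2000, alpha=0.05, seed=0):
    """Monte Carlo power of a 3-arm one-way ANOVA at Cohen's f = effect_f."""
    rng = random.Random(seed)
    k = 3
    # With unit-variance arms and means (-d, 0, +d), Cohen's f = d * sqrt(2/3).
    d = effect_f * (1.5 ** 0.5)
    alt_means = (-d, 0.0, d)
    # Simulate the null to get a critical value (no F-distribution table needed).
    null = sorted(
        f_statistic([[rng.gauss(0.0, 1.0) for _ in range(n_per_arm)]
                     for _ in range(k)])
        for _ in range(sims))
    crit = null[int((1 - alpha) * sims) - 1]
    # Power = fraction of alternative-hypothesis simulations exceeding it.
    hits = sum(
        f_statistic([[rng.gauss(m, 1.0) for _ in range(n_per_arm)]
                     for m in alt_means]) > crit
        for _ in range(sims))
    return hits / sims

# Sensitivity sweep: which effect sizes are detectable at ~80% power
# with an assumed 40 students per arm?
for f in (0.10, 0.25, 0.40):   # Cohen's small / medium / large benchmarks
    print(f"f = {f:.2f}: power ~ {simulated_power(f, n_per_arm=40):.2f}")
```

The sweep makes the "detectable effect size" framing concrete: with realistic section sizes, small effects fall well below conventional power thresholds while large effects are comfortably detectable, which is exactly the kind of constraint a sensitivity power analysis is meant to expose before running a pilot.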

Key Points

  • The article treats human-in-the-loop (HITL) control as a stable educational problem, not a transitional step toward AI autonomy.
  • The authors develop a theory-driven, methodologically explicit foundation for HITL pedagogy, drawing on systems engineering and control-theoretic concepts.
  • A pilot undergraduate CS laboratory curriculum separates planning from execution and trains students to specify acceptance criteria and architectural constraints before code generation; selected labs inject deliberate, concept-aligned drift to exercise diagnosis and recovery.
  • A sensitivity power analysis for a three-arm pilot design (unstructured AI use, structured planning, structured planning with injected drift) establishes detectable effect sizes under realistic section-level constraints.
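To make the "acceptance criteria before code generation" idea concrete, here is one hypothetical form such an artifact could take (this example is illustrative, not taken from the paper): students commit executable checks against the agreed specification before prompting the model, giving an objective anchor against drift.

```python
# Hypothetical acceptance criteria, written and committed *before* any
# code generation. Generated code must pass these checks, so a locally
# plausible output that drifts from the spec is caught mechanically.

def acceptance_criteria(sort_fn):
    """Check a candidate sorting function against the agreed specification."""
    assert sort_fn([]) == []                    # handles the empty input
    assert sort_fn([3, 1, 2]) == [1, 2, 3]      # basic ascending order
    assert sort_fn([2, 2, 1]) == [1, 2, 2]      # duplicates preserved
    data = [5, -1, 0]
    sort_fn(data)
    assert data == [5, -1, 0]                   # must not mutate its input

# A correct candidate passes silently; a "plausible" in-place variant
# that mutates its argument would violate the final constraint.
acceptance_criteria(sorted)
```

The last check illustrates an architectural constraint (no mutation of inputs) of the kind the curriculum asks students to pin down in advance, since it is exactly the sort of detail a generated solution can quietly violate while still "looking right."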

Merits

Strength

The article provides a theoretically grounded and methodologically explicit approach to HITL pedagogy, addressing a critical need in AI-assisted education.

Novelty

The article reframes HITL control as a stable educational problem rather than a transitional step toward AI autonomy, treating objectives and world models as operational artifacts that students configure to stabilize AI-assisted work.

Demerits

Limitation

The study reports a curriculum design and a sensitivity power analysis rather than empirical results from an implemented pilot, and its eventual findings may not generalize to larger populations or more complex educational settings.

Expert Commentary

This article makes a meaningful contribution to AI-assisted education by grounding HITL pedagogy in systems engineering and control theory rather than in tool-specific prompting practices. Its insistence on human control and agency as a durable goal, rather than a stopgap on the way to AI autonomy, is particularly noteworthy. If the proposed pilot bears out the power analysis, the curriculum's planning-before-execution structure and drift-injection labs could be adapted and scaled to other educational settings, with implications for education policy and practice.

Recommendations

  • Future research should explore the scalability and generalizability of the HITL pedagogy developed in this study, as well as its applicability to more complex educational settings.
  • Policymakers and educators should prioritize human-centered approaches to AI-assisted education that preserve human control and agency in AI-driven workflows.

Sources

Original: arXiv - cs.AI