MaskingAgent: Preventing LLM Tutors from Providing Full Solutions in Python Programming Courses
Recent advances in large language models (LLMs) have spurred interest in their use as tutors in programming education. However, in our preliminary deployment of an LLM tutor in a Python programming course, we observed a critical issue: the unintended disclosure of full solutions. Such disclosure can hinder learning, as students may rely on complete answers instead of engaging in problem solving. The problem persists even when system messages explicitly instruct the model not to provide full code, because students can often elicit full solutions through prompt engineering. To address this issue, we propose MaskingAgent, an agent that automatically inspects tutor responses, executes any code they contain, and determines whether that code constitutes a correct solution. If it does, MaskingAgent masks essential parts of the code, providing students with a scaffolded version that encourages active engagement while still offering meaningful guidance. Through experiments and instructor evaluations, we demonstrate that MaskingAgent effectively prevents solution leakage while preserving the pedagogical benefits of LLM tutors.
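The pipeline the abstract describes (execute the tutor's code, check correctness, and mask essential parts if it is a full solution) can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the function names (`passes_tests`, `mask_solution`, `filter_response`), the test-case-based correctness check, and the line-level masking rule are all assumptions made for this sketch.

```python
def passes_tests(code, tests, func_name):
    """Hypothetical correctness check: execute the candidate code and
    compare its output against instructor-provided test cases."""
    ns = {}
    try:
        exec(code, ns)  # in practice, run inside a proper sandbox
        fn = ns[func_name]
        return all(fn(*args) == expected for args, expected in tests)
    except Exception:
        return False

def mask_solution(code):
    """Replace solution-bearing lines with placeholders, keeping the
    function signature and comments as scaffolding for the student."""
    masked = []
    for line in code.splitlines():
        stripped = line.strip()
        if not stripped or stripped.startswith("def ") or stripped.startswith("#"):
            masked.append(line)  # keep structure and hints
        else:
            indent = line[: len(line) - len(line.lstrip())]
            masked.append(indent + "...  # TODO: fill in this step")
    return "\n".join(masked)

def filter_response(code, tests, func_name):
    """If the tutor's code is a complete, correct solution, return a
    masked scaffold instead; otherwise pass the code through unchanged."""
    if passes_tests(code, tests, func_name):
        return mask_solution(code)
    return code
```

Under this sketch, a leaked full solution such as `def add(a, b): return a + b` would reach the student with its body replaced by `...  # TODO` placeholders, while partial or incorrect snippets (which cannot pass the tests) are delivered as-is.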