Owlgorithm: Supporting Self-Regulated Learning in Competitive Programming through LLM-Driven Reflection
We present Owlgorithm, an educational platform designed to support self-regulated learning in competitive programming through AI-generated reflective questions. Owlgorithm leverages GPT-4o to generate context-aware, metacognitive prompts tailored to individual student submissions. Integrated into a sophomore- and junior-level competitive programming course, the system provided reflective prompts adapted to student outcomes: guiding deeper conceptual understanding for correct submissions and facilitating structured debugging for partially incorrect or failed submissions.
Our preliminary assessment, based on student ratings and teaching assistant feedback, indicates both promising benefits and notable limitations. While many students found the generated questions useful for reflection and troubleshooting, concerns were raised about the accuracy of the feedback and its practical usability in the classroom. These results suggest potential advantages of LLM-supported reflection for novice programmers, although further improvements are needed to enhance its reliability and educational value for more advanced students and experienced teaching assistants.
From our experience, several key insights emerged: generative AI tools can effectively support structured reflection, but careful prompt engineering, dynamic adaptation, and usability improvements are critical to realizing their full potential in educational contexts. We offer specific recommendations for instructors integrating similar tools and outline next steps for enhancing Owlgorithm's educational impact.