Diagnosing Students’ Understanding of Objects and Classes in OOP
Misconceptions in programming can significantly hinder students’ conceptual development and their ability to transfer core knowledge across contexts. In object-oriented programming (OOP) in particular, students hold persistent misconceptions about classes and objects that impede their performance in advanced topics and courses. Traditional methods of identifying misconceptions, such as interviews and detailed analysis of open-ended responses, are insightful but impractical in large classrooms. Coding platforms, while widely used, mainly detect syntax and output errors and reveal little about students’ conceptual understanding; moreover, they typically lack psychometric validation and do not use distractors grounded in authentic student thinking. My dissertation addresses these gaps by developing and evaluating a three-tier diagnostic instrument for identifying misconceptions about classes and objects, with validity evidence organized under Kane’s validity framework. Each item combines an answer tier, a reason tier, and a confidence tier, capturing both students’ conceptual understanding and their level of certainty. Advances in large language models (LLMs) are leveraged to automate the generation and plausibility scoring of distractors, enabling scalable, psychometrically sound item development. With this work, I aim to provide instructors with a research-informed tool for diagnosing misconceptions about classes and objects in large-scale CS2 courses, one whose intended uses are psychometrically validated. By combining structured diagnostic assessment design with recent advances in language models, the three-tier diagnostic tool offers a scalable, evidence-based approach to improving the teaching and learning of foundational OOP concepts.
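To make the three-tier design concrete: in three-tier test research, a response is typically interpreted by crossing correctness on the answer and reason tiers with the student’s self-reported confidence. The sketch below is a minimal, hypothetical Python illustration of that scheme; the class, function, and category labels are assumptions for exposition, not the dissertation’s actual instrument or coding scheme.

```python
from dataclasses import dataclass

@dataclass
class ThreeTierResponse:
    """One student's response to a single three-tier item (illustrative)."""
    answer_correct: bool  # tier 1: chosen answer option
    reason_correct: bool  # tier 2: chosen reasoning option
    confident: bool       # tier 3: self-reported certainty

def classify(r: ThreeTierResponse) -> str:
    """Map a response to a diagnostic category (labels are assumptions)."""
    if r.answer_correct and r.reason_correct:
        # Correct on both content tiers: confidence distinguishes
        # genuine understanding from a lucky guess.
        return "sound understanding" if r.confident else "lucky guess"
    # Wrong on at least one content tier: high confidence in a wrong
    # answer/reason is the signature of a misconception, while low
    # confidence suggests a gap in knowledge rather than a firmly
    # held wrong idea.
    return "misconception" if r.confident else "lack of knowledge"
```

For example, a student who picks a wrong reason yet reports high confidence would be flagged as holding a likely misconception, whereas the same wrong choice with low confidence would be coded as lack of knowledge, which is the distinction that makes the confidence tier diagnostically useful at scale.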