Topic-Level Feedback Summarization for an Explanation-Based Classroom Response System
Fostering engagement among undergraduate computer science students in large-lecture settings can be challenging for instructors. The didactic teaching styles common in such lectures may be less effective than dialogic teaching, but the overhead that dialogic teaching involves may preclude its use in large introductory computer science courses. Classroom response systems such as multiple-choice ``clicker'' devices give students a way to engage with an instructor, but evidence suggests that multiple-choice questions may not foster the deep thought that open-ended questions do. AI-enabled learning analytics offer a powerful method of automatically assessing open-ended responses, but grading students' free-text answers and summarizing class-wide performance during a lecture present unique difficulties, especially for the algorithmic questions prevalent in computer science lectures.
We present a system for delivering, assessing, and summarizing students' responses to open-ended question prompts during such lectures. Central to this system is the effective summarization of topic-level student responses to provide actionable feedback to the instructor. This poster presents an early phase of this summarization work, evaluating an LLM-based system that summarizes student feedback along two dimensions: quantitative semantic accuracy and qualitative usefulness to instructors. Semantic accuracy is found to be high and consistent across different summarization methods. Instructor interviews identified several key qualities that make summaries useful, including format preferences and features that allow selective insight into student responses related to misconceptions.
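The abstract does not specify how semantic accuracy is measured; one common way to quantify how well a generated summary covers a set of student responses is to embed both and compare them with cosine similarity. The sketch below illustrates that idea using the sentence-transformers library; the model name, the mean-similarity scoring, and the example data are illustrative assumptions, not the evaluation described in this poster.

\begin{verbatim}
# Illustrative sketch only: one plausible semantic-accuracy score for
# a summary of student responses. Model choice and aggregation are
# assumptions, not the poster's actual method.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model

def semantic_accuracy(summary: str, responses: list[str]) -> float:
    """Mean cosine similarity between the summary and each response."""
    summary_emb = model.encode(summary, convert_to_tensor=True)
    response_embs = model.encode(responses, convert_to_tensor=True)
    sims = util.cos_sim(summary_emb, response_embs)  # shape (1, n)
    return sims.mean().item()

# Hypothetical topic-level responses from a lecture prompt.
responses = [
    "Inserting at the head of a linked list is O(1).",
    "A linked list insert is constant time if you hold the node.",
]
summary = "Students generally understand head insertion is O(1)."
print(f"semantic accuracy: {semantic_accuracy(summary, responses):.3f}")
\end{verbatim}

A score near 1.0 would suggest the summary stays semantically close to the underlying responses; in practice one might instead compare the summary against a per-topic centroid or a reference summary, depending on the evaluation design.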