Enrollments in introductory computer science (CS1) courses continue to rise, making it difficult for instructors to deliver rapid, individualized feedback that addresses students’ misconceptions at scale. We present an analysis framework and instructor tool that leverage large language models (LLMs) to classify, cluster, and present students’ coding errors in real time. Our approach comprises two main contributions: (1) a prompt-engineered workflow for automatic error detection, paired with a clustering pipeline that uses a universal sentence encoder, k-means, and t-SNE to group errors into thematic clusters; and (2) a dashboard that lets instructors review class-wide, LLM-identified errors and dynamically tailor instruction toward current student misunderstandings. The automated thematic clustering surfaces conceptual and strategic pitfalls that surface-level debugging often misses. A pilot study is underway to evaluate the dashboard’s effectiveness in enhancing active learning in large-scale CS1 instructional settings.
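
The embed–cluster–project pipeline described above can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the example error descriptions are invented, and TF-IDF vectors stand in for the universal sentence encoder embeddings (which would normally come from a pretrained model such as one hosted on TensorFlow Hub).

```python
# Sketch of the clustering pipeline: embed error descriptions,
# cluster them into themes, and project to 2-D for a dashboard plot.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.manifold import TSNE

# Hypothetical LLM-identified error descriptions (illustrative only).
error_descriptions = [
    "off-by-one error in loop bound",
    "loop iterates one time too many",
    "variable used before assignment",
    "name referenced before it is defined",
    "string concatenated with integer without a cast",
    "type mismatch when adding str and int",
]

# 1. Embed each description. TF-IDF is a lightweight placeholder for
#    the sentence-encoder embeddings used in the actual pipeline.
embeddings = TfidfVectorizer().fit_transform(error_descriptions).toarray()

# 2. Group the embeddings into thematic clusters with k-means.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(embeddings)

# 3. Project the embeddings to 2-D with t-SNE for visualization
#    (perplexity must be smaller than the number of samples).
coords = TSNE(n_components=2, perplexity=2, random_state=0).fit_transform(embeddings)
```

In practice the number of clusters would be chosen per assignment (e.g., via silhouette scores), and the 2-D coordinates would feed the dashboard's scatter view of class-wide error themes.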