This program is tentative and subject to change.

Thu 19 Feb 2026 14:20 - 14:40 at Meeting Room 100 - Assessment and Feedback

Assessing what students know and what work they have done is increasingly challenging in the age of large language models (LLMs) like ChatGPT. In response, many instructors are looking to redesign their course assessment structures to address the likelihood of students using LLMs on homework assignments. Plausible solutions include more in-class assessment, proctored written exams, or even sitting down with every student for a dialogue-based assessment. Though the benefits of individual dialogue-based assessments are well-documented, adopting these approaches is often too challenging in courses with hundreds of students, a typical lower-division CS class size at many institutions. Thus, we sought to develop an intervention that augments existing course assessment structures, accounts for potential student LLM usage, scales to CS class sizes, and benefits student learning via student-centered design principles. We report our experience implementing “Personalized Exam Prep” (PEP) in two offerings of a 300-student, second-year CS-core course at a North American research-intensive university. PEP consists of one-on-one, TA-guided, problem-solving dialogues, optimized to give students personalized feedback before an upcoming written exam. We describe the logistics of applying this intervention at scale, made possible by custom tooling. We then outline outcomes: positive student feedback, metacognitive impacts, how well PEP-measured performance predicts written exam scores, and the value of PEP as a course diagnostic for instructors. Finally, we offer insights that may be valuable for large-course instructors updating their assessment structures in a post-LLM world.

Thu 19 Feb

Displayed time zone: Central Time (US & Canada)

13:40 - 15:00
Assessment and Feedback (Papers) at Meeting Room 100
13:40
20m
Talk
Assessing Student Proficiency in Foundational Developer Tools Through Live Checkoffs
Papers
Connor McMahon, Lauren Feldman University of North Carolina at Chapel Hill
14:00
20m
Talk
Understanding Student Interaction with AI-Powered Next-Step Hints: Strategies and Challenges
Papers
Anastasiia Birillo JetBrains Research, Aleksei Rostovskii JetBrains Research, Yaroslav Golubev JetBrains Research, Hieke Keuning Utrecht University
14:20
20m
Talk
Personalized Exam Prep (PEP): Scaling No-Stakes, No-LLM Dialogue-Based Assessments in a Large CS Course
Papers
Kelly Cochran, Chris Piech Stanford University
14:40
20m
Talk
Fine-Tuning Open-Source Models as a Viable Alternative to Proprietary LLMs for Explaining Compiler Messages
Papers
Lorenzo Lee Solano University of New South Wales, Sydney, Charles Koutcheme Aalto University, Juho Leinonen Aalto University, Alexandra Vassar University of New South Wales, Sydney, Jake Renzella University of New South Wales, Sydney