This program is tentative and subject to change.

The rise of Large Language Models has intensified the need for reliable assessments of foundational programming skills in introductory computer science courses. While frequent, low-stakes testing is an effective pedagogical strategy, its adoption is often slowed by the need for institutional infrastructure like a Computer-Based Testing Facility, which instructors may lack or find too inflexible. This experience report presents a practical, instructor-driven model for Skill Tests: weekly, 10-minute, proctored, computer-based assessments run by course staff in any campus room using student laptops. We provide a logistical blueprint refined over five offerings of a large CS1 course, detailing our strategies for question design, student scheduling, and staff management. Our findings show this model is effective: replacing a midterm with Skill Tests reduced student anxiety, with 91% of students preferring the new format. Student performance on the final exam remained consistent, indicating that knowledge synthesis was not compromised. This paper offers educators a framework for deploying frequent, low-stakes assessments without dedicated institutional resources.

Fri 20 Feb

Displayed time zone: Central Time (US & Canada)

10:40 - 12:00
Scaling Feedback and Assessments Without Losing Your Sanity (or Your Servers)
Papers at Meeting Room 100
10:40
20m
Talk
Aligning Small Language Models for Programming Feedback: Towards Scalable Coding Support in a Massive Global Course
Papers
Charles Koutcheme Aalto University, Juliette Woodrow Stanford University, Chris Piech Stanford University
11:00
20m
Talk
Designing and Implementing Skill Tests at Scale: Frequent, Computer-Based, Proctored Assessments with Minimal Infrastructure Requirements
Papers
Anastasiya Markova University of California San Diego, Anish Kasam University of California San Diego, Bryce Hackel University of California San Diego, Marina Langlois University of California San Diego, Sam Lau University of California San Diego
11:20
20m
Talk
Scaling Engagement: Leveraging Social Annotation and AI for Collaborative Code Review in Large CS Courses
Papers
Raymond Klefstad University of California, Irvine, Susan Klefstad Independent Researcher, Vincent Tran University of California, Irvine, Michael Shindler University of California, Irvine