Designing and Implementing Skill Tests at Scale: Frequent, Computer-Based, Proctored Assessments with Minimal Infrastructure Requirements
The rise of Large Language Models has intensified the need for reliable assessments of foundational programming skills in introductory computer science courses. While frequent, low-stakes testing is an effective pedagogical strategy, its adoption is often hindered by the need for institutional infrastructure such as a Computer-Based Testing Facility, which many instructors lack access to or find too inflexible. This experience report presents a practical, instructor-driven model for Skill Tests: weekly, 10-minute, proctored, computer-based assessments administered by course staff in any campus room using students' own laptops. We provide a logistical blueprint refined over five offerings of a large CS1 course, detailing our strategies for question design, student scheduling, and staff management. Our findings show this model is effective: replacing a midterm exam with Skill Tests reduced student anxiety, and 91% of students preferred the new format. Student performance on the final exam remained consistent across offerings, indicating that knowledge synthesis was not compromised. This paper offers educators a framework for deploying frequent, low-stakes assessments without dedicated institutional resources.