This program is tentative and subject to change.

Peer code review activities, like their industry-proven counterpart, have many reported benefits: they enhance programming ability, conceptual understanding, and community, while improving students' debugging ability and code quality. Problems, however, can include lack of engagement and poor review quality; motivating students to engage with code reviews is therefore essential.

The majority of peer code reviews are individual reviews of another student’s work; collaborative reviews are also used, often in person, with structured roles such as author, reader, inspector, and recorder. Both methods, however, can require too much administrative overhead to use in large courses.

We developed a novel collaborative, engaging, and scalable code review activity that is easy to use in large courses, built on the freely available social annotation app Perusall. Perusall automatically placed students into groups, where they posted reviews and discussed them. Perusall then graded each submission with AI/ML and synced grades to the LMS. Our goals were to increase students' ability to find and fix bugs and readability issues in code, while improving their communication skills.

Students found the collaborative code reviews helpful and engaging; they learned from others, increased their real-world job skills, and improved their ability to find bugs and readability issues. While they desired improved functionality from Perusall, they also wanted more frequent code reviews, as well as individual reviews of their own code. Collaborative code reviews are thus both beneficial and easy to deploy, and they motivate students to seek the additional benefits of individual code reviews.

Fri 20 Feb

Displayed time zone: Central Time (US & Canada)

10:40 - 12:00
Scaling Feedback and Assessments Without Losing Your Sanity (or Your Servers)
Papers at Meeting Room 100
10:40
20m
Talk
Aligning Small Language Models for Programming Feedback: Towards Scalable Coding Support in a Massive Global Course
Papers
Charles Koutcheme Aalto University, Juliette Woodrow Stanford University, Chris Piech Stanford University
11:00
20m
Talk
Designing and Implementing Skill Tests at Scale: Frequent, Computer-Based, Proctored Assessments with Minimal Infrastructure Requirements
Papers
Anastasiya Markova University of California San Diego, Anish Kasam University of California San Diego, Bryce Hackel University of California San Diego, Marina Langlois University of California San Diego, Sam Lau University of California San Diego
11:20
20m
Talk
Scaling Engagement: Leveraging Social Annotation and AI for Collaborative Code Review in Large CS Courses
Papers
Raymond Klefstad University of California, Irvine, Susan Klefstad Independent Researcher, Vincent Tran University of California, Irvine, Michael Shindler University of California, Irvine