Scaling Engagement: Leveraging Social Annotation and AI for Collaborative Code Review in Large CS Courses
Peer code review activities, modeled on their industry-proven counterpart, have many reported benefits: they enhance programming ability, conceptual understanding, and community while improving students' debugging skills and code quality. However, problems can include lack of engagement and poor review quality, so motivating students to engage with code reviews is essential.
Most peer code reviews are individual reviews of another student's work; collaborative reviews are also used, often in person, with structured roles such as author, reader, inspector, and recorder. Both approaches, however, can impose too much administrative overhead for large courses.
We developed a novel collaborative, engaging, and scalable code review activity that is easy to use in large courses, built on the freely available social annotation app Perusall. Perusall automatically placed students into groups, where they posted and discussed their reviews. It then graded each submission using AI/ML and synced grades to the LMS. Our goals were to increase students' ability to find and fix bugs and readability issues in code while improving their communication skills.
Students found the collaborative code reviews helpful and engaging; they learned from others, built real-world job skills, and improved their ability to find bugs and readability issues. Although they desired improved functionality from Perusall, they also wanted more frequent code reviews as well as individual reviews of their own code. Collaborative code reviews are thus beneficial yet easy to deploy, and they motivate students to seek the additional benefits of individual code reviews.