Measuring Students’ Perceptions of an Autograded Scaffolding Tool for Students Performing at All Levels in an Algorithms Class
Algorithms courses are a foundational part of an undergraduate computer science degree; they require abstract thinking and creativity and are known to be challenging for many students. Recently, researchers have been developing auto-graded tools to scaffold students through the problem-solving process. We examine students’ perceptions of such a tool in a required upper-division Algorithms course at an R1 university. The goal of the tool is to improve students’ experience in three ways: (1) help students break down the problem-solving process into clear steps; (2) increase students’ self-efficacy by raising their confidence and understanding of the material; and (3) have low “cost” by being easy to use, enjoyable, and a good use of students’ time. The tool is designed to provide these benefits to students at every level of mastery through instantaneous feedback on increasingly challenging problems. Based on a survey of almost 1000 students across four semesters, each with a different instructor, we examine whether student feedback is favorable over all four offerings and for groups of students with different course outcomes. Using qualitative and quantitative methods, we found that across each of the four semesters and across letter grades A, B, and C, students favored the tool over written homework.
Thu 19 Feb — Time zone: Central Time (US & Canada)
10:40 - 12:00
10:40 20m Talk | AI-Supported Grading and Rubric Refinement for Free Response Questions (Papers)
Victor Zhao (University of Illinois Urbana-Champaign), Max Fowler (University of Illinois Urbana-Champaign), Yael Gertner (University of Illinois Urbana-Champaign), Seth Poulsen (Utah State University), Matthew West (University of Illinois Urbana-Champaign), Mariana Silva (University of Illinois Urbana-Champaign)
11:00 20m Talk | Creating Exercises with Generative AI for Teaching Introductory Secure Programming: Are We There Yet? (Papers)
11:20 20m Talk | Improving LLM-Generated Educational Content: A Case Study on Prototyping, Prompt Engineering, and Evaluating a Tool for Generating Programming Problems for Data Science (Papers)
Jiaen Yu (University of California, San Diego), Ylesia Wu (University of California, San Diego), Gabriel Cha (University of California, San Diego), Ayush Shah (University of California, San Diego), Sam Lau (University of California, San Diego)
11:40 20m Talk | Measuring Students’ Perceptions of an Autograded Scaffolding Tool for Students Performing at All Levels in an Algorithms Class (Papers)
Yael Gertner (University of Illinois Urbana-Champaign), Brad Solomon (University of Illinois Urbana-Champaign), Hongxuan Chen (University of Illinois Urbana-Champaign), Eliot Robson (University of Illinois Urbana-Champaign), Carl Evans (University of Illinois Urbana-Champaign), Jeff Erickson (University of Illinois Urbana-Champaign)