Designing AI-Resistant Assignments via Iterative Perturbation to Promote Interactive Learning
This dissertation proposes and evaluates an iterative process designed to help university-level computer science instructors modify ("perturb") existing assignments until they are resilient to passive, non-constructive use of large language model (LLM) chatbots. The iterative process begins by posing a question from the original assignment to a chatbot. If the chatbot's response meets or exceeds a set grade threshold (e.g., 60-80%), one of three modifications is applied to create a new question: adding a spatial reasoning requirement; re-axiomatizing the problem space (e.g., deriving a new number system by altering the axioms defining the natural numbers); or incorporating additional class-specific context. This perturbed question is then re-tested with the chatbot, and further modifications are applied at each step until the chatbot's performance falls below the predetermined threshold. The resulting set of repeatedly perturbed questions is then incorporated into new assignments for students.

To assess the effectiveness of this method, I conducted a pilot study in an undergraduate discrete mathematics course. The perturbed assignment was offered as a lab, and a personalized LLM-based chatbot tool was provided to collect detailed student-chatbot interaction logs as the students completed the assignment. I will analyze these logs to classify student engagement behaviors according to the ICAP framework (Interactive, Constructive, Active, Passive). I hypothesize that these behaviors can predict course outcomes, helping to determine whether perturbed assignments effectively differentiate between students who genuinely understand the material and those who do not, ultimately leading to more meaningful learning and more accurate assessment.
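The perturbation loop described above can be sketched in code. The following is a minimal, hypothetical illustration: `ask_chatbot` and `grade_response` stand in for querying an LLM and instructor grading, and the three perturbation functions are placeholders for the manual question rewrites named in the abstract; none of these names come from the source.

```python
import random

# Hypothetical placeholders for the three modification strategies.
def add_spatial_reasoning(question):
    # Rewrite the question to require spatial reasoning (placeholder).
    return question + " [spatial-reasoning variant]"

def re_axiomatize(question):
    # Alter the axioms underlying the problem space (placeholder).
    return question + " [re-axiomatized variant]"

def add_class_context(question):
    # Weave in class-specific context (placeholder).
    return question + " [class-context variant]"

PERTURBATIONS = [add_spatial_reasoning, re_axiomatize, add_class_context]

def perturb_until_resistant(question, ask_chatbot, grade_response,
                            threshold=0.7, max_rounds=10):
    """Repeatedly perturb a question until the chatbot's graded answer
    falls below the grade threshold (e.g., 70%), or give up after
    max_rounds attempts."""
    for _ in range(max_rounds):
        answer = ask_chatbot(question)
        if grade_response(question, answer) < threshold:
            return question  # chatbot now fails: question is "AI-resistant"
        question = random.choice(PERTURBATIONS)(question)
    return question
```

In practice each step would involve an actual LLM query and human grading rather than these stubs; the sketch only captures the control flow of the test-perturb-retest cycle.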