The rapid integration of AI coding assistants into student workflows has fundamentally challenged the role of traditional programming assignments as a measure of individual understanding. Rather than attempting to prohibit these tools, we propose a pedagogical shift: from assessing the process of code creation to verifying a student’s comprehension and ownership of the final submitted work. We present a fully automated pipeline that leverages a Large Language Model (LLM) to generate individualized, in-class quizzes whose questions target specific segments of each student’s own code. This approach compels students to engage deeply with their submissions, since they must be prepared to explain their logic and implementation choices. Even when students rely heavily on AI tools to complete out-of-class assignments, this method places the responsibility for understanding the code squarely on the student. The instructor remains central to the process, reviewing each quiz before it is distributed and hand-grading it to provide meaningful, nuanced feedback, ensuring both fairness and a human connection in the assessment loop.