Assessing what students know and what work they have done is increasingly challenging in the age of large language models (LLMs) like ChatGPT. In response, many instructors are looking to redesign their course assessment structures to address the likelihood of students using LLMs on homework assignments. Plausible solutions include more in-class assessment, proctored written exams, or even sitting down with every student for a dialogue-based assessment. Though the benefits of individual dialogue-based assessments are well documented, adopting these approaches is often impractical in courses with hundreds of students – a typical lower-division CS class size at many institutions. Thus, we sought to develop an intervention that augments an existing course assessment structure, accounts for potential student LLM usage, scales to large CS class sizes, and benefits student learning via student-centered design principles. We report our experience implementing “Personalized Exam Prep” (PEP) in two offerings of a 300-student, second-year CS-core course at a North American research-intensive university. PEP consists of one-on-one, TA-guided, problem-solving dialogues designed to give students personalized feedback before an upcoming written exam. We describe the logistics of applying this intervention at scale, made possible by custom tooling. We then outline outcomes: positive student feedback, metacognitive impacts, how well PEP-measured performance predicts written exam scores, and the value of PEP as a course diagnostic for instructors. Finally, we offer insights that may be valuable for instructors of large courses updating their assessment structures in a post-LLM world.