Large Language Models (LLMs) hold significant potential for transforming computer science education, yet concerns over their possible negative effects on student learning and retention have slowed broader instructor adoption. Evidence on LLM use is mixed: while novices may benefit from the generative capabilities of AI, they also risk becoming overreliant on it. To address these concerns, we investigate how the pace of interaction with AI assistants affects learning in introductory CS courses by deploying three AI assistants (Fast, Medium, and Slow) in a classroom setting. Our results show that the slower-paced, Socratic-style AI assistant significantly increases learning, especially for students with less prior knowledge. Although faster-paced interaction initially benefits more advanced students, their learning retention degrades enough to negate those gains. Surprisingly, the medium-paced assistant, which uses typical instructor preprompt elements, shows no statistically significant improvement, suggesting it may be the least effective of the three. Given that students may use fast-paced commercial AI tools for coursework regardless of policy, offering a slower-paced, Socratic-style AI alternative could meaningfully improve overall student learning outcomes.