Empowering Computer Science Teachers by Integrating AI into Learning Environments
Integrating artificial intelligence (AI) and large language models (LLMs) into education can accelerate teaching and learning. AI features such as chatbots, instructional guidance, practice support, debugging, and rubric/feedback generation can save time, improve performance, build confidence, and enhance overall learning outcomes. These tools work best when tailored to users’ needs, interaction patterns, and perceptions; such tailoring in turn builds self-efficacy. However, without thoughtful design and proper scaffolding, they can be counterproductive. Unrestricted AI use may yield short-term gains in speed and accuracy but risks undermining “productive struggle,” critical thinking, and long-term learning, especially in programming. A better path uses participant-focused, pedagogy-aware LLMs with clear guardrails, transparent behaviors, and actionable instructor controls. This research investigates integrating AI tools, particularly LLM chatbots, into modern block-based programming (BBP) learning platforms. The dissertation focuses on:
(1) Mapping the AI feature needs of STEM vs. non-STEM teachers through interviews, identifying common and discipline-specific requirements.
(2) Analyzing teachers’ personas and perceptions of the impact of LLM chatbots on coding through thematic analysis of PD workshops.
(3) Exploring the use of AI-generated rubrics in PD sessions, evaluating their benefits, limitations, and adoption factors.
(4) Examining students’ attitudes toward AI-assisted assessments, with the aim of making these systems more ethical, human-centered, and acceptable.
(5) Investigating how students respond to LLM assistance in BBP, focusing on emotions, perceptions, personas, and help-seeking behaviors.
The results provide evidence-based design principles for AI tools and a reusable measurement toolkit to assess their impact on performance, self-efficacy, and equity in K–12 and early undergraduate education.
