Equipping users with the ability to recognize AI hallucinations is essential for AI literacy. Building on research highlighting a lack of studies involving younger learners in the chatbot design process, we explore a constructionist approach to AI literacy focused on understanding hallucinations. We engaged 48 middle school learners in designing LLM-based chatbots with customized characters, roles, and constraints using a learning environment, LUMI. Through peer testing and collaborative evaluation, learners developed practical strategies for identifying hallucinations and understanding AI limitations. Pre- and post-surveys revealed substantial improvements in learners’ AI understanding, awareness of AI hallucinations, and confidence in designing trustworthy chatbots. Qualitative analysis showed that learners developed a foundational understanding of chatbot functionality, LLMs, and prompt design, while demonstrating recognition of data limitations and strategies to mitigate hallucinations. This work contributes to teaching responsible AI interaction to youth learners.