Improving Online Learning: Using Utterance Distribution to Improve Student-Facing Assistants in Discussion Forums
Recent advances in large language models (LLMs) have paved the way for AI educational assistants in academic settings. However, students’ interactions with AI differ substantially from their discussions with teaching assistants (TAs). This study identifies the main differences between TA and AI responses to students by applying a four-class utterance classification scheme to compare the utterance distributions in TA replies with those in responses generated by Ed-Bot, a state-of-the-art AI educational assistant. The comparison reveals striking distributional differences: Ed-Bot produces far more Advance utterances, whereas TAs use many more React and Social Convention utterances. We then examine how these distributional differences relate to response quality in student-TA interactions, and we seek to improve Ed-Bot’s responses by aligning its utterance distribution more closely with that of actual TA responses.