Exploring Trust in Human-LLM Feedback Systems: Observation of Student Behaviour in Software Engineering Education
AI systems can deliver scalable feedback in large courses and often achieve high consistency with human grading. Yet, students continue to view human feedback as more credible, actionable, and trustworthy, motivating growing interest in hybrid approaches that combine AI and human input. Despite advances in this area, three key challenges remain: (1) understanding when and why students escalate from AI to human support, (2) identifying how design interventions such as transparency, explainability, and endorsement affect trust and uptake, and (3) evaluating whether hybrid AI+human feedback models improve learning outcomes compared to AI-only or human-only systems. This study uses controlled experiments and structured interviews to investigate these questions. The results aim to guide the responsible design of hybrid feedback systems that align scalability with pedagogical credibility.