We present an LLM-assisted SQL learning system that closes the loop from problem discovery to grading. Grounded in real-world data-wrangling scenarios, our agentic workflow (i) synthesizes industry-style practice problems with pedagogical metadata, (ii) produces executable reference SQL via a multi-step operator-planning pipeline, and (iii) grades student submissions against rich rubrics, explaining partial credit and surfacing actionable feedback for revision. We evaluate two core capabilities. First, on a large corpus of realistic SQL problems, our zero-shot, multi-step reference-answer generator, implemented with OpenAI’s o4-mini, substantially outperforms a single-prompt baseline and approaches state-of-the-art pipelines trained with supervised learning. Second, in a classroom deployment, we compare LLM-assisted grading with human grading on four exam questions, covering 326 submissions evaluated by six graders. The results indicate that, for many question types, LLMs provide grading signals competitive with those of human graders. The system is designed for responsible educational use: real-world problems, generated reference solutions, and grading assistance together enable scalable practice generation and grading, improving student learning while augmenting instructor capacity.