Automated grading systems for SQL courses can significantly reduce instructor workload while ensuring consistency and objectivity in assessment. At our university, an automated SQL grading tool has become essential for evaluating assignments. Initially, we focused on grading Data Query Language (SELECT) statements, which constitute the core content of assignments in our first-year computer science course. Because SELECT statements produce a result table, they are relatively easy to grade automatically. Other SQL statements, such as CREATE TABLE, INSERT, UPDATE, and DELETE, produce no result table, which makes them more difficult to grade. Recognizing the need to cover broader course material, we have extended our system to evaluate advanced Data Definition Language (DDL) and Data Manipulation Language (DML) statements. In this paper, we describe our approach to automated DDL/DML grading and illustrate our method of clause-driven tailored feedback generation, explaining how the system produces precise, targeted feedback based on the specific SQL clause or component involved. In addition, we present a practical example to highlight the benefits of our approach, and we benchmark our grading tool against existing systems. The extended tool can parse and provide feedback on most student SQL submissions; it consistently delivers targeted feedback, generating nearly one suggestion per error; it produces shorter feedback for simpler DML queries and longer feedback for more complex syntax; and it pinpoints precise SQL errors, offering actionable suggestions with each message tied directly to the specific component that caused the error.
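The distinction the abstract draws, that a SELECT yields a result table which can be compared directly, while DDL/DML statements must be judged by the database state they leave behind, can be sketched as follows. This is a minimal illustration, not the authors' implementation; the schema, table name, and helper functions are hypothetical.

```python
# Sketch: grading a SELECT by comparing result tables, versus grading an
# INSERT by inspecting the database state it produces. Illustrative only;
# the employees table and helper names are assumptions, not from the paper.
import sqlite3

def run(db, sql):
    """Execute a statement and return its result rows (empty for DML/DDL)."""
    return db.execute(sql).fetchall()

def grade_select(db, student_sql, reference_sql):
    # SELECT grading: both statements yield a result table we can compare.
    return run(db, student_sql) == run(db, reference_sql)

def grade_insert(student_sql, reference_sql, probe_sql):
    # DML grading: run each statement on its own fresh copy of the schema,
    # then compare the resulting database states via a probe query.
    def state_after(sql):
        db = sqlite3.connect(":memory:")
        db.execute("CREATE TABLE employees (id INTEGER, name TEXT)")
        db.execute(sql)
        return run(db, probe_sql)
    return state_after(student_sql) == state_after(reference_sql)

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE employees (id INTEGER, name TEXT)")
db.executemany("INSERT INTO employees VALUES (?, ?)", [(1, "Ada"), (2, "Bo")])
print(grade_select(db, "SELECT name FROM employees ORDER BY id",
                       "SELECT name FROM employees ORDER BY id"))  # True
print(grade_insert("INSERT INTO employees VALUES (3, 'Cy')",
                   "INSERT INTO employees VALUES (3, 'Cy')",
                   "SELECT * FROM employees ORDER BY id"))  # True
```

Comparing states through a probe query, rather than the statement's (empty) output, is what makes DML gradable at all; the paper's clause-driven feedback then localizes any mismatch to the offending clause.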