Code quality is an essential aspect of programming education, as it directly affects the maintainability and readability of source code. However, providing automated feedback on code style is challenging, and instructors must use their limited time to focus student attention on the most important issues. To address this challenge, we adapt a defect catalogue for novice Python programmers and construct an automated detection pipeline that integrates multiple static analysis tools with a custom natural-language identifier detector. Using a standardised set of defect examples, we evaluate the detection coverage of the selected tools: within the scope of our integrated toolset, 64 defect types are detected, while 30 remain undetected. Applying the pipeline to over 86,000 student submissions, we analyse the prevalence and distribution of code quality defects in real coursework. To better align with pedagogical priorities, we introduce a multi-dimensional prioritisation framework that combines defect frequency, student coverage, and instructor-rated importance. Our findings provide a practical reference for improving feedback mechanisms and instructional strategies for code quality in introductory programming education.
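
The abstract does not specify how the three prioritisation dimensions are combined. The following is a minimal sketch, assuming a normalised weighted sum; the defect names, weights, and 1-to-5 importance scale are illustrative assumptions, not the paper's actual data or formula.

```python
"""Hypothetical sketch of a multi-dimensional defect prioritisation score.

Combines three signals per defect type:
  - frequency:  occurrence count across all submissions
  - coverage:   fraction of students whose code exhibits the defect
  - importance: instructor rating (assumed here to be a 1-5 scale)
"""

from dataclasses import dataclass


@dataclass
class DefectStats:
    name: str
    frequency: int    # raw occurrence count across the corpus
    coverage: float   # fraction of students affected, in [0, 1]
    importance: float # instructor rating, 1 (minor) to 5 (critical)


def priority_score(d: DefectStats, max_frequency: int,
                   weights: tuple[float, float, float] = (0.3, 0.3, 0.4)) -> float:
    """Return a score in [0, 1]; higher means address this defect first.

    The weighted-sum form and the weight values are assumptions made
    for illustration only.
    """
    w_freq, w_cov, w_imp = weights
    freq_norm = d.frequency / max_frequency if max_frequency else 0.0
    imp_norm = (d.importance - 1) / 4  # map the 1..5 rating onto [0, 1]
    return w_freq * freq_norm + w_cov * d.coverage + w_imp * imp_norm


if __name__ == "__main__":
    # Hypothetical defect statistics, not figures from the study.
    defects = [
        DefectStats("single_letter_identifier", frequency=12000, coverage=0.62, importance=3.0),
        DefectStats("magic_number", frequency=9500, coverage=0.48, importance=4.0),
        DefectStats("unused_import", frequency=3000, coverage=0.15, importance=2.0),
    ]
    max_freq = max(d.frequency for d in defects)
    for d in sorted(defects, key=lambda x: priority_score(x, max_freq), reverse=True):
        print(f"{d.name:26s} score={priority_score(d, max_freq):.2f}")
```

Ranking defects by such a composite score, rather than by raw frequency alone, keeps rare-but-serious issues (high instructor rating) from being drowned out by common trivia, which matches the pedagogical motivation stated above.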