Widespread student adoption of large language models (LLMs) has prompted many CS instructors to assign greater weight to handwritten, proctored assessments. However, this approach struggles to scale as class sizes outpace course staff resources. To address this challenge, our study explores LLM-assisted grading as a way to reduce grading time. While prior work has emphasized tool accuracy, we evaluate both time and accuracy by comparing outcomes when course staff grade with an LLM-assisted grader versus with Gradescope. We also incorporate a mixed-methods analysis of student and staff perceptions. In a CS1 course of 166 students supported by four teaching assistants (TAs), LLM-assisted grading reduced overall grading time by 40% compared to Gradescope: 48% for exams and 25% for quizzes. Across all assessments, short-answer questions showed a 46% time improvement and free-response questions a 37% improvement. In terms of accuracy, accepted regrade requests rose only negligibly, from 0.1% to 0.5%, across three exams and six quizzes. Students were generally neutral about LLM-assisted grading but stressed the value of TA feedback and oversight. TAs expressed positive sentiments toward the tool, tempered by concerns that it might skew their perceptions of students. Overall, these findings indicate that LLM-assisted grading can greatly reduce grading time, with only minor accuracy trade-offs that can be mitigated. LLM-assisted grading thus emerges as a promising approach for improving grading efficiency in CS courses, meriting further exploration for broader adoption.