Assessments of student learning can take many forms, but for assessing learning at scale, multiple-choice exams are often used. A multiple-choice question typically comprises a stem, followed by a number of distractors and a correct option. All parts of a question must be carefully designed to support assessment validity, but each part can also introduce bias that affects students inequitably. In this research, we extend psychometric methods to explore potential bias within the distractors on an introductory computing assessment for undergraduate students. We apply Differential Distractor Functioning (DDF) analysis to 259 student responses to identify problematic distractors. We discuss the distractors flagged in our analysis, which varied in whether they disadvantaged male or female students. This work contributes a deeper understanding of the issues that can lie within an assessment, furthering our efforts to create fairer measures of learning for all students.
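
The abstract does not specify how the DDF analysis was implemented; as a hedged illustration only, the sketch below shows one common way to screen distractors for DDF, using a logistic-regression test of whether, among incorrect responders, distractor choice depends on group membership after conditioning on total score. The data layout, column names, scoring key, and regression formulation are assumptions for illustration, not the authors' exact procedure.

```python
# Hypothetical sketch of a simple Differential Distractor Functioning (DDF)
# screen, assuming a response matrix (e.g., 259 students x items) and a
# binary group variable. The column names, key format, and model choice
# are illustrative assumptions, not the paper's actual method.
import pandas as pd
import statsmodels.api as sm


def ddf_flags(responses: pd.DataFrame, key: dict, group: pd.Series, alpha: float = 0.05):
    """Flag distractors whose selection rates differ by group after
    conditioning on total score, via a per-distractor logistic regression."""
    # Total score serves as a rough proxy for ability.
    score = sum((responses[item] == correct).astype(int) for item, correct in key.items())
    results = []
    for item, correct in key.items():
        wrong = responses[item] != correct               # restrict to incorrect responders
        for distractor in responses.loc[wrong, item].unique():
            y = (responses.loc[wrong, item] == distractor).astype(int)
            X = sm.add_constant(pd.DataFrame({
                "score": score[wrong],
                "group": group[wrong].astype(int),       # e.g., 0 = male, 1 = female
            }))
            try:
                fit = sm.Logit(y, X).fit(disp=0)
                results.append({
                    "item": item,
                    "distractor": distractor,
                    "group_coef": fit.params["group"],
                    "p_value": fit.pvalues["group"],
                    "flagged": fit.pvalues["group"] < alpha,
                })
            except Exception:
                continue  # skip distractors chosen too rarely to fit a stable model
    return pd.DataFrame(results)
```

A positive group coefficient in this sketch would indicate that, at a given total score, the coded group is more likely to select that distractor, which is the kind of pattern a DDF analysis is designed to surface.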