Thematic analysis is an increasingly popular method in computing education research; however, widespread methodological confusion undermines its potential. For example, notions of objectivity do not make sense with reflexive approaches, and evidence of saturation is not required for thematic analysis. This position paper details how thematic analysis evolved from Braun and Clarke’s influential 2006 work into an umbrella method encompassing three general approaches: coding reliability (positivist), reflexive (interpretivist), and codebook (hybrid) thematic analysis. Each has different goals and philosophical assumptions, but researchers often inadvertently mix incompatible elements. We then present our personal journeys of learning about thematic analysis and dissect common confusing claims in our field’s publications and peer reviews. Our goal is for computing education research to move towards a “knowing” practice. By clarifying thematic analysis approaches and providing guidance for authors and reviewers, we hope to help the field better understand this popular method.