Responsible AI Use in Computer Science Education Research
Generative AI offers many possible use cases in education research, including summarizing prior research, creating literature reviews, generating synthetic data, analyzing qualitative and quantitative data, creating data visualizations, and writing research papers. However, its use also raises many ethical issues, including environmental impact, human impact, data privacy, transparency and replicability, the potential for data re-identification, intellectual property and intellectual debt, the digital divide, and accuracy and bias. To promote the responsible use of AI in education research, we formed a group with expertise spanning the relevant research practices and ethical issues, convened to articulate guidelines for the use of generative AI in STEM education research.
The convening treats generative AI use cases and related ethical issues as a matrix, considering the intersection of each stage of the research process with each major ethical issue raised by AI use. The work of this convening (in November 2025) forms the basis for guidelines designed to provide pragmatic assistance to STEM education researchers, including CS education researchers, who want to capture the benefits of AI tools while using them responsibly.
This lightning talk presents our work in progress on formulating the guidelines. Audience members will be introduced to the project, presented with the major points of the draft guidelines, and invited to provide feedback.