The growing use of machine learning for the data-driven study of social issues and the implementation of data-driven decision processes has required researchers to re-examine the often implicit assumption that data-driven models are neutral and free of biases. Careful examination of machine-learned models has identified examples of how existing biases can inadvertently be perpetuated: in criminal justice, failing to account for racial prejudice in the prediction of recidivism can perpetuate or exacerbate it, and in natural language processing, algorithms trained on human language corpora have been shown to reproduce strong biases in gendered descriptions. These examples highlight the importance of considering how biases might impact the study of educational data and how data-driven models used in educational contexts may perpetuate inequalities. To address this question, we ask whether and how demographic information, including age, educational level, gender, race/ethnicity, socioeconomic status (SES), and geographical location, is used in Educational Data Mining (EDM) research. Specifically, we conduct a systematic survey of the last five years of EDM publications that investigates whether and how demographic information about students is reported in EDM research and how this information is used to 1) investigate issues related to demographics, 2) serve as input features for data-driven analyses, or 3) test and validate models. This survey shows that, although a majority of publications reported at least one category of demographic information, the frequency of reporting across categories is very uneven (ranging from 5% to 59%), and only 15% of publications used demographic information in their analyses.
Keywords: Machine learning bias