TY - GEN
T1 - Algorithmic unfairness mitigation in student models
T2 - 15th International Conference on Educational Data Mining, EDM 2022
AU - Stinar, Frank
AU - Bosch, Nigel
N1 - Publisher Copyright:
© 2022 Copyright is held by the author(s).
PY - 2022
Y1 - 2022
AB - Systematically unfair education systems lead to different levels of learning for students from different demographic groups, which, in the context of AI-driven education, has inspired work on mitigating unfairness in machine learning methods. However, unfairness mitigation methods may lead to unintended consequences for classrooms and students. We examined preprocessing and postprocessing unfairness mitigation algorithms on a large dataset, the State of Texas Assessments of Academic Readiness (STAAR) outcome data, to investigate these issues. We evaluated each unfairness mitigation algorithm across multiple machine learning models using different definitions of fairness. We then evaluated how unfairness mitigation affected classifications of students across different combinations of machine learning models, unfairness mitigation methods, and definitions of fairness. On average, unfairness mitigation methods led to a 22% improvement in fairness. When examining the impacts of unfairness mitigation methods on predictions, we found that these methods produced models that could, and did, overgeneralize groups. Consequently, predictions made by such models may not reach the intended audiences. We discuss the implications for AI-driven interventions and student support.
KW - Data science applications in education
KW - Fairness
KW - Machine learning
KW - Unfairness mitigation
UR - http://www.scopus.com/inward/record.url?scp=85149284298&partnerID=8YFLogxK
DO - 10.5281/zenodo.6853135
M3 - Conference contribution
AN - SCOPUS:85149284298
T3 - Proceedings of the 15th International Conference on Educational Data Mining, EDM 2022
BT - Proceedings of the 15th International Conference on Educational Data Mining, EDM 2022
PB - International Educational Data Mining Society
Y2 - 24 July 2022 through 27 July 2022
ER -