TY - JOUR
T1 - Constructing categories
T2 - Moving beyond protected classes in algorithmic fairness
AU - Belitz, Clara
AU - Ocumpaugh, Jaclyn
AU - Ritter, Steven
AU - Baker, Ryan S.
AU - Fancsali, Stephen E.
AU - Bosch, Nigel
N1 - Funding Information:
National Science Foundation, Grant/Award Number: 2000638
Publisher Copyright:
© 2022 Association for Information Science and Technology
PY - 2022
Y1 - 2022
N2 - Automated, data-driven decision making is increasingly common in a variety of application domains. In educational software, for example, machine learning has been applied to tasks like selecting the next exercise for students to complete. Machine learning methods, however, are not always equally effective for all groups of students. Current approaches to designing fair algorithms tend to focus on statistical measures concerning a small subset of legally protected categories like race or gender. Focusing solely on legally protected categories, however, can limit our understanding of bias and unfairness by ignoring the complexities of identity. We propose an alternative approach to categorization, grounded in sociological techniques of measuring identity. By soliciting survey data and interviews from the population being studied, we can build context-specific categories from the bottom up. The emergent categories can then be combined with extant algorithmic fairness strategies to discover which identity groups are not well-served, and thus where algorithms should be improved or avoided altogether. We focus on educational applications but present arguments that this approach should be adopted more broadly for issues of algorithmic fairness across a variety of applications.
AB - Automated, data-driven decision making is increasingly common in a variety of application domains. In educational software, for example, machine learning has been applied to tasks like selecting the next exercise for students to complete. Machine learning methods, however, are not always equally effective for all groups of students. Current approaches to designing fair algorithms tend to focus on statistical measures concerning a small subset of legally protected categories like race or gender. Focusing solely on legally protected categories, however, can limit our understanding of bias and unfairness by ignoring the complexities of identity. We propose an alternative approach to categorization, grounded in sociological techniques of measuring identity. By soliciting survey data and interviews from the population being studied, we can build context-specific categories from the bottom up. The emergent categories can then be combined with extant algorithmic fairness strategies to discover which identity groups are not well-served, and thus where algorithms should be improved or avoided altogether. We focus on educational applications but present arguments that this approach should be adopted more broadly for issues of algorithmic fairness across a variety of applications.
UR - http://www.scopus.com/inward/record.url?scp=85126800760&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85126800760&partnerID=8YFLogxK
U2 - 10.1002/asi.24643
DO - 10.1002/asi.24643
M3 - Comment/debate
AN - SCOPUS:85126800760
SN - 2330-1635
JO - Journal of the Association for Information Science and Technology
JF - Journal of the Association for Information Science and Technology
ER -