TY - GEN
T1 - Automating Procedurally Fair Feature Selection in Machine Learning
AU - Belitz, Clara
AU - Jiang, Lan
AU - Bosch, Nigel
N1 - Publisher Copyright:
© 2021 ACM.
PY - 2021/7/21
Y1 - 2021/7/21
N2 - In recent years, machine learning has become more common in everyday applications. Consequently, numerous studies have explored issues of unfairness against specific groups or individuals in the context of these applications. Much of the previous work on unfairness in machine learning has focused on the fairness of outcomes rather than process. We propose a feature selection method inspired by fair process (procedural fairness) in addition to fair outcome. Specifically, we introduce the notion of unfairness weight, which indicates how heavily to weight unfairness versus accuracy when measuring the marginal benefit of adding a new feature to a model. Our goal is to maintain accuracy while reducing unfairness, as defined by six common statistical definitions. We show that this approach demonstrably decreases unfairness as the unfairness weight is increased, for most combinations of metrics and classifiers used. A small subset of all the combinations of datasets (4), unfairness metrics (6), and classifiers (3), however, demonstrated relatively low unfairness initially. For these specific combinations, neither unfairness nor accuracy was affected as unfairness weight changed, demonstrating that this method does not reduce accuracy unless there is also an equivalent decrease in unfairness. We also show that this approach selects unfair features and sensitive features for the model less frequently as the unfairness weight increases. As such, this procedure is an effective approach to constructing classifiers that both reduce unfairness and are less likely to include unfair features in the modeling process.
AB - In recent years, machine learning has become more common in everyday applications. Consequently, numerous studies have explored issues of unfairness against specific groups or individuals in the context of these applications. Much of the previous work on unfairness in machine learning has focused on the fairness of outcomes rather than process. We propose a feature selection method inspired by fair process (procedural fairness) in addition to fair outcome. Specifically, we introduce the notion of unfairness weight, which indicates how heavily to weight unfairness versus accuracy when measuring the marginal benefit of adding a new feature to a model. Our goal is to maintain accuracy while reducing unfairness, as defined by six common statistical definitions. We show that this approach demonstrably decreases unfairness as the unfairness weight is increased, for most combinations of metrics and classifiers used. A small subset of all the combinations of datasets (4), unfairness metrics (6), and classifiers (3), however, demonstrated relatively low unfairness initially. For these specific combinations, neither unfairness nor accuracy was affected as unfairness weight changed, demonstrating that this method does not reduce accuracy unless there is also an equivalent decrease in unfairness. We also show that this approach selects unfair features and sensitive features for the model less frequently as the unfairness weight increases. As such, this procedure is an effective approach to constructing classifiers that both reduce unfairness and are less likely to include unfair features in the modeling process.
KW - bias
KW - fairness
KW - feature selection
KW - machine learning
UR - http://www.scopus.com/inward/record.url?scp=85112401705&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85112401705&partnerID=8YFLogxK
U2 - 10.1145/3461702.3462585
DO - 10.1145/3461702.3462585
M3 - Conference contribution
AN - SCOPUS:85112401705
T3 - AIES 2021 - Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society
SP - 379
EP - 389
BT - AIES 2021 - Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society
PB - Association for Computing Machinery
T2 - 4th AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society, AIES 2021
Y2 - 19 May 2021 through 21 May 2021
ER -