Automating Procedurally Fair Feature Selection in Machine Learning

Clara Belitz, Lan Jiang, Nigel Bosch

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

In recent years, machine learning has become more common in everyday applications. Consequently, numerous studies have explored issues of unfairness against specific groups or individuals in the context of these applications. Much of the previous work on unfairness in machine learning has focused on the fairness of outcomes rather than process. We propose a feature selection method inspired by fair process (procedural fairness) in addition to fair outcome. Specifically, we introduce the notion of unfairness weight, which indicates how heavily to weight unfairness versus accuracy when measuring the marginal benefit of adding a new feature to a model. Our goal is to maintain accuracy while reducing unfairness, as defined by six common statistical definitions. We show that this approach demonstrably decreases unfairness as the unfairness weight is increased, for most combinations of metrics and classifiers used. A small subset of all the combinations of datasets (4), unfairness metrics (6), and classifiers (3), however, demonstrated relatively low unfairness initially. For these specific combinations, neither unfairness nor accuracy was affected as unfairness weight changed, demonstrating that this method does not reduce accuracy unless there is also an equivalent decrease in unfairness. We also show that this approach selects unfair features and sensitive features for the model less frequently as the unfairness weight increases. As such, this procedure is an effective approach to constructing classifiers that both reduce unfairness and are less likely to include unfair features in the modeling process.
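The selection criterion described in the abstract can be sketched as greedy forward selection in which each candidate feature is scored by accuracy minus the unfairness weight times a statistical unfairness measure. This is a minimal illustration, not the paper's implementation: demographic parity difference stands in for whichever of the six metrics is chosen, and the function names, toy classifier interface, and data layout are assumptions.

```python
import numpy as np

def demographic_parity_diff(y_pred, sensitive):
    """One common statistical unfairness measure: the absolute gap in
    positive-prediction rates between the two sensitive groups."""
    return abs(y_pred[sensitive == 0].mean() - y_pred[sensitive == 1].mean())

def greedy_fair_selection(X, y, sensitive, fit_predict, w, k):
    """Forward feature selection: at each step, add the candidate feature
    whose inclusion maximizes (accuracy - w * unfairness), where w plays
    the role of the unfairness weight described in the abstract.

    fit_predict(X_subset, y) is any classifier routine (hypothetical
    interface) returning binary predictions for the selected columns.
    """
    selected = []
    remaining = list(range(X.shape[1]))
    for _ in range(k):
        best_j, best_score = None, -np.inf
        for j in remaining:
            y_pred = fit_predict(X[:, selected + [j]], y)
            accuracy = (y_pred == y).mean()
            unfairness = demographic_parity_diff(y_pred, sensitive)
            score = accuracy - w * unfairness
            if score > best_score:
                best_j, best_score = j, score
        selected.append(best_j)
        remaining.remove(best_j)
    return selected
```

With `w = 0` this reduces to accuracy-only forward selection; increasing `w` penalizes features (such as sensitive proxies) whose inclusion raises measured unfairness, consistent with the abstract's observation that unfair and sensitive features are selected less often as the unfairness weight grows.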

Original language: English (US)
Title of host publication: AIES 2021 - Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society
Publisher: Association for Computing Machinery
Pages: 379-389
Number of pages: 11
ISBN (Electronic): 9781450384735
DOIs
State: Published - Jul 21 2021
Event: 4th AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society, AIES 2021 - Virtual, Online, United States
Duration: May 19 2021 – May 21 2021

Publication series

Name: AIES 2021 - Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society

Conference

Conference: 4th AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society, AIES 2021
Country/Territory: United States
City: Virtual, Online
Period: 5/19/21 – 5/21/21

Keywords

  • bias
  • fairness
  • feature selection
  • machine learning

ASJC Scopus subject areas

  • Artificial Intelligence
