Group Fairness via Group Consensus

Eunice Chan, Zhining Liu, Ruizhong Qiu, Yuheng Zhang, Ross Maciejewski, Hanghang Tong

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Ensuring equitable impact of machine learning models across different societal groups is of utmost importance for real-world machine learning applications. Prior research in fairness has predominantly focused on adjusting model outputs through pre-processing, in-processing, or post-processing techniques. These techniques focus on correcting bias in either the data or the model. However, we argue that the bias in the data and model should be addressed in conjunction. To achieve this, we propose an algorithm called GroupDebias to reduce unfairness in the data in a model-guided fashion, thereby enabling models to exhibit more equitable behavior. Even though it is model-aware, the core idea of GroupDebias is independent of the model architecture, making it a versatile and effective approach that can be broadly applied across various domains and model types. Our method focuses on systematically addressing biases present in the training data itself by adaptively dropping samples that increase the biases in the model. Theoretically, the proposed approach enjoys a guaranteed improvement in demographic parity at the expense of a bounded reduction in balanced accuracy. A comprehensive evaluation of the GroupDebias algorithm through extensive experiments on diverse datasets and machine learning models demonstrates that GroupDebias consistently and significantly outperforms existing fairness enhancement techniques, achieving a more substantial reduction in unfairness with minimal impact on model performance.
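The abstract describes the core idea of GroupDebias at a high level: adaptively drop training samples that increase the model's bias, so that retraining yields more equitable behavior. The following is a highly simplified, illustrative sketch of that general idea (not the paper's actual algorithm): it repeatedly retrains a classifier, measures the demographic parity gap, and greedily drops a few positive-label samples from the currently advantaged group. All function names and the heuristic scoring rule are assumptions made for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def dp_gap(model, X, group):
    """Demographic parity gap: |P(yhat=1 | g=0) - P(yhat=1 | g=1)|."""
    yhat = model.predict(X)
    return abs(yhat[group == 0].mean() - yhat[group == 1].mean())

def debias_by_dropping(X, y, group, k=5, rounds=10, tol=0.02):
    """Greedy model-guided sample dropping (illustrative sketch only):
    retrain, find the group with the higher positive prediction rate,
    and drop the k positive-label samples from that group that the
    current model is most confident about, until the gap is below tol."""
    keep = np.ones(len(y), dtype=bool)
    for _ in range(rounds):
        model = LogisticRegression().fit(X[keep], y[keep])
        yhat = model.predict(X)
        rates = [yhat[keep & (group == g)].mean() for g in (0, 1)]
        if abs(rates[0] - rates[1]) <= tol:
            break
        adv = int(rates[1] > rates[0])  # currently advantaged group
        cand = np.flatnonzero(keep & (group == adv) & (y == 1))
        if len(cand) == 0:
            break
        # drop the candidates with the highest predicted positive probability
        conf = model.predict_proba(X[cand])[:, 1]
        keep[cand[np.argsort(-conf)[:k]]] = False
    return keep, LogisticRegression().fit(X[keep], y[keep])
```

On synthetic data with a group-correlated label, this kind of loop trades a bounded amount of accuracy for a smaller demographic parity gap, which mirrors the trade-off the abstract states the paper analyzes theoretically.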

Original language: English (US)
Title of host publication: 2024 ACM Conference on Fairness, Accountability, and Transparency, FAccT 2024
Publisher: Association for Computing Machinery
Pages: 1788-1808
Number of pages: 21
ISBN (Electronic): 9798400704505
DOIs
State: Published - Jun 3 2024
Event: 2024 ACM Conference on Fairness, Accountability, and Transparency, FAccT 2024 - Rio de Janeiro, Brazil
Duration: Jun 3 2024 - Jun 6 2024

Publication series

Name: 2024 ACM Conference on Fairness, Accountability, and Transparency, FAccT 2024

Conference

Conference: 2024 ACM Conference on Fairness, Accountability, and Transparency, FAccT 2024
Country/Territory: Brazil
City: Rio de Janeiro
Period: 6/3/24 - 6/6/24

Keywords

  • Fairness
  • Historical Bias
  • Machine Learning
  • Sampling

ASJC Scopus subject areas

  • General Business, Management and Accounting
