TY - GEN
T1 - Group Fairness via Group Consensus
AU - Chan, Eunice
AU - Liu, Zhining
AU - Qiu, Ruizhong
AU - Zhang, Yuheng
AU - Maciejewski, Ross
AU - Tong, Hanghang
N1 - This work is supported by NSF (2134079 and 2324770), the NSF Program on Fairness in AI in collaboration with Amazon (1939725), DHS (17STQAC00001-07-00), the C3.ai Digital Transformation Institute, and IBM-Illinois Discovery Accelerator Institute. The content of the information in this document does not necessarily reflect the position or the policy of the Government or Amazon or IBM, and no official endorsement should be inferred. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation hereon.
PY - 2024/6/3
Y1 - 2024/6/3
N2 - Ensuring equitable impact of machine learning models across different societal groups is of utmost importance for real-world machine learning applications. Prior research in fairness has predominantly focused on adjusting model outputs through pre-processing, in-processing, or post-processing techniques. These techniques focus on correcting bias in either the data or the model. However, we argue that the bias in the data and model should be addressed in conjunction. To achieve this, we propose an algorithm called GroupDebias to reduce unfairness in the data in a model-guided fashion, thereby enabling models to exhibit more equitable behavior. Even though it is model-aware, the core idea of GroupDebias is independent of the model architecture, making it a versatile and effective approach that can be broadly applied across various domains and model types. Our method focuses on systematically addressing biases present in the training data itself by adaptively dropping samples that increase the biases in the model. Theoretically, the proposed approach enjoys a guaranteed improvement in demographic parity at the expense of a bounded reduction in balanced accuracy. A comprehensive evaluation of the GroupDebias algorithm through extensive experiments on diverse datasets and machine learning models demonstrates that GroupDebias consistently and significantly outperforms existing fairness enhancement techniques, achieving a more substantial reduction in unfairness with minimal impact on model performance.
AB - Ensuring equitable impact of machine learning models across different societal groups is of utmost importance for real-world machine learning applications. Prior research in fairness has predominantly focused on adjusting model outputs through pre-processing, in-processing, or post-processing techniques. These techniques focus on correcting bias in either the data or the model. However, we argue that the bias in the data and model should be addressed in conjunction. To achieve this, we propose an algorithm called GroupDebias to reduce unfairness in the data in a model-guided fashion, thereby enabling models to exhibit more equitable behavior. Even though it is model-aware, the core idea of GroupDebias is independent of the model architecture, making it a versatile and effective approach that can be broadly applied across various domains and model types. Our method focuses on systematically addressing biases present in the training data itself by adaptively dropping samples that increase the biases in the model. Theoretically, the proposed approach enjoys a guaranteed improvement in demographic parity at the expense of a bounded reduction in balanced accuracy. A comprehensive evaluation of the GroupDebias algorithm through extensive experiments on diverse datasets and machine learning models demonstrates that GroupDebias consistently and significantly outperforms existing fairness enhancement techniques, achieving a more substantial reduction in unfairness with minimal impact on model performance.
KW - Fairness
KW - Historical Bias
KW - Machine Learning
KW - Sampling
UR - http://www.scopus.com/inward/record.url?scp=85196624734&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85196624734&partnerID=8YFLogxK
U2 - 10.1145/3630106.3659006
DO - 10.1145/3630106.3659006
M3 - Conference contribution
AN - SCOPUS:85196624734
T3 - 2024 ACM Conference on Fairness, Accountability, and Transparency, FAccT 2024
SP - 1788
EP - 1808
BT - 2024 ACM Conference on Fairness, Accountability, and Transparency, FAccT 2024
PB - Association for Computing Machinery
T2 - 2024 ACM Conference on Fairness, Accountability, and Transparency, FAccT 2024
Y2 - 3 June 2024 through 6 June 2024
ER -