Wasserstein Robust Classification with Fairness Constraints

Yijie Wang, Viet Anh Nguyen, Grani A. Hanasusanto

Research output: Contribution to journal › Article › peer-review

Abstract

Problem definition: Data analytics models and machine learning algorithms are increasingly deployed to support consequential decision-making processes, from deciding which applicants receive job offers and loans to university enrollment and medical interventions. However, recent studies show that these models may unintentionally amplify human bias and yield significantly unfavorable decisions for specific groups.

Methodology/results: We propose a distributionally robust classification model with a fairness constraint that encourages the classifier to be fair with respect to the equality of opportunity criterion. We use a type-∞ Wasserstein ambiguity set centered at the empirical distribution to represent distributional uncertainty and derive a conservative reformulation of the worst-case equal opportunity unfairness measure. We show that the model is equivalent to a mixed binary conic optimization problem, which standard off-the-shelf solvers can handle. To improve scalability on large problem instances, we propose a convex, hinge-loss-based model whose reformulation does not involve binary variables. Moreover, we consider the distributionally robust learning problem with a generic ground transportation cost to hedge against uncertainty in the labels and sensitive attributes. We numerically examine the performance of our proposed models on five real-world data sets related to individual analysis. Compared with state-of-the-art methods, our proposed approaches significantly improve fairness with a negligible loss of predictive accuracy on the testing data set.

Managerial implications: Our paper raises awareness that bias may arise when predictive models are used in service and operations. Such bias generally stems from human bias, for example, imbalanced data collection or small sample sizes, and is further amplified by algorithms. Incorporating fairness constraints and the distributionally robust optimization (DRO) scheme is a powerful way to alleviate algorithmic biases.
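For orientation, the following is a minimal sketch of the two standard ingredients named in the abstract, written in our own notation (the paper's exact formulation may differ). The type-∞ Wasserstein ball of radius ε around the empirical distribution, and the equal opportunity unfairness of a classifier h with respect to a label Y and a binary sensitive attribute A (the true-positive-rate gap between groups), are commonly defined as

  \mathbb{B}_{\varepsilon}(\widehat{\mathbb{P}}_N) = \{ \mathbb{Q} : W_{\infty}(\mathbb{Q}, \widehat{\mathbb{P}}_N) \le \varepsilon \},
  \qquad
  W_{\infty}(\mathbb{Q}_1, \mathbb{Q}_2) = \inf_{\pi \in \Pi(\mathbb{Q}_1, \mathbb{Q}_2)} \operatorname{ess\,sup}_{\pi} \, d(\xi_1, \xi_2),

  \mathrm{Unf}(h, \mathbb{Q}) = \big| \mathbb{Q}(h(X) = 1 \mid Y = 1, A = 1) - \mathbb{Q}(h(X) = 1 \mid Y = 1, A = 0) \big|.

Under the DRO scheme summarized above, the robust fairness requirement amounts to bounding \sup_{\mathbb{Q} \in \mathbb{B}_{\varepsilon}(\widehat{\mathbb{P}}_N)} \mathrm{Unf}(h, \mathbb{Q}) by a tolerance (the symbols Unf and the tolerance are illustrative placeholders, not taken from the paper).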

Original language: English (US)
Pages (from-to): 1567-1585
Number of pages: 19
Journal: Manufacturing and Service Operations Management
Volume: 26
Issue number: 4
DOIs
State: Published - Jul 2024
Externally published: Yes

Keywords

  • math programming
  • stochastic methods

ASJC Scopus subject areas

  • Strategy and Management
  • Management Science and Operations Research
