Fairness-aware training of face attribute classifiers via adversarial robustness

Huimin Zeng, Zhenrui Yue, Ziyi Kou, Yang Zhang, Lanyu Shang, Dong Wang

Research output: Contribution to journal › Article › peer-review

Abstract

Developing fair deep learning models for identity-sensitive applications (e.g., face attribute recognition) has gained increasing attention from the research community. Indeed, it has been observed that deep models can easily overfit to the bias of the training set, resulting in discriminatory performance against certain demographic groups at test time. Motivated by the observation that a biased classifier can yield different levels of adversarial robustness across training samples from different demographic groups (robustness bias), we argue that the adversarial robustness of individual training samples can indicate whether the training data distribution is fair across demographic groups. In other words, under a fair classifier, training samples from different demographic groups are expected to show similar or comparable adversarial robustness. Therefore, in this work, we propose to re-weight the training loss of individual training samples according to their adversarial robustness, thereby injecting fairness awareness into the training process. Extensive experimental results on the CelebA dataset show that face attribute classifiers trained with our proposed objective achieve significantly greater demographic fairness and outperform other state-of-the-art re-weighting fairness algorithms on different face recognition applications. Moreover, our proposed method also reduces the non-trivial robustness bias among demographic groups, protecting under-represented demographic groups from higher adversarial threats.
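
The abstract does not spell out the exact weighting scheme, so the following PyTorch sketch is only a hedged illustration of the core idea: probe each training sample's adversarial robustness (here approximated with a one-step FGSM perturbation and a hypothetical epsilon of 2/255) and give larger weights to the less robust samples when computing the training loss. The function names (fgsm_perturb, robustness_weights, reweighted_step), the single-attribute binary setup, and the weight normalization are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=2/255):
    # One-step FGSM perturbation used as a cheap per-sample robustness probe.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.binary_cross_entropy_with_logits(model(x_adv), y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    return (x_adv + epsilon * grad.sign()).clamp(0, 1).detach()

def robustness_weights(model, x, y, epsilon=2/255):
    # Assign heavier weights to samples whose loss grows most under attack,
    # i.e. the least adversarially robust ones (an illustrative choice).
    with torch.no_grad():
        clean_loss = F.binary_cross_entropy_with_logits(model(x), y, reduction="none")
    x_adv = fgsm_perturb(model, x, y, epsilon)
    with torch.no_grad():
        adv_loss = F.binary_cross_entropy_with_logits(model(x_adv), y, reduction="none")
    gap = (adv_loss - clean_loss).clamp(min=0)       # per-sample robustness gap
    return torch.softmax(gap, dim=0) * gap.numel()   # normalized so weights average to 1

def reweighted_step(model, optimizer, x, y):
    # One training step with robustness-based per-sample loss re-weighting.
    # x: image batch in [0, 1], y: float attribute labels of shape (batch,).
    w = robustness_weights(model, x, y)
    optimizer.zero_grad()
    per_sample = F.binary_cross_entropy_with_logits(model(x), y, reduction="none")
    (w.detach() * per_sample).mean().backward()
    optimizer.step()

In this sketch the weights are recomputed every step from the current model, so samples (and hence demographic groups) that the classifier treats less robustly contribute more to the loss; the softmax normalization is just one simple way to keep the overall loss scale comparable to unweighted training.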

Original language: English (US)
Article number: 110356
Journal: Knowledge-Based Systems
Volume: 264
DOIs
State: Published - Mar 15 2023

Keywords

  • Adversarial robustness
  • Fair machine learning
  • Human face recognition

ASJC Scopus subject areas

  • Software
  • Management Information Systems
  • Information Systems and Management
  • Artificial Intelligence
