Robust Machine Learning via Privacy/Rate-Distortion Theory

Ye Wang, Shuchin Aeron, Adnan Siraj Rakin, Toshiaki Koike-Akino, Pierre Moulin

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Robust machine learning formulations have emerged to address the prevalent vulnerability of deep neural networks to adversarial examples. Our work draws the connection between optimal robust learning and the privacy-utility tradeoff problem, which is a generalization of the rate-distortion problem. The saddle point of the game between a robust classifier and an adversarial perturbation can be found via the solution of a maximum conditional entropy problem. This information-theoretic perspective sheds light on the fundamental tradeoff between robustness and clean data performance, which ultimately arises from the geometric structure of the underlying data distribution and perturbation constraints.
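
A rough formalization of the saddle-point statement above may help fix ideas; the notation here (clean data $X$, label $Y$, perturbed input $Z$, perturbation channel $P_{Z\mid X,Y}$ restricted to a feasible set $\mathcal{A}$, classifier $Q_{\hat{Y}\mid Z}$, and log-loss) is assumed for illustration and is not taken verbatim from the paper. The robust learning game can be posed as

\[
\min_{Q_{\hat{Y}\mid Z}} \; \max_{P_{Z\mid X,Y} \in \mathcal{A}} \; \mathbb{E}\!\left[ -\log Q_{\hat{Y}\mid Z}(Y \mid Z) \right],
\]

and at a saddle point the optimal classifier is the posterior induced by the worst-case perturbation, so the value of the game is given by a maximum conditional entropy problem,

\[
\max_{P_{Z\mid X,Y} \in \mathcal{A}} H(Y \mid Z),
\]

mirroring the adversary's role in privacy-utility tradeoff (and, as a special case, rate-distortion) formulations.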

Original language: English (US)
Title of host publication: 2021 IEEE International Symposium on Information Theory, ISIT 2021 - Proceedings
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 1320-1325
Number of pages: 6
ISBN (Electronic): 9781538682098
DOIs
State: Published - Jul 12 2021
Event: 2021 IEEE International Symposium on Information Theory, ISIT 2021 - Virtual, Melbourne, Australia
Duration: Jul 12 2021 - Jul 20 2021

Publication series

Name: IEEE International Symposium on Information Theory - Proceedings
Volume: 2021-July
ISSN (Print): 2157-8095

Conference

Conference: 2021 IEEE International Symposium on Information Theory, ISIT 2021
Country/Territory: Australia
City: Virtual, Melbourne
Period: 7/12/21 - 7/20/21

Keywords

  • adversarial examples
  • privacy
  • robust learning

ASJC Scopus subject areas

  • Theoretical Computer Science
  • Information Systems
  • Modeling and Simulation
  • Applied Mathematics
