Learning robust global representations by penalizing local predictive power

Haohan Wang, Songwei Ge, Eric P. Xing, Zachary C. Lipton

Research output: Contribution to journal › Conference article › peer-review

Abstract

Despite their well-documented predictive power on i.i.d. data, convolutional neural networks have been demonstrated to rely more on high-frequency (textural) patterns that humans deem superficial than on low-frequency patterns that agree better with intuitions about what constitutes category membership. This paper proposes a method for training robust convolutional networks by penalizing the predictive power of the local representations learned by earlier layers. Intuitively, our networks are forced to discard predictive signals such as color and texture that can be gleaned from local receptive fields and to rely instead on the global structure of the image. Across a battery of synthetic and benchmark domain adaptation tasks, our method confers improved generalization. To evaluate cross-domain transfer, we introduce ImageNet-Sketch, a new dataset consisting of sketch-like images and matching the ImageNet classification validation set in categories and scale.
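The penalty the abstract describes can be sketched as a two-term objective: a standard classification loss on the network's global output, minus a term that rewards making early-layer (local) features unpredictive of the label. The NumPy snippet below is a minimal illustration of that loss structure only, not the paper's actual implementation; names such as `patch_clf`, the shapes, and the weight `lam` are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def cross_entropy(logits, label):
    # Numerically stable softmax cross-entropy for a single example.
    z = logits - logits.max()
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[label]

# Toy stand-ins (shapes are illustrative, not the paper's architecture):
# `local_feats` plays the role of an early conv layer's per-patch activations,
# `global_logits` the network's final class scores.
num_classes, num_patches, feat_dim = 10, 49, 8
label = 3

global_logits = rng.normal(size=num_classes)
local_feats = rng.normal(size=(num_patches, feat_dim))
patch_clf = rng.normal(size=(feat_dim, num_classes))  # auxiliary patch-wise classifier

# The auxiliary classifier tries to predict the label from each local patch;
# its average loss measures how predictive the local representations are.
local_loss = float(np.mean(
    [cross_entropy(f @ patch_clf, label) for f in local_feats]
))

# Objective for the main network: classify well globally while *increasing*
# the patch classifier's loss, i.e. discarding locally predictive signal.
lam = 0.1  # penalty weight (hypothetical value)
total_loss = cross_entropy(global_logits, label) - lam * local_loss
print(total_loss)
```

In practice the two players would be trained adversarially (the patch classifier minimizes `local_loss` while the feature extractor maximizes it); the snippet only shows how the combined objective is assembled.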

Original language: English (US)
Journal: Advances in Neural Information Processing Systems
Volume: 32
State: Published - 2019
Externally published: Yes
Event: 33rd Annual Conference on Neural Information Processing Systems, NeurIPS 2019 - Vancouver, Canada
Duration: Dec 8, 2019 - Dec 14, 2019

ASJC Scopus subject areas

  • Computer Networks and Communications
  • Information Systems
  • Signal Processing
