TY - JOUR
T1 - Learning robust global representations by penalizing local predictive power
AU - Wang, Haohan
AU - Ge, Songwei
AU - Xing, Eric P.
AU - Lipton, Zachary C.
N1 - Funding Information:
Haohan Wang is supported by NIH R01GM114311, NIH P30DA035778, and NSF IIS1617583. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Institutes of Health or the National Science Foundation. Zachary Lipton thanks the Center for Machine Learning and Health, a joint venture of Carnegie Mellon University, UPMC, and the University of Pittsburgh for supporting our collaboration with Abridge AI to develop robust models for machine learning in healthcare. He is also grateful to Salesforce Research, Facebook Research, and Amazon AI for faculty awards supporting his lab's research on robust deep learning under distribution shift.
Publisher Copyright:
© 2019 Neural information processing systems foundation. All rights reserved.
PY - 2019
Y1 - 2019
N2 - Despite their well-documented predictive power on i.i.d. data, convolutional neural networks have been demonstrated to rely more on high-frequency (textural) patterns that humans deem superficial than on low-frequency patterns that agree better with intuitions about what constitutes category membership. This paper proposes a method for training robust convolutional networks by penalizing the predictive power of the local representations learned by earlier layers. Intuitively, our networks are forced to discard predictive signals such as color and texture that can be gleaned from local receptive fields and to rely instead on the global structure of the image. Across a battery of synthetic and benchmark domain adaptation tasks, our method confers improved generalization. To evaluate cross-domain transfer, we introduce ImageNet-Sketch, a new dataset consisting of sketch-like images and matching the ImageNet classification validation set in categories and scale.
AB - Despite their well-documented predictive power on i.i.d. data, convolutional neural networks have been demonstrated to rely more on high-frequency (textural) patterns that humans deem superficial than on low-frequency patterns that agree better with intuitions about what constitutes category membership. This paper proposes a method for training robust convolutional networks by penalizing the predictive power of the local representations learned by earlier layers. Intuitively, our networks are forced to discard predictive signals such as color and texture that can be gleaned from local receptive fields and to rely instead on the global structure of the image. Across a battery of synthetic and benchmark domain adaptation tasks, our method confers improved generalization. To evaluate cross-domain transfer, we introduce ImageNet-Sketch, a new dataset consisting of sketch-like images and matching the ImageNet classification validation set in categories and scale.
UR - http://www.scopus.com/inward/record.url?scp=85090173990&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85090173990&partnerID=8YFLogxK
M3 - Conference article
AN - SCOPUS:85090173990
SN - 1049-5258
VL - 32
JO - Advances in Neural Information Processing Systems
JF - Advances in Neural Information Processing Systems
T2 - 33rd Annual Conference on Neural Information Processing Systems, NeurIPS 2019
Y2 - 8 December 2019 through 14 December 2019
ER -