TY - GEN
T1 - Why Do These Match? Explaining the Behavior of Image Similarity Models
AU - Plummer, Bryan A.
AU - Vasileva, Mariya I.
AU - Petsiuk, Vitali
AU - Saenko, Kate
AU - Forsyth, David
N1 - Funding Information:
Acknowledgements. This work is funded in part by a DARPA XAI grant, NSF Grant No. 1718221, and ONR MURI Award N00014-16-1-2007.
Publisher Copyright:
© 2020, Springer Nature Switzerland AG.
PY - 2020
Y1 - 2020
AB - Explaining a deep learning model can help users understand its behavior and allow researchers to discern its shortcomings. Recent work has primarily focused on explaining models for tasks like image classification or visual question answering. In this paper, we introduce Salient Attributes for Network Explanation (SANE) to explain image similarity models, where a model’s output is a score measuring the similarity of two inputs rather than a classification score. In this task, an explanation depends on both of the input images, so standard methods do not apply. Our SANE explanation pairs a saliency map identifying important image regions with an attribute that best explains the match. We find that our explanations provide additional information not typically captured by saliency maps alone, and can also improve performance on the classic task of attribute recognition. Our approach’s ability to generalize is demonstrated on two datasets from diverse domains, Polyvore Outfits and Animals with Attributes 2. Code available at: https://github.com/VisionLearningGroup/SANE.
KW - Explainable AI
KW - Fashion compatibility
KW - Image retrieval
KW - Image similarity models
UR - http://www.scopus.com/inward/record.url?scp=85097647300&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85097647300&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-58621-8_38
DO - 10.1007/978-3-030-58621-8_38
M3 - Conference contribution
AN - SCOPUS:85097647300
SN - 9783030586201
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 652
EP - 669
BT - Computer Vision – ECCV 2020: 16th European Conference, 2020, Proceedings
A2 - Vedaldi, Andrea
A2 - Bischof, Horst
A2 - Brox, Thomas
A2 - Frahm, Jan-Michael
PB - Springer
T2 - 16th European Conference on Computer Vision, ECCV 2020
Y2 - 23 August 2020 through 28 August 2020
ER -