TY - GEN
T1 - On the consistency of top-k surrogate losses
AU - Yang, Forest
AU - Koyejo, Sanmi
N1 - Publisher Copyright:
Copyright 2020 by the author(s).
PY - 2020
Y1 - 2020
AB - The top-k error is often employed to evaluate performance on challenging classification tasks in computer vision, as it is designed to compensate for ambiguity in ground-truth labels. This practical success motivates our theoretical analysis of consistent top-k classification. Surprisingly, it has not been rigorously characterized when taking the k-argmax of one vector is guaranteed to return the k-argmax of another, even though such a characterization is crucial for describing Bayes optimality; we rigorously characterize this property and use it to describe Bayes optimality. We then define top-k calibration and show that it is necessary and sufficient for consistency. Building on this calibration analysis, we propose a class of top-k calibrated Bregman divergence surrogates. We further show that previously proposed hinge-like top-k surrogate losses are not top-k calibrated, and our analysis suggests that no convex hinge loss is top-k calibrated. In contrast, we propose a new hinge loss that is consistent. We then show that our hinge loss remains consistent under a restriction to linear functions, whereas cross entropy does not. Finally, we exhibit a differentiable, convex loss function that is top-k calibrated for specific values of k.
UR - http://www.scopus.com/inward/record.url?scp=85102539879&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85102539879&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85102539879
T3 - 37th International Conference on Machine Learning, ICML 2020
SP - 10658
EP - 10666
BT - 37th International Conference on Machine Learning, ICML 2020
A2 - Daumé III, Hal
A2 - Singh, Aarti
PB - International Machine Learning Society (IMLS)
T2 - 37th International Conference on Machine Learning, ICML 2020
Y2 - 13 July 2020 through 18 July 2020
ER -