TY - CONF
T1 - On the interpretability of conditional probability estimates in the agnostic setting
AU - Gao, Yihan
AU - Parameswaran, Aditya
AU - Peng, Jian
N1 - Funding Information:
We thank the anonymous reviewers for their valuable feedback. We acknowledge support from grants IIS-1513407 and IIS-1633755 awarded by the NSF, grant 1U54GM114838 awarded by NIGMS and grant 3U54EB020406-02S1 awarded by NIBIB through funds provided by the trans-NIH Big Data to Knowledge (BD2K) initiative (www.bd2k.nih.gov), and funds from Adobe, Google, the Sloan Foundation, and the Siebel Energy Institute. The content is solely the responsibility of the authors and does not necessarily represent the official views of the funding agencies and organizations.
Publisher Copyright:
© 2017 PMLR. All rights reserved.
PY - 2017
Y1 - 2017
N2 - We study the interpretability of conditional probability estimates for binary classification under the agnostic setting. In the agnostic setting, conditional probability estimates do not necessarily reflect the true conditional probabilities. Instead, they satisfy a calibration property: among all data points for which the classifier predicts P(Y = 1|X) = p, a fraction p actually have label Y = 1. For cost-sensitive decision problems, this calibration property provides adequate support for applying Bayes Decision Theory. In this paper, we define a novel measure of the calibration property together with its empirical counterpart, and prove a uniform convergence result between them. This new measure enables us to formally justify the calibration property of conditional probability estimates, and provides new insights into the problem of estimating and calibrating conditional probabilities.
AB - We study the interpretability of conditional probability estimates for binary classification under the agnostic setting. In the agnostic setting, conditional probability estimates do not necessarily reflect the true conditional probabilities. Instead, they satisfy a calibration property: among all data points for which the classifier predicts P(Y = 1|X) = p, a fraction p actually have label Y = 1. For cost-sensitive decision problems, this calibration property provides adequate support for applying Bayes Decision Theory. In this paper, we define a novel measure of the calibration property together with its empirical counterpart, and prove a uniform convergence result between them. This new measure enables us to formally justify the calibration property of conditional probability estimates, and provides new insights into the problem of estimating and calibrating conditional probabilities.
UR - http://www.scopus.com/inward/record.url?scp=85083936879&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85083936879&partnerID=8YFLogxK
M3 - Paper
AN - SCOPUS:85083936879
T2 - 20th International Conference on Artificial Intelligence and Statistics, AISTATS 2017
Y2 - 20 April 2017 through 22 April 2017
ER -