On the interpretability of conditional probability estimates in the agnostic setting

Yihan Gao, Aditya G Parameswaran, Jian Peng

Research output: Contribution to conference › Paper

Abstract

We study the interpretability of conditional probability estimates for binary classification under the agnostic setting. In the agnostic setting, conditional probability estimates do not necessarily reflect the true conditional probabilities. Instead, they satisfy a certain calibration property: among all data points for which the classifier predicts P(Y = 1|X) = p, a fraction p of them actually have label Y = 1. For cost-sensitive decision problems, this calibration property provides adequate support for applying Bayes decision theory. In this paper, we define a novel measure of the calibration property together with its empirical counterpart, and prove a uniform convergence result between them. This new measure enables us to formally justify the calibration property of conditional probability estimates, and provides new insights into the problem of estimating and calibrating conditional probabilities.
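As an illustration of the calibration property described above (a minimal sketch, not the measure defined in the paper), one can bin a classifier's probability estimates and compare each bin's average predicted probability with its empirical positive rate; for a calibrated estimator this gap shrinks as the sample grows. The function name empirical_calibration_gap, the choice of ten equal-width bins, and the NumPy-based implementation below are assumptions made purely for illustration.

```python
import numpy as np

def empirical_calibration_gap(p_hat, y, n_bins=10):
    """Binned calibration check for binary labels (illustrative only).

    p_hat : predicted probabilities P(Y = 1 | X), values in [0, 1]
    y     : observed labels in {0, 1}
    Returns a weighted average of |mean predicted probability - empirical
    positive rate| over the bins; 0 means perfectly calibrated on this sample.
    """
    p_hat = np.asarray(p_hat, dtype=float)
    y = np.asarray(y, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    # Assign each prediction to one of n_bins equal-width bins over [0, 1].
    bins = np.clip(np.digitize(p_hat, edges[1:-1]), 0, n_bins - 1)
    gap = 0.0
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            # Weight each bin by the fraction of points it contains.
            gap += mask.mean() * abs(p_hat[mask].mean() - y[mask].mean())
    return gap

# A predictor whose labels are drawn with its own predicted probabilities
# is calibrated, so the empirical gap should be small on a large sample.
rng = np.random.default_rng(0)
p = rng.uniform(size=10_000)
labels = rng.binomial(1, p)
print(empirical_calibration_gap(p, labels))  # close to 0
```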

Original language: English (US)
State: Published - Jan 1 2017
Event: 20th International Conference on Artificial Intelligence and Statistics, AISTATS 2017 - Fort Lauderdale, United States
Duration: Apr 20 2017 - Apr 22 2017

Conference

Conference: 20th International Conference on Artificial Intelligence and Statistics, AISTATS 2017
Country: United States
City: Fort Lauderdale
Period: 4/20/17 - 4/22/17

Fingerprint

Interpretability
Conditional probability
Calibration
Estimate
Binary Classification
Decision Theory
Bayes
Uniform convergence
Decision problem
Convergence Results
Justify
Labels
Classifiers
Scenarios
Costs

ASJC Scopus subject areas

  • Artificial Intelligence
  • Statistics and Probability

Cite this

Gao, Y., Parameswaran, A. G., & Peng, J. (2017). On the interpretability of conditional probability estimates in the agnostic setting. Paper presented at 20th International Conference on Artificial Intelligence and Statistics, AISTATS 2017, Fort Lauderdale, United States.

Links

  • Scopus record: http://www.scopus.com/inward/record.url?scp=85038370665&partnerID=8YFLogxK
  • Scopus cited-by: http://www.scopus.com/inward/citedby.url?scp=85038370665&partnerID=8YFLogxK