A uniform convergence bound for the area under the ROC curve

Shivani Agarwal, Sariel Har-Peled, Dan Roth

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

The area under the ROC curve (AUC) has been advocated as an evaluation criterion for the bipartite ranking problem. We study uniform convergence properties of the AUC; in particular, we derive a distribution-free uniform convergence bound for the AUC which serves to bound the expected accuracy of a learned ranking function in terms of its empirical AUC on the training sequence from which it is learned. Our bound is expressed in terms of a new set of combinatorial parameters that we term the bipartite rank-shatter coefficients; these play the same role in our result as do the standard VC-dimension-related shatter coefficients (also known as the growth function) in uniform convergence results for the classification error rate. A comparison of our result with a recent uniform convergence result derived by Freund et al. [9] for a quantity closely related to the AUC shows that the bound provided by our result can be considerably tighter.
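
For reference, the empirical AUC the abstract refers to is the fraction of positive-negative training pairs that the ranking function orders correctly, with ties counted as half (the Wilcoxon-Mann-Whitney statistic). Below is a minimal illustrative sketch of that quantity; the function and variable names are assumptions for illustration, not the authors' notation.

    # Sketch (not from the paper) of the empirical AUC statistic:
    # the fraction of positive-negative pairs ranked correctly by f,
    # with ties counted as one half.
    from typing import Callable, Sequence

    def empirical_auc(f: Callable[[float], float],
                      positives: Sequence[float],
                      negatives: Sequence[float]) -> float:
        """Empirical AUC of ranking function f over all m*n pairs."""
        m, n = len(positives), len(negatives)
        correct = 0.0
        for xp in positives:
            for xn in negatives:
                if f(xp) > f(xn):
                    correct += 1.0   # pair ranked correctly
                elif f(xp) == f(xn):
                    correct += 0.5   # tie counts as half
        return correct / (m * n)     # average over all m*n pairs

    # Toy usage: identity ranking function on example scores.
    print(empirical_auc(lambda x: x, positives=[2.0, 3.5], negatives=[1.0, 2.0]))
    # -> 0.875 (3 of 4 pairs correct, one tie)

The paper's bound relates the expectation of this pairwise statistic to its value on the training sequence, uniformly over a function class, with the bipartite rank-shatter coefficients measuring the class's richness.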

Original language: English (US)
Title of host publication: AISTATS 2005 - Proceedings of the 10th International Workshop on Artificial Intelligence and Statistics
Pages: 1-8
Number of pages: 8
State: Published - 2005
Event: 10th International Workshop on Artificial Intelligence and Statistics, AISTATS 2005 - Hastings, Christ Church, Barbados
Duration: Jan 6 2005 - Jan 8 2005

Publication series

Name: AISTATS 2005 - Proceedings of the 10th International Workshop on Artificial Intelligence and Statistics

Other

Other: 10th International Workshop on Artificial Intelligence and Statistics, AISTATS 2005
Country/Territory: Barbados
City: Hastings, Christ Church
Period: 1/6/05 - 1/8/05

ASJC Scopus subject areas

  • Artificial Intelligence
  • Statistics and Probability
