Optimal entropy estimation on large alphabets via best polynomial approximation

Yihong Wu, Pengkun Yang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Consider the problem of estimating the Shannon entropy of a distribution on $k$ elements from $n$ independent samples. We show that the minimax mean-square error is within universal multiplicative constant factors of $\left(\frac{k}{n\log k}\right)^2 + \frac{\log^2 k}{n}$. This implies the recent result of Valiant-Valiant [1] that the minimal sample size for consistent entropy estimation scales as $\Theta\left(\frac{k}{\log k}\right)$. The apparatus of best polynomial approximation plays a key role in both the minimax lower bound and the construction of optimal estimators.
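The "best polynomial approximation" apparatus refers to uniform (minimax) polynomial approximation of $x \mapsto -x\log x$, the function summed over probabilities to obtain entropy. As a minimal, self-contained sketch of this ingredient only (not the authors' estimator), the snippet below uses NumPy's Chebyshev interpolation, which is near-best in the sup norm; the degree L = 12 and the interval [0, 1] are illustrative choices, whereas the paper applies a degree on the order of log k on a small interval near zero.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def neg_xlogx(t):
    """-x*log(x) with x = (t+1)/2, mapping [0, 1] onto the Chebyshev domain [-1, 1]."""
    x = (t + 1.0) / 2.0
    out = np.zeros_like(x)
    pos = x > 0
    out[pos] = -x[pos] * np.log(x[pos])  # continuous extension: f(0) = 0
    return out

L = 12  # polynomial degree; illustrative (the paper takes degree ~ log k)
coef = C.chebinterpolate(neg_xlogx, L)  # Chebyshev coefficients on [-1, 1]

# Sup-norm error on a fine grid. For -x*log(x) on [0, 1], the best degree-L
# uniform approximation error decays on the order of 1/L^2, a classical
# approximation-theoretic fact the paper's analysis relies on; interpolation
# at Chebyshev nodes is near-best, so the measured error is comparable.
t = np.linspace(-1.0, 1.0, 10001)
err = np.max(np.abs(C.chebval(t, coef) - neg_xlogx(t)))
print(f"degree {L}: sup-norm error ~ {err:.3e}")
```

In the paper, an approximating polynomial of this kind is evaluated at the empirical counts of rare symbols to debias the plug-in estimate; the sketch above shows only the approximation-theoretic step.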

Original language: English (US)
Title of host publication: Proceedings - 2015 IEEE International Symposium on Information Theory, ISIT 2015
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 824-828
Number of pages: 5
ISBN (Electronic): 9781467377041
DOIs
State: Published - Sep 28, 2015
Event: IEEE International Symposium on Information Theory, ISIT 2015 - Hong Kong, Hong Kong
Duration: Jun 14, 2015 - Jun 19, 2015

Publication series

Name: IEEE International Symposium on Information Theory - Proceedings
Volume: 2015-June
ISSN (Print): 2157-8095

Keywords

  • best polynomial approximation
  • entropy estimation
  • functional estimation
  • high-dimensional statistics
  • large alphabet

ASJC Scopus subject areas

  • Theoretical Computer Science
  • Information Systems
  • Modeling and Simulation
  • Applied Mathematics
