Improved minimax predictive densities under Kullback-Leibler loss

Edward I. George, Feng Liang, Xinyi Xu

Research output: Contribution to journal › Article › peer-review

Abstract

Let X|μ ∼ N_p(μ, v_x I) and Y|μ ∼ N_p(μ, v_y I) be independent p-dimensional multivariate normal vectors with common unknown mean μ. Based on only observing X = x, we consider the problem of obtaining a predictive density p̂(y|x) for Y that is close to p(y|μ) as measured by expected Kullback-Leibler loss. A natural procedure for this problem is the (formal) Bayes predictive density p̂_U(y|x) under the uniform prior π_U(μ) ≡ 1, which is best invariant and minimax. We show that any Bayes predictive density will be minimax if it is obtained by a prior yielding a marginal that is superharmonic or whose square root is superharmonic. This yields wide classes of minimax procedures that dominate p̂_U(y|x), including Bayes predictive densities under superharmonic priors. Fundamental similarities and differences with the parallel theory of estimating a multivariate normal mean under quadratic loss are described.
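
To make the abstract concrete, the quantities involved can be written out; these are the standard definitions for this prediction problem, not text quoted from the paper. The Kullback-Leibler loss of a predictive density p̂(·|x) and its risk are

  L(μ, p̂(·|x)) = ∫ p(y|μ) log[ p(y|μ) / p̂(y|x) ] dy,   R(μ, p̂) = E_{X|μ} L(μ, p̂(·|X)),

and the uniform-prior procedure has the closed form p̂_U(y|x) = N_p(y; x, (v_x + v_y)I), a normal density recentered at the observed x with inflated variance v_x + v_y. A short calculation gives the constant risk R(μ, p̂_U) = (p/2) log(1 + v_x/v_y) for every μ, consistent with p̂_U being best invariant.

That constancy is easy to check numerically. The Python sketch below (an illustration written for this record, not code from the paper; the function name risk_uniform_predictive is ours) estimates R(μ, p̂_U) by Monte Carlo at several values of μ:

import numpy as np

def risk_uniform_predictive(mu, vx, vy, n_sims=200_000, seed=0):
    """Monte Carlo estimate of the KL risk R(mu, p_U), where
    p_U(y|x) = N_p(y; x, (vx + vy) I) is the uniform-prior predictive density.
    Uses the closed form, with v = vx + vy:
      KL( N_p(mu, vy I) || N_p(x, v I) )
        = 0.5 * ( p*log(v/vy) + p*vy/v + ||x - mu||^2 / v - p ).
    """
    rng = np.random.default_rng(seed)
    p, v = mu.size, vx + vy
    xs = mu + np.sqrt(vx) * rng.standard_normal((n_sims, p))  # draws X ~ N_p(mu, vx I)
    kls = 0.5 * (p * np.log(v / vy) + p * vy / v
                 + np.sum((xs - mu) ** 2, axis=1) / v - p)
    return kls.mean()

p, vx, vy = 5, 1.0, 1.0
print("exact risk:", 0.5 * p * np.log(1 + vx / vy))  # (p/2) log(1 + vx/vy), constant in mu
for c in (0.0, 1.0, 5.0):
    print("mu =", c, "->", risk_uniform_predictive(np.full(p, c), vx, vy))

The dominating procedures the abstract describes (for example, the Bayes predictive density under a superharmonic prior such as Stein's harmonic prior) would show strictly smaller risk near the prior's shrinkage target while remaining minimax; reproducing that comparison requires the prior-specific marginal and is outside this sketch.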

Original language: English (US)
Pages (from-to): 78-91
Number of pages: 14
Journal: Annals of Statistics
Volume: 34
Issue number: 1
State: Published - Feb 2006
Externally published: Yes

Keywords

  • Bayes rules
  • Heat equation
  • Inadmissibility
  • Multiple shrinkage
  • Multivariate normal
  • Prior distributions
  • Shrinkage estimation
  • Superharmonic marginals
  • Superharmonic priors
  • Unbiased estimate of risk

ASJC Scopus subject areas

  • Statistics and Probability
  • Statistics, Probability and Uncertainty
