Minimax-optimal nonparametric regression in high dimensions

Yun Yang, Surya T. Tokdar

Research output: Contribution to journal › Article › peer-review

Abstract

Minimax L2 risks for high-dimensional nonparametric regression are derived under two sparsity assumptions: (1) the true regression surface is a sparse function that depends only on d = O(log n) important predictors among a list of p predictors, with log p = o(n); (2) the true regression surface depends on O(n) predictors but is an additive function where each additive component is sparse but may contain two or more interacting predictors and may have a smoothness level different from other components. For either modeling assumption, a practicable extension of the widely used Bayesian Gaussian process regression method is shown to adaptively attain the optimal minimax rate (up to log n terms) asymptotically as both n, p → ∞ with log p = o(n).
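The first sparsity assumption says the true surface uses only d of the p available predictors. Below is a minimal, hedged Python sketch of that setting: it fits a Gaussian process to a candidate subset of predictors and scores it by held-out squared error. This is only an illustration of the sparse-truth setup, not the authors' procedure, which places priors over predictor subsets and over Gaussian process rescaling; the data-generating function and subsets shown are hypothetical.

```python
# Illustration only: fit a GP to a candidate subset of predictors and score it
# by held-out squared error. Mimics the "sparse truth" setting (only d of p
# predictors matter); this is not the paper's Bayesian selection prior.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
n, p, d = 200, 50, 2                      # n samples, p predictors, d relevant
X = rng.uniform(size=(n, p))
f = np.sin(3 * X[:, 0]) * X[:, 1]         # true surface uses predictors 0 and 1 only
y = f + 0.1 * rng.standard_normal(n)

X_tr, X_te = X[:150], X[150:]
y_tr, y_te = y[:150], y[150:]

def heldout_mse(subset):
    """Fit an anisotropic-RBF GP using only the predictors in `subset`; return test MSE."""
    kernel = RBF(length_scale=np.ones(len(subset))) + WhiteKernel()
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    gp.fit(X_tr[:, subset], y_tr)
    pred = gp.predict(X_te[:, subset])
    return np.mean((pred - y_te) ** 2)

print("true subset (0, 1): ", heldout_mse([0, 1]))
print("wrong subset (5, 9):", heldout_mse([5, 9]))
```

A subset containing the truly relevant predictors should yield a noticeably smaller held-out error, which is the intuition behind selecting the predictor subset rather than fitting a GP on all p coordinates.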

Original language: English (US)
Pages (from-to): 652-674
Number of pages: 23
Journal: Annals of Statistics
Volume: 43
Issue number: 2
DOIs
State: Published - Apr 1 2015
Externally published: Yes

Keywords

  • Adaptive estimation
  • High-dimensional regression
  • Minimax risk
  • Model selection
  • Nonparametric regression

ASJC Scopus subject areas

  • Statistics and Probability
  • Statistics, Probability and Uncertainty
