An improved global risk bound in concave regression

Research output: Contribution to journal › Article › peer-review


A new risk bound is presented for the problem of convex/concave function estimation using the least squares estimator. The best previously known risk bound, which appeared in Guntuboyina and Sen [8], scaled like log(en) n^{-4/5} under the mean squared error loss, up to a constant factor. The authors of [8] conjectured that the logarithmic term may be an artifact of their proof. We show that the logarithmic term is indeed unnecessary and prove a risk bound which scales like n^{-4/5} up to constant factors. Our proof technique involves one extra peeling step beyond a standard chaining-type argument. Our risk bound holds in expectation as well as with high probability, and it extends to the case of model misspecification, where the true function may not be concave.
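The estimator studied in the abstract, the least squares fit over concave functions, can be sketched as a small constrained optimization problem: minimize the sum of squared residuals subject to the fitted slopes being non-increasing. This is a minimal illustration under my own assumptions (the SciPy-based solver and the function name `concave_lse` are illustrative, not the paper's implementation):

```python
import numpy as np
from scipy.optimize import minimize

def concave_lse(x, y):
    """Least squares fit of y on x over concave sequences.

    Concavity of the fitted values theta at design points x is encoded
    as the linear constraint that consecutive slopes are non-increasing.
    Hypothetical helper for illustration only; a real implementation
    would use a dedicated quadratic-programming or shape-constrained
    regression solver.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)

    def neg_curvature(theta):
        # Slopes between consecutive fitted points; concavity requires
        # these to be non-increasing, i.e. -diff(slopes) >= 0.
        slopes = np.diff(theta) / np.diff(x)
        return -np.diff(slopes)

    res = minimize(
        lambda t: np.sum((y - t) ** 2),  # squared-error loss
        x0=y.copy(),                     # start from the data itself
        constraints=[{"type": "ineq", "fun": neg_curvature}],
        method="SLSQP",
    )
    return res.x
```

For small samples this generic solver suffices to see the estimator's behavior; the n^{-4/5} rate in the abstract concerns the mean squared error of exactly this kind of fit as the sample size grows.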

Original language: English (US)
Pages (from-to): 1608-1629
Number of pages: 22
Journal: Electronic Journal of Statistics
Issue number: 1
State: Published - 2016
Externally published: Yes

ASJC Scopus subject areas

  • Statistics and Probability
  • Statistics, Probability and Uncertainty


