Abstract
I thoroughly enjoyed reading the article by Bhadra et al. (2020) and convey my congratulations to the authors for providing a comprehensive and coherent review of horseshoe-based regularization approaches for machine learning models. I am thankful to the editors for providing this opportunity to write a discussion of this useful article, which I expect will serve as a good guide in the future for statisticians and practitioners alike. It is quite remarkable to see the rapid progress and the magnitude of work advancing the horseshoe regularization approach since the seminal paper by Carvalho et al. (2010); the current review article is a testament to this. While I have primarily been working with continuous spike and slab priors for high-dimensional Bayesian modeling, I have been following the literature on horseshoe regularization with keen interest. In my comments on this article, I focus on some comparisons between these two approaches, particularly in terms of model building and methodology, along with some computational considerations. I first provide some comments on performing valid inference within the horseshoe prior framework.
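For readers comparing the two frameworks referred to in the abstract, the following is a brief sketch of their standard hierarchical forms: the horseshoe prior as introduced by Carvalho et al. (2010) and a typical continuous spike and slab (normal mixture) specification. The particular spike and slab hyperparameters shown are illustrative choices, not necessarily those used in the discussion or the review.

```latex
% Horseshoe prior (Carvalho et al., 2010): half-Cauchy local and global scales
\beta_j \mid \lambda_j, \tau \sim \mathcal{N}(0, \lambda_j^2 \tau^2), \qquad
\lambda_j \sim \mathrm{C}^{+}(0, 1), \qquad \tau \sim \mathrm{C}^{+}(0, 1).

% A typical continuous spike and slab prior (two-component normal mixture);
% the spike/slab variances with \tau_0^2 \ll \tau_1^2 are illustrative.
\beta_j \mid \gamma_j \sim (1 - \gamma_j)\,\mathcal{N}(0, \tau_0^2)
  + \gamma_j\,\mathcal{N}(0, \tau_1^2), \qquad
\gamma_j \sim \mathrm{Bernoulli}(\theta), \quad \tau_0^2 \ll \tau_1^2.
```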
| Original language | English (US) |
|---|---|
| Pages (from-to) | 330-334 |
| Number of pages | 5 |
| Journal | International Statistical Review |
| Volume | 88 |
| Issue number | 2 |
| DOIs | |
| State | Published - Aug 1 2020 |
Keywords
- Bayesian inference
- Bayesian regularization
- Horseshoe regularization
- Spike and slab priors
ASJC Scopus subject areas
- Statistics and Probability
- Statistics, Probability and Uncertainty