Abstract
Since the pioneering work on sliced inverse regression, sufficient dimension reduction has grown into a mature field in statistics, with broad applications to regression diagnostics, data visualisation, image processing and machine learning. In this paper, we provide a review of several popular inverse regression methods, including the sliced inverse regression (SIR) and principal Hessian directions (PHD) methods. In addition, we adopt a conditional characteristic function approach and develop a new class of slicing-free methods, parallel to the classical SIR and PHD, named weighted inverse regression ensemble (WIRE) and weighted PHD (WPHD), respectively. The relationship with the recently developed martingale difference divergence matrix is also revealed. Numerical studies and a real data example show that the proposed slicing-free alternatives outperform SIR and PHD.
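To make the classical slicing-based approach concrete, the following is a minimal sketch of the standard SIR estimator (not the authors' WIRE/WPHD proposal): standardize the predictors, slice the response by rank, form the weighted covariance of the slice means, and take top eigenvectors. The function name `sir_directions` and all parameter choices are illustrative assumptions, not from the paper.

```python
import numpy as np

def sir_directions(X, y, n_slices=10, d=1):
    """Illustrative sliced inverse regression (SIR): estimate d directions
    of the central subspace from predictors X (n x p) and response y (n,)."""
    n, p = X.shape
    # Standardize predictors: Z = (X - mean) @ Sigma^{-1/2}
    mu = X.mean(axis=0)
    Sigma = np.cov(X, rowvar=False)
    evals, evecs = np.linalg.eigh(Sigma)
    inv_sqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T
    Z = (X - mu) @ inv_sqrt
    # Slice the response into roughly equal-size slices by rank
    order = np.argsort(y)
    slices = np.array_split(order, n_slices)
    # Candidate matrix M = sum_h p_h * m_h m_h^T, where m_h is the
    # slice mean of the standardized predictors
    M = np.zeros((p, p))
    for idx in slices:
        m_h = Z[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(m_h, m_h)
    # Top-d eigenvectors of M, mapped back to the original X scale
    _, v = np.linalg.eigh(M)
    B = inv_sqrt @ v[:, ::-1][:, :d]
    return B / np.linalg.norm(B, axis=0)
```

The slicing step here is exactly the tuning choice (`n_slices`) that the slicing-free WIRE and WPHD methods in the paper are designed to avoid.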
Original language | English (US)
---|---
Pages (from-to) | 355-382
Number of pages | 28
Journal | International Statistical Review
Volume | 92
Issue number | 3
DOIs |
State | Published - Dec 2024
Keywords
- Martingale difference divergence
- principal hessian directions
- sliced inverse regression
- sufficient dimension reduction
ASJC Scopus subject areas
- Statistics and Probability
- Statistics, Probability and Uncertainty