This article reviews recent developments in the inference of time series data using the self-normalized approach. We aim to provide a detailed discussion of the use of self-normalization in different contexts, highlighting the distinctive features associated with each problem and the connections among these recent developments. The topics covered include: confidence interval construction for a parameter in a weakly dependent stationary time series; change point detection in the mean; robust inference in regression models with weakly dependent errors; inference for nonparametric time series regression; inference for long memory, locally stationary, and near-integrated time series; change point detection and two-sample inference for functional time series; and the use of self-normalization for spatial and spatial-temporal data. Some new variations of the self-normalized approach are also introduced, with additional simulation results. We further provide a brief review of related inferential methods, such as blockwise empirical likelihood and subsampling, which were recently developed under the fixed-b asymptotic framework. We conclude with a summary of the merits and limitations of self-normalization in the time series context and potential topics for future investigation.
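To make the first topic concrete, the sketch below illustrates self-normalized confidence interval construction for the mean of a weakly dependent stationary series. It is a minimal illustration, not the article's own implementation: the statistic is the standard recursive-mean self-normalizer n(x̄ − μ)² / Vₙ with Vₙ = n⁻² Σₜ (Sₜ − t x̄)², whose limiting pivot B(1)² / ∫₀¹ (B(r) − rB(1))² dr is free of the long-run variance; here the critical value is approximated by Monte Carlo rather than taken from published tables. All function names and the AR(1) example are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def sn_normalizer(x):
    """Return (sample mean, self-normalizer V_n) for a 1-d series.

    V_n = n^{-2} * sum_t (S_t - t * xbar)^2, built from recursive
    partial sums, so no bandwidth or long-run variance estimate is needed.
    """
    n = len(x)
    xbar = x.mean()
    S = np.cumsum(x)
    t = np.arange(1, n + 1)
    V = np.sum((S - t * xbar) ** 2) / n**2
    return xbar, V

def sn_critical_value(level=0.95, nsim=5000, ngrid=1000, rng=rng):
    """Monte Carlo approximation of the limiting pivot's quantile.

    Pivot: W = B(1)^2 / int_0^1 (B(r) - r*B(1))^2 dr,
    with B a standard Brownian motion, discretized on ngrid points.
    """
    stats = np.empty(nsim)
    r = np.arange(1, ngrid + 1) / ngrid
    for i in range(nsim):
        B = np.cumsum(rng.standard_normal(ngrid)) / np.sqrt(ngrid)
        denom = np.mean((B - r * B[-1]) ** 2)  # Riemann sum of the integral
        stats[i] = B[-1] ** 2 / denom
    return np.quantile(stats, level)

# Illustrative weakly dependent data: AR(1) with coefficient 0.5.
n = 500
burn = 100
e = rng.standard_normal(n + burn)
x = np.empty(n + burn)
x[0] = e[0]
for t in range(1, n + burn):
    x[t] = 0.5 * x[t - 1] + e[t]
x = x[burn:]

# Invert n*(xbar - mu)^2 / V <= c into an interval for mu.
xbar, V = sn_normalizer(x)
c = sn_critical_value(0.95)
half = np.sqrt(c * V / n)
print(f"SN 95% CI for the mean: [{xbar - half:.3f}, {xbar + half:.3f}]")
```

Because the normalizer Vₙ is proportional to the same long-run variance that appears in the numerator, the nuisance parameter cancels in the limit, which is why no bandwidth choice is required.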