TY - JOUR
T1 - On one-step GM estimates and stability of inferences in linear regression
AU - Simpson, D. G.
AU - Ruppert, D.
AU - Carroll, R. J.
N1 - Funding Information:
D. G. Simpson is Associate Professor, Department of Statistics and Institute for Environmental Studies, University of Illinois at Urbana-Champaign, Champaign, IL 61820. D. Ruppert is Professor, School of Operations Research and Industrial Engineering, Cornell University, Ithaca, NY 14853. R. J. Carroll is Professor, Department of Statistics, Texas A&M University, College Station, TX 77843. Simpson's research was supported by an NSF Mathematical Sciences Postdoctoral Research Fellowship and Air Force Office of Scientific Research Grant AFOSR-87-0041. Ruppert's research was supported by NSF Grant DMS-88-0029 and the U.S. Army Research Office through the Mathematical Sciences Institute at Cornell. Carroll's research was supported by Air Force Office of Scientific Research Grant AFOSR-89-0240. The authors thank the participants of the IMA Conference on Robustness, Diagnostics, Computation, and Graphics for their useful comments; the IMA for providing travel funds and housing; and the referees, who suggested strengthening the technical conditions to substantially simplify the proofs.
PY - 1992/6
Y1 - 1992/6
N2 - The folklore on one-step estimation is that it inherits the breakdown point of the preliminary estimator and yet has the same large sample distribution as the fully iterated version as long as the preliminary estimate converges faster than n^(-1/4), where n is the sample size. We investigate the extent to which this folklore is valid for one-step GM estimators and their associated standard errors in linear regression. We find that one-step GM estimates based on Newton-Raphson or Scoring inherit the breakdown point of high breakdown point initial estimates such as least median of squares, provided the usual weights that limit the influence of extreme points in the design space are based on location and scatter estimates with high breakdown points. Moreover, these estimators have bounded influence functions, and their standard errors can have high breakdown points. The folklore concerning the large sample theory is correct assuming the regression errors are symmetrically distributed and homoscedastic. If the errors are asymmetric and homoscedastic, Scoring still provides root-n consistent estimates of the slope parameters, but Newton-Raphson fails to improve on the rate of convergence of the preliminary estimates. If the errors are symmetric and heteroscedastic, Newton-Raphson provides root-n consistent estimates, but Scoring fails to improve on the rate of convergence of the preliminary estimate. Our primary concern is with the stability of the inferences associated with the estimates, not merely with the point estimates themselves. To this end we define the notion of standard error breakdown, which occurs if the estimated standard deviations of the parameter estimates can be driven to zero or infinity, and study the large sample validity of the standard error estimates. A real data set from the literature illustrates the issues.
AB - The folklore on one-step estimation is that it inherits the breakdown point of the preliminary estimator and yet has the same large sample distribution as the fully iterated version as long as the preliminary estimate converges faster than n^(-1/4), where n is the sample size. We investigate the extent to which this folklore is valid for one-step GM estimators and their associated standard errors in linear regression. We find that one-step GM estimates based on Newton-Raphson or Scoring inherit the breakdown point of high breakdown point initial estimates such as least median of squares, provided the usual weights that limit the influence of extreme points in the design space are based on location and scatter estimates with high breakdown points. Moreover, these estimators have bounded influence functions, and their standard errors can have high breakdown points. The folklore concerning the large sample theory is correct assuming the regression errors are symmetrically distributed and homoscedastic. If the errors are asymmetric and homoscedastic, Scoring still provides root-n consistent estimates of the slope parameters, but Newton-Raphson fails to improve on the rate of convergence of the preliminary estimates. If the errors are symmetric and heteroscedastic, Newton-Raphson provides root-n consistent estimates, but Scoring fails to improve on the rate of convergence of the preliminary estimate. Our primary concern is with the stability of the inferences associated with the estimates, not merely with the point estimates themselves. To this end we define the notion of standard error breakdown, which occurs if the estimated standard deviations of the parameter estimates can be driven to zero or infinity, and study the large sample validity of the standard error estimates. A real data set from the literature illustrates the issues.
KW - Asymmetry
KW - Heteroscedasticity
KW - Least median of squares
KW - Minimum volume ellipsoid
KW - Robust inference
KW - Standard error breakdown
UR - http://www.scopus.com/inward/record.url?scp=84950421188&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84950421188&partnerID=8YFLogxK
U2 - 10.1080/01621459.1992.10475224
DO - 10.1080/01621459.1992.10475224
M3 - Article
AN - SCOPUS:84950421188
SN - 0162-1459
VL - 87
SP - 439
EP - 450
JO - Journal of the American Statistical Association
JF - Journal of the American Statistical Association
IS - 418
ER -