TY - GEN

T1 - On the precision matrix in semi-high-dimensional settings

AU - Hayashi, Kentaro

AU - Yuan, Ke-Hai

AU - Jiang, Ge

PY - 2020

Y1 - 2020

N2 - Many aspects of multivariate analysis involve obtaining the precision matrix, i.e., the inverse of the covariance matrix. When the dimension is larger than the sample size, the sample covariance matrix is no longer positive definite, and its inverse does not exist. Under a sparsity assumption on the elements of the precision matrix, the problem can be solved by fitting a Gaussian graphical model with a lasso penalty. However, in high-dimensional settings in the behavioral sciences, the sparsity assumption does not necessarily hold. While the dimension is often greater than the sample size, the two are likely to be comparable in size. Under such circumstances, introducing some covariance structure might solve the issue of estimating the precision matrix. Factor analysis is employed to model the covariance structure, and the Woodbury identity is used to obtain the precision matrix. Several methods are compared, including unweighted least squares, factor analysis with equal unique variances (i.e., probabilistic principal component analysis), and ridge factor analysis with small ridge parameters. Results indicate that they all give relatively small mean squared errors even when the dimension is larger than the sample size.

KW - Factor analysis

KW - Graphical lasso

KW - Inverse covariance matrix

KW - Probabilistic principal component analysis

KW - Woodbury identity

UR - http://www.scopus.com/inward/record.url?scp=85089315880&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85089315880&partnerID=8YFLogxK

U2 - 10.1007/978-3-030-43469-4_15

DO - 10.1007/978-3-030-43469-4_15

M3 - Conference contribution

AN - SCOPUS:85089315880

SN - 9783030434687

T3 - Springer Proceedings in Mathematics & Statistics

SP - 185

EP - 200

BT - Quantitative Psychology - 84th Annual Meeting of the Psychometric Society, IMPS 2019

A2 - Wiberg, Marie

A2 - Molenaar, Dylan

A2 - González, Jorge

A2 - Böckenholt, Ulf

A2 - Kim, Jee-Seon

PB - Springer

T2 - 84th Annual Meeting of the Psychometric Society, IMPS 2019

Y2 - 15 July 2019 through 19 July 2019

ER -