TY - GEN
T1 - Robust principal component analysis via re-weighted minimization algorithms
AU - Katselis, Dimitrios
AU - Beck, Carolyn L.
N1 - Publisher Copyright:
© 2015 IEEE.
PY - 2015/2/8
Y1 - 2015/2/8
N2 - Many standard procedures in statistics such as linear regression and principal component analysis (PCA) are inconsistent in high-dimensional settings, where the number of unknown parameters is larger than the number of available observations. This implies that consistency can only be achieved by polynomial-time methods through additional structural assumptions on the data, e.g., sparsity or manifold constraints. In Robust PCA the goal is to recover the intrinsic data structure lying in a low-dimensional subspace from noisy measurements. One formulation of this problem capitalizes on the decomposition of the data matrix into low-rank and sparse error components, which form the aforementioned structural constraints. This formulation yields a convex problem that can recover the exact components via the minimization of a weighted combination of the low-rank component's nuclear norm and the ℓ1-norm of the sparse noise component subject to linear constraints. In this paper, we extend this approach by proposing methods that can be formulated in the general framework of majorization-minimization algorithms. The proposed methods perform at least as well as the state-of-the-art schemes for Robust PCA, while they allow for larger rank and sparsity regimes of the component matrices under exact recovery requirements. Convergence results are discussed and efficient implementations based on the general Augmented Lagrange Multiplier framework are presented.
AB - Many standard procedures in statistics such as linear regression and principal component analysis (PCA) are inconsistent in high-dimensional settings, where the number of unknown parameters is larger than the number of available observations. This implies that consistency can only be achieved by polynomial-time methods through additional structural assumptions on the data, e.g., sparsity or manifold constraints. In Robust PCA the goal is to recover the intrinsic data structure lying in a low-dimensional subspace from noisy measurements. One formulation of this problem capitalizes on the decomposition of the data matrix into low-rank and sparse error components, which form the aforementioned structural constraints. This formulation yields a convex problem that can recover the exact components via the minimization of a weighted combination of the low-rank component's nuclear norm and the ℓ1-norm of the sparse noise component subject to linear constraints. In this paper, we extend this approach by proposing methods that can be formulated in the general framework of majorization-minimization algorithms. The proposed methods perform at least as well as the state-of-the-art schemes for Robust PCA, while they allow for larger rank and sparsity regimes of the component matrices under exact recovery requirements. Convergence results are discussed and efficient implementations based on the general Augmented Lagrange Multiplier framework are presented.
UR - http://www.scopus.com/inward/record.url?scp=84962013972&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84962013972&partnerID=8YFLogxK
U2 - 10.1109/CDC.2015.7402314
DO - 10.1109/CDC.2015.7402314
M3 - Conference contribution
AN - SCOPUS:84962013972
T3 - Proceedings of the IEEE Conference on Decision and Control
SP - 718
EP - 723
BT - 54th IEEE Conference on Decision and Control, CDC 2015
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 54th IEEE Conference on Decision and Control, CDC 2015
Y2 - 15 December 2015 through 18 December 2015
ER -