TY - JOUR
T1 - Near-optimal compressed sensing of a class of sparse low-rank matrices via sparse power factorization
AU - Lee, Kiryung
AU - Wu, Yihong
AU - Bresler, Yoram
N1 - Funding Information:
Manuscript received January 15, 2014; revised June 28, 2016; accepted October 25, 2017. Date of publication December 18, 2017; date of current version February 15, 2018. K. Lee and Y. Bresler were supported by the National Science Foundation under Grant CCF 10-18789, Grant CCF 10-18660, and Grant IIS 14-47879. Y. Wu was supported by the National Science Foundation under Grant CCF 14-23088 and Grant IIS 14-47879.
Publisher Copyright:
© 1963-2012 IEEE.
PY - 2018/3
Y1 - 2018/3
N2 - Compressed sensing of simultaneously sparse and low-rank matrices enables recovery of sparse signals from a few linear measurements of their bilinear form. One important question is how many measurements are needed for a stable reconstruction in the presence of measurement noise. Unlike conventional compressed sensing of sparse vectors, where convex relaxation via the $\ell_{1}$-norm achieves near-optimal performance, it has recently been shown that for compressed sensing of sparse low-rank matrices, convex programs using the nuclear norm and the mixed norm are highly suboptimal even in the noise-free scenario. We propose an alternating minimization algorithm called sparse power factorization (SPF) for compressed sensing of sparse rank-one matrices. For a class of signals whose sparse representation coefficients are fast-decaying, SPF achieves stable recovery of the rank-one matrix formed by their outer product and requires a number of measurements within a logarithmic factor of the information-theoretic fundamental limit. For the recovery of general sparse low-rank matrices, we propose subspace-concatenated SPF (SCSPF), which enjoys near-optimal performance guarantees analogous to those of SPF in the rank-one case. Numerical results show that SPF and SCSPF empirically outperform convex programs using the best known combinations of the mixed norm and the nuclear norm.
KW - Compressed sensing
KW - alternating minimization
KW - non-convex optimization
KW - restricted isometry property
KW - sample complexity
KW - sparse and low-rank matrix
UR - http://www.scopus.com/inward/record.url?scp=85039806128&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85039806128&partnerID=8YFLogxK
U2 - 10.1109/TIT.2017.2784479
DO - 10.1109/TIT.2017.2784479
M3 - Article
AN - SCOPUS:85039806128
VL - 64
SP - 1666
EP - 1698
JO - IEEE Transactions on Information Theory
JF - IEEE Transactions on Information Theory
SN - 0018-9448
IS - 3
ER -