TY - GEN
T1 - Doubly sparse transform learning with convergence guarantees
AU - Ravishankar, Saiprasad
AU - Bresler, Yoram
PY - 2014
Y1 - 2014
N2 - The sparsity of natural signals in transform domains such as the DCT has been heavily exploited in various applications. Recently, we introduced the idea of learning sparsifying transforms from data, and demonstrated the usefulness of learnt transforms in image representation and denoising. However, the learning formulations therein were non-convex, and the algorithms lacked strong convergence properties. In this work, we propose a novel convex formulation for square sparsifying transform learning. We also enforce a doubly sparse structure on the transform, which makes its learning, storage, and implementation efficient. Our algorithm is guaranteed to converge to a global optimum, and moreover converges quickly. We also introduce a non-convex variant of the convex formulation, for which the algorithm is locally convergent. We show the superior promise of our learnt transforms compared to analytical sparsifying transforms such as the DCT for image representation.
KW - Convex learning
KW - Sparse representations
UR - http://www.scopus.com/inward/record.url?scp=84905269499&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84905269499&partnerID=8YFLogxK
U2 - 10.1109/ICASSP.2014.6854607
DO - 10.1109/ICASSP.2014.6854607
M3 - Conference contribution
AN - SCOPUS:84905269499
SN - 9781479928927
T3 - ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
SP - 5262
EP - 5266
BT - 2014 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2014
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2014 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2014
Y2 - 4 May 2014 through 9 May 2014
ER -