TY - JOUR
T1 - Learning Filter Bank Sparsifying Transforms
AU - Pfister, Luke
AU - Bresler, Yoram
N1 - Funding Information:
Manuscript received March 12, 2018; revised August 10, 2018 and October 1, 2018; accepted November 14, 2018. Date of publication November 23, 2018; date of current version December 14, 2018. The associate editor coordinating the review of this manuscript and approving it for publication was Prof. Masahiro Yukawa. This work was supported in part by the National Science Foundation under Grant CCF-1018660 and Grant CCF-1320953. The work of L. Pfister was supported under the Andrew T. Yang Fellowship. (Corresponding author: Luke Pfister.) The authors are with the Department of Electrical and Computer Engineering, University of Illinois at Urbana–Champaign, Urbana, IL 61801 USA (e-mail: lpfiste2@illinois.edu; ybresler@illinois.edu).
Publisher Copyright:
© 2018 IEEE.
PY - 2019/1/15
Y1 - 2019/1/15
AB - Data are said to follow the transform (or analysis) sparsity model if they become sparse when acted on by a linear operator called a sparsifying transform. Several algorithms have been designed to learn such a transform directly from data, and data-adaptive sparsifying transforms have demonstrated excellent performance in signal restoration tasks. Sparsifying transforms are typically learned using small sub-regions of data called patches, but these algorithms often ignore redundant information shared between neighboring patches. We show that many existing transform and analysis sparse representations can be viewed as filter banks, thus linking the local properties of the patch-based model to the global properties of a convolutional model. We propose a new transform learning framework, where the sparsifying transform is an undecimated perfect reconstruction filter bank. Unlike previous transform learning algorithms, the filter length can be chosen independently of the number of filter bank channels. Numerical results indicate that filter bank sparsifying transforms outperform existing patch-based transform learning for image denoising while benefiting from additional flexibility in the design process.
KW - Sparsifying transform
KW - analysis model
KW - analysis operator learning
KW - convolutional analysis operators
KW - filter bank
KW - perfect reconstruction
KW - sparse representations
UR - http://www.scopus.com/inward/record.url?scp=85057412034&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85057412034&partnerID=8YFLogxK
U2 - 10.1109/TSP.2018.2883021
DO - 10.1109/TSP.2018.2883021
M3 - Article
AN - SCOPUS:85057412034
SN - 1053-587X
VL - 67
JO - IEEE Transactions on Signal Processing
JF - IEEE Transactions on Signal Processing
IS - 2
M1 - 8543611
ER -