TY - GEN
T1 - Learning sparsifying filter banks
AU - Pfister, Luke
AU - Bresler, Yoram
N1 - Publisher Copyright:
© 2015 SPIE.
PY - 2015
Y1 - 2015
N2 - Recent years have seen the development of numerous algorithms to learn a sparse synthesis or analysis model from data. More recently, a generalized analysis model called the 'transform model' has been proposed. Data following the transform model is approximately sparsified when acted on by a linear operator called a sparsifying transform. While existing transform learning algorithms can learn a transform for any vectorized data, they are most often used to learn a model for overlapping image patches. However, these approaches do not exploit the redundant nature of this data and scale poorly with the dimensionality of the data and the size of the patches. We propose a new sparsifying transform learning framework in which the transform acts on entire images rather than on patches. We illustrate the connection between existing patch-based transform learning approaches and the theory of block transforms, then develop a new transform learning framework in which the transforms have the structure of an undecimated filter bank with short filters. Unlike in previous work on transform learning, the filter length can be chosen independently of the number of filter bank channels. We apply our framework to accelerating magnetic resonance imaging. We simultaneously learn a sparsifying filter bank while reconstructing an image from undersampled Fourier measurements. Numerical experiments show that our new model yields higher-quality images than previous patch-based sparsifying transform approaches.
KW - Filter banks
KW - MR reconstruction
KW - Sparsifying transform learning
KW - Sparse representations
UR - http://www.scopus.com/inward/record.url?scp=84951336003&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84951336003&partnerID=8YFLogxK
U2 - 10.1117/12.2188663
DO - 10.1117/12.2188663
M3 - Conference contribution
AN - SCOPUS:84951336003
T3 - Proceedings of SPIE - The International Society for Optical Engineering
BT - Wavelets and Sparsity XVI
A2 - Goyal, Vivek K.
A2 - Papadakis, Manos
A2 - Van De Ville, Dimitri
PB - SPIE
T2 - Wavelets and Sparsity XVI
Y2 - 10 August 2015 through 12 August 2015
ER -