Learning overcomplete sparsifying transforms with block cosparsity

Bihan Wen, Saiprasad Ravishankar, Yoram Bresler

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

The sparsity of images in a transform domain or dictionary has been widely exploited in image processing. Compared to the synthesis dictionary model, sparse coding in the (single) transform model is computationally cheap. However, natural images typically contain diverse textures that cannot be sparsified well by a single transform. Hence, we propose a union of sparsifying transforms model, which is equivalent to an overcomplete transform model with block cosparsity (OCTOBOS). Our alternating algorithm for transform learning involves simple closed-form updates. When applied to images, our algorithm learns a collection of well-conditioned transforms, and a good clustering of the patches or textures. The learned transforms provide better image representations than learned square transforms. We also show the promising denoising performance and speedups provided by the proposed method compared to synthesis dictionary-based denoising.
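
The abstract describes an alternating scheme: patches are clustered by which transform sparsifies them best, and each square transform block is then updated in closed form over its cluster. The following is a minimal NumPy sketch of that general scheme, not the authors' implementation; the function names, the hard-thresholding sparsity level `s`, and the cluster-energy weighting of the regularizer are illustrative assumptions.

```python
import numpy as np

def hard_threshold(Y, s):
    """Keep the s largest-magnitude entries in each column of Y; zero the rest."""
    Z = np.zeros_like(Y)
    idx = np.argsort(-np.abs(Y), axis=0)[:s]
    np.put_along_axis(Z, idx, np.take_along_axis(Y, idx, axis=0), axis=0)
    return Z

def transform_update(X, Z, lam):
    """Closed-form update of a square transform W minimizing
    ||W X - Z||_F^2 + lam * (||W||_F^2 - log|det W|)."""
    n = X.shape[0]
    L = np.linalg.cholesky(X @ X.T + lam * np.eye(n))   # X X^T + lam I = L L^T
    Linv = np.linalg.inv(L)
    U, sig, Vt = np.linalg.svd(Linv @ X @ Z.T)
    gamma = 0.5 * (sig + np.sqrt(sig ** 2 + 2.0 * lam))
    return Vt.T @ (gamma[:, None] * U.T) @ Linv          # V diag(gamma) U^T L^{-1}

def learn_union_of_transforms(X, K=2, s=10, lam0=0.01, iters=30, seed=0):
    """Alternate between (i) assigning each patch (column of X) to the
    transform that sparsifies it best and (ii) updating each transform
    in closed form over its cluster."""
    rng = np.random.default_rng(seed)
    n, _ = X.shape
    # Random orthonormal initialization of the K square transform blocks.
    Ws = [np.linalg.qr(rng.standard_normal((n, n)))[0] for _ in range(K)]
    labels = np.zeros(X.shape[1], dtype=int)
    for _ in range(iters):
        # Clustering step: sparsification error of each patch under each transform.
        errs = np.stack([
            np.sum((W @ X - hard_threshold(W @ X, s)) ** 2, axis=0) for W in Ws
        ])
        labels = np.argmin(errs, axis=0)
        # Per-cluster sparse coding followed by a closed-form transform update.
        for k in range(K):
            Xk = X[:, labels == k]
            if Xk.shape[1] == 0:
                continue
            Zk = hard_threshold(Ws[k] @ Xk, s)
            # Assumption: regularizer weight scaled by cluster energy.
            Ws[k] = transform_update(Xk, Zk, lam0 * np.sum(Xk ** 2))
    return Ws, labels
```

In this sketch the K learned blocks stacked vertically form the overcomplete transform, and the cluster labels encode the block cosparsity of each patch; the exact clustering measure and regularizer weights used in the paper may differ.
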

Original language: English (US)
Title of host publication: 2014 IEEE International Conference on Image Processing, ICIP 2014
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 803-807
Number of pages: 5
ISBN (Electronic): 9781479957514
DOIs
State: Published - Jan 28 2014

Publication series

Name: 2014 IEEE International Conference on Image Processing, ICIP 2014

Keywords

  • Clustering
  • Image denoising
  • Overcomplete representation
  • Sparse representation
  • Sparsifying transform learning

ASJC Scopus subject areas

  • Computer Vision and Pattern Recognition
