Learning overcomplete sparsifying transforms with block cosparsity

Bihan Wen, Saiprasad Ravishankar, Yoram Bresler

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

The sparsity of images in a transform domain or dictionary has been widely exploited in image processing. Compared to the synthesis dictionary model, sparse coding in the (single) transform model is cheap. However, natural images typically contain diverse textures that cannot be sparsified well by a single transform. Hence, we propose a union of sparsifying transforms model, which is equivalent to an overcomplete transform model with block cosparsity (OCTOBOS). Our alternating algorithm for transform learning involves simple closed-form updates. When applied to images, our algorithm learns a collection of well-conditioned transforms, and a good clustering of the patches or textures. Our learned transforms provide better image representations than learned square transforms. We also show the promising denoising performance and speedups provided by the proposed method compared to synthesis dictionary-based denoising.
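The abstract describes a union-of-transforms model in which each image patch is assigned to the transform that sparsifies it best. A minimal NumPy sketch of that clustering and sparse-coding step is given below; the function names, the hard-thresholding sparsity level `s`, and the residual-based assignment rule are illustrative assumptions, and the paper's closed-form transform-update step is omitted.

```python
import numpy as np

def hard_threshold(x, s):
    """Keep the s largest-magnitude entries of x: exact sparse coding
    in the transform model under a sparsity-level constraint."""
    z = np.zeros_like(x)
    if s > 0:
        idx = np.argsort(np.abs(x))[-s:]
        z[idx] = x[idx]
    return z

def assign_and_code(Y, transforms, s):
    """For each patch (column of Y), pick the transform in the union
    whose sparsification residual is smallest; return cluster labels
    and the corresponding transform-domain sparse codes."""
    n, N = Y.shape
    labels = np.zeros(N, dtype=int)
    codes = np.zeros((n, N))
    for j in range(N):
        best_err = np.inf
        for k, W in enumerate(transforms):
            t = W @ Y[:, j]                 # transform the patch
            z = hard_threshold(t, s)        # sparse code in this transform
            err = np.sum((t - z) ** 2)      # sparsification residual
            if err < best_err:
                best_err = err
                labels[j], codes[:, j] = k, z
    return labels, codes
```

In the full alternating algorithm, such an assignment/coding step would alternate with an update of each transform fitted to its own cluster of patches.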

Original language: English (US)
Title of host publication: 2014 IEEE International Conference on Image Processing, ICIP 2014
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 803-807
Number of pages: 5
ISBN (Electronic): 9781479957514
DOIs: 10.1109/ICIP.2014.7025161
State: Published - Jan 28 2014

Publication series

Name: 2014 IEEE International Conference on Image Processing, ICIP 2014


Keywords

  • Clustering
  • Image denoising
  • Overcomplete representation
  • Sparse representation
  • Sparsifying transform learning

ASJC Scopus subject areas

  • Computer Vision and Pattern Recognition

Cite this

Wen, B., Ravishankar, S., & Bresler, Y. (2014). Learning overcomplete sparsifying transforms with block cosparsity. In 2014 IEEE International Conference on Image Processing, ICIP 2014 (pp. 803-807). [7025161] (2014 IEEE International Conference on Image Processing, ICIP 2014). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/ICIP.2014.7025161
