Bilevel sparse coding for coupled feature spaces

Jianchao Yang, Zhaowen Wang, Zhe Lin, Xianbiao Shu, Thomas Huang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

In this paper, we propose a bilevel sparse coding model for coupled feature spaces, where we aim to learn dictionaries for sparse modeling in both spaces while enforcing desired relationships between the two signal spaces. We first present our new general sparse coding model, which relates signals from the two spaces through their sparse representations and the corresponding dictionaries. The learning algorithm is formulated as a generic bilevel optimization problem, which we solve with a projected first-order stochastic gradient descent algorithm. This general sparse coding model can be applied to many applications involving coupled feature spaces in computer vision and signal processing. In this work, we tailor the general model to learning dictionaries for compressive sensing recovery and for single image super-resolution to demonstrate its effectiveness. In both cases, the new sparse coding model substantially outperforms previous approaches in recovery accuracy.
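To make the learning procedure in the abstract concrete, below is a minimal NumPy sketch of one projected stochastic gradient step, not the authors' implementation. It assumes, for concreteness, an upper-level loss asking the x-space sparse code to also reconstruct the paired y-space signal; the lower-level lasso is solved with ISTA, and the gradient with respect to the dictionary is obtained by implicit differentiation on the lasso solution's active set, as in the task-driven dictionary learning literature. All names (`ista`, `bilevel_sgd_step`, `project_columns`) and the hyper-parameters `lam` and `rho` are illustrative assumptions.

```python
import numpy as np

def ista(x, D, lam, n_iter=200):
    """Lower level: solve min_z 0.5*||x - D z||^2 + lam*||z||_1 via ISTA."""
    L = np.linalg.norm(D, 2) ** 2  # Lipschitz constant of the smooth part
    z = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = D.T @ (D @ z - x)
        u = z - g / L
        z = np.sign(u) * np.maximum(np.abs(u) - lam / L, 0.0)  # soft threshold
    return z

def project_columns(D):
    """Project each dictionary atom onto the unit l2 ball."""
    norms = np.maximum(np.linalg.norm(D, axis=0), 1.0)
    return D / norms

def bilevel_sgd_step(x, y, Dx, Dy, lam=0.1, rho=0.01):
    """One projected SGD step on the upper-level loss 0.5*||y - Dy z||^2,
    where z solves the lower-level lasso in the x-space."""
    z = ista(x, Dx, lam)
    supp = np.flatnonzero(np.abs(z) > 1e-8)   # active set of the lasso solution
    r = Dy @ z - y                            # upper-level residual
    grad_Dy = np.outer(r, z)                  # plain least-squares gradient in Dy
    # Implicit differentiation of the lasso solution on its support:
    # beta_supp = (Ds^T Ds)^{-1} (d loss / d z)_supp, beta = 0 off-support.
    beta = np.zeros_like(z)
    if supp.size:
        Ds = Dx[:, supp]
        g_z = Dy.T @ r                        # d loss / d z
        beta[supp] = np.linalg.solve(Ds.T @ Ds + 1e-8 * np.eye(supp.size),
                                     g_z[supp])
    grad_Dx = -Dx @ np.outer(beta, z) + np.outer(x - Dx @ z, beta)
    # Projected first-order stochastic gradient step on both dictionaries.
    Dx = project_columns(Dx - rho * grad_Dx)
    Dy = project_columns(Dy - rho * grad_Dy)
    return Dx, Dy

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    Dx = project_columns(rng.standard_normal((32, 64)))
    Dy = project_columns(rng.standard_normal((32, 64)))
    for _ in range(100):
        # Random stand-ins for a paired training sample (x, y) from the
        # two coupled feature spaces.
        x = rng.standard_normal(32)
        y = rng.standard_normal(32)
        Dx, Dy = bilevel_sgd_step(x, y, Dx, Dy)
```

In the paper's applications, the pairs (x, y) would be, e.g., low- and high-resolution image patches (super-resolution) or compressive measurements and the original signals (compressive sensing recovery); the coupling is enforced because both dictionaries are trained through the shared sparse code z.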

Original language: English (US)
Title of host publication: 2012 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2012
Pages: 2360-2367
Number of pages: 8
DOIs
State: Published - 2012
Event: 2012 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2012 - Providence, RI, United States
Duration: Jun 16 2012 – Jun 21 2012

Publication series

Name: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
ISSN (Print): 1063-6919

Other

Other: 2012 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2012
Country/Territory: United States
City: Providence, RI
Period: 6/16/12 – 6/21/12

ASJC Scopus subject areas

  • Software
  • Computer Vision and Pattern Recognition
