Robust Single Image Super-Resolution via Deep Networks with Sparse Prior

Ding Liu, Zhaowen Wang, Bihan Wen, Jianchao Yang, Wei Han, Thomas S Huang

Research output: Contribution to journal › Article › peer-review

Abstract

Single image super-resolution (SR) aims to recover a high-resolution image from its low-resolution observation and is an ill-posed problem. To regularize its solution, previous methods have focused on designing good priors for natural images, such as sparse representation, or on learning the priors directly from a large data set with models such as deep neural networks. In this paper, we argue that domain expertise from the conventional sparse coding model can be combined with the key ingredients of deep learning to achieve further improved results. We demonstrate that a sparse coding model particularly designed for SR can be incarnated as a neural network with the merit of end-to-end optimization over training data. The network has a cascaded structure, which boosts the SR performance for both fixed and incremental scaling factors. The proposed training and testing schemes can be extended for robust handling of images with additional degradation, such as noise and blurring. A subjective assessment is conducted and analyzed in order to thoroughly evaluate various SR techniques. Our proposed model is tested on a wide range of images, and it significantly outperforms the existing state-of-the-art methods for various scaling factors both quantitatively and perceptually.
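The central idea in the abstract, turning an iterative sparse coding solver into a feed-forward network that can be trained end-to-end, can be illustrated with a minimal LISTA-style (learned ISTA) sketch. The class names, patch dimensions, code size, and iteration count below are illustrative assumptions, not the authors' implementation or hyperparameters.

```python
import torch
import torch.nn as nn


class LISTABlock(nn.Module):
    """A few ISTA iterations unrolled into layers with learnable weights."""

    def __init__(self, input_dim, code_dim, n_iter=3):
        super().__init__()
        self.W = nn.Linear(input_dim, code_dim, bias=False)    # plays the role of D^T
        self.S = nn.Linear(code_dim, code_dim, bias=False)     # mutual-inhibition matrix
        self.theta = nn.Parameter(torch.full((code_dim,), 0.1))  # learned thresholds
        self.n_iter = n_iter

    def soft_threshold(self, x):
        # Element-wise shrinkage, the sparsity-inducing nonlinearity.
        return torch.sign(x) * torch.relu(torch.abs(x) - self.theta)

    def forward(self, y):
        b = self.W(y)
        z = self.soft_threshold(b)
        for _ in range(self.n_iter):
            z = self.soft_threshold(b + self.S(z))
        return z


class SparseCodingSRNet(nn.Module):
    """Hypothetical SR module: LR patch -> sparse code -> HR patch."""

    def __init__(self, lr_patch_dim=81, hr_patch_dim=25, code_dim=128):
        super().__init__()
        self.encoder = LISTABlock(lr_patch_dim, code_dim)
        self.decoder = nn.Linear(code_dim, hr_patch_dim, bias=False)  # HR dictionary

    def forward(self, lr_patches):
        return self.decoder(self.encoder(lr_patches))


if __name__ == "__main__":
    net = SparseCodingSRNet()
    lr = torch.randn(16, 81)   # a batch of vectorized 9x9 LR patches (assumed size)
    hr = net(lr)               # predicted 5x5 HR patches
    print(hr.shape)            # torch.Size([16, 25])
```

Because every stage is differentiable, such a module could be trained with a standard pixel-wise loss (e.g., MSE against ground-truth HR patches) and stacked to form a cascade for larger or incremental scaling factors, in the spirit of the cascaded structure described above.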

Original language: English (US)
Article number: 7466062
Pages (from-to): 3194-3207
Number of pages: 14
Journal: IEEE Transactions on Image Processing
Volume: 25
Issue number: 7
DOIs
State: Published - Jul 2016

Keywords

  • deep neural networks
  • image super-resolution
  • sparse coding

ASJC Scopus subject areas

  • Software
  • Computer Graphics and Computer-Aided Design
