Wide activation for efficient image and video super-resolution

Jiahui Yu, Yuchen Fan, Thomas Huang

Research output: Contribution to conference › Paper › peer-review


In this work we demonstrate that, with the same parameters and computational budget, models with wider features before the ReLU activation achieve significantly better performance for image and video super-resolution. The resulting SR residual network has a slim identity-mapping pathway with wider (2× to 4×) channels before activation in each residual block. To widen activation further (6× to 9×) without computational overhead, we introduce linear low-rank convolution into SR networks and achieve even better accuracy-efficiency tradeoffs. In addition, compared with batch normalization or no normalization, we find that training with weight normalization leads to better accuracy for deep super-resolution networks. Our proposed SR network, WDSR, achieves better results on the large-scale DIV2K image super-resolution benchmark in terms of PSNR, under the same or lower computational complexity. Based on WDSR, our method won first place in the NTIRE 2018 Challenge on Single Image Super-Resolution in all three realistic tracks. Moreover, a simple frame-concatenation-based WDSR achieved second place in three of the four tracks of the NTIRE 2019 Challenge for Video Super-Resolution and Deblurring. Our experiments and ablation studies support the importance of wide activation. Code and models will be publicly available.
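The core parameter-budget argument can be made concrete with a small arithmetic sketch. Assuming a WDSR-A-style block (two 3×3 convolutions, with the first expanding a slim pathway of width s to r·s channels before the ReLU), the function and variable names below are illustrative, not from the paper's released code:

```python
# Parameter-budget arithmetic behind wide activation (WDSR-A-style block).
# A vanilla residual block applies two 3x3 convs at a fixed width w; a
# wide-activation block keeps a slim identity pathway of width s but expands
# to r*s channels before the ReLU, then reduces back to s. Biases are ignored.

def vanilla_block_params(w, k=3):
    """Two k x k convs, w -> w -> w."""
    return 2 * k * k * w * w

def wide_block_params(s, r, k=3):
    """Expanding k x k conv s -> r*s, ReLU, reducing k x k conv r*s -> s."""
    return k * k * s * (r * s) + k * k * (r * s) * s

def slim_width_for_budget(w, r):
    """Largest slim width s whose wide block fits the vanilla budget at width w."""
    budget = vanilla_block_params(w)
    s = 1
    while wide_block_params(s + 1, r) <= budget:
        s += 1
    return s

w, r = 64, 4
s = slim_width_for_budget(w, r)
print(s, r * s)  # -> 32 128: the features entering ReLU are 128-wide vs. 64
print(vanilla_block_params(w), wide_block_params(s, r))  # -> 73728 73728
```

With w = 64 and expansion r = 4, the slim pathway shrinks to 32 channels, yet the activation is 128 channels wide, i.e. 2× wider than the vanilla block's, at an identical parameter count; this is the accuracy-per-budget tradeoff the abstract describes.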

Original language: English (US)
State: Published - 2020
Event: 30th British Machine Vision Conference, BMVC 2019 - Cardiff, United Kingdom
Duration: Sep 9, 2019 – Sep 12, 2019


Conference: 30th British Machine Vision Conference, BMVC 2019
Country/Territory: United Kingdom

ASJC Scopus subject areas

  • Computer Vision and Pattern Recognition


