Deep Learning for Visual Data Compression

Guo Lu, Ren Yang, Shenlong Wang, Shan Liu, Radu Timofte

Research output: Chapter in Book/Report/Conference proceeding (Conference contribution)

Abstract

In this paper, we introduce the recent progress in deep learning based visual data compression, including image compression, video compression, and point cloud compression. In the past few years, deep learning techniques have been successfully applied to various computer vision and image processing applications. For the data compression task, however, traditional approaches (e.g., block-based motion estimation and motion compensation) are still widely employed in mainstream codecs. Considering the powerful representation capability of neural networks, it is feasible to improve data compression performance by employing advanced deep learning technologies. To this end, deep learning based compression approaches have recently received increasing attention from both academia and industry in the fields of computer vision and signal processing.
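As a rough illustration of what a learned image compression model can look like, the following is a minimal sketch in PyTorch, not the specific methods surveyed in this tutorial. It shows an analysis (encoder) transform, a training-time quantization proxy, a synthesis (decoder) transform, and a rate-distortion training objective. The layer sizes, the lambda weight, and the L1 rate proxy are illustrative assumptions; practical learned codecs use a learned entropy model to estimate the actual bitrate.

```python
import torch
import torch.nn as nn


class LearnedImageCodec(nn.Module):
    """Toy learned image codec: analysis transform, quantization proxy, synthesis transform."""

    def __init__(self, channels=128):
        super().__init__()
        # Analysis transform: maps the RGB image to a compact latent representation.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, channels, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(channels, channels, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(channels, channels, 5, stride=2, padding=2),
        )
        # Synthesis transform: reconstructs the image from the (quantized) latent.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(channels, channels, 5, stride=2, padding=2, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(channels, channels, 5, stride=2, padding=2, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(channels, 3, 5, stride=2, padding=2, output_padding=1),
        )

    def forward(self, x):
        y = self.encoder(x)
        # Quantization is non-differentiable, so during training it is commonly
        # approximated by additive uniform noise; at test time we round.
        y_hat = y + (torch.rand_like(y) - 0.5) if self.training else torch.round(y)
        x_hat = self.decoder(y_hat)
        return x_hat, y_hat


def rate_distortion_loss(x, x_hat, y_hat, lam=0.01):
    # Distortion: mean squared error between the input and the reconstruction.
    distortion = torch.mean((x - x_hat) ** 2)
    # Crude rate proxy: mean latent magnitude (a real codec would use a learned
    # entropy model to estimate the number of bits needed to code y_hat).
    rate = torch.mean(torch.abs(y_hat))
    return distortion + lam * rate


if __name__ == "__main__":
    model = LearnedImageCodec()
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    x = torch.rand(4, 3, 64, 64)  # a dummy batch of 64x64 RGB images
    x_hat, y_hat = model(x)
    loss = rate_distortion_loss(x, x_hat, y_hat)
    loss.backward()
    opt.step()
    print("rate-distortion loss:", loss.item())
```

The lambda weight trades off bitrate against reconstruction quality: training separate models with different lambda values sweeps out a rate-distortion curve, analogous to the quality settings of a traditional codec.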

Original language: English (US)
Title of host publication: MM 2021 - Proceedings of the 29th ACM International Conference on Multimedia
Publisher: Association for Computing Machinery
Pages: 5683-5685
Number of pages: 3
ISBN (Electronic): 9781450386517
DOIs
State: Published - Oct 17 2021
Event: 29th ACM International Conference on Multimedia, MM 2021 - Virtual, Online, China
Duration: Oct 20 2021 - Oct 24 2021

Publication series

Name: MM 2021 - Proceedings of the 29th ACM International Conference on Multimedia

Conference

Conference: 29th ACM International Conference on Multimedia, MM 2021
Country/Territory: China
City: Virtual, Online
Period: 10/20/21 - 10/24/21

Keywords

  • image compression
  • point cloud compression
  • video compression

ASJC Scopus subject areas

  • Human-Computer Interaction
  • Software
  • Computer Graphics and Computer-Aided Design
