Wyner-Ziv coding of multiview images with unsupervised learning of disparity and Gray code

David Chen, David Varodayan, Markus Flierl, Bernd Girod

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Wyner-Ziv coding of multiview images avoids communication between the source cameras. To achieve good compression performance, the decoder must relate the source and side-information images. Since the correlation between the two images is exploited at the bit level, it is desirable to map small Euclidean distances between coefficients into small Hamming distances between bitwise codewords. This important mapping property is not achieved with the binary code but can be achieved with the Gray code. Comparing the two mappings, it is observed that the Gray code offers a substantial benefit for unsupervised learning of unknown disparity but provides limited advantage if the disparity is known. Experimental results with multiview images demonstrate that the Gray code achieves PSNR gains of 2 dB over the binary code for unsupervised learning of disparity.
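The mapping property the abstract relies on can be illustrated with a minimal sketch (not from the paper): the standard reflected-binary Gray code maps any pair of integers that differ by 1 in value (small Euclidean distance) to codewords at Hamming distance exactly 1, whereas the plain binary code can flip every bit at a carry boundary.

```python
def gray_encode(n: int) -> int:
    """Reflected-binary Gray code of a nonnegative integer."""
    return n ^ (n >> 1)

def hamming(a: int, b: int) -> int:
    """Hamming distance between the bit patterns of a and b."""
    return bin(a ^ b).count("1")

# Neighboring intensity values 127 and 128 are at Euclidean distance 1,
# but their plain binary codewords (01111111 vs 10000000) differ in all
# 8 bits, while their Gray codewords differ in exactly 1 bit.
x, y = 127, 128
print(hamming(x, y))                             # → 8
print(hamming(gray_encode(x), gray_encode(y)))   # → 1
```

This is why, as the abstract notes, bit-level correlation between source and side information is far better preserved under the Gray code when disparity (and hence the exact pixel correspondence) is still unknown.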

Original language: English (US)
Title of host publication: 2008 IEEE International Conference on Image Processing, ICIP 2008 Proceedings
Pages: 1112-1115
Number of pages: 4
DOIs
State: Published - 2008
Externally published: Yes
Event: 2008 IEEE International Conference on Image Processing, ICIP 2008 - San Diego, CA, United States
Duration: Oct 12, 2008 - Oct 15, 2008

Publication series

Name: Proceedings - International Conference on Image Processing, ICIP
ISSN (Print): 1522-4880

Other

Other: 2008 IEEE International Conference on Image Processing, ICIP 2008
Country/Territory: United States
City: San Diego, CA
Period: 10/12/08 - 10/15/08

Keywords

  • Gray code
  • Multiview images
  • Stereo vision

ASJC Scopus subject areas

  • Software
  • Computer Vision and Pattern Recognition
  • Signal Processing

