Sparse depth super resolution

Jiajun Lu, David Forsyth

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

We describe a method to produce detailed high resolution depth maps from aggressively subsampled depth measurements. Our method fully uses the relationship between image segmentation boundaries and depth boundaries. It uses an image combined with a low resolution depth map. 1) The image is segmented with the guidance of sparse depth samples. 2) Each segment has its depth field reconstructed independently using a novel smoothing method. 3) For videos, time-stamped samples from nearby frames are incorporated. The paper shows reconstruction results of super resolution from x4 to x100, while previous methods mainly work on x2 to x16. The method is tested on four different datasets and six video sequences, covering quite different regimes, and it outperforms recent state of the art methods quantitatively and qualitatively. We also demonstrate that depth maps produced by our method can be used by applications such as hand trackers, while depth maps from other methods have problems.
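
The abstract outlines a three-step pipeline: depth-guided segmentation, per-segment depth reconstruction, and (for video) accumulation of time-stamped samples from nearby frames. The sketch below only illustrates the shape of that pipeline and is not the authors' implementation: it substitutes off-the-shelf SLIC superpixels for the paper's depth-guided segmentation, uses a per-segment mean in place of the paper's novel smoothing method, skips the video step, and the function name `upsample_sparse_depth` and its parameters are hypothetical.

```python
# Illustrative sketch of the pipeline shape described in the abstract
# (NOT the method from the paper). Assumes numpy and scikit-image.
import numpy as np
from skimage.segmentation import slic

def upsample_sparse_depth(rgb, sparse_depth, n_segments=400):
    """rgb: H x W x 3 float image; sparse_depth: H x W array, NaN where unmeasured."""
    # Step 1: segment the image. The paper guides segmentation with the sparse
    # depth samples themselves; plain SLIC superpixels stand in for that here.
    labels = slic(rgb, n_segments=n_segments, compactness=10.0)

    # Step 2: reconstruct depth independently inside each segment. The paper
    # uses a novel smoothing method; the mean of the in-segment samples is a
    # crude stand-in.
    dense = np.full(sparse_depth.shape, np.nan)
    valid = ~np.isnan(sparse_depth)
    for seg_id in np.unique(labels):
        mask = labels == seg_id
        samples = sparse_depth[mask & valid]
        if samples.size:
            dense[mask] = samples.mean()

    # Segments containing no samples would need depth borrowed from neighbouring
    # segments or, per step 3 of the abstract, from time-stamped samples in
    # nearby video frames; both are omitted in this sketch.
    return dense
```

At the x100 regime discussed in the paper, only on the order of one depth sample per hundred pixels is available, which is why reconstruction is organized around image segments rather than per-pixel filtering.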

Original language: English (US)
Title of host publication: IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015
Publisher: IEEE Computer Society
Pages: 2245-2253
Number of pages: 9
ISBN (Electronic): 9781467369640
DOIs: https://doi.org/10.1109/CVPR.2015.7298837
State: Published - Oct 14 2015
Event: IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015 - Boston, United States
Duration: Jun 7 2015 - Jun 12 2015

Publication series

Name: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
Volume: 07-12-June-2015
ISSN (Print): 1063-6919

Other

Other: IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015
Country: United States
City: Boston
Period: 6/7/15 - 6/12/15

Fingerprint

Image segmentation

ASJC Scopus subject areas

  • Software
  • Computer Vision and Pattern Recognition

Cite this

Lu, J., & Forsyth, D. (2015). Sparse depth super resolution. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015 (pp. 2245-2253). [7298837] (Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition; Vol. 07-12-June-2015). IEEE Computer Society. https://doi.org/10.1109/CVPR.2015.7298837

