Automatic scene inference for 3D object compositing

Kevin Karsch, Kalyan Sunkavalli, Sunil Hadap, Nathan Carr, Hailin Jin, Rafael Fonte, Michael Sittig, David Alexander Forsyth

Research output: Contribution to journal › Article

Abstract

We present a user-friendly image editing system that supports drag-and-drop object insertion (where the user merely drags objects into the image, and the system automatically places them in 3D and relights them appropriately), post-process illumination editing, and depth-of-field manipulation. Underlying our system is a fully automatic technique for recovering a comprehensive 3D scene model (geometry, illumination, diffuse albedo, and camera parameters) from a single, low dynamic range photograph. This is made possible by two novel contributions: an illumination inference algorithm that recovers a full lighting model of the scene (including light sources that are not directly visible in the photograph), and a depth estimation algorithm that combines data-driven depth transfer with geometric reasoning about the scene layout. A user study shows that our system produces perceptually convincing results, and achieves the same level of realism as techniques that require significant user interaction.
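The final compositing step for inserted objects in systems of this kind is commonly done with differential rendering (the abstract does not name the paper's exact compositing method, so this is an illustrative sketch, not the authors' implementation). Given the photograph, a rendering of the recovered scene *with* the inserted object, a rendering *without* it, and an object mask, the composite keeps the rendered object inside the mask and adds only the object-induced change (shadows, interreflections) outside it:

```python
import numpy as np

def differential_composite(photo, render_with, render_without, obj_mask):
    """Differential-rendering composite (Debevec-style).

    photo          : HxWx3 float array, the original photograph in [0, 1]
    render_with    : HxWx3 rendering of the recovered scene WITH the object
    render_without : HxWx3 rendering of the recovered scene WITHOUT the object
    obj_mask       : HxW array, nonzero where the inserted object is visible
    """
    photo = photo.astype(np.float64)
    render_with = render_with.astype(np.float64)
    render_without = render_without.astype(np.float64)

    # Outside the mask, add only the lighting change the object causes
    # (e.g. cast shadows darken the photo where the rendering got darker).
    delta = render_with - render_without

    out = np.where(obj_mask[..., None] > 0,
                   render_with,        # object pixels: use the rendering
                   photo + delta)      # background: photo plus the change
    return np.clip(out, 0.0, 1.0)
```

Because only the *difference* between the two renderings touches the background, errors in the recovered geometry and albedo largely cancel outside the object, which is what makes the automatically estimated scene model sufficient for convincing insertions.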

Original language: English (US)
Article number: 32
Journal: ACM Transactions on Graphics
Volume: 33
Issue number: 3
DOIs: 10.1145/2602146
State: Published - May 2014


Keywords

  • Depth estimation
  • Illumination inference
  • Image-based editing
  • Image-based rendering
  • Physically grounded
  • Scene reconstruction

ASJC Scopus subject areas

  • Computer Graphics and Computer-Aided Design

Cite this

Karsch, K., Sunkavalli, K., Hadap, S., Carr, N., Jin, H., Fonte, R., ... Forsyth, D. A. (2014). Automatic scene inference for 3D object compositing. ACM Transactions on Graphics, 33(3), [32]. https://doi.org/10.1145/2602146

Automatic scene inference for 3D object compositing. / Karsch, Kevin; Sunkavalli, Kalyan; Hadap, Sunil; Carr, Nathan; Jin, Hailin; Fonte, Rafael; Sittig, Michael; Forsyth, David Alexander.

In: ACM Transactions on Graphics, Vol. 33, No. 3, 32, 05.2014.

Research output: Contribution to journalArticle

Karsch, K, Sunkavalli, K, Hadap, S, Carr, N, Jin, H, Fonte, R, Sittig, M & Forsyth, DA 2014, 'Automatic scene inference for 3D object compositing', ACM Transactions on Graphics, vol. 33, no. 3, 32. https://doi.org/10.1145/2602146
Karsch K, Sunkavalli K, Hadap S, Carr N, Jin H, Fonte R et al. Automatic scene inference for 3D object compositing. ACM Transactions on Graphics. 2014 May;33(3). 32. https://doi.org/10.1145/2602146
Karsch, Kevin ; Sunkavalli, Kalyan ; Hadap, Sunil ; Carr, Nathan ; Jin, Hailin ; Fonte, Rafael ; Sittig, Michael ; Forsyth, David Alexander. / Automatic scene inference for 3D object compositing. In: ACM Transactions on Graphics. 2014 ; Vol. 33, No. 3.
@article{f9d3a0d2b4d543398bed7651273e6d55,
title = "Automatic scene inference for 3D object compositing",
abstract = "We present a user-friendly image editing system that supports drag-and-drop object insertion (where the user merely drags objects into the image, and the system automatically places them in 3D and relights them appropriately), post-process illumination editing, and depth-of-field manipulation. Underlying our system is a fully automatic technique for recovering a comprehensive 3D scene model (geometry, illumination, diffuse albedo, and camera parameters) from a single, low dynamic range photograph. This is made possible by two novel contributions: an illumination inference algorithm that recovers a full lighting model of the scene (including light sources that are not directly visible in the photograph), and a depth estimation algorithm that combines data-driven depth transfer with geometric reasoning about the scene layout. A user study shows that our system produces perceptually convincing results, and achieves the same level of realism as techniques that require significant user interaction.",
keywords = "Depth estimation, Illumination inference, Image-based editing, Image-based rendering, Physically grounded, Scene reconstruction",
author = "Kevin Karsch and Kalyan Sunkavalli and Sunil Hadap and Nathan Carr and Hailin Jin and Rafael Fonte and Michael Sittig and Forsyth, {David Alexander}",
year = "2014",
month = "5",
doi = "10.1145/2602146",
language = "English (US)",
volume = "33",
journal = "ACM Transactions on Graphics",
issn = "0730-0301",
publisher = "Association for Computing Machinery (ACM)",
number = "3",

}

TY - JOUR

T1 - Automatic scene inference for 3D object compositing

AU - Karsch, Kevin

AU - Sunkavalli, Kalyan

AU - Hadap, Sunil

AU - Carr, Nathan

AU - Jin, Hailin

AU - Fonte, Rafael

AU - Sittig, Michael

AU - Forsyth, David Alexander

PY - 2014/5

Y1 - 2014/5

N2 - We present a user-friendly image editing system that supports drag-and-drop object insertion (where the user merely drags objects into the image, and the system automatically places them in 3D and relights them appropriately), post-process illumination editing, and depth-of-field manipulation. Underlying our system is a fully automatic technique for recovering a comprehensive 3D scene model (geometry, illumination, diffuse albedo, and camera parameters) from a single, low dynamic range photograph. This is made possible by two novel contributions: an illumination inference algorithm that recovers a full lighting model of the scene (including light sources that are not directly visible in the photograph), and a depth estimation algorithm that combines data-driven depth transfer with geometric reasoning about the scene layout. A user study shows that our system produces perceptually convincing results, and achieves the same level of realism as techniques that require significant user interaction.

AB - We present a user-friendly image editing system that supports drag-and-drop object insertion (where the user merely drags objects into the image, and the system automatically places them in 3D and relights them appropriately), post-process illumination editing, and depth-of-field manipulation. Underlying our system is a fully automatic technique for recovering a comprehensive 3D scene model (geometry, illumination, diffuse albedo, and camera parameters) from a single, low dynamic range photograph. This is made possible by two novel contributions: an illumination inference algorithm that recovers a full lighting model of the scene (including light sources that are not directly visible in the photograph), and a depth estimation algorithm that combines data-driven depth transfer with geometric reasoning about the scene layout. A user study shows that our system produces perceptually convincing results, and achieves the same level of realism as techniques that require significant user interaction.

KW - Depth estimation

KW - Illumination inference

KW - Image-based editing

KW - Image-based rendering

KW - Physically grounded

KW - Scene reconstruction

UR - http://www.scopus.com/inward/record.url?scp=84902241311&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=84902241311&partnerID=8YFLogxK

U2 - 10.1145/2602146

DO - 10.1145/2602146

M3 - Article

AN - SCOPUS:84902241311

VL - 33

JO - ACM Transactions on Graphics

JF - ACM Transactions on Graphics

SN - 0730-0301

IS - 3

M1 - 32

ER -