Automatic scene inference for 3D object compositing

Kevin Karsch, Kalyan Sunkavalli, Sunil Hadap, Nathan Carr, Hailin Jin, Rafael Fonte, Michael Sittig, David Forsyth

Research output: Contribution to journal › Article › peer-review


We present a user-friendly image editing system that supports drag-and-drop object insertion (the user merely drags objects into the image, and the system automatically places them in 3D and relights them appropriately), post-process illumination editing, and depth-of-field manipulation. Underlying our system is a fully automatic technique for recovering a comprehensive 3D scene model (geometry, illumination, diffuse albedo, and camera parameters) from a single, low dynamic range photograph. This is made possible by two novel contributions: an illumination inference algorithm that recovers a full lighting model of the scene (including light sources that are not directly visible in the photograph), and a depth estimation algorithm that combines data-driven depth transfer with geometric reasoning about the scene layout. A user study shows that our system produces perceptually convincing results, and achieves the same level of realism as techniques that require significant user interaction.
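The relighting step described above is commonly finished with differential rendering (Debevec's method), in which the scene model is rendered twice, with and without the inserted object, and the difference is added to the photograph so that shadows and interreflections carry over. A minimal NumPy sketch of that composite; the function and variable names here are illustrative, not the paper's own API:

```python
import numpy as np

def differential_composite(photo, render_with_obj, render_empty, obj_mask):
    """Differential-rendering composite of a synthetic object into a photo.

    photo           : (H, W, 3) original photograph, floats in [0, 1]
    render_with_obj : (H, W, 3) render of the estimated scene model plus object
    render_empty    : (H, W, 3) render of the estimated scene model alone
    obj_mask        : (H, W) mask, 1 where the inserted object is visible
    """
    m = obj_mask[..., None]  # broadcast mask over the color channels
    # Inside the mask: show the rendered object directly.
    # Outside: add the render difference (shadows, bounce light) to the photo.
    return m * render_with_obj + (1 - m) * (photo + render_with_obj - render_empty)
```

Note that where the two renders agree (the object casts no shadow), the photograph passes through unchanged, which is what makes the method robust to errors in the recovered albedo and geometry.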

Original language: English (US)
Article number: 32
Journal: ACM Transactions on Graphics
Issue number: 3
State: Published - May 2014


Keywords

  • Depth estimation
  • Illumination inference
  • Image-based editing
  • Image-based rendering
  • Physically grounded
  • Scene reconstruction

ASJC Scopus subject areas

  • Computer Graphics and Computer-Aided Design

