Rendering Synthetic Objects into Legacy Photographs

Kevin Karsch, Varsha Hedau, David Forsyth, Derek Hoiem

Research output: Contribution to journal › Article › peer-review

Abstract

We propose a method to realistically insert synthetic objects into existing photographs without requiring access to the scene or any additional scene measurements. With a single image and a small amount of annotation, our method creates a physical model of the scene that is suitable for realistically rendering synthetic objects with diffuse, specular, and even glowing materials while accounting for lighting interactions between the objects and the scene. We demonstrate in a user study that synthetic images produced by our method are confusable with real scenes, even for people who believe they are good at telling the difference. Further, our study shows that our method is competitive with other insertion methods while requiring less scene information. We also collected new illumination and reflectance datasets; renderings produced by our system compare well to ground truth. Our system has applications in the movie and gaming industry, as well as home decorating and user content creation, among others.

Original language: English (US)
Pages (from-to): 1-12
Number of pages: 12
Journal: ACM Transactions on Graphics
Volume: 30
Issue number: 6
DOIs
State: Published - Dec 1 2011

Keywords

  • computational photography
  • image-based rendering
  • light estimation
  • photo editing

ASJC Scopus subject areas

  • Computer Graphics and Computer-Aided Design

