An impressive paper at SIGGRAPH Asia 2011 from Kevin Karsch, Varsha Hedau, David Forsyth, and Derek Hoiem presents algorithms they've developed for inserting rendered, artificial objects into photographs of real scenes, all with a minimum of user input.

With a single image and a small amount of annotation, our method creates a physical model of the scene that is suitable for realistically rendering synthetic objects with diffuse, specular, and even glowing materials while accounting for lighting interactions between the objects and the scene. We demonstrate in a user study that synthetic images produced by our method are confusable with real scenes, even for people who believe they are good at telling the difference. Further, our study shows that our method is competitive with other insertion methods while requiring less scene information. We also collected new illumination and reflectance datasets; renderings produced by our system compare well to ground truth. Our system has applications in the movie and gaming industry, as well as home decorating and user content creation, among others.

This technology has many promising applications, only the first of which is detailed in their abstract. With a little more automation and a simple web-driven, cloud-backed system, it could become the cornerstone of many products, not the least of which would be interior decorating tools.

Be sure to watch the video below.

Rendering Synthetic Objects into Legacy Photographs from Kevin Karsch on Vimeo.

via KevinKarsch.com