A picture’s worth a thousand words, right? Well, in the modern age of digital photos, manipulation, and color correction, how many words do you think you can squeeze out of a picture? MyModernMet has a great collection of 12 fantastic pictures from some of the most talented photographers in the industry, some of which are heavily edited for content and color, that speak volumes.
While some may say that every picture tells a story, I’m of the belief that only a few can do it right. These are the masters of storytelling, the ones who can draw out an emotion in us and leave us with a lasting memory. We’ve written about some of these photographers right here at theMET. Like Erik Johansson, who stretches our imagination and turns reason upside down and inside out. Or August Bradley, who crafts amazing scenes out of his imagination and who cites authors such as Thomas Pynchon and David Foster Wallace, rather than fellow photographers, as his sources of inspiration.
PhotoModeler is a piece of software capable of taking multiple images of an object and then reconstructing it in 3D. They’ve released a free whitepaper describing the technology:
Do you need to create dense 3D point clouds and detailed surface models of physical objects, and do you want to do it in a cost-effective and efficient manner? A new class of 3D scanning technologies, called photo-based scanning, may be the solution you are looking for.
This new technology uses a standard off-the-shelf digital camera in combination with specialized software that analyzes the digital images – there is no additional hardware required. The new technology is called photo-based scanning, or photogrammetric scanning.
In this white paper we define 3D scanning, review the current methods for 3D scanning, introduce photo-based scanning, and examine the strengths and weaknesses of the various technologies.
Peter Belanger has posted a timelapse video of the creation of the recent MacWorld Magazine cover, showing not only the photography and set design but the Photoshop work in cleaning up the photographs and then manipulating them into what they want.
After working on the latest cover for Macworld Magazine I wanted to show what is involved in making a cover. I focused on the three main areas: the photography, Photoshop and design. I chose a time lapse format to convey lots of information in a small amount of time. The only drawback of time lapse is that since half a day goes by in 30 seconds, the whole process seems so easy! Lots of details were left out of the design process (like the cover meetings and rounds of layout options). I began to photograph the design process after the layouts had already been narrowed down to just three cover designs.
David McKinnon, a researcher from Queensland University of Technology, has developed a software tool called 3DSee that can take a collection of ordinary 2D photographs and process them into a 3D model with surprising accuracy.
Dr McKinnon said the software automatically locates and tracks common points between the images allowing a determination of where the cameras were when the photos were taken. This information is then used to create a 3D model from the images using graphics cards to massively accelerate the computations.
A nice application of GPGPU computing. However, not just any images will do. According to Dr McKinnon, it requires 5-15 images, each overlapping by a minimum of 80%. Essentially, it sounds like what’s needed is a video slowly panning around the object.
If the accuracy is high enough, I can envision this replacing (or supplementing) a lot of the 3D scanning technology used by the graphics and mechanical engineering communities.
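To get a feel for the kind of problem tools like 3DSee solve, here is a toy sketch of the first step: finding how two overlapping photos line up. This is not 3DSee’s actual algorithm (real photogrammetry matches sparse feature points and solves for full camera pose); the numpy code below only recovers a 2D pixel translation, by brute-force scoring candidate shifts over the region where the images overlap. All names and numbers are illustrative.

```python
import numpy as np

def estimate_offset(img_a, img_b, max_shift=10):
    """Brute-force search for the integer (dy, dx) shift of img_b
    relative to img_a, scored by mean squared difference over the
    region where the two images overlap under that shift."""
    h, w = img_a.shape
    best, best_score = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            # Overlapping window of the two images under this shift
            ya0, ya1 = max(0, dy), min(h, h + dy)
            xa0, xa1 = max(0, dx), min(w, w + dx)
            a = img_a[ya0:ya1, xa0:xa1]
            b = img_b[ya0 - dy:ya1 - dy, xa0 - dx:xa1 - dx]
            score = np.mean((a - b) ** 2)
            if score < best_score:
                best_score, best = score, (dy, dx)
    return best

# Two overlapping crops of the same synthetic "scene", offset by
# (3, 5) pixels -- the sort of common-point search a matcher starts with.
rng = np.random.default_rng(0)
scene = rng.random((64, 64))
img_a = scene[10:50, 10:50]
img_b = scene[13:53, 15:55]
print(estimate_offset(img_a, img_b))  # (3, 5)
```

The 80% overlap requirement makes sense in this light: the more shared content between neighboring frames, the more reliable the alignment search becomes.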
Robert Stadd, visual effects supervisor on Public Enemies, found himself in a quandary with the opening scene, in which John Dillinger breaks into prison to bust out his friends. The shot needed an adjacent building in the background, but there was no information on the camera arrangement or its distance from the building. The solution? Build a large tiled image that could be zoomed or warped as necessary.
In the final composite, the tiled background has been inserted into the shot. Note the size of the background is larger than in the reference photo, and it was in a different position. Only by using a high-resolution, tiled background could we ensure that it would work in the shot, from a grain size and position standpoint. Approximately eight pictures were used to tile the background, creating a file that was roughly 200MB.
He also covers some helpful tips on creating your own tiled images, such as manual controls for exposure and white balance and maintaining a 30% overlap.
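That 30% overlap figure translates directly into how many frames you need to shoot. As a back-of-the-envelope helper (the function name and all numbers here are hypothetical, not from Stadd’s article), each frame after the first only contributes the non-overlapping fraction of its field of view as fresh coverage:

```python
import math

def shots_needed(total_deg, fov_deg, overlap=0.30):
    """Number of frames to cover total_deg of scene with a camera
    whose field of view is fov_deg, keeping the recommended overlap
    fraction between neighbouring frames."""
    if total_deg <= fov_deg:
        return 1
    step = fov_deg * (1.0 - overlap)  # fresh coverage per extra frame
    return 1 + math.ceil((total_deg - fov_deg) / step)

# e.g. a full 360-degree panorama with a 50-degree lens and 30% overlap
print(shots_needed(360, 50))  # 10
```

The same arithmetic explains why Stadd’s eight-frame tile grew to roughly 200MB: overlap means you are capturing a good chunk of the scene more than once.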
A new app in the Apple App Store aims to turn the meager iPhone camera into a 3D camera by allowing you to composite pairs of images into anaglyphic or stereographic images.
To take the photos, you use one of two methods. If you’re planning on taking an anaglyph or stereogram, you take one picture, move the iPhone camera slightly to the right, and take another photo. To make a wiggle stereogram, you point directly at the same object from two slightly different vantage points. The app provides instructions on exactly how to take the two photos each time you launch it.
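The compositing step for a red-cyan anaglyph is conceptually simple, whatever this particular app does internally: take the red channel from the left-eye photo and the green and blue channels from the right-eye photo. A minimal numpy sketch (using tiny synthetic images in place of real photos):

```python
import numpy as np

def make_anaglyph(left, right):
    """Combine a left/right RGB image pair (H x W x 3, uint8) into a
    red-cyan anaglyph: red channel from the left eye, green and blue
    channels from the right eye."""
    out = right.copy()
    out[..., 0] = left[..., 0]  # swap in the left image's red channel
    return out

# Tiny synthetic stereo pair standing in for two iPhone photos
left = np.zeros((2, 2, 3), dtype=np.uint8)
left[..., 0] = 200                      # left eye: red content
right = np.zeros((2, 2, 3), dtype=np.uint8)
right[..., 1] = 90
right[..., 2] = 150                     # right eye: green/blue content
ana = make_anaglyph(left, right)
print(ana[0, 0])
```

Viewed through red-cyan glasses, each eye then sees only the image intended for it, which is what produces the depth effect.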
The app sells for $1.99 (iTunes Link). Anyone want to send us at VizWorld an anaglyphic image to show?
AutoStitch, the technology licensed to ILM for panoramic production in film, is now available on the iPhone. They’re calling it the ‘first automatic stitcher’ for the iPhone, a claim I doubt given tools like Mosaica, but it’s still pretty impressive. Using the same technology found in Autopano and other tools, it’s now on the iPhone with impressive results.
AutoStitch now brings the latest in image recognition technology to your iPhone. Stitch images in any order or arrangement, using photos taken from your iPhone’s camera. Just select a set of images from the camera roll or photo albums, and AutoStitch does the rest.
It’s now available in the iPhone App Store for $1.99.
Gigapan.org is hosting an impressive 0.29-gigapixel (290-megapixel) image of an ant (presumably dead).
This is 136 images of a whole ant (the head of which was imaged before by Jay), taken using the nanogigapan unit, magnified 800x using a Scanning Electron Microscope. The antennae were not part of the original gigapan; they were taken by hand, made of 16 pictures that were put together in Photoshop and added to the ant, which was stitched by the gigapan stitcher.
A new app for the iPhone named “Mosaica” aims to make creating montages & panoramas easier. Through a demonstration video on YouTube, they show how you can snap multiple pictures and have them stitched on the phone automatically. Then you can pan around and view them, or upload them to a webserver for sharing & higher-quality images. Supporting multiple angles and zooms (close-ups for detail, distance for context), it’s a capable tool, and the results he shows on the phone (with live processing times) are impressive.
A former venture of Sony Corp’s Kihara Laboratory, the new MotionPortrait company has developed a tool called “MotionPortrait” that can analyze a picture of a human face and animate it.
The core technology, which shares its name with the company, “MotionPortrait,” makes it possible to create 3D graphics by automatically identifying specific characteristics of a human face in a picture, such as the shapes of the eyes and mouth. The eyes and mouth can be moved at will because the computer graphics (CG) are created from a still picture.
It’s interesting technology, aimed mainly (per the article) at creating human-interaction systems for computers. You can see a demo of the technology at the MotionPortrait website. It’s impressive, but they still have a bit of an uncanny valley problem.
VizWorld.com: We cover visualization and graphics news from around the internet, including Scientific Visualization, Visual Effects, and Graphics Hardware. Read more on our About Page or learn about our Advertising Options. Get updates via Twitter from @VizWorld.