David McKinnon, a researcher from Queensland University of Technology, has developed a software tool called 3DSee that can take a collection of ordinary 2D photographs and process them into a 3D model with surprising accuracy.
Dr McKinnon said the software automatically locates and tracks common points between the images, allowing it to determine where each camera was when the photos were taken. That information is then used to reconstruct a 3D model from the images, with graphics cards massively accelerating the computations.
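To give a feel for the reconstruction step, here is a minimal sketch (not McKinnon's actual code) of the core geometric operation: once the camera positions are known, each matched feature defines a ray from its camera centre through the image point, and the 3D scene point is triangulated from those rays. The sketch below finds the midpoint of the shortest segment between two rays; the camera centres and directions are made-up illustrative values.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def triangulate(c1, d1, c2, d2):
    """Midpoint triangulation of one scene point from two rays.

    Each ray is a camera centre c plus a viewing direction d through a
    matched image feature; with noise-free data the rays intersect at
    the scene point, otherwise the midpoint of the shortest segment
    between them is a reasonable estimate.
    """
    w0 = [a - b for a, b in zip(c1, c2)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b  # zero only if the rays are parallel
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    p1 = [ci + s * di for ci, di in zip(c1, d1)]  # closest point on ray 1
    p2 = [ci + t * di for ci, di in zip(c2, d2)]  # closest point on ray 2
    return [(u + v) / 2 for u, v in zip(p1, p2)]

# Two cameras a metre apart, both seeing a feature at (0.5, 0, 1)
point = triangulate([0, 0, 0], [0.5, 0, 1], [1, 0, 0], [-0.5, 0, 1])
print(point)  # -> [0.5, 0.0, 1.0]
```

A full pipeline repeats this for thousands of matched points per image pair, which is why offloading the work to graphics hardware pays off.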
A nice application of GPGPU computing. However, not just any images will do. According to Dr McKinnon, the tool requires 5-15 images, each overlapping its neighbours by at least 80%. Essentially, it sounds like what's needed is video slowly panning around the object.
If the accuracy is high enough, I can envision this replacing (or supplementing) a lot of the 3D scanning technology used by the graphics and mechanical engineering communities.