Folks from the University of North Carolina at Chapel Hill have combined several Microsoft Kinects into an incredibly inexpensive 3D teleconferencing system.

The setup uses up to four Kinect sensors at a single endpoint, capturing images from various angles before they are processed using GPU-accelerated filters. The video captured by the cameras runs through a series of steps that fill holes and adjust colors to produce a textured mesh. Once the video streams have been processed, they are overlaid on one another to form a complete 3D image.
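The article doesn't describe how the hole filling works, but Kinect depth maps typically mark invalid pixels with zero. As a rough illustration only (not UNC's actual filter, which runs on the GPU), a naive CPU-side pass that fills holes by averaging valid neighbors might look like this:

```python
import numpy as np

def fill_depth_holes(depth, passes=3):
    """Fill invalid (zero) depth pixels with the mean of their valid
    4-neighbors. Repeated passes grow the fill inward from hole edges.
    Purely illustrative -- a real pipeline would do this on the GPU."""
    d = depth.astype(float).copy()
    for _ in range(passes):
        invalid = d == 0
        if not invalid.any():
            break
        # Pad, then stack shifted views so every pixel sees its 4 neighbors.
        padded = np.pad(d, 1, mode="edge")
        neighbors = np.stack([
            padded[:-2, 1:-1],   # up
            padded[2:, 1:-1],    # down
            padded[1:-1, :-2],   # left
            padded[1:-1, 2:],    # right
        ])
        valid = neighbors > 0
        counts = valid.sum(axis=0)
        sums = np.where(valid, neighbors, 0.0).sum(axis=0)
        # Average only where at least one neighbor holds valid depth.
        fill = np.divide(sums, counts,
                         out=np.zeros_like(sums), where=counts > 0)
        d[invalid] = fill[invalid]
    return d
```

Multiple passes let the fill creep into larger holes one ring of pixels at a time, which is why small occlusion gaps close quickly while big missing regions need more iterations.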

Honestly, that description doesn’t do it justice. They’re using GPUs to process the data from four Kinects in real time, performing mesh alignment, color correction, and surface smoothing. The results are then sent to another screen where they can be viewed in 3D (with glasses, of course), while another Kinect tracks the viewer’s head and adjusts the display to create an even richer experience.
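The head-tracked display is a classic off-axis (asymmetric frustum) projection: the tracked head position shifts the view frustum so the screen behaves like a window onto the remote scene. As a sketch under assumed conventions (head position in meters relative to the screen center, z pointing from the screen toward the viewer; not the team's actual code), the frustum bounds at the near plane can be computed like this:

```python
def off_axis_frustum(head, screen_w, screen_h, near):
    """Compute (left, right, bottom, top) near-plane frustum bounds for a
    viewer at `head` = (x, y, z), given in meters relative to the screen
    center, with z > 0 toward the viewer. Illustrative sketch only."""
    hx, hy, hz = head
    # Project the screen edges onto the near plane through the eye point.
    left   = near * (-screen_w / 2 - hx) / hz
    right  = near * ( screen_w / 2 - hx) / hz
    bottom = near * (-screen_h / 2 - hy) / hz
    top    = near * ( screen_h / 2 - hy) / hz
    return left, right, bottom, top
```

When the viewer is centered the frustum is symmetric; as the head moves sideways the frustum skews the opposite way, which is what produces the motion-parallax effect the Kinect head tracker drives.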

Check out the amazing video below.

via Amazing 3d telepresence system – Hack a Day.