New press release from Vicon today, announcing three new products they’ll be demonstrating on the floor of SIGGRAPH next week. First up is their new Blade 2 software (shown to the right), which offers real-time motion capture with a new engine called “Axiom” that can handle multiple characters in real time. They’ll also be demonstrating two new tools: a handheld interaction device and a new head-mounted facial tracking system named “Cara”.
Imogen Moorhouse, COO at Vicon, says: “The product line-up we’re showing at SIGGRAPH this year is the result of years of research and development. Feedback from customers during our Alpha project for Blade has been invaluable in developing a product that, we think, will set a new industry standard for real-time motion capture.”
Get all the details after the break, or at Booth #729 next week.
Next week at SIGGRAPH, OptiTrack will be on-site demonstrating their usual motion tracking cameras. But this year, they’ll be announcing a new series of cameras specifically designed for large-volume, full-scale productions. The press release doesn’t include many details (choosing instead to name-drop a few dozen companies and films that have used their equipment), and I don’t see anything about it on their website yet. Guess we’ll just have to wait.
At SIGGRAPH 2012, OptiTrack will launch a brand new series of motion capture cameras specifically tailored to large-volume, full-scale productions. Mocap demos will be conducted in booth #610, and will also include a look at the Flex 13, which debuted in March 2012 and is OptiTrack’s solution for high precision, small and medium-volume tracking applications. Autodesk, Halon Entertainment, Fabric Engine and Oblong Industries will join OptiTrack on stage to demonstrate how OptiTrack virtual and motion capture cameras can be painlessly integrated into both established and custom-made pipelines.
OptiTrack will be demonstrating a pair of new motion capture systems at GDC 2011 this year, both offering full 120 Hz tracking with six degrees of freedom, all for under $2,000.
Designed for desktop-friendly motion capture, the Duo and Trio offer the power of multi-camera, six degrees of freedom (DOF) object tracking in a single, plug-and-play package. Each tracking system is self-contained and factory calibrated for out-of-the-box tracking that is simple to use, but still offers the leading accuracy and flexibility of optical tracking technology.
What I really love about this new offering is the new V120:Trio (shown above). This bar combines two “sensor” cameras for 3D tracking (on the ends) with a traditional color camera in the middle, making it perfect for augmented-reality applications that need to track objects accurately in 3D space and combine that data with live video of the scene.
These sensor bars are used widely in image-guided surgery applications (where I used to work), a market currently dominated by NDI (Northern Digital). Offering similar technology at a vastly lower price point could really shake up a space (biomedical) where budget constraints are becoming a bigger issue every year. The biggest open question is accuracy.
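To see why the two-sensor layout matters, here is a minimal sketch of the stereo-triangulation principle behind a dual-camera bar like the Trio. Every number below (focal length, baseline, pixel coordinates) is a made-up illustration, not actual V120:Trio calibration data, and a real system would use full camera models rather than this idealized rectified pair.

```python
# Sketch of rectified stereo triangulation: two horizontally offset
# cameras see the same marker at different pixel columns, and that
# disparity encodes depth. All constants here are hypothetical.

FOCAL_PX = 800.0    # focal length in pixels (assumed)
BASELINE_M = 0.58   # spacing between the two sensors, metres (assumed)

def triangulate(x_left, x_right, y):
    """Return (X, Y, Z) in metres for a marker seen at pixel column
    x_left in the left image and x_right in the right image.
    The pair is assumed rectified, so the row y is shared."""
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("marker must appear further left in the left image")
    z = FOCAL_PX * BASELINE_M / disparity   # depth from disparity
    x = x_left * z / FOCAL_PX               # lateral offset
    y_m = y * z / FOCAL_PX                  # vertical offset
    return (x, y_m, z)

# With these assumed numbers, 40 px of disparity puts the marker 11.6 m away.
print(triangulate(120.0, 80.0, 30.0))
```

The accuracy question I raised above falls out of this math directly: depth error grows as disparity shrinks, so a short fixed baseline limits how precisely a compact bar can track at surgical working distances.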
If you’re not terribly familiar with how some of the big fight scenes in movies like Iron Man 2 are made, check out this video from G4TV, in which Candace Bailey visits Skywalker Ranch, suits up in a full mocap suit, and takes a virtual camera for a test drive.
3dWorld has a review of the iPi markerless motion capture system, which uses low-resolution video cameras (like webcams or the Sony PlayStation Eye) to capture motion. They target 640×480 60fps video, primarily because higher resolutions would require additional processing (putting it beyond real time right now), but they only sell the software, requiring you to purchase your own compatible cameras and connecting equipment. It seems rather similar to the Microsoft Kinect. So, how does it work?
The good news is that the approach does work: after a rather clumsy experience with the hardware and lack of preview, the experience of using iPi Desktop Motion Capture paid off, yielding some pretty solid full-body mocap.
Small delicate moves and bold aggressive moves were all captured well. Some floor slippage was seen, but nothing that can’t be cleaned up in post in MotionBuilder. However, only BVH, Collada and SMD formats are supported; FBX support would definitely be a welcome addition.
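Since the review calls out BVH as one of the supported export formats, here is a minimal sketch of what that format looks like and how its skeleton section can be read. The sample hierarchy is hand-written for illustration, not actual iPi output, and a real converter would also parse the MOTION section with per-frame channel data.

```python
# A tiny hand-written BVH HIERARCHY section. BVH declares a skeleton
# of named joints with offsets and rotation channels, followed by a
# MOTION section (omitted here) holding one line of values per frame.

SAMPLE_BVH = """HIERARCHY
ROOT Hips
{
  OFFSET 0.0 0.0 0.0
  CHANNELS 6 Xposition Yposition Zposition Zrotation Xrotation Yrotation
  JOINT Spine
  {
    OFFSET 0.0 10.0 0.0
    CHANNELS 3 Zrotation Xrotation Yrotation
    End Site
    {
      OFFSET 0.0 12.0 0.0
    }
  }
}
"""

def joint_names(bvh_text):
    """Return the joint names declared in a BVH hierarchy, in order."""
    names = []
    for line in bvh_text.splitlines():
        parts = line.split()
        if parts and parts[0] in ("ROOT", "JOINT"):
            names.append(parts[1])
    return names

print(joint_names(SAMPLE_BVH))  # ['Hips', 'Spine']
```

The plain-text structure is part of why BVH is so widely supported; the FBX format the reviewer wishes for is a binary, SDK-driven format and considerably more work to target.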
If you’ve seen the new TRON: Legacy film, then you’ve seen the impressive lightcycle race scenes and the fantastic crashes that result from them. That work was done by VICON House of Moves at their 26,000 square foot studio over two weeks of rehearsal and motion capture work.
The “TRON: Legacy” production crew spent five days rehearsing on the HOM stages followed by seven days of motion capture shooting. The crew shot high-impact stunt work along with more subtle body and finger poses, and movements of actors sitting on and riding the film’s signature Light Cycles at HOM’s 26,000 square foot studio. HOM was hired to contribute to the project based on the success of their previous collaboration with Grid Productions on motion capture shoots for the first “TRON: Legacy” teaser trailer that screened at Comic-Con 2008.
The OptiTrack Insight VCS Motion Capture System, demonstrated at SIGGRAPH, is now shipping to customers. For a price as low as $199, their hardware and software solution introduces inexpensive, high-quality motion tracking with plugins for Maya, MotionBuilder, and other popular packages.
“Not only does the Insight VCS allow us to service the needs of VFX supervisors and directors, but also the art department,” continues Hameed. “Production designers are finding it a useful tool to help design film sets and demonstrate ideas on the fly. It’s like virtual set building in real-time!”
Di-o-matic is showing off an upcoming product called ‘Maskarad’ that enables facial capture without markers or dedicated scanning equipment, working from any regular video.
Maskarad uses state-of-the-art technology to automatically locate and track head motion as well as facial expressions and precise lip movements. No messy or cumbersome face-marker setup, no expensive dedicated hardware, and no need to outsource any part of your pipeline. Just a simple video file featuring an actor’s performance is all you need to create lifelike facial animations.
Of course, this won’t work with just any video, as it pretty much requires a straight-on shot of the actor’s face. However, taking that straight-on shot, mapping the result onto a CG actor, and then using that in a 3D scene on top of more traditional mocap should be trivial.
Avatar has already broken even, proving that audiences apparently like James Cameron’s motion captured “giant smurfs”, as many pundits called the Na’vi after pre-screenings, and that motion capture is a viable alternative to pure CG artistry. This has a few industry people wondering if perhaps this is the first step towards replacing traditional actors altogether.
With the arrival of Avatar we’ve seen a huge amount of publicity surrounding the revolutionary capture system developed by Cameron’s team, based on tiny cameras which pick up subtle facial movements. The film-maker is promising that more than 95% of his actors’ performances are ending up there on screen. The difference is that this time around, we may just have to start believing. There is a feral intensity, in particular, to Zoe Saldana’s alien heroine Neytiri which could not have come from anyone else but the actor. The creature she plays is all grace and fire: it may just be the first motion captured performance that will stand the test of time.
Of course, an actor will be needed so long as motion capture is involved, but not necessarily an A-list multi-million dollar actor. Anyone willing to endure the time in the mocap suit would do.
Update: @m0rph brings up an excellent point: rather than putting actors out of work, another possibility is putting character animators out of work. If no one is needed to rig and animate biped models, that would obsolete a large portion of the graphics community.
Avatar has had a huge impact on the visual effects and digital cinematography industry, bringing previs, CG rendering, motion capture, and direct video into a single seamless real-time package that opens a whole new world to directors and filmmakers. AWN has a great writeup on what James Cameron and Rob Legato were able to accomplish, and what it might mean for the future.
Thus, thanks to the virtual cinematography workflow created by Rob Legato, allowing Cameron to observe directly on an LCD monitor how the actors’ CG characters (or avatars) interact with the CG Pandora in realtime and direct scenes as though he were shooting live action, digital and live action moviemaking have become one. In other words, everything you've heard or read about the new digital paradigm or 5D has now become a reality. Which also means that pre and post are obsolete, compositing will have to be redefined and so might previs.