Shotgun Software today announced the release of Shotgun 5.0, boasting a totally rebuilt UI designed for ease of use.
“For the 5.0 release, we turned our focus to artists and supervisors, designing simple and visual tools that connect them to important project details and to each other,” said Don Parker, Co-Founder and CEO, Shotgun Software. “This is an important step towards providing an off-the-shelf toolset that equally meets the needs of creative artists and data-centric facility managers around the single goal of doing great work while running a solid business.”
The Harvard Business Review has a nice interview with Linda Boff of GE on their use of visualizations.
As a large multinational company, we do have many audiences. And they range from employees and retirees to retail investors and thought leaders. Initially we thought about this — and I think to a large degree continue to — as a way to do external storytelling, but we have found that it works on so many different levels.
As a result, we have used data visualization in places as diverse as our annual reports or our annual report app, which is obviously geared toward investors. We’ve used it with thought leaders. When we released a white paper last fall on the industrial Internet, data visualization was a great way to tell that story.
At SIGGRAPH 2013 you’ll see a new paper entitled “Position-Based Fluids” that shows an impressive new type of liquid physics simulation, accelerated using NVIDIA’s PhysX.
Because PBD uses an iterative solver, it can maintain incompressibility more efficiently than traditional SPH fluid solvers. It also has an artificial pressure term which improves particle distribution and creates nice surface tension-like effects (note the filaments in the splashes). Finally, vorticity confinement is used to allow the user to inject energy back to the fluid.
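The artificial pressure term mentioned above is defined in the Position-Based Fluids paper as s_corr = -k · (W(r) / W(Δq))ⁿ. Here is a minimal sketch using the standard poly6 smoothing kernel; the function names and parameter values (k, n, Δq) are illustrative assumptions, not the paper’s reference implementation:

```python
import numpy as np

def poly6(r, h):
    """Poly6 smoothing kernel (common in SPH), zero outside support radius h."""
    if r >= h:
        return 0.0
    return (315.0 / (64.0 * np.pi * h**9)) * (h**2 - r**2)**3

def s_corr(r, h, k=0.1, n=4, dq=0.2):
    """Artificial pressure term: s_corr = -k * (W(r) / W(dq*h))**n.
    Added to each particle's position correction, it keeps neighbors
    from clumping and produces the surface-tension-like filaments
    seen in the paper's splashes."""
    return -k * (poly6(r, h) / poly6(dq * h, h)) ** n
```

At r = Δq·h the term is exactly -k, and it vanishes outside the kernel support, so it only nudges close neighbors apart.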
InsideHPC brings us the shocking news that SGI, the oldest of graphics companies that hasn’t done any graphics lately, is returning to its roots a bit and announcing their “new” VUE suite of products. I put “new” in quotes because I distinctly remember the VUE products from about 5 years ago. But insightful John Leidel noticed that perhaps this announcement went a bit deeper:
I spent about ten minutes flying through presentations from their VP of viz, Bob Pette, when I noticed something interesting. The logo was no longer the singular sgi cube [affectionately called the "Bug Logo"], but rather it actually contained “Silicon Graphics.” Plot thickens. Is SGI going back to its roots, not only in graphics, but in corporate logo as well?
I’ve not seen much success in the VUE line in the past, as it seemed more gimmicky than functional. However, maybe this new version will change that. Their SoftVUE, PowerVUE, RemoteVUE trio seems to be a stab at systems like HP’s Scalable Visualization, which I looked at several years ago and passed on. The other tools are, from what I remember, primarily hardware-accelerated live-video viewing tools, so they’re honest in saying you can view data from any source, because you’re getting those sources as live video streams in your view, which you can then overlay, stretch, and warp around. It’s a neat way to merge disparate systems like Google Maps and live video streams, or CAD and simulation outputs, but there’s no cross-talk between the applications.
Hopefully this new release signals not only new features for these tools, but a newfound thrust within SGI toward reinstating the “G” in their name.
The upcoming Stereoscopic Displays and Applications Conference (SD&A) is hosting their first ever Game Contest where the winner with the best Stereo Game will walk away with $1000.
The Stereoscopic Displays and Applications Conference is pleased to announce the first SD&A Stereoscopic Game Competition, to be held at the conference in February 2014. The aim is to encourage the creative use of stereoscopic depth in exciting new game designs. A panel of expert judges will review the game designs and the winner will receive a cash prize of $1000.
Full rules are at the website below, but interested competitors must register by July 22nd.
A new patent from Philips proposes a new storage format and reconstruction algorithm for stereoscopic 3D data, based on mixing a depth map with the video stream and allowing a processor to reconstruct as many stereo views as necessary.
stereoscopic data may comprise a so-called depth map that is associated with a visual image. A depth map can be regarded as a set of values, which may be in the form of a matrix. Each value relates to a particular area in the visual image and indicates a distance between a virtual observer and an object in that area. The area concerned may comprise a single pixel or a cluster of pixels, which may have rectangular shape, or any other shape for that matter. A processor is capable of generating different views, which are required for stereoscopic rendering, on the basis of the visual image and the depth map associated therewith.
At first glance this seems a bit pointless for movies, until you consider the number of autostereoscopic displays that can present multiple views beyond the classic stereo pair.
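The patent describes the image-plus-depth idea in general terms rather than a specific algorithm, but the usual way such data is rendered is depth-image-based rendering: shift each pixel horizontally by a disparity derived from its depth value. The toy sketch below is my own illustration of that idea, not Philips’ patented method; the occlusion handling and hole-filling strategy are simplifying assumptions:

```python
import numpy as np

def synthesize_view(image, depth, shift_scale):
    """Minimal depth-image-based rendering sketch.

    image:       (H, W) array of intensities
    depth:       (H, W) array, larger values = nearer to the viewer
    shift_scale: maximum disparity in pixels (its sign picks the eye)
    """
    h, w = image.shape
    out = np.full((h, w), np.nan)
    for y in range(h):
        # splat far pixels first so nearer pixels overwrite them
        for x in np.argsort(depth[y]):
            nx = x + int(round(shift_scale * depth[y, x]))
            if 0 <= nx < w:
                out[y, nx] = image[y, x]
        # crude disocclusion fill: carry the last seen value along the row
        last = 0.0
        for x in range(w):
            if np.isnan(out[y, x]):
                out[y, x] = last
            else:
                last = out[y, x]
    return out
```

An autostereoscopic renderer would call this N times with different `shift_scale` values, one per viewing zone, which is exactly why shipping a single image plus depth map is more compact than shipping N baked views.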
Over at FastCompany, they have an interview with Jonathan Schwabish of the Congress Budget Office on their new push toward the use of Infographics to educate congressional staff.
Before attending a one-day Edward Tufte course a few years back, Schwabish had no background in visual communication. But that one seminar “opened his eyes” about the way the CBO was presenting their research to their client. Schwabish snowballed his interest into a basic graphic design course, and at the course’s conclusion, the teacher wanted Schwabish to create a pamphlet. Schwabish designed an infographic for CBO instead. And since then, he’s been spending about 25% of his time making infographics alongside an expanded team of colleagues.
The graphics themselves are based on some large-scale behavioral and economic simulations, and so far they’re not showing any great success in changing economic policy. But as they keep pushing forward, hopefully that will change.
Princeton University is holding a new “Art of Science” competition, allowing students and researchers to contribute scientific visualizations of their work in art-gallery form, competing for (rather meager, unfortunately) prizes.
The three prize-winners will share $500, divided into shares of $250, $154.51 and $95.49 in accordance with the aesthetically pleasing golden ratio. Another 40 images are included in Princeton’s Art of Science 2013 exhibit, which opened on Friday in the atrium of Princeton’s Friend Center. The works were chosen from 170 images submitted from 24 different departments across campus.
The theme was “Connections”, focusing on cross-disciplinary research. Follow the link to the full gallery of some of the best work.
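Those oddly precise dollar amounts really do follow from the golden ratio: each prize is the previous one divided by φ, and since 1/φ + 1/φ² = 1, the three shares sum to exactly twice the first one. A quick check:

```python
# Three prizes in geometric progression with ratio 1/phi.
# Because 1 + 1/phi + 1/phi**2 == 2, the shares sum exactly to the total.
phi = (1 + 5 ** 0.5) / 2          # golden ratio, ~1.618
total = 500.0
shares = [round(total / 2 / phi ** i, 2) for i in range(3)]
print(shares)  # [250.0, 154.51, 95.49]
```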
Amidst the turmoil of closures and bankruptcies plaguing the VFX industry, it seems newsworthy to hear that Shade VFX in Santa Monica, CA is actually thriving and just moved into a new space doubling their size. What are they doing differently? Easy: Making good business deals.
Godwin admits that in some cases Shade has been outbid by firms in London or Vancouver that offer tax breaks for studios, but he said that being in California has its advantages. Producers have told him that they prefer being able to meet with artists in person instead of needing to have conference calls with visual-effects teams working in far-flung locations around the globe.
One of the big nail-biting scenes of the new Iron Man 3 is the “barrel of monkeys” freefall, where Iron Man has only seconds to save a crew of 13 freefalling from a now-crashing Air Force One. At first glance, you may think it’s some amazing CG and bluescreen work, but the reality is far more impressive. Involving a full team of skydivers and multiple jumps, the entire scene was actually done in-air and then touched up for the final results.
“I’ve worked on movies in the past where we’ve done fake free fall sequences, with vertical wind tunnels, people on wires, but by actually shooting it, you get the visceral, kinetic camera work that comes with actual free fall photography,” said Digital Domain VFX supervisor Erik Nash, who is an experienced sky diver himself. “It’s something that’s incredibly difficult to fake — the high-frequency camera shake that’s inherent to free fall photography. If you start with something photographed, it’s real, it’s believable and even if you change everything about it you’ve got a foundation.”