Cloud Computing?

At the recent HPC360 event, several people showed up talking about using “cloud resources” for HPC research.  Running big simulation codes in the cloud isn’t a problem; in fact, it’s not much different from how it’s done now.  The real sticking point is doing the analysis and visualization afterward, with both the data and the compute resources in the cloud.  There aren’t many such resources available right now, and the tools are a bit confusing.  Dassault Systemes’s Matt Dunbar said it best:

As Dunbar stated, “doing actual batch simulation in the cloud is reasonably straightforward, but doing 3D graphics post-processing is something that remains a question mark for us. There are a number of ways we can do that, but right now we’re trying to decide how best to do that.” This is a difficult decision: software architects must weigh a possible performance hit from using utility resources against the longer queue waits, and therefore slower turnaround, of using their own workstations.

This is a big area of interest for me.  I really thought the Sun Visualization System was a great start, but it was a bit before its time and died in the Oracle acquisition (if not before).  Products like the TACC EnVision and Longhorn suite are a great step in the right direction, putting existing applications on remote resources through a single web client.

Eventually, though, we’re going to need smarter applications that can handle the kinds of integrity and latency problems that come with super-huge runs.  Tools like EnSight and ParaView are already working on scaling to these larger systems, but they still don’t gracefully handle the death of a node or a graphics resource.  Of course, some will say that’s an MPI problem or an OS problem, and they’re right.  Exascale computing has lots of challenges, and I just hope that visualization doesn’t get left behind as it has in the past.
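To make the MPI point concrete, here’s a minimal sketch (my own illustration, not how EnSight or ParaView actually work) of just the first step an application would need: standard MPI aborts the entire job on any error, so a tool has to install MPI_ERRORS_RETURN on its communicator and check return codes before it can even notice that a rank has gone away, let alone recover from it. The variable names and the recovery message are hypothetical placeholders.

```c
/* Minimal sketch: letting an MPI application see a communication failure
 * instead of being killed by the default MPI_ERRORS_ARE_FATAL handler.
 * Detection only; real recovery (re-partitioning data around the lost
 * rank) is outside what plain MPI provides. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    /* Swap the default abort-on-error behavior for error codes we can check. */
    MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);

    int rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Stand-in for a rendered or analyzed partial result on this rank. */
    double partial = (double)rank;
    double composited = 0.0;

    int rc = MPI_Reduce(&partial, &composited, 1, MPI_DOUBLE, MPI_SUM,
                        0, MPI_COMM_WORLD);
    if (rc != MPI_SUCCESS) {
        char msg[MPI_MAX_ERROR_STRING];
        int len = 0;
        MPI_Error_string(rc, msg, &len);
        /* A smarter tool would drop the failed rank and continue with a
         * partial image rather than exiting. */
        fprintf(stderr, "rank %d: reduce failed: %s\n", rank, msg);
    } else if (rank == 0) {
        printf("composited value: %f\n", composited);
    }

    MPI_Finalize();
    return 0;
}
```

Even that much only gets you detection; redistributing the dead rank’s data and keeping the compositing consistent is where the real work lies, and it depends on the MPI and OS layers surviving the failure in the first place.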

via HPC in the Cloud: Weighing the Queue, Evaluating the Utility.