It’s no secret that the most powerful supercomputers in the world lie in the hands of the National Science Foundation, the NSF. John West (of InsideHPC) mentioned a paper by Larry Smarr, a big name in HPC circles, which talks about what the NSF has gotten right and wrong over the duration of its supercomputing centers program. A few things to call out for us Viz folks, first a “Good”:

Drove Scientific Visualization. The need for visualization of the massive datasets generated by the NSF centers drove the development of computer graphics teams at a number of centers. The concept of data-driven scientific visualization quickly swept the academic community, but also had a major impact, largely through SIGGRAPH, on Hollywood and later the gaming community. For instance, Stefen Fangmeier, who was NCSA scientific visualization project manager in 1987, went on to spend over 15 years as a visual effects supervisor at Industrial Light and Magic, working on such films as Terminator 2, Jurassic Park, Dreamcatcher, Perfect Storm, and Master and Commander.

So we have the NSF to thank for a lot. But it’s not all good, as evidenced by this excerpt from “The Ugly”:

Lack of balanced user-to-HPC architecture. From the beginning of the NSF centers program, a basic architectural concept was building a balanced end-to-end system connecting the end user with the HPC resource. Essentially, this was what drove the NSFnet build-out and the strong adoption of NCSA Telnet, allowing end users with Macs or PCs the ability to open up multiple windows on their PCs, including the supercomputer and mass storage systems. Similarly, during the first five years of the PACI, both NPACI and the Alliance spent a lot of their software development and infrastructure developments on connecting the end-user to the HPC resources. But it seems that during the TeraGrid era, the end-users only have access to the TG resources over the shared Internet, with no local facilities for compute, storage, and visualization that scale up in proportion with the capability of the TG resources. This sets up an exponentially growing data isolation of the end users as the HPC resources get exponentially faster (thus exponentially increasing the size of data sets the end-user needs access to), while the shared Internet throughput grows slowly if at all.
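
To put some rough numbers on that mismatch, here’s a quick back-of-the-envelope sketch. The dataset sizes and link speed are made-up assumptions for illustration, not figures from Smarr’s paper: if output datasets double every year along with the machines, while the end user’s shared-Internet throughput stays flat, the time just to move results home keeps doubling too.

    def transfer_hours(dataset_tb, link_gbps):
        """Hours to move dataset_tb terabytes over a link_gbps link (decimal units)."""
        bits = dataset_tb * 8e12            # 1 TB = 8 x 10^12 bits
        seconds = bits / (link_gbps * 1e9)  # 1 Gbps = 10^9 bits per second
        return seconds / 3600.0

    dataset_tb = 1.0   # assumed year-0 output dataset: 1 TB (hypothetical)
    link_gbps = 1.0    # assumed fixed shared-Internet throughput: 1 Gbps (hypothetical)

    for year in range(6):
        size = dataset_tb * 2 ** year  # dataset doubles as the HPC resource doubles
        print("year %d: %6.1f TB -> %6.1f hours over the fixed link"
              % (year, size, transfer_hours(size, link_gbps)))

With those assumptions, a 1 TB result takes a couple of hours to pull back in year 0, but by year 5 the 32 TB result ties up the same link for about three days, which is exactly the “data isolation” Smarr is describing.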

In short:

  • Thank the NSF for pushing Scientific Visualization to deal with massive datasets & create realistic visuals
  • But they kinda dropped the ball and aren’t properly handling SciVis requirements now.

For any readers out there working at the NSF, what do you think? Agree or disagree?

Smarr on “the good, the bad, and the ugly” in the NSF supercomputer program | insideHPC.com.
