The NCSA has just issued a press release showcasing some of the first scientific visualizations coming from the new Blue Waters machine. Focusing on star formation, supernovae, and Big Bang research, it’s a great collection of images and details on how petascale visualization is done.
These collaborations have provided insight into the size and type of data that each team will produce, as well as their individual analysis needs. It became immediately clear that the size of the data, the variety of data formats, and the domain-specific analysis needs would be the defining factors in formulating an effective visualization strategy. Two visualization software packages, VisIt and ParaView, were chosen to satisfy this need. Both suites offer the ability to ingest large volumes of data, use distributed-memory parallelism, and read a variety of data formats. These software suites were installed on the Blue Waters Early Science System.
VisIt 2.3 is now available, boasting new cumulative queries in selections, and new file readers for Velodyne, CLE AMR, Nek5000, and some CALE formats. Support for Xdmf has improved, and they’ve got some exciting new features for parallel users:
VisIt’s X launching and parallel GPU acceleration features have been rewritten to mesh better with modern cluster installations. See the wiki for more information.
VisIt can now start a remote compute engine through a gateway machine. This capability has been implemented by using ssh to log in to the gateway machine and then using ssh again from there to log in to the remote machine. It can supply a password to the gateway machine, but not to the remote machine. To enable launching a remote compute engine in this manner, enable the Use gateway toggle button and enter the name of the gateway machine in the text field next to it, on the Host settings tab of the Host profiles window.
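Under the hood this is just a standard two-hop ssh. The same pattern can be reproduced by hand with an ssh config entry like the one below — a sketch with placeholder host names, not VisIt’s actual implementation:

```
# ~/.ssh/config -- manual equivalent of VisIt's "Use gateway" option.
# "gateway.example.org" and "cluster.example.org" are placeholders.
Host cluster-login
    HostName cluster.example.org
    # First ssh to the gateway, then tunnel through it to the cluster
    ProxyCommand ssh -W %h:%p gateway.example.org
```

With that in place, `ssh cluster-login` transparently hops through the gateway, which is essentially what VisIt does when it launches the remote compute engine. (Newer OpenSSH versions offer the shorter `ProxyJump` directive for the same thing.)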
Looking forward to trying out the Gateway option; it looks like it could solve a lot of problems in my environment.
In just a few weeks, the CINECA supercomputing centre in Bologna, Italy will be hosting a Workshop on Visualization of Large Scientific Data.
The scientific community is presently witnessing unprecedented growth in the quality and quantity of data coming from simulations and real-world experiments. Moreover, writing the results of numerical simulations to disk files has long been a bottleneck in high-performance computing. To effectively access and extract the scientific content of such large-scale data sets (with sizes often measured in hundreds or even millions of gigabytes), appropriate tools and techniques are needed. In-situ visualization libraries enable the user to connect directly to a running simulation, examine the data, perform numerical queries, and create graphical output while the simulation executes. This addresses the needs of extreme-scale simulation by eschewing the need to write data to disk. The workshop will bring together researchers, developers, and computational scientists for cross-training and to discuss recent developments and future advancements in remote and in-situ visualization.
I see that staff from Kitware (VTK, ParaView) will be there, and it seems they’ll be talking a lot about VisIt as well. Both fabulous tools, but I find it interesting that CEI/EnSight isn’t mentioned anywhere…
This month’s issue of the IEEE Computer Graphics & Applications journal is dedicated to “Ultrascale Visualization”, all about visualizing massive datasets across some of the largest computers in the world. In particular, this issue contains the article about the massive “Trillion Zone” run of VisIt that we discussed a while back.
A series of experiments studied how visualization software scales to massive data sets. Although several paradigms exist for processing large data, the experiments focused on pure parallelism, the dominant approach for production software. The experiments used multiple visualization algorithms and ran on multiple architectures. They focused on massive-scale processing (16,000 or more cores and one trillion or more cells) and weak scaling. These experiments employed the largest data set sizes published to date in the visualization literature. The findings on scaling characteristics and bottlenecks will help researchers understand how pure parallelism performs at high levels of concurrency with very large data sets.
The paper is available from the IEEE Computer Society for $19, but I was lucky enough to get a review copy. Read on to see some details and my thoughts.
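To put those numbers in perspective, here’s a quick back-of-the-envelope calculation (my own arithmetic, not from the paper) of what weak scaling at that size means per core:

```python
# Rough per-core load for the "Trillion Zone"-style runs described above.
# The 16,000-core and one-trillion-cell figures come from the abstract;
# the division is just illustrative arithmetic.
cells = 10**12          # one trillion cells total
cores = 16_000          # lower bound on concurrency in the experiments
per_core = cells // cores
print(f"{per_core:,} cells per core")  # 62,500,000 cells per core
```

In other words, each core is responsible for tens of millions of cells — which is exactly why I/O and memory footprint, not just raw compute, dominate at this scale.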
Empisys generated this movie using Weather Research and Forecasting (WRF) simulation data published by the University Corporation for Atmospheric Research (UCAR) in Boulder, Colorado. The weather model simulates a large hurricane in the Gulf of Mexico. The visualization was generated using Lawrence Livermore National Lab (LLNL) VisIt running on a Windows HPC Server 2008 R2 cluster with 64 processors.
After we reported on the inclusion of VisIt into the SuperComputing 2009 Student Cluster Competition, Hank Childs stopped by to post some information.
As far as the benchmarks: the evaluation criteria are still being determined. From an HPC perspective, we obviously want to stress I/O, compute, and communication.
My initial thoughts were to upsample a toy data set into a large one, then have them run a script to make a movie. I was thinking a three-frame movie with volume rendering, contouring, and a moving slice.
He follows it up with an invitation for suggestions and ideas on what to include in the contest criteria. So, here’s your chance: What would make for a good VisIt benchmark?
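As a rough illustration of the upsampling idea Hank describes — this is my own sketch, not part of any actual contest harness — here’s how you might size a toy grid up to a target cell count by tiling it equally along each axis:

```python
import math

def replication_per_axis(toy_dims, target_cells):
    """Return (copies per axis, resulting cell count) needed to grow a
    toy grid to at least target_cells by tiling it along every axis."""
    toy_cells = math.prod(toy_dims)
    ndim = len(toy_dims)
    # A tiny epsilon guards against floating-point error pushing an
    # exact root (e.g. 100.0) just above the next integer.
    copies = math.ceil((target_cells / toy_cells) ** (1.0 / ndim) - 1e-9)
    return copies, toy_cells * copies ** ndim

# A 100^3 toy data set upsampled to the one-trillion-cell scale:
copies, total = replication_per_axis((100, 100, 100), 10**12)
print(copies, total)  # 100 1000000000000
```

One hundred copies per axis turns a million-cell toy problem into a trillion-cell benchmark — which neatly stresses I/O and memory without requiring a real simulation to produce the data.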
The rules have been set for this year’s SuperComputing Student Cluster competition, and this year it’s all about “Go Green”.
This year’s SC09 Student Cluster Competition is built around a “Go Green!” theme, tying it in with this year’s show. Just like the previous competition, this year’s rules have capped the overall power requirements of each team’s gear to a pair of 120-volt, 20-amp circuits. Each circuit will have a soft limit of 13 amps. Penalties will be assessed if a respective team trips an alarm on the metered power circuits. Each team’s hardware, along with the metered power units, must fit into a single rack.
The teams & vendors have been set and work is underway. The benchmarks consist of the usual simulation codes, but a great addition this year is VisIt. I don’t see any details yet on what they’ll need to do inside of VisIt, but the fact that it made it into the application run category is a huge win for both the developers of VisIt and visualization scientists everywhere.
On Monday afternoon I attended the VisIt Tutorial, taught by the ever-knowledgeable Sean Ahern and Hank Childs. The tutorial covered all the major parts you could hope for: basic usage, advanced functionality, expressions, client-server, analysis, and finally development. I thought I’d cover some of the highlights for you in case you couldn’t make it yourself.
VisWeek is the annual Visualization conference where researchers, users and enthusiasts of data visualization meet to present their work and discuss new ideas. This year the conference will be in Atlantic City, NJ from October 11th-16th and promises to be a very exciting event.
The guys at SuperComputing have published a list of the tutorials that will be underway on November 15th and 16th, and there are a few visualization tutorials you may want to check out if you’re in the Portland, OR area.