One of the big booths on the SC09 floor belonged to Microsoft, which opened the event by announcing a second beta of Windows HPC Server 2008 and a related cluster-aware version of Excel 2010.

Microsoft has been ridiculed by more than a few old-school HPC types with jokes like “fastest blue screen” or “crashing at a teraflop,” but its HPC Server business has been making significant inroads in industries like finance, insurance, and the stock exchanges. The main reason is that the Windows server tools mesh nicely with existing Windows-based systems (Excel, SharePoint, SQL Server, etc.). The newest version of HPC Server adds a few features that might win it a wider audience among classical HPC users.

The new version boasts a much-improved MPI implementation, integrated at the OS level, bringing its performance up to par with Linux solutions on heavily distributed benchmarks such as SPEC. In addition, Microsoft has implemented its own job queueing and control system that works with various other Microsoft products to give multiple users access to the cluster with a minimum of fuss.

In particular, Microsoft announced Excel 2010, which will allow you to execute your spreadsheets across the cluster. To many of you that may sound odd, but plenty of shops (again, primarily in financial services) have their entire pricing and forecasting simulation built into a single massive spreadsheet as a collection of Excel formulae and VBA. Some of these spreadsheets can crunch on data for hours, if not days, and typically need to run many times with slightly varying input parameters to generate statistical analyses. With HPC Server, all of this computation can be pushed onto the cluster with a minimum of effort, because Excel already knows the data flow and dependencies and can parallelize the work effectively. Add the existing integration among Microsoft products, and you can easily envision a SharePoint application (popular in many corporate environments) submitting a large job to the Windows cluster from an extremely low-end, possibly in-the-field device such as a netbook or PDA.
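The Excel integration itself is proprietary, but the workload it targets — the same workbook evaluated many times over slightly varying inputs — is classic embarrassingly parallel work. Here is a minimal sketch of that pattern in plain Python, where the pricing function and its parameters are hypothetical stand-ins for a spreadsheet recalculation, not anything from Microsoft's API:

```python
from multiprocessing import Pool

def price_scenario(params):
    """Stand-in for one full spreadsheet recalculation.
    In the real setup each run would be a workbook evaluated on a
    cluster node; here it is just a toy pricing formula."""
    rate, volatility = params
    return round(100.0 * (1 + rate) - 50.0 * volatility, 2)

if __name__ == "__main__":
    # Sweep slightly varying input parameters -- exactly the shape
    # of work a cluster-aware Excel would farm out to nodes.
    scenarios = [(r / 100.0, v / 10.0)
                 for r in range(1, 4) for v in range(1, 3)]
    with Pool(processes=4) as pool:
        results = pool.map(price_scenario, scenarios)
    for s, p in zip(scenarios, results):
        print(s, "->", p)
```

Each scenario is independent, so the only coordination needed is scattering inputs and gathering results — which is why a job scheduler can handle it with so little user effort.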

To take a slight detour, one other interesting development comes from a collaboration between Microsoft and NVIDIA. GPU computing on Windows has been gaining real traction in fields as varied as gaming and financial simulation. It works well enough as is, but anyone using CUDA on Windows has probably hit the problem where a long-running kernel is interpreted by the OS as a hung device and “reset.” NVIDIA and Microsoft have collaborated on a new driver specifically for GPU compute products (like Tesla) that presents the card not as a display device but as a generic I/O device, fixing this problem.
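Before such a driver, the usual workaround was to split one long computation into many short kernel launches so that no single launch outlasts the OS watchdog. A sketch of that chunking pattern, in pure Python with no GPU required (the limit value and the per-chunk work are illustrative assumptions, not real driver behavior):

```python
import time

# Assumed illustrative limit: Windows resets a display GPU whose
# kernel appears unresponsive for more than a couple of seconds.
WATCHDOG_LIMIT = 2.0  # seconds

def process_chunk(data):
    # Stand-in for one short GPU kernel launch over one slice.
    return [x * x for x in data]

def run_in_chunks(data, chunk_size):
    """Split one long computation into many short launches so no
    single 'kernel' runs long enough to trip the watchdog."""
    out = []
    for i in range(0, len(data), chunk_size):
        start = time.monotonic()
        out.extend(process_chunk(data[i:i + chunk_size]))
        # Each individual launch must finish well under the limit.
        assert time.monotonic() - start < WATCHDOG_LIMIT
    return out

print(run_in_chunks(list(range(10)), 4))
```

A compute-only driver removes the need for this dance entirely, since a card that is not a display device has no screen to keep responsive.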

The Microsoft booth hosted a wide variety of vendors demonstrating on the platform, but most relevant to visualization experts was the Kitware station, running ParaView in client/server mode with the server being a small Windows HPC Server cluster back at Kitware HQ in New York. The resulting dataset was fully interactive and usable on the show floor, rendering at 3–5 fps and running flawlessly. The irony of a fantastic free visualization tool running in the Microsoft booth confounded many visitors, I'm told.

The potential of Windows HPC Server is best exemplified by the upcoming TSUBAME 2.0 system being constructed at the Tokyo Institute of Technology in Japan. From HPCwire's recent coverage of a presentation about the system:

According to Matsuoka, the next generation machine, TSUBAME 2.0, will be a 3 petaflop machine made of next-generation x86 CPUs — not sure if it’s Xeons or Opterons — and NVIDIA Fermi Tesla devices. He thinks the power draw will be in the neighborhood of 1 MW. (For comparison, the 1.76 Linpack petaflops on Jaguar draws close to 7 MW.) The second-generation TSUBAME will also incorporate between 500 TB and 1 PB of SSD storage, and the whole thing is supposed to fit in around 65 racks. They’re scheduling deployment for October 2010.

With an estimated 3 petaflops from a hybrid CPU/GPU design, and the entire system running HPC Server, TSUBAME 2.0 could put Microsoft at the #1 spot on next year's TOP500 list by a healthy margin.

All of you Linux programmers may want to dust off your old Visual Studio installs, as it looks like Microsoft is making a strong return to the HPC space traditionally owned by the likes of SGI, Sun, IBM, and Cray.

Read more about Windows HPC Server here.