A new press release from NVIDIA discusses how Santos, an Australia-based oil and gas company, is using Quadro Plex GPU virtualization to bring high-powered GPU capabilities to thin-client workstations.
Santos uses IBM x3650 M3 servers paired with NVIDIA Quadro Plex scalable visualization systems. Together, the 12 servers spread across its offices can serve more than 600 users at a time, providing a high-performance 3D Linux production environment that users can access from any Santos office on a standard Windows notebook PC with no 3D capabilities of its own. Rather than putting a high-end workstation on every desk to handle the 3D rendering and calculations, each server maximizes performance by load-balancing the four GPUs in its attached Quadro Plex to handle hundreds of 3D render requests from multiple users at once.
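For a rough idea of what that load-balancing amounts to on a single server, here is a minimal sketch of round-robin dispatch across every GPU the box can see. This is not Santos' actual rendering stack: a trivial CUDA kernel stands in for a user's render request, and the request count and buffer sizes are made up for illustration.

```cpp
// Hypothetical sketch: round-robin dispatch of independent user jobs across
// all GPUs attached to one server. A trivial CUDA kernel stands in for the
// real 3D render work; names and sizes are illustrative only.
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

__global__ void fakeRenderJob(float* buf, int n, float userParam) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) buf[i] = userParam * i;   // stand-in for per-pixel work
}

int main() {
    int deviceCount = 0;
    cudaGetDeviceCount(&deviceCount);    // e.g. the 4 GPUs in a Quadro Plex
    if (deviceCount == 0) { printf("no GPUs found\n"); return 1; }

    const int numRequests = 100;         // pretend 100 user render requests
    const int n = 1 << 20;
    std::vector<float*> devBuf(deviceCount, nullptr);

    // One scratch buffer per GPU.
    for (int d = 0; d < deviceCount; ++d) {
        cudaSetDevice(d);
        cudaMalloc(&devBuf[d], n * sizeof(float));
    }

    // Round-robin: request r goes to GPU r % deviceCount.
    for (int r = 0; r < numRequests; ++r) {
        int d = r % deviceCount;
        cudaSetDevice(d);
        fakeRenderJob<<<(n + 255) / 256, 256>>>(devBuf[d], n, (float)r);
    }

    // Wait for every GPU to drain its queue, then clean up.
    for (int d = 0; d < deviceCount; ++d) {
        cudaSetDevice(d);
        cudaDeviceSynchronize();
        cudaFree(devBuf[d]);
    }
    printf("dispatched %d requests across %d GPUs\n", numRequests, deviceCount);
    return 0;
}
```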
Many people would call this a “private cloud”, but it’s really just classic, old-school virtualization technology. Beefed up with some new kit from NVIDIA, though, I’m impressed to hear it can service 600 users simultaneously.
Get all the details after the break.
Many people don’t realize just how much data goes into finding the next big oil field. Oil and gas companies spend millions scouring the globe and running seismic surveys to find out what’s under our feet, and then spend days, weeks, even months analyzing those surveys to find something useful. Take this example:
For example, the average ship running seismic gear has between 20,000 and 25,000 sensors on board, and you typically use several ships in concert to survey an area. This will yield anywhere from 50 to 200TB of data per run and take five to seven days of solid processing on a large number of systems to get results. If you ramp up the resolution, it can take 15,000-20,000 compute nodes running days or weeks to complete the job.
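To put that in perspective, here’s a quick back-of-the-envelope calculation. The sensor and ship counts come from the quote above; the sample rate, sample size, and hours of shooting per day are my own assumptions, so treat the output as a ballpark rather than a real survey figure.

```cpp
// Back-of-the-envelope estimate of raw seismic data volume. Sensor and ship
// counts are taken from the quoted article; sample rate, sample size, and
// shooting hours per day are assumptions for illustration only.
#include <cstdio>

int main() {
    const double sensorsPerShip = 22500;   // midpoint of 20,000-25,000
    const double ships          = 3;       // "several ships in concert"
    const double samplesPerSec  = 500;     // assumed 2 ms sample interval
    const double bytesPerSample = 4;       // assumed 32-bit samples
    const double hoursPerDay    = 24;      // assumed round-the-clock shooting

    double bytesPerDay = sensorsPerShip * ships * samplesPerSec *
                         bytesPerSample * hoursPerDay * 3600.0;
    double tbPerDay = bytesPerDay / 1e12;

    printf("~%.1f TB of raw samples per day\n", tbPerDay);
    printf("~%.0f TB over a ~6-day run (midpoint of the 5-7 days quoted)\n",
           tbPerDay * 6);
    return 0;
}
```

With those assumptions the estimate lands around 70 TB for a run, squarely in the 50 to 200 TB range the article quotes.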
Here at GTC, the “Oil & Gas” track had presentations discussing how companies have successfully integrated GPUs into their workflows. They’ve seen a 5-fold increase in performance, resulting in a 6-fold decrease in overall cost, just by porting their already embarrassingly parallel codes to CUDA.
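To show what “embarrassingly parallel” means in practice, here is a toy sketch of such a port, my own illustrative code rather than anything shown at GTC: a per-sample gain correction where every output element depends only on its own input, so the serial CPU loop maps onto a CUDA kernel with one thread per sample and no communication between threads.

```cpp
// Toy illustration of an "embarrassingly parallel" port: a per-sample gain
// correction where each output depends only on its own input, so the serial
// loop becomes a CUDA kernel one-for-one. Entirely illustrative code.
#include <cstdio>
#include <vector>
#include <cmath>
#include <cuda_runtime.h>

// Original CPU version: one independent operation per sample.
void gainCpu(const float* in, float* out, int n) {
    for (int i = 0; i < n; ++i)
        out[i] = in[i] * sqrtf((float)(i + 1));   // crude spreading-loss gain
}

// CUDA port: the loop body becomes the kernel, one thread per sample.
__global__ void gainGpu(const float* in, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = in[i] * sqrtf((float)(i + 1));
}

int main() {
    const int n = 1 << 20;
    std::vector<float> in(n, 1.0f), outCpu(n), outGpu(n);

    gainCpu(in.data(), outCpu.data(), n);

    float *dIn, *dOut;
    cudaMalloc(&dIn, n * sizeof(float));
    cudaMalloc(&dOut, n * sizeof(float));
    cudaMemcpy(dIn, in.data(), n * sizeof(float), cudaMemcpyHostToDevice);
    gainGpu<<<(n + 255) / 256, 256>>>(dIn, dOut, n);
    cudaMemcpy(outGpu.data(), dOut, n * sizeof(float), cudaMemcpyDeviceToHost);

    printf("sample check: cpu=%f gpu=%f\n", outCpu[12345], outGpu[12345]);
    cudaFree(dIn);
    cudaFree(dOut);
    return 0;
}
```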
via GPUs slick up with oil sleuths • The Register.
Dell now has a 16-GPU PCIe Expansion Chassis for sale, enabling you to install up to 16 GPUs and hook them all up to a single computer. The design of the device was pushed by the oil and gas industry, which has really embraced GPGPU computing as a way to accelerate its massive dataset analysis.
I thought it was really interesting that when an oil and gas customer came to Dell and asked for a chassis solution for GPUs, their “GPU-to-server” ratio requirement grew from 2:1 at the start all the way up to 4:1 (four GPUs per server).
Presumably this ratio was determined by testing and tuning their GPGPU application, or it might simply have been that the chassis made it practical to attach four GPUs.
The oil and gas industry has always loved GPU technology, first for the ability to visualize and render their massive datasets interactively, and now for its amazing ability to run their massive image-analysis kernels at unheard-of speed. The sheer quantity of GPUs is only partially driven by compute power, though; I bet it’s mainly driven by memory requirements (drop eight of the new Quadro 6000s, at 6GB each, in there and get access to 48GB of video memory).
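If you’re curious how much aggregate device memory a box like that actually exposes, the standard CUDA device query will tell you per GPU. Here’s a quick sketch (nothing specific to the Dell chassis, just cudaGetDeviceProperties summed over all visible devices):

```cpp
// Sum the device memory visible across every GPU in the machine, e.g. to see
// whether a dataset fits in aggregate video memory. Standard CUDA device
// query, nothing specific to any particular chassis.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);

    size_t totalBytes = 0;
    for (int d = 0; d < count; ++d) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, d);
        totalBytes += prop.totalGlobalMem;
        printf("GPU %d: %s, %.1f GB\n", d, prop.name,
               prop.totalGlobalMem / 1073741824.0);
    }
    printf("aggregate: %.1f GB across %d GPUs\n",
           totalBytes / 1073741824.0, count);
    return 0;
}
```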
via What Is Your Application’s GPU-to-CPU Ratio? – Blog – Pixel I/O.
A recent issue of Scientific Computing World has an article interviewing Laurent Billy, CEO of Visualization Sciences Group (VSG), on some of the uses of GPUs in the oil and gas industry. In particular, it gets into the benefits of GPUs’ low power consumption compared with similarly powerful CPUs, and some of the new capabilities that enables:
He adds that many companies are now looking towards moving data processing operations nearer to the drilling location, on a ship, or even on the drilling platform itself, where power and space are even more constrained. “Typically, they put the data onto a number of hard drives and fly it by helicopter back to the mainland, where it is loaded into a computer. They compute whatever they are working on, and then they often have to send the results back in the same manner! If they could do all of the processing on site, they’d be saving a lot of time and money, not only in terms of just moving the data, but also in terms of avoiding idle time on the drilling platform while they wait for the results.”
Slashdot | NSF Gives Supercomputer Time For 3D Model of Spill
Computerworld | Researchers race to produce 3D models of BP oil spill
Acting within 24 hours of receiving a request from researchers, the National Science Foundation late last week made an emergency allocation of 1 million compute hours on a supercomputer at the Texas Advanced Computing Center at the University of Texas to study how the oil spreading from BP’s gusher will affect coastlines.
The goal is to produce models that can forecast how the oil may spread in environmentally sensitive areas by showing in detail what happens when oil interacts with marshes, vegetation and currents.
What may be just as important are models that simulate what could happen if a hurricane carried the oil miles inland, said researchers in interviews.
This is the best use of the government’s scientific funds on this horrible disaster yet. I’m looking forward to the images and analyses that come out of the model runs.
A new report from Francisco Ortigosa of Repsol YPF talks about the problems oil and gas companies have visualizing data from below the ocean floor, particularly in the US Gulf of Mexico. Their solution? Combining the MareNostrum supercomputer with Cell processors (the same chips found in the PS3 and Roadrunner) to create the “Kaleidoscope Project”.
The Kaleidoscope Project encompasses simultaneous innovation in hardware and software to achieve a petascale solution to seismic imaging using off-the-shelf technology. The software research focuses on the quality of the algorithms, avoiding the shortcuts and tradeoffs that are common when computing power is lacking; Kaleidoscope aims to ensure the maximum possible imaging quality regardless of the compute power required. And speed and power, the two main factors for the project to succeed, are delivered at low cost because the hardware comes from a mass market.
via Seismic visualization supercomputer brings subsalt data into the light.