This week is the NVidia GPU Technology Conference, and NVidia is kicking it off with a huge announcement of interest to anyone doing GPU development: the world’s first Eclipse-based integrated development environment that supports Linux and MacOS development. For the last several years, Linux GPU programmers have had to make do with basic command-line tools while the Windows world delighted in tools like Visual Studio and Nsight. With this new development, CUDA and NVidia GPU programming is now on even (or at least closer to even) footing.
“Previously, debugging required dedicated systems that were often expensive and time consuming to configure,” said Tony Tamasi, senior vice president of content and technology at NVIDIA. “Now, any system with an NVIDIA GPU that supports debugging can be used without any additional cost or system upgrades, resulting in significant cost and time savings.”
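For anyone who hasn’t done device-side debugging before, here’s a minimal sketch of the kind of code this opens up on Linux and Mac. The saxpy kernel is a toy example of mine, and the -g -G flags are just the standard nvcc debug workflow, not anything specific to the new IDE.

```cuda
// Toy kernel for illustration only -- the usual target for device-side debugging.
// Build with host and device debug info so cuda-gdb (or an IDE driving it)
// can set breakpoints inside the kernel:
//   nvcc -g -G saxpy.cu -o saxpy
#include <cstdio>

__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];   // handy spot for a device breakpoint
}

int main()
{
    const int n = 1 << 20;
    float *x, *y;
    cudaMalloc(&x, n * sizeof(float));
    cudaMalloc(&y, n * sizeof(float));
    cudaMemset(x, 0, n * sizeof(float));
    cudaMemset(y, 0, n * sizeof(float));

    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize();

    cudaFree(x);
    cudaFree(y);
    printf("done\n");
    return 0;
}
```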
Get a free demo on the show floor or at www.nvidia.com/paralleldeveloper. Get the full press release after the break.
The new SolidWorks 2012 offers some limited GPU features focused on making the visuals pop a little more than classic CAD packages. Over at SolidSmack they take it for a test drive with one of the lower-end professional cards, the Quadro 2000, and find it works surprisingly well.
The image below is a screenshot of a data set from NVIDIA shown in SolidWorks with RealView graphics on. RealView graphics utilize the GPU to render the semi-realistic graphics on the SolidWorks screen. The other window is the PhotoView 360 Preview window. PhotoView 360 is 100% CPU-based rendering and doesn’t task the GPU, so for PV360 rendering your benefits come from more multi-threaded CPU cores. The SolidWorks models are all CPU as well. The GPU does little for processing the model, so more GPUs won’t gain you any added performance.
Of course, it sounds like SolidWorks isn’t really pushing the GPU that hard, focusing only on some nice rendering features in the realtime viewport. High-end renderings are still classic CPU raytracing, and none of the software seems to use any GPGPU features, so the lower-end cards make for a nice inexpensive way to add some more “oomph” to your workstation.
NVidia, for its part, is quick to point out the growing trend of designers using post-rendering tools like KeyShot and Bunkspeed to do their high-end renderings, which will definitely take advantage of higher-end Quadros.
via Can SolidWorks 2012 Spit the NVIDIA Quadro 2000 GPU Juice? – SolidSmack.com.
AMD has just announced a new embedded GPU targeted at signage and industrial spaces that comes with the impressive capability of driving four displays and supporting Eyefinity.
Targeted at casino gaming, digital signage, instrumentation and industrial control systems, the AMD Radeon E6460 GPU sets a new bar for features and performance in an entry-level embedded GPU, with broadly scalable graphics and multimedia performance and a planned five years of supply availability (hence the longevity in the press release title).
It also includes HDMI 1.4, stereoscopic video, and DisplayPort 1.2. Not bad for an embedded chip.
via Entry-Level E6460 Embedded GPU support up to four simultaneous displays using Eyefinity | FireUser Blog.
Jon Peddie has a new free analysis of the GPU market available, covering everything from old integrated units to new hybrid units. It includes lots of historical data and some predictions, but I found the image above particularly interesting.
In it he proposes that the integrated space (currently dominated by Intel) will quickly disappear amid the growth of hybrid systems like Fusion & Sandy Bridge. That may not surprise many, but combine it with the fact that so many of these systems go into servers or embedded designs, never actually using the graphics capability available, and the end result is that the discrete GPU, theorized by many to be dying, will actually be around for quite a while to come.
Read his paper for all the details.
via An Analysis of the GPU Market – Jon Peddie Research Analyst Presentations.
NVidia has just announced a new GPU for laptops called the GTX 580M, based on the Fermi architecture and boasting the highest performance ever offered in a notebook.
The first notebook PC to feature the GeForce GTX 580M, the Alienware M18x offers the option of two GeForce GTX 580M GPUs in one system for up to double the gaming performance, using NVIDIA SLI® technology. Not to be outdone, the Alienware M17x will offer the GeForce GTX 580M along with NVIDIA Optimus™ technology and will deliver 5 hours of battery life in Facebook, and 100 frames per second performance in Call of Duty: Black Ops.
It comes with every single feature NVidia offers: 3D Vision, 3DTV Play, Optimus, CUDA, Verde, and PhysX.
In other news, they also announced the GTX 570M.
via Fastest. Notebook GPU. Ever. – NVIDIA Newsroom.
iSGTW has a great writeup from Jan Zverina on the advantages of CPUs and GPUs, correctly seeing that each has its own areas of expertise and use and that neither will die out completely. They look at recent advances in the TeraGrid systems and how GPUs are offering huge gains in a few areas. Toward the end they speak with some of the developers of the AMBER computational chemistry code.
“GPUs are, for the first time, giving us the increases in capability we have been desperate for since the beginning of the multicore era,” says Walker. “I’m confident that we will soon be achieving throughput with GPU-enabled AMBER that is at least an order of magnitude better than we could ever hope to achieve with CPU-based clusters.”
via GPUs versus CPUs, Part 1 | iSGTW.
A new whitepaper from Intel brings in some statistics and stories from Luxology, Luxion, and Modo on the power of CPUs for ray tracing and how they can smoke any GPU on the market with CPU-only solutions.
“Modern GPUs offer a brute force solution to ray tracing, but the memory available to GPUs is relatively limited compared to the system memory available to 64-bit CPUs such as Intel Core i7 and Xeon processors. That means that GPUs typically can’t handle the huge scene files required in full-scale production rendering, which may involve tens of millions of polygons and hundreds of high-resolution texture maps. And CPUs offer greater flexibility in terms of shading complexity and plug-in shaders, which may or may not have been ported to run on a GPU.”
These are the same arguments I’ve been hearing for the last year or so. And I have to admit they’re right, if a bit short-sighted. It’s my belief that most of the arguments they use are going to fall apart soon.
- They always talk about the power of Moore’s law in CPUs. Well, that same law applies to GPUs too; they’re going to get faster just like CPUs will. Even more so, most likely, since GPU makers not only optimize individual cores but also add more cores at a rate that outpaces CPUs.
- They always talk about memory limitations. There was a time when CPUs had rather restrictive memory limitations (the fabled “640K ought to be enough for anybody” comment?). GPU memory will continue to grow. In fact, Sandy Bridge and Fusion offer the first step towards eliminating the distinction between GPU and CPU memory.
- They always talk about the limited instruction set. This one isn’t likely to change, and it will always be a hindrance to GPU computing. However, newer algorithms come along at a steady pace showing that you don’t really need the kind of complex branching mechanisms CPUs have, since the GPU has enough horsepower to just compute both sides of the condition and drop the unnecessary one (see the sketch just after this list).
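To make that last point concrete, here’s a minimal, hypothetical CUDA sketch of the “compute both sides and drop one” idea. The kernel and the two stand-in computations are invented for illustration, not taken from the whitepaper.

```cuda
// Hypothetical example: avoid warp divergence by evaluating both sides
// of a data-dependent condition and selecting the one each thread needs.
__global__ void branchless_threshold(const float *in, float *out,
                                     int n, float cutoff)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    float v = in[i];

    // Divergent version (threads in a warp take different paths):
    //   if (v > cutoff) out[i] = f(v); else out[i] = g(v);

    // Branchless version: both paths are computed, the unneeded result is dropped.
    float a = v * v;            // stand-in for the "expensive" true branch
    float b = sqrtf(fabsf(v));  // stand-in for the "expensive" false branch
    out[i] = (v > cutoff) ? a : b;
}
```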
In fact, I think within the next 5 years we may see the distinction between CPU and GPU disappear almost entirely, as they both wind up on the same die (similar to how Processor and Math Co-Processor eventually merged several years ago).
It’s a good whitepaper though, full of concrete numbers on unsuccessful attempts to GPU-ize code and on the benefits achieved from using some of Intel’s newest CPU-optimization technology.
Check it out and see what you think.
via Why CPU is better than GPU for rendering from Intel with Luxology, Keyshot and Maxwell. – SolidSmack.com.
Last year, NVIDIA told the world about its upcoming GPUs for 2011 and 2013, codenamed Kepler and Maxwell, respectively. Kepler will be released sometime in 2011 and will be manufactured on a 28nm process; it should be approximately 2.7 times faster than the Fermi C2070.
The follow-on GPU to Kepler will be Maxwell, which will be released sometime in 2013 and manufactured on a 22nm process. Maxwell should be approximately 7.6 times faster than the Fermi C2070.
NVIDIA has also told us about Project Denver, which combines a GPU and an ARM CPU in one. The question is, when will that be available? Will it be with Kepler, or with Maxwell? Hexus.net has provided the answer in an interview with NVIDIA’s Tegra General Manager, Mike Rayfield.
Lastly we asked about Project Denver: the surprising announcement that NVIDIA will be designing a CPU in partnership with ARM, with a view to using it in high-end computers. We asked Rayfield to elaborate.
“As well as licensing Cortex A15, we also have an architectural license with ARM to produce an extremely high performance ARM CPU, which will be combined with NVIDIA GPUs for super-computing,” he said. When we asked for timescales, Rayfield revealed: “The Maxwell generation will be the first end-product using Project Denver. This is a far greater resource investment for us than just licensing a design.”
Hexus also speculates that NVIDIA may launch Tegra 3 at Mobile World Congress next month. Tegra is, of course, a system-on-a-chip developed for mobile devices such as smartphones.
via Exclusive: NVIDIA’s Tegra 3 primed for MWC launch @ Hexus.net.
Many people don’t realize just how much data goes into finding the next big oil field. Oil companies spend millions scouring the globe and running seismic surveys to find what’s under our feet, and then spend days, weeks, even months analyzing those surveys to find something useful. Take this example:
For example, the average ship running seismic gear has between 20,000 and 25,000 sensors on board, and you typically use several ships in concert to survey an area. This will yield anywhere from 50 to 200TB of data per run and take five to seven days of solid processing on a large number of systems to get results. If you ramp up the resolution, it can take 15,000-20,000 compute nodes running days or weeks to complete the job.
Here at GTC in the “Oil & Gas” track, there were presentations discussing how these companies have had success integrating GPUs into their workflows. They’ve seen a 5-fold increase in performance, resulting in a 6-fold decrease in overall cost, just by porting their already embarrassingly parallel codes to CUDA.
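As a rough illustration of why this kind of workload ports so naturally, here’s a purely hypothetical CUDA sketch; the kernel, the names, and the trivial gain operation are mine for illustration, not anything from the presentations.

```cuda
// Hypothetical embarrassingly parallel seismic kernel: one thread per sample,
// with no dependencies between threads, which is why porting to CUDA is so direct.
__global__ void apply_gain(const float *traces, float *out,
                           int n_traces, int samples_per_trace, float gain)
{
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    int total = n_traces * samples_per_trace;
    if (idx < total)
        out[idx] = traces[idx] * gain;   // purely per-sample work
}

// Launch with one thread per sample, e.g.:
//   int total = n_traces * samples_per_trace;
//   apply_gain<<<(total + 255) / 256, 256>>>(d_in, d_out,
//                                            n_traces, samples_per_trace, 1.5f);
```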
via GPUs slick up with oil sleuths • The Register.
The Mozilla Foundation is trying to match Firefox against other browsers feature-for-feature and has announced some of what we’ll see in the next few versions. In addition to Chrome-style process separation, they are working to add GPU acceleration:
GPU acceleration is another hot topic and here Mozilla hopes to offer Direct2D support in an update for Gecko 1.9.3, slated for October launch. Unfortunately this release doesn't include the Direct2D acceleration; it will be added later on, but hopefully not that much later.
Odd to see them choosing Direct2D, a uniquely Microsoft technology, over something like OpenGL. Hopefully they’ll offer similar GPU acceleration options for Linux & OSX.
If you’re interested in the multi-process separation technology, à la Google Chrome, check out ‘Firefox Lorentz’, detailed over at Lifehacker today.
via Firefox to get separate processes, Direct2D acceleration.