Recently, Ars Technica finally found the JPR report on Multi-GPU from the beginning of this month, and smashed it as pretty unlikely due to manufacturing constraints and consumer interest.  Well, I disagree, and in this feature I’ll lay out why I think that Multi-GPU penetration of 30% is not only likely, it’s inevitable.

Read the full article and chime in with your thoughts after the break.

There are a few main points to get out of the way first, and they’re primarily matters of terminology.

What is Multi-GPU?

First we need to specify what, exactly, Multi-GPU is.  The landscape is changing fast, and this line is getting increasingly difficult to draw.  So I present a few questions to you:

  • The NVidia GTX295 – two GPUs in a single-card form factor.  Is that Multi-GPU?
  • Intel’s Larrabee – CPU and GPU integrated on a single chip.  If I drop in an extra video card, is that Multi-GPU?
  • NVidia’s Quadro Plex – internally it’s 4 Quadro cards.  Is that Multi-GPU?
  • The CausticOne – such a radically different approach, how do you even define Multi-GPU with that?
  • NVidia Tegra – with all those cores on a single chip, is that Multi-GPU?

Think on it for a minute, while you consider this:

What is a “Computer”?

I know, you think it’s a stupid question.  But consider these for a moment:

  • SmartPhones
  • TiVo
  • AppleTV
  • Netbooks
  • iPod Touch & Zune HD

Are these computers?  Or just fancy electronic devices?

The Landscape of Computing

(Chart from TechFlash)

No doubt, the landscape of computing is changing.  More and more people are switching to netbooks as they realize that 99% of their computer usage is nothing more than email and web surfing.  SmartPhones, like the Palm Pre, Android handsets, and the iPhone, are gaining market penetration at an alarming rate.  We are in a situation where consumers are replacing their “computers” with simpler, faster, and more portable devices.

As this trend continues, we’ll see classically defined “computers” becoming a smaller percentage of the overall market.  The market that remains will be dominated by people in desperate need of performance: people doing animation and rendering work, people in high-performance computing, and people who really need large memory and fast processors.

That is the crux of the inevitability argument.

CPUs have stopped “growing”

There was a time when, if you needed a faster processor, you could just wait a year and get a painless 30-50% speed boost from a new one.  Those times are long gone, replaced with meager 5-10% improvements.  The chip industry (AMD and Intel) is aware of this, so it has gone a different route: multi-core.  Now, instead of seeing faster processors (1 GHz, 2 GHz, 3 GHz), you see more cores (dual-core, quad-core, 6-core, 8-core) coming out every year.  Chip manufacturers ran into the limits of heat, power, and quantum effects in silicon, and could no longer turn extra transistors into higher clock speeds, so they spend those transistors on more cores instead.

Graphics chipmakers work from a completely different model: rather than general-purpose computing chips, they build chips specifically designed for raw throughput with a minimum of control logic.  As such, they’ve been multi-core from the beginning, although the instruction set is far more restrictive than a traditional processor’s.  Where Intel is working on 8-core chips, ATI is working on 800-core chips.

GPUs will stop “growing” too

But just like CPUs ran into physical limitations, so will GPUs.  GPU manufacturers are already having difficulties with their latest, smaller manufacturing processes, and are compensating by adding more cores.  Some manufacturers, like NVidia, have gone so far as to pack multiple GPU dies onto a single card (the GTX295).  Just as Intel and AMD compensated for a stall in single-core performance by adding more cores, GPU manufacturers are doing the same with entire GPUs.

We are already at the point where, if your visualization won’t run fast enough, you run it on multiple GPUs.  Large government labs with terabyte datasets have been doing this for years with ParaView and VisIt across clusters, and they now sometimes find they can run the same jobs on a single large shared-memory machine.  When the rendering isn’t fast enough, what do you do?  You get more video cards.
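To make that concrete, here’s a minimal sketch of how an application can discover every GPU in a box and hand each one a slice of the work.  It uses the standard CUDA runtime API in C; the dataset size and the even-split policy are purely illustrative assumptions, not anything taken from ParaView or VisIt.

    /* multi_gpu_probe.cu -- a minimal sketch, not production code.
     * Enumerates every CUDA-capable GPU in the system and hands each
     * one an equal slice of a hypothetical dataset, the way a
     * visualization or GPGPU application might fan work out across
     * whatever cards happen to be installed.
     * Build (typical): nvcc multi_gpu_probe.cu -o multi_gpu_probe */
    #include <stdio.h>
    #include <cuda_runtime.h>

    int main(void)
    {
        int device_count = 0;
        if (cudaGetDeviceCount(&device_count) != cudaSuccess || device_count == 0) {
            printf("No CUDA-capable GPUs found.\n");
            return 1;
        }

        const size_t total_elements = 1 << 24;   /* made-up dataset size */
        const size_t slice = total_elements / (size_t)device_count;

        for (int dev = 0; dev < device_count; ++dev) {
            struct cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, dev);
            printf("GPU %d: %s (%d multiprocessors)\n",
                   dev, prop.name, prop.multiProcessorCount);

            /* Give each GPU its own chunk of the data; a real application
             * would launch kernels here and gather the results afterwards. */
            cudaSetDevice(dev);
            float *d_chunk = NULL;
            if (cudaMalloc((void **)&d_chunk, slice * sizeof(float)) == cudaSuccess) {
                printf("  staged %zu elements on GPU %d\n", slice, dev);
                cudaFree(d_chunk);
            }
        }
        return 0;
    }

The point is that nothing in the programming model cares whether there is one card or four: add another GPU and the same loop simply picks it up.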

The Result

And so comes the inevitable rollout of Multi-GPU.  More and more applications that support OpenCL and GPGPU are coming online, and chip manufacturers are getting on board as well.  As consumers leave the “computer” for other, more intuitive devices, only the hardcore users will be left, and their applications will thirst for the raw power that only GPU computing can provide.  And when one GPU can’t provide enough, they’ll buy more.  Add to that Snow Leopard and Windows 7 using the GPU for their visuals, and you end up needing one GPU for the display and another for computing: the next generation of operating systems will all but require Multi-GPU.
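The OpenCL side of that argument looks much the same.  The sketch below, plain C against the standard OpenCL host API, shows how an application sees every GPU as a separate device, which is exactly what makes the one-GPU-for-display, one-GPU-for-compute split natural.  The “reserve the last GPU for compute” policy is just an assumption for illustration, not something any operating system mandates.

    /* compute_gpu_pick.c -- a sketch of how an OpenCL host program might
     * list every GPU in the machine and reserve one for compute while
     * another keeps driving the display.  The "pick the last GPU" policy
     * is purely illustrative.
     * Build (typical Linux): gcc compute_gpu_pick.c -lOpenCL */
    #include <stdio.h>
    #include <CL/cl.h>

    int main(void)
    {
        cl_platform_id platforms[8];
        cl_uint num_platforms = 0;
        clGetPlatformIDs(8, platforms, &num_platforms);

        for (cl_uint p = 0; p < num_platforms; ++p) {
            cl_device_id gpus[8];
            cl_uint num_gpus = 0;
            if (clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_GPU,
                               8, gpus, &num_gpus) != CL_SUCCESS)
                continue;

            for (cl_uint d = 0; d < num_gpus; ++d) {
                char name[256];
                cl_uint compute_units = 0;
                clGetDeviceInfo(gpus[d], CL_DEVICE_NAME,
                                sizeof(name), name, NULL);
                clGetDeviceInfo(gpus[d], CL_DEVICE_MAX_COMPUTE_UNITS,
                                sizeof(compute_units), &compute_units, NULL);
                printf("Platform %u, GPU %u: %s (%u compute units)\n",
                       p, d, name, compute_units);
            }

            if (num_gpus > 1)
                /* Illustrative policy: dedicate the last GPU to GPGPU work
                 * and assume the first one keeps handling the desktop. */
                printf("Reserving GPU %u for compute.\n", num_gpus - 1);
        }
        return 0;
    }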

The next generation of hardware will make this even simpler.  Soon, when you buy a motherboard, it will come with an Intel Larrabee chip with an integrated GPU, or a new NVidia or AMD chipset with embedded 3D graphics.  The new NVidia Ion is already fully CUDA-capable, albeit with only 16 cores.

So while some naysayers insist that Multi-GPU will never catch on, and tout that last year only 2% of computers were Multi-GPU, I have to ask: what was that figure the year before?  Multi-GPU is new, yes, but the benefits are plain to see, and the people who need it will buy it.  And those people will soon easily make up 30% or even 50% of the market.