CUDA and GPGPU are all the rage these days, with another press release every few days from some company touting the 10x performance boost you can get with a high-end video card. People talk about DX10 and Tesla as if they were the only options for CUDA. Many forget that CUDA and GPGPU have been around for a few years now, with CUDA available on all NVIDIA hardware since the GeForce 8 series. So how well do these applications work on non-cutting-edge hardware?
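If you're wondering whether the card already in your machine is CUDA-capable at all, a device query is the usual first step. Here's a minimal sketch using the CUDA runtime API (GeForce 8-series parts report compute capability 1.0 or 1.1); this is an illustration, not a benchmark, and assumes the CUDA toolkit is installed so it can be built with `nvcc`:

```cuda
// Minimal device-query sketch: list CUDA-capable devices and their
// compute capability. Compile with: nvcc query.cu -o query
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    // cudaGetDeviceCount fails (or returns 0) on machines without a
    // CUDA-capable GPU or driver.
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        printf("No CUDA-capable device found.\n");
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("Device %d: %s, compute capability %d.%d, %d multiprocessors\n",
               i, prop.name, prop.major, prop.minor,
               prop.multiProcessorCount);
    }
    return 0;
}
```

Even a budget GeForce 8400 will show up here; the multiprocessor count is a rough first hint of how far it will lag behind the high-end parts in the press releases.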
That’s all well and good, but CUDA’s incendiary capabilities have largely been proven on high-end GPUs. I’m on a tight budget. Friends are getting mowed down around me by lay-offs and wage-cuts like bubonic plague victims. You bet, I’d love to drop ten or twelve Benjamins on a 3-way graphics overhaul, but the reality is that, like many of you, I’ve only got one or two C-notes to spare. On a good day. So the question all of us who can’t afford the graphics equivalent of a five-star ménage à trois should be asking is, “Does CUDA mean anything to me when all I can afford is a budget-friendly card for my existing system?”
Simply put, the answer I would infer is that it is NOT worth it at all.
The answer boils down to economics. A Tesla, or even a decent system capable of the minimum ‘effective’ GPGPU setup, is nice to learn on, but is it making you money? Is it contributing anything significant to the advancement of your science? Without funding, it isn’t practical to keep chasing every jump in performance as the hardware constantly changes and still make great contributions to computational work. And since most people making such contributions usually have access to a large compute cluster anyway, I think that once the hype around CUDA wears off a little, the trend will be to hold back until hardware prices come down… The Cray CX1 was always an interesting notion as well…