If you were reading our LiveBlog of the GTC2010 Keynote Speech, then you saw the note about the Product Roadmap. NVidia has historically been pretty secretive with roadmaps, particularly with the press, but this time Jen-Hsun threw up a single slide announcing not only the names of their two upcoming products, but also estimated performance figures and dates.
The two new products:
- Kepler – To be released sometime in 2011, on a 28nm process. Estimated double-precision performance of 4–6 GFLOPS per watt.
- Maxwell – To be released sometime in 2013, on a 22nm process. Estimated double-precision performance of 15–16 GFLOPS per watt, making it about 16x better than the Fermi-driven cards.
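Those multipliers are easy to sanity-check. A quick sketch, assuming (hypothetically) that Fermi delivers roughly 1 DP GFLOPS per watt — that baseline figure is my assumption, not from the slide:

```python
# Hedged sanity check of the roadmap's performance-per-watt claims.
# The Fermi baseline of ~1 DP GFLOPS/W is an assumed figure, not from the slide.
fermi_gflops_per_watt = 1.0      # assumed Fermi baseline
kepler_gflops_per_watt = 5.0     # midpoint of the 4-6 range on the slide
maxwell_gflops_per_watt = 15.5   # midpoint of the 15-16 range on the slide

kepler_gain = kepler_gflops_per_watt / fermi_gflops_per_watt
maxwell_gain = maxwell_gflops_per_watt / fermi_gflops_per_watt
print(f"Kepler: {kepler_gain:.1f}x, Maxwell: {maxwell_gain:.1f}x Fermi")
```

Under that assumed baseline, Maxwell's ~15.5x lines up with the "about 16x" figure on the slide.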
How can they expect to hit these unbelievable numbers? First off, they're going to shrink the process size (Fermi is at 40nm), so simply cutting down to 28nm would allow a significant number of extra transistors to be added. However, that's not the only thing that will have to happen. Beyond that, NVidia wouldn't say much more than "architectural changes" similar to what happened with Fermi.
I was, personally, very glad to see NVidia presenting figures in Performance Per Watt, as in Jen-Hsun’s words:
Math is Free. Transistors are Free. Power is expensive. Performance Per Watt = Performance.
And this translates through their entire product portfolio. The changes that will enable this kind of performance on the high-end will also show up on the consumer side (GeForce) as improved gaming performance, better PhysX, and more real-time raytracing, and on the mobile side as improved Tegra chipsets (CUDA in the palm of your hand?!).
Hmmmm…
I don’t think the card is going to draw 600 watts. That is a massive amount of power. Even if it does, it will only peak at that, versus drawing that much power 24 hours a day, 7 days a week.
Realistically, you can expect about a $20 per-month increase in your power bill if you game heavily (10 hours per day, 5 days per week).
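That estimate roughly checks out. A quick back-of-the-envelope calculation, assuming a rate of $0.15/kWh — the rate is my assumption, actual rates vary quite a bit by region:

```python
# Rough monthly cost of a hypothetical 600 W draw under heavy gaming.
# The electricity rate is an assumed figure; actual rates vary by region.
watts = 600
rate_per_kwh = 0.15              # assumed $/kWh
hours_per_month = 10 * 5 * 4.33  # 10 h/day, 5 days/week, ~4.33 weeks/month

kwh_per_month = watts / 1000 * hours_per_month
cost_per_month = kwh_per_month * rate_per_kwh
print(f"{kwh_per_month:.0f} kWh/month, about ${cost_per_month:.2f}/month")
# → 130 kWh/month, about $19.49/month
```

At cheaper rates the number comes in lower, but the ~$20 ballpark is plausible.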
Good point, just a nitpick, but it adds up.
600 watts is going to cost you like 7 cents an hour
Add that up with everything else you have in your house and you’re looking at an electric bill of $300+ a month.
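The per-hour figure above follows from a rate of around $0.12/kWh, which is an assumed number on my part:

```python
# Hourly cost of a hypothetical 600 W draw at an assumed $0.12/kWh rate.
watts = 600
rate_per_kwh = 0.12  # assumed $/kWh; rates vary by region

cents_per_hour = watts / 1000 * rate_per_kwh * 100
print(f"about {cents_per_hour:.1f} cents per hour")
# → about 7.2 cents per hour
```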
@JusttNiz For most users it won’t matter, but for those of us deploying thousands of them (like the new Tianhe computer), every watt of electricity and heat counts.
Meh. I don’t understand why anyone would give a crap about the few measly cents their next $500 GPU will save off an average monthly electricity bill. If you can afford a good GPU at all, it’s not an issue. No one seriously decides between a Ford and a Ferrari based only on how much it costs to gas them up.
I wish they’d just give us some performance stats that indicate outright what graphics performance Kepler- and Maxwell-based GTXs will actually be capable of compared to current-gen GTX cards, without all the irrelevant bullshit marketing stats.
Well, it’s definitely a fact that TSMC sorta crashed the Fermi launch (lol 2009 … right), but their graphs are realistic if you shift each date one year later (i.e. 2010 for Fermi), thus accounting for potential foundry failures (which I believe will not be repeated a dozen times either).
I believe it’s pretty standard to go with optimistic announcements when everyone knows a project will run over budget and ship late anyway (really, it’s a corporate standard …).
Wow, putting such huge promises on the table after the Fermi release disaster is quite surprising … after all, there is one thing GPU makers really rely on: the ability of chip manufacturers like TSMC to produce these things on time.