Looks like Hexus goofed as well and prematurely posted their Fermi review, including a slew of benchmarks that show the average FPS, Power Consumption, and Temperature. They are best summed up by this Overclock.net commenter:
According to the Hexus review, the 5970 is still on top of the dog pile. On average, the 480 saw about a 7% FPS increase over the 5870 in most games. Is a 7% gain worth $100+? Not to me. I'll of course be keeping an eye out for more reviews (Guru3D, Tom's Hardware, OverclockersClub, etc.). Oh well, I was hoping for a reason to jump ship and get some more performance, but that would probably mean buying a new PSU AND a $500 Nvidia GPU before seeing any noticeable gains. Bleh, I hope Fermi doesn't end up sucking. I don't care about Red/Green team, I just want more performance for my money.
Edit: lol @ 480 for being hotter than 5970 and drawing more power.
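The commenter's value-for-money gripe is easy to put in numbers. A quick sketch, assuming illustrative street prices of $400 for the 5870 and $500 for the GTX 480 (both assumptions, not quoted figures), and taking the ~7% average FPS gain at face value:

```python
# Back-of-the-envelope price/performance; prices and baseline FPS are assumed.
price_5870 = 400.0   # assumed street price, USD
price_480 = 500.0    # assumed street price, USD

fps_5870 = 60.0             # any baseline average FPS works; the ratio is what matters
fps_480 = fps_5870 * 1.07   # ~7% faster per the Hexus numbers

ppd_5870 = fps_5870 / price_5870  # frames per second per dollar
ppd_480 = fps_480 / price_480

print(f"5870: {ppd_5870:.4f} FPS/$")
print(f"480:  {ppd_480:.4f} FPS/$")
print(f"480 delivers {ppd_480 / ppd_5870:.0%} of the 5870's value per dollar")
```

Under those assumed prices, a 7% performance bump at a 25% price bump leaves the 480 at roughly 86% of the 5870's frames-per-dollar, which is the commenter's complaint in a nutshell.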
Rest after the break.
Now, comparing a single-GPU card (the GTX 480) against a dual-GPU card (the 5970) isn't exactly fair, but it is still disappointing to see the results.
@Brian I think the concern is that Power Draw pretty closely correlates with Heat Dissipation, and these higher draw figures will require most people to upgrade their power supply in addition to the card.
Ha, all this bickering about power draw. Who but the most cash-strapped of us really cares about the difference in power? Switch one or two light bulbs to CFLs… and you've made up for the difference.
No one buying a $400+ GPU is going to care about that. It’s like refusing to buy a better performing car solely because it requires premium gasoline. Pfft.
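To put the "who cares about power" argument in numbers, here's a rough sketch; the extra draw, daily gaming hours, and electricity rate are all illustrative assumptions, not measured figures:

```python
# Rough annual cost of the GTX 480's extra power draw (all inputs assumed).
extra_watts = 100        # assumed extra draw under load vs. a 5870
hours_per_day = 2        # assumed daily gaming time
rate_per_kwh = 0.12      # assumed electricity rate, USD/kWh

extra_kwh = extra_watts / 1000 * hours_per_day * 365
annual_cost = extra_kwh * rate_per_kwh
print(f"~{extra_kwh:.0f} kWh/year, about ${annual_cost:.2f}/year")
```

Under those assumptions it works out to under $10 a year; swapping a couple of 60 W incandescents for ~14 W CFLs run a few hours a day recovers a comparable amount, which is exactly the commenter's point.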
@ jub jub
You are absolutely correct. By the time new, fully featured DirectX 11 games come out, there will probably already be an ATI card that tops it. Let's not forget that the 5870 is more than six months old. When you buy a video card, you're going to be playing games NOW, so the whole point that "future games MIGHT use heavy tessellation" is moot.
The 5870 didn't sell as many cards as it did only because it was DirectX 11. It sold because there were also huge performance increases in CURRENT games, and it used less power and ran cool (well, for a video card anyway). So leaning on "it has an advantage in heavy tessellation" is an extremely weak point considering all the drawbacks.
Exactly, it's a bit of a skewed comparison, one GPU vs. two. This is a competitive offering, HOWEVER, power draw is a concern. This thing draws a lot of juice, and for a single-core chip that's a problem. NVIDIA can't go to a dual-GPU card with this, and in SLI configs the power draw will be massive.
Still, they delivered the goods, but the competition is now fierce.
@drub0y77
The real issue is that the 5970 is a DUAL-GPU architecture, so this is comparing a single-core solution to a dual-core one, and there are a couple of other metrics that matter: price and power. The power story is not good, but the price is, and for those interested in a slightly more affordable single-core GPU it's a good buy. NVIDIA is hurting, though; their flagship product is priced a lot lower than where they'd like it to be.
But this card has a lot of strengths particularly on the GPGPU front.
I'd call nVidia's Fermi a Brilliant Failure.
It stressed the already-known points about nVidia GPUs/chipsets:
– excellent drivers – the more-than-excellent tessellation productivity is first of all an excellent software realisation.
– more or less on par with AMD graphics hardware (nVidia's fixed-point architecture approach really is more promising for graphics applications than ATI's floating-point approach – but nVidia still can't use the advantage to the full).
– a slow "Graphics PCIe" interface, inherited from the inability to make fast PCIe bus interconnections (the main reason Intel banned nVidia from the Nehalem chipset business), resulting in so-so productivity in applications with intensive CPU-GPU traffic.
Nvidia being 10% faster chip-for-chip is nice. What is nicer is 3x double-precision speed, 3D gaming, and general-purpose computing. A lot of applications other than graphics will use this new GPU; even a non-gamer would want one in his computer. Some professional apps already use NVidia for nice speedups, and supercomputer builders have already lined up for the cards.
ATI doesn't have ECC memory, DP floating-point performance, or the software to even compare. Theirs is a plain graphics card with fickle drivers.
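How much that claimed 3x double-precision speedup buys in practice depends on how much of a workload is actually DP-bound. A quick Amdahl's-law sketch (the DP fractions below are purely illustrative assumptions):

```python
# Amdahl's law: overall speedup when only a fraction of the runtime
# benefits from the claimed 3x double-precision improvement.
def overall_speedup(dp_fraction, dp_speedup=3.0):
    """Overall speedup when dp_fraction of runtime gets dp_speedup faster."""
    return 1.0 / ((1.0 - dp_fraction) + dp_fraction / dp_speedup)

for f in (0.5, 0.8, 0.95):
    print(f"{f:.0%} DP-bound -> {overall_speedup(f):.2f}x overall")
```

So even a heavily DP-bound application (80% of runtime) sees roughly a 2.1x wall-clock improvement from a 3x DP unit, which is why supercomputer builders care far more about this than gamers do.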
A lot more impressive than I thought it would be.
@jub jub
Take a look at the last chart on this page where it shows the results for “extreme” tessellation and a single 480 core beats even the 5970:
http://www.hexus.net/content/item.php?item=24000&page=8
So what does this mean? It means a 480 should give you the most polygons/highest-fidelity models at the highest framerate in any game where tessellation is utilized properly by the game designers.
Last but not least, let's not forget this is with only 480 cores enabled. Once yields improve and a "490" comes out with all 512 cores (32 more), just imagine what the performance will be like.
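The upside of a fully enabled part is easy to bound: with perfectly linear scaling (an optimistic assumption; clocks, memory bandwidth, and drivers all matter too), 512 cores over 480 gives:

```python
# Ideal-case throughput gain from enabling all 512 shader cores,
# assuming perfectly linear scaling with core count.
cores_480 = 480
cores_full = 512
ideal_gain = cores_full / cores_480 - 1
print(f"Best case: ~{ideal_gain:.1%} more shader throughput")
```

That's a ceiling of roughly 6.7% more throughput, so a "490" would need clock bumps on top of the extra cores to change the picture much.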
@iSoLateD1
Crowds, water, and cloth in Dirt 2. AvP and Metro 2033 DX11 benches would be interesting, but I suspect the GTX 480 won't be significantly faster in those either. Those are the benches to look out for tonight anyway…
I’m pretty sure Dirt 2 only uses tessellation for the crowds.
The only reason the GTX 480 did better on Dirt 2 is that it was run in DirectX 9 while the Radeon 5870 was run in DX11… no crap the Nvidia card outperformed it.
@drub0y77
I know what you are driving at, but it's not really much of a defence right now. Is anyone really going to want to drop 100 dollars more just because it's possible that in a year there will be a lot of DX11 games and, again, the GTX 480 might POSSIBLY be a lot faster in them? The truth is DX11 games aren't exactly piling up this year. By the time there's a significant number anyone will want to play, this card will be a generation old, and there will be a second generation of DX11 parts on 28 nm in 9-12 months or so.
I wouldn't drop that 100 knowing it simply isn't giving me anything more right now; by the time it MIGHT, I'd have the hundred saved, plus the money from selling a 5870, to put toward a new-generation card…
Not bad performance for a single GPU at all. Shame about the heat and power draw. Comparisons between the 480 and 5970 are probably unfair, but the same thing happened with the 5870 and the 295, so those sorts of comparisons are probably here to stay.
But that power draw. Eek. That killed it for me.
@jub jub
You’re right, it’s not so much about the benchmarks, but they are obviously a good way to measure the performance and feature support of the cards when you’re talking about raw performance numbers.
Dirt 2 barely scratches the surface of DX11 tessellation because it only uses it for a few things (crowds, flags, water). That's why I said "faux" DX11 titles aren't good for gauging the new architecture.
Anyway, the only point I'm trying to make is that it's not a good idea to just go off the framerates of existing games to measure something as radically different as the Fermi architecture is from previous generations.
@fermion
Because tessellation benchmarks are what everyone plays all day? It's just that, a benchmark: synthetic and meaningless. Dirt 2 in DirectX 11 uses tessellation, and surprise! The GTX 480 is not significantly faster. Less than 10 percent…
yeah, it's disappointing… to see the dual-GPU ATI card fail at extreme tessellation.
Oh. Dear. Hyper fail. It's only significantly faster in the games and at the settings that don't matter. Who cares if the GTX 480 can do 115 frames when a 5870 is already doing a whopping 85, for example???
Far Cry at 2560 x 1600 with 8x AA seems to be the only test there where owning a GTX 480 gives you a real gameplay advantage over a 5870. But seriously, who actually needs 8x AA at 2560 x 1600? You wouldn't even tell the difference between 4 and 8 samples at that mega resolution.
The game that really matters is the one that is clearly the most demanding: Crysis Warhead. The GTX 480 offers no advantages; in fact it LOSES one of the tests to the 5870. It's going to cost 100 bucks more, it's going to use more power than a 5970, and it's gonna be hotter than hellfire. Wow.
Hyper fail
@drub0y77 I agree with you 100%. Unfortunately, when you're benchmarking you have to use what's available. While the games are "old", they're what's available and popular.
I still think the real power of Fermi comes in on the GPGPU side. Any benefits on the gaming side will be, like you said, in the next generation of games that can make better use of the GPGPU capabilities.
Running "old" DX10 or "faux" DX11 games is never going to show off the power of a new architecture. Comparing FPS at high resolutions with FSAA is a silly way to measure the performance of a new architecture. Whoopee, we get a couple more FPS! Does the game look any better? Does it play any more realistically because of real-world physics? The answer is simply "no".
New benchmarks (Heaven 2) and games are needed that actually start to push the boundaries of DX11 features like tessellation and DirectCompute/OpenCL physics simultaneously, to see how a new architecture like Fermi will outperform existing architectures like the 5xxx series.
I will bet dimes to dollars that as next-gen titles start to come out, we'll begin to see a single-core Fermi 480 outperform even the 5970 in titles that actually start to use some of these features.