BSN has some information about the upcoming GT300 chip from NVidia, and the hard silicon numbers alone are staggering:
- 3.0 billion transistors
- 40nm TSMC
- 384-bit memory interface
- 512 shader cores [renamed to CUDA cores]
- 32 CUDA cores per Shader Cluster
- 1MB L1 cache memory [divided into 16KB Cache – Shared Memory]
- 768KB L2 unified cache memory
- Up to 6GB GDDR5 memory
- Half Speed IEEE 754 Double Precision
But the awe doesn’t end there; check out this list of supported languages:
The Fermi architecture natively supports C [CUDA], C++, DirectCompute, DirectX 11, Fortran, OpenCL, OpenGL 3.1 and OpenGL 3.2. Yes, you’ve read that correctly – Fermi comes with support for native execution of C++. For the first time in history, a GPU can run C++ code with no major issues or performance penalties, and when you add Fortran and C to that, it is easy to see that, GPGPU-wise, nVidia has done a huge job.
If NVidia was aiming to revolutionize the world of GPGPU programming, native support for C++ would surely do it – especially if the speed boosts are anything close to those of CUDA/OpenCL. Most C++ code won’t parallelize easily, but even if Fermi simply pushes raw instructions through faster, that alone would be a huge improvement.
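To make the "native C++" claim concrete, here is a minimal, hypothetical sketch of the kind of C++ construct a Fermi-class GPU can execute on the device through CUDA – a templated kernel, something earlier C-only GPGPU toolchains couldn’t express. The kernel and buffer names are illustrative, not from any NVIDIA source (building it requires the CUDA toolkit and a compatible GPU):

```cuda
#include <cstdio>

// A C++ template instantiated and executed on the GPU itself:
// each thread computes y[i] = a * x[i] + y[i] for one element.
template <typename T>
__global__ void saxpy(int n, T a, const T *x, T *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    float *x, *y;
    cudaMalloc(&x, n * sizeof(float));
    cudaMalloc(&y, n * sizeof(float));

    // Launch 256-thread blocks; the compiler instantiates
    // the template for float and emits device code for it.
    saxpy<float><<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize();

    cudaFree(x);
    cudaFree(y);
    return 0;
}
```

The point of the sketch is only that templates, overloading, and similar C++ features compile straight to device code rather than being emulated on the host.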
Update: Read more about this architecture in the following stories:
via nVidia GT300’s Fermi architecture unveiled: 512 cores, up to 6GB GDDR5 – Bright Side Of News*.
HardOCP reports that:
“…it is our understanding that it will be late February at the earliest before we actually see a next-gen GPU show up from NVIDIA in the retail channel.”