Here is all of the news we’ve collected about NVidia Corporation. This includes their new hardware offerings like Tegra and Fermi, as well as their software offerings such as CUDA, PhysX, and more. Feel free to browse around, and then maybe check out some of these other tags:
NVidia Tag Page
The NVidia Shield handheld gaming console is now available for preorder at $349, with delivery by the end of next month. For that $349, you get the package shown above: the console and two games.
The desktop streaming feature, which allows you to use the Shield as a “remote” for Steam games on your PC, will be available as a “Beta” feature at launch. I’m not sure exactly what that means, except that it will be unsupported.
NVidia has released the latest version of Nsight, their popular OpenGL and GLSL debugging suite, boasting new support for OpenGL 4.2, CUDA 5.0, and new local-debugging features.
The final release of Nsight™ Visual Studio Edition 3.0 is available for download under Nsight™ Visual Studio Edition Registered Developer Program. This new release officially supports OpenGL frame debugging and profiling, GLSL GPU shader debugging, local single GPU shader debugging, the new Kepler™ GK110 architecture found in Tesla® K20 & GeForce GTX TITAN, and CUDA® 5.0.
Download it at the link below.
At SIGGRAPH 2013 you’ll see a new paper entitled “Position-Based Fluids” that shows an impressive new type of liquid physics simulation, accelerated using NVidia’s PhysX.
Because PBD uses an iterative solver, it can maintain incompressibility more efficiently than traditional SPH fluid solvers. It also has an artificial pressure term which improves particle distribution and creates nice surface tension-like effects (note the filaments in the splashes). Finally, vorticity confinement is used to allow the user to inject energy back to the fluid.
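To make that concrete, here is a minimal Python sketch of one density-constraint iteration from the Position-Based Fluids paper, including the artificial pressure (s_corr) term mentioned above. This is a toy re-derivation from the published equations, not NVIDIA's PhysX implementation: the kernel choices, parameter values, and brute-force O(n²) neighbor loops are simplifications, and vorticity confinement plus the full predict/correct time step are omitted.

```python
import numpy as np

def poly6(r2, h):
    """Standard SPH poly6 density kernel (3D form), zero beyond radius h."""
    if r2 >= h * h:
        return 0.0
    return 315.0 / (64.0 * np.pi * h**9) * (h * h - r2) ** 3

def spiky_grad(rij, h):
    """Gradient of the spiky kernel, commonly used for constraint gradients."""
    r = np.linalg.norm(rij)
    if r == 0.0 or r >= h:
        return np.zeros_like(rij)
    coeff = -45.0 / (np.pi * h**6) * (h - r) ** 2
    return coeff * rij / r

def pbf_iteration(pos, h=0.1, rho0=1000.0, eps=100.0, k_corr=1e-4, n_corr=4):
    """One Jacobi iteration of the PBF density constraint; returns corrected positions.
    Parameter values here are illustrative, not tuned."""
    n = len(pos)
    lam = np.zeros(n)
    for i in range(n):
        # SPH density estimate and the constraint C_i = rho_i/rho0 - 1
        rho = sum(poly6(np.dot(pos[i] - pos[j], pos[i] - pos[j]), h) for j in range(n))
        C = rho / rho0 - 1.0
        grad_sum = 0.0
        grad_i = np.zeros(3)
        for j in range(n):
            if j == i:
                continue
            g = spiky_grad(pos[i] - pos[j], h) / rho0
            grad_i += g
            grad_sum += np.dot(g, g)
        grad_sum += np.dot(grad_i, grad_i)
        # lambda_i = -C_i / (sum of squared constraint gradients + relaxation eps)
        lam[i] = -C / (grad_sum + eps)
    dpos = np.zeros_like(pos)
    dq = 0.2 * h  # reference distance for the artificial pressure term
    for i in range(n):
        for j in range(n):
            if j == i:
                continue
            rij = pos[i] - pos[j]
            # artificial pressure: improves particle spacing, gives surface-tension-like clumping
            s_corr = -k_corr * (poly6(np.dot(rij, rij), h) / poly6(dq * dq, h)) ** n_corr
            dpos[i] += (lam[i] + lam[j] + s_corr) * spiky_grad(rij, h) / rho0
    return pos + dpos
```

Because each iteration is an independent per-particle Jacobi update, the solve maps naturally onto the GPU, which is what makes the PhysX acceleration effective.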
PhysX has a few more details at the link below.
Over at gfxspeak they have results from Jon Peddie’s recent evaluation of NVidia’s new Kepler cards with some CAD benchmarks like Cinebench and Cadalyst. The results are most impressive when comparing the new Quadro K5000 to the older Quadro 5000.
The Cadalyst 2012 results proved a good showcase for the C30 and both new Quadro K-series cards. We have a limited history with the 2012 update of the benchmark, but suffice to say that these scores were well beyond what has been reported by third parties in the past year. Furthermore, the K5000-equipped C30 eclipsed the marks set by the C30 with the Fermi-generation Quadro 5000 … and by a substantial margin: 18.6% on the 3D index, 52.6% on the 2D index, and 18.1% on total index. The 18.1% total score is an impressive generation-to-generation gain for a graphics card running a system-level benchmark on the same system as its predecessor.
I was a little disappointed; I had expected bigger improvements. It could be that benchmark-suite results aren’t actually representative of what you’ll see in day-to-day use, or perhaps better drivers will improve things even further.
What are your thoughts?
Having learned a valuable lesson about drivers with Windows Vista, NVidia has already released WHQL-certified drivers for Windows 8 (due out later this month).
Today, NVIDIA released its final set of Windows Hardware Quality Labs (WHQL)-certified GeForce drivers in preparation for the launch of Windows 8 later this month. Download the new GeForce 306.97 drivers for Windows 8 from GeForce.com.
While it remains to be seen if the industry disregard for Windows 8 will really play out, I’m glad to see hardware suppliers getting on board early.
If you’re interested in the new Kepler GK110 processor from NVidia, then definitely check out this new architecture whitepaper they’ve released.
Comprising 7.1 billion transistors, Kepler GK110 is not only the fastest, but also the most architecturally complex microprocessor ever built. Adding many new innovative features focused on compute performance, GK110 was designed to be a parallel processing powerhouse for Tesla® and the HPC market.
Kepler GK110 will provide over 1 TFlop of double precision throughput with greater than 80% DGEMM efficiency versus 60‐65% on the prior Fermi architecture.
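For context, “DGEMM efficiency” is just sustained double-precision matrix-multiply throughput divided by theoretical peak. A trivial sketch — the specific GFLOP/s numbers below are hypothetical, chosen only to illustrate the “greater than 80%” figure; the whitepaper does not give them:

```python
def dgemm_efficiency(sustained_gflops, peak_gflops):
    """DGEMM efficiency: fraction of theoretical peak actually achieved."""
    return sustained_gflops / peak_gflops

# Hypothetical example: a card with a 1300 GFLOP/s double-precision peak
# sustaining 1100 GFLOP/s in DGEMM would run at ~84.6% efficiency,
# consistent with the ">80%" GK110 claim (vs. 60-65% quoted for Fermi).
print(round(dgemm_efficiency(1100, 1300), 3))  # → 0.846
```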
Details of the new Tesla K10 and K20 continue to come out, and the K10 is proving to be a bit different than anticipated. The new Tesla K10 uses the GK104 chip, the same one found in GeForce cards. This chip lacks most of the usual Tesla features, and NVidia is getting around that by marketing it specifically to a narrow slice of the market:
NVIDIA’s market strategy here is actually summed up rather well in their K10 press release: “NVIDIA Tesla K10 GPU Accelerates Search for Oil and Gas Reserves, Signal and Image Processing for Defense Industry.” GK104 lacks the ECC and compute flexibility of the Fermi Tesla cards, but what it doesn’t lack is single-precision compute performance and memory bandwidth; and with a dual-GPU card in particular it has both of those in spades. Accordingly, NVIDIA’s goal for K10 is to go after the specific market segments that don’t need ECC and don’t need flexibility, but do need all the raw compute performance they can get. This as it turns out is something gamers are already familiar with: image processing.
I’m busy watching the NVidia GTC Keynote, but just got a press release too exciting not to share. I don’t think Jen-Hsun has said it yet, but this press release indicates that he’s about to announce new Kepler-powered Tesla cards, the K10 and K20.
The NVIDIA Tesla K10 GPU delivers the world’s highest throughput for signal, image and seismic processing applications. Optimized for customers in oil and gas exploration and the defense industry, a single Tesla K10 accelerator board features two GK104 Kepler GPUs that deliver an aggregate performance of 4.58 teraflops of peak single-precision floating point and 320 GB per second memory bandwidth.
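The 4.58 teraflop figure is consistent with the usual cores × clock × 2 (one fused multiply-add per cycle) peak formula. A quick sanity check in Python — note that the 745 MHz core clock is an assumption based on commonly reported K10 specs, not something stated in the press release:

```python
def peak_sp_tflops(cuda_cores, clock_ghz, flops_per_cycle=2):
    """Theoretical peak single-precision TFLOPS: cores x clock x FLOPs/cycle (FMA = 2)."""
    return cuda_cores * clock_ghz * flops_per_cycle / 1000.0

# Each GK104 has 1536 CUDA cores; 0.745 GHz is the commonly reported
# Tesla K10 core clock (an assumption here, not from the press release).
per_gpu = peak_sp_tflops(1536, 0.745)
print(round(2 * per_gpu, 2))  # → 4.58, matching the quoted aggregate figure
```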
Get the full release after the break.
Of course, GTC is going on right now, and there are lots of discussions and vendors talking about the future of GPU computing and technology. Today at 10:30 PT (12:30 Central), NVidia CEO Jen-Hsun Huang will be presenting the keynote and, from what I hear, making some major announcements.
This week is the NVidia GPU Technology Conference, and NVidia is kicking it off with a huge announcement of interest to anyone doing GPU development: the world’s first Eclipse-based integrated development environment that supports Linux and Mac OS development. For the last several years, Linux GPU programmers have had to make do with basic command-line tools while the Windows world delighted in tools like Visual Studio and Nsight. With this new development, CUDA and NVidia GPU programming is now on even (or at least closer to even) footing.
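For reference, the bare-bones workflow the new IDE wraps looks roughly like this on Linux (a sketch using the standard nvcc and cuda-gdb command-line tools; saxpy.cu is a placeholder file name):

```shell
# Compile with host debug info (-g) and device-side debug info (-G)
nvcc -g -G saxpy.cu -o saxpy

# Step through host and kernel code in the terminal with cuda-gdb
cuda-gdb ./saxpy
```

The Eclipse edition bundles this same compile/debug/profile cycle behind a graphical editor, which is what brings Linux and Mac developers closer to the Visual Studio experience.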
“Previously, debugging required dedicated systems that were often expensive and time consuming to configure,” said Tony Tamasi, senior vice president of content and technology at NVIDIA. “Now, any system with an NVIDIA GPU that supports debugging can be used without any additional cost or system upgrades, resulting in significant cost and time savings.”
Get a free demo on the show floor or at www.nvidia.com/paralleldeveloper. Get the full press release after the break.