What Does the Launch of the GeForce RTX 20 Series Mean for the Future?

Introduction

Like many of you, I was glued to my computer screen this morning during NVIDIA's livestream of the GeForce RTX 20 series launch. This family of video cards has been rumored and hinted at online for months, with details slowly coming into focus – especially over the last week or so. The new GPU generation – "Turing" – and its focus on real-time ray tracing were revealed last week alongside the Quadro RTX line, so it was apparent that the next GeForce series would share that focus. But what exactly was shown today, and what does it mean for the future of gaming, virtual reality, and other GPU applications?

NVIDIA GeForce RTX 2080 Ti 11GB Founders Edition

RTX Cores & Real-time Ray Tracing

The big new feature of the Turing generation of GPUs is the RTX Core: dedicated circuitry focused specifically on real-time ray tracing. Ray tracing has been used for ages to create photo-realistic images and effects in movies, but it has pretty much always been an offline process – often taking minutes or even hours per frame. Performing similar calculations fast enough to maintain the 30, 60, or more frames per second that games and interactive experiences demand was unheard of. That is why games and VR have relied on rasterization for decades: it is much faster, but sacrifices visual accuracy and fidelity.
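To make that concrete, here is a toy example of the kind of math ray tracing is built on: firing one ray per pixel and testing it against scene geometry – a single hard-coded sphere here, where a real engine tests millions of rays per frame against huge triangle meshes, plus secondary rays for shadows and reflections. This is purely illustrative CUDA of my own; it runs on the general-purpose cores and says nothing about how NVIDIA's RTX hardware is actually implemented:

```
#include <cstdio>
#include <cuda_runtime.h>

// Toy ray-vs-sphere test: the kind of intersection math a ray tracer runs
// millions of times per frame (real engines test triangles inside a BVH).
__device__ bool hit_sphere(float ox, float oy, float oz,   // ray origin
                           float dx, float dy, float dz,   // ray direction
                           float cx, float cy, float cz, float r) {
    // Solve |o + t*d - c|^2 = r^2 for t (a quadratic in t).
    float lx = ox - cx, ly = oy - cy, lz = oz - cz;
    float a = dx*dx + dy*dy + dz*dz;
    float b = 2.0f * (lx*dx + ly*dy + lz*dz);
    float c = lx*lx + ly*ly + lz*lz - r*r;
    return b*b - 4.0f*a*c >= 0.0f;   // real roots => the ray hits the sphere
}

// One primary ray per pixel, aimed at a single sphere in front of the camera.
__global__ void trace(unsigned char *image, int width, int height) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    // Map the pixel to a ray direction through a simple pinhole camera.
    float dx = (x - width  * 0.5f) / height;
    float dy = (y - height * 0.5f) / height;
    float dz = 1.0f;

    bool hit = hit_sphere(0, 0, 0, dx, dy, dz, 0, 0, 3.0f, 1.0f);
    image[y * width + x] = hit ? 255 : 0;   // white where the sphere is
}

int main() {
    const int W = 64, H = 64;
    unsigned char *img;
    cudaMallocManaged(&img, W * H);

    dim3 block(16, 16), grid((W + 15) / 16, (H + 15) / 16);
    trace<<<grid, block>>>(img, W, H);
    cudaDeviceSynchronize();

    printf("Center pixel hit: %d\n", img[(H / 2) * W + W / 2]);   // expect 255
    cudaFree(img);
    return 0;
}
```

Even this trivial scene requires a quadratic solve per pixel per ray; at 1920x1080 and 60 frames per second, that is over 120 million primary rays per second before a single shadow or reflection ray is cast. That is the workload the RTX Cores are built to absorb.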

So, what does it mean to have hardware in the GPU dedicated to ray tracing? It signals the potential beginning of a move from rasterization to ray tracing for real-time 3D computer graphics. I say "potential" because right now this is NVIDIA-only, and only on a handful of their newest video cards. We don't yet know how well these RTX Cores can keep up: whether they can handle a fully ray-traced game at both high resolution and high frame rate. Fully ray-traced games may need another generation or two of GPU advancement to provide a good experience. And I say "beginning" because it will be a long time before the market is saturated with these (and future) cards to the point that a major game could rely on ray tracing as its only rendering method. It may also require AMD to get on board with similar technology, since they hold a substantial share of the video card market as well.

In the shorter term, it means that game and VR designers will have the option to adopt small elements of ray tracing in their upcoming projects. NVIDIA showed off several examples of soon-to-be-released games that will use RTX technology for one feature or another: improved shadows in Shadow of the Tomb Raider, accurate indoor lighting in Metro Exodus, and realistic reflections in Battlefield V. This is the low-hanging fruit, so to speak: effects that can be implemented without completely rebuilding a game engine, and that can be enabled or disabled based on the video card in use, so that folks without a new GeForce RTX 20 series card can still play.

Improved Point Light Shadows in Shadow of the Tomb Raider Using NVIDIA RTX Technology

Improved Reflections in Battlefield V Using NVIDIA RTX Technology

It should also be noted that this technology is not a replacement for the full, in-depth ray tracing found in rendering engines like OctaneRender, V-Ray, and Redshift. While based on the same ideas, those renderers are designed to produce the most photo-realistic images and animations possible – with speed being important, but taking a back seat to accuracy and realism. Many such engines already use the general-purpose compute capabilities of NVIDIA's CUDA cores to run part or all of their ray tracing algorithms, but from what I can tell their calculations are far more complex than what the new RTX Cores provide. Anything is possible in the future, of course, but for now I would not expect the RTX Cores to impact professional rendering performance.

Tensor Cores & Deep Learning

Tensor Cores debuted in the Volta generation of GPUs from NVIDIA, in cards like the Titan V and Quadro GV100. They are specifically tailored to perform matrix math quickly – multiplying FP16 values and accumulating the results in FP32 – which is exactly the kind of calculation that machine and deep learning frameworks depend on. This technology is primarily about fast processing of data through pre-built networks and AI models (inference), not creating those networks in the first place.
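For a sense of what Tensor Core math looks like to a programmer, below is a minimal sketch using the WMMA (warp matrix multiply-accumulate) API that NVIDIA added to CUDA for Volta: one warp cooperatively multiplies a pair of 16x16 FP16 matrices and accumulates the result in FP32. Deep learning frameworks tile their much larger matrix operations out of exactly these kinds of pieces. It requires a Tensor Core equipped GPU and compiling with -arch=sm_70 or newer:

```
#include <cstdio>
#include <mma.h>
#include <cuda_fp16.h>

using namespace nvcuda;

// One warp multiplies a 16x16 FP16 tile by another, accumulating in FP32.
// This is the basic unit of work Tensor Cores accelerate.
__global__ void tensor_tile_gemm(const half *A, const half *B, float *C) {
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::row_major> b_frag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> c_frag;

    wmma::fill_fragment(c_frag, 0.0f);               // C starts at zero
    wmma::load_matrix_sync(a_frag, A, 16);           // leading dimension = 16
    wmma::load_matrix_sync(b_frag, B, 16);
    wmma::mma_sync(c_frag, a_frag, b_frag, c_frag);  // C += A * B on Tensor Cores
    wmma::store_matrix_sync(C, c_frag, 16, wmma::mem_row_major);
}

int main() {
    half *A, *B;
    float *C;
    cudaMallocManaged(&A, 16 * 16 * sizeof(half));
    cudaMallocManaged(&B, 16 * 16 * sizeof(half));
    cudaMallocManaged(&C, 16 * 16 * sizeof(float));

    for (int i = 0; i < 16 * 16; i++) {
        A[i] = __float2half(1.0f);   // all-ones inputs: every C entry = 16
        B[i] = __float2half(1.0f);
    }

    tensor_tile_gemm<<<1, 32>>>(A, B, C);   // one warp = 32 threads
    cudaDeviceSynchronize();
    printf("C[0] = %.1f (expected 16.0)\n", C[0]);

    cudaFree(A); cudaFree(B); cudaFree(C);
    return 0;
}
```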

Volta never came to the mainstream GeForce lineup. Because of that, I had wondered if NVIDIA might keep Tensor Cores as an exclusive feature of their professionally oriented cards – the Titan, Quadro, and Tesla lines – but it looks like I was wrong. Including them in the GeForce RTX 20 family means that NVIDIA must see a benefit to having this capability at the consumer level, and they gave some examples of how it could be used to enhance the resolution of still images (like frames in a video game). I'm sure there are other applications as well, but if you want more insight into machine learning, check out Dr. Don Kinghorn's HPC Blog.

Hybrid Rendering Pipelines & Future Performance Potential

Throughout the presentation, NVIDIA's CEO made claims about the performance of the GeForce RTX 2080 Ti being in the ballpark of ten times faster than the current GTX 1080 Ti… but that must be taken with a grain of salt. The numbers shown seem to be based on a new metric, which attempts to combine the old-fashioned processing capabilities of video cards with the new RTX and Tensor Cores – creating a value they call "RTX-OPS". Since the 1080 Ti (and other previous-gen cards) did not have either RTX Cores or Tensor Cores, of course they score much lower in this metric. So why are they making such extreme claims?

Hybrid Rendering Pipeline on NVIDIA Turing GPU - Example from SEED / PICA

The idea, from NVIDIA's point of view, is that all of these different processing capabilities can be combined into a hybrid rendering pipeline – using a different part of the GPU for each step, allowing a dramatic improvement in performance if fully implemented. The CUDA cores that previously had to handle every step of rendering a frame can now focus on rasterization, while lighting, shadows, and reflections are handled by ray tracing (RTX Cores) and post-processing is handled by deep learning algorithms (Tensor Cores). A rough sketch of how such a frame might be structured is below. In the long run, a GeForce RTX 20 series video card may well end up being several times more powerful than previous models – but we won't see the full benefit until games and other 3D applications begin to make use of these technologies.
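As a mental model only: every type and function name in this sketch is a hypothetical stand-in of my own invention, not NVIDIA's API or any real engine, but it shows how the stages of a hybrid frame could map onto different parts of the GPU:

```
// Illustrative sketch of a hybrid rendering frame. All types and functions
// here are hypothetical stand-ins, not a real graphics API.
struct Scene {};
struct GBuffer {};      // rasterized geometry, materials, and depth
struct Lighting {};     // ray-traced shadow / reflection / GI results
struct Image {};

GBuffer  rasterize(const Scene &s)                         { return {}; }  // CUDA cores
Lighting trace_lighting(const Scene &s, const GBuffer &g)  { return {}; }  // RTX cores
Image    shade(const GBuffer &g, const Lighting &l)        { return {}; }  // CUDA cores
Image    denoise_and_upscale(const Image &img)             { return {}; }  // Tensor cores
void     present(const Image &img)                         {}

// One frame: each stage runs on the hardware best suited to it, so stages
// can overlap instead of all competing for the same CUDA cores.
void render_frame(const Scene &scene) {
    GBuffer  gbuf  = rasterize(scene);             // classic rasterization pass
    Lighting light = trace_lighting(scene, gbuf);  // shadows, reflections, GI
    Image    frame = shade(gbuf, light);           // combine into a final image
    present(denoise_and_upscale(frame));           // AI cleanup / upscaling
}

int main() {
    Scene scene;
    render_frame(scene);
    return 0;
}
```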

Impact on Applications & Games Today

Until then, how much faster will these cards be compared to the GTX 10 series? We don't know for sure yet, but looking at the CUDA core count, clock speed, and memory bandwidth numbers for the GeForce GTX 1080 Ti versus the new RTX 2080 Ti – about 21% more cores (4352 vs 3584) at slightly lower clock speeds – I would hazard a guess at around a 20% increase in base performance. Specs for the 1080 vs 2080 and 1070 vs 2070 look to be in the same ballpark. Anything gained through RTX technology will be on top of that, but only once applications and games add support for it.

Specification     RTX 2080 Ti   GTX 1080 Ti   RTX 2080   GTX 1080   RTX 2070   GTX 1070
CUDA Cores        4352          3584          2944       2560       2304       1920
Base Clock        1350 MHz      1480 MHz      1515 MHz   1607 MHz   1410 MHz   1506 MHz
Boost Clock       1545 MHz      1582 MHz      1710 MHz   1733 MHz   1620 MHz   1683 MHz
Memory Bandwidth  616 GB/s      484 GB/s      448 GB/s   320 GB/s   448 GB/s   256 GB/s
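As a quick sanity check on that ~20% guess, multiplying CUDA core count by boost clock gives a crude first-order proxy for raw shader throughput. This is napkin math – it ignores architectural improvements, memory bandwidth, and everything else – but it lands right around the same figure:

```
#include <cstdio>

// Napkin math: CUDA cores x boost clock as a crude proxy for raw shader
// throughput. Ignores architecture, memory, and driver differences.
int main() {
    const double cores_1080ti = 3584, boost_1080ti = 1582;   // MHz, from the table above
    const double cores_2080ti = 4352, boost_2080ti = 1545;

    double ratio = (cores_2080ti * boost_2080ti) /
                   (cores_1080ti * boost_1080ti);
    printf("RTX 2080 Ti vs GTX 1080 Ti: %.2fx raw throughput\n", ratio);   // ~1.19x
    return 0;
}
```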

It is also worth noting that the amount of onboard graphics memory is not increasing at all in this generation. The GTX 1080 Ti and RTX 2080 Ti both have 11GB, while the 1080, 2080, 1070, and 2070 all have 8GB. If your application is memory-limited, as can be the case with complex 3D design and rendering, then the GeForce RTX 20 series does nothing to help.

Closing Thoughts

Until we get our hands on some of these cards here in our Labs department, I can't really be sure of more than what is written above. I am eager to put the new GeForce RTX 20 series cards through their paces, and to see how software developers take advantage of the RTX and Tensor Cores in the future. We'll be posting articles with performance data as soon as we can – most likely in late September or early October, given NVIDIA's estimated availability date of 9/20/18.

As for my own personal systems, I currently have GeForce GTX 1080, 980, and 1060 cards in my home gaming rigs. I have been very pleased with them, so I am not champing at the bit to upgrade – but if games I enjoy start adding features that utilize ray tracing, I could certainly see myself upgrading my flagship rig and passing the 1080 down to my son's system (replacing the 980 he has now). I would encourage anyone else with a decent video card to wait a bit as well, since pre-orders are going crazy and prices at online resellers like Newegg are greatly inflated. The GTX 1080 debuted over two years ago, so I'm sure the 20 series will be around for a while too… there is no need to rush!