Not Just for Gaming: NVIDIA GeForce RTX Will Improve Workflows

NVIDIA’s new GeForce RTX video cards have been all the talk lately. There is a lot of debate about the value real time ray tracing brings to games, and some question how useful these cards will be for traditional ray traced renderers. With the cards becoming available for testing and reviews starting to come in, many of those questions will be answered. However, one aspect of these cards is often overlooked: how the advances in real time ray tracing will dramatically cut down on production time before the rendering stage.

Let’s take reflections as an example. Current real time engines use a combination of Cubemaps and Screen Space Reflections to create these effects. They give pretty good results, but with some big limitations. Cubemaps are images captured in the environment that the engine uses to determine what should be reflected on a shiny surface. They can produce very convincing results, but they take time to set up and compute. The artist must decide where in the environment to capture each image so things line up correctly, since a cubemap is taken from a specific point in space while the camera/user moves around that space. Different lighting conditions will also need a new cubemap. For example, if you generate a cubemap on a bright city street and the camera then moves into a dark alley, the reflections will look off, so a new cubemap has to be generated for that area, care taken that they don’t overlap, and so on. Depending on the size of the scene, there might be dozens of these. Since cubemaps are static images, they don’t respond to changes in light: day/night cycles, lights that turn on and off, or changes to the geometry will not be updated. This is also a big part of why, in games, the player’s character rarely has a reflection.
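
To make that concrete, here is a rough Python sketch of the core idea behind a cubemap lookup (a simplified illustration only, not actual engine or shader code, and the helper names are made up for this article): the view direction is reflected off the surface normal, and that reflected direction picks which face of the pre-captured cubemap gets sampled. Because those six images were captured once, from one point, whatever they show is frozen in time.

```python
def reflect(incident, normal):
    """Reflect an incident direction about a surface normal: R = I - 2(N.I)N."""
    d = sum(i * n for i, n in zip(incident, normal))
    return tuple(i - 2.0 * d * n for i, n in zip(incident, normal))

def cubemap_face(direction):
    """Pick which of the six pre-captured cubemap images a direction points into."""
    x, y, z = direction
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:
        return "+X" if x > 0 else "-X"
    if ay >= az:
        return "+Y" if y > 0 else "-Y"
    return "+Z" if z > 0 else "-Z"

# Camera looks down and forward at a floor whose normal points straight up (+Y).
view_dir = (0.0, -0.8, -0.6)
reflected = reflect(view_dir, (0.0, 1.0, 0.0))
# The reflected direction selects a face of the static capture -- if the scene's
# lighting or geometry has changed since that capture, the reflection is stale.
print(cubemap_face(reflected))  # "+Y"
```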

Lighting comparison, left to right: SSR and Cubemap, only SSR, only Cubemap

Screen Space Reflections, or SSR, are used to supplement cubemaps and can reflect changes in geometry, lights, and so on. Their limitation is that they can only work with what is already on the screen. Real time engines only render what is actually visible in order to save on resources: geometry that is off screen is not rendered, and neither are polygons hidden behind a visible object. So if the camera is looking at the driver’s side of a car, and on the other side of the car is a mirrored wall, the polygons of the passenger side are never rendered; the reflection would show only the driver’s side of the car, or possibly nothing at all if back-face culling is used. If the camera moves past the car, the entire car is dropped from the scene and there is nothing left to reflect. There are many workarounds for these shortcomings, and tricks that artists use to get the look they want, but they all take time to create.
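
Here is an equally rough sketch of the SSR idea (again simplified Python for illustration, not real engine code, with made-up names): the reflected ray is stepped across the screen and compared against the depth buffer, and the moment it leaves the screen or only finds data that was never rendered, there is nothing left to reflect and the engine has to fall back to something else, usually the cubemap.

```python
def trace_ssr(depth_buffer, start_xy, dir_xy, start_depth, depth_step, max_steps=64):
    """March a reflected ray across the screen; return the hit pixel or None."""
    h, w = len(depth_buffer), len(depth_buffer[0])
    x, y = start_xy
    depth = start_depth
    for _ in range(max_steps):
        x += dir_xy[0]
        y += dir_xy[1]
        depth += depth_step
        px, py = int(round(x)), int(round(y))
        if not (0 <= px < w and 0 <= py < h):
            return None          # ray left the screen: nothing on screen to reflect
        if depth >= depth_buffer[py][px]:
            return (px, py)      # ray passed behind visible geometry: count it as a hit
    return None

# A tiny 4x4 "depth buffer" with one nearby object at pixel (2, 1). Anything that
# was never rendered (off screen, or hidden behind other objects) is simply not
# in this buffer, so SSR can never find it.
depth = [[10, 10, 10, 10],
         [10, 10,  3, 10],
         [10, 10, 10, 10],
         [10, 10, 10, 10]]
print(trace_ssr(depth, (0, 0), (1.0, 0.5), 1.0, 1.0))  # (2, 1)
```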

Another task that takes up a lot of artists’ time is Global Illumination. When real light hits a surface, some of it is absorbed and some of it is reflected. The surface properties determine what color of light is reflected and how scattered that light is. If you are in a dark room and shine a flashlight onto a mirror, you’ll get a near perfect beam on the opposite wall; shine that same flashlight on a piece of wood and you might get a soft, diffused glow over a wider area. Each additional bounce takes energy away from the light, making it weaker until it is no longer noticeable. Calculating these interactions of light is a significant computational task, and traditional real time engines simply don’t have time to do it within each frame. Instead, artists have a lot of tools at their disposal to simulate this light: additional point lights to represent bounced light, light volumes, baked lighting, and so on. Again, artists can currently achieve very realistic results, but only with a lot of time investment.
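
A toy example shows why those bounces fade so quickly, and why simulating them is expensive. In the simplified Python below (an illustration only, not how any particular renderer is written), each surface reflects only a fraction of the light that hits it, so after a handful of bounces the remaining energy is barely noticeable; a real renderer has to evaluate enormous numbers of these paths to light a whole scene.

```python
def bounce_energy(initial_energy, albedos, cutoff=0.01):
    """Follow one light path across a series of surfaces until its energy is negligible."""
    energy = initial_energy
    for bounce, albedo in enumerate(albedos, start=1):
        energy *= albedo  # each surface reflects only a fraction of the incoming light
        print(f"bounce {bounce}: {energy:.3f}")
        if energy < cutoff:
            break
    return energy

# A mirror keeps most of the energy; rougher, darker surfaces absorb far more of it.
bounce_energy(1.0, [0.9, 0.5, 0.5, 0.5, 0.5, 0.5])
```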

Global Illumination Off

Global Illumination On

What NVIDIA is presenting with the RT cores in their new Turing GPU architecture is a way to achieve these much more natural results natively, without the manual workarounds described above. With this new hardware, they claim the scenarios above will “just work” in real time. We still need to see how this plays out with final hardware and software updates, but the potential benefits are enormous.

The video game industry is already jumping into this tech, but since games will need to support older, non-RTX hardware for quite a while, developers will have to keep their current workflows until RTX-enabled hardware becomes widespread. However, as this technology gains support in professional applications, someone working in architectural visualization could use it to go from blueprints to final render much faster, with more lifelike results. Since it would all work in real time, they could sit with a client and change materials, add furniture, or move walls, and everything would update live. A studio producing a weekly animated show might be able to render an entire episode in real time, allowing more time to be spent in production. Even high end movie studios, which will likely still rely on traditional ray traced renderers, could have a near final graphics “preview” with the ability to make significant visual changes without waiting to re-render everything.

This only scratches the surface of what creative studios will accomplish with these cards. We have yet to see how the new RT cores will improve traditional ray traced rendering, or what the scientific community will do with the Tensor cores making their mainstream debut in the GeForce RTX series. The potential of these cards reaches far beyond the benchmarks of old-school rasterized game technology that most people are focused on.