

Read this article at https://www.pugetsystems.com/guides/1241

OctaneRender 3.08: NVIDIA GeForce RTX 2080 & 2080 Ti GPU Rendering Performance

Written on September 28, 2018 by William George


OctaneRender, from OTOY, is a real-time, physically correct 3D rendering engine that uses GPUs instead of CPUs for processing. This is a relatively new approach; traditionally, rendering was done on CPUs. Graphics processors are ideal for highly parallel tasks like rendering, though, and it is easier to fit multiple video cards in a single computer than multiple CPUs.

With NVIDIA's recent release of the GeForce RTX 2080 and 2080 Ti, we wanted to see how these new graphics cards stack up to the well-respected 1080 Ti and the top-end Titan V.

Test Setup

In order to test the most recent video cards, we took the OctaneBench program and modified it slightly. As of this publication, OctaneBench is still using the 3.06.2 version of OTOY's rendering engine - which does not support either the Titan V or the new GeForce RTX cards. However, you can manually copy over the files from 3.08 into the folder containing OctaneBench and then it will use the newer rendering engine. We cannot redistribute the modified software, but if you download both OctaneBench 3.06.2 and the demo version of OctaneRender 3.08 it is pretty easy to copy over the necessary files.
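As a sketch of that file-copy step, the snippet below shows one way to automate it in Python. The folder paths are hypothetical placeholders, and filtering by `*.dll` is an assumption about which files carry the rendering engine - the general idea is simply to overwrite the 3.06.2 engine files in the OctaneBench folder with their 3.08 counterparts from the demo install.

```python
import shutil
from pathlib import Path

def copy_engine_files(octane_308_dir, octanebench_dir, patterns=("*.dll",)):
    """Copy rendering-engine files from an OctaneRender 3.08 demo install
    into the OctaneBench folder, overwriting the older 3.06.2 versions.
    Returns the sorted list of copied file names."""
    src = Path(octane_308_dir)
    dst = Path(octanebench_dir)
    copied = []
    for pattern in patterns:
        for f in src.glob(pattern):
            shutil.copy2(f, dst / f.name)  # overwrite the older engine file
            copied.append(f.name)
    return sorted(copied)

# Hypothetical install locations - adjust to wherever the two programs live:
# copy_engine_files(r"C:\Program Files\OTOY\OctaneRender 3.08",
#                   r"C:\Benchmarks\OctaneBench 3.06.2")
```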

For our test platform, we wanted to use a high-end processor so that there is no way it is limiting any of the video cards. Since we are looking at single GPUs (sadly we only have one RTX 2080 Ti at this time) we didn't worry about supporting a maximum number of cards. Full details on the hardware configuration we tested on are available in the original article.

Benchmark Results

Here are the OctaneBench 3.08 scores for each of the video cards we tested:

[Chart: OctaneBench 3.08 scores for the Titan V, GeForce RTX 2080 Ti, GeForce RTX 2080, GeForce GTX 1080 Ti, and GTX 1070 Ti]

And here is another way of looking at the results, as percentages relative to the GeForce GTX 1080 Ti's performance:

[Chart: OctaneBench 3.08 performance of the Titan V, GeForce RTX 2080 Ti, GeForce RTX 2080, GeForce GTX 1080 Ti, and GTX 1070 Ti, as a percentage of the GTX 1080 Ti result]


It is a little tricky to interpret raw benchmark scores into actionable information, as in the first graph above, but the second chart makes things much clearer. The new GeForce RTX 2080 is neck and neck with the similarly-priced GTX 1080 Ti from the previous generation (only 2% faster), though it has less dedicated video memory (8 vs 11GB) - so for folks working with larger scenes it may make sense to stick with the 1080 Ti in that price range. For those with a little more to spend, the RTX 2080 Ti comes in 30% faster than the GTX 1080 Ti - not far behind the much more expensive Titan V, while still having a respectable 11GB of VRAM.
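The relative chart above is simple arithmetic: each card's OctaneBench score divided by the GTX 1080 Ti baseline. A minimal sketch - the scores below are hypothetical placeholders (not our measured results), chosen only so the percentages line up with the 2% and 30% margins discussed in the text:

```python
# Relative performance vs. a baseline card, as in the second chart above.
# Scores are hypothetical placeholders, not measured OctaneBench results.
scores = {
    "GTX 1070 Ti": 140.0,
    "GTX 1080 Ti": 200.0,   # baseline card
    "RTX 2080":    204.0,   # ~2% faster than baseline
    "RTX 2080 Ti": 260.0,   # ~30% faster than baseline
}

def relative_percent(scores, baseline):
    """Express each score as a whole-number percentage of the baseline."""
    base = scores[baseline]
    return {card: round(100.0 * s / base) for card, s in scores.items()}

print(relative_percent(scores, "GTX 1080 Ti"))
# The baseline card always comes out at 100.
```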

Keep in mind that these are only single-card scores, and doubling (or tripling/quadrupling) up on video cards can give even more impressive results. However, we have found that the dual-fan coolers used by NVIDIA on these new GeForce RTX cards are problematic when used in multi-GPU configurations. For more information about that, check out our in-depth look at that topic.

Finally, it is worth noting that this version of OctaneRender is not yet able to utilize the new RT cores which NVIDIA added to this generation of GPUs. Those are purpose-built for raytracing, and if Octane is updated to use them in the future we could see a huge increase in performance. We will be sure to test these cards again if or when that technology is integrated into this rendering engine.

Are the GeForce RTX 2080 and 2080 Ti Good for OctaneRender?

The GeForce RTX GPUs do very well in Octane, especially the 2080 Ti, but their true potential lies in the dedicated RT cores for raytracing. When OctaneRender is updated to use those, we could see a leap in performance! Avoid using dual-fan cards in multi-GPU configurations, though. If you want multiple cards, wait for single-fan blower models.

Recommended Systems for OctaneRender

Tags: Multi, GPU, Scaling, Rendering, Octane, Render, OTOY, OctaneBench, Benchmark, NVIDIA, GeForce, RTX, 2080, 2080 Ti, Turing, Performance, Video, Card

There are already results for OctaneBench 5 (2018.1) and more information about the role of NVLink in this post: https://www.reddit.com/r/Re...

Btw. enjoyed the interview you guys did with Paul from RedGamingTech a lot!

Posted on 2018-10-02 05:57:50

I am skeptical of the comments there indicating that OTOY has confirmed NVLink memory sharing for the RTX GeForce cards. What I have read indicates that all the GeForce cards support is SLI over NVLink - so higher bandwidth and old fashioned SLI, probably, but not the additional features that Quadro cards are capable of (like memory pooling). We have tried the older Quadro GP100 / GV100 NVLink bridges on RTX 2080 cards, and they didn't seem to work at all... I mean the bridges - the cards worked, but there were no additional SLI or NVLink options that I could see in the NVIDIA control panel. We are going to try and get some of the new GeForce NVLink bridges in and see what happens, though.

Posted on 2018-10-02 16:37:36

We did some testing, and it doesn't look like NVLink is fully functional on the GeForce RTX cards: https://www.pugetsystems.co...

Posted on 2018-10-05 23:47:11

I have added links to some real world tests of GeForce NVLink published by V-Ray (I.):


Posted on 2018-10-08 20:25:36

Interesting - if I am reading the data on that link correctly, NVLink on dual 2080s was *never* beneficial (compared to just having the two cards without NVLink). Do you concur? (based on just that one set of data)

Posted on 2018-10-08 21:11:36

As discussed, there is no way around the fact that NVLink will have some impact on speed compared to rendering on a single GPU, provided the whole scene fits in the cards' VRAM.

Those posts show that NVLink can be used to pool the memory of two cards, resulting in 22 GB accessible with higher bandwidth than out-of-core system memory access. Vlado did not mention the scene size in GB in that post, so we don't know whether it would have fit in a single GPU's VRAM anyway. V-Ray started out as a CPU renderer, and V-Ray Next uses out-of-core rendering automatically.

Tl;dr: you cannot read more into the results than the simple fact that memory pooling will be possible with GeForce RTX consumer cards, and that it comes with a performance hit of a yet-to-be-determined scale.

Posted on 2018-10-09 06:51:32