Read this article at https://www.pugetsystems.com/guides/1384

OctaneBench 2019 Preview: GeForce RTX Performance Boost

Written on March 15, 2019 by William George


OctaneRender, from OTOY, is a real-time, physically correct 3D rendering engine that uses GPUs instead of CPUs for processing. This is a relatively new approach, as rendering was traditionally done on CPUs. Graphics processors are ideal for highly parallel tasks like rendering, though, and it is easier to fit multiple video cards in a single computer than multiple CPUs.

The upcoming 2019 version of OctaneRender is adding support for the dedicated ray-tracing hardware in NVIDIA's RTX series of video cards, and a preview of the OctaneBench tool was released recently to give a sneak peek at what we can expect from this technology. We rounded up the whole GeForce RTX card line - along with the Titan RTX - to see how they compare to each other and how much of a boost RTX tech can provide. Please keep in mind that this is a preview, selected to show off RTX technology, and results may vary.

Screenshot of OctaneBench 2019 Preview Running on a GeForce RTX 2080 8GB

Test Methodology & Hardware

Each graphics card was run through the benchmark once, and the scores with RTX on and off were recorded. In a full performance review article I usually run applications multiple times, but OctaneBench is remarkably steady in its results from one run to the next - and since this is just a preview, I didn't want to spend a lot of extra time repeating the test for additional data points.

Since we are just looking at individual video card performance, and OctaneRender is almost entirely GPU dependent, the test platform really doesn't matter much aside from maintaining the same configuration and drivers across all of the tested cards. The open testbed I had available happened to be an X299-based system with a Core i9 9940X installed; while that CPU is overkill for Octane (its 14 cores will be wasted on this application), it meets the basic need of providing a full PCI-Express 3.0 x16 slot for the video card to run in.

Here are full platform details, including links to the OctaneBench 2019 Preview and the NVIDIA drivers we used, for anyone interested:

Benchmark Results

Here are the OctaneBench 2019 Preview scores for each of the GPUs we tested, listed from fastest to slowest. Since this version of OctaneBench measures both modes, performance with RTX technology turned off is shown in light green and with it enabled in a darker shade:

OctaneBench 2019 Preview Showing GeForce and Titan RTX GPU Rendering Performance With and Without RTX Enabled

Analysis & Conclusion

The relative performance of these cards to each other should be no surprise, so the big takeaway here is the massive improvement that RTX technology brings. On average, these cards were 185% faster with RTX enabled! To look at that another way, rendering the scene used in this benchmark with RTX on took only 35% of the time that it did with RTX turned off. That is a huge boost, only available on RTX GPUs, though it is worth noting that OTOY has said this is a best-case example of RTX's capabilities. Right now it sounds like the speed-up this tech provides depends heavily on the scene itself, with the potential for much smaller increases in other situations.
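The relationship between the "percent faster" and "fraction of the time" figures above can be sketched with a few lines of Python. The scores below are made-up placeholders for illustration, not measured results - OctaneBench reports a score where higher means faster:

```python
# Hypothetical OctaneBench-style scores (higher = faster rendering).
score_rtx_off = 200.0  # placeholder score with RTX disabled
score_rtx_on = 570.0   # placeholder score with RTX enabled

speedup = score_rtx_on / score_rtx_off   # throughput ratio (here 2.85x)
percent_faster = (speedup - 1) * 100     # "185% faster"
time_fraction = 100 / speedup            # render time as % of RTX-off time

print(f"{percent_faster:.0f}% faster; render takes {time_fraction:.0f}% of the RTX-off time")
```

The two figures are just two views of the same ratio: being 185% faster (2.85x throughput) is the same as finishing in about 35% of the original time.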

Another factor worth noting is the difference in onboard memory between these cards. In particular, the GeForce RTX 2080 Ti is only about 5% slower than the Titan RTX in OctaneRender, while costing half as much, but it also has less than half the memory of the Titan. That makes the 2080 Ti a great value for most folks, but if you need more room for your scene data then the Titan RTX with 24GB or potentially even a Quadro RTX 8000 with 48GB of VRAM is worth considering. On the other hand, for smaller projects, multiple lower-cost GeForce cards will outperform a single RTX 2080 Ti or Titan RTX if graphics memory capacity is not a big concern.

Whatever your needs and budget are, getting an RTX-series card will give a big boost over older models in OctaneRender 2019!

What Is the Best GeForce Card for OctaneRender 2019?

While the Titan RTX is currently the fastest GPU for OctaneRender 2019, its use in multi-GPU systems is limited by its cooling layout. For just one or maybe two cards, in a chassis with very good airflow, it is the top dog... but for folks wanting to build a system with three or four cards the GeForce RTX 2080 Ti with a rear-exhaust style cooler will pump out better performance thanks to the ability to stack closer together without overheating. If your scenes require more than the 2080 Ti's 11GB of VRAM, consider a Quadro RTX instead.


Tags: GPU, Rendering, Octane, Render, OTOY, OctaneBench, 2019, Preview, Benchmark, NVIDIA, GeForce, Titan, RTX, Turing, Performance, Video, Card
Roi Nigo

Hi! Thanks for this review. What about multi-fan coolers like the Zotac RTX 2080 Ti AMP edition (3 fans)? Should we expect "issues" in a multi-GPU configuration?

Posted on 2019-03-17 14:48:43

Every multi-fan cooler I have seen has the same core issue: the majority of the heat they generate is pumped back into the chassis, rather than being exhausted out of it. That means that the air in the case - especially right around the video cards - is dramatically warmer than it would be otherwise, and that is the same air that the video cards then try to pull in for cooling. If the air is already hot, it won't cool the card effectively... and the more cards you put in, the worse the problem gets. One is usually fine, assuming you have some airflow through the chassis, and with enough airflow (especially helped by a side fan right over the video cards) two can be okay. Any more than that has proven to be too much in our testing, and leads to overheating video cards and drastically throttled performance when under load.

Cards like the AMP Edition you referenced have additional complications. That model has a triple-wide cooler, which means that even just two of them will take up a full six PCI-E slots on your motherboard. Moreover, the way some motherboards have their PCI-E slots spaced out is designed for two double-wide GPUs with an empty slot between them to allow for better airflow... but a pair of these cards, in that sort of motherboard, would have almost no space between them. That could make it harder for the upper card's fans to effectively pull in air, so even just two cards might be trickier in that situation.

Posted on 2019-03-18 16:20:29
Roi Nigo

Thanks a lot for those explanations!

Posted on 2019-03-18 18:43:37
ComputahNerd

If you are going to stack up 3 or 4 RTX GPUs, would it be sensible to NVLink two of them, since Redshift 3.0 etc. will support NVLink? Or would it not make any difference when adding more than two GPUs?

Posted on 2019-05-01 09:24:00

I suspect that having 2 out of 3 cards in NVLink won't be helpful, but if you have 4 then two pairs in NVLink might be a good idea. I'd love to test that when rendering engines start to support NVLink, but I am concerned that the benefits of NVLink may not be visible when using the relatively small scenes included in public benchmarks.

Posted on 2019-05-01 16:51:18

Hello. Thank you for the article. I am currently new to offline GPU rendering, as I have mostly worked with CPU rendering before.

I am contemplating making the move to GPU rendering in order to achieve my goal of rendering ~2 minute 4K@60 product viz animations within a max time budget of 24 hours.

I don't yet know enough about GPU rendering to know how many RTX GPUs I would need to make this goal a reality. Would you give me any suggestions? Thank you!

Posted on 2019-06-11 16:33:54

Unfortunately there is a lot that goes into how long rendering takes, so there is no simple way to answer that question. Beyond resolution and frame count (which can be calculated as frame rate × number of seconds), it will also be affected by how complex your scenes are, which is a huge topic by itself.

Thankfully, OctaneRender scales almost linearly - so if you want to get started, maybe try and see how fast a single GPU is in your specific workload. If it takes (for example) 3 days to do what you want to accomplish in 1 day, then simply adding two more should get you to where you want to be. If it turns out that it takes way too long, then you can reconsider your options (faster GPU? reducing scene complexity? etc) before spending more money and maybe not reaching your goal.
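The estimate described above can be sketched as a quick back-of-the-envelope calculation. The per-frame render time below is a hypothetical placeholder - you would substitute the time you actually measure on one GPU with a representative scene - and the frame count comes from the 4K@60, ~2 minute, 24-hour goal in the question:

```python
import math

# Rough GPU-count estimate, assuming near-linear multi-GPU scaling in
# OctaneRender. All inputs are illustrative; measure your own
# single-GPU per-frame time first.
frames = 60 * 120            # 60 fps x ~2 minutes -> 7200 frames
sec_per_frame_1gpu = 45.0    # hypothetical measured seconds per frame on one GPU
time_budget_sec = 24 * 3600  # 24-hour budget

total_1gpu_sec = frames * sec_per_frame_1gpu
gpus_needed = math.ceil(total_1gpu_sec / time_budget_sec)

print(f"One GPU would take {total_1gpu_sec / 3600:.1f} hours;")
print(f"roughly {gpus_needed} GPU(s) needed to fit the 24-hour budget")
```

With these placeholder numbers, one GPU would take 90 hours, so about four GPUs would be needed - but the real answer depends entirely on the measured per-frame time, which is why testing a single card first is the sensible starting point.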

Posted on 2019-06-11 16:44:29

Thank you William! I have taken notes. Hopefully I will be able to get some results.

Posted on 2019-06-12 02:36:50
ahmed mansour

Hi there,
I have a Zotac RTX 2060 Super with an AMD Ryzen 3700X. Whenever I try to start OctaneBench it gives me an error: "no supported GPU found". Do you have any idea what the problem is?

Posted on 2020-04-29 21:20:12

Hmm, it sounds like Octane can't detect the NVIDIA card properly. I would start by trying a clean installation of the latest driver for that video card. During driver installation you can select a clean install, which will first remove the existing driver - so if something is messed up, that should help clear it up. Let me know if that works :-)

Posted on 2020-04-29 21:22:42
ahmed mansour

Thanks so much, I will give it a try and let you know.

Posted on 2020-04-29 21:37:45