Read this article at https://www.pugetsystems.com/guides/1241

OctaneRender 3.08: NVIDIA GeForce RTX 2080 & 2080 Ti GPU Rendering Performance

Written on September 28, 2018 by William George


OctaneRender, from OTOY, is a real-time, physically correct 3D rendering engine that uses GPUs instead of CPUs for processing. This is a relatively new approach; traditionally, rendering was done on CPUs. Graphics processors are ideal for highly parallel tasks like rendering, though, and it is easier to fit multiple video cards in a single computer than multiple CPUs.

With NVIDIA's recent release of the GeForce RTX 2080 and 2080 Ti, we wanted to see how these new graphics cards stack up to the well-respected 1080 Ti and the top-end Titan V.

Test Setup

In order to test the most recent video cards, we took the OctaneBench program and modified it slightly. As of this publication, OctaneBench is still using the 3.06.2 version of OTOY's rendering engine - which does not support either the Titan V or the new GeForce RTX cards. However, you can manually copy over the files from 3.08 into the folder containing OctaneBench and then it will use the newer rendering engine. We cannot redistribute the modified software, but if you download both OctaneBench 3.06.2 and the demo version of OctaneRender 3.08 it is pretty easy to copy over the necessary files.
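As a rough sketch of that file-copy step (the install paths, the `*.dll` pattern, and the helper name below are assumptions for illustration, not OTOY's official procedure - always back up the original files first):

```python
import shutil
from pathlib import Path

def copy_engine_files(src_dir, dst_dir, pattern="*.dll"):
    """Copy rendering-engine files from src_dir over the matching
    copies in dst_dir, backing up anything overwritten with a .bak
    suffix. Returns the names of the files copied."""
    src_dir, dst_dir = Path(src_dir), Path(dst_dir)
    copied = []
    for src in sorted(src_dir.glob(pattern)):
        target = dst_dir / src.name
        if target.exists():
            # Keep a backup so OctaneBench can be restored to 3.06.2
            shutil.copy2(target, target.with_name(target.name + ".bak"))
        shutil.copy2(src, target)
        copied.append(src.name)
    return copied

# Hypothetical usage (paths depend on where you extracted each download):
# copy_engine_files(r"C:\OctaneRender_3.08_demo", r"C:\OctaneBench_3.06.2")
```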

For our test platform, we wanted to use a high-end processor so that there is no way it could limit any of the video cards. Since we are looking at single GPUs (sadly, we only have one RTX 2080 Ti at this time), we didn't worry about supporting a maximum number of cards. Full details on the hardware configuration we tested on are available in the original article.

Benchmark Results

Here are the OctaneBench 3.08 scores for each of the video cards we tested:

[Chart: OctaneBench 3.08 GPU performance comparison - Titan V, GeForce RTX 2080 Ti, GeForce RTX 2080, GeForce GTX 1080 Ti, and GTX 1070 Ti]

And here is another way of looking at the results, as percentages relative to the GeForce GTX 1080 Ti's performance:

[Chart: OctaneBench 3.08 performance as a percentage of the GTX 1080 Ti result - Titan V, GeForce RTX 2080 Ti, GeForce RTX 2080, GeForce GTX 1080 Ti, and GTX 1070 Ti]


It is a little tricky to turn raw benchmark scores into actionable information, as in the first graph above, but the second chart makes things much clearer. The new GeForce RTX 2080 is neck and neck with the similarly-priced GTX 1080 Ti from the previous generation (only 2% faster), though it has less dedicated video memory (8GB vs 11GB) - so for folks working with larger scenes, it may make sense to stick with the 1080 Ti in that price range. For those with a little more to spend, the RTX 2080 Ti comes in 30% faster than the GTX 1080 Ti - and not far behind the much more expensive Titan V - while still having a respectable 11GB of VRAM.
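The relative chart is simply each card's OctaneBench score divided by the GTX 1080 Ti's score. A minimal sketch of that calculation (the scores below are hypothetical placeholders chosen to mirror the percentages discussed, not our measured results):

```python
def relative_scores(scores, baseline):
    """Express each OctaneBench score as a rounded percentage of the baseline card."""
    base = scores[baseline]
    return {name: round(100 * s / base) for name, s in scores.items()}

# Hypothetical scores purely to show the calculation - see the charts
# above for our actual measurements.
example = {
    "GTX 1080 Ti": 200,
    "RTX 2080": 204,     # ~2% faster than the 1080 Ti
    "RTX 2080 Ti": 260,  # ~30% faster
}
print(relative_scores(example, "GTX 1080 Ti"))
```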

Keep in mind that these are only single-card scores, and doubling (or tripling/quadrupling) up on video cards can give even more impressive results. However, we have found that the dual-fan coolers used by NVIDIA on these new GeForce RTX cards are problematic when used in multi-GPU configurations. For more information about that, check out our in-depth look at that topic.

Finally, it is worth noting that this version of OctaneRender is not yet able to utilize the new RT cores which NVIDIA added to this generation of GPUs. Those are purpose-built for raytracing, and if Octane is updated to use them in the future we could see a huge increase in performance. We will be sure to test these cards again if or when that technology is integrated into this rendering engine.

Are the GeForce RTX 2080 and 2080 Ti Good for OctaneRender?

The GeForce RTX GPUs do very well in Octane, especially the 2080 Ti, but their true potential is the dedicated RT cores for raytracing. When OctaneRender is updated to use those, we could see a leap in performance! Avoid using dual-fan cards in multi-GPU configurations, though. If you want multiple cards, wait for single-fan blower models.

Tags: Multi, GPU, Scaling, Rendering, Octane, Render, OTOY, OctaneBench, Benchmark, NVIDIA, GeForce, RTX, 2080, 2080 Ti, Turing, Performance, Video, Card

There are already results for OctaneBench 5 (2018.1) and more information about the role of NVLink in this post: https://www.reddit.com/r/Re...

Btw. enjoyed the interview you guys did with Paul from RedGamingTech a lot!

Posted on 2018-10-02 05:57:50

I am skeptical of the comments there indicating that OTOY has confirmed NVLink memory sharing for the RTX GeForce cards. What I have read indicates that all the GeForce cards support is SLI over NVLink - so higher bandwidth and old-fashioned SLI, probably, but not the additional features that Quadro cards are capable of (like memory pooling). We have tried the older Quadro GP100 / GV100 NVLink bridges on RTX 2080 cards, and they didn't seem to work at all... I mean the bridges - the cards worked, but there were no additional SLI or NVLink options that I could see in the NVIDIA control panel. We are going to try and get some of the new GeForce NVLink bridges in and see what happens, though.

Posted on 2018-10-02 16:37:36

We did some testing, and it doesn't look like NVLink is fully functional on the GeForce RTX cards: https://www.pugetsystems.co...

Posted on 2018-10-05 23:47:11

I have added links to some real-world tests of GeForce NVLink published by V-Ray:


Posted on 2018-10-08 20:25:36

Interesting - if I am reading the data on that link correctly, NVLink on dual 2080s was *never* beneficial (compared to just having the two cards without NVLink). Do you concur? (based on just that one set of data)

Posted on 2018-10-08 21:11:36

As discussed, there is no way around the fact that NVLink will have some impact on speed compared to rendering on a single GPU, assuming the whole scene fits in a card's VRAM.

Those posts show that NVLink can be used to pool the memory of two cards, resulting in 22GB accessible with higher bandwidth than out-of-core system memory access. Vlado did not mention the scene size in GB in that post, so we don't know whether it would fit in a single GPU's VRAM anyway. V-Ray started out as a CPU renderer, and V-Ray Next uses out-of-core rendering automatically.

Tl;dr: you cannot read more into the results than the simple fact that memory pooling will be possible with GeForce RTX consumer cards, and that it comes with a performance hit of a yet-to-be-determined scale.

Posted on 2018-10-09 06:51:32
Vedran Klemen

Why do you recommend single-fan cards for multiple GPUs?

Posted on 2019-01-26 20:33:01

Because of the findings in this article, where a set of four dual-fan (Founders Edition) RTX 2080 cards faced massive thermal throttling issues.


There is a lot more detail in the article itself, but the issue has to do with the dual-fan cards pumping heat back into the computer. That may be okay with one card, but when you have multiple (even just two can be an issue!), the cards themselves, along with all of the other components, have to contend with much higher ambient air temperatures, which in turn reduce the cooling their heatsinks and fans can provide. Single-fan blower cards, by comparison, exhaust most of the heat they generate out the back of the computer and are able to maintain higher performance even under extended loads.
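If you want to check whether your own cards are running hot in a multi-GPU box, nvidia-smi can report per-GPU temperatures in CSV form. A small sketch that parses that output (the sample string and the 80C cutoff below are made up for illustration; the actual throttle point varies by card):

```python
def parse_gpu_temps(csv_text):
    """Parse 'index, temperature' CSV lines, as produced by
    nvidia-smi --query-gpu=index,temperature.gpu --format=csv,noheader,
    into a {gpu_index: temp_celsius} dict."""
    temps = {}
    for line in csv_text.strip().splitlines():
        idx, temp = [field.strip() for field in line.split(",")]
        temps[int(idx)] = int(temp)
    return temps

# Made-up sample output from a hypothetical four-card system:
sample = "0, 62\n1, 81\n2, 84\n3, 79"
temps = parse_gpu_temps(sample)
hot = [i for i, t in temps.items() if t >= 80]  # cards in the danger zone
print(hot)
```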

Posted on 2019-01-28 17:17:25
Vedran Klemen


Thank you for your answer. OK, I see the Zotac Blower 2070 would be good for multi-GPU rendering. I would like to ask one more question, if you can answer... Are those cards ready for overnight 3D rendering and similar jobs? I see that some of those cards just stop working when mining and such... In gaming they never exceed 100% of power. I guess if I had two 2070s then I wouldn't need to run them overnight, as they could finish the job during the day. I think a safer bet would be a Ryzen 1950X or 2990WX for overnight rendering? I think there is no factory guarantee for rendering and mining. And the 2070 has memory issues, and kids are sending them back massively on RMA. :(

Posted on 2019-01-29 21:23:56

Using a CPU like the Threadripper 1950X or 2990WX requires a completely different render engine than what you would use with video cards like the RTX 2070s - a CPU-based rendering engine, rather than a GPU-based one. I would recommend picking the render engine you want to use first, and then getting a workstation built to maximize performance for that specific renderer.

As for whether the GeForce RTX cards are ready for overnight rendering, I know we have customers using them for that - but I do also know that failure rates on the early RTX cards have been a little higher than previous generations. If you are overly concerned about that, using Quadro RTX cards (instead of GeForce) should help: they are designed for more intensive, workstation-class workloads. The downside is price: a single Quadro RTX 6000, the equivalent in performance and memory of a Titan RTX, is over two and a half times the price! You could get a set of four RTX 2080 Ti cards for less than one RTX 6000, and they would perform far better in most rendering situations... and even if you had one or two fail, and lost some time to those failures, you'd probably still end up getting more work done for the same price. Now if your budget allows for multiple Quadro cards, then that would be even better!
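To make that price comparison concrete (the figures below are approximate 2019 launch prices, used here as assumptions to illustrate the arithmetic rather than exact quotes):

```python
# Approximate 2019 launch prices in USD - treat these as assumptions.
prices = {
    "Quadro RTX 6000": 6300,
    "RTX 2080 Ti": 1199,
}

# Four GeForce RTX 2080 Ti cards vs a single Quadro RTX 6000:
quad_2080ti = 4 * prices["RTX 2080 Ti"]
print(quad_2080ti, quad_2080ti < prices["Quadro RTX 6000"])
```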

Posted on 2019-02-01 19:12:14
Vedran Klemen

Yes, Arnold is free and also has OptiX. I already have a FirePro W7100 (in Cinebench it is faster than a 2080 Ti, lol). :) So the cheapest bet would be a new 2070 with OptiX turned on. :)

Posted on 2019-02-02 18:22:19
Vedran Klemen

Actually, turning the denoiser on in After Effects would be even cheaper... :)

Posted on 2019-02-02 18:26:32
Muhammad Kaunain

Which card is best for 3D modeling, 3D animation, and rendering: the Quadro P4000 or the RTX 2080?

Posted on 2019-02-11 22:48:58

The RTX 2080 is a newer and much more powerful card, so I would go for that - unless you are running software that is only certified on Quadro cards like the P4000. For example, Autodesk usually doesn't test / certify GeForce cards on their applications like Maya and 3ds Max... so there, the Quadro might be a safer (though slower) choice.

It is also worth noting that if you plan to stack multiple cards together, the cooling on many GeForce RTX series cards is less than ideal. The standard setup on those cards is two fans, which is great for keeping a single card cool and quiet... but does not do well when you stack more than one card together in a system.

Posted on 2019-02-11 22:51:34
Muhammad Kaunain

The GeForce RTX 2080 benchmarks higher than the Quadro GP100, so why are Quadro cards more expensive than GeForce?
Quadro GP100: $7,000
GeForce RTX 2080: $899

Posted on 2019-02-11 23:03:40

Quadro cards have always been more expensive, and always will be. They are NVIDIA's professional graphics card line, while the GeForce are mainstream. GeForce is intended for gamers and the general public, rather than engineers, artists, and scientists. But because of how powerful and cost-effective GeForce cards are, they often get used in situations where NVIDIA may not intend or really want them to be used. They've tried to fight that a few times, most recently with the way they are setting up cooling on GeForce and even the latest Titan cards... but people still try to use the more affordable options whenever they can get away with it.

It is also worth noting that the GP100 is a generation older, so that isn't a completely fair comparison. Quadro cards also tend to have more onboard video memory, and on high-end cards even feature ECC memory.

Posted on 2019-02-11 23:36:59
Anatoliy Zhygarev

Hello @William M George! Can you please test the 2060 Super, 2070 Super, 2080 Ti, and 1070 in FStorm render? FStorm is a very powerful GPU renderer. I want to buy a new card for rendering in FStorm and am deciding between a new 20-series card and two 1070s. Thank you!

Posted on 2019-10-16 17:50:49

I'm sorry, but FStorm isn't one of the rendering engines we build systems for - so we don't have a license for it, and I don't think I'll be able to do any testing on it. I also don't see a current benchmark; there was one at some point, it seems, but the link is dead now... and it looks like it wasn't stand-alone, but rather just a scene to test within FStorm.

If there is any chance that FStorm will be adding RTX functionality, though, I would strongly recommend one of the RTX series cards (2060 or higher) rather than the older generation. Two of the RTX 2060 Super cards might be a good option, if you don't want to spend a lot of money and feel that 8GB would be sufficient VRAM for your scene sizes. A 2080 Ti would give you more VRAM (11GB vs 8) but would likely be a little slower in pure rendering speed.

Posted on 2019-10-16 18:00:50