

Read this article at https://www.pugetsystems.com/guides/1176

Does the CPU Matter for OctaneRender?

Written on June 8, 2018 by William George


OctaneRender, from OTOY, is a real-time, physically correct 3D rendering engine that uses GPUs instead of CPUs for processing. This is a relatively new approach, as rendering was traditionally done on CPUs. Graphics processors are ideal for highly parallel tasks like rendering, though, and it is easier to fit multiple video cards in a single computer than multiple CPUs.

A computer still has to have a central processor (CPU), though, and both the CPU and motherboard in a system can impact how many video cards can be installed. For example, higher-end processors usually support more PCI-Express lanes and larger motherboards can fit more PCI-Express slots.
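As a back-of-the-envelope illustration of the lane math (real boards also route lanes through the chipset and fixed slot layouts, so treat this as a sketch, not a build guide):

```python
def max_gpus(cpu_lanes, lanes_per_gpu):
    """How many video cards a CPU's PCI-E lanes can feed at a given link width."""
    return cpu_lanes // lanes_per_gpu

# The Xeon W CPUs used in this article expose 48 PCI-E lanes:
full_x16 = max_gpus(48, 16)  # 3 cards at full x16
at_x8 = max_gpus(48, 8)      # 6 cards at x8, slot layout permitting
```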

What about actual rendering performance, though? Does the clock speed or core count of a given computer's CPU impact how fast the system can process jobs in OctaneRender? Let's find out!

Test Setup

To see how clock speed and core count might affect OctaneRender performance - without bringing in too many other variables like chipset or PCI-Express lane support - we opted to use Intel's Xeon W platform. It offers processors from 4 to 18 cores, all of which support the same number of PCI-E lanes (48). For the motherboard, we used a Gigabyte MW51-HP0, which provides the right PCI-E slot layout for up to four GPUs. And finally, to make sure the CPUs were tested to their limits, we used the fastest GPUs for Octane: Titan Vs.

On the software side, we wanted to use the latest version of OctaneRender - which as of this writing is 3.08. The current release of OTOY's OctaneBench program, however, still uses the 3.06.2 version of the rendering engine, which does not support the Titan V cards we wanted to use - so we modified it slightly. You can manually copy the files from 3.08 into the folder containing OctaneBench, and it will then use the newer rendering engine. We cannot redistribute the modified software, but if you download both OctaneBench 3.06.2 and the demo version of OctaneRender 3.08 it is easy to copy over the necessary files.
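The copy-over step could be scripted along these lines (a sketch only - the folder names and the exact set of files to copy are assumptions; check what each package actually contains):

```python
import shutil
from pathlib import Path

def overlay_engine(src_dir, dst_dir, pattern="*.dll"):
    """Copy renderer files from a newer OctaneRender install over an
    OctaneBench folder, so the benchmark loads the newer engine.
    Returns the names of the files copied."""
    src, dst = Path(src_dir), Path(dst_dir)
    copied = []
    for f in src.glob(pattern):
        shutil.copy2(f, dst / f.name)  # copy2 preserves file metadata
        copied.append(f.name)
    return copied

# Hypothetical usage, assuming both packages are extracted side by side:
# overlay_engine("OctaneRender_3.08_demo", "OctaneBench_3.06.2")
```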

If you would like full details on the hardware configuration we tested on, just .

Benchmark Results

Here are the total scores from OctaneBench for the 1 to 4 Titan V cards running on the Xeon W-2125 and W-2195 processors:

OctaneRender Titan V Performance Scaling from 1 to 4 Video Cards on Xeon W-2125 and W-2195 Processors


Perhaps surprisingly, the CPU with fewer cores - Intel's Xeon W-2125 - outperformed the higher core count W-2195 in OctaneRender. That seems to have less to do with the number of cores, though, and more to do with the higher base and turbo clock speeds of the W-2125. To double-check this, we looked at a range of other tests we've run recently here in Labs and found that OctaneRender scores do appear to be higher on systems with the fastest clock speeds.

Why might this be? While the bulk of the calculations involved in OctaneRender - and other, similar GPU rendering engines - are carried out on the video cards, there are some small steps like loading data into the program and coordinating the workload between multiple video cards which do briefly use the central processor. That sort of usage isn't going to need a lot of cores, but with higher clock speeds (and more instructions per clock) such steps will be completed more quickly, leading to faster overall render times.
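One way to picture this is an Amdahl's-law-style model: the GPU portion of the job is fixed, while the small CPU-bound portion shrinks in proportion to clock speed. The numbers below are made up for illustration, not measurements:

```python
def total_render_time(gpu_seconds, cpu_seconds_at_4ghz, cpu_clock_ghz):
    """Total job time when the GPU portion is fixed and the small CPU-bound
    portion scales inversely with clock speed (illustrative model)."""
    return gpu_seconds + cpu_seconds_at_4ghz * (4.0 / cpu_clock_ghz)

# 100s of GPU work plus 10s of CPU work at a 4.0GHz baseline:
slow_cpu = total_render_time(100.0, 10.0, 4.0)  # 110.0 seconds
fast_cpu = total_render_time(100.0, 10.0, 5.0)  # 108.0 seconds
```

A faster clock only trims the small CPU-bound slice, which matches the modest (but real) differences we measured.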


Conclusion

So, does the CPU matter for OctaneRender? Yes - not a huge amount, but it does impact performance. Based on our data, it is apparent that having lots of CPU cores doesn't help OctaneRender, but high clock speeds do. This is good news for several reasons:

  • CPUs with fewer cores tend to cost less, though there can be a small premium for high clock speeds in some cases.
  • Many of the applications that would be run alongside OctaneRender - like Cinema4D, Maya, and 3ds Max - also perform best on low core count but high clock speed processors.
  • CPU power usage goes up with both increased clock speed and higher core counts, so having fewer cores will help keep that in check and leave more power for the video cards in a system (which also need a lot of juice).
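On that last point, a rough power-budget sketch (the wattages below are illustrative TDP-style figures, not measurements from our test systems):

```python
def psu_headroom(psu_watts, cpu_watts, gpu_watts, gpu_count, other_watts=75):
    """Watts left over after the CPU, the video cards, and a rough allowance
    for other components (drives, fans, RAM)."""
    return psu_watts - cpu_watts - gpu_watts * gpu_count - other_watts

# e.g. a 1600W supply, a 140W CPU, and four 250W cards:
headroom = psu_headroom(1600, 140, 250, 4)  # watts to spare
```

Every watt not spent on extra CPU cores is headroom for the video cards, which do the actual rendering work.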

Based on this info, and in combination with our other recent tests, we are moving our OctaneRender configurations toward using high clock speed processors like the Xeon W-2125 and Core i7 8700K. You can see those changes reflected in the systems below.

Recommended Systems for OctaneRender

Tags: CPU, Processor, Motherboard, Chipset, Multi, GPU, Scaling, Rendering, Octane, Render, OTOY, OctaneBench, Benchmark, Performance, Intel, Xeon W, Video, Card

OTOY's RenderToken (RNDR) project team also found that disabling CPU power-saving features (BIOS/Windows/etc.) helps increase performance. This might be due to the system staying more responsive by avoiding CPU sleep states when it idles during rendering.

Posted on 2018-06-11 08:55:31

The RNDR community is doing a lot of testing for OctaneRender. You can follow the project on https://www.reddit.com/r/Re... or https://twitter.com/rendert....

Posted on 2018-06-11 08:57:59

Thanks for those links! I'll check them out :)

Posted on 2018-06-11 21:23:28

Could you do comparisons of 3 GPUs on the cheaper platforms?

The AM4 B350 platform would allow for 2 GPUs at PCI-E 3.0 x8, x8, then 1 GPU at PCI-E 2.0 x4. Along with a 6 or 8 core CPU for scene loading.

This would be comparatively cheap in terms of overall system cost per GPU, but obviously the PCI-E 2.0 x4 slot is 1/4 the speed of PCI-E 3.0 x8. You've established that the 3.0 x8 speed has no impact on performance, but is 1/4 that speed too far?

Posted on 2018-06-26 11:24:18

We don't carry any products for that AMD chipset, so I don't have the stuff necessary to test it. However, I think you can find some Z370 motherboards with three PCI-E x16 size slots (maybe with x8 / x4 / x4 layout?) and that should be a decent, low-cost 3-GPU option. The AMD might be fine too, but if you are dropping to both PCI-E 2.0 and x4 lanes... I don't know, that does seem like it could start being an issue. AMD chips also tend to have lower clock speeds and IPC, which seems to be the main factor for scene loading (number of cores doesn't seem to matter so much).
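For a rough sense of the gap being discussed, here is the bandwidth math using approximate per-lane throughput after encoding overhead (~500 MB/s per lane for PCI-E 2.0, ~985 MB/s for 3.0 - spec-sheet figures, not measurements):

```python
# Approximate usable bandwidth per PCI-E lane, in GB/s:
GBPS_PER_LANE = {"2.0": 0.500, "3.0": 0.985}

def link_bandwidth(gen, lanes):
    """Approximate total link bandwidth for a given generation and width."""
    return GBPS_PER_LANE[gen] * lanes

gen3_x8 = link_bandwidth("3.0", 8)  # ~7.9 GB/s
gen2_x4 = link_bandwidth("2.0", 4)  # ~2.0 GB/s
```

By that math, 2.0 x4 really is roughly a quarter of 3.0 x8, which is why it is hard to say without testing whether scene loading would suffer.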

Posted on 2018-06-26 16:41:37

Thanks for the quick reply. I wasn't aware the Z370 platform offered that configuration.

I've had a look, and getting an i3 with high clock speeds, plus a motherboard with PCI-E 3.0 x8, x8, x4, only comes to about $300. That's indeed good for a platform allowing 3 GPUs.

Posted on 2018-06-27 13:06:41

That is interesting - how much of a difference have they seen?

Posted on 2018-06-11 21:23:15

So far they have reported 10-12% higher scores just from switching the Windows 10 power plan from Balanced to Performance mode. I guess that heavily depends on your particular setup.

Posted on 2018-06-11 22:06:00
Pedro Alves

Hi, we have 15 computers with 4 GPUs each. We have been using i7 6700Ks and i7 7700Ks in our systems with Asus motherboards with a PLX chip, so we get PCIe x8 on all cards despite the low lane count on the CPU. Recently we bought an i7-9800X with 44 PCIe lanes, paired with an Asus X299 SAGE/10G motherboard, which gets us PCIe x16 on all GPUs. The result has been suboptimal. The new machine is around 25% slower loading the same scene and also around 15% slower compiling, compared to an i7-7700K. Having 10Gbit Ethernet is convenient for the future, as is the ability to go up to 128GB of RAM, but why is it so slow? We do have to use Windows 7, because Windows 10 reserves a huge amount of VRAM that we need.

Posted on 2019-03-05 12:48:05

Hmm, some things like scene loading can be single-threaded, in which case a CPU with higher clock speed is more helpful than one with a lot of cores... but I just double-checked and those three processors are all around the same single-core turbo speed (4.2 - 4.5GHz). So that shouldn't be the issue.

Honestly, without being able to work directly on the hardware it is hard to say what might be happening. I would recommend trying various benchmarks, though, to see if you can pin down other things that show a similar difference. For example, try out Cinebench for a simple single-core and multi-core CPU test. I would expect all of those CPUs to be in the same ballpark (within 10% of each other) for single-core speed, but if the 9800X is lower for some reason then it could point to an underlying issue.
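That "same ballpark" sanity check could be written down as a one-liner (the 10% tolerance is an arbitrary choice, not a standard):

```python
def same_ballpark(score_a, score_b, tolerance_pct=10.0):
    """True if two benchmark scores are within a given percentage of the first."""
    return abs(score_b - score_a) / score_a * 100.0 <= tolerance_pct
```

If single-core Cinebench scores fail a check like this, that points to an underlying platform issue rather than an OctaneRender problem.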

Also, is there any difference to actual rendering speed - or is it just the scene loading time? And what are you using to compile that is showing the slowdown there? Lastly, I haven't used Windows 7 in quite a while... it is entirely possible that the support for that older OS on newer hardware (the 9800X is a current-gen processor) is not very good.

Posted on 2019-03-05 17:25:59
Pedro Alves

Thank you for your reply

Rendering time is the same across all computers, they are all equipped with the same GPUs and we haven't seen any benefit or drawback from any CPU.

I have run Cinebench, and single-core on the 9800X is marginally faster than the 7700K - below a 5% difference (I don't have the precise numbers with me) - and around double the speed for multi-core.

We have been trying to improve the loading and compiling speeds, because with 9 people working, sometimes waiting 5 to 10 minutes for a scene to load several times a day, many hours are lost by the end of the month. I suppose we can't do much about it. I will check whether the 9800X is turboing when loading.
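For a sense of scale, that waiting time adds up quickly. The loads-per-day and minutes-per-load figures below are assumed values within the ranges mentioned, purely for illustration:

```python
def hours_lost_per_month(people, loads_per_day, minutes_per_load, workdays=22):
    """Aggregate waiting time across a team, in hours per month."""
    return people * loads_per_day * minutes_per_load * workdays / 60.0

# 9 people, an assumed 4 loads each per day, at 7.5 minutes per load:
lost = hours_lost_per_month(9, 4, 7.5)  # roughly 99 hours per month
```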

Posted on 2019-03-05 19:52:23

Okay, so it sounds like the 9800X is behaving as expected performance-wise... and if it is beating the 7700K, marginally, then that should mean that it is turboing properly.

I'm honestly not sure what might be going on with the loading times. How large are the scenes you are working with? Are they coming from a centralized network storage location, or are they stored on each individual workstation?

Posted on 2019-03-06 17:54:21
Pedro Alves

We do have a centralised NAS that people access the projects from, but to perform this test we exported a project and loaded it locally on each machine. The project is 6GB. I forgot to mention that the 7700K only has 32GB of RAM and the 9800X has 64GB, although I don't think that would influence the results.

Posted on 2019-03-06 18:14:46

More RAM certainly shouldn't hurt, unless it is not configured in a proper multi-channel memory setup.

That is a large project... maybe drive speed is at play? I wouldn't think it would make such a big difference, but what sort of drives are in each of the systems?
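A quick, crude way to compare sequential read speed between the machines from Python (OS caching will inflate repeat runs, so only a first pass over a large file is meaningful as a drive test):

```python
import os
import time

def sequential_read_mbps(path, chunk_bytes=4 * 1024 * 1024):
    """Rough sequential-read speed of an existing file, in MB/s."""
    size = os.path.getsize(path)
    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(chunk_bytes):  # read until EOF
            pass
    elapsed = time.perf_counter() - start
    return size / (1024 * 1024) / elapsed
```

Pointing this at the exported 6GB project file on each machine would show whether storage is part of the gap.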

Also, I recall that you said that you can't use Windows 10 due to VRAM issues - do you have a link with more info on that? We've been using Windows 10 exclusively for a while now, and I wasn't aware of any VRAM problems with it.

Posted on 2019-03-06 18:19:58
Pedro Alves

EDIT: the link wasn't working

Both systems have a M.2 Samsung 970 EVO 500GB

Regarding the VRAM issue, here is a link that I found with the most up-to-date explanation.


This has been a major issue since it consumes the VRAM from all GPUs, even the ones that are not being used for video output.

Posted on 2019-03-06 18:53:19
Pedro Alves

Hi William!

Today I did some more testing on the systems. I used HWMonitor to see the frequencies the CPUs were running at while loading the scene. I found that the 7700K was doing 4.5GHz on all 4 cores simultaneously, while the 9800X had 2 cores randomly going up to a maximum of 4.2GHz and sometimes below 2GHz, as if the computer was idle. I really don't understand why it's behaving like that, because in other usage the 9800X sometimes gets a core up to 4.5GHz... I tried disabling Hyper-Threading, but saw no improvement.

Posted on 2019-03-07 21:42:59

Newer CPUs are often more aggressive about throttling clock speeds up and down per-core as needed - instead of keeping a steady clock speed for longer periods of time - but that behavior can often be adjusted via BIOS settings. It may well be that only a couple of cores are active at a time when loading the scene, or it could be that something is amiss. I can't imagine that sort of load causing any overheating, but it might be worth checking temps to make sure (if you have a program handy that can do so, like CoreTemp).
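If a frequency monitor isn't handy, a crude single-thread spin test can also flag a machine whose CPU isn't reaching full turbo - compare the score across systems, or across cores with affinity pinned. This is an illustrative sketch, not a calibrated benchmark:

```python
import time

def spin_score(seconds=0.2):
    """Crude single-thread speed score: iterations of a tight loop
    completed in a fixed time window. Higher means a faster core."""
    deadline = time.perf_counter() + seconds
    count = 0
    while time.perf_counter() < deadline:
        count += 1
    return count
```

Two machines with similar single-core turbo speeds should produce scores in the same ballpark; a big gap suggests one of them is being held back.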

Posted on 2019-03-08 17:37:43

Also, just to check, I looked at the memory usage on a couple GeForce RTX 2080 Ti cards (with 11GB each) in Octane 4 under Windows 10. It looks like the card that was hooked to the display had 2.3GB reserved for system usage, while the card that was not plugged into a monitor had 2.1GB reserved. Do you happen to know how much is reserved on your Windows 7 systems?

Posted on 2019-03-08 18:59:15
Pedro Alves

With Octane 4 under Windows 7, with four 2080 Tis, the amount of memory unavailable is around 700MB on each GPU - and on one of them (strangely, not the one with the monitors attached) it is 800MB, sometimes up to a maximum of 900MB.

Posted on 2019-03-08 23:38:47

Okay, wow, that is substantially less! Thank you for bringing that to my attention. I had not realized there was such a big difference between Windows 7 and 10 in this area. It really is a pity, since Windows 7 is not as well supported anymore on newer hardware.
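Putting the two figures reported in this thread side by side, as fractions of an 11GB 2080 Ti:

```python
def reserved_pct(reserved_gb, total_gb=11.0):
    """Share of a GPU's VRAM reserved by the OS (11GB = an RTX 2080 Ti)."""
    return reserved_gb / total_gb * 100.0

win10_pct = reserved_pct(2.3)  # Windows 10, card driving the display: ~21%
win7_pct = reserved_pct(0.7)   # Windows 7, typical card in these systems: ~6%
```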

Posted on 2019-03-09 00:11:11
Pedro Alves

The cooler it has is an ARCTIC Freezer 34. I'll try with a Noctua NH-U12S and see if it changes.

Posted on 2019-03-08 23:16:24