Read this article at https://www.pugetsystems.com/guides/1536

V-Ray Next CPU Roundup: AMD Ryzen 3, AMD Threadripper 2, Intel 9th Gen, Intel X-series

Written on July 22, 2019 by William George


AMD's new Ryzen 3rd generation processors feature increases in both core count and per-core performance, both of which directly improve rendering speeds in V-Ray Next. In this article we will take a look at how they stack up against other AMD and Intel processors in this application, in both the pure CPU and GPU+CPU render pipelines. We also took a look at rendering in Cinema 4D in another article.


Test Hardware

For this roundup we have several models in the new Ryzen 3rd Gen family - along with a couple of older 2nd Gen models, AMD's Threadripper 2nd Gen lineup, and Intel's mainstream 9th Gen Core and high-performance Core X processors. The main results all use the same memory: 16GB DDR4 modules running at 2666MHz (4 of them on the Ryzen and Core platforms, 8 on Threadripper and Core X). This was selected to ensure a fair comparison, but because Ryzen also officially supports higher-speed memory in certain configurations, we tested the new 3rd Gen chips at 3200MHz as well. Those results are included here too, but in a separate set of charts near the end.

AMD Ryzen Test Platform
CPU AMD Ryzen 9 3900X
AMD Ryzen 7 3800X
AMD Ryzen 7 3700X
AMD Ryzen 5 3600
AMD Ryzen 7 2700X
AMD Ryzen 5 2600X
CPU Cooler AMD Wraith PRISM
Motherboard Gigabyte X570 Aorus Ultra
RAM 4x DDR4-2666 16GB (64GB total)
4x DDR4-3200 16GB (64GB total)
Video Card NVIDIA GeForce RTX 2080 Ti 11GB
Hard Drive Samsung 960 Pro 1TB
Software Windows 10 Pro 64-bit (version 1903)
V-Ray Next Benchmark
Intel Core Test Platform
CPU Intel Core i9 9900K
Intel Core i7 9700K
Intel Core i5 9600K
CPU Cooler Noctua NH-U12S
Motherboard Gigabyte Z390 Designare
RAM 4x DDR4-2666 16GB (64GB total)
Video Card NVIDIA GeForce RTX 2080 Ti 11GB
Hard Drive Samsung 960 Pro 1TB
Software Windows 10 Pro 64-bit (version 1903)
V-Ray Next Benchmark
AMD Threadripper Test Platform
CPU AMD Threadripper 2950X
AMD Threadripper 2920X
CPU Cooler Corsair Hydro Series H80i v2
Motherboard Gigabyte X399 AORUS Xtreme
RAM 8x DDR4-2666 16GB (128GB total)
Video Card NVIDIA GeForce RTX 2080 Ti 11GB
Hard Drive Samsung 960 Pro 1TB
Software Windows 10 Pro 64-bit (version 1903)
V-Ray Next Benchmark

Benchmark Details

We used the latest version of Chaos Group's V-Ray Next Benchmark for this comparison, which includes tests for both CPU-only and GPU+CPU rendering. This is somewhat novel, as most GPU rendering engines do not use the CPU at all - but in V-Ray Next, Chaos Group has implemented CUDA emulation on the CPU to improve performance a bit. Even the fastest CPU doesn't add as much performance in this mode as a single high-end video card, but who would turn down additional performance during renders at no added cost?
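The value of that hybrid mode is easy to reason about if you treat benchmark scores as throughput and assume they add roughly linearly across devices. A minimal sketch, using made-up scores rather than measured V-Ray results:

```python
# Illustration only: the scores below are hypothetical, not measured
# V-Ray Next Benchmark numbers.

def hybrid_speedup(gpu_score: float, cpu_score: float) -> float:
    """Fractional speedup from letting the CPU join a GPU render,
    assuming device throughputs (benchmark scores) add linearly."""
    return (gpu_score + cpu_score) / gpu_score - 1.0

# A GPU scoring 300 plus a CPU scoring 100 in GPU mode:
print(f"{hybrid_speedup(300, 100):.0%}")  # -> 33%
```

In practice the combined result won't be perfectly additive (scheduling overhead, shared memory bandwidth), but this is why even a modest CPU contribution is worth having when it comes free with the system.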

With the information we have gathered, we put together three charts. The first shows CPU-only performance, the second GPU+CPU, and the third the CPU by itself in GPU mode. The first and third charts are the most important when considering which CPU to pick for this application, depending on which rendering pipeline you plan to use. Please also note that the choice of CPU (and motherboard) will impact how many video cards you can use - so if you are going the GPU route, systems built around higher-end processors can fit more cards.


A note about the color coding used here: AMD Threadripper chips are shown in orange, Ryzen is red, and all Intel processors are blue.


In CPU mode, shown on the first chart, Intel's Core X and AMD's Threadripper processors lead the results - which makes perfect sense, given their high core counts. However, the new 12-core Ryzen 9 3900X does remarkably well too: it matched the performance of the 16-core Threadripper 2950X for a couple hundred dollars less! The 8-core Ryzen 3rd Gen chips didn't quite match Intel's processors with the same core count, but they are very close - far closer than the 2nd Gen 2700X, which is about 15% further behind.

GPU mode, best shown in the third chart, seems to be even better threaded... and it looks like AMD's architecture is also more favorable. Here, the biggest Threadripper CPUs take the top spots by a wide margin - and the Ryzen 9 3900X beats Intel's Core i9 9920X as well.

Ryzen 3rd Gen Memory Comparison

Most CPUs are rated by manufacturers to support specific speeds of memory, and AMD's new Ryzen 3rd Gen chips are no exception. In this case, the official memory support varies depending on how many modules you have installed and whether they are single- or dual-rank. Tom's Hardware conveniently published these details, which likely came from AMD's reviewer guide (which we do not have):

AMD Ryzen 3rd Gen Processor Memory Support Chart (courtesy of Tom's Hardware)


The normal memory we used in this test fits the bottom category on that chart: four sticks of dual-rank memory. As such, testing at 2666MHz was the correct speed according to AMD's official support documents. To get the other end of the spectrum, we tested with 3200MHz memory modules as well - which is at the top of the chart, though we still used four 16GB modules in order to keep the total amount of RAM the same. That means this configuration is actually outside of AMD's official support specs, but it should still show whether there is anything to gain by going with faster memory on this platform (and in this specific application). Red is 3200MHz, orange is 2666MHz.

In CPU mode, there is about a 2% performance increase going from 2666MHz to 3200MHz - which is close to the margin of error, but since it was steady across all four processors we tested I think it is safe to say that there is a *very small* benefit to faster memory in this situation. However, to officially stay within AMD's support specs and still use 3200MHz memory would mean only having two sticks of memory... which would mean a limit of 32GB at this time (2 x 16GB), and if you are rendering large or complex scenes that might not be enough. Having renders take 2% longer seems a small price to pay for being able to fit a lot more RAM into the system.
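The percentage gains quoted here are just ratios of benchmark scores. A quick sketch of that arithmetic, using hypothetical score values (not our actual measurements):

```python
# Hypothetical benchmark scores for illustration - higher is better.

def percent_gain(baseline: float, faster: float) -> float:
    """Percent improvement of one benchmark score over another."""
    return (faster / baseline - 1.0) * 100.0

# e.g. a 2666MHz run scoring 10000 vs a 3200MHz run scoring 10200:
print(round(percent_gain(10000, 10200), 1))  # -> 2.0
```

A ~2% delta is also why the capacity trade-off matters: two sticks at the officially supported 3200MHz cap you at 32GB, while four sticks at 2666MHz allow 64GB for roughly that 2% cost in render time.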

Over on the GPU tests, the two higher-end chips seem to show a similar performance gain - but the others didn't see any benefit. The resolution of the results is lower here, though, with only 2-3 significant digits reported... so it may just be within the margin of error.


It is no surprise that rendering benefits from high core count - as well as clock speed / per-core performance - and AMD has excelled here. The Ryzen 9 3900X is clearly the fastest CPU for rendering in its price range, and with such a small difference between it and the lower core count 3800X and 3700X, I don't see any reason to consider those. A $100-200 difference, in the context of a full system build that is probably $2000-5000, is nothing compared to the 40-50% performance gain over those models.

With that said, though, there are even faster CPUs available for this sort of workload... so if you have the money to spend, a top-end Intel Core X or AMD Threadripper will render frames more quickly. Those CPUs also support more PCI-Express lanes, so if you plan to use V-Ray Next GPU they will enable more video cards as well as providing more processing power on their own.

Looking for a V-Ray Workstation?

Puget Systems offers a range of powerful and reliable systems that are tailor-made for your unique workflow.

Configure a System!

Labs Consultation Service

Our Labs team is available to provide in-depth hardware recommendations based on your workflow.

Find Out More!
Tags: V-Ray, CPU, Rendering, Performance, AMD, AMD Ryzen 2nd Gen, AMD Ryzen 3rd Gen, AMD Threadripper 2nd Gen, Intel vs AMD, Intel, Intel 9th Gen, Intel X-series, Core X
Petar J Petrovic

In one of the previous articles dealing with TR4, as I understood from reading the article and charts, when using CUDA code one 1950X matches the rendering speed of a Titan XP card. That said, this article is interesting for me, because here you state that "Even the fastest CPU doesn't add as much performance in this mode as a single high-end video card". I was planning on buying a 3900X, which is supposed to be on par with 1950X rendering speeds, thus practically playing the role of another video card. Not sure what to make of it all now.

Posted on 2019-08-11 13:19:17

Hi Petar! I presume this is the article you are referring to?


If so, some things to keep in mind are:

1) That was an older version of the software: V-Ray RT 3.6, while this newer article is looking at performance in V-Ray Next 4.10. It is very likely that GPU performance has been improved upon in the nearly two years since I wrote that previous article.

2) In addition to the opportunity for Chaos Group to have improved V-Ray's use of video cards, this newer article is also focusing on a newer generation of GPUs from NVIDIA compared to what the old article tested.

3) You will still get a noticeable benefit from having a nice CPU like the Ryzen 9 3900X.

4) Depending on your budget and performance goals, though, the Ryzen (and similar "mainstream" processors) might be a limiting factor because of how many GPUs they can support. If you just want to have one or two video cards, then Ryzen is fine... but if you want the option to scale up to three or four, I would advise going with Threadripper instead.

Posted on 2019-08-12 16:43:51
Petar J Petrovic

Thank you for your reply, William. Yes, that was the article in question. I have to say I'm not sure I followed and understood the answer completely. Does that mean CPU render speed diminished with the new V-Ray, or does Ryzen still render at 1x Titan speed, but the new GPU performance upgrades went so far ahead that it can't really scale comparably? In short, how close would you say a 3900X comes to a 1080 Ti or Titan render speed then? If it's 20-30 percent of that, I'm not sure it's worth shelling out the money for a 3900X as opposed to just having a 3700X.

4) Since prices dropped and everyone showed everything they had in their labs, I decided to upgrade after some 7 years or so. My goal is to make an all-around concepting workstation similar to what my colleagues have (fast CPU and 2x GPU) and set myself up for the next 5 years or so. There are those who have 3-4 or 7 GPUs, but those are the top 5 percent in the industry, and this is the predominant config as I understand it. That being said, I'm not planning for more than 2 GPUs until the machine can pay itself off, and after reading your reviews I was planning to "cheat" by having a "third" GPU in the form of a CUDA-ported CPU rendering option.

Lastly, since these are PCIe 4.0 boards, I was hoping I could in theory squeeze in a 3rd GPU at x4 speed. PCIe 4.0 should be 2x faster than PCIe 3.0, as I understood. Following your tests with Ryzen 7, two cards running at x8 speed on PCIe 3.0 have no noticeable speed losses, thus a third card running at x4 speed on PCIe 4.0 shouldn't have considerable speed losses, because it should work at PCIe 3.0 x8 speeds (I hope).

Thank you.

Posted on 2019-08-12 20:22:47

Hmm, I don't have V-Ray Next Benchmark results for the 1080 Ti or Titan Xp so I can't say quite how the Ryzen 9 3900X compares to them... but it provides about 1/3rd of the rendering performance of the RTX 2080 Ti (if that comparison helps). If I had to guess, it is probably about half of what the 1080 Ti can do - but please note that is just a rough estimate off the top of my head. I may be able to go back and test that older GPU at some point, but going forward most of our testing is going to just be on the current RTX series cards.

As for why the comparison between CPU and GPU performance looks worse today than it did two years ago, I'll try to explain that in a better way. Let's assume that two years ago a Threadripper 1950X and a Titan Xp were roughly similar in performance (this is not perfectly true, but let's just call it even for the sake of this explanation). Over the course of two years, newer & faster video cards have come out - which has made the CPU look less effective in comparison - and I believe ChaosGroup has also improved the performance of their V-Ray rendering engine on GPUs (but maybe not on CPUs). Those two factors combined now have the same CPU only providing around half to one third of the performance of a high-end video card today. Hopefully that made more sense :)

Now switching gears to what you said you plan to get, a dual GPU system is a solid way to go - especially if you want to keep from spending a ton of money - but here are some additional things to keep in mind:

- Of the major GPU rendering engines, as far as I am aware, only V-Ray Next GPU can also use the CPU. If you decide you'd rather use OctaneRender, Redshift, etc then those will not utilize the CPU to any substantial degree. If you run other programs along with rendering, I would definitely take those into account when selecting a processor as well.

- Most GPU rendering engines are working on adding support for NVIDIA's RTX technology, and that dedicated hardware in the GPU for ray tracing makes a *huge* impact on performance. OTOY has it in their latest OctaneBench preview already, and both Redshift (Maxon) and V-Ray Next (ChaosGroup) talked about it being added to their rendering engines in the coming months. If at all possible, I strongly recommend getting RTX video cards so that you are future-proofed for when that support is added! Additionally, though, I expect that (in the case of V-Ray) this tech will make the CPU play even less of a part. In OctaneRender it looks like RTX technology can more than double performance of video cards in some cases, but the CPU won't see any of that benefit since it doesn't have the same dedicated ray tracing hardware.

- If you are going to focus on V-Ray, and able to get RTX series video cards, check out our recent article showing some performance oddities on those cards in the current benchmark: https://www.pugetsystems.co...

- 3rd-gen Ryzen may support PCI-E 4.0, but the video cards we have today are all still PCI-E 3.0. That means that if you put a current video card in a PCI-E 4.0 x4 slot it will just communicate with the system at PCI-E 3.0 x4 speed. That may still be alright, but it probably won't get the full performance the card could on a x8 or x16 slot. Building a whole system for three cards also means you need a bigger power supply, more airflow, and possibly larger chassis... plus, at least two of the video cards will end up being right next to each other - which can make cooling even trickier. If you go that way, or even for just dual video cards in my opinion, be sure to get blower-style video cards rather than multi-fan designs. We have articles about why that matters here: https://www.pugetsystems.co...
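The lane arithmetic behind that last point can be sketched quickly. The per-lane figures below are approximate theoretical bandwidth after 128b/130b encoding, for illustration only:

```python
# Approximate usable bandwidth per PCIe lane, in GB/s (after 128b/130b
# encoding overhead); real-world throughput will be somewhat lower.
GBPS_PER_LANE = {"3.0": 0.985, "4.0": 1.969}

def link_bandwidth(gen: str, lanes: int) -> float:
    """Theoretical one-direction bandwidth of a PCIe link, in GB/s."""
    return GBPS_PER_LANE[gen] * lanes

# A PCIe 4.0 x4 slot does match a 3.0 x8 link in raw bandwidth:
print(round(link_bandwidth("4.0", 4), 1))  # -> 7.9
print(round(link_bandwidth("3.0", 8), 1))  # -> 7.9
# ...but a PCIe 3.0 card in that slot negotiates down to 3.0 x4:
print(round(link_bandwidth("3.0", 4), 1))  # -> 3.9
```

This is the catch described above: the link runs at the highest generation both ends support, so a 3.0-only GPU never sees the 4.0 speed of the slot.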

Posted on 2019-08-12 20:43:00
Petar J Petrovic

Thank you for the elaborate reply. It seems I did understand most of it. I read all the articles you linked and many more. In fact, I will be going for 2x 2080 blower cards precisely because of them, as well as Ryzen because of your Photoshop tests. Funny how after spending weeks of looking on the internet, I mostly end up getting the most relevant and practically useful info from your research and tests. Thank you for that, and also thank you for clearing GPU speeds up for me and suggesting that Fractal case in a different article.

As far as the 3900x itself, I still have to mull over that info.
- 3900x renders at the speed of 2950x (this article) which is more or less like 1950x which is "comparable to Titan XP" (your linked article).
- Titan XP and 1080 Ti are within 10% margin (your linked article) meaning they are around 2080 card rendering speed.

So, even if the GPU render development went ahead and is significantly faster, the CPU portion should still be able to crank out that 1080 Ti / Titan XP / 2080 card's original rendering speed, even though it's not comparable to the sped-up new GPU speeds any more. So in very short terms, I was only hoping for confirmation it didn't get slower somehow. The part where the 3900x is around half the speed of a 1080 Ti confuses me, as the 3900x is the same speed as a TR4 chip comparable to a Titan XP card.

Posted on 2019-08-13 00:45:26
La Frite David Sauce Ketchup

So a $30 Wraith Prism for AMD but a $70 Noctua for Intel? wtf

Posted on 2019-08-13 19:37:37

We haven't finished qualifying the other components we will carry with Ryzen processors, including alternative coolers, so we opted to use the cooler that AMD provides and thus seems to trust for sufficient cooling. It certainly was louder than a Noctua, but I did not see temperatures high enough to indicate thermal throttling or any other similar issues. Even looking at other review sites, where comparisons were done between the Wraith Prism and coolers as high-end as a 360mm AIO, there was little to no performance difference (the most I saw in one review was 0.1GHz lower speed with the Wraith than with the 360mm AIO... so around ~2% performance loss, at most).

Posted on 2019-08-13 20:15:48
La Frite David Sauce Ketchup

ah okay

Posted on 2019-08-13 21:34:39
Canal Aldrin

As always, thank you for this very informative testing!

Posted on 2019-08-21 12:48:39

You are very welcome :)

Posted on 2019-08-21 16:28:29