Read this article at https://www.pugetsystems.com/guides/1538

DaVinci Resolve Studio CPU Roundup: AMD Ryzen 3rd Gen, AMD Threadripper 2, Intel 9th Gen, Intel X-series

Written on July 23, 2019 by Matt Bach


When configuring a workstation for DaVinci Resolve, most people focus very heavily on the GPU since Resolve is known industry-wide for its GPU-acceleration capabilities. Even so, your choice of CPU is still an extremely important factor and can make a big impact on performance in many areas - especially since tasks like generating optimized media or doing very basic grades utilize the CPU more than the GPU.

In our Photoshop and After Effects testing we have already seen that AMD has almost entirely closed the gap with Intel when it comes to lightly threaded applications, and trades blows with Intel in Premiere Pro depending on what codec you are working with. Resolve tends to behave very differently than Adobe applications, however, so we are very interested to see how these new AMD Ryzen CPUs (which feature both an increase in core count and IPC improvements) perform.

AMD Ryzen 3rd Gen DaVinci Resolve Studio Performance

In this article, we will be looking at exactly how well the new Ryzen 3600, 3700X, 3800X, and 3900X perform in DaVinci Resolve Studio 16 beta4. Since we expect these CPUs to shake up the market quite a bit, we also took this opportunity to do a full CPU roundup. Not only will we include results for a few of the previous generation Ryzen CPUs, but also the latest AMD Threadripper, Intel 9th Gen, and Intel X-series CPUs. And for good measure, we will throw in a 14-core iMac Pro and a current (for the moment) 2013 Mac Pro 12-core as well.

If you would like to skip over our test setup and benchmark sections, feel free to jump right to the Conclusion.

Looking for a DaVinci Resolve Workstation?

Puget Systems offers a range of powerful and reliable systems that are tailor-made for your unique workflow.

Configure a System!

Labs Consultation Service

Our Labs team is available to provide in-depth hardware recommendations based on your workflow.

Find Out More!

Test Setup & Methodology

Listed below are the specifications of the systems we will be using for our testing:

Shared PC Hardware/Software:

  • Video Card: 1-2x NVIDIA Titan RTX 24GB
  • Hard Drive: Samsung 960 Pro 1TB
  • Software: Windows 10 Pro 64-bit (version 1903), DaVinci Resolve Studio 16 Beta4, Puget Systems D.R. Benchmark V0.5 BETA

Mac Test Platforms:

  • iMac Pro: 14-core Intel Xeon W, 64GB 2666MHz DDR4 ECC, Radeon Pro Vega 64 16GB
  • Mac Pro (2013): 12-core 2.7GHz, 64GB 1866MHz DDR3 ECC, Dual AMD FirePro D700 6GB, 1TB PCIe-based SSD

*All the latest drivers, OS updates, BIOS, and firmware applied as of July 2nd, 2019

Note that while most of our PC test platforms are using DDR4-2666 memory, we did switch up to DDR4-3000 for the AMD Ryzen platform. AMD CPUs can be more sensitive to RAM speed than Intel CPUs, although in our Does RAM speed affect video editing performance? testing, we found that the new Ryzen CPUs only saw modest performance gains in video editing applications when going from DDR4-2666 to even DDR4-3600 RAM.

For each platform, we used the maximum amount of RAM that is both officially supported and actually available at the frequency we tested. This does mean that the Ryzen platform ended up with only 64GB of RAM while the other platforms had 128GB, but since our DaVinci Resolve benchmark doesn't need more than 32GB of RAM to run, this does not actually affect performance at all.

However, keep in mind that this is technically overclocking since the AMD Ryzen 3rd Gen CPUs support different RAM speeds depending on how many sticks you use and whether they are single or dual rank:

Ryzen 3rd Gen supported RAM:

  • 2x DIMM: DDR4-3200
  • 4x single rank DIMM: DDR4-2933
  • 4x dual rank DIMM: DDR4-2667

Since we are using four sticks of dual rank RAM (almost every 16GB module available will be dual rank), we technically should limit our RAM speed to DDR4-2666 if we wanted to stay fully in spec. However, since many end users may end up using a RAM configuration that supports higher speeds, we decided to do our testing with DDR4-3000, which is right in the middle of what AMD supports.
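For reference, AMD's published limits from the list above can be captured in a small lookup (a sketch using only the figures quoted above; the function name is ours):

```python
def max_supported_ddr4(dimms: int, dual_rank: bool) -> int:
    """Max officially supported DDR4 speed (MT/s) for Ryzen 3rd Gen,
    per the AMD figures listed above."""
    if dimms == 2:
        return 3200
    if dimms == 4:
        return 2933 if not dual_rank else 2667
    raise ValueError("unsupported DIMM configuration")

# Our test config: four dual-rank 16GB sticks -> officially DDR4-2667,
# so running at DDR4-3000 is technically a mild overclock.
print(max_supported_ddr4(4, dual_rank=True))  # 2667
```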

The benchmark we will be using is the latest release of our upcoming DaVinci Resolve Studio benchmark. Full details on the benchmark, along with a link to download (coming soon!) and run it yourself, are available at:

Benchmark Results

While our benchmark presents various scores based on the performance of each test, we also wanted to provide the individual results. If there is a specific task that is a hindrance to your workflow, or a specific codec you use, examining the raw results is going to be much more applicable than our overall scores. Feel free to skip to the next section for our analysis of these results if you would rather get a wider view of how each CPU performs in DaVinci Resolve Studio.

Single GPU - Color Grading Benchmark Analysis

While we have both 4K and 8K scores in the two charts above, for now we are primarily going to talk about the performance with 4K media. Not too many people are working with 8K media quite yet, and the relative performance between the different CPUs really isn't all that much different with a single Titan RTX. So, in order to keep our article from being longer than necessary, we are going to focus on the 4K results.

Looking at the 4K Overall Score with a single Titan RTX 24GB GPU, there are a couple of very interesting things to point out. First of all, if you are looking for the best performance, the Intel X-series processors are clearly the top dog here. The Intel Core i9 9920X in particular is a great balance of price and high-end performance, and while the more expensive models are certainly faster, the performance benefit gets smaller and smaller as you get to the higher-end models.

Below the i9 X-series CPUs, there is a bit of a traffic jam where a large portion of the CPUs we tested sit right around the 100 score mark. However, what is really interesting is what we find when we look beyond the overall score and break down the results according to the type of test:

In the four charts above, we are again separating out the 4K and 8K results, but also pulling out the tests that are more CPU-heavy (optimized media and basic grade tests) from the ones that are more GPU-heavy (OpenFX and Temporal NR tests).

We did this because there is a very interesting dynamic between the new AMD Ryzen CPUs and the Intel 9th Gen CPUs when we look at the individual tasks. Starting with the CPU-heavy tasks, the AMD Ryzen CPUs are simply better than Intel, and it isn't by a small amount either. At the top-end of each product line (Ryzen 3900X vs Core i9 9900K), AMD is around 15% faster. As you drop down to the mid/low-end models, however, this increases to around a 35% performance lead for AMD.

Once the GPU is a more significant part of the performance equation, things change slightly. Here, the Intel Core i9 9900K is actually ~8% faster than the AMD Ryzen 9 3900X, although on the other models AMD continues to maintain a small ~5% lead.

If you work with 8K footage, this largely still holds true. The only exception is that AMD doesn't have quite as large of a lead for the CPU-heavy tasks, while their lead with the mid-range models on GPU-heavy tasks actually increases to ~15%.

Dual GPU - Color Grading Benchmark Analysis

When we add a second Titan RTX to the mix, the results start to shift a bit. Here, the relative performance with the new AMD Ryzen CPUs is even better than it was with a single GPU, although they still can't catch up with most of the more expensive Intel X-series CPUs.

Once again, however, we really want to look at the breakdown by task in order to determine which mainstream CPU line (AMD Ryzen or Intel 9th Gen) is the better option.

Interestingly, with two NVIDIA Titan RTX video cards the relative performance between those two CPU lines changes quite a bit between 4K and 8K media, so we will address both resolutions this time around.

Starting with 4K media, the AMD Ryzen CPUs are again quite a bit faster than the Intel 9th Gen CPUs. For the CPU-heavy tasks, the difference is about 30-40% in favor of AMD on the low/mid-range models, and drops a bit to (only) a 20% advantage for the AMD Ryzen 9 3900X over the Intel Core i9 9900K. On the GPU-heavy tasks, however, there is very little difference between AMD and Intel, although AMD keeps a very slight lead in some cases.

Moving up to 8K media, the AMD Ryzen CPUs continue to maintain a lead over the Intel 9th Gen on CPU-heavy tasks, but it is only by about 15% - which is still impressive, just not quite as much as the other tests. Looking at the GPU-heavy tasks, the lead is about the same, although it is slightly lower (closer to 10%) when comparing the Ryzen 9 3900X to the Core i9 9900K.

Fusion Benchmark Analysis

DaVinci Resolve Studio AMD Ryzen 3rd generation Fusion Benchmark Performance

Before we discuss performance in Fusion, we do want to point out that we will only look at performance with a single GPU. For whatever reason, we consistently see lower performance in the Fusion tab with dual GPUs than we do with just one. Fusion is still fairly new to Resolve, and it looks like there are simply a few bugs that still need to be worked out when it comes to GPU acceleration. We included the Fusion scores with dual GPUs in our raw benchmark data, but since it really doesn't make sense to use two GPUs for Fusion (for now at least), we decided not to spend time analyzing those results.

Overall, in Fusion the Intel 9th Gen CPUs have a slight lead, but it really isn't by much. The Core i9 9900K is 5% faster than the Ryzen 9 3900X, the i7 9700K is on par with the Ryzen 7 3800X, and the i5 9600K is only ~3% faster than the Ryzen 5 3600. In the real world, this means that you really won't notice much of a difference between the AMD Ryzen and Intel 9th Gen CPUs.

Are the Ryzen 3rd generation CPUs good for DaVinci Resolve Studio?

If you are looking for an overall winner between the new AMD Ryzen CPUs and the Intel 9th Gen CPUs, AMD is clearly the better choice for DaVinci Resolve - and often by a very large margin. Neither product line can keep up with Intel's X-series processors, but if you are looking for a more budget-friendly CPU for Resolve, the AMD Ryzen 3rd generation CPUs are an obvious choice.

Getting into the details, at the $400 and below price range (AMD Ryzen 5/7 and Intel Core i5/i7), AMD is simply better. The difference is more pronounced on tasks that are not as GPU-intensive like creating optimized media or doing very basic grades, but it is very significant - AMD is on average around 15-40% faster than the equivalent Intel processor. Once the GPU becomes a larger part of the picture, the difference is less stark, but AMD still maintains a 5-20% lead.

Choosing between the AMD Ryzen 9 3900X and the Intel Core i9 9900K is tougher, however. The 3900X still maintains a very healthy 15-25% performance lead when the GPU isn't used much, but the i9 9900K takes a small ~8% lead on GPU-heavy tasks when using a single high-end GPU (which should also apply if you use two mid-range GPUs instead). This is still very much an overall win for AMD, but if you tend to use a lot of OpenFX or noise reduction, the 9900K may be a slightly better choice.

As always, keep in mind that performance is only part of the question - there are a number of other considerations that you may want to keep in mind:

On the Intel side, the Z390 (and X299) platform has been available for quite some time, which means that most of the bugs and issues have been worked out. In our experience over recent years, Intel also simply tends to be more stable overall than AMD and is the only way to get Thunderbolt support that actually works. Even then, Thunderbolt can be problematic on PC, and there are only a few motherboard brands (like Gigabyte) where we have had it actually work properly.

For AMD, the X570 platform is very new and there will be a period of time where bugs will need to be ironed out. However, AMD is much better about allowing you to use newer CPUs in older motherboards, so if upgrading your CPU is something you will likely do in the next few years, AMD is the stronger choice. In addition, X570 is currently the only platform with support for PCI-E 4.0. This won't directly affect performance in most cases, but it will open up the option to use insanely fast storage drives as they become available.

Keep in mind that the benchmark results in this article are strictly for DaVinci Resolve Studio. If your workflow includes other software packages (we have articles for Photoshop, After Effects, Premiere Pro, etc.), you need to consider how the processor will perform in all those applications. Be sure to check our list of Hardware Articles for the latest information on how these CPUs perform with a variety of software packages.

Tags: DaVinci Resolve, AMD Ryzen 3rd Gen, AMD Threadripper 2nd Gen, Intel 9th Gen, Intel X-series, Intel vs AMD
Batt Mach

So as of right now, the mainstream AMD CPUs are a better choice for DaVinci Resolve compared to the mainstream Intel CPUs. It should be interesting to see what 3rd gen Threadripper will look like compared to Intel's X processors.

Posted on 2019-07-23 23:52:07

I'm very curious as well. AMD does some really funny things with the "WX" Threadripper models that honestly make a mess of performance unless the application is really, really good at using a high number of CPU cores. If AMD doesn't fix those issues (or makes them worse as they continue the Core Wars), I think they will be in some trouble. It would take some really significant improvements for AMD to catch up with the Intel X-series in Resolve, however.

Posted on 2019-07-23 23:58:44
Misha Engel

The benchmark results are unreadable.

These results from the graphs make no sense to me when I look at this https://www.pugetsystems.com/labs/articles/DaVinci-Resolve-15-CPU-Roundup-Intel-vs-AMD-vs-Mac-1310/, which also contained very strange results.

I would love to see a test with Radeon VII, which turned out to be a pretty good card for the money https://www.pugetsystems.com/labs/articles/DaVinci-Resolve-15-AMD-Radeon-VII-16GB-Performance-1382/ with crappy early drivers which have improved a lot lately.

NVidia has/had driver issues with Ryzen 3000. I don't know what the current status is, but since your drivers are from the 2nd of July or before, and Ryzen 3000 launched on the 7th of July, I'm 100% sure that the drivers you used are the ones with the issues.

"WX" Threadripper doesn't make a mess; Windows makes a mess when it sees more than 2 NUMA nodes (Linux works great with "WX").
AMD already fixed these issues with Zen2 - it's UMA all the way up to 64 cores / 128 threads on EPYC.

I guess we have to wait for version 2, to see more realistic (and readable) results than presented in this... beta test suite with beta software.

Blackmagic Design is also working on a new benchmark.

Posted on 2019-07-24 00:52:39

If you are talking about the raw results, we have a bug on the website that we are fixing now. The workaround is to right-click on the pop-up image and select "open in new tab", then remove the "&width=1200&height=800" from the URL - then you will get the full res image. Annoying, I know, but we're hoping to have that fixed in the next week or so.

As far as the Radeon VII, we probably won't include that card in any future testing. Maybe one more time when we test the NVIDIA Super and AMD 5700 cards, but that would be the last time. That is a really weird card and is pretty much EOL - the theory is that it was just a placeholder until AMD got their new cards out. You can still find it, but not in the kinds of quantities that we would need to be able to offer it, and I fully expect it to disappear completely any time now. We also still have stability issues with that card that AMD never successfully fixed. To us, it is kind of like the Intel Core i9 9990XE - it technically exists, but not to the level that we would use it in our systems.

We definitely had some stability issues with the Ryzen platform, but that honestly isn't too uncommon with launch day (or pre-launch) platforms, which is why we didn't bring it up in any of our articles. That happens all the time with Intel as well, and is just something you have to deal with on anything this new. We are constantly re-testing and publishing new results, however, so it will be interesting to see if/how things change in the future. In our experience, stability definitely improves as a platform matures, but performance doesn't usually get more than a few percent better. This could 100% be the exception, but only time will tell. That is one of the really hard things about product launches - people really want to know how things perform, but since benchmarking takes a (long) time, our results are never going to be using the absolute latest drivers/BIOS/firmware.

One last thing: we call our benchmarks "beta" not because we don't trust the results, but because they may not run successfully on other people's systems. Especially with older platforms that we don't actively test, there are often weird things that come up. It is often little things like the language not being set to English that can break them, but it is impossible to get them perfect without feedback from other people. So, until a good number of people run our benchmark and we are confident that it should run on any system with good enough specs, we are going to keep them as a "beta" so that people understand that it is not a 100% polished product yet.

Posted on 2019-07-24 16:59:46
Misha Engel

You have to turn HBCC on with the Radeon VII (we have 3 of them and it works flawlessly, got the tip from someone on the BMD forums before we made our purchase), we shoot BRAW most of the time and sometimes ProRes and RED 7k 4:1 24..25 fps.

Resolve scales almost linearly up to 16 cores/32 threads, so it is not that strange that a CPU with more cores performs worse, since the all-core out-of-the-box freq. is lower. A lot of current RED 8k users use an overclocked 18 core which is able to decode 8k.R3D 5:1 25 fps in realtime (a 16 or 18 core Intel at 4.5 GHz is enough for realtime decode). Since there is no memory bandwidth limit, the R9-3900x should perform equal (+/-5%) to the i9-9940x at stock settings.
A good benchmarker always questions his/her own results, asks whether they make sense or not, and dives into them when strange things happen.
(Wendell with the WX Windows vs. Linux; Steve Burke, all the time; the same is valid for Steven Walton, Rob Williams, David Kanter, Igor Wallossek, etc...).
The results in this review are so different from the expected results that it would have been better not to publish them at all (as it stands, they add no value).

Posted on 2019-07-24 20:20:03

Where are you getting the information that Resolve scaling is near perfect up to 16 cores? CPU core scaling that good is really, really hard to do outside of things like ray tracing, especially in an application that mixes the CPU and GPU. Even highly optimized machine learning algorithms are often only about 98% efficient, and the dropoff in core efficiency at that level is steeper than you would think. If you want to read up about it, we have an older post about Amdahl's Law that goes over the math: https://www.pugetsystems.co...
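The Amdahl's Law math behind that dropoff fits in a few lines (a minimal sketch; the 98% figure is illustrative, matching the parallel efficiency mentioned above):

```python
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Amdahl's Law: best-case speedup when only part of a workload
    can run in parallel."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# Even a workload that is 98% parallel falls well short of linear scaling:
for n in (4, 8, 16):
    print(f"{n} cores -> {amdahl_speedup(0.98, n):.2f}x speedup")
# At 16 cores the ideal speedup is only ~12.3x, and efficiency keeps
# dropping as more cores are added.
```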

In all the testing we have done in Resolve, we tend to only see somewhere around a 10% performance gain at most every time you add a pair of CPU cores, and that is only when you are not doing things that utilize the GPU heavily (OpenFX, TNR most notably). If you have a link to someone who has benchmark results showing differently, I would be very interested to see it.

We definitely always question our results, which is one of the reasons we keep an open comment section - there is always the chance we miss something that someone else can let us know about. The AMD Ryzen CPUs did a bit better than we expected, but that was really the only "surprise" in this testing, and that is completely normal since it is a new product with a ton of improvements made. Also, keep in mind that core count is only a tiny part of the picture; the underlying architecture of the CPU makes a huge difference, which is why you can't compare Intel and AMD based on specs alone - or even different CPU models from the same brand but different generations.

Typically, when something is really wrong we don't publish the results but take them to AMD/Intel/NVIDIA/Adobe/Blackmagic/whoever and try to get the problem resolved. Sometimes we do end up posting the results anyway if there doesn't appear to be a fix coming in the near-term, but we do that since that is the performance people would actually see with that product if they were to purchase and use it. If an official fix does ever happen, we typically publish a follow-up article with the updated results.

Posted on 2019-07-24 20:37:46
Misha Engel



Since this is a CPU performance test, look at the heaviest codec for the CPU - 8k.R3D 9:1 25 fps - with the least GPU load (full-res premium).

8 core R7 2700x 13 fps
12 core TR 2920x 19 fps
16 core TR 2950x 25 fps

8 core i7 9800x 12 fps
10 core i9 9900x 15 fps
12 core i9 9920x 19 fps
14 core i9 9940x 22 fps
16 core i9 9960x 25 fps

(Almost) a perfect CPU scaling and they make total sense.
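Dividing each of those fps figures by the core count makes the near-linear scaling explicit (a quick arithmetic check using only the numbers quoted above):

```python
# fps results quoted above (8k.R3D 9:1 25fps, full-res premium)
results = {
    "R7 2700X": (8, 13),  "TR 2920X": (12, 19), "TR 2950X": (16, 25),
    "i7 9800X": (8, 12),  "i9 9900X": (10, 15), "i9 9920X": (12, 19),
    "i9 9940X": (14, 22), "i9 9960X": (16, 25),
}
for cpu, (cores, fps) in results.items():
    print(f"{cpu}: {fps / cores:.2f} fps per core")
# Every CPU lands between ~1.50 and ~1.63 fps per core - close to
# constant per-core throughput, i.e. near-linear scaling.
```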

It also makes sense that the WX performs out of line with its number of cores, because Windows can't handle more than 2 NUMA nodes.

In the current 8k overall benchmark with one Titan, the 16 core TR 2950x scores 91.3 and the 8 core i7-9800x scores 105.1, whereas in the previous test it was totally different.

8k.R3D 9:1 25 fps source with 1 RTX 2080ti with the security leaks driver, the fast one.

16 core TR 2950x: 25 fps(cpu/source bound), 19 fps(gpu bound) and 11 fps(gpu bound)
8 core i7 9800x : 12 fps(cpu bound), 12 fps(cpu bound) and 10 fps(cpu bound/gpu bound)

Posted on 2019-07-25 01:48:06

There are a few things I think you are overlooking. First, RED footage is not a CPU-bound codec, so you are introducing the GPU into the mix a lot more than you would with something like ProRes. The debayering process for RED RAW is entirely GPU-accelerated, so it is just the decoding that is done on the CPU (and even that is being moved to the GPU in one of the next updates). That doesn't mean the CPU doesn't matter, but straight CPU core scaling will never be as good with that codec as it is with something like ProRes, and things like architecture and other factors like cache can make a huge difference.

Next, the media is different between that benchmark and this one. We only used 25FPS media for our old RED 8K testing, and are using 59.94FPS media in our newer benchmarks. Media that is twice the FPS is actually more than twice as hard to process. I don't know how much that is a factor in this case, however, because the two benchmarks you are looking at are testing two different things.

Our previous benchmarks looked at FPS performance in the Color tab, while we are now testing export performance. There are a number of reasons why we made that shift, but the short of it is that testing the FPS in the Color tab is simply a pain to do, and we have had more requests from customers for performance when rendering than for live playback. So, we made the shift in order to fulfill the needs and wants of our customers. But that does mean that things like encoding are being introduced into the mix, which is going to change the results.

Also, performance simply changes over time. BIOS updates, driver updates, Windows updates, etc. all contribute to how the system performs. That usually doesn't make performance worse with higher-end CPUs (although I wouldn't put it past Microsoft to break something), but it definitely can make performance better with lower-end CPUs, which in turn makes the higher-end SKUs look worse in comparison.

Like I stated earlier, it is completely possible our test had a glitch or something, or that we straight up messed up. That is why we have comments open, but it is also one of the reasons why we are working on making our benchmarks available to the public. It is much harder for benchmark results to be wrong when there are multiple independent parties doing the testing. We love the work we do, but the more data out there, the better decisions people can make when choosing hardware for their workstation.

Nothing makes me doubt the results from this test though. Yes, they don't line up perfectly with previous testing, but that is to be expected since we are testing different media, with different tests, for a different task, with different software/updates.

Posted on 2019-07-25 03:42:41
Misha Engel

The scaling is linear with the 8k.R3D; it scales perfectly in your own Resolve 15 CPU roundup!!!

Try an 8k.R3D 5:1 25 fps; you will find no single CPU at stock settings that can decrypt, decode, and decompress it in realtime at full-res premium, to this very day.

DaVinci Resolve hasn't even announced NVidia GPU decode; no current NLE supports it at the moment - the only NLE that has announced it officially is Assimilate Scratch.
All de-bayering of any supported RAW format in Resolve is handled by the GPU. This is a CPU roundup.
All effects, whenever possible, are handled by the GPU in DaVinci Resolve Studio.

No one shoots 8k.R3D 59.94 fps other than some youtubers like Linus. Linus shoots 22:1, which is not very CPU intensive - an i7-8700k is fast enough to decode it in realtime. It is also very light on the GPU, since with less data to de-bayer the GPU has to do less work.

The high-end movie/TV show stuff is shot when possible on 8k 5:1, 7k 4:1, or 6k 3:1 (24..30 fps); otherwise the image quality is crap compared to Venice 6k X-OCN-XT or ARRI RAW from an Alexa LF, Alexa 65, or even an Alexa Mini (compression still kills the quality).

This test is the total opposite from previous cpu roundups concerning Resolve.

Are all the older roundups flawed? Is this one flawed? Are both flawed? Or is there some other commercial reason to publish these kinds of results? Looking forward to the official Blackmagic Design Resolve benchmark.

One other thing, why are you not using blower cards with dual GPU setups https://www.pugetsystems.com/blog/2019/01/11/NVIDIA-RTX-Graphics-Card-Cooling-Issues-1326/ .

Posted on 2019-07-25 11:13:16
Batt Mach

I agree. The WX models are all over the place. The 3950x could also be something to look forward to because it's 16c/32t at 4.7 GHz. Should be interesting to see how it handles being on the AM4 chipset.

Posted on 2019-07-24 01:41:40

Let's wait for SIGGRAPH on July 28th - AMD will demonstrate how PCIe Gen4 helps in Pro apps, and let's see if new PCIe Gen4 Vega/Radeon Pro GPUs will be announced; then you can decide between the Z390 and X570 platforms.
Also, the new TR won't suffer like the 2970WX/2990WX and will scale better. PCWorld tested per-thread performance and the 3900X has better thread performance than the 9900K at any thread count, so the new TR with a 250W TDP will be monsters - but also much more expensive, because they won't have real competition, not to mention the new TR platform will be much more expensive than X570 :-(.
So if you can wait - wait and see in 1~2 months and choose the best. But if you can't, and it's for the work PC that makes you money, then buy the best you can that is also the most stable and bug free.

Posted on 2019-07-24 06:55:49

We'll be at SIGGRAPH and have meetings lined up with AMD - PCIe Gen4 is definitely something I will be asking them about. I have to say that I am skeptical that PCIe Gen4 will actually be useful in the real-world though. We went through the same thing with Intel with Optane and it was a lot of "Look how amazing this is!", then we tested it ourselves and didn't see any performance benefit. I would be very happy to be proved wrong, however, since anything that lets creatives work faster is a good thing.

I'm also hopeful about the new Threadripper processors. Honestly, I think both Intel and AMD are going the wrong direction with the whole "MORE CORES" route. I guess it works for marketing purposes, but very, very few applications actually benefit from that kind of core count, and they often end up doing things to achieve it that result in worse performance for most people. Only time will tell, but I am definitely looking forward to testing those CPUs when they come out.

Posted on 2019-07-24 17:07:13
Eric Marshall

"MORE CORES" is actually by far the most cost-efficient route to scaling up performance, density, and compute-efficiency for the most lucrative market: enterprise/cloud. In a hyper-converged data center that makes its dollars on maximizing the number of VMs it can host from a rack, more cores is a way better scaling strategy than throwing enormous increases in transistor count at the individual cores for minor IPC gains.

The CPUs that wind up in content creation machines are derivatives of enterprise and mobile market computing. While the direction may not be ideal for performance uplift in individual applications, there's really no other direction they can go until some revolution in fab comes along that makes room for enormous transistor density improvements. The "market" for high end content creation computers is a tiny, tiny drop in the bucket compared to the market for cloud compute. Intel and AMD are going the RIGHT direction for where the most opportunity in compute is.

Seems like when CPU manufacturers focus on the desktop, we get Netburst and Bulldozer. When CPU manufacturers focus on enterprise, we get much better results on the desktop as a consequence. It's counter-intuitive but has proven to be the case over and over. I'd rather AMD and Intel stick to enterprise compute principles.

The issue is not the CPU manufacturers going the wrong direction. The issue is the software developers spending all their time making new features, cloud integration, and interface changes that nobody asked for, while spending almost no time developing ways to utilize the way the market is expanding compute resources.


New interface/bus speeds almost always seem to come in advance of the actual need. Each generation of SATA, each generation of PCIe, DDR, etc., all started off seemingly pointless - much like building the ballpark with no team to play in. But as history has shown, we grow into (and eventually out of) each and every doubling of interface bandwidth. PCIe Gen4 on AM4 marks the landing of this technology into consumer space. It may be pointless now, but it won't be forever.

Posted on 2019-07-28 04:47:39
Eric Marshall

I believe the performance "issues" with current generation Threadrippers are related to memory access. It's a pair of dual channel memory controllers being shared as separate non-uniform address spaces. We would see similar performance oddities on a multi-socket system with separate memory spaces. The WX series compounds this issue further with compute resources that "share" those same dual channel resources.

Epyc Rome and 3rd gen TR unify the memory space with a single memory controller on a central I/O die. The oddities of existing "WX" CPUs should go away with this change. I expect them to behave more like a true quad-channel memory subsystem, rather than like a multi-socket NUMA system.

Posted on 2019-07-28 04:08:54
White Matrix

Are you going to also test Radeon 5000 series GPUs too?

Posted on 2019-07-24 15:04:50

These moderately priced cards aren't the emphasis of Puget's higher-performance systems, but I would be very interested in seeing their testing applied to these cards (and comparisons to others currently available) as a hint of how AMD's newer architecture may shape up.

Posted on 2019-07-24 16:13:14

We are planning on testing at least the 5700 XT in a few weeks (alongside the new NVIDIA SUPER cards). I don't think we'll do the 5700 as well, however, since as tomdarch noted, that is starting to get a little too low-end for what we would offer our customers.

Posted on 2019-07-24 17:08:28

Looking forward to the 5700XT results. Other various benchmarks I've seen (not necessarily Resolve) show very interesting results, mostly beating the 2070 and occasionally trading blows with 2080-level cards. Partner cards will be the most interesting, I think.

Posted on 2019-07-24 18:42:52
David Varela

we all want a GPU that can beat the RTX2080ti but with RTX2070 pricing :)

Posted on 2019-07-26 12:19:54
Misha Engel

Radeon VII is close to the price of the RTX 2070 https://www.pugetsystems.com/labs/articles/DaVinci-Resolve-15-AMD-Radeon-VII-16GB-Performance-1382/ and you won't run out of memory with 5K and up.

Posted on 2019-07-28 09:49:06

Just be aware that the Radeon VII is EOL now - you can still find it, but if you want the ability to add a second card in the future you may have a hard time doing so. Only a small number of those cards were actually made and sold, so finding a used one in the future may end up being a bit tough.

There are also a number of unresolved bugs and stability issues with that card that were serious enough that we never offered it to our customers. The performance was terrific in Resolve, but a fast card is no good if it is going to bluescreen on you. Not saying you will run into problems if you go with the Radeon VII, but I personally wouldn't recommend using it if I had a choice.

Posted on 2019-07-29 02:47:30
Misha Engel

Yes, it's EOL, same as the GTX 1080 Ti (which was value for money). In your own test https://www.pugetsystems.com/labs/articles/DaVinci-Resolve-15-AMD-Radeon-VII-16GB-Performance-1382/ you proved that a second card doesn't bring much. With the current drivers the Radeon VII is rock solid (just check the BMD forum) and never hits the out-of-memory NVIDIA bug. When you want a BSOD in Windows, use an NVIDIA card in combination with Ryzen 3000 (not the first time NVIDIA used this trick) - success guaranteed with the current Studio drivers.

Turn HBCC on and use the newest drivers with the Radeon VII and you're good to go. The biggest disadvantage of the Radeon VII is its huge value for money with content creation software; not many system builders like this, and they prefer RTX cards with lower value for money. The same is valid for CPUs: Threadripper is a big success among technically savvy users, while system builders hate it because of its value for money.

When you use CUDA-only software, NVIDIA is the only choice; when you don't, AMD gives you more value for money.
When you edit/grade/post-process movie-quality 6K R3D 3:1 24 fps up to 8K R3D 5:1 24 fps, AMD will give you a solution in September 2019 to do this in real time at full-res premium for a price point of around $1,450 (Radeon VII + R9 3950X). No way NVIDIA can match this with its $2,500 for the GPU alone (the RTX 2080 Ti's 11GB is not enough for serious post-processing above 5K - "out of memory").

Posted on 2019-07-29 11:28:56
James Nielson

On this note, do you see much difference in GPU performance when the variable is the CPU brand? By this I mean, do the CPUs "favor" one GPU brand or the other?

Posted on 2019-08-20 01:02:32

Not really. People like to make those kinds of claims on forums, but we've never seen it actually happen in real life. If you have an AMD and an Intel CPU that perform the same in Resolve (or any other app) and throw in an NVIDIA or AMD GPU, they should perform pretty much the same. The only exception right now would be the Radeon RX 5700 (XT), since it supports PCI-E 4.0, which is only available on the new AMD X570 chipset.

Posted on 2019-08-20 16:50:26

Thank you for the work you do and for putting this out there. Fun time to be building a new system. This has confirmed that, for my purposes, the 3700X is a better value than the 3800X. I'll save the $70 and take the lower TDP and barely give up any performance.

Posted on 2019-07-24 18:50:43
Mark Brandt

Matt Bach - you earned yourself an ice cream cone! Thanks for sharing this information with the community, as always. Surprisingly, there doesn't seem to be that much of a delta between the 3700X and 3900X for 4k media (only 3-7%, if I'm reading these charts correctly). That's shocking considering the 50% greater core/thread count.

Posted on 2019-07-24 19:56:20

It does seem odd at first glance, but that kind of performance gain with more cores is a trend not just for the AMD Ryzen CPUs, but the Intel CPUs as well. More cores can still help, but just like going from the 3700X/3800X to the 3900X, each time you add a handful of cores to the Intel X-series CPUs, the performance gain is only about 10% at most.

Given how much the performance changed between the Intel 9th Gen processors (where increased frequency is a big selling point), I think it is pretty safe to say that core frequency is more important than core count for many of these tasks in Resolve. Once you get to the mid-range CPUs, however, the Turbo speeds really don't change much, so the only way to get more performance is by adding more cores.

Posted on 2019-07-24 20:21:08
Misha Engel

Test Media (59.94 FPS) - soap opera YouTube media?

4K ProRes 422 16-bit, really?

4K ProRes 4444 16-bit, really?

8K H.265 100mbps, 4K H.264 50mbps, 4K H.264 100mbps 10-bit.
4K RED and 8K RED - what is the compression ratio (Linus' 22:1 material?).

Is this a test for YouTube influencers? How many of those files come from a smartphone?
No movie/doc/TV show is shot at 59.94 fps; only some YouTube videos and soap operas (interlaced) are shot at 59.94 fps.

Posted on 2019-07-25 13:51:21
Dawid Verwey

Your raw result images are very small and are thus unreadable. I would love to see them.

Posted on 2019-07-27 12:42:05

Sorry, that is a bug from when we moved to the new gallery view. It should be getting fixed in the next week or so! As a work-around, you can right-click on the image, open in a new tab, then strip out anything like "&width=1200&height=800" from the URL. The only thing needed in there is the "ID=____" and you will get the full res image.
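If you'd rather script that clean-up than edit URLs by hand, here is a rough sketch in Python (the example URL below is made up - the only part that actually matters is keeping the "ID" parameter while dropping the resize parameters):

```python
from urllib.parse import urlparse, parse_qs, urlencode, urlunparse

def strip_resize_params(url):
    """Remove resizing parameters (width/height) from an image URL,
    keeping everything else - notably the ID - intact."""
    parts = urlparse(url)
    query = parse_qs(parts.query)
    # Drop only the parameters that force the downscaled version.
    for key in ("width", "height"):
        query.pop(key, None)
    return urlunparse(parts._replace(query=urlencode(query, doseq=True)))

# Hypothetical example URL - the real gallery URLs may differ:
print(strip_resize_params("https://example.com/img?ID=123&width=1200&height=800"))
# -> https://example.com/img?ID=123
```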

Posted on 2019-07-29 02:44:10

Interesting performance report here; looking forward to Puget tests as hardware becomes available and bugs are fixed.
"AMD BEATS EVERYTHING! (Including themselves...) AMD Ryzen 3700x & 3900X Content Creation Performance"

Posted on 2019-08-05 19:55:03

Resolve, Premiere & Handbrake TESTED: AMD RX 5700 vs Nvidia RTX 2080 - WHY IS THIS SO FAST?!

Posted on 2019-08-05 20:24:33
David Varela

Yeah. It's interesting because according to this video it's "buy AMD," while Puget says "buy the NVIDIA RTX 2060 Super, since the RX 5700 drivers are still a work in progress."

Posted on 2019-08-26 08:54:20

Lookout...Misha's got the vapors again.

Posted on 2019-08-08 15:47:22


DaVinci Resolve 16.0 final release now available.


"Blackmagic Design Announces DaVinci Resolve 16.1"

Posted on 2019-08-08 20:24:54

Noticed that earlier today! Good timing too, I was just starting benchmarks for the new NVIDIA Super and AMD Radeon 5700XT cards.

Posted on 2019-08-08 20:29:52
Umano Teodori

Great! And thanks a lot for the work you do - it is invaluable for us.
May I ask if it is possible this time to test Ryzen with faster RAM, as close as possible to the 3733 sweet spot?
The fastest for AMD I have found are the G.Skill Neo 3600 CL14 2x16GB.
I believe, given your audience, that a consumer CPU/platform makes sense in an ITX board, and thankfully you showed us that more RAM does not mean more performance :)
Thank you!

Posted on 2019-08-14 17:42:24
Umano Teodori

I am new to video grading but I want to start with DaVinci; I don't care about the increased complexity compared to the Adobe suite. Considering I will do videomaker jobs, I think the new 3950X and Radeon VII will do a fine job on an ITX board with liquid cooling and Thunderbolt, of course. The system has to be as portable as possible; a Cerberus microATX case with Threadripper (Zen 2) could be a good improvement, but at my level it would be a waste of performance. It will take at least a year to justify that hardware, and by that time DDR5 will be out. Thank you for making the only benchmarks that matter to me.

Posted on 2019-08-10 11:16:02

I'm considering a similar setup using Resolve for 4K and HDR editing, and am waiting for more Puget AMD test results.
The Radeon VII is no longer produced, but initial tests seem to show that an X570 motherboard with a Ryzen 3900 and a Navi RX 5700/XT GPU looks good so far (see my videos above).
I would like to see the cost and performance of X570 PCIe 4 with a Navi GPU and a 12-core Ryzen CPU vs an Intel CPU plus NVIDIA RTX.
It's still early days and wrinkles are being ironed out -
Threadripper is coming, plus RX 5800/5900 GPUs and a new 64-core/128-thread server CPU... I feel sorry for Matt :)

Another interesting development for HDR editors using Adobe

Nvidia GeForce RTX cards now support 10-bit color in Adobe

"This is very big news. Previously, you had to buy an expensive Quadro RTX card if you wanted to use a 10-bit HDR monitor during any kind of video or photo editing. Thanks to this update, GeForce RTX cards can now support a 10-bit HDR workflow."


Posted on 2019-08-10 14:13:51
Umano Teodori

Luckily I can still find the Radeon VII in my country, and of course they don't make them anymore - it is a datacenter card with video output and consumer drivers. If it is stable enough with the software I need, I wouldn't buy anything else for a single-GPU system; I hope the new 59xx XT will prove me wrong.
But considering I won't do any 4K soon, and that I will already spend a lot of money on the PC and RAID system, I will try how my RX 580 8GB performs.

If you want to go ITX, I think there will be nothing better than the 3950X soon. A 12-14 core Intel in ITX is doable, but the best ITX X299 motherboard does not have Thunderbolt, and the guys here at Puget tested Ryzen with 3000 MHz RAM. The sweet spot is 3733, so Ryzen will perform better.

Good luck finding a good monitor for HDR which does not cost as much as a good car ;)

Honestly, I would not use any Adobe software for video; I would rather spend more time learning DaVinci - they've now made another page, "Cut", for quick editing. For sure there are better software packages for specific purposes, but having one app for everything, with project sharing, the best grading tools on the market, and hardware panels and a keyboard, is priceless to me.

Posted on 2019-08-14 17:28:24
Christopher Flannigan

Hi Matt, nice article. I was hoping you could help me with a GPU question. I'm building a PC using X570 for use with DaVinci Resolve and had been planning on getting the RX 5700 XT once the custom cards came out, but I see there are some bugs with the drivers. I see from some of your other posts there were also issues with the Radeon VII, and I wonder if they are the same and if they have been fixed? Obviously, if they are the same problems and were never fixed, or not fixed quickly, I think I should stay away from AMD GPUs. What would your advice be for the GPU - should I get the RX 5700 XT, the Radeon VII, or an NVIDIA card? (I could push to a 2080 Ti if it offered significantly better performance, but if I could get away with less I would prefer that.) Keep up the good work.

Posted on 2019-08-15 11:50:32

The Radeon VII is EOL now, so we haven't really done any testing to see if the issues we were having have been fixed or not. We never sold that card, and never will (since it is no longer available), so we don't have much of a reason to spend the time figuring out if the card is stable now or not.

In general, if you are looking at the "consumer" cards (GeForce/Radeon), we typically recommend NVIDIA. They at least have some official support for pro apps, while the AMD Radeon cards are almost entirely geared towards gaming. That doesn't mean you can't use them for video editing, but in general I think you will have a better experience with NVIDIA GeForce. The one area where this gets a bit hairy is Resolve, since AMD cards do very well for their price. We will actually have a GPU roundup post for Resolve coming out around the middle of next week if you want to wait and see how the 5700 XT compares to the NVIDIA SUPER cards.

Posted on 2019-08-15 16:28:44

Hello! This seems to be a more up-to-date blog entry, so I am reposting my question. I am looking into buying a new computer for editing (and some amount of grading) 4K footage with DaVinci. I am thinking of buying the Acer Predator Helios 300 (PH317-53-77NT). Most of the specifications look good enough: GeForce RTX 2070, 8 GB VRAM, 32 GB RAM, 2 SSDs. The only spec I am not sure about is the Intel i7-9750H CPU. It is not even listed in the tests here. Do you think this is powerful enough for editing, given that mostly the GPU matters? Or would it limit the other specifications?
Thank you so much!

Posted on 2019-08-16 22:00:06
David Monteagudo

Thanks for the insightful article and all the work you and your team do. I'm working on a project and we're really pushing the limits of our system and looking for any ways we can optimize performance. We're running Titan RTX 24 GB, 64-128GB, 2 Samsung 970 Pro 1TB on Resolve 16 Studio. We're working with 8k R3D footage in a 4096x2864 timeline with the decode set to Half Res Premium. Most (if not all) clips end up requiring some combination of FX (usually grid warping to extend the frame edges), compositing, sharpening, noise reduction, and/or stabilization. Currently looking into adding a second Titan RTX, but as you can see we're really pushing it. So, any thoughts for how to maximize performance (especially playback) are much appreciated!

Posted on 2019-08-20 16:43:25

The CPU makes a bigger impact in Resolve than most people think, so depending on what CPU you have, that may be an upgrade to look into. A second Titan RTX could help as well. Something else to be aware of is that RED's new SDK moves the decoding of RED footage from the CPU to the GPU, so that might help whenever Blackmagic integrates it into Resolve. You may not even need to make any hardware changes, or you might find that a second Titan RTX would be an even more useful upgrade.

No idea when that update will go live in Resolve though. I honestly expected it to happen last NAB (April 2019), but it didn't.

Posted on 2019-08-20 16:53:32
GA Video

They relied on the CPU because their software was obsolete and was not programmed to take advantage of the GPU. On my system (version 17), one GPU was being flooded by Fusion to 100% on very trivial tasks. Before removing the DaVinci Studio software from my system, I had to switch to version 16 to finish my project. I guess they tried to take advantage of the GPU, but performance is worse than that of v16. They've come a long way from the time they could not even take advantage of large memory and were dumping everything to disk, but they still have a long way to go.

Posted on 2021-04-26 02:15:47
Samrat kundu

Hi guys, I'm new to DaVinci Resolve (video editing 1080p - 2K/4K). Can anyone please suggest a GPU for my Ryzen 5 3600?
Thanks in advance!

Posted on 2020-06-27 18:40:00
GA Video

DaVinci Resolve looks good on the surface, but the underlying software is of poor quality. I used to say it looks like a Ferrari with a lawn mower engine. Blackmagic does not believe in regression testing and quality control. It is a big experiment on the patience of its users.

After a few years of working with this software, I just switched to HitFilm Pro and for the first time could see the software taking full advantage of my hardware. I would also wait until Resolve v18; it will take them a while to debug 17.

Posted on 2021-04-26 02:02:41