
DaVinci Resolve 15: AMD Threadripper 2990WX & 2950X Performance

Written on September 19, 2018 by Matt Bach

Introduction

DaVinci Resolve may be known for its ability to utilize the power of your GPU to improve performance, but your choice of CPU can definitely have an impact in many situations. However, the Resolve testing we've done in the past has shown diminishing returns after around 10 CPU cores. This begs the question of whether AMD's new 32 core 2990WX or even the 16 core 2950X will perform well in Resolve, or if Intel's X-series will continue to be our go-to CPU for DaVinci Resolve. To find out, we decided to test these new Threadripper CPUs in both the Color tab and the new Fusion tab alongside a wide range of Intel CPUs.

If you would like to skip over our test setup and benchmark result/analysis sections, feel free to jump right to the Conclusion section.

Test Setup & Methodology

Listed below are the test platforms we will be using in our testing:

Our testing for DaVinci Resolve primarily revolves around the Color tab and focuses on the minimum FPS you would see with various media and levels of grading. The lowest level of grading we test is simply a basic correction using the color wheels plus 4 Power Window nodes with motion tracking. The next level up is the same adjustments but with the addition of 3 OpenFX nodes: Lens Flare, Tilt-Shift Blur, and Sharpen. The final level has all of the previous nodes plus one TNR (Temporal Noise Reduction) node.

We kept our project timelines at Ultra HD (3840x2160) across all the tests, but changed the playback framerate to match the FPS of the media. For all the RAW footage we tested (CinemaDNG and RED), we not only tested with the RAW decode quality set to "Full Res" but also at "Half Res" ("Half Res Good" for the RED footage). Full resolution decoding should show the largest performance delta between the different CPUs, but we also want to see what kind of FPS increase you might see by running at a lower decode resolution.

Codec     | Resolution | FPS        | Camera            | Clip Name            | Source
CinemaDNG | 4608x2592  | 24 FPS     | URSA Mini 4K      | Interior Office      | Blackmagic Design [Direct Download]
RED (7:1) | 4096x2304  | 29.97 FPS  | RED ONE MYSTERIUM | A004_C186_011278_001 | RED Sample R3D Files
RED (7:1) | 6144x3077  | 23.976 FPS | WEAPON 6K         | S005_L001_0220LI_001 | RED Sample R3D Files
RED (9:1) | 8192x4320  | 25 FPS     | WEAPON 8K S35     | B001_C096_0902AP_001 | RED Sample R3D Files
H.264 / ProRes 422 HQ / ProRes 4444 / DNxHR HQ 8-bit / XAVC Long GOP | 3840x2160 | 29.97 FPS | Transcoded from RED 4K clip

With the addition of the "Fusion" tab in Resolve, we are also going to be including some basic tests for that tab. At the moment these are relatively easy projects that specifically test things like particles with a turbulence node, planar tracking, compositing, and 3D text with a heavy Gaussian blur node. These projects are based on the following tutorials:

If you have suggestions on what we should test in the future, please let us know in the comments section. We especially want to hear from you if you are able to send us a sample project to use!

Color Tab FPS - Raw Benchmark Results


Color Tab FPS - Benchmark Analysis

To analyze our benchmark results, we are going to break them down based on two criteria. First, we will completely separate the results based on whether we used a single GTX 1080 Ti or a triple GTX 1080 Ti setup. Second, we noticed that the results were significantly affected by whether the media was RED or not. Because of this, in both charts we will split up the results between RED and non-RED media.

The "Score" shown in the charts is a representation of the average performance we saw with each CPU for that test. In essence, a score of "800" would mean that on average that CPU was able to play our project at 80% of the tested media's FPS. A perfect score would be "1000", which would mean that the system gave full FPS even with the most difficult codecs and timelines.
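To make that concrete, here is a minimal sketch of how such a score could be computed from playback results. The even averaging across tests is an assumption on our part; the exact weighting behind the published scores isn't spelled out above:

```python
# Minimal sketch of the 0-1000 scoring described above. The even
# averaging across tests is an assumption, not the published formula.

def test_score(measured_fps: float, media_fps: float) -> float:
    """Score one test: 100 means the clip played back at full FPS."""
    return min(measured_fps / media_fps, 1.0) * 100

def overall_score(results: list) -> float:
    """Overall 0-1000 score from a list of (measured_fps, media_fps) pairs."""
    avg = sum(test_score(m, f) for m, f in results) / len(results)
    return avg * 10

# Example: a CPU averaging 80% of each clip's native FPS scores ~800.
print(round(overall_score([(19.2, 24.0), (23.98, 29.97), (20.0, 25.0)])))  # 800
```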

Starting with the single GPU results, there was actually little difference between each CPU. With non-RED media especially, it appears that the GPU is the limiting factor as every single processor scored within a few percent of each other.

With RED media, however, there is a tiny bit of difference but only if you are using a relatively low-end CPU like the Core i7 7820X or Core i7 8700K. At that level, the AMD Threadripper 1920X is a bit better - about 11% faster (or a few FPS on average) than the i7 7820X.

Moving up to a triple GPU setup is where we really start to see a difference between some of the processors. However, at the high-end the Threadripper 2990WX and the Core i9 7980XE performed almost exactly the same. Technically the 2990WX was a few percent faster, but it really only showed up as an FPS or two when using Temporal Noise Reduction.

At a more mid-range (TR 2950X vs i9 7900X) level, Threadripper takes the lead. Here, the 2950X gave about 14% higher FPS with RED media, although with non-RED media it only barely managed to sneak ahead of the 7900X. The results are a bit more in favor of Threadripper with a relatively low-end CPU (TR 1920X vs i7 7820X) where Threadripper gave about 26% higher FPS with RED media.

Fusion Tab FPS - Raw Benchmark Results


Fusion Tab FPS - Benchmark Analysis

Moving on to the results for Fusion, the first thing we want to point out is that having multiple GPUs doesn't appear to greatly affect performance. To be fair, we are not using footage in any of our projects that is particularly difficult to process, but given the FPS we saw in each project, we doubt that having multiple GPUs would significantly improve performance even if you are using 8K RED media.

That said, the results with each CPU are certainly very interesting. At first glance, it would appear that Fusion favors per-core performance more than a higher number of CPU cores. This is shown by the fact that the Core i7 8700K was easily the fastest CPU overall in our tests. On the other hand, the second fastest CPU was the AMD Threadripper 2950X which, compared to Intel, doesn't have the greatest per-core performance.

Really, what it comes down to is that with the exception of the Core i7 8700K, all the CPUs performed within ~5% of each other. This is a very interesting result since it appears that we are not CPU bottlenecked... but we also don't seem to be bottlenecked by the GPU performance either. We will need to look into this further in the future, but our best guess here is that we actually are GPU limited, but Fusion doesn't scale well across multiple video cards. So if we want better performance, we may need to upgrade to a higher-end GPU like the Titan V.

Is Threadripper 2 good for DaVinci Resolve?

Based on our testing, AMD's Threadripper 2 CPUs are terrific choices for DaVinci Resolve 15. They are not always significantly faster than a similarly priced Intel X-series CPU, but if you have multiple video cards and work with RED footage - especially at 6K/8K resolutions - you can often see significant FPS gains with Threadripper.

AMD Threadripper 2990WX & 2950X DaVinci Resolve 15 Benchmark

The charts above really don't tell the whole story since they are simply an overall compilation of all the Color Tab results (we opted to skip the Fusion results since we didn't see a significant difference between each CPU). To really settle the Intel vs AMD question for DaVinci Resolve, there are three primary comparisons we should be looking at based on the rough price of each CPU:

AMD Threadripper 2990WX vs Intel Core i9 7980XE for DaVinci Resolve

In most situations, the 2990WX and 7980XE are going to perform almost exactly the same in Resolve. One thing to keep in mind is that we saw very little difference between the 2990WX and the 2950X. So, if you are considering purchasing the 2990WX, you may be better off sticking with the 2950X and diverting the difference in cost towards a more powerful GPU configuration.

AMD Threadripper 2950X vs Intel Core i9 7900X for DaVinci Resolve

For many users, these CPUs should perform about the same unless you have multiple video cards and work with RED footage. In that case, the 2950X should be able to give you on average about 14% higher performance in the Color Tab. However, with 6K/8K RED media in "Full Res. Decode" mode the 2950X can at times give up to 10 more FPS compared to the i9 7900X.

AMD Threadripper 1920X vs Intel Core i7 7820X for DaVinci Resolve

In most cases, the Threadripper 1920X has the lead - but typically only if you work with RED footage. In that case, the 1920X should be able to give you on average about 10-25% higher performance in the Color Tab. However, with 6K/8K RED media in "Full Res. Decode" mode the 1920X can at times give up to 10-15 more FPS compared to the i7 7820X if you have multiple video cards.

Overall, AMD's Threadripper CPUs are a pretty clear winner for DaVinci Resolve as they at the very least match the Intel X-series and in some situations significantly outperform it. The main thing to keep in mind is that since DaVinci Resolve utilizes the video card to such a heavy extent, having a more powerful CPU won't matter if you don't have the GPU power to match. This means that if you only have a single GPU, you may actually be better off with an Intel CPU simply due to the lower power draw, which translates to a quieter workstation. However, if you do have multiple video cards and work with RED footage, Threadripper is the clear choice.

Choosing the right CPU for your system is a complicated topic, and DaVinci Resolve is likely just one of many programs you use every day. If you want to see how the 2990WX and 2950X fare in other applications, we recommend checking out some of our other recent Threadripper articles.

Tags: DaVinci Resolve, Threadripper, 2990WX, 1950X, 2950X, Core i7, Core i9, 8700K, 7820X, 7900X, 7920X, 7940X, 7960X, 7980XE
Håkon Broder Lund

Fantastic work! It is clear that the 2950X is the overall winner for Resolve, balancing the best performance in grading and Fusion. Fusion was especially interesting and the most unexpected for me.

Will you include Blackmagic RAW in future tests? The codec is said to be optimized for multi-core CPUs, so it would be interesting to see how these many-core processors fare with it. Being an open RAW codec, along with the ability to render to Blackmagic RAW files in Resolve, makes it a promising codec for the future. Blackmagic is providing sample footage on their website. If needed, I can also provide you with my own raw footage from my URSA for testing purposes.

Posted on 2018-09-20 01:39:21

Yea, the 2950X is a pretty darn good CPU for Resolve.

As for Blackmagic RAW, that is definitely on my radar - as is ProRes RAW. Right now is just a bit crazy with a whole lot of hardware updates happening (Threadripper, NVIDIA RTX cards soon, etc) so adding to our testing process will have to wait until the current wave dies down a bit. I think the sample clips from Blackmagic should be sufficient for our testing, but I'll definitely take you up on your offer if they don't work for whatever reason!

Posted on 2018-09-20 17:47:29

OK, fine, you twisted my arm enough to make me add it into our Resolve testing. It should show up in all our future articles - starting with the ones looking at the new RTX GPUs from NVIDIA coming in the next few weeks (I hope). I've been meaning to add H.264 Long GOP and XAVC S to our testing anyway, so this lets me do it all at once.

A quick correction though: it looks like you can't actually export out to BRAW. From what I've read, that was a bit of a slip during the presentation that was taken the wrong way. This forum post https://forum.blackmagicdes... could be wrong about that, but I couldn't find a way to export to it either.

Posted on 2018-09-20 22:18:46
Håkon Broder Lund

haha sorry about that. Looking forward to the results!

Thanks for the correction about BRAW.

Posted on 2018-09-21 21:21:58
CSell

Please include the new NVIDIA RTX 2080 Ti graphics card in future tests.

Posted on 2018-09-20 10:12:38

We definitely will once we get our hands on some of those cards. We unfortunately were not able to get them pre-launch for review, but hopefully we will be able to get some in for testing soon.

Posted on 2018-09-20 17:48:08
AlbertS

WOW - A lot of hard work with some surprises - Thank you!!!!!
It will be interesting to see how the 8 core i7-9900K performs when it's launched next month.
AMD has been pushing more cores to consumers, which can be misleading. Apps are limited by multi-threading/parallelism, i.e. the number of tasks that can be performed in parallel. Servers benefit from more users running more apps, but I don't think AMD's 32 core monster is aimed at the server market - which begs the question...

Posted on 2018-09-20 12:49:25

Yep, we're curious to see how the 9900K does as well! Usually we only see about a ~10% performance gain per generation, however, so I don't think it is going to be able to keep up with the more expensive Intel/AMD CPUs. You also really should only use dual GPUs on that platform which is going to hold it back a bit from really high-end color grading work.

That said, we won't know for sure until it actually launches and we can get our testing published!

Posted on 2018-09-20 17:50:30
cls105

You are using old logic with this. From a 4 core to a new gen 4 core, the gains were minimal. But 7700K to 8700K is about a 40% gain (with proper cooling), which is huge. Same thing from 6 core to 8 core. Intel also stopped using crappy thermal paste, which should help cooling. But I'm happy with my 14 core 7940X.

Posted on 2018-11-05 22:11:41

We actually were off just a bit with our prediction, although in DaVinci Resolve the difference truly was about 10% https://www.pugetsystems.co... . Premiere Pro, After Effects, and a few others showed closer to a 20% jump in performance, but that was the largest we saw.

Posted on 2018-11-05 22:22:23
Jack Fairley

I would love to see some more targeted testing done with the 2990WX. The results shown here are what experts could already assume to be true: lesser media than 8K R3D can be decoded by a lesser processor in real time, and existing processors were already enough to decode 8K 25 fps 9:1 R3D. For these top-end processors, please try the 8K 60 fps 12:1 R3D sample.

Additionally, the current AMD architecture needs higher RAM clocks than you are testing with for maximum performance. DDR4-3200 CL14 would be more appropriate.

Posted on 2018-09-20 19:11:29
Jack Fairley

I would be remiss if I did not thank you on behalf of the Resolve user community for being one of our only sources for hardware performance testing, of course!

Posted on 2018-09-20 19:12:42

DDR4-3200 is getting into overclocking, which we tend to shy away from - especially in our testing, since it greatly expands the number of variables. The officially supported RAM for the 2990WX is DDR4-2933, but for AMD CPUs the actual supported speed depends on the number of sticks you are using and whether that RAM is single or dual rank. I don't think AMD has publicly released what the supported speeds are with those factors included (which is annoying), but based on what they are for the Ryzen CPUs, I'm pretty sure we are already technically overclocking since we are using 8 sticks. My best guess is that we "technically" should only use DDR4-2400 if we wanted to stay fully within AMD's specifications.
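To put that in code form, here is a rough sketch of the kind of lookup involved. Every number below other than the headline DDR4-2933 is a guess extrapolated from the desktop Ryzen spec sheets, not an official AMD figure:

```python
# Hypothetical sketch: how officially supported DDR4 speed might scale
# with DIMM count and rank on Threadripper 2. Only the 2933 headline
# figure is published by AMD; the rest are rough guesses based on the
# desktop Ryzen spec sheets.
SUPPORTED_SPEED_GUESS = {
    # (DIMM count, rank): max supported speed in MT/s
    (4, "single"): 2933,  # AMD's headline spec
    (4, "dual"):   2667,  # guess
    (8, "single"): 2400,  # guess - our 8-stick test config likely lands here
    (8, "dual"):   2133,  # guess
}

def max_supported_speed(dimms: int, rank: str) -> int:
    """Return the (guessed) maximum officially supported memory speed."""
    return SUPPORTED_SPEED_GUESS[(dimms, rank)]

print(max_supported_speed(8, "single"))  # -> 2400
```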

---------
Adding a quick edit for those that might not actually know what we do here at Puget Systems: I think a lot of people miss the fact that we are a workstation manufacturer, and our articles are a way to support that business rather than being our primary focus in and of themselves. Overclocking or using RAM with higher clocks/lower timings is something that I'm sure lots of people do when they build their own systems, but that is something we don't feel is worth the risk in the workstations we sell. Because of this, we tend to keep our testing within what we would actually sell to one of our customers.

Posted on 2018-09-20 19:16:33
right copy

Why is the "score" in the overall color grading test more than 100?
I think the score cannot exceed 100.

Posted on 2018-09-21 13:26:25

Oops, that is a typo. We typically do individual scores (which aren't used in our Resolve testing, but are used in our Premiere Pro and After Effects testing) on a scale of 100, but increase it to a scale of 1000 for the overall scores. I must have just forgotten that while I was writing that section. Thanks for pointing it out, I've got it fixed now!

Posted on 2018-09-21 15:19:40
Gary Hoyer

Very helpful analysis, except your Test Setup section does not mention a few important software settings that could have a significant impact on performance, namely:

- Was the "Use display GPU for compute" option set when using multiple GPUs?
- Was the "Decode H.264/HEVC using hardware acceleration" option set?
- For RED footage was the "Use GPU for RED Debayer" option set?

Posted on 2018-11-15 02:25:17

Yes, yes, and yes. Besides "Use display GPU for compute", those are all the default options. And really, using the display GPU for compute should be the default as well in my opinion. I guess they are worried people will have a low-end display GPU or something - that's the only reason I can think of for it on anything newer than Resolve 14.

Posted on 2018-11-15 02:30:18
Mosquito

This looks like a ton of work, thank you!

I'm curious, though, how all these results and numbers correspond to actual use. What do these numbers mean for an actual render of a finished video?

Posted on 2018-11-17 04:28:31

It should be along the same lines as the live playback FPS, but it will be slightly different since there is no "max FPS" you can hit during the final render. I have it on our to-do list to bring exporting back to our Resolve testing; it is just somewhat annoying to do since export settings are not saved when you export/import projects between systems. I've got some ideas on how to get around that without it being too manual of a process though.
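For example, one possible workaround would be to apply the render settings from a small script through Resolve's Python scripting API rather than relying on them surviving the project import. A rough sketch - the output path is hypothetical, and the supported setting keys vary by Resolve version:

```python
# Sketch: push identical render settings onto every test system via
# Resolve's scripting API instead of storing them in the project file.
# API availability and supported keys vary by Resolve version.
import DaVinciResolveScript as dvr

resolve = dvr.scriptapp("Resolve")
project = resolve.GetProjectManager().GetCurrentProject()

# Apply the same render setup everywhere (output path is hypothetical).
project.SetRenderSettings({
    "SelectAllFrames": 1,
    "TargetDir": "D:/benchmark/output",
    "CustomName": "export_test",
})

project.AddRenderJob()    # queue a job with the settings above
project.StartRendering()  # kick off the render for timing
```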

Posted on 2018-11-17 04:33:25
Mosquito

Awesome, thanks for the quick response Matt!

Posted on 2018-11-17 04:47:29
AlbertS

Hi Matt - Many users on the Resolve Forum feel that ver 15 with integrated Fusion has increased hardware demands over Resolve 14. Is there any indication of that in your 14 vs 15 benchmarks? Thanks

Posted on 2018-11-17 10:11:40

Not really - from what we saw, I believe we are getting higher FPS using the same hardware on newer versions of Resolve. Fusion changes the hardware problem a bit since it can only use a single GPU and likes different kinds of CPUs than the rest of Resolve, but in the Edit/Color/Deliver tabs things really haven't changed much.

That said, we don't often do direct comparisons between versions, and hardware changes so rapidly that we rarely have the exact same hardware across a range of articles. So there could be a small difference in some situations.

Posted on 2018-11-17 14:57:11
AlbertS

Thanks Matt :)

Posted on 2018-11-17 16:33:07
Chad Capeland

Fusion doesn't use more than one GPU for OpenCL or OpenGL. The only way it scales to more than 2 GPUs is if you run one for OpenCL exclusively and one for OpenGL exclusively. This is of course not the ideal setup for Resolve's Color tab. Perhaps you should be testing Fusion Studio, not just Resolve?

Posted on 2018-12-30 22:12:35
Chad Capeland

Huge question mark on the testing... With Color, it's common for a single frame to be used to make the grade and then you're non-interactively waiting for a render. With Fusion, at least with visual effects, you're often interacting with all of the frames in the shot and doing it interactively. The render times don't matter so much as the interactive performance. You might spend hours iterating on a shot that's only 50 frames long. So the performance of the final render is not an issue compared to how quickly interactive iteration can happen. The consequence for benchmarking is that you should assume all of the input plates are cached in RAM before you render. That would also eliminate any variability in using MediaIn vs Loader nodes.

Posted on 2018-12-31 00:57:46
David Weeks

Where's the GNU/Linux run? I wager that GNU/Linux works better than MS W10 with high core/thread counts. If you're going to present a review of hardware, it seems to me you need to account for OS biases by running both.
The Windows results, I do not trust them.

Posted on 2019-02-05 15:10:01

It is probably due for an update since Resolve 15 launched, but we did that comparison about a year ago and Windows 10 was actually slightly faster than CentOS: https://www.pugetsystems.co... .

Linux support has improved for things like audio, but I haven't heard anything regarding performance. My advice is to use the OS that fits what you need for things like software compatibility or security rather than straight performance. Linux is great if you want a locked-down OS, but the fact that things like Photoshop are not Linux compatible is a deal breaker for many people.

Posted on 2019-02-05 16:42:36
David Weeks

If you do a Linux test, you should use a tweaked version of Linux, rather than an out-of-the-box distribution. You can't tweak M$, but you're expected, at this level, to tweak your Linux kernel. As a customization company, I'd see this tweaking as a hard-to-get value add for your customers.

Gentoo Linux is a good place to start. Just saying.

Posted on 2019-02-07 14:44:37
Ajet Luka

Hi Everybody
Does anybody have any good info on how the AMD Threadripper 2990WX compares to its Intel counterparts for Deep Learning applications? I know the Threadripper has more cores, but almost all the scientific computing libraries are optimized for Intel, so there really is no telling without actually comparing them directly. Thanks for your help.

Posted on 2019-03-14 08:18:07

For scientific computing stuff, I would recommend checking out Dr Don Kinghorn's HPC Blog: https://www.pugetsystems.co...

He hasn't done a lot with AMD chips, but here is a recent comparison between the 2990WX and a lower core count Intel processor:

https://www.pugetsystems.co...

None of the stuff tested there is deep learning, though... and from my limited understanding, that is mostly done on GPUs. Dr Kinghorn could better address how much the CPU matters for that sort of stuff, but I can at least say that there are Threadripper motherboards which will let you stack a full set of four GPUs if that is part of your goal.

Posted on 2019-03-14 15:29:37