Read this article at https://www.pugetsystems.com/guides/1063

DaVinci Resolve 14 CPU Performance: Skylake-X vs Threadripper

Written on October 13, 2017 by Matt Bach


For a number of years, we have focused our video editing testing around Premiere Pro and After Effects, but we have received a large number of requests to include DaVinci Resolve as well. Resolve has actually been on our list for a while now, but since Resolve 14 was supposed to make fundamental changes to how the software runs we decided to hold off our testing until it was available. The launch of Resolve 14 dragged on longer than we expected, but it finally came out of beta about a month ago and we were able to begin our testing.

Today, we have published our first two articles for DaVinci Resolve. This article looks at how performance changes with different models of CPUs, while our other article looks at GPU scaling performance with 1-4 NVIDIA Titan Xp video cards. There is quite a bit more we want to test, but this is an excellent start and will allow DaVinci Resolve users to give us feedback on our test process that we can leverage in future articles. Plus, we wanted to get these articles up before we left for the NAB Show in New York next week (Oct 17-18, 2017). Come visit us at booth N570 in the "Post + Production Pavilion" of Hall 3B if you happen to be attending!

There is a lot we could (and want to) look at in DaVinci Resolve, but today we will be focusing on performance while exporting and more importantly FPS while grading in the color panel. Our testing includes 4K RED, H.264, ProRes 422, and DNxHR HQ footage as well as 6K and 8K RED footage. If you would rather skip over our analysis of the individual benchmarks, feel free to jump right to the conclusion section.

Test Setup

Similar to most other post production software, DaVinci Resolve should benefit from having a CPU with a relatively high core count. In this article, we will be looking at a range of Intel Core i7/i9 Skylake-X CPUs with between 8 and 18 cores as well as the 12 and 16 core AMD Threadripper CPUs. Depending on the results we find, we may expand our testing in the future to include the new Intel Coffee Lake 6 core CPUs or even dual Xeon configurations.

While our GPU scaling analysis article found that multiple video cards really only make a difference when using things like temporal noise reduction, we went ahead and used three NVIDIA Titan Xp video cards for our testing. Most of our results should be roughly identical to what we would get with just a single GPU, but we feel it is a good idea to ensure we are not bottlenecked by the GPU since we are comparing CPU performance in this article. For users of the free edition of DaVinci Resolve, this does mean that almost everything short of the color grading "torture" test should directly apply to you even though the free edition only supports a single GPU.

Before getting into our testing, we want to point out that while our test platforms are using a single storage drive, that is not actually what we would typically recommend to our customers. Normally we would recommend having an SSD/NVMe for your OS and programs and a dedicated drive or multiple drives for your media. However, this is in large part for capacity and organizational purposes and should not have any impact on our results, so we opted to use a single drive to cut down on the number of testing variables.

Most of the media we will be using is available from the Sample R3D files. However, there were two things we did differently. First, we transcoded the 4K RED footage to a number of other popular codecs we wanted to test. Second, we increased the project FPS to 60 for our color grading tests regardless of the original media's FPS. Since many of our CPU/GPU combinations were able to easily play back our test footage at ~24 FPS, this was a relatively easy way to increase the spread and accuracy of our results.



The 4K RED footage was transcoded to:

  • H.264 .MP4
  • ProRes 422 .MOV
  • DNxHR HQ 8-bit .MOV

The 8K RED footage was 8192 x 3456.

To test exporting we used a moderately complex timeline involving multiple clips, basic color grading, multicam footage, and various effects like crop, Gaussian blur, and cross dissolves. We do not yet have a video showing our exact test timeline, but it is nearly identical to what we use in our Premiere Pro testing, only without the logo and text overlay.

For our color grading tests, we applied three levels of correction which we called Basic, Advanced, and Torture:


Basic:

  • Single node adjustments

Torture:

  • All nodes from "Advanced"
  • Adds a single node of temporal noise reduction

Exported stills from our 4K testing are available for download if you wish to examine them in more detail.


Exporting

Performance while exporting is likely a secondary consideration for many users, but it is still an important factor to keep in mind. Color grading may be what Resolve is known for, but you still have to get that graded footage delivered after you are done working on it.

DaVinci Resolve 14 Exporting CPU Benchmark

[+] Show Raw Results

Our testing was fairly extensive with around 175 data points, so while the actual results in seconds are available if you click on the "Show Raw Results" link, what we will mostly be looking at is the average performance relative to the Core i7 7820X. If there is a specific type of footage you tend to work with, we highly recommend taking a look at the raw results to get a more accurate idea of what you may see rather than just the average.
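To make the "relative to the Core i7 7820X" numbers concrete, here is a minimal sketch of how such a relative average can be computed. The export times below are invented for illustration; they are not the actual benchmark data.

```python
# Illustrative only: one way to compute "performance relative to the
# Core i7 7820X". The export times below are invented, not measured data.

# Export times in seconds (lower is better) for two hypothetical tests.
results = {
    "Core i7 7820X": {"4K RED": 100.0, "4K H.264": 120.0},
    "Core i9 7960X": {"4K RED": 70.0, "4K H.264": 90.0},
}

baseline = results["Core i7 7820X"]

def relative_score(times, baseline):
    """Average of per-test speedups versus the baseline CPU (1.0 = parity)."""
    ratios = [baseline[test] / seconds for test, seconds in times.items()]
    return sum(ratios) / len(ratios)

for cpu, times in results.items():
    print(f"{cpu}: {relative_score(times, baseline):.2f}x relative performance")
```

Averaging the per-test ratios (rather than the raw times) keeps long tests like 8K RED exports from drowning out the shorter ones.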

Overall, the results are very interesting. AMD's Threadripper CPUs did very well for their price, typically beating the similarly priced Intel CPUs by a few percent. There were a few times AMD beat Intel by a larger margin, but there were also times when Intel was faster than AMD so all in all we would give a slight edge to AMD at that price bracket. For the best possible performance, however, Intel definitely holds the lead at the top end.

With the Intel Core i9 CPUs, the performance gains start to plateau the higher you get, but we saw some decent performance gains all the way up to the Core i9 7960X 16 core. Interestingly, the Core i9 7980XE 18 core was actually slower than the Core i9 7960X in almost every case so while there is an argument for having a high number of CPU cores, there is definitely an upper ceiling when exporting.

This is also an indication that a dual Xeon workstation may not be great for Resolve given the high core counts we now have available on a single CPU. Dual Xeon workstations allow for even higher core counts, but each individual core tends to run at a lower frequency. This, combined with the additional overhead associated with dual CPU setups, makes it very likely that a Core i9 7960X is going to be faster than any dual Xeon workstation no matter how much money you throw at it.
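The core-count-versus-frequency tradeoff can be sketched with a toy Amdahl's-law model. The parallel fraction and clock speeds below are illustrative assumptions, not measurements from this testing:

```python
# A toy Amdahl's-law model of why more, slower cores can lose: per-core clock
# scales throughput linearly, while extra cores only help the parallel part.
# The parallel fraction and clocks are illustrative assumptions, not data.

def throughput(cores, clock_ghz, parallel_fraction=0.85):
    """Relative throughput: clock speed times the Amdahl's-law speedup."""
    amdahl = 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)
    return clock_ghz * amdahl

single_16 = throughput(cores=16, clock_ghz=3.6)  # a 7960X-class single CPU
dual_28 = throughput(cores=28, clock_ghz=2.5)    # dual lower-clocked Xeons

print(f"16 cores @ 3.6 GHz -> {single_16:.1f}")
print(f"28 cores @ 2.5 GHz -> {dual_28:.1f}")
```

With any serial fraction at all, the extra cores quickly stop paying for the lost clock speed, which matches the plateau we saw above the 14-16 core mark.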

Color Grading

Color grading is really where we wanted to focus our efforts so we used three different levels of grading on our test footage. With each footage type and grading level, we watched the FPS counter and recorded the lowest result after giving about a 15 second "grace period" since the counter can sometimes take a bit to settle down. This test was repeated three times with Resolve being restarted in between tests. Across the three runs, we logged the highest of the minimum FPS results and used that in our results below.
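The selection rule described above (15 second grace period, lowest FPS within each run, highest of the minimums across runs) can be sketched as follows; the FPS samples are invented for illustration:

```python
# Sketch of the measurement rule described above: ignore a ~15 second grace
# period, take the lowest FPS after it within each run, then report the
# highest of those minimums across three runs. Sample data is invented.

GRACE_SECONDS = 15

def run_minimum(samples, grace=GRACE_SECONDS):
    """samples: (elapsed_seconds, fps) readings from one Resolve session."""
    settled = [fps for elapsed, fps in samples if elapsed >= grace]
    return min(settled)

def reported_fps(runs):
    """Highest of the per-run minimum FPS values across repeated runs."""
    return max(run_minimum(run) for run in runs)

runs = [
    [(5, 18.0), (20, 31.0), (40, 29.5)],  # run 1: counter settling early on
    [(5, 40.0), (20, 30.5), (40, 30.0)],  # run 2
    [(5, 22.0), (20, 31.5), (40, 28.0)],  # run 3
]
print(reported_fps(runs))
```

Taking the best-of-three minimum filters out both the startup spike in the FPS counter and one-off dips from background activity.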

DaVinci Resolve Color Grading CPU performance benchmark

[+] Show Raw Results

Just like in the previous section, the chart above is formatted based on the performance relative to the Core i7 7820X. If you want to see the raw results in FPS, just click on the "Show Raw Results" link.

Starting with the AMD vs Intel question, the two AMD Threadripper CPUs did pretty well in our "basic" and "advanced" grading tests, performing right in line with or slightly ahead of the Intel CPUs. Where they really fell behind was in our "torture" test that included temporal noise reduction. In those tests, the AMD CPUs fell significantly behind even the Core i7 7820X 8 core CPU.

Overall, for our "basic" tests there was a small but consistent gain in performance when using a more powerful CPU. It isn't much in most cases, but it is there. On the "advanced" and "torture" tests, however, we definitely hit a performance wall right around the $1000 mark. As we noted, AMD falls behind in the "torture" test, but with the Intel CPUs you are only going to see minimal performance gains above the 14 core mark. There is still a small gain with the Core i9 7960X 16 core, but actually a performance drop with the Core i9 7980XE 18 core.


Conclusion

Looking at a broad overview of our testing and only separating based on whether we were exporting or color grading, we see the following performance from each CPU:

DaVinci Resolve CPU Benchmark overall performance

Overall, the results are pretty straightforward. If you spend more money, you will likely get at least a small gain in performance. There are things not shown in this chart (such as the poor AMD performance when using temporal noise reduction), but this is about as simple as a performance chart gets. So really the question of Intel Skylake-X vs AMD Threadripper comes down to whether you use things like temporal noise reduction (TNR) and your overall budget. If you do use TNR, Intel is clearly the better choice. If you don't, then we would rate Intel and AMD as roughly the same at similar price points. Intel has the performance lead with their more expensive Core i9 7940X and Core i9 7960X, however, so if your budget can afford it those CPUs are an excellent choice.

One point we want to make sure to call out is that we didn't see any benefit to using the Core i9 7980XE 18 core. In almost every case, the Core i9 7960X 16 core was actually faster. There may be an argument for the 7980XE from a multi-tasking standpoint, but even then having two more cores likely isn't worth the 4% drop in FPS when color grading.

As we noted in the introduction, this is just the first of many articles looking at performance in DaVinci Resolve and there is still a lot we want to test: exporting all the color graded clips in addition to our test timeline, adding motion tracking, testing the performance impact of using a Decklink display out card, and much more. If you think there is something else we should include, we encourage you to let us know in the comments, especially if you are able to send us an example project. We have found active communication with the community to be incredibly valuable, as it helps us shape our testing to match exactly the type of work that real users do every day.

DaVinci Resolve Workstations

Puget Systems offers a range of powerful and reliable systems that are tailor-made for your unique workflow.

Configure a System!

Labs Consultation Service

Our Labs team is available to provide in-depth hardware recommendations based on your workflow.

Find Out More!
Tags: DaVinci Resolve, Skylake-X, Threadripper, 7980XE, 7960X, 7940X, 7920X, 7900X, 7820X, 1950X, 1920X
Max Roberts

Great article! Glad to see some tests in DaVinci. I'd be curious to see where more mainstream CPUs like Coffee Lake and Ryzen 7 fall in the mix. This one is a longshot, but I'd also be interested in seeing tests done in Assimilate Scratch, another color grading program. It's not as popular... or as accessible as DaVinci, so there aren't many articles, or any really, that look into how different systems affect performance.

Posted on 2017-10-14 15:25:42

Coffee Lake and Ryzen we will probably get to at some point, but to be honest it isn't super high on our to-do list. Most of our customers are looking for relatively high-end workstations so even if those CPUs are great for their price, they aren't going to be able to match the X-series CPUs in terms of raw performance. That said, I know there are a lot of people using the free edition of Resolve that are looking for a more moderately priced system, so testing Coffee Lake and Ryzen is still something I'd like to get done at some point in the (hopefully) not too distant future.

Posted on 2017-10-17 19:14:26
Joe S.

Hi, how is the status of the 8700K test with Resolve? Because after reading the one on PS & LR, I kind of suspect the 8700K may be a good contender and even beat some of the more expensive CPUs in the quality of life department. With that I mean playback, scrubbing, turning nodes on/off, temporal NR, basically everything which depends on more GHz and which makes the system "feel" fast while working.

Export speed is not so important to me, when I'm working on a project, I hit a milestone after a few hours & then export. I actually kind of would enjoy 5min more break time to rest the eyes ;)

Posted on 2017-12-13 21:46:00
Joseph Kalliff Fraude

Would love to know too!
Not sure if I should get the Ryzen 1700X or Intel i7 8700K, or add a little more to get the Intel 7820X?
And how much difference is there between the Ryzen 1700X vs the Intel 7820X?
Some say the Ryzen 1700X is not much different from the Threadripper 1900X, but they did not compare it with the Threadripper 1920X.
So I would like to know the answer too.
And yes, I also do not really emphasize export speed; I too would like more break time =)
And as long as it's smooth when I'm working, it will help a lot to finish my work!
Thanks !

Posted on 2018-02-14 12:31:39

Agreed - Resolve also has support for Intel 8700K Iris GPU and Quicksync which is not available on the CPUs tested here. Useful for those of us using H.264 source files and not using RED or RAW. Also please include H.264 HD & 4K test files.

Posted on 2018-07-04 13:55:25

The 8700K doesn't have an Iris GPU, it's the UHD 630. Does Resolve support Quicksync/iGPU optimizations? I know Adobe does now, but I don't think Resolve does (beyond maybe being able to use the iGPU for OpenCL stuff).

Posted on 2018-11-17 05:17:09

Resolve does not support hardware acceleration via Quicksync. It does support acceleration for h264 and (I believe) HEVC, but that is done through the GPU.

I'll also add that many of our newer articles include the 8700k and 9900k like https://www.pugetsystems.co...

Posted on 2018-11-17 05:24:39

Hardware acceleration via Quicksync encoding is done by the Intel GPU including 4K and HDR. Intel iGPU HEVC hardware decoding for edit plus encode/render is also supported by other NLEs. Please add the 9900K export to the Ver 15 benchmark - Thanks.

Posted on 2018-11-17 10:02:33

Sure, that is supported in other software packages like Premiere Pro, but Resolve doesn't use Quicksync. "Hardware acceleration" is a pretty generic term, and different developers use it in very different ways. Adobe and Blackmagic are the perfect example, since in Premiere they separate out hardware acceleration (Quicksync) and GPU acceleration, whereas Resolve uses "hardware acceleration" to mean acceleration via the GPU.

Export testing we do want to bring back at some point - we used to do it, but deliver settings don't get saved when you export/import projects between machines so it is a bit annoying. I have some ideas on that though, so hopefully future articles will start including it again.

Posted on 2018-11-17 15:00:29

Thanks - I think there is some confusion between "the GPU" and the Intel GPU integrated into the CPU. The 8700K Intel iGPU has powerful media capabilities for hardware assisted edit timeline decoding of 4K H.264 and H.265 highly compressed codecs used in many cameras including 10bit and Rec2020 support. Likewise the iGPU can speed up encoding using Quicksync hardware. Intel has APIs for developers to use these features.
" Intel claims that Kaby Lake-U/Y can handle up to eight 4Kp30 AVC and HEVC decodes simultaneously. HEVC decode support is rated at 4Kp60 up to 120 Mbps (especially helpful for premium content playback and Ultra HD Blu-ray). With Kaby Lake-U/Y's process improvements, even the 4.5W TDP Y-series processors can handle real-time HEVC 4Kp30 encode"
" The Video Quality Engine also receives some tweaks for HDR and Wide Color Gamut (Rec.2020) support. Skylake's VQE brought in RAW image processing support with a 16-bit image pipeline for selected filters."

Posted on 2018-11-17 16:57:48

Yes, it does have decode/encode abilities, but if you have a separate graphics card installed it won't use them; it will use the decoder on the GPU. Resolve does not use the encode capabilities of the integrated graphics on the CPU at all, even if you don't have a separate graphics card. It uses the CPU cores, or "software encoding". If you don't have a graphics card then it will use the decode capabilities of the integrated graphics for scrubbing etc., but probably won't for the actual encode (when you encode a video you have edited, it will decode it first, maybe with the integrated graphics, maybe not) before putting the clips together and re-encoding them in the codec you choose. The CPU cores (and not the integrated graphics) do the encoding. On Resolve, anyway.

But Adobe (and Final Cut Pro on Mac) actually use the encoding features of the iGPU. Resolve does not use integrated graphics for encoding anything, period. These so-called "Quicksync" or iGPU acceleration features make a huge difference in render time, but only for certain codecs (H.264 and maybe H.265) and only at certain bitrates and settings.

These new features make a big difference, and an i5 can be faster than a Ryzen 8 core or even an i7 that's not using these features. But Resolve doesn't use them. It may use the integrated graphics for scrubbing and playback when editing, but that's it, and even then only if there is no separate graphics card installed. Adobe and Final Cut on Mac use the integrated graphics fully.

I just don't want people to get mixed up and think that these integrated graphics updates are available in Resolve.

Posted on 2018-11-17 22:18:05

Final Cut Pro on Mac also uses the Quicksync features that Adobe just added. They were first, I think.

Posted on 2018-11-17 22:19:25

Edius NLE uses the Intel iGPU for both timeline decoding and quicksync decoding.
Magix (who own Sony Vegas) has also added Intel iGPU support for encoding and decoding via a joint project with Intel.

Posted on 2018-11-18 07:41:38

Thanks a lot for this article!

Posted on 2017-10-15 22:34:13

Very interesting. Looks like going with a higher base clock rate is as important as the number of cores, if not more so. Would it be possible to add a Xeon to the test?

Posted on 2017-10-15 22:39:57

Dual Xeon we will likely test at some point, but the new Xeon "Scalable Processor" line is a bit of a mess right now. It's fine if you want a server, but for workstations the options are extremely limited. In fact, we still haven't moved to those CPUs since we don't feel we could offer a solid product to our customers quite yet.

Single socket Xeon we likely won't test unless something significant changes. There isn't really a reason to use them over the Core i7/i9 CPUs unless you need ECC memory which Resolve really doesn't need. Plus, they cost significantly more than the Core line CPUs.

Posted on 2017-10-17 19:18:30

Thank you so much for all the many hours you've obviously spent creating these results!

However, all of the raw results for the colour grading are labelled incorrectly. The x-axis is labelled "Seconds - Lower is Better" when I think it should be labelled "FPS - Higher is Better".

Finally, please will you run these benchmarks on the slower CPUs with a single GPU? I'm very curious to see how these much cheaper systems perform in comparison (CPU, 1 GPU, and only 64GB RAM, except for the 7820X which would still be 128GB). Specifically, the Intel Core i7 7820X, 7800X, 7700K, 8700K, i5 8600K, i3 8350K, AMD Ryzen 1800X and 1700X?

Posted on 2017-10-16 12:29:10

Thanks for pointing out the chart error. I'm on a trip right now so I can't get that corrected, but I can at least add a note above those charts.

Coffee Lake and Ryzen we will probably get to at some point, but to be honest it isn't super high on our to-do list. Most of our customers are looking for relatively high-end workstations so even if those CPUs are great for their price, they aren't going to be able to match the X-series CPUs in terms of raw performance. That said, I know there are a lot of people using the free edition of Resolve that are looking for a more moderately priced system, so testing Coffee Lake and Ryzen is still something I'd like to get done at some point in the (hopefully) not too distant future.

The 7800X, however, likely won't make that list. That CPU is just kind of a weird choice now that the 8700K is out. For everything we have tested, the 8700K performs much better than the 7800X, although the 7800X does support 128GB of RAM instead of just the 64GB of the 8700K. However, even before the 8700K launched we sold almost no 7800X CPUs, and we sell even fewer now. In fact, I believe the 7800X is really close to being dropped from our product line entirely since no one ever really uses it.

Single GPU, however, is something I would like to include. We tested single GPU in our GPU scaling article, and while the results won't be much different from what we showed (most things in Resolve don't scale across multiple GPUs as well as people think), it would still be good to have. We simply ran out of time to get that included in this article.

Posted on 2017-10-17 19:24:50
Andrew Cheramie

There is a gap in quality productivity related tests for amateurs and semi-pros on lower-end prosumer/enthusiast platforms, which I say as an amateur wanting to break into the business. Thanks to PS research, I have a much better understanding of how things perform on HEDT/workstations than simply gaming benchmarks. It can be surprisingly difficult to find good info.

I understand you don’t have unlimited resources to perform all tests, but Puget could possibly reach a potential market of casual video hobbyists by testing mid-range hardware. In any case, thanks for your work.

Posted on 2017-10-25 03:25:05
Joe S.

+1 to this. was thinking exactly the same, trying to decide between these 2 chairs right now (8700k/ryzen vs 7900x/tr)

Posted on 2017-12-13 21:32:22
Drew McFadyen

Would love to know what config is good if using DaVinci Resolve for heavy editing jobs. 14 is attracting quite a few people moving from Premiere and FCP...

Posted on 2017-10-23 02:32:01
Janis Lionel Huber

Would you say the 7820X is the sweet spot for price / performance?

Posted on 2017-10-24 08:18:47

Lower cost components tend to always have the best results from a price/performance standpoint, so for your dollar the 7820X would be the best of the CPUs we tested. However, I would really try to go up to the 7900X (or higher) if you are working with 60FPS footage or resolutions higher than 4K. In those instances, I think the extra performance you would get from the 7900X would be well worth the cost.

Posted on 2017-10-24 22:19:50

It also depends on how you look at the price / performance. Are you just looking at the cost of the CPU on its own, or the cost of the whole system with different CPUs installed? I've found, over time, that looking at the cost of one component by itself can lead to incorrect conclusions about the best value :)

Posted on 2017-10-24 22:25:34
Janis Lionel Huber

I would keep my case and fans. I already have a GTX 1080, so it would just be motherboard, RAM, and CPU - so it pretty much comes down to the CPU price for me. Since I use Photoshop and Lightroom a lot, the 8 core might be a better choice than the 10 core. Maybe multicore usage will improve a lot over the next 3 years. Then again, in 3 years I might be able to pick up the 12 core used for 400$. One thing I learnt (of course this doesn't apply for big post houses and production firms): don't plan too much into the future. Time and things change fast...

Posted on 2017-10-26 12:15:26

For Lightroom and Photoshop "only", the i7 8700(K) is a better option, and anyway the 1080 is useless.

Posted on 2017-11-21 13:09:30
Keith Gresham

1080 GTX is still in the top 10. Hardly useless especially if you already have one.

Posted on 2018-07-04 05:35:58
Joe S.

since I'm in the same spot right now, what did you decide after all? did you maybe go with the i7-8700k?

Posted on 2017-12-13 21:34:37
Joseph Kalliff Fraude

Would love to know that too !

Posted on 2018-02-14 12:36:13
Austin Travis

What I am waiting to see is benchmarks from people who are content creators who live stream, specifically Twitch and YouTube. When playing a game in 2K/4K on Ultra or High settings while running capture software, plus any additional software (audio, chat servers, Discord, browsers with multiple tabs), it can really push a computer, especially when doing so 8-12 hours a day, possibly 5 days a week, if you're a professional content creator.
But does the recommended dual PC beat a single ultra core PC setup?

Posted on 2017-10-25 03:32:44

The extra cores on the 16 core systems would surely come in handy there, since you could use CPU affinity to force 8 cores to go to each program.

May be cheaper still to do a dual PC set up, say 2 Ryzen 5 1600s, one system only needs a basic GPU and a capture card in that set up compared to say a 1950X. Though the single system is simpler to use naturally.
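A minimal sketch of the CPU-affinity idea from the comment above: restrict a process to one half of the CPUs so another program can be pinned to the rest. This assumes Linux (Python's `os.sched_setaffinity`); which logical CPUs map to which physical cores varies by system.

```python
# A minimal sketch of the CPU-affinity idea above: restrict this process to
# the first half of the CPUs it may currently use, leaving the rest free for
# another program. Linux-only (os.sched_setaffinity); core numbering varies.
import os

def pin_to_first_half():
    cpus = sorted(os.sched_getaffinity(0))      # CPUs this process may use now
    half = set(cpus[: max(1, len(cpus) // 2)])  # keep at least one CPU
    os.sched_setaffinity(0, half)               # restrict this process to them
    return half

pinned = pin_to_first_half()
print(f"now restricted to CPUs: {sorted(pinned)}")
```

On Windows the same idea is exposed through Task Manager's "Set affinity" option or `start /affinity`.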

Posted on 2017-10-25 19:51:30

Did you guys already do a review of DaVinci Resolve as an NLE? Is it up to snuff yet?

Did you cover Sony Vegas or is that next on the list?

How's scrub time actually with these systems? Does the RAID 0 NVMe that Threadripper offers out of the box give it any advantages there in any editing software?

Posted on 2017-10-25 19:40:49

To be honest, we are much more focused on the performance impact of different hardware than we are on the actual usability of the software itself. We are a bunch of hardware nerds, not professional video editors, so while we can use Resolve/Premiere/Photoshop/etc., we probably are not the best resource for reviewing Resolve as an NLE. My impression is that it is much better than it used to be, but whether it can make it as a professional NLE in its current state I can't really say.

Sony Vegas is actually pretty low on our list and might be something we will never get to. We're already pretty much fully booked for our testing, and there is still so much we want to do with the Adobe suite, Resolve, and a number of other applications that we see a bigger demand for from our customers. This might change and it may get back on our testing list at some point, but at the moment we have no plans to do much testing for Vegas.

RAID with NVMe I don't think is necessary for 99.9% of users, at least from a speed standpoint. The performance of drives like the Samsung 960 Pro is already so high (3.5GB/s read!) that we actually haven't seen storage speed be a bottleneck in any of the software we test. Some things like Lidar image processing can benefit from a RAID of NVMe, but for things like Resolve it shouldn't be necessary. To be completely honest, however, testing different storage configurations is still on our "to-do" list for Resolve, so I don't have any hard data backing this up. If you work with RED 8K 3:1 footage or 6K/8K RAW then it may make a difference, but even then a single NVMe drive should be able to keep up.
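As a back-of-the-envelope check on that claim, we can compare a rough REDCODE data rate against a single NVMe drive's sequential read speed. The bit depth and compression ratio below are rough assumptions for illustration, not measured rates:

```python
# Back-of-the-envelope only: even heavily compressed 8K RED footage should
# stay well below a single NVMe drive's sequential read speed. Bit depth and
# compression ratio are rough assumptions for illustration, not measured.

def redcode_mb_per_sec(width, height, fps, bits_per_pixel=16, compression=3):
    """Approximate data rate in MB/s for RAW frames at a compression ratio."""
    raw_bytes_per_frame = width * height * bits_per_pixel / 8
    return raw_bytes_per_frame * fps / compression / 1e6

rate = redcode_mb_per_sec(8192, 3456, fps=24)  # the 8K resolution in this test
nvme_read = 3500  # MB/s sequential read, e.g. a Samsung 960 Pro

print(f"~{rate:.0f} MB/s needed vs ~{nvme_read} MB/s available")
```

Even at a conservative 3:1 compression, the estimated rate is only a fraction of what one drive can sustain, which is why RAID 0 NVMe rarely helps for playback.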

Posted on 2017-10-25 19:57:15

8K is where it'll really matter, yeah. It's why AMD slapped an SSD onto a GPU, and Nvidia recently showed off something kind of similar with the newest fastest SSDs in RAID 0 to do 8K editing, but that's more work to deal with than an SSG from AMD, if it ever even comes out.

https://www.youtube.com/watch?v=2ad-HzVRFbs

You could always, like, freelance some reviews, or ask someone you know for their input. I don't think anyone has done a definitive comparison between all the big editing software; it would be nice to see someone do a comparison between Premiere, Vegas, Final Cut, and Resolve.

Having reviews might also help out your customers, though I'm sure most already know what software they're going to be using.

Posted on 2017-10-25 20:28:25
Ravi Kant

Was the test conducted at the base clock speed of the CPUs, or were they also tested overclocked?

Posted on 2017-10-27 22:30:32

Our testing is almost always done at stock speeds, and in the few cases where we do an overclock comparison we make sure it is clearly marked. In this case, we are at 100% stock speeds and have verified the CPUs are running at Intel/AMD's official specifications.

Posted on 2017-10-27 22:49:59

Hey guys, thanks for your tests! One question: which timeline resolution did you use? HD? 4K?

Posted on 2017-11-09 08:46:20
Josue Almendares

Great article. I was tired of seeing reviews for these CPUs based on gaming only.

Posted on 2017-11-22 14:17:22

Thanks for the hard work!

Posted on 2017-11-28 04:22:56
Chad Lancaster

I'd love to see a test between the latest i9 and dual CPU setups in Resolve. Also a REDCINE-X CPU and GPU test.

Posted on 2017-11-29 18:23:56

Dual CPU testing is coming "soon", but I'm not sure when. The new Intel Xeon SP launch was pretty messy and motherboards that are usable in a workstation (ones that have more than a couple USB ports and don't require rackmount-style cooling) just recently became available. We have to get through our qualification process first then I'll be able to do some testing. Honestly, however, I don't expect dual Xeon to be very attractive. At moderate core counts, you can get just as many cores at roughly the same frequency from a Core i9 CPU (at a lower price to boot) while avoiding all the bottlenecks and issues associated with dual physical CPUs. Last time we did some testing with higher core counts (24+), we didn't see much if any performance gain in Resolve. Things may be different with Resolve 14, however, so we are planning on re-doing that testing to make sure nothing has changed.

For REDCINE-X, to be honest there is little chance of us testing that anytime soon. At the moment, we are pretty stretched thin just covering all the software packages that are our current focus (https://www.pugetsystems.co.... We have to focus first and foremost on the software packages the majority of our customers are using, so until we see an uptick in people requesting workstations for REDCINE-X, we probably won't be able to justify the time it would take to do that testing. I wish we could test it, but the unfortunate reality is that we only have so much time in the day to do our testing.

Posted on 2017-11-29 18:42:05
Chad Lancaster

Yeah, I was looking at the new Xeons and it's confusing to say the least. Totally understand about time resources when it comes to testing... it's not easy. If you were closer I'd offer to help. I would say though that customers buying for R3D workflows in Premiere and Resolve do use REDCINE-X a lot, and while people are not usually buying specifically for REDCINE-X, it would be good stats to know. Keep up the good work, we all appreciate the articles!

Posted on 2017-11-29 18:59:11

From your testing which would you recommend? 7940X or 7920X? I plan on overclocking as well. Working with footage from the new Sony A7III, and a Red Cam I rent from time to time.

Posted on 2018-03-13 00:48:28

7940X easily - the 7920X actually doesn't have any performance advantages over the 7940X. Normally a CPU with a lower core count will have a higher operating frequency (including Turbo speeds), but the 7920X is an oddball since it actually has fewer cores and a lower frequency than the 7940X. The only reason to use the 7920X would be the slightly lower price and the slightly lower wattage (if you for some reason have a situation that requires less power/heat).

Posted on 2018-03-13 17:19:15
Michael Faber

Would love to see the 8700k tested, or in the future, any best rated cpu for after effects. For someone like myself who uses both Resolve and AE on the same system, I want to find the happy medium and best all around option for both.

Posted on 2018-05-04 12:40:51

That's a good point. We see a lot more customers using a Premiere/Resolve mix rather than AE/Resolve, but you are definitely right that we should toss a Core i7 8700K platform into our next round of testing. To give you a rough idea, though, it should come in relatively close to the Core i7 7820X - likely a bit better for 4K footage and a bit worse for 6K/8K footage.

Posted on 2018-05-07 17:44:53

Please help! I have a Threadripper build - 1950X and GTX 1080 Ti - and can't render a single thing out of Premiere. I have tried H.264 (via AfterCodecs) and DNxHD - all of these are taking 15 hours or so, and I haven't been past 2 percent! How do I optimise this system for performance?

Posted on 2018-05-28 13:46:57

What would be interesting is seeing how well the different software packages utilize CPU core counts - what is the utilization % of, say, Premiere vs Resolve? Naturally this test is not so important in the real world, but it's interesting to know.

Posted on 2018-08-18 15:55:50

Resolve 15 is now out of beta and claims more "performance improvements". Resolve is a "new" NLE compared to the decades of development behind Premiere and the others that pioneered non-linear editing. CPU clock speeds hit a wall many years ago, around the 2nd gen i7s at 4-5GHz, and won't improve much until a silicon replacement is found - and if one is found, it could take 5-10 years to reach the market. Puget's comprehensive benchmarks illustrate the cores-vs-clock-speed tradeoff in this and other Premiere benchmarks: the 6 core i7-8700K outperformed the 8 core Ryzens, and the 14 core i9 at 4X the price offered only a small percentage gain over it.

The latest 32 core Ryzen, hyped as "the most powerful PC CPU ever", should be good in a server environment but might be worse than a 4 core CPU for a single app. The market is misled by benchmark scores designed to max out all cores, but most real-world apps and games are limited by parallelism (the ability to split work into multiple tasks that run at the same time across cores/threads). I've been testing Resolve since ver 12, and back then it seemed slow compared to Premiere, but they have added tons of new features and improved performance since. BlackMagic (BMD) recommended a dual Xeon, and Resolve can support 4 GPU cards, which results in a very expensive workstation for all but the pros and studios. Yet I know a Resolve user editing 6K RED on an old 4 core i7 with no problems. To further complicate matters, most NLEs have improved performance by offloading the once CPU-intensive effects to today's powerful, but not cheap, discrete GPU cards, so depending on your workflow, the choice of GPU or multiple GPUs can also impact benchmarks.

To answer AP, above, I would not worry too much about whether Premiere or Resolve is best at using more cores. You can throw a lot of hardware at both ($20K+ workstation), but it may be for diminishing returns depending on workflow and number of users (workgroup). Smooth timeline editing and playback become more hardware-demanding as you add layers and effects. Codecs also affect performance: 4K H.264 (AVC) is highly compressed and hard to decode for editing, and 4K H.265 (HEVC) can require 10X more compute power to decode/encode than AVC. HEVC can produce better quality than AVC at the same bitrate - or the same quality at roughly half the file size - and is thus gaining popularity for cameras and delivery, including broadcast TV. More cores (multi-threading) do not help much here, though some NLEs have added GPU acceleration to assist. AVC and HEVC are simply not edit-friendly.

However, newer NLEs including Resolve can edit on a consumer desktop or a decent notebook by using proxy files, or by converting source files to less compressed, edit-friendly codecs like ProRes, DNx, Cineform, and GV HQX with no loss in quality - but it takes a bit more time and space to generate these files before editing.

Render/export can be broken up into chunks and so benefits from more CPU cores - but if I spend days editing a project, my concern is edit performance, so I find it difficult to justify 4X the CPU cost to reduce a 2 hr render to 1.5 hrs.

So my advice is to first choose the software you want. Premiere is the most popular, but Edius, Vegas, Avid, and other pro NLEs are still used by many. Resolve is now on par with the best, and one advantage is that you can get almost all features in the free version, while the full-featured Studio version has been reduced from $999 to $299. They have also integrated Fusion (the equivalent of Adobe's After Effects) into the editor.
Next, determine your workflow - is it HD, 4K, 6K, 8K, etc.? 8bit, 10bit, HDR, RAW? What codecs/formats for edit and delivery?
Single user or workgroup?
Puget will size and deliver a quality, cost-effective custom hardware solution.

In my case, my planned upgrade is for 4K (AVC, HEVC, and HDR), and I'm leaning toward Resolve. I prefer Intel, so I'm waiting to see the new Intel 8 core i9-9900K due in October, rumoured to have a 5GHz clock speed out of the box at around $100 more than the 6 core i7-8700K. Hoping Puget can include it in their benchmarks before year end for Resolve 15 and Premiere. It would also be nice to see if multiple GPUs (2 x 1080 Tis) make a significant performance difference.

Just my thoughts - comments welcome :)

Posted on 2018-08-19 11:54:21

We used to do more reporting of things like CPU load both in total and per core, but honestly it was mostly just noise. Very rarely did it actually tell any sort of story or have any relevance to performance. For example, you may see 100% load on all 18 cores of the i9 7980XE, but an i7 8700K 6-core will still be faster. This is actually pretty common in something like After Effects.

We do look at CPU load sometimes, but I really only trust it to tell me something useful if I see only one or two cores being loaded. That tells me that whatever is being run is definitely single or lightly threaded so frequency is going to be way more important than core count. If the majority of the cores are being loaded, however, really the only way to know what kind of CPU will perform better is simply to test multiple CPUs and see how much faster the higher core count CPUs actually are in real life.
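The "one or two busy cores" rule of thumb described above can be sketched in a few lines. This is only an illustrative example, not Puget's actual tooling; the function name, threshold, and sample values are all hypothetical, and in practice the per-core percentages could come from a library call such as psutil's `cpu_percent(percpu=True)`.

```python
def threading_profile(per_core_loads, busy_threshold=80.0):
    """Classify a workload from a snapshot of per-core CPU loads (percent).

    Only one or two busy cores suggests a lightly threaded workload,
    where frequency matters more than core count. If most cores are
    busy, the snapshot alone can't rank CPUs - only real testing can.
    """
    busy = sum(1 for load in per_core_loads if load >= busy_threshold)
    if busy <= 2:
        return "lightly threaded"
    if busy >= 0.75 * len(per_core_loads):
        return "heavily threaded"
    return "mixed"

# Hypothetical 6-core snapshots (percent load per core):
print(threading_profile([98, 95, 4, 2, 3, 1]))      # two busy cores
print(threading_profile([97, 92, 88, 91, 95, 90]))  # all cores busy
```

Note that, as the comment thread points out, even the "heavily threaded" result doesn't guarantee a higher core count wins - a fully loaded 18-core can still lose to a faster 6-core.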

We even used to use things like Amdahl's Law to try to estimate different CPUs' performance by varying the number of available cores (https://www.pugetsystems.co..., but again it never ended up being more than decently accurate.
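For readers unfamiliar with the formula, the kind of Amdahl's Law estimate mentioned above can be sketched as follows. The 90% parallel fraction is an arbitrary example value, not something measured from Resolve or Premiere - which is exactly why, as noted, such estimates are only decently accurate at best.

```python
def amdahl_speedup(parallel_fraction, cores):
    """Amdahl's Law: ideal speedup on `cores` cores when only
    `parallel_fraction` of the work can run in parallel."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# Even a 90%-parallel workload can never exceed 10x, no matter
# how many cores you add (the serial 10% bounds the speedup):
for n in (6, 14, 18):
    print(f"{n} cores: {amdahl_speedup(0.9, n):.2f}x")
```

This is one way to see why the 18-core i9 7980XE shows diminishing returns over a fast 6-core in workloads with a meaningful serial fraction.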

Posted on 2018-08-20 20:24:08