Read this article at https://www.pugetsystems.com/guides/1616

DaVinci Resolve Studio CPU performance: AMD Ryzen 9 3950X

Written on November 14, 2019 by Matt Bach

Introduction

While DaVinci Resolve is best known for its ability to leverage the power of your GPU to increase performance, the CPU is often just as important - especially if you are not heavily utilizing noise reduction or OpenFX. The AMD 3rd Gen Ryzen CPUs that were launched back in July of 2019 are already a great choice for Resolve, but now, AMD is launching one more 3rd generation Ryzen CPU - the AMD Ryzen 9 3950X.

This processor features a staggering 16 CPU cores, which is really starting to blur the line between "consumer" and "HEDT" (High End Desktop) processors. However, the increase in core count comes with a fairly hefty MSRP of $749. For comparison, both the AMD Ryzen 9 3900X 12 Core and Intel Core i9 9900K 8 Core have an MSRP of $499. If you want more information on the specs of this new processor, we recommend checking out our New CPU Announcement: AMD Ryzen 9 3950X post.

AMD Ryzen 9 3950X CPU for DaVinci Resolve

In this article, we want to see whether the increase in core count (and price) is worth it for DaVinci Resolve. However, since Intel is launching their new Core X-10000 series processors and AMD is launching their new 3rd Gen Threadripper processors in the near future, we are only going to compare the 3950X to a handful of Intel and AMD CPUs. If you want to see how it stacks up against a wider range of Intel and AMD processors, check back in the coming weeks for articles that will include the AMD Ryzen 3rd Gen, AMD Threadripper 3rd Gen, Intel Core 9th Gen, and Intel Core X-10000 series processors in a number of applications.

If you would like to skip over our test setup and benchmark sections, feel free to jump right to the Conclusion.

Looking for a DaVinci Resolve Workstation?

Puget Systems offers a range of workstations that are tailor-made for your unique workflow. Our goal is to provide the most effective and reliable system possible so you can concentrate on your work and not worry about your computer.

Configure a System!

Test Setup & Methodology

Listed below are the specifications of the systems we will be using for our testing:

AMD Ryzen Test Platform
CPU AMD Ryzen 9 3950X
AMD Ryzen 9 3900X
CPU Cooler Noctua NH-U12S
Motherboard Gigabyte X570 AORUS ULTRA
RAM 4x DDR4-2933 16GB (64GB total)
Intel 9th Gen Test Platform
CPU Intel Core i9 9900K
CPU Cooler Noctua NH-U12S
Motherboard Gigabyte Z390 Designare
RAM 4x DDR4-2666 16GB (64GB total)
AMD Threadripper Test Platform
CPU AMD Threadripper 2950X
CPU Cooler Corsair Hydro Series H80i v2
Motherboard Gigabyte X399 AORUS Xtreme
RAM 4x DDR4-2666 16GB (64GB total)
Intel X-Series Test Platform
CPU Intel Core i9 9960X
CPU Cooler Noctua NH-U12DX i4
Motherboard Gigabyte X299 Designare EX
RAM 4x DDR4-2666 16GB (64GB total)
Shared Hardware/Software
Video Card NVIDIA Titan RTX 24GB
Hard Drive Samsung 960 Pro 1TB
Software Windows 10 Pro 64-bit (version 1903)
DaVinci Resolve Studio (version 16.1.1.5)
PugetBench V0.8 BETA for DaVinci Resolve Studio

*All the latest drivers, OS updates, BIOS, and firmware applied as of November 11th, 2019

A few notes on the hardware and software used for our testing: First, we have decided to standardize on DDR4-2933 memory for the Ryzen platform. The officially supported RAM speed varies from DDR4-2666 to DDR4-3200 depending on how many sticks you are using and whether they are dual or single rank, and DDR4-2933 is right in the middle as well as being the fastest supported speed if you want to use four sticks of RAM. In fact, this is the speed we are planning on using in our Ryzen workstations once JEDEC DDR4-2933 16GB sticks are available.

The second thing to note is that we are using an unreleased version of our DaVinci Resolve Benchmark. This upcoming version only includes usability and stability improvements, however, so the scores are directly applicable to the version that is currently available for download.

Benchmark Results

While our benchmark presents various scores based on the performance of each test, we also wanted to provide the individual results. If there is a specific codec or type of grade that you typically work with, examining the raw results for that task is going to be much more applicable than the total scores.

Feel free to skip to the next section for our analysis of these results if you would rather get a wider view of how each CPU performs in DaVinci Resolve Studio.

DaVinci Resolve Benchmark Analysis

Our DaVinci Resolve benchmark looks at performance for 4K and 8K media with a range of different types of grades, along with a few tests dedicated to Fusion. These are combined into scores for 4K, 8K, and Fusion that give you an overall snapshot of how each CPU might perform in DaVinci Resolve Studio.

Looking at these overall scores, the AMD Ryzen 9 3950X does pretty well, coming in at about 7% faster than the 3900X and almost 20% faster than the Core i9 9900K. Compared to the Core i9 9960X, the 3950X is on par when it comes to 4K media, but takes a 6% lead for 8K media.

However, many of our tests are very GPU intensive and are commonly bottlenecked by the performance of the GPU rather than the CPU. Because of this, our OpenFX and temporal noise reduction tests really don't show the full potential of each of these processors, since the CPU is a smaller part of the overall performance picture. If we want to look at the largest possible difference between each of these CPUs in DaVinci Resolve, a good place to start is relatively basic grades of 8K footage:

For this set of tests, the AMD Ryzen 9 3950X takes the top spot among all the CPUs we tested - beating even the Intel Core i9 9960X by about 7%. It is also 20% faster than the Ryzen 9 3900X, 43% faster than the Threadripper 2950X, and a huge 52% faster than the Core i9 9900K. No matter how you slice it, this is a very impressive showing from the Ryzen 9 3950X.

However, the one thing we do want to note here is that because of how difficult it is to process 8K footage, the upcoming Intel X-10000 series or the AMD Threadripper 3rd Gen processors may end up being a better option for this type of workload. The performance of the 3950X is incredibly impressive, but just be aware that there may be a better (although likely more expensive) option available in the near future if you work with 8K footage.

Is the AMD Ryzen 9 3950X good for DaVinci Resolve?

Overall, the AMD Ryzen 9 3950X is a very solid choice for DaVinci Resolve. While more complex grades in Resolve often depend more on the power of your GPU than your CPU, for relatively basic grading and editing the 3950X can provide up to a 10-20% increase in performance over the Ryzen 9 3900X or a 40-50% increase in performance over the Core i9 9900K. In exchange for just a $250 higher price tag, that is a pretty good return on investment!

Keep in mind that the benchmark results in this article are strictly for DaVinci Resolve. If your workflow includes other software packages, you need to consider how the processor will perform in all those applications. Currently, we have articles for Photoshop, Lightroom Classic, Premiere Pro, After Effects, and a number of other applications.

In addition, both Intel and AMD have new processors coming out in the near future which may change the price to performance picture. We will be publishing more articles as these new processors launch, so be sure to keep a close eye on our list of Hardware Articles in the coming weeks.


Tags: Intel 9th Gen, Intel X-series, Intel vs AMD, AMD Ryzen 3rd Gen, AMD Threadripper 2nd Gen, Ryzen 9 3950X, DaVinci Resolve
Misha Engel

Too bad you're sticking to the 59.94 fps source material. Most indies and studios tend to shoot 24 (23.976), 25, and 30 (29.97) fps.
Only for sports (smoothness) and soap operas (a specific crappy look) does it make sense to shoot 60 fps.
For 8K .R3D material, it's important what the workstation can do with 5:1 compression at 24 fps; 22:1 is even doable on a laptop.
I bet Linus will give you whatever kind of .R3D you want.
BRAW is also still missing; BMD will give you any kind of BRAW file you want, if it isn't already available for download on their website.

Your Resolve tests up to DR15 were pretty good.

Posted on 2019-11-15 13:17:57
DSKEN

While that is true, the point of the test is to give a baseline and allow you to extrapolate to specific situations if needed. Aesthetics is not a consideration (it shouldn't be). Lowering the frame rate means the workload, and thus the resources needed, is lower. That makes it harder to extrapolate and compare. For example, if I can get x performance at 60 fps, it is reasonable that I will get 2x at 30 fps. But if you reverse it, your prediction can be off, because what happens if you go over VRAM or RAM demands? You may not achieve the 2.5x performance that you assume.

 Other than satisfying an aesthetic need, what is the scientific advantage going from 60 to 24 fps?
I was actually surprised they did 60. It is a smart move.

More codecs would be nice, though we can't expect every codec under the sun. I mean, I wanted DNxHR and uncompressed.

If you keep in mind the point of the benchmark is to allow the results to be applied across the platform, GPU, CPU and time then it makes perfect sense the way they are doing it.

Posted on 2019-11-15 16:03:51

DSKEN pretty much has it right. It basically comes down to the fact that there are two areas we want to focus on for NLEs like Resolve: live playback and exporting performance. We unfortunately had to stop testing playback in Resolve due to technical issues, but once we get that worked out, using 59.94FPS media is actually incredibly useful for people who use 24 or 29.97FPS media because it lets you see how much "headroom" there is. If one CPU gives 24FPS and another gives 30FPS, I would still recommend the second CPU for people using 24FPS media because you may have other apps running in the background, apply a couple of light effects, etc. If all our testing capped at 24FPS, you would lose that information about whether the system was just barely able to achieve that, or has a bit extra to give.
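The headroom idea can be sketched with a toy calculation. The numbers below are the hypothetical 30 vs. 24 FPS example from the reply above, not benchmark results:

```python
def headroom(measured_fps: float, target_fps: float) -> float:
    """Spare capacity beyond the timeline's real-time playback target.

    1.0 means the system exactly keeps up; anything above 1.0 is
    room left over for background apps or a few light effects.
    """
    return measured_fps / target_fps

# A CPU that sustains 30 FPS on a 23.976 FPS timeline has ~25% headroom:
print(round(headroom(30.0, 23.976), 2))  # ~1.25
```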

For exporting, the FPS of the media really doesn't impact things at all - it is the total number of frames that matters. Exporting isn't capped to the FPS of the media, so whether we export 100 frames of 24FPS media, or 100 frames of 60FPS media, it doesn't really matter. Since we record the results in total frames exported per second, the FPS of the media isn't a factor.
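In other words, export performance reduces to total frames rendered divided by wall-clock time; the media's playback FPS never enters the calculation. A minimal sketch with made-up numbers:

```python
def export_fps(total_frames: int, seconds: float) -> float:
    """Export throughput: frames rendered per wall-clock second."""
    return total_frames / seconds

# 1439 frames exported in 60 seconds is ~24 frames/second, whether the
# source media was shot at 24, 30, or 60 FPS (hypothetical figures):
print(export_fps(1439, 60.0))
```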

The only place this doesn't quite hold true is for H.264/H.265, since it uses a fixed bitrate that is per second. Technically, a 150Mbps 60FPS H.264 file should be equivalent(ish) to a 75Mbps 30FPS H.264 file since you have twice the frames per second, but in my mind that is a fairly minor problem that is more than compensated for by the amount of information we gain by using 60FPS media. At some point, we may be able to test both 29.97 and 59.94FPS media (like we do in our Premiere Pro tests), but the full DaVinci Resolve test already takes several hours right now, so we need to find a way to trim that down a bit before we do that or add additional codecs to the test. I have some ideas for that, but with all the product launches and application updates going on right now, I just can't invest the time necessary to do so at the moment.
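The bitrate equivalence mentioned above can be checked by normalizing the per-second bitrate to a per-frame budget (using the figures quoted in the reply):

```python
def bits_per_frame(bitrate_mbps: float, fps: float) -> float:
    """Average encoded size of one frame, in megabits."""
    return bitrate_mbps / fps

# 150 Mbps at 60 FPS and 75 Mbps at 30 FPS both budget 2.5 Mb per frame,
# which is why the two files are roughly equivalent encoding workloads:
print(bits_per_frame(150, 60))  # 2.5
print(bits_per_frame(75, 30))   # 2.5
```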

Posted on 2019-11-15 18:09:09
Dragon

I agree generally, but encoders are more complex than they look at first glimpse. At 30 fps, the images move more from frame to frame than they do at 60 fps (for the same rate of motion). That larger movement makes it harder for the encoder to accurately predict the next frame and thus tends to generate more correction data (for the same still frame image quality). Lower frame rates are often shot at slower shutter speeds (to reduce the visibility of judder), which helps to mitigate the difference (with an attendant loss of image quality), but if the target viewing device is a modern TV with motion interpolation, then it is better practice to shoot with a relatively short shutter in all cases, since the TV is going to present at least 60 motion interpolated fps to the viewer. All this is the underlying reason why 60 fps can be coded at less than twice the data rate of 30 fps and also why there is no material drop in data rate going from 30 to 24 fps (24 is hard to code due to the large frame to frame change). Bottom line, I agree with your analysis for purposes of memory capacity, etc, but exporting 100 frames shot at 24 fps is NOT the same as exporting 100 frames shot at 60 fps. If the encoder is well designed for efficiency, the 24 fps frames will take longer (at least if there is significant motion in the sequence).

Posted on 2019-11-17 17:18:34
DSKEN

Are you referring to temporal compression? Does this apply to codecs and processes that are intraframe and not interframe?

I have never run into a production (scripted or non-scripted) that considered the high scan rate feature of modern TVs and changed their shutter angle.

Posted on 2019-11-18 01:04:21
Dragon

No, not temporal compression or intraframe (which can be viewed as a series of sophisticated JPEGs) - just the details of how interframe MPEG codecs work. Hard to explain in less than 10 or 15 pages, but I will try to abbreviate. The encoder has a circuit that predicts what the next frame will look like based on motion vectors for small blocks of the image. It then looks at the image that a standard decoder would produce from that prediction and compares it to the original image. It then sends as much of the difference data as it can based on the assigned data rate. The more motion there is from frame to frame, the harder the prediction becomes, and global changes like cuts and zooms tend to break the encoding process unless there is enough data space to either throw in a lot of difference data or, alternatively, a sequence of I-frames (i.e. intra-frame coding).
As to productions taking advantage of modern TVs, don't hold your breath. The cinema industry got very married to 24 fps (initially out of necessity due to the limitations of film). Many of the specialized jobs in the industry are frankly there only because of the challenges presented by such a low frame rate. For instance, a great deal of the skill of a cinematographer is involved with managing pan and zoom rates such that they don't produce annoying Judder in the end result. The other trick with 24 fps was that it could be speeded up to 25 and shown on European TV for added revenue. If all productions were shot and displayed at 60 fps (or higher), many of the special skills that have been developed over the decades would not be required, so there is understandable resistance to that change. To reinforce their position (and save their jobs), many in that community tend to cast a very negative light on frame interpolating TV sets (they use the same type of algorithms as the MPEG codec to predict the interpolated frames). You hear the term "soap opera look", etc. used to diminish the value of frame interpolation. You will also hear that 60 fps involves the viewer too much and thus they can't stand apart and see that a story is being told. Note that under certain conditions, frame interpolators will fail. A good example is two frames of a ball on either side of a pole and the interpolator is tasked with creating the frame where the ball is exactly in line with the pole. It has no way of knowing whether the ball should be in front of or behind the pole, but the human eye will typically spot the error from depth estimates based on object size.
Try shooting some 24 FPS video with a nominal 180 degree shutter (1/48th of a second) and reshoot the same sequence with 60 degree shutter (1/144th of a second) and feed the two sequences to a frame interpolating TV. You will notice a real difference in motion blur, particularly if the footage and the TV are 4k. Note that for cameras without shutter angle control, fairly rough approximations to shutter speed are fine.
I hope that sort of answers your question.

Posted on 2019-11-18 21:11:57
DSKEN

To be honest, it doesn't. It might be my fault for not asking clearly.
If it is not temporal or interframe encoding, then what are you referring to when you say encoding samples previous frames? You are talking about "movement", which can only be measured as a difference over time.

DCT encoding like ProRes and DNxHR as far as I know are strict intraframe.

DNxHR's data rate for a given quality setting is the same for every frame no matter the FPS. Any visual quality difference from frame to frame has more to do with the complexity of the image (motion or not), independent of the previous frame.

And regarding motion and shutter speed, I was just asking when is this an issue? I understand that shutter speed affects motion blur which in turns affects image complexity for encoding, but how does this affect post and these benchmarks? Not like you can change shutter angle in post or ask the director to reshoot something to get better encoding performance.

I'm confused by your argument. It seems there is the technical production factors you bring in about shutter speed. But I'm not understanding how it connects to Resolve benchmarks at 60 fps.

edited: interframe vs intraframe

Posted on 2019-11-18 22:15:08
Dragon

I think you have the prefixes inter and intra confused. Inter means between. Intra means within, so interframe coding is between or across multiple frames and intraframe coding is within a single frame as in JPEG. And yes, I am referring to temporal coding, but in your initial question, you asked about temporal compression which normally refers to time compression (as in speeding up playback and that is how I interpreted your question) but it can also refer to interframe coding and in that case, the answer is yes, I am talking about interframe coding. My point with respect to post is that 24 fps is harder to interframe code than 60 fps (on a frame by frame basis) due to the larger motion displacement between frames. The comment on shutter angle was an aside, but hopefully useful information, and yes, you have to choose your shutter angle at capture. It is technically possible to generate interframe blur in post, but you can't shorten the shutter to make the picture sharper anywhere but the camera.

Posted on 2019-11-19 01:51:04
DSKEN

apologies, you are correct. I swapped the terminology. edited and corrected.

Now I understand better what you are saying. While I agree in concept, in practice the difference can be moot. For example, a lot of 29.97 and 60 fps media are often documentary and non scripted shows. This means a lot of talking heads and talent on camera, which means very little motion difference from frame to frame.

Posted on 2019-11-19 15:20:46
Dragon

My point was more what could be than what is. ABC, ESPN, and Fox made the decision to use 720p, with lower spatial resolution and better temporal resolution, when they went to HDTV in the interest of better rendition of sports, but virtually any scene with significant motion is better rendered at a higher frame rate. Upping the display frame rate with motion interpolation really does make 24p easier to watch, but you will still hear all the previously mentioned put-downs (and many more) from the Hollywood community and many of their avid followers. Note that with higher resolution (and bigger and brighter screens), motion judder becomes noticeable (and annoying) at much lower motion rates than it is at lower resolution, unless you stretch the shutter out so all motion is just a blur. This is to say that 8K really, really needs a higher frame rate than 24.

Posted on 2019-11-20 00:39:19
Misha Engel

8K .R3D 60 fps 22:1 plays back on a beefy laptop in real time, whereas 8K .R3D 24 fps 5:1 cripples an i9 9980XE + Titan RTX trying to play back in real time.
When you go over VRAM or RAM demands, your system is not capable of processing that codec.
Indies shoot all-intra whenever possible when using AVC, because it's easier to edit; they don't shoot long-GOP unless they are forced to.
99% of commercials are shot at 24-30 fps.
95% of weddings are shot at 24-30 fps.
99% of movies are shot at 24 fps.

60 fps only makes sense for sports and soap operas, and they don't shoot ProRes 4444.
When the target market is youtube/hipster/influencer shooters with too much money, this is a perfect test.

Posted on 2019-11-15 22:18:15
DSKEN

"8K .R3D 60 fps 22:1 plays back on a beefy laptop in real time, whereas 8K .R3D 24 fps 5:1 cripples an i9 9980XE + Titan RTX trying to play back in real time."

And?

"When you go over VRAM or RAM demands, your system is not capable of processing that codec."

Exactly. So it is better to start at a higher fps, which allows you to extrapolate better when trying to predict performance at a lower fps.

"99% of commercials are shot at 24-30 fps.
95% of weddings are shot at 24-30 fps.
99% of movies are shot at 24 fps."

Again, you are missing the point. It is not about achieving a particular frame rate (24 in your case) but about using the information when workflow and hardware are different. For example, "How much more real-time performance can I achieve if I move my workflow from PR422 to PR444?" With the 60fps test, you can make a reasonable judgment. If the difference in real-time performance between PR422 and PR444 is only a 20% increase and you need 100%, then you can say "OK, I will likely need better hardware". And this kind of reasoning is not limited to one particular frame rate.

Giving a benchmark that is CAPPED at 24 fps makes it less useful even for workflows that are using 24fps media. That is like limiting your gaming benchmark to 60fps even though the overwhelming majority of games are played on TVs and monitors at 60/50 with VSync.

"When the target market is youtube/hipster/influencer shooters with too much money, this is a perfect test."

I manage hundreds of editing systems, including a color grading division that bills $$$ by the hour. 99% of incoming media and required deliverables are 23/24 fps. Projects air on major American networks (NBC, ABC, etc.), cable channels (HBO, Viacom, Disney, Discovery), streaming services (Disney+, Netflix, Hulu), and film festivals (Sundance, Berlin, TriBeCa, etc.).
I'm positive that these kinds of benchmarks are more useful at 60 fps than 24 fps. You just need to understand how to read and use them. This is more than "X is better than Y" kind of data.

Posted on 2019-11-16 16:11:06
grokker

So you've been saying for what amounts to forever that anything beyond 8 cores really doesn't matter for Resolve, and now we all see it actually does. It's hard to trust you guys.

Also, your recommendation to wait for upcoming hardware is plain hilarious.

Thank god you post your numbers for everyone to make up their own mind.

Posted on 2019-11-15 16:52:57

I'm not sure where you are getting that we have always said that anything beyond 8 cores doesn't matter for Resolve. The closest I can find is that our hardware recommendation page for Resolve states that "there is a sharp drop in performance gains after about 14 cores" https://www.pugetsystems.co... . Our last set of CPU performance testing also very clearly shows a performance gain when using CPUs with more than 8 cores: https://www.pugetsystems.co...

Some of our older testing with Resolve 15 didn't show much of a performance difference between CPUs, but we have since updated our testing to include more projects that aren't as heavy on things like OpenFX or noise reduction so we can get a more accurate look at how the CPU can affect performance in those situations. All of those posts have a big warning at the top saying "Always look at the date when you read a hardware article. Some of the content in this article is most likely out of date...", however, so hopefully people aren't taking that information as current or up to date. If there is anywhere on our website that we have old information that does NOT include that kind of a warning, let me know and I'll get that corrected ASAP!

I'm curious why you think our recommendation to wait for the new X-10000 and Threadripper CPUs is "plain hilarious" as well. I normally don't tell people to wait since there is always something new coming, but those processors are literally weeks away and could potentially shake things up quite a bit. The 3950X pretty much matches the 9960X for 4K work, and the upcoming Intel Core X-10940X is close to the same price as the 3950X. I don't think it is all that unreasonable to want to wait to see how those two CPUs end up performing against each other. Not to mention the new AMD Threadripper CPUs - they could be absolutely amazing and well worth the upgrade for many people.

And yes, we will always post the raw benchmark results for people to examine. Overall scores are well and good from a general performance perspective, but anytime we help a client configure a workstation for Resolve, we want to know what codecs they work with, whether they use noise reduction often, etc. It is always better to drill into the specific results that align as closely as possible with what you actually do than to go by generalized total scores.

Posted on 2019-11-15 19:08:02
grokker

I stand corrected on the issue of more cores for Resolve. I'm eager to see results with 32-core Threadrippers and the upcoming Intel processors.

Posted on 2019-11-17 15:43:52
Daniel Elfe

First of all, thank you for your tests. You really do an amazing job, and you give it away for free. It's lucky you guys exist.
Still, I feel a bit lost and really need your help.

I’m preparing to build myself a very good workstation and I can’t choose between: AMD Ryzen 9-3900X and AMD Ryzen 9-3950x

I specialize in After Effects (for which the 3900X is the best), but now I do a lot of color grading and I'm about to start working in DaVinci Resolve (for which the 3950X is the best). So, the 3900X has a higher frequency but wasn't tested in DaVinci Resolve; the 3950X has more cores but wasn't tested in After Effects; so it's a really difficult choice to make.
Do you have ANY advice for me?

Another question if you don’t mind:
How is DDR4 2933MHz better than 3200MHz? Because I was about to buy 4 G.Skill sticks for 128GB at 3200MHz (since both processors can use it without OC), and now I'm not sure.

Thanks again.

Posted on 2019-11-16 12:12:21
DSKEN

My advice...

1. Read the chart and see how core count affects a particular workflow. For example, the 3D-based FX they use are almost all GPU-bound. Even between the 8-core Intel and the 16-core AMD, the difference is negligible. Thus, it is reasonable to say that the CPU has minimal impact in those kinds of tasks. If you move to a CPU-bound task like optimizing media (it is basically a transcode), a 33% increase in core count achieves a 15% increase in performance.

2. RAM MHz generally has little effect in media processing. You can safely ignore it.
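As an aside (not from the article or the comment above), the "33% more cores, 15% more performance" observation can be sanity-checked against Amdahl's law. This sketch solves for the parallel fraction of the workload that would explain a 15% gain going from 12 to 16 cores; the inputs are illustrative, not measured:

```python
def speedup(p: float, n: int) -> float:
    """Amdahl's law: speedup on n cores when fraction p of the work
    is perfectly parallel and the rest is serial."""
    return 1.0 / ((1.0 - p) + p / n)

def parallel_fraction(cores_a: int, cores_b: int, observed_gain: float) -> float:
    """Bisect for the parallel fraction p that would produce the
    observed speedup when going from cores_a to cores_b."""
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = (lo + hi) / 2
        # The gain ratio is monotonically increasing in p.
        if speedup(mid, cores_b) / speedup(mid, cores_a) < observed_gain:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# 12 -> 16 cores yielding a 15% gain implies the task is ~93% parallel:
print(round(parallel_fraction(12, 16, 1.15), 2))  # ~0.93
```

This kind of estimate is rough, but it explains why transcoding (highly parallel) rewards the 3950X's extra cores while GPU-bound effects do not.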

Posted on 2019-11-16 16:20:43
Misha Engel

Some programs like a lot of GHz and some like a lot of cores; overall, the sweet spot will be the 3950X because it can do both. With a good cooler (a 280mm AIO), you can get decent clock speeds right out of the box. AMD Zen, Zen+, and Zen 2 like fast memory; for us the sweet spot is DDR4-3200 CL16 32GB modules, because we want 128GB (Fusion loves a lot of memory) while not breaking the bank. For the motherboard we are going for the ASRock X570 Creator because it has 2x TB3 ports and built-in 10G. The boot/system/programs drive will be a SATA SSD, and for scratch we go for 2x 2TB PCIe 4.0 NVMe drives in RAID 0. We use the Radeon VII as the GPU (already available). When you also want local fast/big/secure storage, you can use 4 big spinning drives in RAID 10.
The above system is able to edit all current codecs at the highest resolution (full-res premium) in real time (incl. 8K .R3D 5:1 24 fps, the most compute-heavy current codec available). We will place the Decklink card in a cheap external GPU box ($300).

Posted on 2019-11-17 16:51:30

Hey Daniel, I think either the 3900X or 3950X will work great for you - it just comes down to budget. If going with the 3900X lets you get more RAM (important for AE), then go that route. If you can afford 128GB of RAM and the 3950X, go for that! You are really only looking at a 6-7% bump in performance from the 3950X in Resolve and After Effects, so whether that is worth it or not is something that really only you can decide.

As for the RAM speed, we always, always recommend sticking with what matches the official specifications. In the case of these CPUs, it is 2933MHz if you are using 4 sticks, or 3200MHz if you are only going to use 2 sticks. Going beyond that definitely increases the instability of the system. Like everything, it is a risk vs reward, and some people will have no problem with 4 sticks of 3200MHz. But I would say using beyond spec memory is one of the leading causes of instability whenever we help out our customers with non-Puget computers.

The one thing you can do if you really want to is go ahead and get those 3200MHz sticks, and just be aware that if Resolve or After Effects start crashing, to go into the BIOS and manually set the frequency to 2933MHz. You can pretty much always turn down the frequency/timings safely, so that gives you at least a path to resolution if you start to have problems.

Posted on 2019-11-19 18:28:00
Daniel Elfe

Thank you, Matt. That's really useful information. I'm pretty sure I'll take the 3950X with 128GB at 3200MHz.
Thank you for what you do. I'll wait for your tests of the next Threadripper. Maybe I'll change my mind ;)

Posted on 2019-11-19 21:36:30