Read this article at https://www.pugetsystems.com/guides/1533

After Effects CPU Roundup: AMD Ryzen 3rd Gen, AMD Threadripper 2, Intel 9th Gen, Intel X-series

Written on July 19, 2019 by Matt Bach


Way back in 2014, After Effects was very efficient at using multiple CPU cores to improve performance, which would have made processors like AMD's new Ryzen series (currently up to 12 cores, with a 16-core model coming soon) absolutely terrific. However, Adobe has trended away from this in recent years, largely due to the addition of GPU acceleration. Newer versions of Ae are certainly still faster than older versions, but today performance is less about having a ton of CPU cores and more about having a CPU with very fast individual cores.

This makes the new Ryzen 3rd generation CPUs very interesting, since not only do they have more cores than the previous generation, AMD has also spent considerable effort improving performance in moderately threaded applications like After Effects. In large part, this is what makes AMD's new processors so exciting. The increase in core count is certainly nice (and may be useful for those using multi-threaded plugins/scripts to improve render performance), but most applications simply are not going to see a benefit from the higher core counts that both Intel and AMD are trending towards. Instead, it is the IPC (instructions per clock) improvements that will be more significant for most users - even if that doesn't show up in the marketing specs.

AMD Ryzen 3rd Gen After Effects Performance

In this article, we will be looking at exactly how well the new Ryzen 3600, 3700X, 3800X, and 3900X perform in After Effects. Since we expect these CPUs to shake up the market quite a bit, we also took this opportunity to do a full CPU roundup. Not only will we include results for a few of the previous generation Ryzen CPUs, but also the latest AMD Threadripper, Intel 9th Gen, and Intel X-series CPUs. And for good measure, we will throw in a 14-core iMac Pro and a current (for the moment) 2013 Mac Pro 12-core as well.

If you would like to skip over our test setup and benchmark sections, feel free to jump right to the Conclusion.

Looking for an After Effects Workstation?

Puget Systems offers a range of powerful and reliable systems that are tailor-made for your unique workflow.

Configure a System!

Labs Consultation Service

Our Labs team is available to provide in-depth hardware recommendations based on your workflow.

Find Out More!

Test Setup & Methodology

Listed below are the specifications of the systems we will be using for our testing:

Shared PC Hardware/Software
  • Video Card: NVIDIA GeForce RTX 2080 Ti 11GB
  • Hard Drive: Samsung 960 Pro 1TB
  • Software: Windows 10 Pro 64-bit (version 1903), After Effects CC 2019 (Ver 16.1.2), Puget Systems Ae Benchmark V0.5 BETA

Mac Test Platforms
  • iMac Pro: 14-core Intel Xeon W, 64GB 2666MHz DDR4 ECC, Radeon Pro Vega 64 16GB
  • Mac Pro (2013): 12-core 2.7GHz, 64GB 1866MHz DDR3 ECC, Dual AMD FirePro D700 6GB, 1TB PCIe-based SSD

*All the latest drivers, OS updates, BIOS, and firmware applied as of July 2nd, 2019

Note that while most of our PC test platforms are using DDR4-2666 memory, we did switch up to DDR4-3000 for the AMD Ryzen platform. AMD CPUs can be more sensitive to RAM speed than Intel CPUs, although in our Does RAM speed affect video editing performance? testing, we found that the new Ryzen CPUs only saw modest performance gains in Creative Cloud applications when going from DDR4-2666 to even DDR4-3600 RAM.

For each platform, we used the maximum amount of RAM that is both officially supported and actually available at the frequency we tested. This does mean that the Ryzen platform ended up with only 64GB of RAM while the other platforms had 128GB, but since our benchmarks never need more than 32GB of RAM to run, this does not actually affect performance at all. We have recently re-confirmed this in our RAM speed article linked above.

However, keep in mind that this is technically overclocking since the AMD Ryzen 3rd Gen CPUs support different RAM speeds depending on how many sticks you use and whether they are single or dual rank:

Ryzen 3rd Gen supported RAM:

  • 2x DIMM: DDR4-3200
  • 4x single rank DIMM: DDR4-2933
  • 4x dual rank DIMM: DDR4-2667

Since we are using four sticks of dual rank RAM (almost every 16GB module available will be dual rank), we technically should limit our RAM speed to DDR4-2666 if we wanted to stay fully in spec. However, since many end users may end up using a RAM configuration that supports higher speeds, we decided to do our testing with DDR4-3000, which is right in the middle of what AMD supports.
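To make the spec explicit, AMD's supported-speed table above can be expressed as a tiny lookup. This is purely illustrative - the function names are our own, and the speeds simply restate the list above:

```python
# AMD's officially supported DDR4 speeds for Ryzen 3rd Gen, keyed by
# (number of DIMMs, rank). Values restate the table in the article.
SUPPORTED_DDR4_SPEED = {
    (2, "single"): 3200,  # 2x DIMM
    (2, "dual"): 3200,    # 2x DIMM
    (4, "single"): 2933,  # 4x single rank DIMM
    (4, "dual"): 2667,    # 4x dual rank DIMM
}

def max_supported_speed(dimms: int, rank: str) -> int:
    """Return the max officially supported DDR4 speed for a configuration."""
    return SUPPORTED_DDR4_SPEED[(dimms, rank)]

def is_overclocked(dimms: int, rank: str, configured_speed: int) -> bool:
    """True if the configured memory speed exceeds AMD's official spec."""
    return configured_speed > max_supported_speed(dimms, rank)

# Our test configuration: four dual-rank 16GB sticks at DDR4-3000
print(is_overclocked(4, "dual", 3000))  # -> True (technically an overclock)
```

As the check shows, four dual-rank sticks at DDR4-3000 sits above the official DDR4-2667 limit, which is why the text above calls this configuration a technical overclock.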

The benchmark we will be using is the latest release of our public After Effects benchmark. Full details and a link to download and run it yourself are available on our After Effects benchmark page.

Benchmark Results

While our benchmark produces a range of scores based on the performance of each test, we also wanted to provide the individual results. If there is a specific task that is a bottleneck in your workflow, examining the raw results for that task is going to be much more applicable than our Overall scores. Feel free to skip to the next section if you would rather get a wider view of how each CPU performs in After Effects.

After Effects Benchmark Analysis

Looking at the overall performance in our After Effects benchmark (which combines RAM Preview, Rendering, and Tracking tests), the results are very interesting. The Intel Core i9 9900K continues to stand at the top of the chart, but the new AMD Ryzen CPUs are right on its tail. In fact, AMD takes the 2nd, 3rd, and 4th place spots!

What this means is that while Intel is still the top dog for general After Effects usage, the AMD Ryzen 5 and 7 CPUs match up very favorably against Intel. While pricing is constantly changing, at current MSRPs this makes AMD somewhere around 6% faster than Intel at similar price points in the mid-range. At the top end, the Intel Core i9 9900K maintains a 3% lead over AMD.

To us, this is very exciting to see. Competition between AMD and Intel is always going to be a good thing for consumers, but it has been a while since AMD could seriously compete in lightly threaded applications like After Effects. Given how close the performance was (a difference of around 5% is hard to notice in the real world), we would say that, at least in this benchmark, it is pretty much a wash between Intel and AMD.

After Effects Render Node Benchmark Analysis

While most motion graphics artists use After Effects in a fairly traditional manner, many are starting to leverage multiprocessing plugins or homemade scripts to improve rendering performance. These typically leverage a little-known application called "aerender" that is installed alongside After Effects, which allows you to divide your render across multiple threads in order to fully utilize the performance of your CPU and GPU. In fact, the limiting factor is often the amount of RAM and VRAM you have available, since each thread requires its own share of memory.

Since this form of rendering (whether it be on your local machine or across multiple systems on a network) has been gaining in popularity lately, we decided to create a new version of our After Effects benchmark that tests a number of compositions with anywhere from a single render thread to as many threads as your system has CPU cores. If you want to try this benchmark yourself, we have a beta available at Puget Systems After Effects CC Render Node Benchmark.
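The approach these plugins take can be sketched with a short script: split the comp's frame range into chunks and launch one aerender instance per chunk. This is an illustrative sketch, not our benchmark code - the aerender install path, the helper names, and the output pattern are assumptions, though the -project, -comp, -s, -e, and -output flags are standard aerender options:

```python
import subprocess

# Assumed install path for the aerender command-line renderer (adjust to taste)
AERENDER = r"C:\Program Files\Adobe\Adobe After Effects CC 2019\Support Files\aerender.exe"

def split_frames(start, end, workers):
    """Divide the inclusive frame range [start, end] into contiguous chunks,
    one per render worker, distributing any remainder across the first chunks."""
    total = end - start + 1
    base, extra = divmod(total, workers)
    ranges, s = [], start
    for i in range(workers):
        e = s + base - 1 + (1 if i < extra else 0)
        ranges.append((s, e))
        s = e + 1
    return ranges

def render_parallel(project, comp, start, end, workers, out_pattern):
    """Launch one aerender process per frame chunk and wait for all of them."""
    procs = []
    for s, e in split_frames(start, end, workers):
        cmd = [AERENDER, "-project", project, "-comp", comp,
               "-s", str(s), "-e", str(e), "-output", out_pattern]
        procs.append(subprocess.Popen(cmd))
    return [p.wait() for p in procs]

# Splitting a 100-frame comp across 4 workers:
print(split_frames(0, 99, 4))  # -> [(0, 24), (25, 49), (50, 74), (75, 99)]
```

Each process renders its slice independently, which is also why RAM and VRAM become the limiting factor: every instance loads its own copy of the project.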

This is the first iteration of our After Effects Render Node benchmark, but already the results are completely different from what we expected. Given that the Ryzen 9 3900X has four more cores than the Intel Core i9 9900K, we expected it to take the lead. Instead, the 9900K actually pulled further ahead, ending up ~10% faster than the 3900X.

On the whole, the real winners for this test are clearly the Intel X-series CPUs. The Intel 9th Gen CPUs also did extremely well for their price and are likely still what most users will want, but if you need a system that can tear through renders as fast as possible, the higher-end X-series ended up being on average about 11% faster than the Intel 9th Gen series.

Are the Ryzen 3rd generation CPUs good for After Effects?

Overall, the new AMD Ryzen CPUs perform neck and neck with the Intel 9th Gen CPUs in traditional After Effects workflows. The Intel 9900K continues to hold the performance crown by a slim margin (especially for RAM Preview performance), but AMD has a slight lead at the Ryzen 5 and 7 price points. If you use multithreaded plugins or scripts like BG Renderer Max or RenderGarden, however, Intel takes a pretty significant lead over AMD at the top end, but it continues to be very close at the mid/low end.

In the real world, the extra performance during RAM Preview with the Intel Core i9 9900K will likely be noticeable, but if you are looking for a CPU at a slightly lower price point, it is going to be really hard to tell the difference between a system with an Intel 9th Gen CPU and one with an AMD Ryzen 3rd generation CPU. Because of this, many users are going to want to consider other factors beyond just price and performance. While there are many more factors that go into your choice of platform, here are a few other considerations to keep in mind:

On the Intel side, the Z390 platform has been available for quite some time, which means that most of the bugs and issues have been worked out. In our experience over recent years, Intel also simply tends to be more stable overall than AMD and is the only way to get Thunderbolt support that actually works. Even then, Thunderbolt can be problematic on PC, and there are only a few motherboard brands (like Gigabyte) where we have had it work properly.

For AMD, the X570 platform is very new, and there will be a period of time where bugs need to be ironed out. However, AMD is much better about allowing you to use newer CPUs in older motherboards, so if upgrading your CPU is something you will likely do in the next few years, AMD is the stronger choice. In addition, X570 is currently the only platform with support for PCI-E 4.0. This won't directly affect performance in most cases (although it may allow Ae to write to the disk cache faster), but it will open up the option to use insanely fast storage drives as they become available.

Keep in mind that the benchmark results in this article are strictly for After Effects. If your workflow includes other software packages (we have articles for Photoshop, Premiere Pro, DaVinci Resolve, etc.), you need to consider how the processor will perform in all of those applications. Be sure to check our list of Hardware Articles for the latest information on how these CPUs perform with a variety of software packages.

Tags: After Effects, Intel 9th Gen, Intel X-series, Intel vs AMD, AMD Ryzen 3rd Gen, AMD Threadripper 2nd Gen

Have you tested ASRock X570 boards? They officially support Thunderbolt 3.
Thanks for your great articles!

Posted on 2019-07-19 19:54:24

We have not. As far as I'm aware, there is no completely official implementation for Thunderbolt on AMD chipsets, but motherboard manufacturers can slap it on if they want. A few brands have that on AMD Threadripper boards, but when we tried it it really didn't work well (which is likely why they didn't list Thunderbolt in the specs). My take is that Asrock is willing to take more risks than other brands, but whether that is because they are confident in it working or simply because it is something they can use in marketing to try to drive sales I don't know.

After working with Thunderbolt for years and dealing with the huge hassle it has been, I personally wouldn't trust it - at least not in this early of an implementation. I've been wrong before, however, so who knows.

Posted on 2019-07-19 20:11:07

Thank you for your reply. Indeed, it looks like only ASRock put in the effort to implement TB3; on some boards it's already built in, like the ITX model, and there is a special Creator model too. IMO it would be an interesting model for you:

Posted on 2019-07-19 20:31:45
Misha Engel

All ASRock X570 boards officially support Thunderbolt.
Only the X570 Phantom Gaming-ITX/TB3 and the not-yet-announced Creator* board have it built in.

Posted on 2019-07-20 12:59:40

This is a bit of an old thread, but we spoke with AMD at SIGGRAPH and they cleared a few things up for us. First of all, the X570 chipset does NOT have Intel-certified Thunderbolt support as of now. So the Thunderbolt implementation by ASRock is "official" in that ASRock is saying it will work, but neither AMD nor Intel have certified it.

Given how poor an experience we have had with Thunderbolt on completely certified platforms (beyond a few manufacturers like Gigabyte), I would be very wary of Thunderbolt on any AMD platform right now. It might work depending on the exact Thunderbolt device you are using, but my guess is that you have somewhere around a 50/50 chance it won't work or will have tons of issues.

Posted on 2019-08-16 16:14:18
Nick Hamilton

Thanks for this in-depth look! I very recently upgraded my work machine to an i9 9900K... I was seriously thinking the new Ryzen 3900X would beat it in benchmark tests, so that's a surprise (although it is clearly very close). For the price point, the 3700X could be a good pick, though, for a slightly less high-end machine.

Posted on 2019-07-19 20:10:45

Aw, it's a good result, but I was hoping Ryzen 3900X would be even closer to the 9900K. Well, at least you get extra cores, so perhaps it'll even out as more parts of AE get better multi-threading support.

Posted on 2019-07-19 21:30:52

I'm not surprised to see Puget Systems (principally an Intel vendor, when you actually look at their inventory proportionally) protect the Intel brand (and back inventory), against nearly unanimous findings otherwise across the World Wide Web. Here are a few flaws in their testing platform:
1. One great benefit of the Zen 2 architecture is native support for high-frequency DRAM; I installed 16gb x 2 of 3600 MHz DRAM and that was actually the budget choice; it runs very stable paired with my 3900x on an Asus Prime-Pro X570 motherboard. Puget admits that they damaged the capability of their DRAM for the AMD tests, to create "a level playing field." But Intel should be penalized as a CPU choice, for failing to support fast DRAM that is uber-cheap today.
2. Hiding in the test bench disclosure, is that the included cheapo stock cooler got used on the AMD, compared to the best-in-class Noctua on the Intel. Obviously, speed throttles according to temperature, and Puget knows this (and cannot verify that temperature had no impact on these results, even if they nonchalantly claim it).
3. Oddly, they gave double the amount of DRAM to the Intel test bed. Why? Again, Puget can claim it has no impact, but can't verify it.
4. Simply put, when you are motivated to be pro-Intel, you emphasize single-threaded tasks. Whether you focus on the single- or multi-threaded tasks inside of After Effects -- not to mention Adobe Media Encoder's functionality which is un-severable from an After Effects workflow -- that is extremely dispositive (and revealing of bias). Whatever kinds of things you do, are subjective until the point that you add bias.

I just find it hilarious and frankly vintage how the Intel Industrial P.R. machine has been squirming these days, after years of violating its own Moore's Law, owing to greed and throttling innovation guided by their accounting department. Intel got creamed by AMD, most of the universe has come around to admitting it, save for a few financially-motivated holdouts...

Posted on 2019-07-19 22:35:12

LOL... Did you miss Matt's Photoshop article yesterday, where AMD's new CPUs took 3 of the top 4 performance spots? Or my preview (full article coming next week) showing these chips roundly defeating Intel at CPU-based rendering? We do our best to make realistic testing and present unbiased results. Will it be perfect? No, but nothing in this world is :)

Also, Matt is working on a RAM speed article focusing on how much impact that has on Ryzen performance - so we'll see soon how much that impacts things.

Posted on 2019-07-19 22:40:10

This entire field of science holds ego and brand in irrelevance; results and methodologies are the only thing relevant. I listed four problems, just to begin with. They are unaffected factually by feelings, etc...

Posted on 2019-07-19 22:42:30


1) Matt is, as I said, testing RAM speed impact on performance with Ryzen. We aren't ignoring that. Personally, I am including 2666 and 3200MHz memory results in my articles - as those are the lower and upper bounds of AMD's official memory speed support for Ryzen 3rd Gen chips. Matt is going wider in his full RAM testing, coming soon, but opted to go with 3000MHz for these CPUs in this roundup. That is the max supported speed for four (4) single-rank modules on Ryzen 3rd Gen.

The 3600MHz stuff you said you're using, though? That is above AMD's official support spec, even if motherboard makers allow it (and indeed, AMD has shown higher speeds in their own marketing materials as well).

2) AMD's stock cooler is actually pretty decent, and I haven't seen any evidence of thermal throttling in my tests. We separate performance testing from qualification here, though, and we have not yet qualified a "better" (aftermarket) cooler for these chips. When we do, I expect we'll start to use that cooler up here in Labs.

3) The largest memory modules we have at speeds above 2666MHz are 16GB per module, thus forcing the 4 x 16 = 64GB limit on Ryzen. Had he used 32GB 2666MHz modules, people would instead have complained about the RAM speed. Damned if you do, damned if you don't :/

For my part, I did try to keep my tests at the same amount of RAM when possible - but that is because RAM amount can actually affect some of my benchmarks (especially photogrammetry). I don't think 64 vs 128GB has an impact on Matt's tests, but he would be able to speak to that more than I can.

4) We aren't motivated to be pro-Intel. Sure, everyone has some biases which they aren't even aware of... but we definitely don't try and tailor tests or results to favor any particular manufacturer. I'm sorry you think so little of us.

Posted on 2019-07-19 22:57:53

You guys should ignore trolls like him and let them have their fun over at WCCFTech, spamming from mama's basement. BTW, great tests as always.

Posted on 2019-07-20 11:10:01

I came here looking for evidence on how intel cpus perform compared to amd.

I have no motivation to try and find out what your methodology is and why these results are the way they are but:

1) the overall score of the 16core 9960X is 16% faster than the 32core 2990wx (ie estimated +132% when adjusting for # cores!!!)
2) the overall score of the 32core 2990wx is 2.2% faster than the 8c i7-9700, the 8c R7 3800X, the 12c 3900X and SLOWER than the 8c i7-9800.
3) the overall score of the 12c 3900X is more or less the same as the 8c 3800X!

Your "benchmarking" has managed to map a theoretical +50% performance improvement from the 3800X to the 3900X to -0.3%! This could make sense if you were only testing single-threaded performance, but then it is no longer a multithreading performance benchmark, and at that point obviously one could make do with an i3 or an R3.

I also have no interest in calling you an Intel shop, but things like these are known to have happened before, and there is not even a hint of doubt in your article or a possible explanation.

edit: I have come by your web site a number of times, and the only reason I am writing this is that I think highly of your company and your work.

Posted on 2020-03-19 16:09:25

I hesitate to explain the why behind the results since you stated that you have no motivation to understand the methodology, but I'll give a quick summary for others that may come across this thread. In a nutshell, theoretical performance (which most people look at as just cores X frequency) rarely has any bearing on real-world performance.

This post is about performance in After Effects, which is a very lightly threaded application. Here, the number of cores is almost insignificant beyond 2-4 cores - it is all about the architecture of the processor and how fast it is for single or lightly threaded tasks. If you look at our other articles covering Premiere Pro, DaVinci Resolve, or even better-threaded applications, the relative performance between processors can change drastically.

And that is the crux of why we do all this testing. Theoretical performance is not indicative of what you would see in your workflow, just like how benchmarks for applications you don't use (be it gaming, ray tracing, HPC, or anything else that isn't part of your workflow) are largely meaningless. You need to find performance metrics for the applications that are the biggest bottleneck for you and look at how different hardware performs to determine what is the best fit for you.

Also, make sure you check out our newer articles before you say we are an Intel shop. Intel definitely had a pretty big lead for a long time in content creation applications, but the latest Ryzen and Threadripper CPUs are really, really good and are almost always at or near the top of the charts.

Posted on 2020-03-19 16:42:33

I am motivated enough to discuss it, perhaps I should have worded it as "I am just not motivated to start searching your web site enough to find documentation about it more than reading the download page's description".

I was careful not to call you an Intel shop; I only stated that these exist, and I also said that I appreciate your company and the posts on your web site in general. In short, I do not want to engage in a quarrel.

The thing is that this is an article claiming real-world performance on Intel and AMD for some multithreaded applications, with no mention of them being poorly multithreaded or tuned for Intel, which can easily mislead someone into thinking that "Intel still dominates in HPC applications". That may be true in very specific workloads but definitely is not true in general.

Posted on 2020-03-19 17:42:12

The only correction I would make is that it is not the benchmark that is limited to 2-4 cores (that is a very rough number by the way), it is After Effects that has that limitation. The performance shown in this article should be very accurate for people working in After Effects. And in the end, how different hardware performs in the real world is what is important. That is why there is no single benchmark that is useful for everyone - everyone uses different applications or combinations of apps.

Posted on 2020-03-19 17:53:49

All benchmarks are very specific, but a lot of them are used to extrapolate performance to other applications. I know that extrapolation is wrong, but this is all we can do when our applications are not among the commonly used benchmarks, and a lot of them are often quite accurate indicators of the performance level. It is also not your fault, but it wouldn't be bad if you were more careful about it in the future (again, I am not saying that you should, only that it would have been nice).

Posted on 2020-03-19 18:00:57
Michael Rogers

I think he's right, you know. A quick Google search leads me to benchmark after benchmark, and I could continue practically endlessly. I can't find ONE SINGLE BENCHMARK on any site that doesn't have the 3900X absolutely demolishing the 9900K, yet you have the 9900K beating Ryzen in PP? Oh, and how about this?


your Radeon VII vs 2080 comparison showed Radeon VII get its ass handed to it by Nvidia, so please explain this video that's showing the 2080ti dropping frames with Radeon VII running smooth as butter? Sorry bud, it really really seems like you are paid off. We aren't idiots. Thanks

Posted on 2019-08-30 01:25:50

You seem to have a bit of a vendetta against us for some reason, so I'm close to simply ignoring your posts from here on out rather than explaining the reasons behind our testing, but I'll give this one last shot.

First, official and unofficial RAM speed support are completely different things. Sure, you can use 3600 with Ryzen, but you can do that with Intel too (and in our testing, the performance gain is pretty close to the same on either X570 or Z390). And you may not have issues with your one system with higher RAM, but we build thousands of systems, and I assure you, using higher frequency RAM than what is officially supported definitely makes systems less stable overall.

As for the coolers, the reason we used the Wraith cooler is that it is frankly an excellent cooler and is more than enough to prevent throttling at stock speeds, especially on the open-air test beds we use.

The reason we used more RAM on Z390 is that you can get 32GB modules at DDR4-2666. In previous testing where we kept AMD at 2666, people thought we weren't giving AMD a fair shot, so we upped the RAM speed this time to give AMD a more fair look. Would you also say we skewed the testing in favor of AMD if we used 128GB of 2666 RAM?

We do have an article coming that looks at how RAM speed affects performance, however, and it uses identical RAM on each platform. None of our benchmarks benefit from more than 32GB of RAM, though, so capacity differences don't affect performance in this testing.

Please, let's keep these comments as discussions rather than attacks. We really prefer not to ban people if we can help it, but if things are no longer constructive discussions, we will take that step.

Posted on 2019-07-19 23:04:53
Eric Marshall

They actually tested Ryzen using an overclocked memory configuration with memory that has an XMP profile with tight timings, while leaving Intel at stock memory clocks (slower) with much looser timings. If anything, they favored AMD with the memory selections by a wide margin.

Posted on 2019-07-21 02:37:38
Ramazan Doğan Eray

Right at first sight of this article I noticed the same things. The fan differences were shouting at first glance, plus they insist on using slower RAM on R3 CPUs while claiming there won't be any effect. Sorry Intel, do whatever you want (real user benchmarks are already being published everywhere), this time AMD is clearly the winner. As a nearly 25-year Intel user, I'm going to buy one of those amazing AMD CPUs for the first time in my life. I mean come on, they are clearly much stronger than Intel's 9xxx, have the newest tech with X570 boards, and are even cheaper. Even the cheapest 3600 is just around 7% weaker than the 9700K. It's amazing.

Posted on 2019-07-21 09:24:23

We actually have a RAM speed/capacity article coming out today or tomorrow where we look at that. I can tell you for 100% certain that our benchmarks do not benefit from having more than 32GB of RAM. In the real world more RAM can help of course (especially in AE since it lets you have more frames in RAM Preview), but it all depends on exactly what you are doing and since our benchmarks test relatively short comps, more RAM doesn't improve playback/export performance.

I absolutely agree that AMD is in a terrific position. Right now, the 9900K may be at the top (barely), but I'm sure there are plenty of BIOS/driver optimizations to come. At the same time, however, I wonder if Intel is going to have to adjust their pricing a bit so that they maintain their lead over AMD. Honestly, I hope so, because competition between Intel and AMD is always going to be a good thing for consumers!

Posted on 2019-07-22 17:23:55

You guys gotta understand - we're not all retarded engineering managers you're trying to sell PC's to. We're enthusiasts. We're gamers. We're overclockers. We come from places like [H]ardOCP and GamersNexus. We see right through the marketing fluff. The first rule of benchmarking is APPLES TO APPLES and IS THIS REALLY IMPROVING MY EXPERIENCE.

If you can't level the playing field because you don't have the sticks of ram? Why bother? Save the article for when you do!

I still haven't gotten answers on the "Enhanced Graphics Performance" in SolidWorks articles. All those benchmarks are useless because AA is broken with the box checked. And not even DS will acknowledge it last I knew.

Posted on 2019-08-21 19:47:28
Jig Serencio Navasquez

Thank you for this analysis. I now have a much clearer perspective on upgrading my PC next year, because I gave AMD a shot with their first-gen Zen CPUs and 90% of my work is in AE.

Posted on 2019-07-20 00:03:51

Finally, I was expecting this article! I thought the 9900K would be the best one for AE, but I didn't think the difference between it and the R9 3900X would be so small. Surely the 3900X will crush the 9900K in Premiere...

As a 9900K system owner, I have been thinking about upgrading to the Ryzen 3900X, but the benefits of doing so would be negligible. I'm going to wait and see what Zen 2+ vs Ice Lake bring to the market in 2020; maybe Intel decides to jump to PCIe 4.0 and AMD gets reliable Thunderbolt support.

I'll be waiting for your Premiere and RAM frequency articles. Keep up the good work and thank you! =)

Posted on 2019-07-20 01:46:22

Why do the Intel chips have such a great lead in the render node benchmark? Is it because they have double the RAM? If so, I would like to see the same benchmark with AMD and Intel with the same amount of RAM: 64GB of 2666 on both would be great. I don't care if AMD can support higher RAM speeds, I just want a fair, bona fide comparison. I'm also curious if the 128GB of RAM heavily impacted the rest of the results as well.

Posted on 2019-07-20 02:04:47

I was a little surprised by that too, but the AMD Threadripper system also had 128GB, and it still came in behind Intel, even with almost twice as many cores on the 2990WX. I'm guessing it's just down to how After Effects works, and it must not be optimally threaded. Even among the Intel processors the order is odd, not directly following core count. :/

Posted on 2019-07-20 02:27:47

Exactly. Adobe products are so well optimized for Intel and NVIDIA products that it is impressive AMD does as well as it does in spite of that. Would love to see this test performed in DaVinci Resolve to see how both parties fare on a more level playing field.

Posted on 2019-07-20 05:33:23
Misha Engel

Adobe products are not optimized for anything, they are more a bunch of plugins that sometimes work and often don't.

Posted on 2019-07-20 13:03:16
Batt Mach

Yeah. I like DaVinci Resolve because it's better than Premiere Pro IMO. But I don't think Fusion can replace After Effects.

Posted on 2019-07-21 21:25:57

Wait... did you really take the time to download my headshot, apply a warp, and create a new Disqus parody account just to post a completely normal and relevant comment? I'm not sure how I'm supposed to feel about this... You could have at least gone with "FatMatt" or something.

Posted on 2019-07-22 20:56:06
Batt Mach

Yes, I did take the time to do all that. I mean, you are the legendary Matt Bach, so I wanna be Batt Mach.

Posted on 2019-07-23 22:10:22

I regret making this, but this is all I think of when I see "Batt Mach"

Posted on 2019-07-23 22:24:18
Batt Mach

LMAO. That's amazing

Posted on 2019-07-24 07:38:42

Honestly, I'm not sure why AMD doesn't do better for that either. It isn't the RAM capacity since our tests never ran out of RAM - the render would have failed if that had been the case. That is definitely something we want to look at closer in the future.

Posted on 2019-07-20 02:32:18

Will the 3950X beat out the 9900K with its slightly faster clock speed than the 3900X?

Posted on 2019-07-20 05:15:00

I don't know if that is something we can really speculate about until it comes out, to be honest. The 0.1GHz higher Turbo likely only applies to single-threaded workloads, and since the base clock speed is so much lower, it might end up being slower in applications like After Effects. At least, that is what I suspect, but real-world testing is the only way to know for sure.

Posted on 2019-07-20 15:22:09

This does make me a tad sad. I do use RenderGarden, and will probably try out BGrender soon. I was hoping the 3900X would be an absolute monster with that on. Maybe it's a problem with those scripts? Or is it an issue on Adobe's end?

Posted on 2019-07-20 05:35:50

Our Render Node benchmark is a great start, but I do think there is some work for us to do on it. A couple of things I want to address are increasing the frame count of the test comps, since after about a dozen render threads, the time to start up aerender becomes a significant portion of the total time. It also looks like the comp itself makes a huge difference in how much speedup you get. Effects in AE have a huge spread between the truly single-threaded ones and ones that are actually very well threaded. If a comp uses mostly single-threaded effects, plugins like RenderGarden should show amazing speedups. But likewise, if it uses more parallel effects, it will show little gain since you are just spinning up extra threads that aren't necessary.

I think our test comps are right about in the middle - we went through all of our normal benchmark comps and used the ones that showed the best speedup, but they weren't really designed with this in mind, so I think we can do more to show a "range of potential performance gains" if we target this more specifically.
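The intuition above - single-threaded effects gain a lot from multi-process rendering, already-parallel effects gain little - can be sketched with Amdahl's law. The fractions below are made-up illustrative values, not measurements from the benchmark:

```python
# Rough Amdahl's-law sketch of why splitting a render across multiple
# processes helps single-threaded comps far more than parallel ones.
# The parallel fractions used below are illustrative assumptions.

def speedup(parallel_fraction: float, workers: int) -> float:
    """Ideal speedup when only `parallel_fraction` of the total render
    work can be spread across extra render processes."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / workers)

# Comp dominated by single-threaded effects: splitting frames across
# processes parallelizes most of the work.
print(round(speedup(0.90, 12), 2))  # → 5.71

# Comp whose effects already saturate many cores internally: little
# is left for extra render processes to pick up.
print(round(speedup(0.30, 12), 2))  # → 1.38
```

Even with twelve render processes, the mostly-serial comp barely improves, which matches the "range of potential performance gains" idea - the comp mix determines how much a plugin like RenderGarden can help.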

Posted on 2019-07-20 15:29:06

I also think that Pro users should wait a few weeks until all the day-one launch bugs are fixed - many BIOS updates are coming out on an almost daily basis - so it's better to wait a little and then buy the best CPU/MB combo for your needs.
Also regarding cooling - @William M George, good to know that the new Ryzen 3000 can actually boost to higher clocks if the cooling is good, around 100MHz for every 20C reduction even on stock settings (similar to NVIDIA GPU Boost). So for your professional builds I am sure you are going to use the best Noctua coolers, because they will provide around 50~100MHz more while rendering.
*The stock cooler doesn't throttle the 3900X, but a better cooler that can shave 20C off at load will benefit from a ~100MHz higher boost.

Also waiting for your ASRock X570 with TB3 review - if TB3 works as well as on Z390, then ASRock boards will be very popular in your builds. https://uploads.disquscdn.c...

Posted on 2019-07-20 07:49:25
Misha Engel

This video gives a better overview of the temp. scaling starting at 5:51


Posted on 2019-07-20 13:08:50

We are constantly testing and re-testing, but we do try to have day 1 (or as close to it) content on any major launch from AMD, Intel, or NVIDIA since people are always clamoring for results. Things definitely can change over time, however, so I completely agree that performance can be different weeks or months after launch.

In fact, I believe we are better about this than most review sites. A lot of them (not all, of course) tend to re-use old results in newer articles without actually mentioning it. Whenever we do new testing, we completely re-test everything. It definitely takes a lot longer, but since we are very specific in what we test, it is doable for us to manage.

Posted on 2019-07-20 15:33:07

Given how bad After Effects is with memory management, I'd say that the Intel machines having twice the RAM skews the test considerably.
And it sure sounds weird - why not level the playing field by giving the same RAM setup to both CPU platforms? It's just inviting conspiracy theories.
Curious how the Premiere and Media Encoder tests will come out.
Threadripper is not R3, so who cares.

Posted on 2019-07-20 08:37:59

Our AE benchmarks need about 32GB of RAM so that you can actually render all the frames into RAM Preview, or spin up enough render threads in the new Render Node benchmark, but having more than that doesn't affect performance at all. Typically, we just max out the amount of RAM the system can support, but I admit it is a bit weird with the Ryzen platform. 32GB modules (which would give 128GB total) are currently only available at DDR4-2666, but we've had so many people think that we are holding AMD back by using that speed of RAM, so we decided to up it to DDR4-3000. We could have gone all the way up to DDR4-3200, but then we technically would be limited to just two sticks of RAM (32GB total) if we wanted to stay in spec, and that potentially could skew the results a bit. Even DDR4-3000 is technically bending our testing "rules", since you are only supposed to use that speed with single-rank RAM (which is typically only available in 8GB modules).

We are going to have a RAM speed article coming out soon, and in that one we used the exact same RAM (even the same physical sticks) on each platform. If you compare the results across the articles, you would see that there is no significant difference between using 64GB total or 128GB total.

Posted on 2019-07-20 15:38:54

Thanks for this great article. Was the 9900k overclocked to 5ghz? I am thinking of making the jump from my 8700k to the 3900x and I'm trying to estimate my performance increase

Posted on 2019-07-20 10:05:33

Nope, all our testing is done at stock speeds. We rarely do any overclock testing since that isn't something we offer to our customers. Overclocking can be a great way to get extra performance if you are the kind of person who builds their own systems and like to tinker, but our customers are mostly people who just need solid, reliable workstations.

Posted on 2019-07-20 15:41:08

Um, I'm not buying Matt's explanation for giving the AMDs only half the memory. If they have been testing other chips with 128GB in the past, one can surmise that the amount of memory is important in these tests. Yet they elected to gimp the new AMD chips with only half the memory (64GB)? Who cares if running with 128GB would have meant "only" 2666MHz, if the test is bottlenecked by capacity? I wouldn't have posted this comment if Matt had clarified that 64GB vs 128GB doesn't matter. But he hasn't.

Posted on 2019-07-21 01:35:57
Eric Marshall

There is only a +/-13% spread in performance across the entire range of CPUs tested here (from 6-32 cores), and to make matters even more convoluted, the cheapest CPU in the whole bunch, a $200 3600, performed smack dab in the middle of the pack, surrounded by CPUs with way more cores and cost. My advice to anyone buying a machine for AE based on these results is to get a 3600/3700X and stash the savings for the day when Adobe updates this software to scale.

Posted on 2019-07-21 02:27:54

I stuck this on your comment on the Photoshop article, but for the benefit of anyone else reading this, here is a (slightly modified) copy of my response:

You are definitely right that at the high-end, there isn't a massive difference in performance between different CPUs. However, I can tell you that at least for our customers, even a 5-10% performance gain is often well worth the cost of the higher-end CPUs. I know our customers are not the average user, but for them, any investment that saves them time pays off incredibly quickly.

But, that is one reason we not only publish our thoughts on the results, but the raw benchmark results and scores as well. Our commentary is always going to be skewed towards our customer base - after all, that is the main reason we do this testing. The fact that it is helpful to the masses is great, and we see no reason to hide it, but I think many people don't quite understand that we are a high-end workstation manufacturer, and everything we do is geared towards helping our customers get the exact right system for their workflow.

Posted on 2019-07-21 04:31:15

Yes, it's so funny to see the comments sometimes. The first time I learned about you was after watching the Barnacules Nerdgasm video - OMG it was a joy to see what professional PC builders you are. If I were in the US I would for sure order my workstation PC from you :-).
When you are making money from your workstation PC, you want the fastest config you can afford, since it will make you more money. Time = money!
Also, how do your customers feel about the possibility on the X399 platform of just swapping the CPU for the newer double-the-cores SKU each year [1950X-2990WX and soon 48C~64C] without building a new PC from scratch and wasting many, many hours installing all the software and configuring everything from the start?
Setting up your workstation PC with all your apps and configs takes a lot more time than installing a new Win10 for a gaming PC :-).

Posted on 2019-07-21 06:36:09

To be honest, while a lot of our customers like the idea of upgrading (it is often a topic of discussion with our consultants), it rarely ever actually happens. Easy things like storage or RAM do get upgraded, but basically anything that would require sending the system back to us for the upgrade (since most of our customers are not the kind of people who would do that themselves) tends to not happen. Time is money, and having to be without the system for even a few days completely offsets any performance gains they might see with an upgrade.

Far more often, they just get a completely new system. It may not be as cost effective in terms of dollars, but there is no downtime - and that alone means it actually ends up saving them more money in the end. Plus, then they have a backup system that they can use if their main system ever has issues.

Posted on 2019-07-22 17:18:49
Mid Pak

Thank you so much for this article. How can I compare my 6900K (not overclocked) with 3900X that I wish to buy? I'm using the 6900K with a GTX1080, 64GB RAM and all SSD's.

I use AE but also use Premiere Pro a lot, and learning Davinci to hopefully use it in the near future as well.

Posted on 2019-07-21 02:40:16

Best thing to do: run our benchmarks on your system and compare the results to the ones in our articles.

Premiere Pro benchmark: www.pugetsystems.com/go/PrB...
After Effects benchmark: www.pugetsystems.com/go/AeB...
DaVinci Resolve benchmark (actual benchmark download is coming soon!): www.pugetsystems.com/go/DrB...

This does mean it won't be a strictly CPU-only comparison since you will be using a different GPU, storage, etc. but it will be pretty close for Pr and Ae since they are largely CPU limited. Resolve will be the sticky one since many things are so heavy on the GPU. It should still give you a ballpark idea, however.

Posted on 2019-07-21 04:34:19

So now, if I use AE only, which CPU do you recommend for the best performance? Thank you!

Posted on 2019-07-21 15:33:28

Myself, I would use the 9900K. I expect AMD and Intel will flip-flop at the very top-end as BIOS/driver optimizations come out, but the 9900K is a more established platform, which means it should have fewer bugs to deal with. Give it 6 months, however, and you honestly probably won't be able to tell the difference between the 9900K and 3900X.

The main reason to use the 3900X, in my opinion, is if you plan to upgrade the CPU at a later date since AMD platforms tend to allow you to do that easier. Another is PCI-E 4.0 support which may be useful for a high-speed disk cache drive. We really don't know if it will be any better in the real world than the PCI-E 3.0 NVMe drives, but it probably will allow slightly more frames to be written to the cache which may slightly improve overall Ae performance.

Posted on 2019-07-22 17:27:37
Mark Harris

You always ignore the big advantage of the iGPU

Posted on 2020-02-29 14:57:02

That's because it usually isn't much of an advantage. Intel iGPU gets you Quicksync which is used for hardware accelerated encoding/decoding of H.264/H.265 in Ae/Pr/Me, but I have never seen an actual difference in decoding performance on mid/high-end desktop CPUs. I'm sure it makes some difference on laptops, but for this level of CPU? Not so much.

Encoding you can definitely make arguments for, but most of the professionals we work with end up not using hardware encoding for H.264 since the quality is lower at the same bitrate. That is completely something that each person is going to decide for themselves, however, so if the faster encoding time is worth the quality loss for what you do, it can be a factor.

Posted on 2020-03-02 17:59:00

When can we expect Premiere Pro roundup?

Posted on 2019-07-22 11:08:05

Hopefully either today or tomorrow.

Posted on 2019-07-22 17:27:46
Mark Harris

Awesome stuff as always! With my 9900K at 5GHz on all cores, using Premiere for 4K editing (nothing fancy on the editing), do you think I can benefit from running 64GB? I know if I go with 32GB (4x8GB sticks) I can get better speeds and timings from the RAM vs doing 4x16GB, but I'm not sure if 64GB is actually helpful vs 32GB, which is still a good amount of RAM.

Posted on 2019-07-22 17:51:38

RAM is all about having enough. If your current workflow doesn't use more than 80% of your current RAM, there will likely be no benefit to getting more RAM. If you do use more than 80% of the max capacity, however, more RAM should help quite a bit.

As for RAM speed, I'm working on a post right now about it and for Premiere Pro, we saw basically no difference with the 9900K. Since you are overclocking, that may change things a bit, but in general faster RAM speed than what is natively supported is really only useful on AMD CPUs. So in most cases, using above DDR4-2666 on Intel just decreases reliability with minimal performance gain (usually less than 5%) to show for it. If you are looking to get every percent of performance, go for it, but I definitely wouldn't prioritize faster RAM over simply getting more RAM.

Posted on 2019-07-22 18:50:47
Mark Harris

Thanks for the information, that helps a lot! Looking forward to those RAM tests as well.

Posted on 2019-07-22 20:50:38

Just went up about 5 min ago actually! https://www.pugetsystems.co...

Posted on 2019-07-22 20:52:05

Excellent article!!! Can we expect a roundup in AE for the newer GPUs that have been released? If so, could you please include the RX 580? I think it will be interesting to include it for a couple of reasons. It will be the baseline GPU in the new $6000 Mac Pro and it is one of the top GPUs that you can pair with a Core i9-9900K on a 27" iMac, which ironically with its i9-9900K could outperform their own baseline Mac Pro, at least in After Effects. The other reason is that the RX 580 is the cheapest 8GB card right now. I'm currently using Photoshop and AE with integrated graphics, and the RX 580 right now is around $200, I don't know if I will see a performance penalty if I buy one instead of buying a RTX 2060 which is around $100 more expensive.
Also, I don't know if I am the only one experiencing this issue, but when I try to zoom in at the individual benchmark results in this article, the results are illegible. It seems that the image for the benchmarks has a 1200 x 345 resolution. The individual benchmark image in the Photoshop article has a 5531 x 1892 resolution and the numbers look terrific when you zoom in.
Thanks for the time and effort you put into these articles, they have become something that I look forward to, keep up the good work!!!

Posted on 2019-07-23 01:00:16

As always, great article and so helpful. Thanks.
I have the 8700k on a z370 platform, I'm thinking of swapping it for a 9900k but though my mobo is supposed to support it I wonder if it might have an impact on the performance. Any thoughts?

Posted on 2019-07-23 22:51:25

Obviously you can't beat the 3700X from a price/performance point of view. That's less than 8% behind the 9900K for only 2/3 of its price. But given the performance of the 3900X in some of my use cases, I will go for that one. The ultimate all-rounder.

Posted on 2019-07-31 19:06:13

Keep in mind that the CPU is only one part of a whole system, so while the cost of a 3700X vs 3800X seems big when just looking at the chip price, it is a much smaller difference when you look at the cost of a whole system. Unless you are buying the CPU alone (if upgrading an existing Ryzen computer, for example) it may actually be more cost-effective to get the fastest model :)


Posted on 2019-07-31 21:40:53
Mark Harris

Actually, yes, you can beat it: for example, the 9900K is faster in Photoshop and faster in Premiere, and that is not even taking into account the iGPU, which not only boosts its Premiere performance past the 3900X but is also an added feature AND value vs the AMD part. And of course any apps that favor clock speed will perform much better with the 9900K, not to mention gaming.
So yes, you can beat it, depending on your specific needs.

Posted on 2019-08-01 20:27:05

Is the 9900K 75% faster than the 3700X? ...because that's how much more expensive it is.

Posted on 2020-03-05 15:07:34

No, but neither is the Ryzen 3900X (which is the same price as the Core i9 9900K). And the Core i9 10980XE isn't 2x faster, just like the Threadripper 3990X isn't 12x faster. Performance does not scale linearly with price, so if you are just looking at pure price/performance ratios, the cheapest CPU is almost always going to be the best "value".

The right CPU is going to vary for different people depending on what they are doing, whether it is professional or a hobby, and what kind of return on investment they expect. If a CPU that is $150-200 more expensive will result in them being able to finish a project a day sooner, be able to get a job they otherwise wouldn't, or just be able to finish a render before the end of the day so they don't have to let it go overnight unattended, most professionals are going to be more than happy with that investment. Our customers are often perfectly fine with paying a premium for an extra 5-10% performance simply because this is powering their livelihood, and any time (and frustration) savings is well worth the investment.

But that is something you have to decide on your own. It is just like how I don't personally own a $200 drill for home repairs. For me, a $50 drill is perfectly adequate, but for a professional contractor or home builder, a higher quality product is likely well worth the investment.

Posted on 2020-03-05 17:02:56
Patrik Lindahl

Wow! Great article! I have been waiting for this!
Regarding the surprising Render Node performance benchmark, how was the CPU utilization on the poorly performing higher core count systems?
For example, on the 32-core AMD system, was the CPU utilization really 100% during the multi-node render?

Also, how does the benchmark work when launching new render nodes - is it one render node per core, or one per every other core? Is it a ramp up?
I haven't tested with your benchmark tool yet, but my own tests suggest that scaling with the number of render nodes differs a lot depending on the type of project you use. For example, a project that uses GPU rendering will sometimes hardly scale at all with multiple render nodes, but when I turn off GPU rendering, the renders usually scale much more linearly.
Some very simple projects that don't use many effects will scale very linearly and be limited mostly by disk and/or network performance, but some other, more advanced projects scale badly.
The way I made the tests, I had a few different projects and scaled the render nodes up from 1, one at a time, until I couldn't see any performance gain. I did this twice, once with GPU on and once with GPU off.
Also, the memory settings you give each node can be very important, especially once the render node count starts to come up a bit. In my experience, the actual rendering pipeline can work very efficiently even with a limited amount of RAM per node. I might be wrong, but it seems to me that it unnecessarily caches a lot of frames in RAM that just sit there unused for preview purposes. In my testing I have had decent results with splitting the RAM equally over all render nodes by using the -mem_usage flag and both the cache and max values.

Posted on 2019-08-16 03:13:05

Fascinating information! I've been combing through all the AE benchmark results I can get my hands on to see if there's a clear comparison between the TR 2950X and the Ryzen family of processors. From a value standpoint, is it worth getting a 2950X (because of the 256GB Ram cap), or a 3800X or 3900X Ryzen? If I read your tests correctly, and if in fact RAM capacity becomes moot at a certain capacity, it's a better value to purchase a 3800X over the TR 2950 (if they were the same price). Thoughts?

Posted on 2019-11-21 18:47:22

More RAM is always good for After Effects, but if your comps aren't long or complex enough that you really need it, you are better off taking the stronger per-core performance of the Ryzen CPUs for faster RAM Previews. We have a handy chart on our Hardware Recommendations page (https://www.pugetsystems.co...) that can help give you an idea of whether you may need more than 128GB of RAM or not.

Also, keep in mind that there are new CPUs from both Intel and AMD launching soon. You may want to hold off on making any decisions until those drop.

Posted on 2019-11-21 18:55:49
Ryan Rocamora

Nice Article! I am not surprised by this, since Adobe After Effects is not optimized for multi-cores, but for higher frequency.


Posted on 2019-12-01 13:42:43

Just making sure, you know that School of Motion made that video with Puget Systems (that is actually me in that section)? Also, make sure you check out our most recent article on CPU performance in After Effects: https://www.pugetsystems.co... since things have gotten really interesting lately.

Both Intel and AMD have been making really great progress allowing their high core count CPUs to Turbo to really high clock rates when only a handful of cores are in use, which makes the new AMD Ryzen and Threadripper CPUs really good for After Effects even though they have a ton of cores. We are finally at the point where more cores won't necessarily make things faster, but at least they are no longer actively making things slower.

Posted on 2019-12-02 18:56:44
Tracey Aston

Are there project settings/preview settings in AE you recommend to get the best out of a machine like this? CUDA/SOFTWARE. I bought a machine after looking at your recommendations on School of Motion - sadly I couldn't come direct as I am in the UK. Having migrated from Mac, the PC doesn't seem that quick at previewing, and the system doesn't seem to be working particularly hard when I look in Task Manager, so I'm not sure if I have something set up wrong. There seems to be a wealth of misinformation out there, but you guys seem to have a handle on it, so I thought I would ask the experts.

Posted on 2020-02-28 10:33:17

Honestly, there isn't much that I would recommend changing. You definitely want to use CUDA for the Mercury Playback Engine (or OpenCL if you have an AMD GPU), but that should be set by default. Same with anything in the preferences regarding hardware acceleration.

Really the only significant thing I would recommend is to have a dedicated NVMe drive for your disk cache. Other than that, I know some people like disabling video preview during render queue output (in Preferences -> Video Preview), but I'm not sure how much performance that actually gets you.

Posted on 2020-02-28 16:50:48
Tracey Aston

That is so good of you to respond so quickly. Sounds like I was doing all the right things - it transpires that the pump on the liquid cooler has failed, which is perhaps the root of my issues.

Posted on 2020-02-28 16:56:26