

Read this article at https://www.pugetsystems.com/guides/1533

After Effects CPU Roundup: AMD Ryzen 3, AMD Threadripper 2, Intel 9th Gen, Intel X-series

Written on July 19, 2019 by Matt Bach

Introduction

Way back in 2014, After Effects was very efficient at using multiple CPU cores to improve performance, which would have made processors like AMD's new Ryzen series (currently up to 12 cores, with a 16-core model on the way) absolutely terrific. However, Adobe has trended away from this in recent years, largely due to the addition of GPU acceleration. Newer versions of Ae are certainly still faster than older versions, but today performance is less about having a ton of CPU cores and more about having a CPU with very fast individual cores.

This makes the new Ryzen 3rd generation CPUs very interesting: not only do they have more cores than the previous generation, but AMD has also spent considerable effort improving performance in moderately threaded applications like After Effects. In large part, this is what makes AMD's new processors so exciting. The increase in core count is certainly nice (and may be useful for those using multi-threaded plugins/scripts to improve render performance), but most applications simply are not going to see a benefit from the higher core counts that both Intel and AMD are trending towards. Instead, it is the IPC (instructions per clock) improvements that will be more significant for most users - even if that doesn't show up in the marketing specs.

AMD Ryzen 3rd Gen After Effects Performance

In this article, we will be looking at exactly how well the new Ryzen 3600, 3700X, 3800X, and 3900X perform in After Effects. Since we expect these CPUs to shake up the market quite a bit, we also took this opportunity to do a full CPU roundup. Not only will we include results for a few of the previous generation Ryzen CPUs, but also the latest AMD Threadripper, Intel 9th Gen, and Intel X-series CPUs. And for good measure, we will throw in a 14-core iMac Pro and a current (for the moment) 2013 Mac Pro 12-core as well.

If you would like to skip over our test setup and benchmark sections, feel free to jump right to the Conclusion.

Looking for an After Effects Workstation?

Puget Systems offers a range of workstations that are tailor made for your unique workflow. Our goal is to provide the most effective and reliable system possible so you can concentrate on your work and not worry about your computer.

Configure a System!

Test Setup & Methodology

Listed below are the specifications of the systems we will be using for our testing:

Shared PC Hardware/Software:

  • Video Card: NVIDIA GeForce RTX 2080 Ti 11GB
  • Hard Drive: Samsung 960 Pro 1TB
  • Software: Windows 10 Pro 64-bit (version 1903), After Effects CC 2019, Puget Systems After Effects Benchmark (latest public BETA)

Mac Test Platforms:

  • iMac Pro: 14-core Intel Xeon W, 64GB 2666MHz DDR4 ECC, Radeon Pro Vega 64 16GB, 1TB SSD
  • Mac Pro (2013): 12-core 2.7GHz, 64GB 1866MHz DDR3 ECC, dual AMD FirePro D700 6GB, 1TB PCIe-based SSD

*All the latest drivers, OS updates, BIOS, and firmware applied as of July 2nd, 2019

Note that while most of our PC test platforms are using DDR4-2666 memory, we did switch up to DDR4-3000 for the AMD Ryzen platform. AMD CPUs can be more sensitive to RAM speed than Intel CPUs, although in our Does RAM speed affect video editing performance? testing, we found that the new Ryzen CPUs only saw modest performance gains in Creative Cloud applications when going from DDR4-2666 to even DDR4-3600 RAM.

For each platform, we used the maximum amount of RAM that is both officially supported and actually available at the frequency we tested. This does mean that the Ryzen platform ended up with only 64GB of RAM while the other platforms had 128GB, but since our benchmarks never need more than 32GB of RAM to run, this does not actually affect performance at all. We have recently re-confirmed this in our RAM speed article linked above.

However, keep in mind that this is technically overclocking since the AMD Ryzen 3rd Gen CPUs support different RAM speeds depending on how many sticks you use and whether they are single or dual rank:

Ryzen 3rd Gen supported RAM:

  • 2x DIMM: DDR4-3200
  • 4x single rank DIMM: DDR4-2933
  • 4x dual rank DIMM: DDR4-2667

Since we are using four sticks of dual rank RAM (almost every 16GB module available will be dual rank), we technically should limit our RAM speed to DDR4-2666 if we wanted to stay fully in spec. However, since many end users may end up using a RAM configuration that supports higher speeds, we decided to do our testing with DDR4-3000, which is right in the middle of what AMD supports.

The benchmarks we will be using are the latest release of our public After Effects benchmarks. Full details on the benchmark, and a link to download and run it yourself, are available on our After Effects benchmark page.

Benchmark Results

While our benchmark presents various scores based on the performance of each test, we also wanted to provide the individual results. If there is a specific task that is a hindrance to your workflow, examining the raw results for that task is going to be much more applicable than our Overall scores. Feel free to skip to the next section for our analysis of these results if you would rather get a wider view of how each CPU performs in After Effects.

After Effects Benchmark Analysis

Looking at the overall performance in our After Effects benchmark (which combines RAM Preview, Rendering, and Tracking tests), the results are very interesting. The Intel Core i9 9900K continues to stand at the top of the chart, but the new AMD Ryzen CPUs are right on its tail. In fact, AMD takes the 2nd, 3rd, and 4th place spots!

What this means is that while Intel is still the top dog for general After Effects usage, the AMD Ryzen 5 and 7 CPUs compare very favorably against Intel. While pricing is constantly changing, at current MSRPs this makes AMD somewhere around 6% faster than Intel at similar price points in the mid-range. At the top-end, the Intel Core i9 9900K maintains a 3% lead over AMD.

To us, this is very exciting to see. Competition between AMD and Intel is always going to be a good thing for consumers, but it has been a while since AMD could seriously compete in applications like After Effects that are only lightly threaded. Given how close the performance was (a 5% difference is hard to notice in the real world), we would say that, at least in this benchmark, it is pretty much a wash between Intel and AMD.

After Effects Render Node Benchmark Analysis

While most motion graphics artists use After Effects in a fairly traditional manner, many are starting to leverage multiprocessing plugins or homemade scripts to improve rendering performance. These typically leverage a little-known application called "aerender" that is installed alongside After Effects, which allows you to divide up your render across multiple threads in order to fully utilize the performance of your CPU and GPU. In fact, the limiting factor is often the amount of RAM and VRAM you have available, since each thread requires its own share of memory.

Since this form of rendering (whether it be on your local machine or across multiple systems on a network) has been gaining in popularity lately, we decided to create a new version of our After Effects benchmark that tests a number of compositions with anywhere from a single render thread to as many threads as your system has CPU cores. If you want to try this benchmark yourself, we have a beta available at Puget Systems After Effects CC Render Node Benchmark.
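Mechanically, the approach described above can be sketched in a few lines: split the comp's frame range into chunks and launch one aerender instance per chunk. This is a minimal illustration rather than Puget's actual benchmark code - the aerender install path shown is an assumption, and the exact flags can vary by After Effects version, so verify them against Adobe's aerender documentation.

```python
import subprocess

# Assumed install path - adjust for your After Effects version/OS.
AERENDER = r"C:\Program Files\Adobe\Adobe After Effects CC 2019\Support Files\aerender.exe"

def split_frames(start, end, threads):
    """Divide an inclusive frame range [start, end] into per-thread chunks."""
    total = end - start + 1
    base, extra = divmod(total, threads)
    chunks, cursor = [], start
    for i in range(threads):
        size = base + (1 if i < extra else 0)
        if size == 0:
            break  # more threads than frames; stop early
        chunks.append((cursor, cursor + size - 1))
        cursor += size
    return chunks

def launch_renders(project, comp, start, end, threads):
    """Launch one aerender process per frame chunk and wait for all of them."""
    procs = []
    for s, e in split_frames(start, end, threads):
        cmd = [AERENDER, "-project", project, "-comp", comp,
               "-s", str(s), "-e", str(e)]
        procs.append(subprocess.Popen(cmd))
    for p in procs:
        p.wait()
```

Since each aerender process loads its own copy of the project, this is also why RAM and VRAM become the limiting factor as the thread count rises.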

This is the first iteration of our After Effects Render Node benchmark, but already the results are completely different than what we expected. Given that the Ryzen 9 3900X has four more cores than the Intel Core i9 9900K, we expected it to take the lead. Instead, the 9900K actually pulled further ahead and ended up being ~10% faster than the 3900X.

On the whole, the real winners for this test are clearly the Intel X-series CPUs. The Intel 9th Gen CPUs also did extremely well for their price and are likely still what most users will want, but if you need a system that can tear through renders as fast as possible, the higher-end X-series ended up being on average about 11% faster than the Intel 9th Gen series.
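One way to reason about why raw core count didn't decide these results: multi-process rendering speedup is bounded by whatever fraction of the work stays serial (Amdahl's law), plus fixed overhead like launching aerender. The model below is our own rough sketch, not the math behind Puget's benchmark:

```python
def render_speedup(parallel_fraction, threads, startup_overhead=0.0):
    """Estimated speedup from multi-process rendering (Amdahl's law).

    parallel_fraction: share of single-thread render time that parallelizes.
    startup_overhead: process-launch cost as a fraction of single-thread
    render time; modeled as paid once, since the processes start
    concurrently (an assumption of this sketch).
    """
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / threads + startup_overhead)
```

Even with 90% of the work parallelizable, twelve threads yield only about a 5.7x speedup, and past that point extra threads add overhead faster than throughput - consistent with the 9900K's faster individual cores beating the 3900X's extra ones here.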

Are the Ryzen 3rd generation CPUs good for After Effects?

Overall, the new AMD Ryzen CPUs perform neck and neck with the Intel 9th Gen CPUs in traditional After Effects workflows. The Intel 9900K continues to hold the performance crown by a slim margin (especially for RAM Preview performance), but AMD has a slight lead at the Ryzen 5 and 7 price points. If you use multithreaded plugins or scripts like BG Renderer Max or RenderGarden, however, Intel takes a pretty significant lead over AMD at the top-end, but it continues to be very close at the mid/low-end.

In the real world, the extra performance during RAM Preview with the Intel Core i9 9900K will likely be noticeable, but if you are looking for a CPU at a slightly lower price point, it is going to be really hard to tell the difference between a system with an Intel 9th Gen CPU and one with an AMD Ryzen 3rd generation CPU. Because of this, many users are going to want to consider other factors beyond just price and performance. While there are many more factors that go into your choice of platform, here are a few other considerations to keep in mind:

On the Intel side, the Z390 platform has been available for quite some time which means that most of the bugs and issues have been worked out. In our experience over recent years, Intel also simply tends to be more stable overall than AMD and is the only way to get Thunderbolt support that actually works. Thunderbolt can be a bad time on PC, and there are only a few motherboard brands (like Gigabyte) where we have had it actually work properly.

For AMD, the X570 platform is very new and there will be a period of time where bugs will need to be ironed out. However, AMD is much better about allowing you to use newer CPUs in older motherboards, so if upgrading your CPU is something you will likely do in the next few years, AMD is the stronger choice. In addition, X570 is currently the only platform with support for PCI-E 4.0. This won't directly affect performance in most cases (although it may allow Ae to write to the disk cache faster), but it will open up the option to use insanely fast storage drives as they become available.

Keep in mind that the benchmark results in this article are strictly for After Effects. If your workflow includes other software packages (We have articles for Photoshop, Premiere Pro, DaVinci Resolve, etc.), you need to consider how the processor will perform in all those applications. Be sure to check our list of Hardware Articles for the latest information on how these CPUs perform with a variety of software packages.


Tags: After Effects, Intel 9th Gen, Intel X-series, Intel vs AMD, AMD Ryzen 3rd Gen, AMD Threadripper 2nd Gen
Hwgeek

Have you tested ASRock X570 boards? They officially support Thunderbolt 3.
Thanks for your great articles!

Posted on 2019-07-19 19:54:24

We have not. As far as I'm aware, there is no completely official implementation for Thunderbolt on AMD chipsets, but motherboard manufacturers can slap it on if they want. A few brands have that on AMD Threadripper boards, but when we tried it it really didn't work well (which is likely why they didn't list Thunderbolt in the specs). My take is that Asrock is willing to take more risks than other brands, but whether that is because they are confident in it working or simply because it is something they can use in marketing to try to drive sales I don't know.

After working with Thunderbolt for years and dealing with the huge hassle it has been, I personally wouldn't trust it - at least not in this early of an implementation. I've been wrong before, however, so who knows.

Posted on 2019-07-19 20:11:07
Hwgeek

Thank you for your reply - indeed, it looks like only ASRock put in the effort to implement TB3. On some boards it's already built in, like the ITX model, and there is a special Creator model too. IMO it will be an interesting model for you:
https://www.anandtech.com/s...
https://uploads.disquscdn.c...

Posted on 2019-07-19 20:31:45
Misha Engel

All ASRock X570 boards officially support Thunderbolt.
Only the X570 Phantom Gaming-ITX/TB3 and the not-yet-announced Creator* board have it built in.

Posted on 2019-07-20 12:59:40

This is a bit of an old thread, but we spoke with AMD at SIGGRAPH and they cleared a few things up for us. First of all, the X570 chipset does NOT have Intel-certified Thunderbolt support as of now. So the Thunderbolt implementation by ASRock is "official" in that ASRock is saying it will work, but neither AMD nor Intel have certified it.

Given how poor of an experience we have had with Thunderbolt on completely certified platforms (beyond a few manufacturers like Gigabyte), I would be very wary of Thunderbolt on any AMD platform right now. It might work depending on the exact Thunderbolt device you are using, but my guess is that you have somewhere around a 50/50 chance it won't work or will have tons of issues.

Posted on 2019-08-16 16:14:18
Nick Hamilton

Thanks for this in-depth look! I very recently upgraded my work machine to an i9 9900K... I was seriously thinking the new Ryzen 3900X would beat it in benchmark tests, so that's a surprise (altho it is clearly very close). For the price point, the 3700X could be a good pick though for a slightly less high-end machine.

Posted on 2019-07-19 20:10:45

Aw, it's a good result, but I was hoping Ryzen 3900X would be even closer to the 9900K. Well, at least you get extra cores, so perhaps it'll even out as more parts of AE get better multi-threading support.

Posted on 2019-07-19 21:30:52

I'm not surprised to see Puget Systems (principally an Intel vendor, when you actually look at their inventory proportionally) protect the Intel brand (and back inventory), against nearly unanimous findings otherwise across the World Wide Web. Here are a few flaws in their testing platform:
1. One great benefit of the Zen 2 architecture is native support for high-frequency DRAM; I installed 16gb x 2 of 3600 MHz DRAM and that was actually the budget choice; it runs very stable paired with my 3900x on an Asus Prime-Pro X570 motherboard. Puget admits that they damaged the capability of their DRAM for the AMD tests, to create "a level playing field." But Intel should be penalized as a CPU choice, for failing to support fast DRAM that is uber-cheap today.
2. Hiding in the test bench disclosure, is that the included cheapo stock cooler got used on the AMD, compared to the best-in-class Noctua on the Intel. Obviously, speed throttles according to temperature, and Puget knows this (and cannot verify that temperature had no impact on these results, even if they nonchalantly claim it).
3. Oddly, they gave double the amount of DRAM to the Intel test bed. Why? Again, Puget can claim it has no impact, but can't verify it.
4. Simply put, when you are motivated to be pro-Intel, you emphasize single-threaded tasks. Whether you focus on the single- or multi-threaded tasks inside of After Effects -- not to mention Adobe Media Encoder's functionality which is un-severable from an After Effects workflow -- that is extremely dispositive (and revealing of bias). Whatever kinds of things you do, are subjective until the point that you add bias.

I just find it hilarious and frankly vintage how the Intel Industrial P.R. machine has been squirming these days, after years of violating its own Moore's Law, owing to greed and throttling innovation guided by their accounting department. Intel got creamed by AMD, most of the universe has come around to admitting it, save for a few financially-motivated holdouts...

Posted on 2019-07-19 22:35:12

LOL... Did you miss Matt's Photoshop article yesterday, where AMD's new CPUs took 3 of the top 4 performance spots? Or my preview (full article coming next week) showing these chips roundly defeating Intel at CPU-based rendering? We do our best to make realistic testing and present unbiased results. Will it be perfect? No, but nothing in this world is :)

Also, Matt is working on a RAM speed article focusing on how much impact that has on Ryzen performance - so we'll see soon how much that impacts things.

Posted on 2019-07-19 22:40:10

This entire field of science holds ego and brand in irrelevance; results and methodologies are the only thing relevant. I listed four problems, just to begin with. They are unaffected factually by feelings, etc...

Posted on 2019-07-19 22:42:30

Sigh...

1) Matt is, as I said, testing RAM speed impact on performance with Ryzen. We aren't ignoring that. Personally, I am including 2666 and 3200MHz memory results in my articles - as those are the lower and upper bounds of AMD's official memory speed support for Ryzen 3rd Gen chips. Matt is going wider in his full RAM testing, coming soon, but opted to go with 3000MHz for these CPUs in this roundup. That is the max supported speed for four (4) single-rank modules on Ryzen 3rd Gen.

The 3600MHz stuff you said you're using, though? That is above AMD's official support spec, even if motherboard makers allow it (and indeed, AMD has shown higher speeds in their own marketing materials as well).

2) AMD's stock cooler is actually pretty decent, and I haven't seen any evidence of thermal throttling in my tests. We separate performance testing from qualification here, though, and we have not yet qualified a "better" (aftermarket) cooler for these chips. When we do, I expect we'll start to use that cooler up here in Labs.

3) The largest memory modules we have at speeds above 2666MHz are 16GB per module, thus forcing the 4 x 16 = 64GB limit on Ryzen. Had he used 32GB 2666MHz modules, people would instead have complained about the RAM speed. Damned if you do, damned if you don't :/

For my part, I did try to keep my tests at the same amount of RAM when possible - but that is because RAM amount can actually affect some of my benchmarks (especially photogrammetry). I don't think 64 vs 128GB has an impact on Matt's tests, but he would be able to speak to that more than I can.

4) We aren't motivated to be pro-Intel. Sure, everyone has some biases which they aren't even aware of... but we definitely don't try and tailor tests or results to favor any particular manufacturer. I'm sorry you think so little of us.

Posted on 2019-07-19 22:57:53
jerrytsao

You guys should ignore trolls like him, let them have fun in WCCFTech where all retards spamming under mama's basement, btw great tests as always.

Posted on 2019-07-20 11:10:01

You seem to have a bit of a vendetta against us for some reason, so I'm close to simply ignoring your posts from here on out rather than explaining the reasons behind our testing, but I'll give this one last shot.

First, official and unofficial RAM speed support are completely different things. Sure, you can use 3600 with Ryzen, but you can do that with Intel too (and in our testing, the performance gain is pretty close to the same with either X570 or Z390). And you may not have issues with your one system with higher RAM, but we build thousands of systems and I assure you, using higher frequency RAM than what is officially supported definitely makes systems less stable overall.

As for the coolers, the reason we used the Wraith cooler is because it is frankly an excellent cooler and is more than enough to prevent throttling at stock speeds, especially on the open air test beds we use.

The reason we used more RAM on Z390 is because you can get 32GB modules at DDR4-2666. In previous testing where we kept AMD at 2666, people thought we weren't giving AMD a fair shot, so we upped the RAM speed this time to give AMD a fairer look. Would you also say we skewed the testing in favor of AMD if we used 128GB of 2666 RAM?

We do have an article coming that looks at how RAM speed affects performance, however, and that uses identical RAM on each platform. None of our benchmarks benefit from more than 32GB of RAM though, so capacity differences don't affect performance in this testing.

Please, let's keep these comments as discussions rather than attacks. We really prefer not to ban people if we can help it, but if things are no longer constructive discussions, we will take that step.

Posted on 2019-07-19 23:04:53
Eric Marshall

They actually tested Ryzen using an overclocked memory configuration with memory that has an XMP profile with tight timings, while leaving Intel at stock memory clocks (slower) with much looser timings. If anything, they favored AMD with the memory selections by a wide margin.

Posted on 2019-07-21 02:37:38
Ramazan Doğan Eray

I noticed the same things at first sight in this article. The fan differences stood out at first glance, plus they insist on using lower RAM on R3 CPUs while claiming there won't be any effect. Sorry Intel, do whatever you want (real user benchmarks are already being published everywhere) - this time AMD is clearly the winner. As a nearly 25-year Intel user, I'm going to buy one of those amazing AMD CPUs for the first time in my life. I mean, come on, they are clearly much stronger than Intel's 9xxx series, have the newest tech with X570 boards, and are even cheaper. Even the cheapest 3600 is just around 7% weaker than the 9700K. It's amazing.

Posted on 2019-07-21 09:24:23

We actually have a RAM speed/capacity article coming out today or tomorrow where we look at that. I can tell you for 100% certain that our benchmarks do not benefit from having more than 32GB of RAM. In the real world more RAM can help of course (especially in AE since it lets you have more frames in RAM Preview), but it all depends on exactly what you are doing and since our benchmarks test relatively short comps, more RAM doesn't improve playback/export performance.

I absolutely agree that AMD is in a terrific position. Right now, the 9900K may be at the top (barely), but I'm sure there are plenty of BIOS/driver optimizations to come. At the same time, however, I wonder if Intel is going to have to adjust their pricing a bit so that they maintain their lead over AMD. Honestly, I hope so, because competition between Intel and AMD is always going to be a good thing for consumers!

Posted on 2019-07-22 17:23:55
Jig Serencio Navasquez

Thank you for this analysis. I now have a much clearer perspective on upgrading my PC next year, because I gave AMD a shot on their first-gen Zen CPUs and 90% of my work is in AE.

Posted on 2019-07-20 00:03:51
yezhacker

Finally, I was expecting this article! I thought the 9900k would be the best one for AE but I didn't think the difference between it and the R9 3900x would be so small. Surely the 3900x will crush the 9900k in Premiere...

As a 9900k system owner, I have been thinking of upgrading to the Ryzen 3900X, but the benefits of doing so would be negligible. I'm going to wait and see what Zen 2+ vs Ice Lake bring to the market in 2020 - maybe Intel decides to jump to PCIe 4.0 and AMD gets reliable Thunderbolt support.

I'll be waiting for your Premiere and RAM frequency articles - keep up the good work and thank you! =)

Posted on 2019-07-20 01:46:22
anonymous

Why do the Intel chips have such a great lead in the render node benchmark? Is it because they have double the RAM? If so, I would like to see the same benchmark with AMD and Intel at the same amount of RAM - 64GB of 2666 on both would be great. I don't care if AMD can support higher RAM speeds, I just want a fair, bona fide comparison. I'm also curious if the 128GB of RAM impacted the rest of the results heavily as well.

Posted on 2019-07-20 02:04:47

I was a little surprised by that too, but the AMD Threadripper system also had 128GB and it still came in behind Intel, even with almost twice as many cores on the 2990WX. I'm guessing it's just down to how After Effects works - it must not be as optimally threaded. Even among the Intel processors the order is odd, not directly following core count. :/

Posted on 2019-07-20 02:27:47
MarketAndChurch

Exactly. Adobe products are so well optimized for Intel and NVIDIA products that it is impressive AMD does as well as it does in spite of that. Would love to see this test performed on DaVinci Resolve to see how both parties fare on a fairer playing field.

Posted on 2019-07-20 05:33:23
Misha Engel

Adobe products are not optimized for anything, they are more a bunch of plugins that sometimes work and often don't.

Posted on 2019-07-20 13:03:16
Batt Mach

Yeah. I like DaVinci Resolve because it is better than Premiere Pro imo. But I don't think Fusion can replace After Effects.

Posted on 2019-07-21 21:25:57

Wait... did you really take the time to download my headshot, apply a warp, and create a new Disqus parody account just to post a completely normal and relevant comment? I'm not sure how I'm supposed to feel about this... You could have at least gone with "FatMatt" or something.

Posted on 2019-07-22 20:56:06
Batt Mach

Yes, I did take the time to do all that. I mean, you are the legendary Matt Batch, so I wanna be Batt Mach.

Posted on 2019-07-23 22:10:22

I regret making this, but this is all I think of when I see "Batt Mach"
https://uploads.disquscdn.c...

Posted on 2019-07-23 22:24:18
Batt Mach

LMAO. That's amazing

Posted on 2019-07-24 07:38:42

Honestly, I'm not sure why AMD doesn't do better for that either. It isn't the RAM capacity since our tests never ran out of RAM - the render would have failed if that had been the case. That is definitely something we want to look at closer in the future.

Posted on 2019-07-20 02:32:18
Miles

Will the 3950X beat out the 9900K with its slightly faster clock speed than the 3900X?

Posted on 2019-07-20 05:15:00

I don't know if that is something we can really even speculate about until it comes out to be honest. The .1GHz Turbo likely only applies for single-threaded workloads, and since the base clock speed is so much lower, it might end up being slower in applications like After Effects. At least, that is what I suspect, but real-world testing is going to be the only way to know for sure.

Posted on 2019-07-20 15:22:09
MarketAndChurch

This does make me a tad sad. I do use rendergarden, and will probably try out BGrender soon. Was hoping the 3900x would be an absolute monster with that on. Maybe it's a problem with those scripts? Or is an issue on Adobe's end.

Posted on 2019-07-20 05:35:50

Our Render Node benchmark is a great start, but I do think there is some work for us to do on it. One of the things I want to address is increasing the frame count of the test comps, since after about a dozen or so render threads the time to start up aerender becomes a not-insignificant portion of the total time. It also looks like the comp itself makes a huge difference in how much speedup you get. Effects in AE have a huge spread between the truly single-threaded and the ones that are actually very well threaded. If a comp uses mostly single-threaded effects, plugins like RenderGarden should show amazing speedups. But likewise, if it uses more parallel effects, it would show little gain since you are just spinning up extra threads that aren't necessary.

I think our test comps are right about in the middle - we went through all of our normal benchmark comps and used the ones that showed the best speedup, but they weren't really designed with this specifically in mind, so I think we can do more to show a "range of potential performance gains" if we target this better.

Posted on 2019-07-20 15:29:06
Hwgeek

I also think that Pro users should wait a few weeks until all the day-one launch bugs are fixed - many BIOS updates are coming out almost daily, so it's better to wait a little and then buy the best CPU/MB combo for your needs.
Also, regarding cooling - @William M George, good to know that the new Ryzen 3000 can actually boost to higher clocks if the cooling is good, around 100MHz for every 20C reduction even at stock settings (similar to NVIDIA GPU boost). So for your professional builds I am sure you're going to use the best Noctua coolers, since they will provide around 50~100MHz more while rendering.
*The stock cooler doesn't throttle the 3900X, but a better cooler that can reduce temps by 20C at load will benefit from ~100MHz higher boost.

Also waiting for your ASRock X570 with TB3 review - if TB3 works as well as on Z390, then ASRock boards will be very popular in your builds. https://uploads.disquscdn.c...
https://www.youtube.com/wat...

Posted on 2019-07-20 07:49:25
Misha Engel

This video gives a better overview of the temp. scaling starting at 5:51

https://youtu.be/WXbCdGENp5I?t=351

Posted on 2019-07-20 13:08:50

We are constantly testing and re-testing, but we do try to have day 1 (or as close to it) content on any major launch from AMD, Intel, or NVIDIA since people are always clamoring for results. Things definitely can change over time, however, so I completely agree that performance can be different weeks or months after launch.

In fact, I believe we are better about this than most review sites. A lot of them (not all, of course) tend to re-use old results in newer articles without actually mentioning it. Whenever we do new testing, we completely re-test everything. It definitely takes a lot longer, but since we are very specific in what we test, it is doable for us to manage.

Posted on 2019-07-20 15:33:07
brumbach

Given how bad After Effects is with memory management, I'd say that the Intel machines having twice the RAM skews the test considerably.
And it sure sounds weird - why not level the playing field by giving the same RAM setup to both CPU platforms? It's just inviting conspiracy theories.
Curious how the Premiere and Media Encoder tests will come out.
Threadripper is not R3, so who cares.

Posted on 2019-07-20 08:37:59

Our AE benchmarks need about 32GB of RAM so that you can actually render all the frames into RAM Preview or so you can spin up enough render threads in the new Render Node benchmark, but having more than that doesn't affect performance at all. Typically, we just max out the amount of RAM the system can support, but I admit it is a bit weird with the Ryzen platform. 32GB models (which would give 128GB total) are currently only available at DDR4-2666, but we've had so many people think that we are holding AMD back by using that speed of RAM, so we decided to up it to DDR4-3000. We could have gone all the way up to DDR4-3200, but then we technically would be limited to just two sticks of RAM (32GB total) if we wanted to stay in spec and that potentially could skew the results a bit. Even DDR4-3000 is technically bending our testing "rules" since you are only supposed to use that speed with single rank RAM (which typically is only available in 8GB modules)

We are going to have a RAM speed article coming out soon, and in that one we used the exact same RAM (even the same physical sticks) on each platform. If you compare the results across the articles, you would see that there is no significant difference between using 64GB total or 128GB total.

Posted on 2019-07-20 15:38:54
Evond

Thanks for this great article. Was the 9900k overclocked to 5ghz? I am thinking of making the jump from my 8700k to the 3900x and I'm trying to estimate my performance increase

Posted on 2019-07-20 10:05:33

Nope, all our testing is done at stock speeds. We rarely do any overclock testing since that isn't something we offer to our customers. Overclocking can be a great way to get extra performance if you are the kind of person who builds their own systems and likes to tinker, but our customers are mostly people who just need solid, reliable workstations.

Posted on 2019-07-20 15:41:08
M2018

Um, I'm not buying Matt's explanation for giving the AMDs only half the memory. If they have been testing other chips with 128GB in the past, one can surmise that the amount of memory is important in these tests. Yet they elected to gimp the new AMD chips with only half the memory (64GB)? Who cares if running with 128GB would have meant "only" 2666 MHz, if the test is bottlenecked by memory size? I wouldn't have posted this comment if Matt had clarified that 64GB vs 128GB doesn't matter. But he hasn't.

Posted on 2019-07-21 01:35:57
Eric Marshall

There is only a +/-13% spread in performance across the entire range of CPUs tested here (from 6 to 32 cores), and to make matters even more convoluted, the cheapest CPU in the whole bunch, a $200 3600, performed smack dab in the middle of the pack, surrounded by CPUs with way more cores and cost. My advice to anyone buying a machine for AE based on these results is to get a 3600/3700X and stash the savings for the day when Adobe updates this software to scale.

Posted on 2019-07-21 02:27:54

I stuck this on your comment on the Photoshop article, but for the benefit of anyone else reading this, here is a (slightly modified) copy of my response:

You are definitely right that at the high-end, there isn't a massive difference in performance between different CPUs. However, I can tell you that at least for our customers, even a 5-10% performance gain is often well worth the cost of the higher-end CPUs. I know our customers are not the average user, but for them, any investment that saves them time pays off incredibly quickly.

But, that is one reason we not only publish our thoughts on the results, but the raw benchmark results and scores as well. Our commentary is always going to be skewed towards our customer base - after all, that is the main reason we do this testing. The fact that it is helpful to the masses is great, and we see no reason to hide it, but I think many people don't quite understand that we are a high-end workstation manufacturer, and everything we do is geared towards helping our customers get the exact right system for their workflow.

Posted on 2019-07-21 04:31:15
Hwgeek

Yes, it's so funny to see the comments sometimes. The first time I learned about you was after watching a Barnacules Nerdgasm video, and it was a joy to see how professional you are as PC builders. If I were in the US I would for sure order my workstation PC from you :-).
When you are making money from your workstation PC, you want the fastest config you can afford, since it will make you more money. Time=Money!
Also, how do your customers feel about the possibility of the X399 platform letting them just swap the CPU for a new SKU with double the cores each year [1950X-2990WX, and soon 48C~64C] without building a new PC from scratch and wasting many, many hours installing all the software and configuring everything from the start?
Setting up a workstation PC with all your apps and configs takes a lot more time than installing a fresh Win10 on a gaming PC :-).

Posted on 2019-07-21 06:36:09

To be honest, while a lot of our customers like the idea of upgrading (it is often a topic of discussion with our consultants), it rarely ever actually happens. Easy things like storage or RAM do get upgraded, but basically anything that would require sending the system back to us for the upgrade (since most of our customers are not the kind of people who would do that themselves) tends to not happen. Time is money, and having to be without the system for even a few days completely offsets any performance gains they might see with an upgrade.

Far more often, they just get a completely new system. It may not be as cost effective in terms of dollars, but there is no downtime - and that alone means it actually ends up saving them more money in the end. Plus, then they have a backup system that they can use if their main system ever has issues.

Posted on 2019-07-22 17:18:49
Mid Pak

Thank you so much for this article. How can I compare my 6900K (not overclocked) with 3900X that I wish to buy? I'm using the 6900K with a GTX1080, 64GB RAM and all SSD's.

I use AE but also use Premiere Pro a lot, and learning Davinci to hopefully use it in the near future as well.

Posted on 2019-07-21 02:40:16

Best thing to do: run our benchmarks on your system and compare the results to the ones in our articles.

Premiere Pro benchmark: www.pugetsystems.com/go/PrB...
After Effects benchmark: www.pugetsystems.com/go/AeB...
DaVinci Resolve benchmark (actual benchmark download is coming soon!): www.pugetsystems.com/go/DrB...

This does mean it won't be a strictly CPU-only comparison since you will be using a different GPU, storage, etc. but it will be pretty close for Pr and Ae since they are largely CPU limited. Resolve will be the sticky one since many things are so heavy on the GPU. It should still give you a ballpark idea, however.

Posted on 2019-07-21 04:34:19
liao78

So now, if I use AE only, which CPU do you recommend for the best performance? Thank you!

Posted on 2019-07-21 15:33:28

Myself, I would use the 9900K. I expect AMD and Intel will flip-flop at the very top-end as BIOS/driver optimizations come out, but the 9900K is a more established platform, which means it should have fewer bugs to deal with. Give it 6 months, however, and you honestly probably won't be able to tell the difference between the 9900K and 3900X.

The main reason to use the 3900X, in my opinion, is if you plan to upgrade the CPU at a later date, since AMD platforms tend to make that easier. Another is PCI-E 4.0 support, which may be useful for a high-speed disk cache drive. We really don't know if it will be any better in the real world than the PCI-E 3.0 NVMe drives, but it probably will allow slightly more frames to be written to the cache, which may slightly improve overall Ae performance.

Posted on 2019-07-22 17:27:37
Siyabend

When can we expect Premiere Pro roundup?

Posted on 2019-07-22 11:08:05

Hopefully either today or tomorrow.

Posted on 2019-07-22 17:27:46
Mark Harris

Awesome stuff as always! With my 9900K at 5GHz on all cores, using Premiere for 4K editing (nothing fancy on the editing), do you think I can benefit from running 64GB? I know if I go with 32GB (4x8GB sticks) I can get better RAM speeds and timings than with 4x16GB, but I'm not sure if 64GB is actually helpful vs 32GB, which is still a good amount of RAM.

Posted on 2019-07-22 17:51:38

RAM is all about having enough. If your current workflow doesn't use more than 80% of your current RAM, there will likely be no benefit to getting more RAM. If you do use more than 80% of the max capacity, however, more RAM should help quite a bit.
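The 80% rule of thumb above can be written down as a tiny check. This is just a sketch of the heuristic from this reply, not an official guideline; the function name and threshold framing are my own. You would plug in the peak RAM usage you observe (e.g. in Task Manager) during a typical project.

```python
# Sketch of the "more than 80% of capacity" rule of thumb: if your workflow's
# peak RAM usage exceeds ~80% of installed RAM, more RAM should help; below
# that, extra RAM likely won't change performance.
def should_add_ram(peak_usage_gb, installed_gb, threshold=0.80):
    """Return True if peak usage exceeds `threshold` of installed RAM."""
    return peak_usage_gb > threshold * installed_gb

# Example: a workflow peaking at 28GB is over the line on a 32GB machine
# (80% of 32GB = 25.6GB), but has plenty of headroom on 64GB.
print(should_add_ram(28, 32))  # True
print(should_add_ram(28, 64))  # False
```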

As for RAM speed, I'm working on a post right now about it and for Premiere Pro, we saw basically no difference with the 9900K. Since you are overclocking, that may change things a bit, but in general faster RAM speed than what is natively supported is really only useful on AMD CPUs. So in most cases, using above DDR4-2666 on Intel just decreases reliability with minimal performance gain (usually less than 5%) to show for it. If you are looking to get every percent of performance, go for it, but I definitely wouldn't prioritize faster RAM over simply getting more RAM.

Posted on 2019-07-22 18:50:47
Mark Harris

Thanks for the information, that helps a lot! Looking forward to those RAM tests as well.

Posted on 2019-07-22 20:50:38

Just went up about 5 min ago actually! https://www.pugetsystems.co...

Posted on 2019-07-22 20:52:05
jvindas

Excellent article!!! Can we expect a roundup in AE for the newer GPUs that have been released? If so, could you please include the RX 580? I think it will be interesting to include for a couple of reasons. It will be the baseline GPU in the new $6000 Mac Pro, and it is one of the top GPUs you can pair with a Core i9-9900K in a 27" iMac, which ironically, with its i9-9900K, could outperform their own baseline Mac Pro, at least in After Effects. The other reason is that the RX 580 is the cheapest 8GB card right now. I'm currently using Photoshop and AE with integrated graphics, and the RX 580 is around $200 right now; I don't know if I will see a performance penalty if I buy one instead of an RTX 2060, which is around $100 more expensive.
Also, I don't know if I am the only one experiencing this issue, but when I try to zoom in at the individual benchmark results in this article, the results are illegible. It seems that the image for the benchmarks has a 1200 x 345 resolution. The individual benchmark image in the Photoshop article has a 5531 x 1892 resolution and the numbers look terrific when you zoom in.
Thanks for the time and effort you put into these articles, they have become something that I look forward to, keep up the good work!!!

Posted on 2019-07-23 01:00:16
aaaariel

As always, great article and so helpful. Thanks.
I have the 8700K on a Z370 platform, and I'm thinking of swapping it for a 9900K. Though my mobo is supposed to support it, I wonder if it might have an impact on performance. Any thoughts?

Posted on 2019-07-23 22:51:25
Flanders

Obviously you can't beat the 3700X from a price/performance point of view. It's less than 8% behind the 9900K for only 2/3 of its price. But given the performance of the 3900X in some of my use cases, I will go for that one. The ultimate all-rounder.

Posted on 2019-07-31 19:06:13

Keep in mind that the CPU is only one part of a whole system, so while the cost of a 3700X vs 3800X seems big when just looking at the chip price, it is a much smaller difference when you look at the cost of a whole system. Unless you are buying the CPU alone (if upgrading an existing Ryzen computer, for example) it may actually be more cost-effective to get the fastest model :)

https://www.pugetsystems.co...

Posted on 2019-07-31 21:40:53
Mark Harris

Actually, yes you can beat it: for example, the 9900K is faster in Photoshop and faster in Premiere, and that's not even taking into account the iGPU, which not only boosts its Premiere performance past the 3900X but is also an added feature AND value vs the AMD part. And of course any apps that favor clock speed will perform much better with the 9900K, not to mention gaming.
So yes, you can beat it, depending on your specific needs.

Posted on 2019-08-01 20:27:05
Patrik Lindahl

Wow! Great article! I have been waiting for this!
Regarding the surprising render node performance benchmark, how was the CPU utilization on the poorly performing higher core count systems?
For example, on the 32-core AMD system, was the CPU utilization really 100% during the multi-node render?

Also, how does the benchmark work when launching new render nodes: is it one render node per core, or one per every other core? Is it a ramp-up?
I haven't tested with your benchmark tool yet, but my own tests suggest that render node counts scale very differently depending on the type of project you use. For example, a project that uses GPU rendering will sometimes hardly scale at all with multiple render nodes, but when I turn off GPU rendering the renders usually scale much more linearly.
Some very simple projects that don't use many effects will scale very linearly and be limited mostly by disk and/or network performance, but other, more advanced projects scale badly.
The way I ran the tests, I had a few different projects and scaled the render nodes up from 1, one at a time, until I couldn't see any performance gain. I did this twice, once with GPU on and once with GPU off.
Also, the memory settings you give each node can be very important, especially as the render node count starts to come up a bit. In my experience, the actual rendering pipeline can work very efficiently even with a limited amount of RAM per node. I might be wrong, but it seems to me that it unnecessarily caches a lot of frames in RAM for preview purposes that just sit there unused. In my testing, I have had decent results splitting the RAM equally over all render nodes by using the -mem_usage flag with both the cache and max values.
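For anyone wanting to try the equal-split approach described above, here is a minimal Python sketch that builds the command lines for N aerender instances, each capped at an equal share of RAM via `-mem_usage` (which takes an image cache percentage and a max memory percentage). The project path and comp name are placeholders, and actually launching the processes (e.g. via `subprocess.Popen`) is left out.

```python
# Sketch: divide memory equally across N aerender render nodes using the
# -mem_usage flag, as described in the comment above. "project.aep" and
# "Main" are hypothetical placeholders for your own project and comp.
def build_aerender_commands(num_nodes, project="project.aep", comp="Main"):
    # aerender expresses memory limits as percentages, so each of 4 nodes
    # gets 100 // 4 = 25% for both the image cache and max memory values.
    share = 100 // num_nodes
    commands = []
    for _ in range(num_nodes):
        commands.append([
            "aerender",
            "-project", project,
            "-comp", comp,
            "-mem_usage", str(share), str(share),
        ])
    return commands

cmds = build_aerender_commands(4)
print(cmds[0])
```

Whether splitting evenly is optimal will depend on the project, per the scaling behavior described above; this just automates the bookkeeping.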

Posted on 2019-08-16 03:13:05