Adobe Lightroom Classic: AMD Ryzen 5000 Series CPU Performance

Written on November 5, 2020 by Matt Bach

TL;DR: AMD Ryzen 5000 Series performance in Lightroom Classic

In the past, there were arguments for using an Intel processor for Lightroom Classic if you wanted to optimize for active tasks like scrolling through images, but with the new Ryzen 5000 Series CPUs, AMD takes a solid lead no matter the task. We saw some odd performance issues with the Ryzen 9 5950X, but the Ryzen 7 5800X and Ryzen 9 5900X beat the Intel Core i9 10900K by a solid 14% and 21% respectively, while the Ryzen 5 5600X outperforms the similarly-priced Intel Core i5 10600K by a slightly smaller 11%.

Introduction

Over the last few years, AMD has been making great strides with their Ryzen and Threadripper processors, often matching - or beating - the performance of similarly priced Intel options. AMD has had a strong lead in Lightroom Classic for passive tasks like exporting, but Intel managed to maintain a small advantage for active tasks like scrolling through images and switching between modules.

With the launch of AMD's new Ryzen 5000-series processors, however, AMD is very likely to take a solid lead over Intel in Lightroom Classic no matter what task you are looking at. AMD hasn't added any more cores to their new line of processors, but among other things, they are touting a 19% IPC (instructions per clock) improvement. In theory, this could translate to almost a 20% performance increase over the previous generation, although the gain will likely depend heavily on the application.
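As a rough sanity check on that math, per-core performance scales approximately with IPC times clock speed. A back-of-the-envelope sketch using AMD's published boost clocks for the Ryzen 9 3900XT and 5900X:

```python
# Back-of-the-envelope check: per-core performance ~ IPC x clock speed.
# Boost clocks are AMD's published specs for these two parts.
ipc_gain = 1.19          # AMD's claimed Zen 3 IPC improvement over Zen 2

zen2_boost_ghz = 4.7     # Ryzen 9 3900XT max boost
zen3_boost_ghz = 4.8     # Ryzen 9 5900X max boost

uplift = ipc_gain * (zen3_boost_ghz / zen2_boost_ghz) - 1
print(f"Theoretical single-core uplift: {uplift:.1%}")
# -> ~21.5%; real applications rarely see the full IPC gain, which is
#    why measured results land closer to 10% in Lightroom Classic
```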

AMD Ryzen 5000-series for Adobe Lightroom Classic

In this article, we will be examining the performance of the new AMD Ryzen 5600X, 5800X, 5900X, and 5950X in Lightroom Classic compared to a range of CPUs including the Intel 10th Gen, Intel X-10000 Series, AMD Threadripper 3rd Gen, and the previous generation AMD Ryzen 3000-series processors. If you are interested in how these processors compare in other applications, we also have other articles for Premiere Pro, After Effects, Photoshop, and several other applications available on our article listing page.

If you would like to skip over our test setup and benchmark sections, feel free to jump right to the Conclusion.


Test Setup

Listed below are the specifications of the systems we will be using for our testing:

AMD Ryzen Test Platform
CPU: AMD Ryzen 9 5950X ($799)
     AMD Ryzen 9 5900X ($549)
     AMD Ryzen 7 5800X ($449)
     AMD Ryzen 5 5600X ($299)
     AMD Ryzen 9 3950X ($749)
     AMD Ryzen 9 3900XT ($499)
     AMD Ryzen 7 3800XT ($399)
     AMD Ryzen 5 3600XT ($249)
CPU Cooler: Noctua NH-U12S
Motherboard: Gigabyte X570 AORUS ULTRA
RAM: 4x DDR4-3200 16GB (64GB total)

Intel 10th Gen Test Platform
CPU: Intel Core i9 10900K ($488)
     Intel Core i7 10700K ($374)
     Intel Core i5 10600K ($262)
CPU Cooler: Noctua NH-U12S
Motherboard: Gigabyte Z490 Vision D
RAM: 4x DDR4-3200 16GB (64GB total)

AMD Threadripper 3rd Gen Test Platform
CPU: AMD TR 3990X ($3,990)
     AMD TR 3970X ($1,999)
     AMD TR 3960X ($1,399)
CPU Cooler: Noctua NH-U14S TR4-SP3
Motherboard: Gigabyte TRX40 AORUS PRO WIFI
RAM: 4x DDR4-3200 16GB (64GB total)

Intel X-10000 Series Test Platform
CPU: Intel Core i9 10980XE ($979)
     Intel Core i9 10940X ($784)
     Intel Core i9 10920X ($689)
     Intel Core i9 10900X ($590)
CPU Cooler: Noctua NH-U12DX i4
Motherboard: Gigabyte X299 Designare EX
RAM: 4x DDR4-2933 16GB (64GB total)

Shared PC Hardware/Software
Video Card: NVIDIA GeForce RTX 3080 10GB
Hard Drive: Samsung 970 Pro 1TB
Software: Windows 10 Pro 64-bit (version 2004)

*All the latest drivers, OS updates, BIOS, and firmware applied as of October 26, 2020

In order to see how each of these configurations performs in Lightroom Classic, we will be using our PugetBench for Lightroom Classic V0.92 benchmark and Lightroom Classic version 10.0. This benchmark version includes the ability to upload the results to our online database, so if you want to know how your own system compares, you can download and run the benchmark yourself.

One thing we do want to note is that the pre-launch BIOS available for Ryzen motherboards uses AGESA 1.0.8. Soon after launch, there should be an update adding support for AGESA 1.1.0, which is supposed to increase the performance of each Ryzen CPU by another few percent.

Benchmark Results

While our benchmark presents various scores based on the performance of each test, we also like to provide the individual results for you to examine. If a specific task is a bottleneck in your workflow, examining the raw results for that task will be much more useful than the scores our benchmark calculates.
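For context when reading these numbers: the overall and category scores are normalized against a reference system, so a Core i9 9900K lands at 100 and a score of 115 means roughly 15% faster than that reference. A simplified sketch of this kind of scoring (the actual PugetBench task list and weighting differ):

```python
# Simplified illustration of reference-normalized scoring. The real
# PugetBench task list and weighting differ; all times here are made up.
from statistics import geometric_mean

# seconds per task on the reference system (Core i9 9900K = score of 100)
reference_times = {"export": 120.0, "smart_previews": 90.0, "import": 45.0}

# hypothetical times measured on the system under test
measured_times = {"export": 100.0, "smart_previews": 80.0, "import": 40.0}

# each task scores 100 * (reference / measured), so faster-than-reference
# results land above 100
per_task = [100 * reference_times[t] / measured_times[t] for t in reference_times]
overall = geometric_mean(per_task)
print(f"Overall score: {overall:.1f}")  # ~115 with these made-up times
```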

Feel free to skip to the next sections, where we analyze these results for a wider view of how each configuration performs in Lightroom Classic.

AMD Ryzen 5000-series Lightroom Classic Benchmark Results

Benchmark Analysis: AMD Ryzen 5000-series vs Intel 10th Gen

Overall, the new Ryzen 5000-series CPUs from AMD are terrific for Lightroom Classic. You can still get more overall performance from the (significantly) more expensive Threadripper processors, but the Ryzen 9 5900X, in particular, is not too far behind those beefier models.

The only oddity in our testing was that the Ryzen 9 5950X ended up performing worse than the 5900X - in large part due to some performance issues with the "Build 500x Smart Previews" tests. We confirmed these results multiple times, and for whatever reason, Lightroom Classic simply doesn't like the 5950X at the moment. Future software or BIOS updates could of course fix this issue, although we saw the same behavior with the Ryzen 9 3900X and 3950X, so this is unlikely to be a simple bug in the launch BIOS.

AMD Ryzen 5000-series vs 3000-series in Lightroom Classic

These new processors are all roughly 10% faster than the previous generation AMD Ryzen 3000-series CPUs they are replacing. They do carry a 10-20% higher price tag as well, although in absolute terms that works out to only about a $50 increase, which is fairly small as a share of the overall cost of a computer.

Even this relatively small 10% increase in performance allows the modest Ryzen 5 5600X to beat every single Intel processor we tested, although it only snuck by the Intel Core i9 10900K by a few percent. Comparing the 5600X to the more similarly-priced Intel Core i5 10600K, the 5600X is a decent 11% faster in our Lightroom Classic benchmark.

With the higher-end Ryzen models, we are looking at roughly a 14% increase in performance over the Core i9 10900K with the Ryzen 7 5800X, or a 21% increase with the Ryzen 9 5900X. It is also worth noting that the 5800X and 5900X outperformed the 10900K not only in the passive tasks but the active ones as well, which was where Intel was previously maintaining a slight edge. This effectively puts AMD in the lead over Intel no matter what your budget is and what parts of Lightroom Classic you want to optimize for.

Are the AMD Ryzen 5000-series or Intel Core 10th Gen better for Lightroom Classic?

In the past, there were arguments for using an Intel processor for Lightroom Classic if you wanted to optimize for active tasks like scrolling through images, but with the new Ryzen 5000-series CPUs, AMD takes a solid lead no matter the task. We saw some odd performance issues with the Ryzen 9 5950X, but the Ryzen 7 5800X and Ryzen 9 5900X beat the Intel Core i9 10900K by a solid 14% and 21% respectively, while the Ryzen 5 5600X outperforms the similarly-priced Intel Core i5 10600K by a bit smaller 11%.

If you were to compare AMD and Intel processors based on price alone, AMD is anywhere from 11% to 30% faster than Intel. However, we do need to make clear that since the Intel X-series CPUs are not as strong in Lightroom Classic as the lower-priced Intel 10th Gen CPUs, that comparison is somewhat unfair to Intel. There is almost no reason to use the X-series when the Core i9 10900K is both less expensive and faster, so the true performance lead for the AMD Ryzen 5000-series tops out closer to 20%.
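To make the price-normalized comparison concrete, the arithmetic looks something like the sketch below. Prices are launch MSRPs; the scores are placeholders chosen only to mirror the ~14% gap described above, not our exact results:

```python
# Performance-per-dollar sketch. Prices are launch MSRPs; the "score"
# values are illustrative placeholders, not our measured results.
cpus = {
    "Ryzen 7 5800X":  {"price": 449, "score": 1140},  # ~14% above the 10900K
    "Core i9 10900K": {"price": 488, "score": 1000},
}

for name, c in cpus.items():
    print(f"{name}: {c['score'] / c['price']:.2f} points per dollar")

# -> 5800X: ~2.54 points/$ vs 10900K: ~2.05 points/$, i.e. the 5800X
#    is faster outright AND cheaper, so it wins on both metrics
```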

AMD Ryzen 5000-series vs Intel in Lightroom Classic

Another factor that has changed recently is that the Gigabyte B550 Vision D motherboard - with fully certified Thunderbolt support - has launched and passed our internal qualification process. One of the reasons we sometimes used the Intel 10th Gen CPUs over Ryzen when the performance was similar was because only Intel platforms had passed our qualification process for Thunderbolt. With this motherboard, Thunderbolt support is no longer as much of a factor when choosing between Intel 10th Gen and AMD Ryzen CPUs in our workstations.

Keep in mind that the benchmark results in this article are strictly for Lightroom Classic and that performance will vary widely in different applications. If your workflow includes other software packages (we have similar articles for Premiere Pro, After Effects, and Photoshop), you need to consider how the system will perform in those applications as well. Be sure to check our list of Hardware Articles to keep up to date on how all of these software packages - and more - perform with the latest CPUs.


SMT OFF?

Posted on 2020-11-05 15:09:04

No, SMT (and HT on Intel) is on. Turning off SMT can improve performance a bit in tasks like exporting, but in the last few versions of LrC it also lowers performance in active tasks, so it is currently better overall to leave SMT on.

Posted on 2020-11-05 16:18:12

It would be good to compare these results with SMT off. Even if some processes are slower, exporting and building previews can be twice as fast.

Posted on 2020-11-08 18:11:22
Jason Ekholm

I see that the 'active score' benchmarks are all under 100. And that '100' benchmark was established with a 9900k system.

At a glance then it would appear that all of the systems reviewed here are notably slower than that old 9900k test rig - which is clearly incorrect.

Is the correct interpretation then that Lightroom has become ~13% slower between versions 8.4 and 10.0 in the 'active' test (assuming the 10700k in these results is on par with that old 9900k)? That seems huge considering we only see 5-15% gains between CPU generations.

Posted on 2020-11-07 06:41:41

Yep, it looks like performance has gotten worse for the active tasks we are testing since we first made the reference scores. It probably isn't just Lightroom, though; Windows updates and drivers also have an impact on performance - and sometimes not in a good way. The devs have also been putting a ton of work into improving many aspects of LrC that we haven't figured out a good way to test, like brush/slider lag and things like that. So, it is possible the work they are doing there is negatively affecting the tasks we can test, but LrC is still way better overall for the end users.

Posted on 2020-11-09 17:18:39
John Stewart

I notice that you perform the Lightroom benchmarks with 3200MHz CL22 memory. However, your testing (Messy Memory Speed Standards) showed an overall increase of 9% in Lightroom compared with slower 2666MHz memory. I'm currently speccing up a new desktop build to mostly run Lightroom and Photoshop, and have read elsewhere that there are good gains in memory performance from using 3600MHz RAM with CL16 or CL18 timings. I was wondering if you had performed any testing using this faster memory, and whether further big gains are achievable for a modest investment. It does seem that Lightroom Classic in particular is sensitive to memory speed and could benefit from faster RAM.

Posted on 2020-11-10 14:11:50

We actually just put a post up about why we are shifting to DDR4-3200 RAM on (most of) our systems: https://www.pugetsystems.co... . I really wouldn't advise going above 3200MHz though. Until recently, even 3200MHz didn't meet our stability standards, and going above that is definitely going to cause more system instability. It might not be much if you are lucky, or it might result in numerous random bluescreens or application crashes.

You are of course free to do whatever you want with your own system, but we've always taken the stance that reliability is more important than getting a bit more performance, since in a production environment, system crashes and lost work cost far more money than losing a few percent of performance.

Posted on 2020-11-10 18:55:09
Jason Denson

Here’s my thought and I’ll try not to ramble. I found these past couple of benchmarks incredibly helpful in choosing my next CPU. I’ve narrowed it down to 2 top contenders, the TR 3960X and the Zen 5900X. Right now I’m running an Intel i7-6850 and lightroom pretty much locks up my system (100% CPU Usage) when I’m importing and creating previews or exporting. Granted, I’m importing thousands of RAW files at a time and exporting hundreds of JPG’s (the life of a family photographer on the beach). Quite often I have to let my computer sit there over night while it churns out previews… I don’t want to do that. So my questions are: 1) given everything I’ve told you, which should I go with? 2) Should I expect my PC to continue to lock up with either of these CPU’s? 3) Adobe CLAIMS it only uses 6 cores, if that’s the case, do we expect them to start utilizing more cores in the future? And 4) Lastly, AMD is saying that the TR socket will be compatible with future Treadrippers… If the 2 CPU’s are close already, does that push the TR over the top to make it that worth the added expense? Thanks for the read!

Posted on 2020-11-12 07:49:14

1) Given the headaches you describe, the Threadripper 3960X is probably the way to go.
2) The system shouldn't lock up, but if it does, you can always do some trickery with Windows affinity so that Lightroom isn't allowed to use a handful of CPU cores. That shouldn't happen though, since Lightroom likely won't ever use all your cores.
3) I don't think there is an arbitrary limit like that. I would believe that scaling drops off sharply after 6 cores though. As for the future, only the developers could tell you.
4) No way to really know. AMD has said before that Threadripper wouldn't change sockets, then they changed to TRX40 with the latest CPUs. Same with the new Ryzen - as far as I know, AMD hasn't made an official announcement, so there is no way to know for sure. Generally though, most people don't upgrade their CPU every generation since the performance gains usually aren't enough to warrant it. So, personally, I wouldn't worry too much about future socket compatibility, especially with DDR5, PCI-E Gen 5, and who knows what else coming in the next several years.
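And if you do end up needing that affinity trick from 2), here is a minimal sketch using Python's psutil, assuming the process is named Lightroom.exe (check Task Manager for the actual name):

```python
# Minimal sketch: restrict Lightroom Classic to a subset of logical cores
# so a heavy import/export can't saturate the whole machine. Assumes the
# process is named "Lightroom.exe" (check Task Manager); run after LrC opens.
import psutil

allowed_cores = list(range(12))  # first 12 logical cores, rest left free

for proc in psutil.process_iter(["name"]):
    if proc.info["name"] == "Lightroom.exe":
        proc.cpu_affinity(allowed_cores)  # same as Task Manager's "Set affinity"
        print(f"Pinned PID {proc.pid} to cores {allowed_cores}")
```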

Posted on 2020-11-12 17:30:03

Sadly, the benchmark doesn't cover one of the most important metrics for a real-life photographer - how long it takes to import RAWs with Standard/1:1 previews generated, so I know which CPU will let me start working as soon as possible. Not only is import probably more important, with a bigger impact on the workflow, than export, but one usually exports fewer images than one imports, and by export time the work is already done. Import also isn't always as straightforward, as fast, or as fully utilized across cores as export is.
It also helps import preview generation and the Develop module if you make and apply a preset with Sharpening and Noise Reduction set to 0. You can apply those after you're done, as a batch.
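For anyone who wants to script that preset rather than build it by hand, here is a rough sketch that writes a minimal Develop preset as an XMP file. The attribute names follow Adobe's camera-raw-settings (crs) namespace, but treat the exact structure as an assumption and compare it against a preset exported from your own LrC install:

```python
# Rough sketch: write a minimal Develop preset that zeroes out Sharpening
# and Noise Reduction. Attribute names follow Adobe's camera-raw-settings
# (crs) namespace, but verify against a preset exported from your own
# LrC install before relying on this exact structure.
preset = """<x:xmpmeta xmlns:x="adobe:ns:meta/">
 <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
  <rdf:Description xmlns:crs="http://ns.adobe.com/camera-raw-settings/1.0/"
   crs:PresetType="Normal"
   crs:HasSettings="True"
   crs:Sharpness="0"
   crs:LuminanceSmoothing="0"
   crs:ColorNoiseReduction="0"/>
 </rdf:RDF>
</x:xmpmeta>
"""

with open("FastImport.xmp", "w", encoding="utf-8") as f:
    f.write(preset)
```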

Posted on 2020-11-14 19:23:08

We used to test 1:1 preview generation, but it wasn't something supported by the API so we had to drop it when we made the benchmark available for public download. From what I remember, the difference between various CPUs for 1:1 previews was pretty close to what we see with generating smart previews. So if import with previews is a big concern, I would look at the scores for the Import and Smart Preview tests. The "Passive Score" does a pretty good job of summarizing performance for tasks like that as well.

There are quite a few things we want to test in LrC, but unfortunately the API is way behind other apps like Photoshop and Premiere Pro. We've tried to work with the devs to add the functionality we need, but it can be hard to find time to add features that help us when they are busy tackling bugs and adding features that are useful for their end users.

Posted on 2020-11-16 17:53:50

Wanted to ask - will there be a benchmarking series where the new AMD GPUs are used in tandem with the new CPUs with SAM on? I am curious whether there is any performance gain to be found outside of games.

Posted on 2020-11-21 19:34:26

We might do something for other apps that use the GPU more (Premiere Pro, After Effects, DaVinci Resolve, etc), but I doubt we will invest the time to test Lightroom Classic. Maybe once we are able to test the features that use the GPU a bit better, but for now, there is almost no chance our testing would show any difference.

Posted on 2020-11-23 20:11:58
FX-8350

Hello Puget Systems,

great job again with your online database, but! :-)

- There is no information about screen resolution
- There is no information about RAM CL timings

Both pieces of missing information are very important for the end result. The difference can be more than 40%!

Is there a planned solution for this problem in the near future?

Best regards,

FX-8350

Posted on 2020-11-29 08:58:16

CL timings are really hard (impossible, from what I have found so far) to get directly at the level we have access to through the various Adobe APIs. Frequency can be grabbed through WMI or through the command line, but timings would need an external application, which we have tried to avoid since it makes cross-platform support much harder. Screen resolution is easier, but it is also more complicated than it sounds. Multiple displays can make it really hard to tell what the actual screen resolution is if different display resolutions are in use, as do different DPI settings. So we would need to be able to detect which display the app is running on, which I don't believe we can do very easily.
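Just to illustrate what "detect the display the app is on" would involve, here is a rough Win32 sketch in Python (the window title is an assumption, and per-monitor DPI handling is glossed over):

```python
# Rough sketch of per-window display detection on Windows via ctypes.
# The window title is an assumption, and per-monitor DPI is glossed over.
import ctypes
from ctypes import wintypes

user32 = ctypes.windll.user32
user32.SetProcessDPIAware()  # report physical pixels, not DPI-scaled ones

hwnd = user32.FindWindowW(None, "Lightroom Classic")   # assumed window title
MONITOR_DEFAULTTONEAREST = 2
hmon = user32.MonitorFromWindow(hwnd, MONITOR_DEFAULTTONEAREST)

class MONITORINFO(ctypes.Structure):
    _fields_ = [("cbSize", wintypes.DWORD),
                ("rcMonitor", wintypes.RECT),
                ("rcWork", wintypes.RECT),
                ("dwFlags", wintypes.DWORD)]

info = MONITORINFO()
info.cbSize = ctypes.sizeof(MONITORINFO)
user32.GetMonitorInfoW(hmon, ctypes.byref(info))
width = info.rcMonitor.right - info.rcMonitor.left
height = info.rcMonitor.bottom - info.rcMonitor.top
print(f"App window is on a {width}x{height} display")
```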

Ideally, I would love to have both, as well as whether the CPU and GPU are overclocked. Maybe in the future we will figure out reliable ways to check for all of those things, but for now we are more concerned with making the benchmarks reliable and ensuring they test everything we want. After that, we can try to track RAM timings, screen resolution, overclocking, and a number of other aspects of the system.

The difference shouldn't be more than 40% though. In our testing of RAM timings, for example, we only saw around a 5% max difference between RAM speeds: https://www.pugetsystems.co... . For display resolution I don't have an article to back it up (yet), but from what I've seen the difference is at most 5-10%. That is definitely enough to skew results, which is why our own internal testing with locked-down configurations is always going to be more reliable than publicly uploaded results.

Posted on 2020-11-30 20:12:08
FX-8350

Hi Matt,

Thank you for such a competent and detailed reply. I see, it's difficult and very interesting.

At first look it seems like there can't be more than 5%, but :-):

RAM
Dual rank -> Single rank
2 DIMM -> 4 DIMM
Daisy Chain -> T-Topology
2666MHz -> 3600MHz -> 4400MHz
CL 19-19-19-19 -> CL 14-15-15
AMD -> Intel

+

Resolution
1920 x 1080 -> 2560 x 1440 -> 3840 x 2160

In my case, switching between two monitors (separately connected and separately tested on the same PC), going from 1920 x 1080 to 2560 x 1440 (AMD RX 570 4GB) gives me a difference of 17% in some important tasks!

Between the worst and best configurations, I bet there is more than a 40% difference (LR Classic and PS). I also know Puget Systems' recommendations for RAM frequency, but in the real world there are many people out there with 3600MHz or more - see the Puget Systems database results :-) My working settings are a moderate CL 16-18-18-38 at 2933MHz.

For the crowd - the overall results for active and passive tasks are indicators. If you take the results seriously, you must look for your workflow's results in the details. For me, in my example, switching between modules in Lightroom and scrolling in the Develop module are very important, as is 1:1 rendering. In Photoshop, "opening a file" or "filter results" are very important to me, and so on...

Lightroom is sooo good and simultaneously sooo bad :-) I love and sometimes edit my files in Capture One too, but I find Lightroom a little bit better for my organizational tasks. What is also interesting is Affinity Photo as a serious alternative to Photoshop in some workflows (not all). Is there a way to run the same benchmark as Photoshop to compare both - for example, a new PS Action compared with a new AP Macro? It seems like Affinity Photo is much faster in some tasks. Or is there a "political correctness" problem with Adobe? I know, I know, it's a little bit malicious :-)

Big THX again for your invested time, very kind of you.

Best regards,

FX-8350

Posted on 2020-12-01 08:41:10

Interesting, that is a much larger difference than we have seen. I'm sure the hardware itself has an impact as well. Maybe it is a bigger deal on older GPUs like your RX 570?

Comparing applications is something we don't really try to do since there is so much more to why you would use one application over another than straight performance. And as knowledgeable as we are about workflows, we are likely never to be as good as the people who are deep in these apps every day using them to make a living. When we can, we try to have many of the tests be similar, but we first and foremost want to measure the performance for "typical" workflows in each app separately.

It also gets a bit hairy for us since we are partners with many of these companies, and very few of them seem to welcome head-to-head comparisons. I don't think that is because any of them are scared, but rather because it is much harder to place a value on workflow optimizations than it is for things like "how long does this effect take to apply?".

Posted on 2020-12-01 16:58:52