Lightroom Classic CPU Roundup: AMD Ryzen 3rd Gen, AMD Threadripper 2, Intel 9th Gen, Intel X-series

Written on October 16, 2019 by Matt Bach

Introduction

If you regularly follow our content, you may have noticed that several months ago we published a range of CPU comparison articles looking at Photoshop, Premiere Pro, After Effects, and many other applications. Unfortunately, Lightroom Classic was not something we were able to test at that time due to two primary factors:

  1. Our benchmark was not yet as robust as we wanted, and we needed more time to develop it.
  2. We discovered an issue with Intel Hyperthreading and AMD SMT that causes low performance for some tasks.

The good news is that we finally have our benchmark updated to the point that we are comfortable resuming testing. Our new test process builds on the old one, adding testing for Sony .ARW files along with vastly improved testing for "active" tasks like scrolling through images and brush lag.

On the flip side, the Intel Hyperthreading (HT) and AMD SMT issue is still very much present - you can read the details about it in our support post Hyperthreading & SMT causing low performance in Lightroom Classic. We have reported the issue to all the relevant parties, but we are not sure how long it will take for a permanent solution to be put in place. Since that is up in the air, we decided to go ahead with this CPU roundup, since our testing uncovered some very interesting results.

Because the HT/SMT issue is so dramatic - it almost doubles export times in some cases! - we will base the majority of our conclusions on results with HT/SMT disabled in the instances where doing so improves performance. Disabling HT/SMT does not improve performance on every CPU, however, so we will clearly mark in the charts when results were taken with HT/SMT off. In addition, the "Benchmark Results" section includes a separate table with the results with HT/SMT enabled on every CPU that supports it.

AMD Ryzen 3rd Gen Lightroom Classic Performance

In this article, we will primarily be looking at how well the new Ryzen 3600, 3700X, 3800X, and 3900X perform in Lightroom Classic. Alongside those, we will include results for a few previous-generation Ryzen CPUs as well as the latest AMD Threadripper, Intel 9th Gen, and Intel X-series CPUs.

If you would like to skip over our test setup and benchmark sections, feel free to jump right to the Conclusion.

Looking for a Lightroom Workstation?

Puget Systems offers a range of workstations that are tailor-made for your unique workflow. Our goal is to provide the most effective and reliable system possible so you can concentrate on your work and not worry about your computer.

Configure a System!

Test Setup & Methodology

Listed below are the specifications of the systems we will be using for our testing:

Shared Hardware/Software
Video Card: NVIDIA GeForce RTX 2080 Ti 11GB
Hard Drive: Samsung 960 Pro 1TB
Software: Windows 10 Pro 64-bit (version 1903)
          Lightroom Classic CC 2019 (Ver. 8.4.1)
          Puget Systems Lr Benchmark V0.2 BETA

*All the latest drivers, OS updates, BIOS, and firmware applied as of July 2nd, 2019

While most of our PC test platforms use DDR4-2666 memory, we decided to move to DDR4-3200 for the AMD Ryzen platform, which differs from our past testing where we used DDR4-3000 for Ryzen. The reason is simply that we previously did not have Ryzen fully qualified as an entire platform and were not comfortable running the RAM beyond the official specifications. Now that we have, we are planning to offer DDR4-3200 to our customers once JEDEC 3200MHz RAM is readily available, so we did our testing at that speed.

We did some testing comparing DDR4-2666 to DDR4-3200 on both Intel and AMD CPUs, but the only place it measurably increased performance was when importing and exporting images. This does mean our testing is a bit biased in favor of Ryzen, since we stuck with DDR4-2666 for the Intel and AMD Threadripper platforms, but as you will see in the final results, the extra performance in a couple of tests does not change our conclusions, so we are not too worried about it.

For each platform, we used the maximum amount of RAM that is both officially supported and available at the frequency we tested. This limits the Ryzen platform to 64GB of RAM while the other platforms had 128GB, but since our Lightroom Classic benchmark never needs more than 32GB of RAM to run, this does not affect performance at all.

The benchmark we will be using is the latest version of our (as yet unreleased) Lightroom Classic benchmark. Full details on the benchmark are available at:

Benchmark Results

While our benchmark presents various scores based on the performance of each test, we also wanted to provide the individual results. If a specific task is the main bottleneck in your workflow, examining the raw results for that task will be much more useful than the overall scores.
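
To make the relationship between raw results and scores concrete, here is a minimal sketch of how per-task times could roll up into an overall score. The reference times and the simple averaging below are hypothetical - our benchmark's actual formula is not published in this article - but the general idea of normalizing each task against a baseline is the same.

    # Hypothetical illustration only - NOT the actual formula used by the
    # Puget Systems Lr benchmark. Task times are in seconds (lower = better).
    results = {"export": 240.0, "smart_previews": 180.0, "scroll_library": 12.0}

    # Made-up reference times for a baseline system
    reference = {"export": 300.0, "smart_previews": 200.0, "scroll_library": 10.0}

    def task_score(task: str) -> float:
        """100 means 'same speed as the reference system'; higher is better."""
        return 100.0 * reference[task] / results[task]

    overall = sum(task_score(t) for t in results) / len(results)
    print(f"Overall score: {overall:.1f}")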

As a reminder, due to the HT/SMT performance issue in Lightroom Classic, our analysis from this point forward will be done with HT/SMT disabled whenever it results in a higher overall score. If you want to compare the scores between each CPU with HT/SMT enabled when supported, the second image below includes those results.

Feel free to skip to the next section for our analysis of these results if you would rather get a wider view of how each CPU performs in Lightroom Classic.

Lightroom Classic Benchmark Analysis

One last reminder: due to the HT/SMT performance issue in Lightroom Classic, our analysis from this point forward will be done with HT/SMT disabled whenever it resulted in a higher overall score.

Overall, the new 3rd generation AMD Ryzen processors are a clear winner: on average, they were about 20% faster than a similarly priced Intel 9th Gen processor.

However, if you really dig into the results, there are two primary tasks where Ryzen blows away Intel that are driving the higher overall scores: exporting and building smart previews. In these tasks, the Ryzen 9 3900X is about 80% faster than the Core i9 9900K, while the Ryzen 7 3700X/3800X are about 55% faster than the Core i7 9700K. On the "low" end, the Ryzen 5 3600 ranges from 70% to more than 2x faster than the Core i5 9600K! In fact, that means the Ryzen 5 3600 is faster than even the Core i9 9900K for these two tasks!
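
For clarity, "X% faster" throughout this article is derived from task times rather than scores. A quick sketch of that arithmetic (the example times are placeholders, not our measured results):

    # "Percent faster" from task times: a lower time means a faster CPU.
    # Example numbers are placeholders, not measured results.
    time_slower = 180.0  # e.g. a hypothetical Core i9 9900K export, in seconds
    time_faster = 100.0  # e.g. a hypothetical Ryzen 9 3900X export, in seconds

    percent_faster = (time_slower / time_faster - 1.0) * 100.0
    print(f"{percent_faster:.0f}% faster")  # -> 80% faster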

These are absolutely amazing results, but it is worth pointing out that for most of the "active" tests, such as scrolling through images and switching between modules, the Intel 9th gen processors are a bit faster than AMD. The only exception to this is our new brush lag test where AMD holds a firm lead. All this means is that if you don't have a problem with longer export times and don't often use smart previews, Intel is likely to still feel a bit "snappier" in Lightroom Classic.

As far as the Intel X-series and AMD Threadripper processors go, there honestly isn't much to talk about. The X-series CPUs did fairly well in the passive tasks, but outside of a few specific tests, none of them were able to fully match the Ryzen 9 3900X. If you only use Lightroom occasionally they will certainly do the job, but they are definitely not optimal. At the same time, the AMD Threadripper CPUs are overall just not a great fit for Lightroom, especially the higher-end "WX" models, so we would recommend avoiding them if possible.

Are the Ryzen 3rd generation CPUs good for Lightroom Classic?

Absolutely! The 3rd generation Ryzen processors are terrific for Lightroom Classic and were on average about 20% faster than a similarly priced Intel 9th gen processor. And in some cases - primarily exporting and building smart previews - the Ryzen CPUs get close to twice the performance! You may want to skip over the 3800X since the 3700X performs almost exactly the same, but all the other models are great choices.

Whether you are looking for the best performance per dollar, or best overall, the 3rd generation Ryzen processors are currently it. The only caveat is that for many of the active tasks in Lightroom Classic (scrolling through images, switching between modules, etc.), the Intel 9th gen processors do still hold a slight lead. So, if your workflow involves culling through thousands of images, but only exporting a handful of them, there is an argument to be made for using an Intel 9th gen processor.

Keep in mind that the benchmark results in this article are strictly for Lightroom Classic. If your workflow includes other software packages (we have articles for Photoshop, Premiere Pro, After Effects, DaVinci Resolve, etc.), you need to consider how the processor will perform in all of those applications. Be sure to check our list of Hardware Articles for the latest information on how these CPUs perform with a variety of software packages.


Tags: Intel 9th Gen, Intel X-series, Intel vs AMD, AMD Ryzen 3rd Gen, AMD Threadripper 2nd Gen, Lightroom, Lightroom Classic
Behrouz Sedigh

I guess GameCache (32MB L3 on the Ryzen 3800X) plays an important role. The difference between the 2700X and 3800X is massive.

Posted on 2019-10-17 12:41:13
Angelscry

Too bad the Intel K chips weren't OCed in these tests. They are great overclockers. And yes I understand that you sell these systems to your customers at stock speeds. But still...

Posted on 2019-10-17 15:41:49
Tiq.Us

How many photographers out there OC their rigs? These are not gaming benchmarks - it's a different story here. I also don't think the result would be much different. I use Lightroom every day. The app is scalable; it uses all cores when importing and exporting. That is why the Ryzen 3900X blows away the 9900K in that task.

Posted on 2019-10-19 15:30:47
David Farkas

I'm a landscape photographer that lives in LR, working through thousands of very large RAW images per outing. I definitely OC my rigs. Why not get 30-40% more speed for free (other than power and heat)? I currently use a 10-core i7-6950X OC'd to 4.4GHz on all cores and still want more speed for LR and Premiere Pro. Looking very seriously at the upcoming Cascade Lake X i9-10980XE or Ryzen 9 3950X, but leaning towards the Intel for its greater OC headroom.

Posted on 2019-10-28 01:03:35
Neo Morpheus

And because you do that, it automatically means that every photographer does the same thing!

Also, you conveniently ignored the remediations for vulnerabilities on Intel CPUs, which can cut performance in half (disabling HT), and their outrageous prices compared to the performance you get next to Ryzen.

Posted on 2019-10-30 14:07:52

Me too: 7920X, 2 cores @ 4.6GHz, the rest @ 4.4GHz.

Posted on 2019-10-30 14:58:22
Alessandro Manson Dionigi

I really don't think you get 30% more performance in Premiere Pro (I use it too) just by OC'ing your rig ;) The number of cores is what makes the difference in Premiere Pro.

Posted on 2019-11-04 14:46:39
Igor Baryshev

Ryzen 3 is not the same as Ryzen 3rd gen.
The title says "Intel 9th gen" (which is also not completely correct), but "Ryzen 3" and "Threadripper 2"...

Posted on 2019-10-17 16:26:16
Tiq.Us

Ryzen 3000 is indeed third gen Ryzen. The architecture is Zen 2.

Posted on 2019-10-19 15:31:44
Лучик Сергей

Hey, you are running XMP overclocked memory on AMD and stock JEDEC crap on Intel. Find JEDEC-3200, or run Intel on XMP, or stop faking tests.

Posted on 2019-10-17 22:16:02
Behrouz Sedigh

He uses stock settings, without any OC applied.

https://www.gamersnexus.net...

The new Ryzen 3000 chips officially support memory speeds up to 3200MHz.

Remember, his motherboard was a Gigabyte X570; its maximum supported memory speed is 3200MHz.
https://www.pugetsystems.co...

Posted on 2019-10-18 01:11:16
Лучик Сергей

No, it is overclocked XMP memory. It is C16 and it's 1.35V. JEDEC memory is C20-C22-C24 and 1.2V.
Either use XMP for both systems, with something like 2666C12 for Intel (not the C19 crap like now), or at least find JEDEC-spec 3200 memory for the AMD system.

Posted on 2019-10-18 01:20:55
Frodo Boggins

3200 C16 is truly bottom of the barrel for Ryzen. With a 3866 C16, or even a 3600 C16 kit, the Ryzen CPU could really stretch its legs. Based on Gamers Nexus' recent YouTube video on Ryzen memory tuning, gains of 8-10% are possible using 3866 C16 downclocked to 3800, with a 1900MHz FCLK.

Posted on 2019-10-27 20:23:33

Note that 3200MHz is the maximum officially supported RAM speed for the 3rd gen Ryzen chips, and that is only if you are using two sticks. If you use four sticks, it is either 2933 or 2667 depending on whether the RAM is single or dual rank. So 3200MHz isn't really bottom of the barrel; it is the fastest that AMD is comfortable calling viable with these chips. We have been doing testing with all three of these speeds since these chips launched, and I can tell you for sure that running the RAM at 3200MHz is definitely a bit less stable when using four sticks, and it gets worse if you go beyond spec to 3600MHz.

You absolutely can get performance gains with higher-speed RAM, but it is going to depend heavily on the application. In some we saw decent gains, in others almost nothing at all: https://www.pugetsystems.co... . Just remember, this is not free performance - it is overclocking, and it carries many of the same stability risks as CPU or GPU overclocking. Not saying you can't do it, of course, but be aware of the potential issues before jumping into it.

Posted on 2019-10-28 17:15:39
Simon Lft

Does the performance on Smart previews reflect the performance of 1:1 previews?

Posted on 2019-10-17 22:38:13

I'm not actually 100% sure to be honest. We dropped 1:1 preview testing when we switched over to using Adobe's plugin API as much as possible and you unfortunately can't tell Lightroom to generate 1:1 previews through the API for some reason. That is something we're trying to convince the Lightroom Classic dev team to add so we can test it in the future though.

My impression is that the relative performance between different CPU models should be similar, but again I can't be 100% sure at the moment.

Posted on 2019-10-18 00:32:19
Simon Lft

Great, thank you! I keep zooming in and out and that's where I lose most of my time.

Posted on 2019-10-18 06:42:22

Thank you for the nice comparison! These new AMD CPUs are really nice :)
Last year I bought an i7 9700K (to last the next few years) and it's blazingly fast even with huge 42MP A7R3 files!
I keep watching your reviews, and if someone asks about a photo/video computer, I know where to go for relevant information... :)

Posted on 2019-10-18 14:38:24
Jurriaan

So the only way to prevent the 9900K from being run into the ground by the 3900X is by disabling HT, and thus crippling it badly for all other applications you use - including the OS itself.

Posted on 2019-10-19 08:37:50

Disabling HT is one fix, but you can also just adjust the processor affinity so that Lr doesn't use the virtual cores, which gets back most of that performance without affecting other applications. We made a small utility that does this automatically as a temporary workaround until the root issue is fixed, which you can download here: https://www.pugetsystems.co...
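
If you would rather script that workaround yourself than use our utility, here is a minimal sketch in Python using psutil. It assumes the common Windows enumeration where the two logical processors of each physical core are adjacent (0/1, 2/3, ...), which is not guaranteed on every system, and it is not the code our utility actually uses.

    # Minimal sketch: pin Lightroom to one logical processor per physical core.
    # Assumes sibling hyperthreads are enumerated adjacently, which is common
    # on Windows but not guaranteed. Not the code our utility uses.
    import psutil

    def limit_to_physical_cores(process_name="Lightroom.exe"):  # process name assumed
        physical = psutil.cpu_count(logical=False)
        logical = psutil.cpu_count(logical=True)
        if physical == logical:
            return  # HT/SMT is already off; nothing to do
        affinity = list(range(0, logical, 2))  # every other logical CPU
        for proc in psutil.process_iter(["name"]):
            if proc.info["name"] == process_name:
                proc.cpu_affinity(affinity)
                print(f"Pinned PID {proc.pid} to CPUs {affinity}")

    if __name__ == "__main__":
        limit_to_physical_cores()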

Posted on 2019-10-21 17:05:45
Jakub Badełek

Hi Matt, many thanks for the long awaited test, it's amazing! I have several questions:
- when testing the brush lag, did you have GPU acceleration turned on? I couldn't find this info in the text; sorry if I missed it. Also, while we are on this topic: are you going to test GPU acceleration sometime in the future? (for example with FPS capture software, like we discussed in another topic some time ago)
- looking at the tables, how come the Ryzen 3700X is faster at merging panoramas than the 3800X or even the 3900X? The latter ones have faster clocks... the 3800X is basically an overclocked 3700X. Unless I am reading the detailed results wrong, see below:
- (a slight critique from my side) the detailed result tables are a little confusing: I understand that results for particular raw file types and processors are given as the time (in seconds?) needed to perform a task (the lower the better), but the scores are... well, scores (the higher the better). There is no clear explanation of this in the table.

Anyway, my next machine will be based on AMD then :) thanks again for your effort, this test is a real benchmark of how CPU should be really tested in software...

Posted on 2019-10-21 07:33:11

1) Yes, GPU acceleration is pretty much always enabled in our testing unless otherwise noted. GPU performance is definitely something we want to look at in the future, but since display resolution is apparently a big factor, we will have to also test things like HD vs 4K, multiple displays, etc. which makes it a pretty big project to tackle.
2) Honestly, I think most of that is margin of error. With these kinds of real-world tests, anything around 5% or less you should really consider the same. We try to compensate by running the benchmark multiple times and taking the best overall run, but you still get those kinds of discrepancies. That is just a fact of life with this kind of testing (see the sketch after this list for a rough illustration).
3) I hear you, and thanks for the feedback! Presenting that many results in a clear and concise manner is really difficult. We have some really cool projects we are going to be starting on (I hope) early next year that will dramatically improve this.
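
Purely as an illustration of that best-of-N idea with a noise threshold (not our actual harness):

    # Illustration only: take the best of N runs, and treat overall scores
    # within ~5% as a tie when comparing two CPUs.
    def best_run(scores):
        return max(scores)  # higher overall score is better

    def compare(a, b, noise=0.05):
        if abs(a - b) / max(a, b) <= noise:
            return "effectively tied (within margin of error)"
        return "A is faster" if a > b else "B is faster"

    runs_a = [1012.0, 1035.0, 1028.0]  # made-up overall scores for CPU A
    runs_b = [1001.0, 998.0, 1019.0]   # made-up overall scores for CPU B
    print(compare(best_run(runs_a), best_run(runs_b)))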

Posted on 2019-10-21 17:12:06
Jakub Badełek

Perfect, thanks! :)

Posted on 2019-10-21 17:41:58

Can't wait for your GPU testing!

I never game with my system so it'd be interesting to know if it's worthwhile upgrading from my ancient Radeon HD 7750 while I'm upgrading the rest of my system, just for Lightroom.

Posted on 2019-11-05 20:32:51
Dave Sang

Thanks for this Matt! This is perfect timing as I'm planning a 3rd gen Ryzen build for Lightroom specifically. A few quick questions if you have a minute:
- You mention in the article that something like a 9900K could feel snappier than a 3900X, but the scrolling, module switch, and auto-develop benchmarks are within 2-3% at most. In your experience, was this actually discernible? My conclusion from your article is that the 3900X's significant lead on the other benchmarks (including "responsiveness" benchmarks like the brush & previews) outweighs the maybe-unnoticeable edge the 9900K has on this handful. Do you think that is right?
- Do you think 3rd gen Ryzen using Gen4 NVMe would change any of these results meaningfully?
- Do you think the benchmark for Develop Auto WB & Tone is representative of overall "slider responsiveness" in the Develop module? That is another important consideration for me.
- Any guesses on how the 3950X will fare in these benchmarks? Perhaps the 3800X-to-3900X improvement is a good indicator of the 3900X-to-3950X improvement?
- Do you think 3rd gen Threadripper will still be bad for LR?

Thanks again!

Posted on 2019-10-23 22:05:04

The 3950X and TR 3rd gen we'll just have to wait and see. Really no way to know until it actually launches and we can test it.

As for the responsiveness difference, I really don't think it will be all that much better with the 9900K, which is why we are exclusively pushing Ryzen on our Lightroom workstations: https://www.pugetsystems.co... . We just wanted to note that since there are some people who really don't export a ton of images, but do a lot of edit work directly in Lightroom. So for those people, the few percent better "responsiveness" is worth it since the export performance doesn't matter at all to them.

Gen4 PCIe I doubt will have any influence on Lightroom performance. Disk speed is just not a factor even with 500MB/s SATA SSDs, so going from a 3.5GB/s Gen3 NVMe drive to one that is up to 7GB/s really isn't going to do much. It doesn't matter how fast the drive is if the CPU/RAM is the bottleneck.

The brush lag and auto WB & Tone are our first attempts at measuring things like slider responsiveness. That is really, really hard to test consistently and accurately, especially when comparing CPUs where the difference is likely going to be minimal. That is definitely something we want to expand on, but in general I think those two tests should be relatively accurate for slider responsiveness. Not perfect, but certainly more accurate than making a wild guess.

Posted on 2019-10-23 22:19:12
Dave Sang

Awesome, thanks so much for everything! This is really helpful info that isn't found anywhere else. Do you expect to be able to publish results for the new chips shortly after launch? I want to get one (prob 3950x) but would love to see your analyses before pulling the trigger

Posted on 2019-10-24 05:46:40

We hope to have things ready pretty shortly after launch, but it all depends on exactly when the launch is. I'll be at Adobe MAX in a few weeks as well, which could potentially throw a wrench in things if the launch happens to be right around the same time. Worst case, it shouldn't be more than a week or two after launch that we have at least most of our articles up.

Posted on 2019-10-24 17:02:39
Dave Sang

Awesome, thanks! Look forward to it!

Posted on 2019-10-25 04:28:38
Frodo Boggins

I recommend waiting for the 3950X. It has 33% more cores, and boost clocks are a bit higher at 4.7GHz. I am certain the performance jump will be less than what we've seen from the 3800X to the 3900X, but it will still hammer Intel's newest Cascade Lake offerings.

The subjective "snappier" interface feel is suspect, IMO. There are no actual numbers posted, so I'm dubious about whether the editors can actually perceive real differences without using measurement tools objectively.

Posted on 2019-10-27 20:36:32
mihaii

Thanks for the review Matt. Very good and detailed as usual. Can you estimate when we can download the benchmark to test our own machines? I'm currently running LR on my i7-6700, and since I'm editing about 2000-4000 pics a month, there are times when I'd like a faster machine. I know that a Ryzen 3600 would give me a 60-100% performance increase in "passive" tasks (1:1 previews, smart previews, and exporting), but I wonder how much performance I'd gain on active tasks. Since you changed the testing methodology (actually, the Lightroom versions changed), it's a bit hard for me to estimate the increase I'd get if I switched to an AMD 3600(X). If it's less than 15%... I wouldn't go to all the trouble.

Posted on 2019-10-25 07:31:32
Wojciech Szałata

I edit 2-4k pics monthly too, culling through at least 20k - on a mobile i5-4278U (2.6GHz). I would love to run this benchmark to test my system (just to see & laugh).
--EDIT: well, I forgot that Intel is changing their chipset almost every generation. Having a Z170 or similar, you would need to switch motherboards too. If you can wait - just wait for 10th gen Intel. Changes in core count/price are coming.
I don't think you're gonna get a 15% gain in "active tasks" with a 3600X over a 6700. Wait - save - upgrade higher. Unless the editing is really a pain for you. You can try optimizing your current system to run faster during your work time ;)
---end edit
//With an i7-6700K you should have an 1151-socket board - so just throw an i9-9900K in there. Active tasks are similar to the top Ryzens (3800X-3900X), and for passive tasks you're gonna have time for another coffee. The Intel CPU will cost you around the same as this 3600 AMD CPU + motherboard.
//Possibly you're gonna gain some performance when they resolve the problems with HT (or not; I would not take this much into consideration).

Posted on 2019-10-25 11:50:58

I'm actually not 100% sure when we will have the Lr benchmark up for download, but probably in the next couple months. It is one of the more "finicky" benchmarks we have since we have to use a lot of external scripts to do things that can't be done through the plug-in API. I'm also holding off for a bit in case they launch a new version at Adobe MAX in a few weeks that changes anything.

If I had to give a guess, probably late November. But it may be sooner or later (I know, so precise) depending on what happens at MAX.

Posted on 2019-10-25 16:17:52
Roman Borodaev

Is it right that the 9900K simply destroys everything if I work with 8K timelapses (42MP)? Importing a lot of files, color correction, then exporting a lot of high-resolution files to JPEG. As I understand it, higher scores are better, right?

Posted on 2019-10-26 09:39:16

Yes, higher scores are better. But the 9900K isn't always the best - it trades with the Ryzen 3900X depending on what you are doing. For culling and just moving around Lr, the 9900K is better by a small margin, but the 3900X is significantly faster if you care about export performance or use smart previews. So it is just a matter of which of those kinds of tasks are more important for you (from a performance standpoint at least).

Posted on 2019-10-28 17:02:53
Hwgeek

Small question, why didn't you go with X570 MB with TB3 support for Lightroom workstation build instead of the GB board?

Posted on 2019-10-29 08:42:53

There is actually no certified Thunderbolt support on X570 - we have confirmed this directly with AMD. There are a few boards from ASRock (I believe) that have Thunderbolt, but that is their own implementation that is not certified by either AMD or Intel. Thunderbolt on PC is inconsistent enough even on fully certified platforms that I would highly recommend against using Thunderbolt on X570. You may get lucky and it will work with whatever specific device you happen to be using, but it is more likely that it won't work quite right.

Thunderbolt is finicky enough that we only ever use Gigabyte motherboards for it, since they seem to be the best in terms of firmware/driver support. Even then, however, we only use boards that have TB integrated onto the board itself. The PCI-E add-on cards (even from Gigabyte) just don't seem to be as stable or reliable as the integrated version for whatever reason.

Posted on 2019-10-29 16:39:20
Hwgeek

Thank you for such a detailed answer; I was wondering why it's only ASRock that used TB3.
Also good news: all X299 CPUs got a 50% price cut. In Israel, all current (9th gen) X299 8c~18c parts got 50% off.
Suddenly a workstation got much cheaper ;-).

Posted on 2019-10-29 18:05:53
Myga

Question due to the AMD 3900X update and it taking such a high spot in the ranking:

Do you guys also test these CPUs on how they react to high-load / multitasking scenarios? Let me set up a bit of context here:

I'm in events photography, where high multitasking efficiency is very desirable. I've personally noticed a huge difference between AMD and Intel (Ryzen 1700X and 9900K) in how they handle situations where the CPU is already at 100% load.

My observations on the 9900K vs the 1700X (not exactly a direct competitor, but they have similar export/rendering power) are as follows:

9900K - Renders a timelapse with LRTimelapse from ~600 RAW files (files from an A7III, downscaled to 4K).
CPU utilization at 100% on all cores. I'm still able to jump to a different catalogue, start making my picks, possibly even do some light editing (slightly slower of course, but doable). TL;DR: it's still possible to interact with the machine for light multitasking operations.

1700X - Same scenario - 100% CPU utilization on all cores - the machine feels fully loaded with tasks and generally unresponsive.
I can't do any other selections, etc., and I get micro-stutters if I try to shift to doing anything else, up to the point where it's possible to freeze the machine.

I'm aware that the 1700X might have been affected by the scheduler issues, but my BIOS, Windows updates, drivers, etc. are all up to date (which was supposed to remove the issue).

Now the question is: do the new AMD CPUs react the same way to full-load scenarios? If so, I'd still be keen to pick Intel over AMD for that efficiency while exports are running.

Would greatly appreciate hearing your thoughts on this,
Kind regards,

Posted on 2019-10-31 10:48:22
Jan Albrecht

That is exactly what I would like to know too... I don't mind waiting a little longer for an export when I can smoothly do other tasks in the meantime.
I'm planning to upgrade from a quad-core i7 to an i9 9900K with a GB Z390 Designare and 64-128GB of RAM, but reading this article I'm doubting whether AMD might be worth it... the choice would be easier if the HT/SMT issue were solved.

Posted on 2019-11-12 12:06:23

It is very difficult to do subjective testing like that, at least in an automated / repeatable fashion. If you want, though, I could fire up some sort of all-core load test on both the 9900K and 3900X and then try using the systems to see how they feel... start a browser and pull up a web page, copy a file around, that sort of thing. It would not be terribly scientific, but if it would help inform your decision I'm up for giving it a shot :)

Posted on 2019-11-14 20:48:40
Myga

Hi William, thank you for taking the time to respond. I'd be super happy to get any kind of additional information about it! I'm totally aware this is not something scientific, but it's super valuable nevertheless. You guys are the only ones doing these kinds of tests in the business and are uniquely placed to do them fairly easily (I hope). No one else is making (or should I say sharing?) these observations, so this kind of knowledge is priceless.

It would be super helpful to find out how you guys feel about both systems - which one feels more responsive under heavy load. And exactly as you've mentioned, opening a few tabs here and there (e.g. Chrome, YouTube, Windows file explorer) is all it takes to feel the big difference :)

Thanks again!

Posted on 2019-11-14 23:42:23

Okay, so I loaded up Cinebench R20 on repeat, putting a ray-tracing render workload on each thread the CPU provides, and then:

- Opened up Edge (didn't have Chrome on these testbeds)
- Opened several tabs with various websites, including YouTube
- Watched a video, surfed around a bit, etc
- Copied files in Windows Explorer
- Played a quick game of Microsoft Solitaire

Both systems felt perfectly usable. I did feel more lag when doing stuff on the internet, compared to what I am used to on systems with no active CPU load, but YouTube videos even at HD were perfectly smooth and the delays waiting for pages to load were not obnoxious. File copying didn't seem affected, but maybe that was unfair since both systems have extremely fast NVMe drives so that wasn't really something that would take long anyway :)

Overall, I don't think I could feel any difference between the 9900K and 3950X in this subjective comparison. Hopefully that info helps!
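
For anyone who wants to reproduce this kind of check themselves, here is a simple stand-in load generator in Python - just something that pegs every logical CPU, not the actual Cinebench ray-tracing workload we used:

    # Saturate every logical CPU with busy-work for a fixed duration, then
    # try using the machine normally to judge responsiveness. A simple
    # stand-in for an all-core load, not the Cinebench workload itself.
    import multiprocessing as mp
    import time

    def burn(seconds):
        end = time.monotonic() + seconds
        x = 0
        while time.monotonic() < end:
            x += 1  # pure CPU spin

    if __name__ == "__main__":
        duration = 120.0  # seconds of full load
        workers = [mp.Process(target=burn, args=(duration,))
                   for _ in range(mp.cpu_count())]
        for w in workers:
            w.start()
        print(f"Loading {len(workers)} logical CPUs for {duration:.0f}s...")
        for w in workers:
            w.join()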

Posted on 2019-11-15 20:17:47
Myga

Thank you so much! That's really good news! I'm really happy that the bottlenecks are removed for AMD, as they look even better now with the 3800-3950 series. Thanks again for this additional time investment :)

Posted on 2019-11-15 20:29:46

Thanks - great article.
Export is definitely the task which I feel like I'm waiting around for most (i.e. wasted time) and so is my biggest concern.

Posted on 2019-11-04 20:33:01
Dan McDermott

I thought the Gigabyte X570 AORUS Ultra supported 128GB of RAM... so why "For each platform, we used the maximum amount of RAM that is both officially supported and available at the frequency we tested. This limits the Ryzen platform to 64GB of RAM"?

Posted on 2019-11-11 13:32:22

It looks like Matt was using 3200MHz memory for the Ryzen platform in this test, and currently the largest memory modules available at that speed are 16GB. 4 slots populated with 16GB modules each gives a maximum of 64GB. He could have run 128GB, but would have been limited to doing so at 2666MHz... and whenever we run that speed on these Ryzen processors, we get tons of folks complaining that we are making them under-perform... even though, technically, that is the maximum supported RAM speed on Ryzen 3rd Gen when using four dual-rank memory modules :/

Posted on 2019-11-11 21:16:13
Dan McDermott

Thanks for the reply. I could not tell that was the reason... given how the article was written. I understand the stability rationale for the 2666MHz RAM. I love stability. I wonder why the folks who are "complaining that we are making them under-perform" don't also say they have run into stability problems like Puget has discovered. I assume they have... correct?

Posted on 2019-11-11 21:54:05

Presumably yes, but most folks aren't repeatedly doing the same thing over and over with different hardware combinations. If someone had errors, they might not jump to suspecting their RAM speed. It is also very possible that different combinations of speed, timings, voltages, and even individual CPU samples will lead to more (or less) problems - so some might get lucky and have a resilient CPU, and thus not have many problems themselves. Given that we are building hundreds of systems a month, we have to stick with what we can be sure is the most reliable configuration possible - and in the case of RAM, that means sticking with what CPU manufacturers officially certify their chips to work with. Even if it is a few percent slower in some situations, just a handful of crashes across our customer base due to using out-of-spec memory would cause more time (and maybe data!) loss than the amount that would be saved by having faster RAM.

Posted on 2019-11-11 22:00:04
Dan McDermott

Great info, and thanks. Puget technical folks really know their stuff :)

Posted on 2019-11-11 22:18:10