Read this article at https://www.pugetsystems.com/guides/1529

Photoshop CPU Roundup: AMD Ryzen 3rd Gen, AMD Threadripper 2, Intel 9th Gen, Intel X-series

Written on July 17, 2019 by Matt Bach


For quite a while, Intel has been the dominant CPU choice for many applications - especially ones like Photoshop that do not scale well with a high number of CPU cores. This has been evident even in our own workstation product line, which has predominantly been Intel-based for years, with AMD-based options only listed for very specific workloads.

However, with the new Ryzen 3rd generation CPUs, AMD has spent considerable effort to improve performance for moderately threaded applications like Photoshop. In large part, this is what makes AMD's new processors so exciting. The increase in core count is certainly nice, but most applications simply are not going to see a benefit from the higher core counts that both Intel and AMD are trending towards. Instead, it is the IPC (instructions per clock) improvements that will be more significant for most users - even if that doesn't show up in the marketing specs.

AMD Ryzen 3rd Gen Photoshop Performance

In this article, we will be looking at exactly how well the new Ryzen 3600, 3700X, 3800X, and 3900X perform in Photoshop. Since we expect these CPUs to shake up the market quite a bit, we also took this opportunity to do a full CPU roundup. Not only will we include results for the previous generation Ryzen CPUs, but also the latest AMD Threadripper, Intel 9th Gen, and Intel X-series CPUs. And for good measure, we will throw in a 14-core iMac Pro and the current (for the moment) 12-core 2013 Mac Pro as well.

If you would like to skip over our test setup and benchmark sections, feel free to jump right to the Conclusion.

Looking for a Photoshop Workstation?

Puget Systems offers a range of workstations that are tailor-made for your unique workflow. Our goal is to provide the most effective and reliable system possible so you can concentrate on your work and not worry about your computer.

Configure a System!

Test Setup & Methodology

Listed below are the specifications of the systems we will be using for our testing:

Shared PC Hardware/Software:

  • Video Card: NVIDIA GeForce RTX 2080 Ti 11GB
  • Hard Drive: Samsung 960 Pro 1TB
  • Software: Windows 10 Pro 64-bit (version 1903), Photoshop CC 2019 (Ver 20.0.4), Puget Systems Ps Benchmark V18.10 BETA

Mac Test Platforms:

  • iMac Pro: 14-core Intel Xeon W, 64GB 2666MHz DDR4 ECC, Radeon Pro Vega 64 16GB
  • Mac Pro (2013): 12-core 2.7GHz, 64GB 1866MHz DDR3 ECC, Dual AMD FirePro D700 6GB, 1TB PCIe-based SSD

*All the latest drivers, OS updates, BIOS, and firmware applied as of July 2nd, 2019

Note that while most of our PC test platforms are using DDR4-2666 memory, we did switch up to DDR4-3000 for the AMD Ryzen platform. AMD CPUs can be more sensitive to RAM speed than Intel CPUs, although in our Does RAM speed affect video editing performance? testing, we found that even the new Ryzen CPUs only saw modest performance gains in Creative Cloud applications when going from DDR4-2666 to even DDR4-3600 RAM.

For each platform, we used the maximum amount of RAM that is both officially supported and actually available at the frequency we tested. This does mean that the Ryzen platform ended up with only 64GB of RAM while the other platforms had 128GB, but since our benchmarks never need more than 32GB of RAM to run, this does not actually affect performance at all. We have recently re-confirmed this in our RAM speed article linked above.

However, keep in mind that this is technically overclocking since the AMD Ryzen 3rd Gen CPUs support different RAM speeds depending on how many sticks you use and whether they are single or dual rank:

Ryzen 3rd Gen supported RAM:

  • 2x DIMM: DDR4-3200
  • 4x single rank DIMM: DDR4-2933
  • 4x dual rank DIMM: DDR4-2667

Since we are using four sticks of dual rank RAM (almost every 16GB module available will be dual rank), we technically should limit our RAM speed to DDR4-2666 if we wanted to stay fully in spec. However, since many end users may end up using a RAM configuration that supports higher speeds, we decided to do our testing with DDR4-3000, which is right in the middle of what AMD supports.
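Those support rules can be summarized with a small helper function. This is just an illustrative sketch of AMD's published support matrix (the function name and structure are our own, not part of any benchmark tooling):

```python
# Hypothetical helper encoding AMD's officially supported Ryzen 3rd Gen
# memory speeds: DIMM count and rank -> max supported DDR4 speed (MT/s).
def max_supported_ram_speed(dimm_count: int, dual_rank: bool) -> int:
    """Return the maximum officially supported DDR4 speed for a config."""
    if dimm_count <= 2:
        return 3200          # 2x DIMM: DDR4-3200
    if dual_rank:
        return 2667          # 4x dual rank DIMM: DDR4-2667
    return 2933              # 4x single rank DIMM: DDR4-2933

# Our test configuration: four dual rank 16GB sticks.
print(max_supported_ram_speed(4, dual_rank=True))  # 2667 - so DDR4-3000 is technically an overclock
```

Anything above the value this returns for your configuration is, strictly speaking, memory overclocking, even if the motherboard accepts it without complaint.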

The benchmark we will be using is the latest release of our public Photoshop benchmark. Full details on the benchmark and a link to download and run it yourself are available at Puget Systems Adobe Photoshop CC Benchmark.

Benchmark Results

While our benchmark presents various scores based on the performance of each test, we also wanted to provide the individual results. If there is a specific task that is a hindrance to your workflow, examining the raw results for that task is going to be much more applicable than our Overall scores. Feel free to skip to the next section for our analysis of these results if you would rather get a wider view of how each CPU performs in Photoshop.

AMD Ryzen 3rd generation Photoshop Performance Benchmark

Benchmark Analysis

Testing 22 different CPUs makes for a bit of an overwhelming chart, but there are a few very clear conclusions we can pull from the results:

  1. AMD Threadripper CPUs are not good for Photoshop.
  2. Intel X-series CPUs are OK for Photoshop, but not an ideal choice for heavy Photoshop workflows since you can get more performance for less money with other CPU options.
  3. Intel 9th Gen CPUs are great for Photoshop.
  4. The new Ryzen 3rd generation CPUs are terrific for Photoshop.

We typically don't get too much into cost to performance since pricing is constantly in a state of flux, but there is no argument that the new Ryzen 5 and Ryzen 7 CPUs in particular are excellent for Photoshop. With current MSRP pricing, the AMD Ryzen 7 3800X is $100 cheaper than the Intel Core i9 9900K, yet performs almost exactly the same. At the lower end of the product stack, the Ryzen 5 3600 is $60 less than the Intel Core i5 9600K, but ended up being a small ~4% faster overall.
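As a rough sketch of that cost-to-performance argument, the comparison can be put into numbers. The prices below are approximate July 2019 MSRPs and the scores are illustrative ratios reflecting only the relationships described above (the 3800X and 9900K roughly tied, the 3600 ~4% faster than the 9600K) - they are not our actual benchmark results:

```python
# Illustrative performance-per-dollar comparison. Prices approximate
# 2019 MSRPs; scores are relative values reflecting the rough
# relationships discussed in the article, not exact benchmark numbers.
cpus = {
    "AMD Ryzen 7 3800X":   {"price": 399, "score": 100.0},  # ~same speed as 9900K, ~$100 less
    "Intel Core i9 9900K": {"price": 499, "score": 100.0},
    "AMD Ryzen 5 3600":    {"price": 199, "score": 90.0},   # ~4% faster than 9600K, ~$60 less
    "Intel Core i5 9600K": {"price": 259, "score": 86.5},
}

# Sort by performance per dollar, best value first.
for name, c in sorted(cpus.items(), key=lambda kv: kv[1]["score"] / kv[1]["price"], reverse=True):
    print(f"{name}: {c['score'] / c['price'] * 100:.1f} points per $100")
```

Even with these rough numbers, the Ryzen 5 3600 and Ryzen 7 3800X come out clearly ahead of their Intel counterparts on a value basis.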

Something else we want to specifically point out is how large the performance gap is between the new Ryzen CPUs and the previous generation. The most direct comparison is between the Ryzen 7 2700X and the new Ryzen 7 3700X, where we saw a terrific 19% gain in performance. After years of seeing small 5-10% performance gains from one generation of CPUs to the next, this is a very impressive feat and an indicator that AMD is definitely on the right track.

Are the Ryzen 3rd generation CPUs good for Photoshop?

Overall, the new AMD Ryzen CPUs are a great choice for a Photoshop workstation. There may not be much of a reason to go with the more expensive Ryzen 9 3900X over the Ryzen 7 3800X, but the Ryzen 5 and Ryzen 7 CPUs should be slightly faster than the similarly priced Intel equivalents.

However, the performance advantage over Intel is really only a handful of percent, which is going to be almost impossible to notice in the real world. While the AMD CPUs are slightly faster or more affordable than Intel, oddly enough that may not be enough of a reason for everyone to go with team red (AMD) over team blue (Intel). There are many more factors that go into your choice of platform than just benchmark performance - especially when the results are as close as these are. While many consumers may make the choice based simply on which company they want to support (which is a perfectly valid reason), there are a few other considerations to keep in mind:

On the Intel side, the Z390 platform has been available for quite some time which means that most of the bugs and issues have been worked out. In our experience over recent years, Intel also simply tends to be more stable overall than AMD and is the only way to get Thunderbolt support that actually works. Thunderbolt can be a bad time on PC, and there are only a few motherboard brands (like Gigabyte) where we have had it actually work properly.

For AMD, the X570 platform is very new and there will be a period of time where bugs will need to be ironed out. However, AMD is much better about allowing you to use newer CPUs in older motherboards, so if upgrading your CPU is something you will likely do in the next few years, AMD is the stronger choice. In addition, X570 is currently the only platform with support for PCI-E 4.0. This won't directly affect performance in most cases, but it will open up the option to use insanely fast storage drives as they become available.

Keep in mind that the benchmark results in this article are strictly for Photoshop. If your workflow includes other software packages (After Effects, Premiere Pro, DaVinci Resolve, etc.), you need to consider how the processor will perform in all those applications. Be sure to check our list of Hardware Articles for the latest information on how these CPUs perform with a variety of software packages.

Looking for a Photoshop Workstation?

Puget Systems offers a range of powerful and reliable systems that are tailor-made for your unique workflow.

Configure a System!

Labs Consultation Service

Our Labs team is available to provide in-depth hardware recommendations based on your workflow.

Find Out More!
Tags: Photoshop, Intel 9th Gen, Intel X-series, Intel vs AMD, AMD Ryzen 3rd Gen, AMD Threadripper 2nd Gen

Wow! I'm sure no one was truly expecting AMD to take the top 2 spots so soon.

Posted on 2019-07-18 21:37:28

Three of the top four spots is really good! I'm working on the After Effects article now as well, and AMD again takes three of the top four in that application as well. Really nice since competition is always good and it has been a while since AMD truly competed in these kinds of workloads. It will be interesting to see how Intel responds.

Posted on 2019-07-18 21:41:00
Jig Serencio Navasquez

Waiting for AE finally!

Posted on 2019-07-19 08:33:32
David C

Hi Matt,

Thanks for the insight. I referenced your articles when building my own 3900x + rtx2060 oc system and configuring the Photoshop scratch disk.

Are you able to publish your settings for the Ryzen processors, e.g. was PBO on or off? Is windows in performance power mode? What about the GPU?

My system scores after tuning were higher everywhere except for the photomerge score so I'm trying to understand why.

My Scores: https://pasteboard.co/ItEzi...

My system:
- 3900x with all core 4300MHz@1.25V
- 64GB @3200Mhz CL14-17-17-17-30
- RTX 2060 @2100MHz core, 8000MHz VRAM
- x570 Asus tuf wifi
- Puget PS benchmark v18.10beta

Posted on 2019-08-21 02:30:43

Great results man, congrats. Can you share what model of cooler are you using in your 3900X and your settings for ryzen processor in bios? I just bought 3900X and x570 TUF too but can't seem to find the settings to reach 4300MHz. Thanks!

Posted on 2019-08-25 15:02:36

Also, what link have you used as guide to configure the scratch disk? I found one article made in 2016, only. Thanks

Posted on 2019-08-25 15:22:56
Jakub Badełek

Matt, could you also make a quick separate test of influence of x570? Those mobos are very expensive and have active air cooling which may be problematic (there are already reports of models getting overheated). Many people are considering buying older mobos or even lower end models because of that. I wonder what is the real benefit of pcie 4.0 besides crazy fast (and expensive) storage few will really benefit from.

Great article btw, can't wait for tests in other programs especially Lightroom or Premiere pro

Posted on 2019-07-18 21:43:36
Jozsef Weigert

There is already a comparison here: https://www.techpowerup.com...

Posted on 2019-07-18 21:46:38
Jakub Badełek

Thanks! Very nice article. Still, I kind of trust Matt more :D

Posted on 2019-07-18 21:54:31

Aww, thanks! But honestly, I personally go to techpowerup, Tom's Hardware, and Anandtech primarily when it comes to things like that. We do our own internal qualification for specific products, but we really don't ever see in person the myriad of brands and models that those guys do. Especially things like underlying architecture changes, those guys do amazing jobs.

Posted on 2019-07-18 22:03:08

There really shouldn't be much in the way of a performance advantage to x570 over x470 - just additional features like PCI-E 4.0 (although that could impact performance in some workflows). To be honest, we probably won't do much formal testing on that topic. We're already flooded with our to-do lists, and we always end up using the latest chipsets in our workstations anyways so that kind of testing isn't a large priority for us.

Premiere Pro article will probably be either Friday or Monday - it depends on how long it takes me to sift through the 3,334 data points to turn it into meaningful results. Lightroom will be a while, however. We're currently trying to revamp our testing since most of the things people care about (culling, performance with brushes, etc) is extremely hard or impossible to benchmark. Our current testing is largely related to things like importing/exporting/previews/etc. which actually utilizes the CPU quite a bit differently than "active" tasks which can often lead to people picking what actually isn't the right CPU for them. Once we figure out a solution, we'll start including Lightroom testing again.

Posted on 2019-07-18 21:53:44
Jakub Badełek

Thanks, this comment plus the article linked above makes it clear for me. It's great you're working on benchmarking brushes for LR too! They can be pain in the.... back. Especially spot removal. Anyways take your time! I am considering putting a new machine together this year so your tests will be great source of knowledge

Posted on 2019-07-18 22:08:10
Jozsef Weigert

Looking forward to the new Lightroom test! In my personal experience the speed of moving between images in the Develop module is critical. Also zooming into images should be snappy on a good hardware setup. The fluidness, snappiness of editing is the most defining factor for me. Import/export time is less important as it is already fast enough on good hardware and we cannot really feel the difference - maybe only those who edit tens of thousand of images at the same time. :)

Posted on 2019-07-18 22:13:25

Yea, what you described is exactly what we hear over and over - but is reaaaallllyyyy hard to benchmark. We have some great contacts with the Lightroom dev team so I'm hoping we can get some help, but it is going to require them to add some code to the plug-in API. That's going to be a hard sell most likely since they (like all developers) are swamped fixing bugs and adding features that their users want/need. Things like this are great for us, but really doesn't apply to the vast majority of their users so it can be understandably hard to devote resources to it.

Posted on 2019-07-18 22:16:42
Cameron Ware

For LR ild like to see some kind of comparison for switching photos in develop mode, although ild also like to see some comparisons for importing and converting to DNG (possibly also editing photos during the import/convert process), if there's little difference between the feel of active tasks but a big difference in importing and converting its a clear choice for me.

Posted on 2019-08-06 07:36:53

Thx for the article, would it be possible to also take a look at Lightroom and also possibly add the 3600x cpu ?

Posted on 2019-07-19 04:33:25

Lightroom will actually probably be a while. We're currently trying to revamp our testing since most of the things people care about (culling, performance with brushes, etc) is extremely hard or impossible to benchmark. Our current testing is largely related to things like importing/exporting/previews/etc. which actually utilizes the CPU quite a bit differently than "active" tasks which can often lead to people picking what actually isn't the right CPU for them. Once we figure out a solution, we'll start including Lightroom testing again.

As for the 3600X, we honestly probably won't end up testing it since we rarely use anything that low-end in our systems. The only reason we included the 3600 was because AMD was kind enough to get us a sample of the model (along with the 3800X).

Posted on 2019-07-19 04:38:59

Thanks for getting back to me Matt. I understand what you mean about active tasks and it will be great if you can figure out a way to benchmark this. I was one of those that picked the wrong CPU, I currently have a 2600x and although some things like importing/exporting in lightroom specifically are much faster than my previous system (i5 2500k OC @ 4.6ghz), the overall experience while doing active work on the 2600x feels surprisingly worst than with my 8 years old cpu... I was extremely disappointed in the 2600x performance while doing things like spot removal, gradients filters, etc, it feels laggy / sluggish... so I was wondering if upgrading to the 3600x would be worth it, i have a feeling it might... Have you had the chance to see how lightroom feels with the 3600 or 3700x (i would assume those would be fairly close to the 3600x).

Posted on 2019-07-19 17:00:40

I haven't had the chance to do anything in Lightroom with these CPUs yet. If you want my guess, any of the new Ryzen CPUs should feel much better than the previous gen. Really hard to quantify by how much, but since they are performing really close to Intel in apps like Photoshop and After Effects, they should be about on par in Lightroom "active" tasks as well.

Posted on 2019-07-19 17:08:44

Hey Matt, any news about the Lightroom benchmarking for the new 3000 series Ryzen?

Posted on 2019-09-17 10:17:08

Quite an impressive leap forward for AMD! Besides the obvious top performers, the R5 3600 looks like it brings crazy good value for its performance in Ps.

Posted on 2019-07-19 07:41:15
Rafał Urbański

3900x or 3700x vs 9900k?

How about testing zbrush? I have ryzen 1700 atm. What about Unity and Unreal? What about streaming desktop (3d workflow - recording tutorials)

I am having some weird problems with ryzen system though... That's why I wanted to go with i9 9900k but damn... new ryzens are sooo good. :(

Zbrush is super important for me. I want best performance possible. Not sure how multi-core performance can affect my work. Some people say zbrush is amazing when it comes to using multi-core but others say it is not true and it is better to go with higher frequency... I am confused.

I also play games (it is not my main thing I do but I also create video games and use Unity a lot)

Clay Brush and Move brush are laggy on high poly model in zbrush on my ryzen 3.8ghz - around 20mil "points"

There is like 0 comparison benchmarks for Zbrush. I wish someone would do like comparison between i9 9900k, threadripper, 3700x and 3900x to show how cores and frequency affect zbrush performance. For example measure fps when rotating very high poly model, using different brushes etc.

Posted on 2019-07-19 08:24:54
David Allen

lol, doing a google search for just that, also vs a 3950x along with the 9900k. Guess nobody has bothered or figured out how to test z-brush and Marvelous Designer for a comparison. Last I read, PhotoShop is still single-threaded (for making texture maps) along with I think Marvelous Designer, so that leaves Z-Brush, unless there has been an update to support tech newer than Out Of Order execution from the early '90s, lol. I have a friend that has an 8700k, and I have been trying to figure out if anything is more than a sidegrade for her, I'm guessing not since the 8700k has similar boost clocks as the 9900k, and AMD has only just mostly matched intel's single thread abilities of late (the extra cores are mostly useless for her I think).

That FPS thing was my thought as well, it's just a matter of finding some one with the parts and time to do the testing (plus rental fees for the soft).

Posted on 2019-12-28 11:25:05
David Young

Any chances that we will see benchmarks for Unity in the future?

Posted on 2019-07-19 10:07:22

That is something we are actively looking at, but it is a little tricky. At the moment I can only think of two things to try and test: real-time FPS performance and compile time. Maybe some lighting related stuff too...

Is there more you would want to see tested, or any suggestions on good sample projects to use as the basis for testing?

Posted on 2019-07-19 16:39:26
Behrouz Sedigh

"Keep in mind that the benchmark results in this article are strictly for Photoshop. If your workflow includes other software packages (After Effects, Premiere Pro, DaVinci Resolve, etc.), you need to consider how the processor will perform in all those applications"

I'm waiting for Next non-Photoshop Bench !

Posted on 2019-07-19 12:29:25

All Asrock X570 boards are Thunderbolt ready [official support] and some of them have it built in.

Posted on 2019-07-19 12:47:07
Dennis L Sørensen

Nice results by AMD. But still, I am a little disappointed. 12 cores and 24 threads with relatively fast frequency and still only a smudge faster than Intel's 8 core, and AMD's 3800X 8-core only getting the same results as an Intel product released 8 months ago.. this shows how difficult it is to program for the use of more cores, and why the solution by AMD (and Intel in the near future) is not just to shove more cores into a package. We need more quality (efficiency/IPC) and not more quantity.

Posted on 2019-07-19 12:50:42

If you consider the fact that there is 0 optimizations for AMD then it's quite an achievement.
Some tech youtubers said that Adobe is Intel camp, then AMD will never be a choice no matter how good the CPU from AMD is.
It's a matter of time ( Adobe update ) to ruin performance for AMD or bring some performance optimizations exclusively for Intel and that chart will look different.
Quicksync ruined AMD for Premiere pro, even thought that same technology exists in GPU's for years (Nvidia NVENC and AMD VCE ) it never got used and never will.

Posted on 2019-07-19 14:54:46

I've never had the impression that Adobe is in the "Intel camp", and we've done quite a bit of work with the developers for various software packages. I do think there is some level of prioritizing code for the largest user base (of which Intel is definitely the most common CPU out there), but that is no different than prioritizing fixing a bug that is affecting 80% of users over one that only affects 20% of users.

My understanding is that the reason Intel has been better for Adobe apps for quite a while is simply because Intel has better per-core performance. Adobe apps tend to not scale super well (although that isn't an Adobe thing, but something market wide), and it took AMD improving their IPC to really make them competitive with Intel. I highly, highly doubt we'll see an update that tanks performance for AMD.

Posted on 2019-07-19 15:39:36

I really hope you are right but the question remains, why use Quicksync and not NVENC and AMD VCE.
Let's say AMD don't have the money to send engineers to implement this, Nvidia has plenty and still after so many years and it's not implemented.
I know you don't test in Vegas pro but that is one software that uses quicksync, NVenc and AMD VCE, no problem for a tiny company to implement this, impossible for a company that is worth 132 Billions.
I saw tests on some tech youtuber channel for warp stabilizer in Premiere pro , davinci or final cut it takes a few seconds, in Premiere Pro, MINUTES, this is just absurd, using 1-2 cores for stabilizing while others use everything you've got.

Posted on 2019-07-19 18:12:20

You would need to talk to the devs to get an answer to be honest, we are on the outside looking in just like you are. I know there are plugins that allow NVENC in Premiere Pro, but why Adobe doesn't use it natively, ¯\_(ツ)_/¯

Posted on 2019-07-19 18:20:41
Mike Bike

3800x cost around 370 USD 9900K around 100 USD more ( and that price Intel gives you only because AMD came back from death otherwise they would be around 1000 USD)
So go AMD give Intel a hard life please so we can enjoy lower prices

Posted on 2019-10-29 12:06:00
Dennis L Sørensen

I know you dont sell overclocked machines, but that is the next result I would love to see (after all the regular apps/programs have been tested @ stock). I have a feeling that if you push the OC, the 9900K is still coming out on top (but I dont know).

Posted on 2019-07-19 12:52:20

Very little chance we'll do overclock testing - just too much to do covering testing that is directly relevant to our customers. But from what I've seen in reviews, I agree that the 9900K should pull ahead a bit since it looks like Intel is stronger for overclocking.

Posted on 2019-07-19 15:40:53

From what I watched on some other reviews, the 3900X is hands down bad when scrubbing videos

Posted on 2019-07-19 15:17:53

ASRock manufactures two X570 boards with integrated Thunderbolt 3.
it'd be interesting to see if it actually works.

Posted on 2019-07-19 17:41:34

I'm really curious as well. As far as I'm aware, there is no completely official implementation for Thunderbolt on AMD chipsets, but motherboard manufacturers can slap it on if they want. A few brands have that on AMD Threadripper boards, but when we tried it it really didn't work well (which is likely why they didn't list Thunderbolt in the specs). My take is that Asrock is willing to take more risks than other brands, but whether that is because they are confident in it working or simply because it is something they can use in marketing to try to drive sales I don't know.

After working with Thunderbolt for years and dealing with the huge hassle it has been, I personally wouldn't trust it - at least not in this early of an implementation. I've been wrong before, however, so who knows.

Posted on 2019-07-19 17:45:34
Eric Marshall

Great job on another roundup! Though I have to disagree with the conclusion. The conclusion I come away with looking at these results, is that there is a disturbing lack of difference from CPU to CPU in this application, and that the primary performance bottleneck in this program is still the program itself, as has been the case for the last 10 years or so. As a company in the business of trying to sell high end computing to customers, Puget should take a moment in each of these articles to berate Adobe for spending so much development effort pushing "cloud" garbage on us, and so little development effort optimizing the use of the rapid expansion of core count taking place.

Posted on 2019-07-21 02:13:32

We actually do give feedback directly to many Adobe teams, although I believe it is all under an NDA so I can't get into the details of it.

You are definitely right that at the high-end, there isn't a massive difference in performance between different CPUs. However, I can tell you that at least for our customers, even a 5-10% performance gain is often well worth the cost of the higher-end CPUs. I know our customers are not the average user, but for them, any investment that saves them time pays off incredibly quickly.

But, that is one reason we not only publish our thoughts on the results, but the raw benchmark results and scores as well. Our commentary is always going to be skewed towards our customer base - after all, that is the main reason we do this testing. The fact that it is helpful to the masses is great, and we see no reason to hide it, but I think many people don't quite understand that we are a high-end workstation manufacturer, and everything we do is geared towards helping our customers get the exact right system for their workflow.

Posted on 2019-07-21 04:29:36
Eric Marshall

I've built a number of systems at work like the ones you would for your clients; premium case/psu, enthusiast or workstation platforms, sometime with ECC memory, high end CPU's, quadro cards, etc. I can relate to the "time is money premise" and account for that in the decision to deploy more quality and performance (even at very high component cost) if it means we might get a couple more years out of the system before a rebuild is revisited, or if it saves a few minutes in a day in compute time for users, and just as importantly, saves me time by speeding up maintenance and reducing repairs. In business, the cost of the machine is typically a drop in the bucket compared to the big picture. The salary of the person sitting in front of the machine, is likely 50-500X greater than the cost of the machine amortized over the years it will be in service.

The problem I have here, is that we have a range of CPU's up for consideration, where the high end (something like a 3900X), has ~150% more processing power at a ~125% higher price, than say, an i5-9600K, yet in this application, the performance difference is only 15%. Adobe is not the only guilty party here. I have CAD guys at work using premium lake architecture many-core workstations for CAD applications (high end for longevity/reliability/5% reasons previously discussed), and they are constantly running into performance issues where 1 thread is pegged while the rest sit idle. CAD and modeling viewports are notoriously bad performing, with no development effort to spread the load to the compute resources made available in modern computing. Yet somehow, they always make their annual deadlines for a new version packed full of new features and cloud integration and more junk that nobody asked for, while leaving all the 20 year old problems unsolved. Typical software development these days.

Posted on 2019-07-28 02:37:00

I think it is a little of column A and a little of column B. New features and flashy updates sell, so of course companies like Adobe or Autodesk are going to spend more of their development funds on those things. I think they are starting to get the message that performance and stability are more important, but most companies are focusing on GPU acceleration over improving CPU threading performance. Unfortunately for Intel/AMD, often the easiest things to port over to the GPU are the ones that are really efficient at using a high number of cores which leaves the CPU-based tasks as the ones that are not (currently) very good at running in parallel.

At the same time, there are simply some things that cannot be run in parallel. CAD is a perfect example, in fact, since it is all parametric modeling where each point relies on the coordinates of the point before it. That is why, without a major redesign of how CAD programs operate, apps like AutoCAD or SOLIDWORKS simply are never going to be good at using more than a core or two for the viewport.

That said, we are definitely starting to hit the limit of per-core performance, so improved multi-threading capabilities is something developers are going to have to consider. All it takes is one company investing in it, and everyone else is going to have to in order to keep up.

Posted on 2019-07-29 02:54:58
Eric Marshall

On the subject of GPU acceleration... We see the same scaling problems with GPU's in these content creation applications. There's almost no performance scaling between a GTX 750 and a RTX 2080Ti in most GPU accelerated Photoshop manipulations. They are bottle-necking the high end GPU's on a single thread.

I don't see a major redesign required to begin tapping into multi-threaded viewport. Autodesk has been using directX for years. Recent developments in later versions of directX, as well as newer GPU's and their driver/API support, set the stage for muilti-threaded viewport capabilities. Games are using lots of threads to command GPU operations for their viewports these days.

Think about it... If the developers who spend most of their day "game testing" and smoking weed... can get multi-threaded viewports to work, SURELY the "professionals" from a place like Adobe or Autodesk could manage to work on multi-threaded optimization for viewports and other GPU accelerated functions.

Posted on 2019-07-30 02:06:11
Seb T

Thanks for this benchmark, it's very clear and detailed!
I'm a photographer and I've been working with an i5 3570K / 32GB RAM / GTX 970 FTW since 2013.
I saw these new AMD CPUs, so I really want to upgrade my 7-year-old CPU, but I have a question! I just heard that AMD is going to come out with new CPUs at the end of 2019 or in 2020: the Zen 3 on 7nm+ EUV. Should I buy a 2700X, which could give me a bump of about +35%, and save money for a high-end 7nm+ part (less buggy and everything)? Or should I buy a 3700X right now, which could give me a bump of about 50%, and stay with it for 7 to 10 years as I did with my current CPU? (Sorry for my English!)

Posted on 2019-07-21 15:56:27
Rafał Urbański

Buy a 3900X for "future proofing". The 3950X that is launching in two months will IMO be too expensive if you don't render 3D stuff with the CPU. A 12-core CPU with a nice high frequency should be more than enough for the next 7 years. In 7 years we will probably have 100 cores at 10GHz.

Posted on 2019-07-21 16:34:39

There is always going to be a new technology or product 1-2 years away in this industry, and if you constantly wait for the next thing you'll never buy a computer :)

My advice, personally, is to save up until you can afford a new system that will provide a substantial improvement over what you have now. For me, that ends up being a new system about every 3-5 years, with minor upgrades in between, but it will look different for each person. If you are on a 7 year old system, then a modern system should be a big upgrade - so just save up to make sure that you get good specs, and don't have to settle for lower end parts because of money.

Posted on 2019-07-22 17:49:41
Seb T

Thanks to both of you for the very helpful answers and advice! I finally bought a 3700X with 32GB of 3200MHz CL16 RAM on a B450M Mortar, along with my first M.2 drive :D I think it will give me a gain of about 50% compared to what I have now, so I think I'll be happy with it.

Posted on 2019-07-23 22:48:44

Congrats, and best of luck with your new system :)

Posted on 2019-07-23 22:53:09

The Mortar is not the greatest of MSI's B450 boards, but it is not a failure either, so I'd say it is a pretty good build :)

Posted on 2019-08-07 07:06:21
Rafał Urbański

How much better is the 3900X compared to the i9 9900K in ZBrush and Blender (viewport tests like in this video https://www.youtube.com/wat... )?

He used an amazing benchmark pack for Blender (linked in the video description) that covers single-core performance in different tasks; I've never seen anyone else use it.

Posted on 2019-07-21 16:27:37

Great benchmark/article as always, Matt. Really grateful for the effort you guys put in with these reviews; there is nothing else like it online. It helped me in purchasing a 9900K setup a while ago. I have it overclocked at 5GHz on all cores with 32GB of 3200 CL14 RAM. I really do wonder how the results would vary if you overclocked both the Ryzen and Intel processors. I know you previously said you don't overclock for benchmarking, but really, these top-end processors are expected to be overclocked, not left at stock speeds. Plus, the RAM for the Intel is slow; I assume it is the recommended speed for a Z390/Z370 system, but even still. A review by Tom's Hardware showed that the ideal RAM speed for current-gen Intel processors in Photoshop is 3200MHz CL14. I'd love to see benchmarks with optimized RAM and overclocked CPUs. I can understand if you can't, but I do end up thinking that the final scores you have here are not representative, as the majority will optimize their systems. Thanks again, David.

Posted on 2019-07-22 02:38:02

I don't know if I agree with the statement that these CPUs are meant to be overclocked. They are capable of it certainly, but I imagine the vast majority of 9900K CPUs in the field are running at stock speeds.

In the end, keep in mind that the point of our testing is to help our customers. The fact that it is useful to so many others is terrific, but we are always going to prioritize our time on the things that directly affect the systems we sell. That is the same reason why we stick with the officially supported RAM speed (although we did fudge it a bit for Ryzen in this testing). Faster RAM is just another form of overclocking, and the performance gain from either simply isn't worth even a small decrease in system reliability for our customers. That absolutely isn't going to be the case for people who build their own systems and are willing to tinker if things start having issues, but those simply are not the people we are directly targeting.

That said, every once in a while we do testing that is beyond the scope of our customer base. Overclocking is sometimes a part of that, and so is RAM speed analysis (which is actually something we will be publishing an article on today or tomorrow). We need to keep an eye on things like that to make sure we aren't overlooking something that would benefit our customers, but I don't think it will ever become something we regularly do - at least not unless our testing/articles start paying for themselves and we can afford to hire more people to do that testing.

Posted on 2019-07-22 17:35:13

Thanks for the reply, Matt. I can see why you don't include overclocking, as it does create stability issues if not tested exhaustively. Even still, to truly judge the king of the pack, you'd ideally need to include it. A 5-10% increase is noticeable in the user experience.

Posted on 2019-07-31 05:03:57

I was waiting for this article because I will probably upgrade my Sandy Bridge workstation after this summer.
I am really uncertain whether to buy the 9900K or the brand-new 3900X, even after viewing these test results. The new Ryzen looks better than the 9900K, but we should consider that users with the latter usually overclock to 5.0GHz (or even more) with ease, and given that, I am not sure the Ryzen 3900X would still be the best. At the same time, the 3900X is reported to have modest overclocking headroom.

I am attracted to this 24-thread processor, but I am not sure about buying it. Has any user already tested it in Lightroom and Photoshop? What are your impressions?

Posted on 2019-07-27 07:29:21
Daniel Shortt

Ryzen 3900-series chips are best left to "OC" themselves. Because of the way they use multiple chiplets, you will find that one chiplet is slower than the other, better-binned one, which is the chiplet that hits the higher boost speeds. Manual all-core overclocks will crash because the slower chiplet lags behind.

To answer your question: if you just want to do photo editing, the 12 cores will beat the Intel, especially in raw editing (Lightroom and Capture One), but if you want to game, the Intel still has a small edge.

Posted on 2019-10-01 15:30:51
Seb T

Hi guys, I need your help. I'm one of the happy owners of a new config with a 3700X + 32GB RAM + GTX 970, and I have a problem: when I open a RAW file in Camera Raw (Photoshop) and start using the healing brush, everything is fine and smooth at the beginning, but after a few spots are healed the brush starts getting slower and slower. It seems Photoshop doesn't like managing too many healed spots. So my question is: does Photoshop rely on the CPU or the GPU for this specific kind of task? Do I need to buy a CUDA graphics card to get it smooth?

Posted on 2019-07-31 20:08:00

I have the same problem. Specs: i5-6600K @ 4.5GHz, 32GB RAM, GTX 1060 6GB. Lightroom has the same problem.

Posted on 2019-08-23 21:00:13
Seb T

Hi, so far I haven't gotten an answer from any professional, but I have learned that Photoshop needs a mix of a very good CPU plus as much RAM as possible, and it will eat all the RAM you can throw at it. So what I did was go to Settings > Performance and push Photoshop's memory allocation to 95%, and the problem nearly disappeared. Try it and tell me how it goes ;)

Posted on 2019-08-24 20:35:25

Nice benchmarking. It's great to see AMD CPUs finally being more competitive with Intel in workloads where per-core performance matters.

One thing that makes me wonder, though, is the choice of cooling solutions. While the Wraith Prism is not too bad, it's still a stock cooler. But I guess if you ran it at its max RPM, it offered a similar level of cooling to the Noctua, just with the difference that the Noctua was barely audible and the Prism was trying to destroy your eardrums. In that case, I admire your dedication and suffering.

Posted on 2019-07-31 23:17:54

The Prism was definitely louder under load! In the end, we plan to carry a Noctua for these Ryzen chips as well - but we hadn't qualified that configuration yet, so we started off with the stock cooler AMD provides. I looked for reviews of the Prism at some other websites as well, and at most it looks like it *might* make the CPU run ~0.1GHz slower under all-core load compared to a massive 360mm AIO. That is a pretty small difference, overall.

Posted on 2019-08-02 16:16:00

Well, the Noctua is not the top-tier dual-tower, triple-fan stuff anyway, so I guess the benchmark comparison didn't suffer much. If it had suffered much, AMD wouldn't have put up such a good fight. It looks like the days of Intel-only for the Adobe suite are over.

Posted on 2019-08-06 23:05:38

The overall score on my 3900X improved from 942 to 1019 after upgrading RAM from 2400MHz CL16 to 3200MHz CL16 (both scores are using a hard drive for the data files, which may perhaps explain my lower scores than Puget's even with equivalent RAM). FWIW, a Ryzen 3 1200 with 2400MHz CL16 scored 522, which I suppose is not terrible considering how cheap it is!

Posted on 2019-08-29 00:58:20

That's a nice bump. I'm researching a new build right now. My past few builds have been Intel, currently a 6700K, and I was thinking 9900K. Now I'm not sure; maybe AMD should be in the mix :)

Posted on 2019-09-13 07:03:28
Healing Care

Where does the i7 4790K fit in this? Is it worth upgrading to a 3600, or should I go for a 3700X?

Posted on 2019-09-19 08:15:24
Daniel Shortt

Hi, a question for Matt about your cooling methodology.
TL;DR: you want apples to apples, and different coolers don't give you that.

I understand you might be aiming for an out-of-the-box experience, but especially with the way these Ryzen chips boost, it's more like a GPU in how temperature will affect the clocks... significantly.
Putting the Intel chips under a Noctua rated for a 165W TDP while using the stock AMD cooler, which if you look around gets straight "it's OK but not great" comments, seems like it will skew the results. My Ryzen sat under its stock cooler for about 24 hours before I put a Kraken X62 AIO on it. For raw file editing that hits all cores, the fan was up and down like crazy (the constantly changing fan noise is more irritating than the actual volume). After fitting the water cooler, without touching a setting, it runs about 200MHz higher 5 minutes into a Capture One high-res export, mainly because it's at 58°C instead of around 75°C.

Posted on 2019-10-01 15:52:27

This testing was done before we had fully qualified Ryzen for use in our workstations, so things like coolers weren't fully sorted out yet. That said, we did run a number of checks with a couple of other coolers, including an AIO liquid cooler, and we saw no appreciable difference in performance in our benchmarks. The fan noise isn't great (like you mentioned), but since performance wasn't being affected, we opted to stick with AMD's approved and recommended cooler for the majority of our testing.

There have been a number of BIOS and other updates that affect things like the aggressiveness of the Turbo clocks, however, so it is possible that the cooler may make a difference if we were to re-run these tests again today. The good news is that we have qualified the Noctua NH-U12S for our Ryzen workstations, so any future testing you see from us will be using that cooler which shouldn't have any problems at all keeping the Ryzen CPUs adequately cooled.

Posted on 2019-10-01 19:15:08
Daniel Shortt

Awesome, thanks for the response. I'm surprised you didn't see a difference in clocks with an AIO, but the BIOS updates and motherboards used have been all over the place the last month or so, and that probably plays into it.

I dug up a graph (below) showing clock speed vs. temperature, and it seems to hold true: 55°C gives 4200MHz, 80°C gives 4000MHz.

Posted on 2019-10-02 07:44:52
Ryszard Bąk

Nice comparison, but many people have older CPUs like the i5 6600K. Could you run the same test on an i5 6600K to compare against the new CPUs?
On your site I found an i5 6600K test, but that test was different from this one, so I can't compare the i5 6600K with the i5 9600K.

Posted on 2019-10-19 08:36:26

It really isn't feasible for us to test every CPU, which is one of the reasons we made our Photoshop benchmark available for anyone to download and run: www.pugetsystems.com/go/PsB... . If you run it on your own system, you will be able to tell exactly how much faster a system with one of these CPUs is compared to what you have now. The GPU you have will make a small difference in the results, but unless you are using onboard graphics or something really old, it shouldn't affect the overall results too much.

Posted on 2019-10-21 17:00:24
Ryszard Bąk

I don't have Photoshop CC; I still use CS6, and the benchmark won't run on CS6 :(

Posted on 2019-10-22 06:38:36

I recently purchased a 3700X / 2070 Super / 32GB RAM to work on some high-res 16-bit Photoshop files, but it hasn't really been up to par. I was thinking about getting the 3900X or 3950X, but it seems like double the price for only a fraction more performance. What do I need to feed Photoshop to get better performance at this point?

Thanks so much

Posted on 2019-11-20 14:34:26

The biggest thing to check is whether you are using all your RAM; if you are, getting more RAM would help. Unless your total RAM usage goes above ~80% while you are working in Photoshop, however, more RAM won't make a difference. Beyond that, unless you want to pay for a faster CPU like the 3900X, there really isn't much you can do from a hardware standpoint. On the software side, there are probably some workflow tweaks you could tackle that would help, but that is beyond what we really focus on.
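As a rough illustration of that ~80% rule of thumb (the exact threshold, function name, and numbers here are just my own sketch, not anything from Photoshop or our benchmark):

```python
def more_ram_likely_to_help(used_gb, total_gb, threshold=0.80):
    """Hypothetical helper: returns True when total memory usage is
    above the ~80% mark while working, i.e. when adding RAM is likely
    to help. Below that, a faster CPU matters more than more RAM."""
    return used_gb / total_gb > threshold

print(more_ram_likely_to_help(28, 32))  # 87.5% used -> True
print(more_ram_likely_to_help(20, 32))  # 62.5% used -> False
```

You can watch the actual usage number in Task Manager (Windows) or Activity Monitor (macOS) during a typical editing session.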

Posted on 2019-11-20 16:49:41