Read this article at https://www.pugetsystems.com/guides/1529

Photoshop CPU Roundup: AMD Ryzen 3, AMD Threadripper 2, Intel 9th Gen, Intel X-series

Written on July 17, 2019 by Matt Bach

Introduction

For quite a while, Intel has been the dominant CPU choice for many applications - especially ones like Photoshop that do not scale well across a high number of CPU cores. This has been evident even in our own workstation product line, which has predominantly been Intel-based for years, with AMD-based options only listed for very specific workloads.

However, with the new Ryzen 3rd generation CPUs, AMD has spent considerable effort improving performance for moderately threaded applications like Photoshop. In large part, this is what makes AMD's new processors so exciting. The increase in core count is certainly nice, but most applications simply are not going to benefit from the higher core counts that both Intel and AMD are trending towards. Instead, it is the IPC (instructions per clock) improvements that will be more significant for most users - even if that doesn't show up in the marketing specs.

AMD Ryzen 3rd Gen Photoshop Performance

In this article, we will be looking at exactly how well the new Ryzen 3600, 3700X, 3800X, and 3900X perform in Photoshop. Since we expect these CPUs to shake up the market quite a bit, we also took this opportunity to do a full CPU roundup. Not only will we include results for the previous generation Ryzen CPUs, but also the latest AMD Threadripper, Intel 9th Gen, and Intel X-series CPUs. And for good measure, we will throw in a 14-core iMac Pro and a current (for the moment) 2013 Mac Pro 12-core as well.

If you would like to skip over our test setup and benchmark sections, feel free to jump right to the Conclusion.

Looking for a Photoshop Workstation?

Puget Systems offers a range of workstations that are tailor-made for your unique workflow. Our goal is to provide the most effective and reliable system possible so you can concentrate on your work and not worry about your computer.

Configure a System!

Test Setup & Methodology

Listed below are the specifications of the systems we will be using for our testing:

Shared PC Hardware/Software:

  • Video Card: NVIDIA GeForce RTX 2080 Ti 11GB
  • Hard Drive: Samsung 960 Pro 1TB
  • Software: Windows 10 Pro 64-bit (version 1903), Photoshop CC 2019 (Ver 20.0.4), Puget Systems Ps Benchmark V18.10 BETA

Mac Test Platforms:

  • iMac Pro: 14-core Intel Xeon W, 64GB 2666MHz DDR4 ECC, Radeon Pro Vega 64 16GB, 1TB SSD
  • Mac Pro (2013): 12-core 2.7GHz, 64GB 1866MHz DDR3 ECC, Dual AMD FirePro D700 6GB, 1TB PCIe-based SSD

*All the latest drivers, OS updates, BIOS revisions, and firmware were applied as of July 2nd, 2019

Note that while most of our PC test platforms are using DDR4-2666 memory, we did switch up to DDR4-3000 for the AMD Ryzen platform. AMD CPUs can be more sensitive to RAM speed than Intel CPUs, although in our Does RAM speed affect video editing performance? testing, we found that even the new Ryzen CPUs only saw modest performance gains in Creative Cloud applications when going from DDR4-2666 to even DDR4-3600 RAM.

For each platform, we used the maximum amount of RAM that is both officially supported and actually available at the frequency we tested. This does mean that the Ryzen platform ended up with only 64GB of RAM while the other platforms had 128GB, but since our benchmarks never need more than 32GB of RAM to run, this does not actually affect performance at all. We have recently re-confirmed this in our RAM speed article linked above.

However, keep in mind that this is technically overclocking since the AMD Ryzen 3rd Gen CPUs support different RAM speeds depending on how many sticks you use and whether they are single or dual rank:

Ryzen 3rd Gen supported RAM:

  • 2x DIMM: DDR4-3200
  • 4x single rank DIMM: DDR4-2933
  • 4x dual rank DIMM: DDR4-2667

Since we are using four sticks of dual rank RAM (almost every 16GB module available is dual rank), we technically should limit our RAM speed to DDR4-2666 if we wanted to stay fully in spec. However, since many end users may end up using a RAM configuration that supports higher speeds, we decided to do our testing with DDR4-3000, which is right in the middle of what AMD supports.

The benchmark we will be using is the latest release of our public Photoshop benchmark. Full details on the benchmark and a link to download and run it yourself are available at Puget Systems Adobe Photoshop CC Benchmark.

Benchmark Results

While our benchmark presents various scores based on the performance of each test, we also wanted to provide the individual results. If a specific task is a hindrance to your workflow, examining the raw results for that task is going to be much more applicable than our Overall scores. Feel free to skip to the next section for our analysis of these results if you would rather get a wider view of how each CPU performs in Photoshop.

AMD Ryzen 3rd generation Photoshop Performance Benchmark

Benchmark Analysis

Testing 22 different CPUs makes for a bit of an overwhelming chart, but there are a few very clear conclusions we can pull from the results:

  1. AMD Threadripper CPUs are not good for Photoshop.
  2. Intel X-series CPUs are OK for Photoshop, but not an ideal choice for heavy Photoshop workflows since you can get more performance for less money with other CPU options.
  3. Intel 9th Gen CPUs are great for Photoshop.
  4. The new Ryzen 3rd generation CPUs are terrific for Photoshop.

We typically don't get too much into cost to performance since pricing is constantly in a state of flux, but there is no argument that the new Ryzen 5 and Ryzen 7 CPUs in particular are excellent for Photoshop. With current MSRP pricing, the AMD Ryzen 7 3800X is $100 cheaper than the Intel Core i9 9900K, yet performs almost exactly the same. At the lower end of the product stack, the Ryzen 5 3600 is $60 less than the Intel Core i5 9600K, yet ended up being a modest ~4% faster overall.

Something else we want to specifically point out is how large the performance gap is between the new Ryzen CPUs and the previous generation. The most direct comparison is between the Ryzen 7 2700X and the new Ryzen 7 3700X, where we saw a terrific 19% gain in performance. After years of seeing small 5-10% performance gains from one generation of CPUs to the next, this is a very impressive feat and an indicator that AMD is definitely on the right track.

Are the Ryzen 3rd generation CPUs good for Photoshop?

Overall, the new AMD Ryzen CPUs are a great choice for a Photoshop workstation. There may not be much of a reason to go with the more expensive Ryzen 9 3900X over the Ryzen 7 3800X, but the Ryzen 5 and Ryzen 7 CPUs should be slightly faster than the similarly priced Intel equivalents.

However, the performance advantage over Intel is really only a handful of percent, which is going to be almost impossible to notice in the real world. While the AMD CPUs are slightly faster or more affordable than Intel, oddly enough that may not be enough of a reason for everyone to go with team red (AMD) over team blue (Intel). There are many more factors that go into your choice of platform than just benchmark performance - especially when the results are as close as these are. While many consumers may make the choice based simply on which company they want to support (which is a perfectly valid reason), there are a few other considerations to keep in mind:

On the Intel side, the Z390 platform has been available for quite some time, which means that most of the bugs and issues have been worked out. In our experience over recent years, Intel also simply tends to be more stable overall than AMD and is the only way to get Thunderbolt support that actually works. Thunderbolt can be problematic on PC, and there are only a few motherboard brands (like Gigabyte) where we have had it actually work properly.

For AMD, the X570 platform is very new and there will be a period of time where bugs will need to be ironed out. However, AMD is much better about allowing you to use newer CPUs in older motherboards, so if upgrading your CPU is something you will likely do in the next few years, AMD is the stronger choice. In addition, X570 is currently the only platform with support for PCI-E 4.0. This won't directly affect performance in most cases, but it will open up the option to use insanely fast storage drives as they become available.

Keep in mind that the benchmark results in this article are strictly for Photoshop. If your workflow includes other software packages (After Effects, Premiere Pro, DaVinci Resolve, etc.), you need to consider how the processor will perform in all those applications. Be sure to check our list of Hardware Articles for the latest information on how these CPUs perform with a variety of software packages.


Tags: Photoshop, Intel 9th Gen, Intel X-series, Intel vs AMD, AMD Ryzen 3rd Gen, AMD Threadripper 2nd Gen
sturmen

Wow! I'm sure no one was truly expecting AMD to take the top 2 spots so soon.

Posted on 2019-07-18 21:37:28

Three of the top four spots is really good! I'm working on the After Effects article now as well, and AMD again takes three of the top four spots in that application. Really nice, since competition is always good and it has been a while since AMD truly competed in these kinds of workloads. It will be interesting to see how Intel responds.

Posted on 2019-07-18 21:41:00
Jig Serencio Navasquez

Waiting for AE finally!
https://media1.giphy.com/me...

Posted on 2019-07-19 08:33:32
Jakub Badełek

Matt, could you also do a quick separate test of the influence of X570? Those mobos are very expensive and have active cooling, which may be problematic (there are already reports of models overheating). Many people are considering buying older mobos or even lower-end models because of that. I wonder what the real benefit of PCIe 4.0 is besides crazy fast (and expensive) storage that few will really benefit from.

Great article btw, can't wait for tests in other programs, especially Lightroom or Premiere Pro

Posted on 2019-07-18 21:43:36
Jozsef Weigert

There is already a comparison here: https://www.techpowerup.com...

Posted on 2019-07-18 21:46:38
Jakub Badełek

Thanks! Very nice article. Still, I kind of trust Matt more :D

Posted on 2019-07-18 21:54:31

Aww, thanks! But honestly, I personally go to TechPowerUp, Tom's Hardware, and AnandTech primarily when it comes to things like that. We do our own internal qualification for specific products, but we really don't ever see in person the myriad of brands and models that those guys do. Especially on things like underlying architecture changes, those guys do an amazing job.

Posted on 2019-07-18 22:03:08

There really shouldn't be much in the way of a performance advantage to X570 over X470 - just additional features like PCIe 4.0 (although that could impact performance in some workflows). To be honest, we probably won't do much formal testing on that topic. We're already flooded with our to-do lists, and we always end up using the latest chipsets in our workstations anyway, so that kind of testing isn't a large priority for us.

Premiere Pro article will probably be either Friday or Monday - it depends on how long it takes me to sift through the 3,334 data points to turn them into meaningful results. Lightroom will be a while, however. We're currently trying to revamp our testing since most of the things people care about (culling, performance with brushes, etc.) are extremely hard or impossible to benchmark. Our current testing is largely related to things like importing/exporting/previews/etc., which actually utilize the CPU quite differently than "active" tasks, and that can often lead to people picking what actually isn't the right CPU for them. Once we figure out a solution, we'll start including Lightroom testing again.

Posted on 2019-07-18 21:53:44
Jakub Badełek

Thanks, this comment plus the article linked above makes it clear for me. It's great you're working on benchmarking brushes for LR too! They can be a pain in the... back, especially spot removal. Anyway, take your time! I am considering putting a new machine together this year, so your tests will be a great source of knowledge

Posted on 2019-07-18 22:08:10
Jozsef Weigert

Looking forward to the new Lightroom test! In my personal experience, the speed of moving between images in the Develop module is critical. Also, zooming into images should be snappy on a good hardware setup. The fluidness and snappiness of editing is the most defining factor for me. Import/export time is less important, as it is already fast enough on good hardware and we cannot really feel the difference - maybe only those who edit tens of thousands of images at the same time. :)

Posted on 2019-07-18 22:13:25

Yea, what you described is exactly what we hear over and over - but is reaaaallllyyyy hard to benchmark. We have some great contacts with the Lightroom dev team so I'm hoping we can get some help, but it is going to require them to add some code to the plug-in API. That's going to be a hard sell most likely since they (like all developers) are swamped fixing bugs and adding features that their users want/need. Things like this are great for us, but really doesn't apply to the vast majority of their users so it can be understandably hard to devote resources to it.

Posted on 2019-07-18 22:16:42
Cameron Ware

For LR I'd like to see some kind of comparison for switching photos in the Develop module, although I'd also like to see some comparisons for importing and converting to DNG (possibly also editing photos during the import/convert process). If there's little difference in the feel of active tasks but a big difference in importing and converting, it's a clear choice for me.

Posted on 2019-08-06 07:36:53
Rasta_Cook

Thx for the article, would it be possible to also take a look at Lightroom and also possibly add the 3600x cpu ?

Posted on 2019-07-19 04:33:25

Lightroom will actually probably be a while. We're currently trying to revamp our testing since most of the things people care about (culling, performance with brushes, etc.) are extremely hard or impossible to benchmark. Our current testing is largely related to things like importing/exporting/previews/etc., which actually utilize the CPU quite differently than "active" tasks, and that can often lead to people picking what actually isn't the right CPU for them. Once we figure out a solution, we'll start including Lightroom testing again.

As for the 3600X, we honestly probably won't end up testing it since we rarely use anything that low-end in our systems. The only reason we included the 3600 was because AMD was kind enough to get us a sample of the model (along with the 3800X).

Posted on 2019-07-19 04:38:59
Rasta_Cook

Thanks for getting back to me Matt. I understand what you mean about active tasks, and it will be great if you can figure out a way to benchmark this. I was one of those who picked the wrong CPU. I currently have a 2600X, and although some things like importing/exporting in Lightroom specifically are much faster than on my previous system (i5 2500K OC @ 4.6GHz), the overall experience while doing active work on the 2600X feels surprisingly worse than with my 8-year-old CPU... I was extremely disappointed in the 2600X's performance while doing things like spot removal, gradient filters, etc. - it feels laggy/sluggish... so I was wondering if upgrading to the 3600X would be worth it; I have a feeling it might... Have you had the chance to see how Lightroom feels with the 3600 or 3700X (I would assume those would be fairly close to the 3600X)?

Posted on 2019-07-19 17:00:40

I haven't had the chance to do anything in Lightroom with these CPUs yet. If you want my guess, any of the new Ryzen CPUs should feel much better than the previous gen. Really hard to quantify by how much, but since they are performing really close to Intel in apps like Photoshop and After Effects, they should be about on par in Lightroom "active" tasks as well.

Posted on 2019-07-19 17:08:44

Quite an impressive leap forward for AMD! Besides the obvious top performers, the R5 3600 looks to bring crazy good value for its performance in Ps.

Posted on 2019-07-19 07:41:15
Rafał Urbański

3900x or 3700x vs 9900k?

How about testing zbrush? I have ryzen 1700 atm. What about Unity and Unreal? What about streaming desktop (3d workflow - recording tutorials)

I am having some weird problems with ryzen system though... That's why I wanted to go with i9 9900k but damn... new ryzens are sooo good. :(

Zbrush is super important for me. I want best performance possible. Not sure how multi-core performance can affect my work. Some people say zbrush is amazing when it comes to using multi-core but others say it is not true and it is better to go with higher frequency... I am confused.

I also play games (it is not my main thing I do but I also create video games and use Unity a lot)

The Clay brush and Move brush are laggy on high-poly models (around 20 million "points") in ZBrush on my Ryzen at 3.8GHz.

There are basically zero comparison benchmarks for ZBrush. I wish someone would do a comparison between the i9 9900K, Threadripper, 3700X, and 3900X to show how cores and frequency affect ZBrush performance - for example, measuring FPS when rotating a very high-poly model, using different brushes, etc.

Posted on 2019-07-19 08:24:54
David Young

Any chances that we will see benchmarks for Unity in the future?

Posted on 2019-07-19 10:07:22

That is something we are actively looking at, but it is a little tricky. At the moment I can only think of two things to try and test: real-time FPS performance and compile time. Maybe some lighting related stuff too...

Is there more you would want to see tested, or any suggestions on good sample projects to use as the basis for testing?

Posted on 2019-07-19 16:39:26
Behrouz Sedigh

"Keep in mind that the benchmark results in this article are strictly for Photoshop. If your workflow includes other software packages (After Effects, Premiere Pro, DaVinci Resolve, etc.), you need to consider how the processor will perform in all those applications"

I'm waiting for the next non-Photoshop bench!

Posted on 2019-07-19 12:29:25
Hwgeek

All ASRock X570 boards are Thunderbolt-ready [official support], and some of them have it built in.

Posted on 2019-07-19 12:47:07
Dennis L Sørensen

Nice results by AMD. But still, I am a little disappointed. 12 cores and 24 threads at a relatively fast frequency, and it is still only a smidge faster than Intel's 8-core, and AMD's 8-core 3800X only matches an Intel product released 8 months ago... This shows how difficult it is to program for higher core counts, and why the solution for AMD (and Intel in the near future) is not just to shove more cores into a package. We need more quality (efficiency/IPC) and not more quantity.

Posted on 2019-07-19 12:50:42
leonard

If you consider the fact that there are zero optimizations for AMD, then it's quite an achievement.
Some tech YouTubers said that Adobe is in the Intel camp, so AMD will never be a choice no matter how good AMD's CPUs are.
It's a matter of time (an Adobe update) before AMD's performance is ruined or some optimizations arrive exclusively for Intel, and that chart will look different.
Quick Sync ruined AMD for Premiere Pro; even though the same technology has existed in GPUs for years (Nvidia NVENC and AMD VCE), it never got used and never will.

Posted on 2019-07-19 14:54:46

I've never had the impression that Adobe is in the "Intel camp", and we've done quite a bit of work with the developers of various software packages. I do think there is some level of prioritizing code for the largest user base (of which Intel is definitely the most common CPU), but that is no different than prioritizing a bug that affects 80% of users over one that only affects 20%.

My understanding is that the reason Intel has been better for Adobe apps for quite a while is simply because Intel has better per-core performance. Adobe apps tend to not scale super well (although that isn't an Adobe thing, but something market wide), and it took AMD improving their IPC to really make them competitive with Intel. I highly, highly doubt we'll see an update that tanks performance for AMD.

Posted on 2019-07-19 15:39:36
leonard

I really hope you are right, but the question remains: why use Quick Sync and not NVENC and AMD VCE?
Let's say AMD doesn't have the money to send engineers to implement this - Nvidia has plenty, and after so many years it's still not implemented.
I know you don't test in Vegas Pro, but that is one piece of software that uses Quick Sync, NVENC, and AMD VCE - no problem for a tiny company to implement, yet somehow impossible for a company worth 132 billion.
I saw tests on some tech YouTuber's channel for warp stabilizer: in DaVinci or Final Cut it takes a few seconds, in Premiere Pro, MINUTES. This is just absurd - using 1-2 cores for stabilizing while the others use everything you've got.
https://youtu.be/X40N0HSm6Y...

Posted on 2019-07-19 18:12:20

You would need to talk to the devs to get an answer, to be honest - we are on the outside looking in just like you are. I know there are plugins that allow NVENC in Premiere Pro, but why Adobe doesn't use it natively, ¯\_(ツ)_/¯

Posted on 2019-07-19 18:20:41
Dennis L Sørensen

I know you don't sell overclocked machines, but that is the next result I would love to see (after all the regular apps/programs have been tested at stock). I have a feeling that if you push the OC, the 9900K still comes out on top (but I don't know).

Posted on 2019-07-19 12:52:20

Very little chance we'll do overclock testing - just too much to do covering testing that is directly relevant to our customers. But from what I've seen in reviews, I agree that the 9900K should pull ahead a bit since it looks like Intel is stronger for overclocking.

Posted on 2019-07-19 15:40:53
Elcideous

From what I watched in some other reviews, the 3900X is hands down bad when scrubbing videos.

Posted on 2019-07-19 15:17:53
Luca

ASRock manufactures two X570 boards with integrated Thunderbolt 3.
It'd be interesting to see if it actually works.

Posted on 2019-07-19 17:41:34

I'm really curious as well. As far as I'm aware, there is no completely official implementation of Thunderbolt on AMD chipsets, but motherboard manufacturers can slap it on if they want. A few brands have it on AMD Threadripper boards, but when we tried it, it really didn't work well (which is likely why they didn't list Thunderbolt in the specs). My take is that ASRock is willing to take more risks than other brands, but whether that is because they are confident in it working or simply because it is something they can use in marketing to drive sales, I don't know.

After working with Thunderbolt for years and dealing with the huge hassle it has been, I personally wouldn't trust it - at least not in this early of an implementation. I've been wrong before, however, so who knows.

Posted on 2019-07-19 17:45:34
Eric Marshall

Great job on another roundup! Though I have to disagree with the conclusion. The conclusion I come away with looking at these results, is that there is a disturbing lack of difference from CPU to CPU in this application, and that the primary performance bottleneck in this program is still the program itself, as has been the case for the last 10 years or so. As a company in the business of trying to sell high end computing to customers, Puget should take a moment in each of these articles to berate Adobe for spending so much development effort pushing "cloud" garbage on us, and so little development effort optimizing the use of the rapid expansion of core count taking place.

Posted on 2019-07-21 02:13:32

We actually do give feedback directly to many Adobe teams, although I believe it is all under an NDA so I can't get into the details of it.

You are definitely right that at the high-end, there isn't a massive difference in performance between different CPUs. However, I can tell you that at least for our customers, even a 5-10% performance gain is often well worth the cost of the higher-end CPUs. I know our customers are not the average user, but for them, any investment that saves them time pays off incredibly quickly.

But, that is one reason we not only publish our thoughts on the results, but the raw benchmark results and scores as well. Our commentary is always going to be skewed towards our customer base - after all, that is the main reason we do this testing. The fact that it is helpful to the masses is great, and we see no reason to hide it, but I think many people don't quite understand that we are a high-end workstation manufacturer, and everything we do is geared towards helping our customers get the exact right system for their workflow.

Posted on 2019-07-21 04:29:36
Eric Marshall

I've built a number of systems at work like the ones you would for your clients: premium case/PSU, enthusiast or workstation platforms, sometimes with ECC memory, high-end CPUs, Quadro cards, etc. I can relate to the "time is money" premise and account for that in the decision to deploy more quality and performance (even at very high component cost) if it means we might get a couple more years out of the system before a rebuild is revisited, or if it saves a few minutes a day in compute time for users, and just as importantly, saves me time by speeding up maintenance and reducing repairs. In business, the cost of the machine is typically a drop in the bucket compared to the big picture. The salary of the person sitting in front of the machine is likely 50-500X greater than the cost of the machine amortized over the years it will be in service.

The problem I have here is that we have a range of CPUs up for consideration, where the high end (something like a 3900X) has ~150% more processing power at a ~125% higher price than, say, an i5-9600K, yet in this application the performance difference is only 15%. Adobe is not the only guilty party here. I have CAD guys at work using premium Lake-architecture many-core workstations for CAD applications (high end for the longevity/reliability/5% reasons previously discussed), and they are constantly running into performance issues where one thread is pegged while the rest sit idle. CAD and modeling viewports are notoriously bad performers, with no development effort to spread the load across the compute resources available in modern computing. Yet somehow, they always make their annual deadlines for a new version packed full of new features and cloud integration and more junk that nobody asked for, while leaving all the 20-year-old problems unsolved. Typical software development these days.

Posted on 2019-07-28 02:37:00

I think it is a little of column A and a little of column B. New features and flashy updates sell, so of course companies like Adobe or Autodesk are going to spend more of their development funds on those things. I think they are starting to get the message that performance and stability are more important, but most companies are focusing on GPU acceleration over improving CPU threading performance. Unfortunately for Intel/AMD, often the easiest things to port over to the GPU are the ones that are really efficient at using a high number of cores which leaves the CPU-based tasks as the ones that are not (currently) very good at running in parallel.

At the same time, there are simply some things that cannot be run in parallel. CAD is a perfect example, in fact, since it is all parametric modeling where each point relies on the coordinates of the point before it. That is why, without a major redesign of how CAD programs operate, apps like AutoCAD or SOLIDWORKS simply are never going to be good at using more than a core or two for the viewport.

That said, we are definitely starting to hit the limit of per-core performance, so improved multi-threading capability is something developers are going to have to consider. All it takes is one company investing in it, and everyone else will have to as well in order to keep up.

Posted on 2019-07-29 02:54:58
Eric Marshall

On the subject of GPU acceleration... We see the same scaling problems with GPUs in these content creation applications. There's almost no performance scaling between a GTX 750 and an RTX 2080 Ti in most GPU-accelerated Photoshop manipulations. They are bottlenecking the high-end GPUs on a single thread.

I don't see a major redesign being required to begin tapping into a multi-threaded viewport. Autodesk has been using DirectX for years. Recent developments in later versions of DirectX, as well as newer GPUs and their driver/API support, set the stage for multi-threaded viewport capabilities. Games are using lots of threads to command GPU operations for their viewports these days.

Think about it... If the developers who spend most of their day "game testing" and smoking weed... can get multi-threaded viewports to work, SURELY the "professionals" from a place like Adobe or Autodesk could manage to work on multi-threaded optimization for viewports and other GPU accelerated functions.

Posted on 2019-07-30 02:06:11
Seb T

Thanks for this benchmark, it's very clear and detailed!!!
I'm a photographer and I've worked with an i5 3570K / 32GB RAM / GTX 970 FTW since 2013.
I saw these new AMD CPUs, so I really want to upgrade my 7-year-old CPU, but I have a question!!! I just heard that AMD is going to come out with new CPUs at the end of 2019 / early 2020 - the Zen 3 on 7nm+ EUV. Should I buy a 2700X, which could give me a bump of +35%, and save money for a high-end 7nm+ part (less buggy and everything)? Or buy a 3700X right now, which could give me a bump of 50%, and stay with it for 7 to 10 years as I did with my current CPU? (Sorry for my English!!!)

Posted on 2019-07-21 15:56:27
Rafał Urbański

Buy the 3900X for "future-proofing". The 3950X that is launching in two months will IMO be too expensive if you don't render 3D stuff on the CPU. A 12-core CPU with a nice high frequency should be more than enough for the next 7 years. In 7 years we will probably have 100 cores at 10GHz.

Posted on 2019-07-21 16:34:39

There is always going to be a new technology or product 1-2 years away in this industry, and if you constantly wait for the next thing you'll never buy a computer :)

My advice, personally, is to save up until you can afford a new system that will provide a substantial improvement over what you have now. For me, that ends up being a new system about every 3-5 years, with minor upgrades in between, but it will look different for each person. If you are on a 7 year old system, then a modern system should be a big upgrade - so just save up to make sure that you get good specs, and don't have to settle for lower end parts because of money.

Posted on 2019-07-22 17:49:41
Seb T

Thanks to both of you for the very helpful answers and advice! I finally bought a 3700X with 32GB of 3200MHz CL16 RAM and a B450M Mortar, along with my first M.2 drive :D I think it will give me a gain of 50% compared to what I have now, so I think I'll be happy with it

Posted on 2019-07-23 22:48:44

Congrats, and best of luck with your new system :)

Posted on 2019-07-23 22:53:09
Vlasec

The Mortar is not the greatest of MSI B450 boards, but it is not a failure either, so I'd say it is a pretty good build :)

Posted on 2019-08-07 07:06:21
Rafał Urbański

How much better is the 3900X compared to the i9 9900K in ZBrush and Blender (viewport tests like in this video https://www.youtube.com/wat...)?

He used an amazing benchmark pack for Blender (linked in the video description) covering single-core performance in different tasks - I've never seen anyone else use it.

Posted on 2019-07-21 16:27:37
Behive

Great benchmark/article as always, Matt. Really grateful for the effort you guys put into these reviews - there is nothing else like it online. It helped me in purchasing a 9900K setup a while ago; I have it OC'd at 5GHz on all cores with 32GB of 3200 CL14 RAM. I really do wonder how the results would vary if you overclocked both the Ryzen and Intel processors. I know you previously said you don't overclock for benchmarking, but really, these top-end processors are expected to be overclocked, not left at stock speeds. Plus, the RAM for the Intel systems is slow. I assume it is the recommended speed for a Z390/Z370 system, but even still, a review by Tom's Hardware showed that the ideal RAM speed for current-gen Intel processors in Photoshop is 3200MHz CL14. I'd love to see benchmarks with optimized RAM and the CPUs OC'd. I can understand if you can't, but I do end up thinking that the final scores you have here are less relevant, since the majority of buyers will optimize their systems. Thanks again, David.

Posted on 2019-07-22 02:38:02

I don't know if I agree with the statement that these CPUs are meant to be overclocked. They are capable of it certainly, but I imagine the vast majority of 9900K CPUs in the field are running at stock speeds.

In the end, keep in mind that the point of our testing is to help our customers. The fact that it is useful to so many others is terrific, but we are always going to prioritize our time towards the things that directly affect the systems we sell. That is the same reason why we stick with the officially supported RAM speed (although we did fudge it a bit for Ryzen in this testing). Faster RAM is just another form of overclocking, and the performance gain from either simply isn't worth even the small decrease in system reliability for our customers. That absolutely isn't going to be the case for people who build their own systems and are willing to tinker if things start having issues, but those simply are not the people we are directly targeting.

That said, every once in a while we do testing that is beyond the scope of our customer base. Overclocking is sometimes a part of that, and so is RAM speed analysis (which is actually something we will be publishing an article on today or tomorrow). We need to keep an eye on things like that to make sure we aren't overlooking something that would benefit our customers, but I don't think it will ever become something we regularly do - at least not unless our testing/articles start paying for themselves and we can afford to hire more people to do that testing.

Posted on 2019-07-22 17:35:13
Behive

Thanks for the reply, Matt. I can see why you don't include overclocking, as it does create stability issues if not tested exhaustively. Even still, to truly judge the king of the pack you'd ideally need to include it. A 5-10% increase is noticeable in the user experience.

Posted on 2019-07-31 05:03:57

I was waiting for this article because I will probably upgrade my Sandy Bridge workstation after this summer.
I am really uncertain whether to buy the 9900K or the brand new 3900X, even after viewing these test results. The new Ryzen looks better than the 9900K, but we should consider that users with the latter usually OC to 5.0GHz (or even more) with ease, and given that, I am not sure the Ryzen 3900X would still be the best. At the same time, the 3900X is reported to have modest OC headroom.

I feel attracted to this 24-thread processor, but I am not sure about buying it. Has anyone already tested it in Lightroom and Photoshop? What are your impressions?

Posted on 2019-07-27 07:29:21
Seb T

Hi guys, I need your help. I'm one of the happy owners of a new config with a 3700X + 32GB RAM + GTX 970, and I have a problem. When I open a RAW file in Camera Raw (Photoshop) and start using the healing brush, everything is fine and smooth at the beginning, but after a few spots are healed, the brush starts getting slower and slower. It seems Photoshop doesn't like managing too many healed spots. So my question is: is this specific kind of task handled by the CPU or the GPU? Do I need to buy a CUDA graphics card to get it smooth?

Posted on 2019-07-31 20:08:00
Vlasec

Nice benchmarking. It's great to see AMD CPUs finally being more competitive with Intel in workloads where per-core performance matters.

One thing that makes me wonder, though, is the choice of cooling solutions. While the Wraith Prism is not too bad, it's still a stock cooler. But I guess if you ran it at max RPM, it offered a similar level of cooling to the Noctua, just with the difference that the Noctua was barely audible while the Prism was trying to destroy your eardrums. In that case, I admire your dedication and suffering.

Posted on 2019-07-31 23:17:54

The Prism was definitely louder under load! In the end, we plan to carry a Noctua for these Ryzen chips as well - but we hadn't qualified that configuration yet, so we started off with the stock cooler AMD provides. I looked for reviews of the Prism at some other websites as well, and at most it looks like it *might* make the CPU run ~0.1GHz slower under all-core load compared to a massive 360mm AIO. That is a pretty small difference, overall.

Posted on 2019-08-02 16:16:00
Vlasec

Well, the Noctua is not top-tier dual-tower, triple-fan stuff anyway, so I guess the benchmark comparison didn't suffer much. If it had suffered much, AMD wouldn't have put up such a good fight. Looks like the times of Intel-only for the Adobe suite are over.

Posted on 2019-08-06 23:05:38