Read this article at https://www.pugetsystems.com/guides/1320

Premiere Pro CC 2019 CPU Roundup: Intel vs AMD vs Mac

Written on December 13, 2018 by Matt Bach

Introduction

Across all the workstations we configure, sell, and support, Premiere Pro is one of the more difficult software packages to design a system for. Not only do the resolution and codec of the media you work with affect how much raw performance you need, it can also vary based on the kind of effects you use, the number of video layers, etc. In addition, there is no single piece of hardware that is the most important. The CPU is the biggest single piece, but the choice of GPU, RAM, and storage are also critical to having a well-tailored and efficient video editing workstation.

In addition to being one of the more important choices, getting the right CPU is also one of the more complicated decisions. Unlike applications like Photoshop and After Effects where there is a relatively clear "best" CPU, in Premiere Pro there are reasons to use a wide range of processors depending on your budget and what you are doing. To help you decide which CPU to use for Premiere Pro, today we will be looking at a wide range of processors from Intel and AMD including the Intel 9th Gen, Intel X-series, AMD Ryzen 2nd Gen, and AMD Threadripper 2nd Gen CPU lines. In addition, we will be comparing them to a current Mac Pro 12 Core and iMac Pro 14 Core for those that are curious about how much faster a PC workstation can be compared to a Mac.

One thing to note is that we will not be including results for any previous-gen CPUs in this article. At first, we were going to include them but the charts and tables soon got out of hand. Instead, if you want to know how these CPUs compare to previous generations, we recommend checking out the following articles:

If you would like to skip over our test setup and benchmark result/analysis sections, feel free to jump right to the Conclusion section.

Test Setup & Methodology

Listed below are the systems we will be using in our testing:

Shared PC Hardware/Software
Video Card: 1-2x NVIDIA GeForce RTX 2080 Ti 11GB
Hard Drive: Samsung 960 Pro 1TB M.2 PCI-E x4 NVMe SSD
OS: Windows 10 Pro 64-bit (version 1803)
Mac Test Hardware
System: Apple Mac Pro (12 Core) | Apple iMac Pro (14 Core)
CPU: 12-core, 2.7GHz, 30MB L3 cache | 14-core Intel Xeon W, 2.5GHz (Turbo Boost up to 4.3GHz)
RAM: 64GB 1866MHz DDR3 ECC | 64GB 2666MHz DDR4 ECC
Video Card: Dual AMD FirePro D700, 6GB GDDR5 VRAM | Radeon Pro Vega 64, 16GB HBM2 memory
Hard Drive: 1TB PCIe-based SSD | 1TB SSD
OS: macOS Mojave (10.14.1)

To thoroughly benchmark Premiere Pro CC 2019 (ver. 13.0.0) on each processor, we used a range of codecs across 4K, 6K, and 8K resolutions:

Codec | Resolution | FPS | Bitrate | Clip Name | Source
H.264 | 3840x2160 | 29.97 FPS | 80 Mbps | - | Transcoded from RED 4K clip
H.264 LongGOP | 3840x2160 | 29.97 FPS | 150 Mbps | - | Provided by Neil Purcell - www.neilpurcell.com
DNxHR HQ 8-bit | 3840x2160 | 29.97 FPS | 870 Mbps | - | Transcoded from RED 4K clip
ProRes 422 HQ | 3840x2160 | 29.97 FPS | 900 Mbps | - | Transcoded from RED 4K clip
ProRes 4444 | 3840x2160 | 29.97 FPS | 1,200 Mbps | - | Transcoded from RED 4K clip
XAVC S | 3840x2160 | 29.97 FPS | 90 Mbps | - | Provided by Samuel Neff - www.neffvisuals.com
RED (7:1) | 4096x2304 | 29.97 FPS | 300 Mbps | A004_C186_011278_001 | RED Sample R3D Files
CinemaDNG | 4608x2592 | 24 FPS | 1,900 Mbps | Interior Office | Blackmagic Design [Direct Download]
RED (7:1) | 6144x3077 | 23.976 FPS | 840 Mbps | S005_L001_0220LI_001 | RED Sample R3D Files
RED (9:1) | 8192x4320 | 25 FPS | 1,000 Mbps | B001_C096_0902AP_001 | RED Sample R3D Files

Rather than just timing a simple export and calling it a day, we decided to create six different timelines for each codec that represent a variety of different types of workloads. For each of these timelines we tested both Live Playback performance in the program monitor as well as exporting via AME with the "H.264 - High Quality 2160p 4K" and "DNxHR HQ UHD" (matching media FPS) presets.

Lumetri Color

Heavy Transitions

Heavy Effects

4 Track Picture in Picture

4 Track MultiCam

4 Track Heavy Trimming
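Counting it up, the test matrix described above works out as follows (a sketch only; the short clip labels are our own abbreviations of the codec table, and we assume one live-playback pass plus the two AME export presets per timeline):

```python
# Rough tally of the test matrix: 10 media clips, 6 timelines each,
# and 3 measurements per timeline (live playback plus two AME exports).
clips = ["H.264 4K", "H.264 LongGOP 4K", "DNxHR HQ 4K", "ProRes 422 HQ 4K",
         "ProRes 4444 4K", "XAVC S 4K", "RED 4K", "CinemaDNG 4.6K",
         "RED 6K", "RED 8K"]
timelines = ["Lumetri Color", "Heavy Transitions", "Heavy Effects",
             "4 Track Picture in Picture", "4 Track MultiCam",
             "4 Track Heavy Trimming"]
tests = ["Live Playback", "Export H.264 2160p", "Export DNxHR HQ UHD"]

matrix = [(c, t, x) for c in clips for t in timelines for x in tests]
print(len(matrix))  # 10 x 6 x 3 = 180 individual results per CPU
```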

Benchmark Results

While our benchmark presents various scores based on the performance of each type of task, we also wanted to provide the individual results in case there is a specific task someone may be interested in. Feel free to skip to the next section for our analysis of these results.

Live Playback - Benchmark Analysis

The "Score" shown in our charts is a representation of the average performance we saw with each CPU for that test. In essence, a score of "80" means that on average, the system was able to play or export our projects at 80% of the tested media's FPS. A perfect score would be "100" which would mean that the system gave full FPS even with the most difficult codecs and effects.
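As a rough sketch of that scoring scheme (illustrative only; `playback_score` and the sample FPS pairs below are hypothetical, not Puget's actual benchmark code):

```python
# Sketch of the scoring described above: each test's achieved FPS is
# taken as a fraction of the media's native FPS, capped at 100%, then
# averaged across all tests and scaled to 0-100.
def playback_score(results):
    """results: list of (achieved_fps, native_fps) tuples."""
    fractions = [min(achieved / native, 1.0) for achieved, native in results]
    return round(100 * sum(fractions) / len(fractions), 1)

# Hypothetical example: full FPS on one clip, half FPS on another.
print(playback_score([(29.97, 29.97), (12.0, 24.0)]))  # -> 75.0
```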

Premiere Pro CC 2019 Live Playback CPU Roundup - Intel 9th Gen, Intel X-series, AMD Threadripper 2nd Gen, AMD Ryzen 2nd Gen, Apple Mac Pro, Apple iMac Pro

Live playback in Premiere Pro is one of those times where there is great benefit to having a more powerful CPU, but there is also a clear point of diminishing returns. With a modern CPU, this appears to happen right when you get up to a ~$350 processor. After that point, you are only talking about a ~10% difference in live playback performance even if you go up to the highest-end CPUs currently available. Because of this, there are really only a handful of things we want to point out:

First, if you are considering Threadripper there is little difference between the current models for live playback. There is a slight advantage to using the 2950X over the 2920X, but no reason to use the more expensive 2970WX or 2990WX. Both the 2920X and 2950X fare pretty well against the similarly priced Intel CPUs, although there is minimal difference between the 2920X and the less expensive Core i9 9900K.

On the Intel side, things are a little bit confusing due to how good the Intel 9th Gen Core i9 9900K is for live playback. Even though it is much lower cost, the i9 9900K beats or matches the Intel X-series 9800X, 9820X, and 9900X models. In fact, you would probably need to go up to the Core i9 9940X before you really noticed much of a difference for live playback. However, the higher-end Intel X-series CPUs like the 9960X top the chart, so if you are looking for the best possible performance, the Intel X-series is the way to go.

The Apple-based systems did about as well as we expected. The aged (but still current) Mac Pro 12 Core is way at the bottom of the chart and is out-performed by even the Intel Core i5 9600K. The iMac Pro 14 Core did better as long as it was in OpenCL mode, but even then, it was still slightly slower than the Intel Core i9 9900K. To be fair, the GPU used in the Mac systems is significantly less powerful than the RTX 2080 Ti we used in our other tests, but we are using the highest-end GPUs currently available in both the Mac Pro and iMac Pro.

AME Export - Benchmark Analysis

Premiere Pro CC 2019 Exporting CPU Roundup - Intel 9th Gen, Intel X-series, AMD Threadripper 2nd Gen, AMD Ryzen 2nd Gen, Apple Mac Pro, Apple iMac Pro

Moving on to the exporting results with Adobe Media Encoder, the results are slightly different than they were for Live Playback, but are overall very similar.

Again, starting with AMD Threadripper, the 16 core 2950X was actually the fastest model, beating the more expensive 2970WX and 2990WX models by a slight margin. Surprisingly, the 2920X actually dropped a bit with both the Core i9 9900K and iMac Pro 14 Core now slightly beating it.

For Intel, things are largely the same with a couple of exceptions. This time, the Core i9 9900K firmly out-performed the 8/10 core Intel X-series CPUs by 6-23%. On the less positive side, however, the i5 9600K did quite a bit worse than we expected and was beaten by even the Mac Pro 12 Core. Still, Intel continues to top the chart with the i9 9960X and i9 9980XE beating the top Threadripper models by about 13%.

On the whole, the Mac systems fared a bit better in this test. The Mac Pro 12 Core still does poorly, but the iMac Pro 14 Core was right in the middle of the pack. Of course, you are paying a hefty premium for the iMac Pro so it certainly is not a very cost-effective solution. Just as an example, the iMac Pro we are using costs $8,000 while one of our workstations with an Intel Core i9 9980XE and otherwise similar specs costs about $6,000 and is ~30% faster. Of course, the iMac Pro includes a monitor, keyboard, and mouse, but for $2,000 you can easily get a very nice monitor (or pair of monitors) and still have plenty of money left over.

Intel vs AMD vs Mac for Premiere Pro CC 2019

For Premiere Pro, there is no clear winner as far as Intel vs. AMD goes, although in many ways Intel has the edge. At the lower-end, the Ryzen 2700X and Core i7 9700K are pretty even both in terms of cost and performance. Across most of the rest of the pricing stack, however, Intel tends to have the price/performance lead with the exception of the ~$900 CPU mark where the AMD Threadripper 2950X beats similarly priced - and several more expensive - Intel CPUs.

PC vs Mac, however, is a much easier question to answer. Compared to the iMac Pro we tested, you can easily get up to 16% higher performance at a much lower cost with a PC. Against the aged (but current) Mac Pro, you are looking at a 75% increase in performance with a Core i9 9980XE which, even in our high-end workstations, would still come out to being the less expensive option.

Premiere Pro CC 2019 Benchmark CPU Roundup - Intel 9th Gen, Intel X-series, AMD Threadripper 2nd Gen, AMD Ryzen 2nd Gen, Apple Mac Pro, Apple iMac Pro

Overall, choosing the right CPU for Premiere Pro is a bit of a complicated topic and depends heavily on how much CPU power you both need and can afford. Of course, there are other factors such as Thunderbolt support (which is not officially available on AMD), but in terms of pure price and performance, our current CPU recommendations are:

  • <$500: Intel Core i7 9700K 8 Core or AMD Ryzen 7 2700X 8 Core (~33% faster than the Mac Pro)
  • ~$500: Intel Core i9 9900K 8 Core (14% faster than <$500 CPU)
  • ~$900: AMD Threadripper 2950X 16 Core (8% faster than ~$500 CPU)
  • ~$1,400: Intel Core i9 9940X 14 Core (1.5% faster than ~$900 CPU)
  • ~$1,700+: Intel Core i9 9960X 16 Core (5% faster than ~$1,400 CPU)
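Since each tier's percentage in the list above is relative to the tier below it, the gains compound. A quick sketch chaining them back to the <$500 baseline (using the article's rounded figures, so treat the outputs as ballpark numbers only):

```python
# Each tier-to-tier gain quoted above, expressed as a multiplier.
# Percentages are the article's rounded figures - approximate only.
gains = {
    "~$500 (i9 9900K)":    1.14,   # +14% over <$500
    "~$900 (TR 2950X)":    1.08,   # +8% over ~$500
    "~$1,400 (i9 9940X)":  1.015,  # +1.5% over ~$900
    "~$1,700+ (i9 9960X)": 1.05,   # +5% over ~$1,400
}

relative = 1.0
for tier, gain in gains.items():
    relative *= gain
    print(f"{tier}: {relative:.2f}x the <$500 baseline")
```

So the very top of the stack works out to roughly 1.3x the performance of the <$500 tier, which matches the article's point about diminishing returns.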

To be honest, we almost skipped the $1,400 price point entirely since a 1.5% performance gain with the i9 9940X over the TR 2950X is pretty trivial. However, there are two reasons why we decided to keep it in. First, there are some relatively common instances where the i9 9940X does better than the overall score indicates - particularly with 4K H.264 150Mbps media. Second, Threadripper doesn't fare well in other Adobe applications like After Effects and Photoshop. Because of this, you could see much better overall system performance with the i9 9940X if your workflow also includes those (or similar) applications.

If you are curious how the latest Intel and AMD processors perform in other applications, be sure to check out our recent Processor articles as we have a number of articles looking at CPU performance in Photoshop, Lightroom, After Effects, DaVinci Resolve, and many other software packages.

Tags: Premiere Pro, Intel 9th Gen, Intel X-series, AMD Threadripper 2nd Gen, AMD Ryzen 2nd Gen, Apple Mac Pro, Apple iMac Pro, 9900K, 9700K, 9600K, 9980XE, 9960X, 9940X, 9920X, 9900X, 9980X, 9800X, 2990WX, 2970WX, 2950X, 2920X, 2700X
Turing

https://helpx.adobe.com/pre...

"Bugs fixed in December 2018 (version 13.0.2) release
Performance drop in software decoding in the current build when compared to previous builds (12.11 and 12.12)"

Matt, does this affect the Live Playback scores? Can you please test the 9900K with the latest 13.0.2 release if you can only test 1 CPU?

Posted on 2018-12-14 11:19:24

I don't think so. I did some testing on 13.0.2 with a few of the CPUs before I noticed that I had updated to the new version by accident, but I didn't really notice any difference in the results. My guess is it is either a small performance improvement or it was to fix a specific bug that only occurs in limited situations.

Posted on 2018-12-17 17:34:07
Nick Lam

Ooops wrong thread. I meant to reply to the thread about HEVC.

Posted on 2019-05-04 05:20:08
HAS-Tecno

The truth is that I am surprised that Intel promotes the i9 9900K as the best gaming processor when it looks like a professional one. I think that it is currently the most balanced processor for most applications.

Posted on 2018-12-14 17:04:37

The i9 9900K is definitely a really good CPU. Its main limitation is really the fact that it can only use up to 64GB of RAM. That is really fine for most people even at 4K resolution, but once you get beyond that it is really going to be a problem. The 9900K is really the only reason we ended up adding a Z390/Intel 9th Gen option to our Premiere Pro workstations https://www.pugetsystems.co...

Posted on 2018-12-17 17:35:54
Mark Harris

Exactly! That was very weird marketing when it is the best overall CPU for the money for people that do gaming but also use Premiere (its iGPU is great here) and Photoshop, among others.

Posted on 2018-12-29 18:44:42
Jonathan Emms

Principled Technologies 2.0?
Running a Corsair H80i V2 on a 2990WX? That would have been thermal throttling significantly.
What drugs are you smoking?

Posted on 2018-12-21 00:02:40

The H80i is actually one of the coolers AMD recommends for Threadripper CPUs and is more than capable of keeping the 2990WX cooled - especially on the open-air test benches we use. https://www.amd.com/en/ther... . We've never had any sign of throttling using this cooler, even when running much harder loads than Premiere Pro, like Prime95 or Linpack.

Posted on 2018-12-21 00:07:43
Jonathan Emms

AMD does NOT recommend that cooler on their CPUs.
"The following list contains some of the thermal solutions submitted by manufacturers with specifications to support AMD Ryzen™ Threadripper Processors."
I.e. they are simply showing a list of coolers that the manufacturers claim support threadripper. AMD haven't recommended it themselves, they haven't benchmarked them, and they don't specify which cooler is suitable for which threadripper. So one cooler may be suitable for say the 2920X but not the 2990WX.

Posted on 2018-12-21 00:31:36

I can assure you, the H80i is more than capable of cooling the 2990WX. We take the quality, reliability, and performance of our systems extremely seriously, and adequate cooling is one of the first things we look at. In fact, if you can excuse the mobile screenshot, this is a temperature graph from one of our recent systems with a 2990WX and an RTX 2080 Ti in a Fractal Define XL: https://uploads.disquscdn.c...

If you want to get into overclocking (which we do not do), you may want more cooling, but for the way we use these CPUs the H80i has plenty of cooling capacity.

Posted on 2018-12-21 00:49:40
Jonathan Emms

I'm not even talking about overclocking. Just take default out the box >>turbo boost speeds<< & base clock speeds. You know what I'm talking about. Either drugs or Principled Technologies 2.0 confirmed.

Posted on 2018-12-21 00:56:22

I'm not sure how to convince you here since I already gave you a screenshot of a thermal graph showing that the CPU doesn't even pass 70c in our benchmark process. That process also includes much higher CPU loads than Premiere Pro including Prime95 (the first peak) and Linpack (the next three peaks).

At this point I'm just going to let this thread trail off. If you aren't going to trust that I am being truthful, I'm not sure what else I can do.

Posted on 2018-12-21 01:04:57
Jonathan Emms

- It may "work" but it won't be performing its best. It won't turbo above the 3.0GHz base clock for very long, if at all (likely not). Out of the box it can turbo up to 4.2GHz (though not on all cores). In any case it WILL perform better on a better cooler. That's a fact.
- The Corsair H80i is simply not good enough to handle the 2990WX.
- No other manufacturer recommends a 120mm single-rad AIO on any Threadripper, let alone the 2990WX.
- Also, the H80i doesn't have full IHS coverage.
Those are the reasons why I don't trust you. It's not a matter of being truthful (nowhere did you outright lie). Just that the testing was HIGHLY flawed from the start.

Posted on 2018-12-21 01:27:38
Jonathan Emms

Is it coincidental that there is NO OTHER manufacturer that even suggests a single 120mm rad AIO?

Posted on 2018-12-21 00:35:14
Jonathan Emms

Also define your version of thermal throttling. Not sure we're talking on the same level at this point.

Posted on 2018-12-21 00:44:12
Jonathan Emms

Why did you edit out any mention of "overclocking" in your comment?

Posted on 2018-12-21 02:34:07
Dragon

No one (including Matt) will question the need for more cooling if you OC the 2990 (it uses a LOT of power at full tilt), but Matt clearly stated that PS doesn't OC anything they sell, and the 2990 barely uses more power than the 9900K when run at standard clocks. Also, how well a radiator works is highly dependent on how fast you let the fan run (i.e. a single radiator fed by a leaf blower will happily cool the 2990 at full OC power). If you turn PBO to max and you want a quiet system, then you will need a really big radiator for your 2990. At stock, not so much, and the above thermal chart tells the story.

Posted on 2018-12-25 20:15:50
Dennis L Sørensen

The 9900K is a 95W TDP product at stock. The 2990WX is a 250W TDP product and it can draw 250W indefinitely. They are nowhere near the same. If you see a 9900K use more than 95W, it is turboing, but it is only allowed to do so for 8 seconds (during which it can theoretically go to 210W - realistically 160W). If it does more than that, it is per Intel spec overclocked and not stock (in which case we need to have a totally different talk about whether that is allowed). So if you see a test where it says it draws 160W (or more), I would argue that the testers are not doing their job correctly, as they are only showing the spike in the data they collect.
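[Editor's note] The 95W / ~210W / 8-second figures in this thread map onto Intel's configurable power limits (PL1 = sustained limit at TDP, PL2 = short-term limit, plus a turbo time window). A toy model of how such a budget behaves - the exponential-average formula, the function name, and the default parameter values here are simplifying assumptions for illustration, not Intel's published algorithm:

```python
import math

def sustained_turbo_seconds(pl1=95.0, pl2=210.0, tau=8.0):
    """Toy model: the CPU may draw PL2 watts until the exponentially
    weighted moving average of package power reaches PL1 (the TDP),
    after which it must fall back toward PL1. Returns how long full
    PL2 turbo lasts, assuming the average starts near idle (0 W)."""
    # While drawing a constant PL2, the moving average follows
    #   avg(t) = PL2 * (1 - exp(-t / tau))
    # so we solve avg(t) = PL1 for t.
    return -tau * math.log(1.0 - pl1 / pl2)

# Using the 95 W / 210 W figures from this thread and an 8 s window:
print(f"~{sustained_turbo_seconds():.1f} s of full-power turbo in this model")
```

A larger time window stretches that burst out, and a board that effectively sets the window (or PL1 itself) to "unlimited" never throttles back at all, which is exactly the board-default behavior being debated in this thread.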

Posted on 2018-12-25 23:03:06
Dragon

What you say is textbook true, but many (actually most, if not all) Z390 motherboards default to a much more aggressive position. AnandTech has several articles on the subject. I just built a new 9900K setup on a Gigabyte Designare MB with the only tweak being to set memory to the Intel extreme profile so the computer will recognize the faster memory I put in it. It has a good water cooler, so it doesn't overheat, but it runs full tilt all the way through a Cinebench run, and that is longer than 8 seconds. AnandTech has tested the 9900K at default MB settings and also at 95W TDP. There is a considerable difference in both power consumption and performance. Most testers have not made the distinction, so their performance numbers reflect the higher power numbers (around 170W for a fairly sustained period). The chip is also much smaller in contact area than the 4 chips in the 2990, so the 9900K actually requires about the same kind of cooler.

Posted on 2018-12-26 06:32:37
Dennis L Sørensen

Ya, I am aware of that. But that is overclocking - per Intel spec. Not stock. So not comparable.

Posted on 2019-01-02 23:21:14
Dragon

It may be technically overclocking, but it is what you get by default, so 95% of 9900k's will be running that way with the downside that many will be thermally throttling if they don't have enough cooling. Mine will run flat out at 4.7GHz on all cores rendering for hours and never go over 65C, but I have a good water cooler with the pump and the fans both sensing the CPU temp. Intel is a little duplicitous with their spec, because they clearly want this processor to run wide open in order to keep ahead of the 2700 (and maybe soon the 3800, which may not be so easy).

Posted on 2019-01-03 00:14:32
Dennis L Sørensen

It's overclocking. It's going out of normal spec. That is the fault of the motherboard vendors. Asus, for example, does not allow this OC without asking you in the BIOS.

If others are doing it differently, it's their fault. Whether or not others are doing it does not matter :-)

Posted on 2019-01-03 00:20:48

One thing I want to add about the auto-overclocking is that it was a major issue about a year back. In fact, it was so bad that we actually put up a post about it: https://www.pugetsystems.co... . The issue at the time was that motherboard manufacturers started making the default or auto setting for "Multicore Enhancement" (or whatever each brand calls it) enabled. This setting switches the motherboard from following the per-core Turbo frequencies outlined by Intel to simply running at the maximum Turbo frequency regardless of how many cores are being used.

Around October 2017, a bunch of hardware reviewers (ourselves included) made a lot of noise about how auto-overclocking is NOT something motherboards should do without user input, which resulted in most mobo manufacturers putting out a BIOS update to change the "auto" behavior back to disabled by default. We still deal with this occasionally, however, and have to get after them every once in a while when they randomly switch back to having "enabled" as the default. It usually just means they pull the BIOS off their download page and put up a new one, but I can totally see someone updating their BIOS and getting an auto overclock without knowing it.

It hasn't been as big of a deal with Z370/Z390 - at least not for the boards we carry - so I don't think there are too many people out there with it on without knowing it. On the other hand, we only carry and use a fraction of the boards that are available, so it is also possible that there are some brands that continue to have it enabled by default. If I had to completely make up a number based on the little information I have, I would guess that maybe 1/8 to 1/4 of people with an Intel 9th Gen CPU have that feature enabled.

Posted on 2019-01-03 00:38:27
Dragon

I don't agree that this is all on the backs of the MB manufacturers. If Intel didn't want the boards to operate this way, the MB guys would be shut down in a hot second. I don't have an issue with the boards working this way, but it would be nice if somebody in the chain was aboveboard about the necessary cooling. So far, AnandTech is the only place I have seen a decent dissertation on the subject. BTW, I ran your PS benchmark on my machine and the numbers are almost identical to yours (slightly lower in a couple of places due to the fact that I was running a GTX1080), so hard to believe the board you ran the most recent published PS benchmarks on wasn't running continuous turbo mode just like mine.

Posted on 2019-01-03 01:40:41

I may be mistaken, but this is the first time I can remember where Turbo Boost mode on a processor was supposed to be time-limited. Maybe that has been a factor all along, as this sort of tech has been around for years / multiple CPU generations now, but in the past I mostly recall Turbo being thermally limited - in other words, that if the CPU got too hot at turbo speeds, it would slow down to a lower turbo clock or even all the way to the base clock speed. Because we consider Turbo modes to be part of the processor / manufacturer spec, and want to ensure good performance for our customers, we have always aimed to provide cooling in our systems that could handle the max turbo clock speed indefinitely.

For the 9900K, then, our stance (as I understand it - I don't set these policies) is that a system should be able to cool it running at 5GHz on a single core or 4.7GHz on all cores for as long as is necessary. I am under the impression that Intel's "official" spec is that it should only run at the full turbo for something like 8 seconds before slowing back down, and I think this is what a lot of motherboard manufacturers are not including in their default settings. That is distinct from what Matt described in past generations, where some boards were defaulting to all cores running at the same speed (which in the case of a 9900K would translate to all cores at 5GHz under full load). That is definitely overclocking, as it is going beyond the manufacturer's stated clock speed with a given number of cores.

I don't think the same term is fair to apply to what we are seeing in this generation, though. It is not over-*clock*-ing, as the clock speeds don't exceed the specified frequencies. If anything, it would be something like 'overtiming': spending longer at a given clock speed than the CPU maker stated. But the whole thing about time-limited clock speed settings seems weird to me in general. If a CPU can handle a certain speed, and the temperatures can be kept in check, why should you limit how long it spends there? That would give really weird results. Any benchmarks that took less than the specified time would seem artificially high compared to what a CPU could really sustain under load.

I think it must have been done so that Intel could still claim a lower TDP, because they only "want" the CPU to exceed that for short bursts where the added heat wouldn't be enough to overwhelm coolers that are barely adequate for the stated TDP. But with the sort of cooling we use here at Puget, the 9900K is safe to sustain its default turbo clock speeds indefinitely. And if it ever did have trouble with heat for some reason, the motherboard / BIOS will still down-clock to avoid overheating and thermal shutdown... so there is no safety issue with how we do it either. Is it technically outside of Intel specs? Yes. Are the Intel specs dumb in this case? Also yes. Is it in line with what many (or even most) motherboard manufacturers are using as default settings in the BIOS? Again, yes. Is it overclocking? I would argue no, based on the description above. Does it fit with how we at Puget have approached Turbo clock speeds in the past? Most definitely yes.

Posted on 2019-01-03 16:50:51
Dragon

Your analysis is consistent with what I am seeing, but challenges Mr. Sorensen's view on the matter. I think the first mainstream processor to noticeably break the TDP barrier was the 8700k, but the 9900k is way out there. Ian Cutress did a nice job of explaining the issue here https://www.anandtech.com/s... , but that is the only real dissertation on the issue I have seen. Interestingly, Intel doesn't even make a stock cooler that will properly support this processor. It is too early to tell what the big OEM's will do, but I suspect they will set the bios to the 8 second limit in order to avoid providing an adequate heat sink. Intel clearly is trying to avoid admitting how much power this little beastie really uses, but my sense is that avoidance will cause them more pain than pleasure in the long run.

Posted on 2019-01-03 19:11:02

To be honest, TDP is a meaningless number these days. It is the expected power draw when the CPU is running at the base frequency. With Speedstep and Turbo, anything above an i3 (which doesn't support Turbo) is pretty much never going to run at the base clock speed. That goes back well before the 8700K, but what we have been seeing in the last several years is that the Turbo frequencies are getting higher and higher above the base clock speed.

I really ignore TDP at this point - just like I ignore the base clock frequency. Both are not really a factor in reality.

Posted on 2019-01-03 19:14:57
Dragon

Yes, that is the case for Intel, but AMD defines TDP differently and their processors don't go over TDP unless you actually overclock them. This gap will get even worse if AMD succeeds in getting 7nm out early this year. Intel really needs to get a move on in the process department, and that is not to suggest that the 9900k isn't a good processor. It is awesome, but it sucks juice like an x processor and may have lifetime problems in builds that don't have adequate cooling.

Posted on 2019-01-03 19:25:39
Dennis L Sørensen

I do not think that is enough. As far as I know, most come pre-OCed. MSI, ASRock, Gigabyte, and EVGA all do it.

Posted on 2019-01-03 10:01:06
Dragon

All I had to do was turn on XMP, and if you don't do that, the board won't support the specified 2666 RAM.

Posted on 2019-01-03 01:33:55
Dennis L Sørensen

XMP within spec is fine.

It's the PL settings I have a problem with. A 95W CPU vs a 170W+ CPU is an OCed advantage.

Posted on 2019-01-03 10:04:16
Dragon

No argument, but that is the way Intel is playing the game right now. This is not a MOBO issue, but clearly Intel's marketing choice to avoid admitting that the 9900k uses twice as much power as a Ryzen 2700. See the link in my above response to William George.

Posted on 2019-01-03 19:30:43

The most up to date info I can find is from Tom's Hardware - who I really trust for things like real world power draw: https://www.tomshardware.co...

From what I see, the 9900K pulls a peak of ~133W without AVX or ~233W with AVX on. That is close to the X-series with AVX on, but with it off it should be lower than the i9 9900X and probably around the same as the i9 9800X (which makes sense as they are both 8 core CPUs). I think most people getting a 9900K wouldn't be running software that both uses the AVX instruction sets and continuously stresses the CPU, so the 133W number is likely going to be the most common peak power draw users will see. Plus, if you are doing scientific computing or something else that uses AVX, you should get the X-series anyway since they support AVX-512.

For reference, the Ryzen 2700X was ~125-135W (it doesn't support AVX I believe) which is almost identical to the 9900K in a typical scenario. The 2970WX/2990WX is ~250W which is about double the 9900K in non-AVX workloads.

Posted on 2019-01-03 19:44:00
Dragon

Needless to say, the test data isn't consistent https://www.anandtech.com/s... . AnandTech got 106W for the 2700X with a 105W TDP, which is well within the margin of error. The AVX difference on the 9900K you point out is quite interesting. That calls for even more cooling if you are using AVX-intensive software, and it verifies Intel's stated PL2 numbers, a point which Ian missed in his analysis. Back to the top of the thread, this puppy needs as much cooling as a 32-core Threadripper if you want to get all it has to offer (without really overclocking).

Posted on 2019-01-03 21:10:52
Dragon

FYI I just ran prime95 at max power setting (avx on) and the onboard sensor (using HWINFO) shows 212.6W for the 9900k (the clocks held at 4.7GHz), so yes, way more power than Cinebench or other render tests. With a top line 240mm water cooler, I still had 30 degrees of headroom, but a good cooler is clearly a requirement.

Posted on 2019-01-03 21:54:38
Jonathan Emms

AVX is supported on Ryzen. I think the last AMD CPUs that did not support AVX was around 2011/2012. I know as I was looking into it recently and just got a large lot of Opteron 6174 12 core CPUs for about $7 each. I was a bit disappointed as I later found out that AVX wasn't supported and they were the last Opteron CPUs that didn't support AVX.

Posted on 2019-01-04 05:01:03

dennislsoerensen (:

Posted on 2019-01-01 21:27:27
Jonathan Emms

Interesting discussion. The reason I raised it was that Matt mentioned they were not overclocking their CPUs; I pointed out I wasn't even talking about overclocking, just out-of-the-box turbo using AMD Precision Boost, and that it wouldn't turbo for very long (likely not at all). As soon as I mentioned that, he edited any mention that they don't overclock out of his comments.

Posted on 2019-01-04 03:48:12

I haven't edited any of these posts (I think you can go into Disqus and see edits, maybe? Not sure, never had to look into it). There was a comment 14 days ago where I mentioned that we don't do overclocking though - perhaps you just missed it with the number of comments there were?

Regarding your other comment linking to William's comment on the Cinebench testing, he only saw a 2% variance in performance, which is well within the margin of error for most benchmarks. He also stated that the CPU was maintaining 3.3-3.4GHz, which is exactly right when all the cores are being used. The maximum Turbo of 4.2GHz is only if you are using a single core (or overclocking) on the 2990WX.

Posted on 2019-01-04 03:56:59
Jonathan Emms

I may have been mistaken

Posted on 2019-01-04 12:58:24
Jonathan Emms

Just writing a more detailed response now.
As you can see from the screenshots here http://disq.us/p/1ykawyj it does indeed thermal throttle, with most cores running a full 1GHz below the base clock speed.

Posted on 2019-01-04 03:50:53
Jonathan Emms

Matt and William, thought I'd just reply here, as it's where most of the discussion is occurring, rather than holding the discussion across two different posts. First of all, cheers and thanks to William for bearing with me lol. He has provided some benchmark results and screenshots attempting to address the concerns raised. The results were posted here: http://disq.us/p/1ykawyj

Ok here goes. I'm still puzzled by your results and how you still think the H80i is ok when your own posted data seems to clearly show the opposite (all but 3 of the cores running at just 2GHz)...

Apart from the fact it doesn't pass the common-sense test (every other person I spoke to about it agreed, including some tech YouTubers, so hopefully I'll be able to get the results validated by others), there still seem to be quite a few issues. If anything, it goes to show how efficient the TR is with 32 cores rather than the cooling capacity of the H80i lol.

It would have been good to do the same test on a better cooler with full IHS coverage to compare results, as requested. You can't compare thermal throttling between the H80i and another cooler without results from another cooler.

I generally wouldn't consider Cinebench a great benchmark for testing thermals; it puts quite a low thermal load on the CPU, though running it in burn-in mode at least gives a constant load. There are other benchmarks with a much higher thermal load on the CPU.

I would consider low 90s to be a good indication the cooler is running at its limits and the CPU is thermal throttling at least a bit. It's not a good look when you have to run the fan at 100% to keep the temps in the high-80s to low-90s range. Also, officially the 32c/64t all-core boost with Precision Boost is around the 3.4GHz mark (per AMD's Precision Boost presentation slides), though that is likely not a sustained boost. As you can clearly see, Task Manager only reports the highest core frequency. The base clock should be 3GHz; in your test the majority of cores are only running at 1.995 or 2.194GHz, so most of the cores are running a full 1GHz or more below the base clock speed. Seems to be a thermal throttling issue to me, or am I missing something? Again, I really need the comparison with a better cooler to tell what's going on.

Matt, you also correctly state that the maximum Turbo of 4.2GHz is a single-core speed (actually not quite, I believe it's either 2 or 4 cores, but still a valid point). However, the ALL-core boost speed is still officially 3.4GHz. Ok, here's the thing, in your benchmark:
• Only 3 cores are running at 3.2/3.3GHz, so it's not even achieving all-core boost across all cores, only 3. Also, the 3.2/3.3GHz reported is still 100-200MHz short of 3.4.
• The remaining 29 cores are running a full 1GHz below the stock speed of 3GHz, let alone the all-core boost speed.

Another thing about your benchmarks is that they all use the MSI MEG X399 Creation, as you are aware the most expensive/highest-end X399 motherboard on the market, with the best VRMs. However, all the systems you build actually seem to use the Gigabyte X399 AORUS Xtreme 10G. That's probably not going to make much of a difference, especially when not overclocking (32 cores overclocked would probably favour better VRMs). That said, it's possible different motherboards handle boost clock speeds slightly differently, although this seemed to be more of an issue on Z370 boards with Coffee Lake/Coffee Lake Refresh, where there was a lot of variance between motherboards.

Another thing, which seems to be the most requested (on almost every Ryzen/TR benchmark post), is memory frequency. Both 2nd-gen Ryzen and TR support 2933MHz, and I've seen plenty of companies providing faster memory (2933 or higher) on their professional workstations. It's really not that hard to run a few benchmarks with faster memory, even if it's just as a comparison so your audience can see the difference, with a disclaimer that you don't sell workstations with faster memory. I know your "standard" response, but you really need to think about who the benchmarks are for. As a company you've built your whole marketing/sales pitch around transparency and sharing benchmark results with everyone regardless of whether they buy a system from you or not, so much so that you're almost regarded as the de facto source of benchmark results for a wide range of software that doesn't have much benchmark coverage elsewhere. The thing is, for your audience who don't purchase directly from you but want to build their own systems (well, you don't ship overseas, so that's already most people), many will build with higher memory frequencies, as they know Ryzen/Threadripper performs better with them. This comes up in the comments on literally every benchmark you do. People want to know!!!! The instant someone else comes out with benchmarks covering as much software as you do, you will lose your reputation as the go-to place for benchmark results on this one issue alone. If you don't fill the void, I'm sure someone else will.

The largest issue holding someone else back from filling the faster-memory void is the cost of a full range of Intel and AMD HEDT CPUs. But I'm fairly certain that if you continue to bat back the standard response about memory speed, someone else will fill that void. The question really comes down to: do you release benchmarks purely to try and gain sales, or do you actually wish to provide relevant benchmarks and knowledge to everyone outside the States as well? Even then, I don't see how you could possibly lose any sales by providing additional benchmarks comparing memory speeds.

I swear there was another issue I had noticed with the methodology but can’t remember, it’s been a few weeks since I investigated it last and I took a few weeks off over Christmas. I struggle to remember stuff after taking the weekend off lol.

Posted on 2019-01-04 05:38:25
Jonathan Emms

https://uploads.disquscdn.c...

Posted on 2019-01-04 05:41:11

CoreTemp does not show clock speeds in real time. Those per-core speeds are what it recorded when I started it up, before beginning the testing in Cinebench. That program was being run exclusively for temperature monitoring; the same screenshot also has Task Manager running for clock speed monitoring (which does update in near-real-time).

Posted on 2019-01-04 17:04:28
Jonathan Emms

CoreTemp has FSB polling disabled by default... but even with it switched on I couldn't get reliable readings. It would work most of the time, but there would sometimes be a glitch: when it showed incorrect readings, the frequency, bus speed, and multiplier were inconsistent, so it seemed to have issues reading the bus speed. Anyway, I usually use HWMonitor and CPU-Z myself.
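For anyone wanting to sanity-check these tools, a rough way to cross-check per-core clock readings is to poll the OS directly rather than trust a single utility's snapshot. This is only a sketch for Linux (the thread is about Windows tools, so treat it as an illustration, with function names made up for the example):

```python
# Sketch: poll per-core clock speeds from /proc/cpuinfo (Linux only),
# taking several samples so a one-off glitch doesn't mislead you.
import re
import time

def read_core_mhz():
    """Return the current per-core frequencies in MHz as a list of floats."""
    with open("/proc/cpuinfo") as f:
        text = f.read()
    return [float(m) for m in re.findall(r"cpu MHz\s*:\s*([\d.]+)", text)]

def poll(samples=3, interval=0.5):
    """Collect multiple readings spaced `interval` seconds apart."""
    readings = []
    for _ in range(samples):
        readings.append(read_core_mhz())
        time.sleep(interval)
    return readings
```

Comparing a few samples like this against what CoreTemp or Task Manager reports would make it obvious which tool is giving stale or glitched values.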

At this point, all I have time to say is that someone with a 2990WX was able to validate my results on a better cooler, running at higher frequencies and lower temperatures. I'm yet to find a way to test a range of different coolers myself, as I have to rely on others to provide benchmark results (a few people I've contacted have been very helpful so far).

I also contacted Corsair, and they recommended against using the H80i on Threadripper.

I also still think it's no coincidence that ALL the other OEMs use better coolers.

Short bursts of work are probably fine; continual load, not so fine.

https://uploads.disquscdn.c...

Posted on 2019-01-11 02:56:10

It may be worth noting that we do not use the stock Corsair fan when employing the H80i, either in our Labs test beds or systems we build for customers. Maybe that is why we have better results? We normally employ this Cooljag Everflow 120mm PWM fan: https://www.pugetsystems.co... . Comparing it to the specs Corsair lists for the H80i, the airflow appears to be substantially higher: 110 CFM on the Everflow vs 77 CFM on the stock Corsair fan (exact model unknown). See: https://www.corsair.com/us/...

Posted on 2019-01-12 00:22:28
Jonathan Emms

ok interesting, that would help

Posted on 2019-01-12 00:40:17
Jonathan Emms

Corsair were even hesitant to suggest the H80i for a 16-core Threadripper, and suggested it would reduce reliability as the fans would have to be run harder than desired. As I mentioned previously, in my own experience running fans at max settings for prolonged periods tends to kill them or reduce their lifespan.

¯\_(ツ)_/¯

Posted on 2019-01-11 03:00:22
Jonathan Emms

Also, I find it contradictory to use system stability and reliability as a reason not to test higher RAM frequencies when, in my experience, an undersized cooler will not be reliable. I've had a number of CPU cooler fans and GPU fans die on me due to constant heavy load.

Posted on 2019-01-04 06:37:25
Jonathan Emms

One last thing, and I could be wrong, but it seems to me you avoid selling AMD systems as much as possible. AMD isn't under any of the product options apart from custom builds, and under Solutions it comes up maybe 4 or 5 times. So you really have to go digging to even find a way to order an AMD system. Not sure if this was intentional or not; maybe look into adding just one more system to the products list for an AMD Ryzen or Threadripper build.

Posted on 2019-01-04 07:38:49

We sell AMD systems where they are the best choice from a performance and feature standpoint, for a given application. That is why we do all this testing. It isn't just so that folks can benefit when building their own computers, though we know that publishing it publicly and freely will help those folks. Instead, the main focus of our testing - and indeed, our whole Labs department here at Puget - is to be able to understand how software applications perform on different hardware, and use that information to craft system recommendations for our customers and consulting staff.

Our testing has shown that Threadripper has one particular place where it excels: heavily threaded CPU loads. As such, we offer it among our V-Ray and Pix4D workstations, since those applications can benefit greatly from high CPU core counts. We also have a TR system on the Premiere Pro page, at the moment, since it works well for some specific codecs that are very CPU intensive. Applications which don't use as many cores effectively will generally be better off with Intel chips that have fewer cores but higher clocks. Threadripper can also make sense for quad GPU systems, thanks to a lot of PCI-E lanes / slots, but due to Threadripper's lower per-core clock speed and higher cost we often lean toward Xeon W there instead (which can also support quad GPU).

Posted on 2019-01-04 17:13:19
Eric Marshall

If there's actually a coverage problem, then this may not be an ideal cooler selection; however, I think there's a lot of confusion surrounding cooler requirements for this CPU. I suspect most AIO "pucks" do wind up adequately covering all 4 dies under the IHS, despite not achieving full IHS coverage. It's worth noting that less-than-ideal IHS coverage would actually be a greater concern for CPUs like the 2950X, which contend with higher thermal density concentrated on 2 dies. The 2990 has much lower thermal density concerns when operating with workloads spread out across all 4 dies at lower clock speeds and voltages.

I've tinkered with lower-quality 120mm AIOs in overclocking experiments that ran over 300W to the socket, cooling a ~3cm^2 die with a soldered IHS. I didn't start to run into thermal margin limits until ~350W+, depending on ambient and other conditions.

The 2990 spreads out ~250W across more like 8cm^2 of die with a soldered IHS. I would expect any 120mm AIO to handle this without fanfare. If you're having a hard time conceptualizing this, consider for a moment the Fury X GPU, which dissipates ~350W on average in stress/compute workloads and is held in check with a basic 120mm AIO. 350W is a walk in the park for a 120mm AIO if the source is 6cm^2. Tom's Hardware's stress testing reached a maximum of 64C during the 350W stress test of this GPU. The 2990 has even more surface area and operates at even lower power.

The big bottleneck in CPU cooling these days has nothing to do with the total power dissipation. The real challenge is thermal density, which is something the 2990, relative to other CPUs, does NOT suffer from. A 9900K operating at 150W presents a far greater challenge in terms of thermal density than a 2990 operating at 250W. As can be seen in the thermal graphs posted by Matt Bach, the H80i v2 has absolutely no problem dissipating the ~250W of a 2990 with ample thermal margin to spare. Meanwhile, if we observed the same test running on a 9900K being cooled by the same H80i v2, the thermal margin remaining would be much smaller, despite the lower overall power dissipation.
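The thermal-density comparison is easy to put into back-of-the-envelope numbers. A quick sketch, where the wattages and die areas are rough assumptions along the lines discussed above, not official specs:

```python
# Back-of-the-envelope thermal density comparison (all figures approximate).
def thermal_density(watts, die_area_cm2):
    """Heat flux at the die in watts per square centimetre."""
    return watts / die_area_cm2

# Assumed figures: ~1.8 cm^2 for a 9900K die, ~8 cm^2 total across the
# 2990WX's four dies. These are illustrative estimates only.
d_9900k = thermal_density(150, 1.8)   # ~83 W/cm^2
d_2990wx = thermal_density(250, 8.0)  # ~31 W/cm^2
```

Even with the 2990WX dissipating more total power, the heat flux the cooler's cold plate has to move per unit area is far higher on the 9900K, which is the point being made.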

----------

Keep in mind, the 2990's lower clocked EPYC cousins, are being cooled by nothing more than a block of aluminum with fins and air in server environments.

Posted on 2019-02-02 14:00:11
Sue

I have been eagerly awaiting this update for CC 2019. Thank you! I'm upgrading my system over the holiday to a 9900K, and I'm glad that, for the price, I made a good choice. I'm also glad to see you're using the Aorus boards; since I have been getting those ever since they put the DAC-UP ports on them (cheapest DAC ever), your tests really relate to my rig.

One multi-faceted question for you though: in your tests for Media Encoder/Premiere export, did you have any After Effects files in the timeline? I am noticing major bugs with Dynamic Link (on my current 6600 system and an older iMac), and I wonder if using Dynamic Link is throttling performance and if I should just render all AE output (I currently use Cineform as an intermediate) and use that exclusively in Premiere. I mean, a 2-minute video can take me 3 hours when linked to up to 5 AE files with around 3 comps per file. Any hardware or rendering setting suggestions? (OpenCL seems to be winning on the Mac by a noticeable margin.) Granted, one of the projects I'm working with was created in 2018, which is causing its own challenges as the conversions are not going well, but even the new project files are taking ridiculously long. Of course, I'd rather just use Dynamic Link so I don't have to render tons of versions (and thus keep track of them) and have a more fluid workflow, but it just doesn't seem to be productive overall to work that way. Oh, and I've tried uninstalling all non-2019 Premiere/ME/AE programs and using a fresh install of 2019, as well as having both the 2018 and 2019 versions installed, with no change.

Thanks in advance and you guys rock for putting out all of this great testing data!

Posted on 2018-12-22 19:14:10

Our testing isn't using Dynamic Link - partly because of bugs like you pointed out. I hear similar reports quite often, but if Dynamic Link isn't working for you, then it might be worth experimenting with rendering your Ae projects out to an intermediate codec. I would stick with OpenCL in Premiere, but Metal is a bit faster in Ae. Although, I'm not sure if Dynamic Link would like having them different (or if it cares at all), so you might want to stick with OpenCL on both.

Beyond that, you probably should open up a support ticket with Adobe. If it really takes that much longer to render with Dynamic Link than doing it separately, that sounds like a software bug to me.

Posted on 2019-01-02 20:16:49
Mishal Albawardi

How come you guys aren't offering the x-series 9th edition?

Posted on 2018-12-23 11:39:19

Lack of supply at the moment. We can get a handful of units of a few models, but there isn't enough availability to move our product line straight over yet. I believe we are doing "free" (same-cost) upgrades for some customers, however.

Posted on 2018-12-23 15:28:07
der Koekje

The TR1950X is around the same price or if used, a lot cheaper than the 9900k. How would it stack up in this list?

Posted on 2018-12-25 12:43:10

The 1950X ends up being pretty close to the 2920X - at least in Premiere Pro. Personally, I would stick with the 9900K. It is slightly faster for playback/exporting, but even more important is its much better single-threaded performance. For things like opening/saving projects, Warp Stabilizer, and a bunch of other random tasks, only one core is used, which is where the 9900K in particular excels.

The main argument for the 1950X is that you have an easy upgrade path if you want a more powerful CPU in the future. The 9900K is the top CPU of that line, so if you decide you need more power, that means not only a new CPU but at least a new motherboard as well.

Posted on 2018-12-25 21:36:13
Robert Baum

Thank you for the excellent review. I'm still wondering whether I should go with the 9900K or the 7940X on the new socket 2066 mainboard. Right now I'm working with the i7 7700K and an ASUS ROG STRIX 270E mainboard. It works pretty well, but working with drone footage or more clips in the timeline (Premiere Pro CC) with linked AE compositions is getting tricky. Often I need to switch the resolution to half, especially when Warp Stabilizer is activated. So the question is really whether it's worth updating from socket 1151 to 1151 v2 in order to get the 9900K. I read that beyond 4K it would be a problem. Right now I see myself far away from 6K or 8K footage. Would you still go with the 7940X? Thank you so much. Unfortunately you don't ship to Germany ,-)

Posted on 2018-12-27 17:05:15

Both AE and Warp Stabilizer will be faster with the 9900K than the 7940X, since they primarily use only one CPU core. So if those are your main pain points, I don't see any reason to go with the 7940X, since that CPU is only better for well-threaded tasks.

Posted on 2018-12-27 17:09:17
Robert Baum

Wow. What a fast answer!!! Thanks Matt. I just thought that, thinking about the future, it's not worth buying a socket 1151 v2 system again since everything "better" will be socket 2066, which means next time I'd have to get a whole new system again. Do you think a 9900K workstation with 64GB of RAM is worthwhile, or is it only a matter of time until I have to switch again? Thanks again.

Posted on 2018-12-27 17:55:24

That is always going to be a problem to be honest. No matter what, there is always some new platform or CPU just around the corner so if you are always waiting, you are never going to get an upgrade.

If you went with the 7940X, you could upgrade to a 9940X or another X-series CPU in the future, but the 9900K will still be better for AE and Warp Stabilizer than any X-series CPU on the X299 platform. So I think it is going to be at least several years before something comes out that will be significantly faster than the 9900K for you, but I really have no idea which platform that CPU will use. It depends on so many factors we have no information on, including what Adobe does feature-wise in AE.

Posted on 2018-12-27 18:00:41
Robert Baum

Okay. Then I'll downgrade to a Pentium 1 at 90MHz; then every upgrade will be significant. Probably I'll buy the new system and 3 weeks later a new CPU will come out anyway. I don't use AE that much, to be honest, but recently I used the Warp Stabilizer a lot in Premiere, all 4K material, so I'm probably going to stick with the 9900K rather than the 7940X. Thank you!!!!! I'm just wondering what people use the X-series for, then, probably for more advanced things like 3D CAD, etc. ... (sorry for noobing ,-) )

Posted on 2018-12-27 18:18:56

I might have misunderstood how important AE was for you since you mentioned AE linked comps being a problem.

The X-series is mostly useful for people who need 128GB of RAM (typically for 6K/8K), those who need faster export times, or those who want to minimize the need for proxies even when doing a lot of transitions and effects. I think the 9900K is really a nice sweet spot right now, but for some people the extra performance is worth it in Premiere Pro.

Posted on 2018-12-27 18:25:11
Hwgeek

AMD Ryzen Threadripper 2990WX performance can see up to a 2X boost with the CorePrio tool.
Can you see the effect too? Just for getting more info:
https://www.youtube.com/wat...

Posted on 2019-01-03 19:44:08

I believe that is what AMD's "Dynamic Local Mode" setting in the Ryzen Master Software is supposed to address. We tested that and didn't see much of a difference for video and photo editing applications:
https://www.pugetsystems.co...
https://www.pugetsystems.co...

Edit: Link to AMD's post about it: https://community.amd.com/c...

Posted on 2019-01-03 19:47:49

Finally found the blog post for the software that goes into it in more detail: https://bitsum.com/portfoli... . Looks like it is the same as AMD's DLM, but I guess it is just better at doing what it is supposed to do (core prioritization). We will definitely keep an eye on it, I'm very interested to see what AMD says about it.

Posted on 2019-01-03 20:37:25
Jonathan Emms

Hopefully it will be clarified soon. It's already been picked up by about 3 other well-known tech YouTubers just commenting on the video above. Could be interesting.
Might be worth being proactive and seeing if you can get better core prioritization with CorePrio?

Posted on 2019-01-04 04:42:53
Jonathan Emms

I believe it's somewhat different. I just came across this in the last day or so and haven't had a chance to look into it properly. It seems to be an issue with the Windows kernel being unable to handle Threadripper's number of cores properly. It's also an issue on EPYC, even though EPYC has memory channels on each die and so doesn't require a hop through Infinity Fabric to access memory. Many people simply assumed it was a memory latency issue, which AMD's "Dynamic Local Mode" is supposed to fix; however, that doesn't address the Windows kernel issue. It's also something to do with NUMA nodes, which I need to get my head around.
There was research in only the last 24hrs or so showing that everyone's assumption of a memory latency issue was wrong and that it's a Windows kernel issue.
Yes, older versions of CorePrio just had their own version of Dynamic Local Mode (similar to AMD's). However, that's not the actual issue. Try running the most recent version of the CorePrio tool (10th Dec) with the NUMA Dissociater option enabled; it's supposed to work around the Windows scheduler issue. Another way to test is with any benchmark that also has a Linux version: compare results, and you sometimes see a 50-100% improvement on high-core-count Threadripper/EPYC.
https://bitsum.com/portfoli...
https://wccftech.com/amd-ry...
https://www.extremetech.com...

Posted on 2019-01-04 04:32:17
Jonathan Emms

Until Microsoft releases a major patch to their kernel, it's possible that Threadripper is under-performing and most of us didn't even realise lol. It means all the benchmarks here and on YouTube might need to be re-examined.

Posted on 2019-01-04 04:34:55
Jonathan Emms

Some people also had success manually setting process affinity after the process started, I think?
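For reference, setting affinity on an already-running process can be scripted. A minimal sketch assuming a Linux box (on Windows at the time, people typically used Task Manager's "Set affinity" option or `start /affinity` instead; the helper name here is made up for the example):

```python
# Sketch: pin a running process to a subset of CPU cores (Linux only,
# via the stdlib scheduler-affinity calls).
import os

def pin_to_cores(pid, cores):
    """Restrict `pid` to the given core indices; pid 0 means this process.

    Returns the affinity set actually in effect afterwards.
    """
    os.sched_setaffinity(pid, set(cores))
    return os.sched_getaffinity(pid)

# Example: pin the current process to cores 0 and 1.
# allowed = pin_to_cores(0, [0, 1])
```

Whether this helps on a 2990WX would depend on which cores the workload lands on relative to the dies with direct memory access, so it is very much a try-and-measure experiment.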

Posted on 2019-01-04 04:38:38
Jonathan Emms

A 100% increase in some multi-threaded applications isn't insignificant lol

Posted on 2019-01-04 04:44:04
Morten Telling

What would one gain from overclocking an i9 9900-9940X on a Gigabyte Designare X299 in Premiere Pro CC? Is it worth it at all? And if so, what settings would give a stable system without extra cooling (besides e.g. a Noctua NH-U12DX i4)?

Posted on 2019-01-15 14:21:16

If you want the system to be just as stable as stock speeds, you really shouldn't be getting into overclocking since you likely won't be able to get an appreciable performance increase over the normal Turbo speeds. To be honest, that is really why we don't get into overclocking in our systems - reliability and uptime is typically way more important for our customers than a small performance bump.

Posted on 2019-01-21 17:27:09
Remo Wakeford

Hey guys, not sure if there are other people out there that this is happening to, but for some reason my little MacBook Pro with a 2.9GHz Intel Core i7, 16GB of RAM, and a Radeon Pro 560 mobile is rendering and performing better than my Windows-based Ryzen Threadripper 1950X with 64GB of RAM and a 1080 Ti card. The Mac renders twice as fast and also previews much faster than my PC. Has anybody got any ideas as to why that could happen? Just asking because it seemed appropriate here.

Posted on 2019-04-10 21:50:24
Eric Pipedream Leisy

Yep, I also noticed this... I have an iMac i5 3.3GHz with a puny 8 gigs of RAM, and I just noticed that it renders at more than double the speed of my i7 with 32 gigs of RAM, a GeForce 1080, AND an Intel HD 630 for Quick Sync...

Posted on 2019-05-20 05:54:57
Nick Lam

Will Puget Systems start to include HEVC timeline scrubbing and sequential playback performance?

Posted on 2019-05-04 05:12:13

Our latest Pr articles include H.265/HEVC testing: https://www.pugetsystems.co... . We are still working out exactly all the codecs, bitrates, and resolutions we want to test in the future, but since more and more cameras are recording in H.265 that is definitely one that we will continue to test.

Posted on 2019-05-04 05:15:54
Nick Lam

That's awesome, because I currently have an 18-core Xeon @ 2.69GHz and it can handle sequential playback and random scrubbing up to 4K @ 60p HEVC 10-bit 4:2:0 200Mbps Long GOP from the Fuji X-T3 in PP CC 2019 pretty well. So I am wondering how much more performance I would get if I were to upgrade my CPU.

And BTW, CPU = 100% and GPU = 6% in the above example. The CPU loads briefly at 100% per clip, then idles at around 6%.

Obviously measuring scrubbing performance is difficult, but any measurement helps to understand the differences between each cpu.

Posted on 2019-05-04 05:22:42