Read this article at https://www.pugetsystems.com/guides/1206

Premiere Pro CC 2018: NVIDIA GeForce vs AMD Radeon Vega

Written on August 1, 2018 by Matt Bach


While GPU acceleration has become fairly common in Adobe applications, in most situations it is much more important to have a powerful CPU, plenty of RAM, and fast enough storage. Despite this, a popular request we get is to compare AMD's Radeon Vega video cards to NVIDIA's GeForce cards. In previous articles we have compared these cards in both Photoshop and Media Encoder, but now it is time to take a look at how they do in Premiere Pro. It is worth noting that while we will be focusing on Premiere Pro performance in this article, choosing a specific GPU is a much more complicated topic. Many other factors, including current pricing, reliability, power draw, noise level, and available cooler designs, also need to be considered.

If you would like to skip over our test setup and benchmark result/analysis sections, feel free to jump right to the Conclusion section.

Test Setup & Methodology

For this testing, we will be using the following hardware and software:

The CPU, RAM, and storage combination we are using is among the best you can currently get for Premiere Pro, which should give each GPU the chance to perform to the best of its ability. To compare AMD and NVIDIA, we chose a wide range of cards from both the Radeon and GeForce lines. We do want to point out that at the time we did this testing, it was difficult to source a quality AMD Radeon Vega card that was not factory overclocked. Rather than delaying our testing, we decided to go ahead and use the overclocked cards even though doing so will slightly skew the results in favor of those cards.

To thoroughly benchmark each GPU, we used a range of codecs across 4K, 6K, and 8K resolutions:

Codec | Resolution | FPS | Camera | Clip Name | Source
CinemaDNG | 4608x2592 | 24 FPS | Ursa Mini 4K | Interior Office | Blackmagic Design [Direct Download]
RED | 4096x2304 | 29.97 FPS | RED ONE MYSTERIUM | A004_C186_011278_001 | RED Sample R3D Files
RED | 6144x3077 | 23.976 FPS | WEAPON 6K | S005_L001_0220LI_001 | RED Sample R3D Files
RED | 8192x4320 | 25 FPS | WEAPON 8K S35 | B001_C096_0902AP_001 | RED Sample R3D Files
ProRes 422 HQ | 3840x2160 | 29.97 FPS | Transcoded from RED 4K clip
ProRes 4444 | 3840x2160 | 29.97 FPS | Transcoded from RED 4K clip

Rather than just timing a simple export and calling it a day, we decided to create six different timelines for each codec that represent a variety of workloads. For each of these timelines, we tested both Live Playback performance in the program monitor and exporting via AME with the "H.264 - High Quality 2160p 4K" and "DNxHR HQ UHD" (matching media FPS) presets.

  • Lumetri Color
  • Heavy Transitions
  • Heavy Effects
  • 4 Track Picture in Picture
  • 4 Track MultiCam
  • 4 Track Heavy Trimming
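As a rough sketch, the full test matrix described above (six timelines per codec, each measured for live playback plus two export presets) can be expressed as a simple harness. The `run_one` callback and the exact name strings are illustrative assumptions here, not Puget's actual automation tooling:

```python
import itertools

# Codec and timeline names taken from the article's test setup.
CODECS = ["CinemaDNG 4K", "RED 4K", "RED 6K", "RED 8K",
          "ProRes 422 HQ", "ProRes 4444"]
TIMELINES = ["Lumetri Color", "Heavy Transitions", "Heavy Effects",
             "4 Track Picture in Picture", "4 Track MultiCam",
             "4 Track Heavy Trimming"]
EXPORT_PRESETS = ["H.264 - High Quality 2160p 4K", "DNxHR HQ UHD"]

def run_tests(run_one):
    """Call run_one(codec, timeline, test) for every combination and
    collect results keyed by (codec, timeline, test)."""
    results = {}
    for codec, timeline in itertools.product(CODECS, TIMELINES):
        results[(codec, timeline, "Live Playback")] = run_one(
            codec, timeline, "Live Playback")
        for preset in EXPORT_PRESETS:
            results[(codec, timeline, preset)] = run_one(
                codec, timeline, preset)
    return results
```

With six codecs, six timelines, and three measurements each, one full pass covers 108 individual tests, which is why the author mentions later in the comments that a run takes about six hours.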

Live Playback - Raw Benchmark Results


Live Playback - Benchmark Analysis

The "Live Playback Score" shown in the chart above is a representation of the average performance we saw with each GPU for this test. In essence, a score of "80" would mean that on average that card was able to play our timelines at 80% of the tested media's FPS. A perfect score would be "100" which would mean that the system did not drop any frames even with the most difficult codecs and timelines.
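As a concrete (and simplified) illustration of that scoring scheme - assuming the score is the mean, across all timeline/codec combinations, of achieved FPS as a fraction of each clip's native FPS, capped at 100% - the calculation might look like:

```python
def playback_score(results):
    """Average playback performance as a percentage of native FPS.

    `results` is a list of (achieved_fps, native_fps) pairs, one per
    timeline/codec combination. 100 means no frames were dropped on
    any test; 80 means the card averaged 80% of the media's FPS.
    """
    ratios = [min(achieved / native, 1.0) for achieved, native in results]
    return round(100 * sum(ratios) / len(ratios), 1)
```

For example, a card that holds 24 FPS on a 29.97 FPS clip but plays a 24 FPS clip at full speed would score about 90.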

It should be pretty clear from the chart above that for Live Playback, the AMD Radeon cards are overall slower than their NVIDIA GeForce counterparts. However, to fairly compare AMD and NVIDIA, we first want to define which cards we really should be looking at. While pricing varies widely based on numerous factors like current sales or the popularity of bitcoin mining, in general you can think of the following rough price parity:

  • AMD Radeon RX 580 8GB ~ NVIDIA GeForce GTX 1060 6GB
  • AMD Radeon Vega 56 8GB ~ NVIDIA GeForce GTX 1070 Ti 8GB
  • AMD Radeon Vega 64 8GB ~ NVIDIA GeForce GTX 1080 8GB

Using these as comparison points, the NVIDIA GeForce cards were on average 20-25% faster than their AMD Radeon equivalents across all our tests. However, if you dig into the results you will notice that most of this is from our tests with RED RAW (.R3D) footage. This type of footage utilizes the video card for debayering (converting the raw sensor data to a usable video format) and with this codec, we saw on average 40-60% higher FPS with the GeForce cards.

The results with non-RED footage were much less dramatic, showing only about 7-8% higher FPS with the GeForce cards. This isn't a huge difference, but either way NVIDIA is clearly the winner for live playback.

AME Export - Raw Benchmark Results


Export with "H.264 - High Quality 2160P 4K" Preset

Export with "DNxHR HQ UHD" (matching media FPS) Preset

AME Export - Benchmark Analysis

As explained in the previous section, the "AME Export Score" shown in the chart above is a representation of the average performance of each GPU for this test. In essence, a score of "60" would mean that on average that card was able to export our timelines at 60% of the tested media's FPS.

Similar to the Live Playback tests, once again the AMD Radeon cards fall behind their NVIDIA GeForce counterparts. Comparing the video cards based on very rough price parity, the NVIDIA cards were overall on average about 13% faster than their AMD equivalents, but this rises all the way up to a 50-60% performance improvement if we only look at RED footage. Interestingly, with non-RED footage the AMD and NVIDIA cards averaged out to within 1% of each other, so for those codecs there is no clear winner between AMD and NVIDIA in this test.

We do want to point out that this is really only comparing up to the GTX 1080 since AMD does not currently have a consumer GPU that is similarly priced to the GTX 1080 Ti. So if you are looking to get the best performance possible, the GTX 1080 Ti is still going to be ~5% faster than any of the other GeForce or Radeon cards.


Conclusion

If we combine the results from our Live Playback and AME Export tests, we get the following Overall Score for each GPU:
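Assuming the Overall Score is simply an equal-weight average of the Live Playback and AME Export scores (the article does not spell out the exact weighting, so this is an illustrative guess), it could be computed as:

```python
def overall_score(live_playback_score, ame_export_score):
    """Combine the two benchmark sub-scores into an Overall Score.

    Equal weighting is an assumption here; the article does not state
    how the two sub-scores are actually combined.
    """
    return round((live_playback_score + ame_export_score) / 2, 1)
```

Under this assumption, a card scoring 80 in Live Playback and 60 in AME Export would land at an Overall Score of 70.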

NVIDIA GeForce vs AMD Radeon Vega Premiere Pro CC 2018 Benchmark
Using the same rough pricing equivalents we used earlier (RX 580 ~ GTX 1060, Vega 56 ~ GTX 1070 Ti, and Vega 64 ~ GTX 1080), we found that the NVIDIA cards were on average 16-20% faster than their Radeon equivalents. However, much of the performance gap shown in the chart is due to the fact that the AMD Radeon cards performed so poorly with RED footage. For users that don't work with RED footage, the actual difference between NVIDIA GeForce and AMD Radeon should be much smaller. It will obviously vary based on what codec you use and the type of timeline you have, but on average the NVIDIA GeForce cards were up to 8% faster with non-RED footage and around 50% faster with RED footage.

Keep in mind that this is comparing factory overclocked AMD Radeon Vega cards against stock NVIDIA GeForce cards. While this likely did not affect the results by a large amount, we estimate that stock AMD Radeon Vega cards would perform roughly 1-2% lower than what we saw in our testing.

Premiere Pro Workstations

Puget Systems offers a range of powerful and reliable systems that are tailor-made for your unique workflow.

Configure a System!

Labs Consultation Service

Our Labs team is available to provide in-depth hardware recommendations based on your workflow.

Find Out More!
Tags: Premiere Pro, Radeon, Vega, RX 580, GeForce, 1060, 1070, 1070 Ti, 1080, 1080Ti

This also reminds us that there's simply not much value in investing in the high-tier GPUs compared to the lower ones - only slight differences in performance. Whether it's because the coders at Adobe fail to leverage the full capacity of the hardware as usual, or because these cards are fundamentally designed for gamers, I'd never buy more than a GTX 1060 6GB GPU today, even for a power workstation.

Posted on 2018-08-01 23:17:41
Jakub Badełek

Another cool article ;) you'll have to redo it in a few months after NVIDIA releases new cards :D Anyway, is there a chance to throw in MP4 files with the XAVC S codec from popular Sony cameras (A7 and A6000 series)? It is a pain to work with them and I wonder how different tiers of GPUs deal with them...

Posted on 2018-08-02 07:43:44

Oh yea, we'll definitely retest when the new NVIDIA cards come out. CPU/GPU hardware launches and major software updates tend to be the two primary things that drive our testing/publishing schedule.

As for XAVC S, that is actually something I've wanted to add to our testing but just never got around to. Some of the difficulty is actually just sourcing proper footage that we can test with. It looks like the closest to XAVC S that I can easily get is either XAVC QFHD Intra Class300 or XAVC QFHD Long GOP, since I could simply transcode some of our existing footage to those in Media Encoder. I know that's not the same since it would be in an MXF container, but do you think either of those would be representative for people in your situation? Or even better, do you think there is any chance you could record a short clip and send it our way to include in our testing? A 12 sec clip at 3840x2160, 29.97 FPS would be ideal since it would match our other 4K test clips. If so, toss me an email at labs@pugetsystems.com

Posted on 2018-08-02 17:01:19
Neil Purcell

Always great information. I'm also curious to see more testing of non 'edit-friendly' codecs, specifically the Long GOP variant from the GH5 as I've read in several places that Premiere was having real issues with it, regardless of hardware? (on windows only?)

I don't know if this specific issue is now resolved or whether highly compressed LongGOP, 4k 10bit codecs will always stress the CPU a lot? (with optimised hardware elsewhere in the system?)

Posted on 2018-08-29 22:14:55

Codecs like that I've wanted to add for a while, but I've actually had trouble getting my hands on a good test clip (~15 seconds, 3840x2160, 29.97 FPS) that we can use. Right now my plan is to add XAVC QFHD Long GOP as at least one codec along those lines since that is something I can simply transcode some of our other test media to.

If you could shoot a short clip that is ~15 seconds, 3840x2160, and 29.97 FPS and toss it over to me at labs@pugetsystems , however, I would prefer to use that. Or maybe both... although our testing is starting to get really long right now (about 6 hours for each test run fully automated) and I really want to keep it from getting too much longer. I'm close to the tipping point where I wouldn't be able to start a second test run before I leave for the day and let it finish overnight, which would effectively make the testing for these articles take twice as long.

Posted on 2018-08-30 18:35:08
Samuel Neff

Matt Bach I can shoot this clip for you at those specs and duration. Will do tonight!

Posted on 2018-09-04 23:52:52
Neil Purcell

Hi Matt, Thanks for your quick reply and apologies for my late reply.

I have sent an email to you with a link to a clip. Thanks, neil

Posted on 2018-09-10 20:08:02

Hi there, there have been reports in the last couple of months that Premiere Pro now supports iGPU acceleration using Quick Sync on Intel chips, leading to significant improvements in render times. Have you had a chance to look into this?
Thank you.

Posted on 2018-08-02 11:28:18

Actually, we have, and we put up a video on YouTube looking at it. I've been meaning to get it embedded in a short article so people can find it easily, and your comment prompted me to finally do that: https://www.pugetsystems.co...

The problem with hardware encoding for H.264 is that it really isn't an apples-to-apples comparison to software-only encoding, since the quality difference is large enough that in many ways they should be considered two different codecs. If you are simply looking for the fastest encoding times (proxy generation, etc.) for H.264, hardware acceleration is a pretty neat and useful feature. If quality is a concern, however, the loss may not be worth it unless you are on extremely tight deadlines.

Posted on 2018-08-02 17:58:44

Thank you for your quick response, Matt. Have a few more questions, but will ask those in the other post you put up regarding this.

Posted on 2018-08-03 01:53:48

Wasn't even aware of a Puget Systems YouTube channel. Thanks for the hint, subbed right away.

Posted on 2018-08-06 09:40:05

Given that the preponderance of the difference is in the RED codec, it seems like you should have summary charts with and without RED, since a lot of folks who use Premiere will never get near a RED camera, and with RED out of the picture there is very little difference between any of the cards. Clearly the RED codec was written for CUDA.

Posted on 2018-08-02 17:12:52

Hey Dragon, we talked about the difference between RED and non-RED quite a bit in the text, but I actually agree that it would have been useful to separate out the two types of codecs in at least the benchmark analysis sections.

I don't think RED was really written for CUDA, however. It's just a codec and I really doubt the people who made it cared one way or the other about OpenCL or CUDA. I think it is more that either the folks at Adobe were able to optimize their code for RED debayering more for CUDA or that CUDA is simply a better fit for that type of work.

Edit: Thought about it for another minute and just decided to update the Live Playback and AME Export charts right now to have the total score across all media as well as RED and Non-RED media separated out. I'm already dividing it up in the text, so I definitely should have had the charts that way as well.

Posted on 2018-08-02 18:03:24

Thanks. That will make the decision easier for those who don't dig through the detailed charts. As I thought, the comparable models are almost dead even in export when you remove Red.

Posted on 2018-08-02 18:23:27

I am wondering if the GPU is actually used at all for exporting or encoding. It seems that the only task that fully uses the GPU is RED conversion?
Running GPU-Z in the background while testing would be very interesting.

Posted on 2018-09-29 21:56:53

It definitely is being used (you can see it most clearly in the PiP tests), but except for RED footage Premiere Pro is really a much more CPU-limited application. We've done GPU/CPU load logging in the past - and still do if something looks weird - but honestly it often does more harm than good. Especially in something like Premiere Pro where it leans more on the CPU, the GPU is probably only at ~20-40% load even with RED footage. We have had people look at that and think that it means that a more powerful GPU won't help and that they can use a much lower-end GPU without giving up any performance, but load levels don't work like that. So we found it was better to give straight performance numbers rather than delving into the (often confusing) realm of individual component load.

Posted on 2018-10-01 16:04:27

Do you think that Premiere will ever be updated to take more advantage of the Radeon line, similar to the way FCPX does? Even DaVinci runs much faster in playback. Or is that more of a core thing that won't be able to be addressed? I'm a Premiere user myself, and this has always been a core issue and one of the main reasons to build a PC. Just curious on your thoughts!

Posted on 2018-10-08 17:37:20

To be honest, my guess is no. NVIDIA CUDA seems to be favored by developers in general over OpenCL, and with NVIDIA's push to add Tensor and RT cores to their consumer cards I think we are going to see a further shift towards NVIDIA unless AMD comes up with something amazing. Really, in Premiere Pro and most other Adobe applications, we are hitting a GPU performance wall where there really isn't much difference between a $400 GPU and a $3,000 one. To me, the fastest and easiest way for Adobe to break that performance barrier isn't to improve either OpenCL (AMD) or CUDA (NVIDIA) performance, it is going to be looking towards Tensor/RT cores. However, the biggest factor in my mind is going to be what Apple decides to do. If they decide to hop on the Tensor/RT train then I 100% believe that is the way Adobe will go. If Apple doesn't, we might get some weird development branches where they focus on Metal for Mac systems, and CUDA/Tensor/RT on the PC side. Really no way to know unless you can get some developers at Adobe to spill the beans.

Something else to keep in mind for DaVinci Resolve is that while Vega is good, it is really more of a mid-range card. It does outperform the GTX 1080, but if you need more performance your only option is to put in multiple GPUs. With NVIDIA, however, you can go up to the GTX 1080 Ti or, even better, one of the new RTX cards (https://www.pugetsystems.co... ). So while I do agree that DaVinci Resolve is faster with an AMD Radeon Vega card, that is only true for a mid-range Resolve workstation. Once you need something more, NVIDIA is a much better choice.

Posted on 2018-10-08 17:53:21

Really interesting to get your take on this. I'll do my best to infiltrate Adobe and report back to you. Just putting the ski mask and black gloves in my Amazon cart now! :D hehe. I currently have a Threadripper desktop with a 1080 Ti. I do a lot of editing and After Effects and some 3D as well. What's been driving me nuts lately, though, is what laptop to use when I travel. My XPS 15 isn't cutting it. I've got one from 2 years ago with a woefully inadequate 960M in it. Gotta upgrade. But what to do? Razer with a GPU enclosure, or Mac with a non-upgradeable enclosure, or something completely different? Give me a laptop with an 8-core i9 and an amazing GPU that's only the width of my pinkie :). Would love to hear your take, oh wise computer sage!

Posted on 2018-10-08 18:56:48

Laptops are... hard. We got out of selling them years ago for a number of reasons, but one of them was that it is really, really difficult to get good performance without the laptop being absolutely massive. And they still had thermal issues even when they were massive like that!

I think the future for laptops is going to be external GPUs, but it may be a few more years before it is fully fleshed out. So my main advice is to get a laptop with a Thunderbolt 3 port and focus more on the CPU than on the GPU. My general advice on laptops is to stick to the big brands, however, since they tend to have more reliable supply/repair lines. So Dell, HP, Apple mostly. Razer seems to be pretty good as well from what I've heard and they've been pushing the eGPU for a while now.

Posted on 2018-10-08 19:08:06

Thanks Matt! Noted. And I appreciate the fast response. You guys are awesome.

Posted on 2018-10-08 19:14:36

Intel is soon going to market its dedicated GPU, so developers who want only one code path for any GPU and CPU (OpenCL also takes advantage of the CPU) may now start to favor OpenCL over CUDA.

Posted on 2019-01-07 10:17:40

To be honest, I'm not sure what to make of Intel making a GPU. They (kind of) tried that with Xeon Phi for compute, but it really didn't take off. It will definitely be interesting, however, and I'm curious to see what they come up with.

That being said, I'm not sure how much that will prompt developers to spend more time on OpenCL over CUDA. We already have that situation right now with AMD vs NVIDIA, and I can't see Intel GPUs gaining enough market share to make that much of a difference - at least not for several years. When I've brought up CUDA vs OpenCL with developers at conferences, it was pretty universal that they vastly preferred working with CUDA. I'm not really a programmer so I can't get into exactly why, but my impression is that it is simply better documented and easier to work with.

Who knows though, I could be completely off the mark here. It would definitely be good to get some stronger competition for NVIDIA - especially in the high-end space like the Titan line.

Posted on 2019-01-07 17:56:28