
DaVinci Resolve Studio - NVIDIA GeForce RTX 3080 Performance

Written on September 17, 2020 by Matt Bach

TL;DR: NVIDIA GeForce RTX 3080 performance in DaVinci Resolve Studio

While performance when editing and doing basic grades may only be 10-25% faster with the new RTX 3080 compared to the previous generation RTX 20-series cards, when doing noise reduction or using OpenFX, the performance gap widens from a minimum of 30% faster compared to a Titan RTX to almost 2x faster compared to an RTX 2060 SUPER!

This also puts a single RTX 3080 10GB within spitting distance of a dual RTX 2080 Ti 11GB setup, which, considering that is comparing a $699 GPU to a $2,400 pair of cards, is extremely impressive.

Introduction

DaVinci Resolve is known in the industry for having excellent GPU-acceleration support - greatly benefiting from a powerful video card (or several of them). This makes it very interesting with regard to NVIDIA's recently announced GeForce RTX 30 Series GPUs, since NVIDIA has been pushing their GeForce line of cards more and more into the professional content creation space with their "Studio" program. Gaming may still be front and center during these launches, but we have high expectations for what these new GPUs will be able to achieve in a professional application like DaVinci Resolve.

[Image: DaVinci Resolve Studio GPU Performance Benchmark - NVIDIA GeForce RTX 3080 10GB]

If you want to see the full specs for the new GeForce RTX 3070, 3080, and 3090 cards, we recommend checking out NVIDIA's page for the new 30 series cards. But at a glance, here are what we consider to be the most important specs:

Card          VRAM   CUDA Cores   Boost Clock   Power   MSRP
RTX 2070S     8GB    2,560        1.77 GHz      215W    $499
RTX 3070      8GB    5,888        1.70 GHz      220W    $499
RTX 2080 Ti   11GB   4,352        1.55 GHz      250W    $1,199
RTX 3080      10GB   8,704        1.71 GHz      320W    $699
Titan RTX     24GB   4,608        1.77 GHz      280W    $2,499
RTX 3090      24GB   10,496       1.73 GHz      350W    $1,499

While specs rarely line up with real-world performance, it is a great sign that NVIDIA has doubled the number of CUDA cores relative to the comparable RTX 20-series cards with only a small drop in boost clock. At the same time, the RTX 3080 and 3090 are also $500-1,000 less expensive than the previous generation, depending on which models you are comparing them to.
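
To put those core counts and clocks in perspective, here is a quick back-of-the-envelope sketch of theoretical FP32 throughput (CUDA cores x 2 FLOPs per clock x boost clock). Keep in mind that peak TFLOPS is only a rough proxy - real-world Resolve performance rarely scales 1:1 with it:

```python
# Theoretical FP32 throughput from the spec table above.
# TFLOPS = CUDA cores * 2 FLOPs/clock (FMA) * boost clock (GHz) / 1000
cards = {
    # name: (cuda_cores, boost_ghz, msrp_usd)
    "RTX 2070S":   (2560,  1.77, 499),
    "RTX 3070":    (5888,  1.70, 499),
    "RTX 2080 Ti": (4352,  1.55, 1199),
    "RTX 3080":    (8704,  1.71, 699),
    "Titan RTX":   (4608,  1.77, 2499),
    "RTX 3090":    (10496, 1.73, 1499),
}

for name, (cores, ghz, msrp) in cards.items():
    tflops = cores * 2 * ghz / 1000
    print(f"{name:<12} {tflops:5.1f} TFLOPS  {tflops * 1000 / msrp:5.1f} GFLOPS/$")
```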

While it is a bit odd that the RTX 3080 has less VRAM than the 2080 Ti, all three of these new cards should be capable of working with 4K timelines in DaVinci Resolve. If you want to work with 8K and above media, however, only the RTX 3090 (with its 24GB of VRAM) meets our current recommendation. Using a GPU with less than 20GB of VRAM with 8K and larger media is likely to result in constant "out of GPU memory" errors that are not conducive to a smooth workflow.
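
As a rough illustration of where that VRAM goes, here is a sketch of uncompressed frame sizes, assuming 4 channels at 32-bit float (in line with Resolve's internal 32-bit float processing). Actual usage is higher once node caches, the Resolve UI, and other applications are factored in:

```python
# Rough per-frame VRAM estimate: width * height * channels * bytes per channel.
def frame_gib(width, height, channels=4, bytes_per_channel=4):
    return width * height * channels * bytes_per_channel / 1024**3

for label, (w, h) in {"UHD 4K": (3840, 2160), "8K": (7680, 4320)}.items():
    per_frame = frame_gib(w, h)
    print(f"{label}: {per_frame:.2f} GiB/frame -> "
          f"~{int(10 / per_frame)} frames fit in the RTX 3080's 10GB")
```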

Since only the RTX 3080 has launched at this point (the 3090 is set to launch on September 24th, and the 3070 sometime in October), we will unfortunately only be able to examine the 3080 at this time. However, we are very interested in how the RTX 3070 and 3090 will perform, and once we are able to test those cards, we will post follow-up articles with the results.

DaVinci Resolve Workstations

Puget Systems offers a range of powerful and reliable systems that are tailor-made for your unique workflow.

Configure a System!

Labs Consultation Service

Our Labs team is available to provide in-depth hardware recommendations based on your workflow.

Find Out More!

Test Setup

Listed below are the specifications of the system we will be using for our testing:

*All the latest drivers, OS updates, BIOS, and firmware applied as of September 15th, 2020

To test each GPU, we will be using the fastest platform currently available for DaVinci Resolve - most notably the AMD Threadripper 3970X. Since Resolve is able to utilize the CPU so heavily, this should minimize the CPU as a bottleneck and allow each GPU to perform at its fullest potential.

One thing we need to specifically point out is that we are using an unreleased build of DaVinci Resolve due to a bug that prevents the current builds (including the 16.3 beta) from properly processing R3D media with the RTX 3080. This is a known issue, and the fix is implemented in this new build.

For the testing itself, we will be using another unreleased bit of software - version 0.92 of our PugetBench for DaVinci Resolve benchmark. This new version is very close to a public release, but since its tests are much better than those in the version you can currently download, we opted to go ahead and use it for this comparison.

We will be using the "Extended" preset, which includes both 4K and 8K media as well as specific effects and Fusion tests. Using 8K media with most of the cards we are testing is actually not a good idea due to the "out of GPU memory" errors you would likely encounter, but our benchmark does not load the Resolve UI, which means that the VRAM load is much lower - allowing GPUs with just 8GB of VRAM to successfully complete the 8K tests.

Raw Benchmark Results

While we are going to go through our analysis of the testing in the next section, we always like to provide the raw results for those that want to dig into the details. If there is a specific codec or export setting you tend to use in your workflow, examining the raw results for that task is going to be much more applicable than our more general analysis.

[Charts: NVIDIA GeForce RTX 3080 raw benchmark results in DaVinci Resolve Studio]

Overall DaVinci Resolve Studio Performance Analysis

While many reviewers like to look solely at things like temporal noise reduction (often to an unrealistic degree) or OpenFX that heavily utilize the GPU, we want to start by looking at the overall performance we saw from our DaVinci Resolve benchmark with each GPU, in order to show what most users would likely experience in their day-to-day work.

Looking at the Overall Extended Score, the new RTX 3080 does very well, beating the similarly priced RTX 2080 SUPER by about 20%. It also does very well compared to the more expensive RTX 2080 Ti and Titan RTX, outperforming those cards by 14% and 11% respectively. It even manages to trade blows with a dual RTX 2080 Ti setup, only losing by a few percent.

If you are currently using a lower-end RTX card, an AMD Radeon GPU, or an older GTX 1080 Ti, the performance gains are significant. Depending on the exact card, you are looking at anywhere from a 20 to 50% increase in performance with the new RTX 3080.
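
For those comparing their own cards against the raw results above, the "X% faster" figures we use throughout this analysis are simple score ratios. Here is a quick sketch, using placeholder scores rather than our measured numbers:

```python
# "X% faster" = (new score / old score - 1) * 100
def pct_faster(score_new, score_old):
    return (score_new / score_old - 1) * 100

# Placeholder Overall Extended Scores for illustration only:
rtx_3080, rtx_2080_super = 1200, 1000
print(f"RTX 3080 is {pct_faster(rtx_3080, rtx_2080_super):.0f}% faster")  # -> 20% faster
```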

However, this is looking at all our tests - including the Fusion portion, which is almost entirely CPU limited. To get a better idea of the maximum performance difference between these cards, we should home in on the "GPU Effects" portion of our benchmark, which looks at tasks like TNR and various GPU-accelerated OpenFX.

GPU Score Analysis

[Chart: NVIDIA GeForce RTX 3080 10GB DaVinci Resolve Studio GPU Effects benchmark performance]

The GPU effects portion of our benchmarks looks at the performance of individual GPU-accelerated effects such as temporal noise reduction, film grain, lens blur, optical flow, face refinement, and more. In our testing, these effects easily show the largest benefit from having a powerful GPU, which means that they should give us the best look at the maximum performance gain you may encounter from each of the GPUs we are testing.

In this test, the RTX 3080 puts up some very impressive numbers, beating the similarly priced RTX 2080 SUPER by a whopping 62%. This is a massive increase in performance that, in most cases, makes the older RTX 20-series cards obsolete.

Compared to the more expensive RTX 2080 Ti and Titan RTX, the RTX 3080 also handily beats those cards by around 30%. A dual RTX 2080 Ti setup is still faster than a single RTX 3080, but as it is only ~15% faster, this is a good indicator that the upcoming RTX 3090 may be able to match or beat a dual GPU setup from the previous generation. Be sure to check back after the RTX 3090 launches on 9/24/2020 to see exactly how that card performs!

Compared to the lower-end RTX 20-series and GTX 1080 Ti cards, you are looking at up to almost a doubling of performance once you get down to the RTX 2060 SUPER or GTX 1080 Ti. And if you are considering moving from an AMD Radeon GPU to the RTX 3080, the performance gain is even more significant - up to 3x faster!

How well does the NVIDIA GeForce RTX 3080 perform in DaVinci Resolve Studio?

Overall, the new RTX 3080 does extremely well in DaVinci Resolve. Performance while editing and doing basic grades may only be 10-25% faster than the previous generation RTX 20-series cards, but when doing noise reduction or using OpenFX, the performance gap widens from a minimum of 30% faster compared to a Titan RTX to almost 2x faster compared to an RTX 2060 SUPER!

This puts a single RTX 3080 10GB within spitting distance of a dual RTX 2080 Ti 11GB setup, which, considering that is comparing a $699 GPU to a $2,400 pair of cards, is very exciting. It also makes us excited to see what the RTX 3090 24GB can do after it launches on September 24th. If that card can match or beat a dual GPU setup from the previous generation, it could open the door to significantly faster, more capable, and cheaper workstations for DaVinci Resolve.

One thing we want to point out is that the testing in this article was all done with a single RTX 3080 video card. We are very interested in doing multi-GPU testing with these new RTX 30-series cards, since the new cooling designs make us concerned about the viability of multi-GPU configurations. Unlike the previous generation, these new cards (including all the third-party models we have seen so far) do not vent a significant portion of their heat directly outside the chassis, which may mean that using more than two GPUs will not be feasible without a complex and expensive liquid cooling setup.

Given what we have seen from the RTX 3080, we doubt that you will be able to get a faster setup using the older cards, even though you could stack four of them in a system. However, it is possible that the maximum performance available from a single workstation will not actually increase all that much, even though these new cards are so much more powerful on an individual basis. Once the RTX 3090 has launched and we get our hands on multiple cards to test, expect to see a number of articles revolving around multi-GPU configurations.

As always, keep in mind that these results are strictly for DaVinci Resolve. If you have performance concerns for other applications in your workflow, we highly recommend checking out our Hardware Articles (you can filter by "Video Card") for the latest information on how a range of applications perform with the new RTX 3080 GPU, as well as with different CPUs and other hardware.

Tags: Hardware Acceleration, hardware encoding, NVIDIA, NVIDIA vs AMD, AMD, Vega 64, Radeon RX 5700 XT, RTX 2060 SUPER, RTX 2070 SUPER, RTX 2080 SUPER, RTX 2080 Ti, Titan RTX, RTX 3080, DaVinci Resolve
Håkon Broder Lund

Solid review! It will be interesting once you get the 3090 and two 3080s, to see if the 24GB is really needed. I know scaling is not linear, but will a dual 3080 setup be faster than a single 3090? The price will be about the same, but the system heat will be much higher.

I see that on GPU effects you have optical flow set to Enhanced Better. Why not Speed Warp? It is much more processor heavy and would likely show more difference between the cards.

Posted on 2020-09-17 13:33:59

24GB of VRAM is pretty necessary for editing 8K footage natively - or rather, around 16GB is needed, and 24GB is the only VRAM amount on modern NVIDIA cards that fits that requirement.

As for dual RTX 3080, it is usually about a 40-50% performance gain going from one to two GPUs, so we'll have to see how two of them compare to a single RTX 3090. We might not be able to do multi-GPU testing for a bit depending on when we are able to get multiple cards in for testing.
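
As a rough sketch of that projection (the score below is a placeholder, not a measured result):

```python
# Project a dual-GPU score from the typical 40-50% second-card gain in Resolve.
single_3080 = 100.0  # placeholder GPU Effects score for one RTX 3080

for gain in (0.40, 0.50):
    dual = single_3080 * (1 + gain)
    print(f"Two RTX 3080s at +{gain:.0%} scaling -> ~{dual:.0f} "
          f"({dual / single_3080:.2f}x one card)")
```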

Speed Warp is actually interesting. It was originally a part of our effects test, but I couldn't get it to run with the 3080 without Resolve locking up. I've seen other people include it in their reviews, so I'm not sure why we had so much trouble with it. For the RTX 3090 launch, I might try to get it working and add it back into the results. Pre-launch testing like this with early drivers, an unreleased version of Resolve, and very new hardware always has a couple of little bugs like this, so I'm not too worried about it, but it is definitely something I would like to add back.

Posted on 2020-09-17 17:24:39
Håkon Broder Lund

Regarding the VRAM: does timeline resolution play into the VRAM usage, or mostly footage resolution? 8K footage on an 8K timeline, compared to 8K footage on a 4K/2K timeline?

Interesting times for sure. Strange with the Speed warp. As you say, likely software bugs that will be sorted shortly. Looking forward to the coming reviews!

Posted on 2020-09-17 18:06:19

It is mostly the timeline resolution, but the footage resolution plays a small part from what I can tell. So using 8K media on a 4K timeline shouldn't require nearly as much VRAM as 8K media on an 8K timeline. The 10GB of the RTX 3080 should be able to handle a 4K timeline regardless of the media resolution, but you will have a bad time if you ever decide to edit on an 8K timeline.

Posted on 2020-09-17 18:32:47
Jay Smith

I run with 12GB of VRAM and on 4K timelines/6K footage I often run out at some point and need to restart.

Posted on 2020-09-17 19:20:54
Misha Engel

The 6GB of VRAM on the RX 5600 XT won't run out of memory with 8K timelines/8K footage.
It's the way NVIDIA does the memory handling that gives the "out of memory".

Good to see that the newer Radeon VII (released February 7th, 2019) was left out of the equation in favor of the RX Vega 64 (released August 14th, 2017) and the 1080 Ti (released March 10th, 2017).

https://www.pugetsystems.com/labs/articles/DaVinci-Resolve-15-AMD-Radeon-VII-16GB-Performance-1382/

Posted on 2020-09-17 21:35:50

The Radeon VII is pretty much EOL at this point, which is why we don't include it in our testing anymore. It also had some pretty bad stability issues that we could never get resolved before it disappeared. We typically stick with what you can actually purchase, whatever cards are being replaced, and in some cases we include a card from a few generations back to help those who are considering upgrading a somewhat older system.

I'm not sure how much memory management between AMD and NVIDIA really comes into play. Any time we have a test that runs out of memory with an 8GB NVIDIA card, it does it on an AMD 8GB card as well. It might make a difference if you are right on the edge, but you always want a bit more VRAM than you actually need to give yourself some breathing room - just like you do with RAM or storage.

Posted on 2020-09-17 21:50:56
Misha Engel

As far as I know, both the Vega 64 and the 1080 Ti are also EOL.
Turn HBCC on and any modern AMD card won't crash DR due to the "out of memory" error, as long as you have enough DRAM.
The drawback is that it becomes very slow due to the relatively low speed of the PCIe bus, but it won't crash.

Posted on 2020-09-18 13:07:07

The Radeon Vega 64 is still a current card, actually. The 5700 XT (and other cards of that series) weren't a full replacement for the Vega line, since they were focused more on gaming while the Vega cards are often better for compute tasks. The 1080 Ti definitely is EOL, but I explained why we included it in my previous comment.

Resolve definitely can overflow to system RAM - it does that with pretty much any GPU. But, the "Out of memory" message still comes up and gets in the way. In evaluating whether we wanted to include 12K BRAW testing yet or not, we ran it on some of the 8GB cards, and it did complete (with the memory message constantly coming up), but like you mentioned, it took forever.

Posted on 2020-09-18 17:20:57
Jason Niu

If I am working with EOS R5 8K RAW files and editing them on a 4K timeline, will that be okay?

Posted on 2020-09-17 21:33:28

It is really hard to say - it is going to come down to what kind of effects you use, the number of nodes, the number of media streams, etc. If you are a customer of ours, it would probably be best to send us some sample footage and even a copy of a test project so we can test and make sure. It is close enough to the edge that I can't really say one way or the other in a more general sense.

In general, however, we would recommend a Titan RTX (or waiting for the 3090 next week) if you are working with 8K media, regardless of the timeline's resolution.

Posted on 2020-09-17 21:47:39
Ryan Mills

Can't be done - the 3080 does not support NVLink; only the 3090 does.

Posted on 2020-09-18 06:10:27

You don't need NVLink for Resolve. In fact, since you have to enable SLI in order to get NVLink to work in Windows, it is actually worse for performance. The reason is that when you have two cards in SLI, Resolve is only able to see one of the two cards. So turning on SLI/NVLink effectively disables the second GPU in Resolve.

That may change in the future if Blackmagic implements better SLI/NVLink support, but given that NVIDIA is dropping it on most of their cards, it is hard for me to believe that they will invest the time to do so.

Posted on 2020-09-18 17:17:52
Ampere

https://www.nvidia.com/en-u...
https://forum.blackmagicdes...
https://twitter.com/Blackma...

Nvidia Studio Driver 456.38 for Ampere GPUs & DaVinci Resolve 16.2.7.

Posted on 2020-09-17 15:09:56

Yeah, I believe Matt was using that Resolve update (a beta of it, maybe?) in order to get proper support for the new RTX 3080. On the driver side, NVIDIA had provided us with a preview that was labeled as version 456.34.

Posted on 2020-09-17 16:01:35
phanter II

Is there any info on the NVDEC Gen 5 decoder? Does it finally support H.265 4:2:2 decoding?

Posted on 2020-09-18 12:25:34

It doesn't look like NVIDIA's page showing NVENC and NVDEC support has been updated with the 30 Series cards yet:

https://developer.nvidia.co...

Posted on 2020-09-18 15:37:26
MisterWU

Nice review

Posted on 2020-09-18 14:37:01
Alex S

Awesome in-depth, comprehensive review as always! I'm interested in your thoughts regarding the possible VRAM limitations on the 3080 while editing 6K RAW footage on a 6K timeline in Resolve. Based on my experience, the 2080 Ti's 11GB of GDDR6 VRAM holds up without many "Out of memory" errors. Would this be a worthy upgrade over the 2080 Ti? Or would you recommend a 3090 for the 24GB of GDDR6X VRAM?

Posted on 2020-09-19 05:45:17

10GB (or 11GB for that matter) is cutting it pretty close for 6K editing. If you don't use many OpenFX or noise reduction, it should be OK, but I don't know if I would recommend it. What I would do is watch Task Manager while you are working and see how much VRAM is being used on your 2080 Ti. If it is 8GB or less, then the 10GB on the 3080 should be OK. Any more than that, however, and I would see if you can go up to the 3090. We'll have performance data for that card on September 24th.
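
If you'd rather log it than eyeball Task Manager, here is a minimal sketch that polls nvidia-smi (which ships with the NVIDIA driver) and tracks your peak usage - the query flags are standard nvidia-smi options, but the script itself is just an illustration:

```python
# Poll nvidia-smi once a second and track peak VRAM usage while you work.
import subprocess
import time

def vram_mib():
    """Return a (used, total) MiB tuple for the first GPU."""
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=memory.used,memory.total",
         "--format=csv,noheader,nounits"],
        text=True,
    )
    used, total = out.strip().splitlines()[0].split(",")
    return int(used), int(total)

peak = 0
while True:
    used, total = vram_mib()
    peak = max(peak, used)
    print(f"VRAM: {used}/{total} MiB (peak {peak} MiB)", end="\r", flush=True)
    time.sleep(1)
```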

Keep in mind that even if 10GB would be OK for you right now, as more apps (including internet browsers) start to leverage the GPU, if Resolve starts to use the GPU even more, or if you ever want to upgrade the number or resolution of your displays, you may suddenly find that you start to get out of memory errors. Overall, I don't think it is worth it to cut it that close, so I would see what the 3090 has to offer later this week.

Posted on 2020-09-21 15:57:28
Media Environment

Nice... And what will NVIDIA do for all the clients who bought a 2070, 2080, or 2080 Ti card at the same price in the last 3 months?

Where is the program to exchange one card for another for a few bucks? It's the minimum they should do for their clients.

Posted on 2020-09-19 11:29:42
ben

Hello!
Will the difference of "losing" 1GB of VRAM going from the 2080 Ti to the 3080 be noticeable?
Or do the many advantages of the 3080 outweigh this disadvantage?

Posted on 2020-09-20 12:11:41

Usually, if 1GB of VRAM is going to impact your workflow, you were already cutting it too close. 11GB vs 10GB shouldn't matter much for 4K editing, but there could be some people just sneaking by with 11GB on 6K timelines where 10GB would throw out-of-memory errors. Those people with 11GB would probably start seeing errors relatively soon as well, since more and more apps are leveraging the GPU, which means increased VRAM usage overall.

Posted on 2020-09-21 15:59:47
David Varela

Will there be a significant drop in performance if I use the RTX 3080 in a B450 board (PCIe 3.0) rather than in a B550/X570 (PCIe 4.0)? Thank you Matt!

Posted on 2020-11-22 15:12:09

Actually, we should have an article coming in the next few days looking at PCI Express generational performance - so keep an eye out for that and I think it will answer your question :-)

Posted on 2020-11-23 16:33:18