Read this article at https://www.pugetsystems.com/guides/2021

DaVinci Resolve Studio - NVIDIA GeForce RTX 3060 Ti Performance

Written on December 30, 2020 by Matt Bach

TL;DR: NVIDIA GeForce RTX 3060 Ti 8GB performance in DaVinci Resolve Studio

Overall, the new NVIDIA GeForce RTX 3060 Ti is a very solid GPU for DaVinci Resolve. At most, the RTX 3060 Ti trails the RTX 3070 by just 8%, but can be as much as 2x faster than the AMD Radeon 5700 XT. This also makes the RTX 3060 Ti faster than any AMD GPU we have tested to date, besting even the Radeon 6800 XT which costs significantly more. This makes it a terrific option for those on a tighter budget.


Over the last three months, NVIDIA has been doing a rolling launch of their new GeForce RTX 30 Series video cards, culminating in the GeForce RTX 3060 Ti 8GB which just recently hit the market. While gaming is almost always a major focus during these launches, professional applications like DaVinci Resolve are becoming increasingly important for NVIDIA's GeForce line of cards.

DaVinci Resolve (especially the Studio edition) is known in the industry for its excellent GPU-acceleration support - greatly benefiting from a powerful video card. We are especially interested in how the RTX 3060 Ti will perform since, in previous articles, NVIDIA's RTX 30 Series cards were able to out-pace everything from AMD. The RTX 3060 Ti, however, is the most affordable RTX 30 Series card from NVIDIA to date, so it will be interesting to see how it stacks up against the latest GPUs from AMD.

DaVinci Resolve Studio GPU Performance Benchmark - NVIDIA GeForce RTX 3060 Ti 8GB

If you want to see the full specs for the latest GPUs from NVIDIA and AMD, we recommend checking out the NVIDIA GeForce RTX 30 Series and AMD Radeon RX Graphics Cards product pages. But at a glance, here are what we consider to be the most important specs:

GPU              VRAM   Cores    Boost Clock   Power   MSRP
Radeon 5700 XT   8GB    2,560    1.9 GHz       225W    $399
RTX 3060 Ti      8GB    4,864    1.67 GHz      200W    $399
RTX 3070         8GB    5,888    1.70 GHz      220W    $499
Radeon 6800      16GB   3,840    2.1 GHz       250W    $579
Radeon 6800 XT   16GB   4,608    2.25 GHz      300W    $649
RTX 3080         10GB   8,704    1.71 GHz      320W    $699
RTX 3090         24GB   10,496   1.73 GHz      350W    $1,499

While specs rarely line up with real-world performance, the biggest thing to note is that the NVIDIA RTX 3060 Ti sits at the bottom end of NVIDIA's product stack and has the exact same MSRP as the AMD Radeon 5700 XT. So, from a price-to-performance standpoint, the 5700 XT is the card NVIDIA has to beat.

Note that the current supply is so poor that you will be lucky to find many of these cards anywhere near MSRP. However, we typically use the MSRP as a baseline for price in order to rule out fluctuations due to different brands, sales, and scarcity. The actual price you will need to pay for either an AMD or NVIDIA card is likely to be quite a bit different, so keep that in mind as you read this article.

DaVinci Resolve Workstations

Puget Systems offers a range of powerful and reliable systems that are tailor-made for your unique workflow.

Configure a System!

Labs Consultation Service

Our Labs team is available to provide in-depth hardware recommendations based on your workflow.

Find Out More!

Test Setup

Listed below are the specifications of the system we will be using for our testing:

*All the latest drivers, OS updates, BIOS, and firmware applied as of December 1st, 2020

To test each GPU, we will be using the fastest platform currently available for DaVinci Resolve - most notably the AMD Threadripper 3970X. Since Resolve can utilize the CPU heavily in some workloads, this should minimize the impact of the processor and allow each GPU to perform at its fullest potential.

For the testing itself, we will be using the 0.92.1 version of our PugetBench for DaVinci Resolve benchmark. If you wish to run our benchmark yourself, you can download the benchmark and compare your results to thousands of user-submitted results in our PugetBench database.

We will be using the "Extended" preset that includes both 4K and 8K media as well as specific GPU Effects and Fusion tests. Using 8K media with the RTX 3060 Ti (and other GPUs with only 8GB of VRAM) is actually not a good idea in practice due to the "out of GPU memory" errors you would likely encounter, but our benchmark does not load the Resolve UI, which keeps the VRAM load much lower and allows GPUs with just 8GB of VRAM to successfully complete the 8K tests.

Raw Benchmark Results

While we are going to go through our analysis of the testing in the next section, we always like to provide the raw results for those that want to dig into the details. If there is a specific task you tend to perform in your workflow, examining the raw results is going to be much more applicable than our more general analysis.

NVIDIA GeForce RTX 3060 Ti 8GB DaVinci Resolve Studio GPU Performance Benchmark

Overall DaVinci Resolve Performance Analysis

While many reviewers like to solely look at things like temporal noise reduction (often to an unrealistic degree) or OpenFX that heavily utilize the GPU, we first want to start off by looking at the overall performance we saw from our DaVinci Resolve benchmark with each GPU in order to show what most users would likely experience in their day-to-day work.

Looking at the Overall Extended Score, the new GeForce RTX 3060 Ti does very well, scoring within a few percent of the more expensive RTX 3070 and the previous-generation RTX 2080 Ti.

Compared to the AMD GPUs we tested, the RTX 3060 Ti is even more attractive. Not only does it beat the AMD Radeon 5700 XT by a large 30%, but it even manages to best the Radeon 6800 and 6800 XT by over 10%. The 6800 (XT) does have the advantage of twice the VRAM, although that is typically only a consideration for those working with 6K+ timelines. At that point, however, most users will likely want to use the RTX 3080 anyway due to how much faster it is than the AMD GPUs.

Bear in mind that this is looking at the Overall Extended Score, which measures performance across all of our tests - including a number that can be CPU bound. To get a better idea of the maximum performance difference between these cards, we should hone in on the "GPU Effects" portion of our benchmark, which looks at tasks like TNR and various GPU-accelerated OpenFX.

GPU Score Analysis

DaVinci Resolve Studio GPU OpenFX Noise Reduction benchmark performance NVIDIA GeForce RTX 3060 Ti 8GB

The GPU effects portion of our benchmarks looks at the performance of individual GPU-accelerated effects such as temporal noise reduction, film grain, lens blur, optical flow, face refinement, and more. In our testing, these effects easily show the largest benefit from having a powerful GPU, which means that they should give us the best look at the maximum performance gain you may encounter from each of the GPUs we are testing.

In this test, the RTX 3060 Ti continues to out-perform all of the AMD GPUs we tested. It may be only ~7% faster than the significantly more expensive Radeon 6800 XT, but the RTX 3060 Ti is just shy of 2x faster than the Radeon 5700 XT.

Compared to the higher-end NVIDIA GPUs, the RTX 3060 Ti is of course at the bottom of the stack (as it should be). However, the RTX 3070 is only about 8% faster which is about right considering it has a $100 higher MSRP. The RTX 3080 and 3090 are really where NVIDIA takes off from a performance standpoint, with those cards being 54% and 78% faster than the RTX 3060 Ti respectively.
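To put those percentages in perspective, here is a rough performance-per-dollar sketch. The relative performance figures are the GPU Effects numbers quoted in this section (normalized to the RTX 3060 Ti), and the prices are the MSRPs from the spec table above; actual benchmark scores and street prices will vary:

```python
# Rough GPU Effects performance-per-dollar sketch. Relative performance is
# normalized to the RTX 3060 Ti (1.00) using the percentages quoted in this
# article; MSRPs come from the spec table above.
gpus = {
    # name: (relative GPU Effects performance, MSRP in USD)
    "RTX 3060 Ti": (1.00, 399),
    "RTX 3070":    (1.08, 499),   # ~8% faster
    "RTX 3080":    (1.54, 699),   # 54% faster
    "RTX 3090":    (1.78, 1499),  # 78% faster
}

for name, (perf, msrp) in gpus.items():
    # Relative performance per $100 of MSRP
    print(f"{name}: {perf / msrp * 100:.3f} perf/$100")
```

By this crude metric, the RTX 3060 Ti comes out on top of the RTX 30 Series stack, which is consistent with the article's point about it being the value option for tighter budgets.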

How well does the NVIDIA GeForce RTX 3060 Ti 8GB perform in DaVinci Resolve Studio?

Overall, the new NVIDIA GeForce RTX 3060 Ti is a very solid GPU for DaVinci Resolve. At most, the RTX 3060 Ti trails the RTX 3070 by just 8%, but can be as much as 2x faster than the AMD Radeon 5700 XT. This also makes the RTX 3060 Ti faster than any AMD GPU we have tested to date, besting even the Radeon 6800 XT which costs significantly more. This makes it a terrific option for those on a tighter budget.

If you need more performance, the RTX 3080 and 3090 are of course going to be significantly faster than the RTX 3060 Ti, but for just $399, the RTX 3060 Ti holds its own very well. Especially if you do not use many OpenFX or noise reduction, you may opt to use the RTX 3060 Ti over the more powerful (and expensive) cards in order to devote more of your budget to a more powerful CPU, more RAM, or faster storage.

As always, keep in mind that these results are strictly for DaVinci Resolve. If you have performance concerns for other applications in your workflow, we highly recommend checking out our Hardware Articles (you can filter by "Video Card") for the latest information on how a range of applications perform with the latest AMD and NVIDIA GPUs, as well as with different CPUs and other hardware.

Tags: NVIDIA, NVIDIA vs AMD, AMD, Radeon RX 5700 XT, RTX 2080 Ti, RTX 3080, RTX 3090, RTX 3070, Radeon RX 6800, Radeon RX 6800 XT, DaVinci Resolve, RTX 3060 Ti

Thank you!

Posted on 2020-12-31 22:24:36

Hello, how well does the 3060 Ti fare as an upgrade, if at all, over a non-Super 2070?

And another one, in theory: if the 3060 12GB turns out to be a real product, will it fare better for Resolve with its 12GB of VRAM than, say, a 3070 with 8 gigs?


Posted on 2021-01-03 08:32:55

I'd also like to know the answer to this. The 3060 12GB has already been confirmed by several AIBs, so it may very well be a better option than an 8GB 3070.

Posted on 2021-01-04 13:33:22

A 2070 should score around 72 on the GPU test, so a 3060 Ti is about 20% faster in the best case.

As for VRAM limits, it all depends on what you are working with. Having more VRAM than you need doesn't impact performance, but not having enough can cause a ton of problems. If you are working with 4K media, most people will be just fine with 8GB of VRAM unless you are running other applications that are also going to eat into the VRAM.

If you want to work with 8K timelines, however, 8GB isn't going to cut it. Although 12GB won't either, since we typically recommend at least 20GB for people working with that kind of project in Resolve. Oddly, the difference between 8GB and 12GB often isn't going to be that big of a deal for Resolve itself due to how the VRAM usage breaks down for common timeline resolutions. It really only matters if you want to work in Resolve and another app like Premiere Pro at the same time. Or if you like to have hundreds of Chrome tabs open, or multiple 8K monitors, or something else that is going to eat into VRAM.
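The rules of thumb from this reply and the article can be distilled into a tiny lookup. These are approximate guidelines pulled from the numbers mentioned here, not hard limits, and other GPU-hungry applications will eat into the headroom:

```python
# Rough VRAM rules of thumb for DaVinci Resolve timelines, distilled from
# the recommendations in this article and thread. Approximate guidance only.
RECOMMENDED_VRAM_GB = {
    "4K": 8,    # most users are fine with 8GB at 4K
    "6K": 16,   # extra VRAM starts to matter at 6K and above
    "8K": 20,   # below ~20GB you risk "out of GPU memory" errors
}

def enough_vram(timeline: str, vram_gb: int) -> bool:
    """Return True if vram_gb meets the rough recommendation for a timeline."""
    return vram_gb >= RECOMMENDED_VRAM_GB[timeline]

print(enough_vram("4K", 8))   # an 8GB card like the RTX 3060 Ti at 4K
print(enough_vram("8K", 12))  # even 12GB falls short of the 8K recommendation
```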

As another note: I don't ever believe leaks about upcoming products. Leaks are as often wrong as they are right, and personally, a 12GB 3060 seems really, really weird to me. Might happen, but I don't know if I would try to plan around it.

Posted on 2021-01-04 18:35:26
Sound Advice

I will be upgrading from DaVinci Resolve Free version to DaVinci Resolve 17 Studio. Is there any indication if 2 x 3060ti will work well with the updated DaVinci Studio software? I know over the years that it has been indicated that 2 or 3 GPUs can work well in DaVinci. So wondering if 2 x 3060ti would be better than say 1 x 3080?
I'm using DaVinci for up to 4K with some (but not loads) open FX / fusion. So not sure if one higher spec GPU is better than 2 mid spec GPU's? Or the other way around?
Also, at the moment, as far as I understand it, the VRAM of two GPUs does not add together - so 2 x 8GB GPUs still only give a total of 8GB available VRAM?
Look forward to any input into the above. Thanks in advance!

Posted on 2021-01-06 16:55:19

Yes! If you upgrade from the free version to Studio, you will be able to take advantage of dual RTX 3060 Ti (or technically up to 8 GPUs I believe is the limit). Going to version 17 isn't going to be nearly as big of a deal as switching to the Studio version though, and I wouldn't switch to version 17 until it is out of beta.

Generally, going from one GPU to two nets you about a 50% increase in performance for things like OpenFX and noise reduction (https://www.pugetsystems.co... . However, that would put dual RTX 3060 Ti at best about equal to a single RTX 3080. But, since VRAM isn't additive (you are correct on that), you would only have 8GB with the 2x 3060 Ti versus 10GB with a single 3080. In addition, Fusion is a bit weird and oddly tends to see a small loss in performance as you add more GPUs.
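As a quick sanity check on that comparison, here is the arithmetic from the reply above, using the ~50% dual-GPU scaling figure and the ~54% GPU Effects advantage for the RTX 3080 quoted in this article (both are rough figures, not exact benchmark scores):

```python
# Back-of-the-envelope math for 2x RTX 3060 Ti vs 1x RTX 3080, using the
# rough figures quoted in this thread: ~50% scaling from a second GPU for
# OpenFX/noise reduction, and an RTX 3080 ~54% faster in GPU Effects.
single_3060ti = 1.00               # baseline: one RTX 3060 Ti
dual_3060ti = single_3060ti * 1.5  # ~50% gain from the second card
single_3080 = 1.54                 # relative GPU Effects performance

print(f"2x 3060 Ti ~ {dual_3060ti:.2f} vs 1x 3080 ~ {single_3080:.2f}")
```

The dual setup lands roughly even with, in fact slightly behind, a single 3080, while topping out at 8GB of usable VRAM versus 10GB, which is why the single-card recommendation wins here.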

All around, I would definitely do a single RTX 3080. More VRAM, better Fusion performance, more room for upgrading in the future (easier to add a second 3080 than a third/fourth 3060 Ti), and simpler configurations tend to have fewer issues over the life of a system.

Posted on 2021-01-06 18:56:19
Sound Advice

Thanks Matt for your clear answer.
Originally I had been considering going for the AMD 6800 XT GPU because it could potentially work well with a PCIe 4.0 motherboard and a 5900 series CPU using the RDNA 2 tech.
However, the Puget testing results for the 6800 and 6800 XT were disappointing. Although the 16GB of VRAM is tempting with the AMD GPUs, the results show that the 3080 would likely be better, even though it is slightly more expensive and has less VRAM. Also, there is no evidence yet that AMD has improved their drivers for these GPUs.
So it may be that I will have to wait until a 3080 comes into stock (I can hear the laughter already!) and purchase one of them.

Posted on 2021-01-07 12:11:17
Far Middle

Can somebody explain how the 3090 can do WORSE in 8K Media testing than 3 other cards with significantly less memory? I thought we needed more VRAM for 8K but these tests contradict that, don't they?

Posted on 2021-01-07 16:30:10

Answered your question on the other article you posted on, but for anyone coming across this, here is a copy of my reply:

VRAM capacity doesn't affect performance unless it is simply not enough. Kind of like how if you have tons of free space on your storage drive, that doesn't make anything faster, but if you start to run out, you start to run into problems. Normally, for 8K you definitely want a GPU with 20GB of VRAM since otherwise you will get tons of "out of memory" errors in Resolve that will bring your work to a halt, but since our benchmark doesn't actually load the UI, we can get away with testing GPUs with just 8GB. We would never recommend using an 8GB GPU for 8K work, but it lets us run a single benchmark preset that applies to everything which makes testing significantly easier.

As for performance in 8K, the difference is really small enough that it is likely just margin of error. For more straightforward processing of media (without effects applied), the architecture of the GPU generally makes a bigger impact than the raw performance of the cards. That is why everything from the 3060 Ti to the 3090 effectively ties for much of the 4K/8K tests.

Posted on 2021-01-07 19:54:06
Far Middle

Thanks for responding, sorry for posting twice. Ok, so the big takeaway here is that this margin of error is critical to understanding the relative value of these cards because we're talking a price difference of up to $700! From your testing, it appears that the RTX 3000 GPUs tested are all within the margin of error and other components in your build are likely to play a bigger factor than selecting the 3090 over a substantially cheaper 3070? (assuming you can buy either!)

Posted on 2021-01-08 21:08:31

For playback of media without effects, yes, the RTX 3000 series cards are all pretty much the same. The only place you would likely notice much of a difference is with something like RED 8K (which uses the GPU for debayering). For H.264/265, that processing is done with the NVDEC chips on the GPU, which I believe is the same on all models. I'm not 100% certain on that, but I'm pretty sure there isn't anything like a more powerful NVDEC chip on different cards.

Where a higher-end GPU really helps is with things like noise reduction and OpenFX. Really complex color grades with a good number of nodes could also show some difference, but nowhere near something like noise reduction. If you don't do any of that, then getting something like a RTX 3070, and spending the savings on a faster CPU, more RAM/storage, etc. should net you more overall performance.

Posted on 2021-01-08 21:17:01
Far Middle

Exactly what I needed to hear. The ONLY reason I don't do anything OpenFX or noise reduction is because my computer is too GPU constrained right now. I'm converting my workflow to 4K 10bit editing of videos captured with the new Sony A7SIII and wanted to make sure I'm not GPU limited. When rendering the final video, is that CPU or GPU constrained? I don't know whether to get the Ryzen 5900x or 5950x.

Posted on 2021-01-08 23:50:49

Exporting will be able to use the GPU for H.264 decoding and encoding (as long as you have the Studio version), but the CPU is still used for quite a bit of the rendering as well. Between the 5900X and 5950X, though, there is only around a 6% difference, so it isn't massive. https://www.pugetsystems.co... . Still, 6% for $250 isn't bad in the grand scheme of a multi-thousand dollar workstation, and being able to finish renders 6% faster can be more than worth it for some professionals. It isn't until you go up to Threadripper that you are really going to see a decent jump in performance over the 5900X/5950X.

One thing to watch out for is that the H.264 "flavor" you are shooting in has decoding support from the NVIDIA cards. I believe the main place you can get into trouble is if it is 4:4:4 since most hardware decoding I know of doesn't support that. In that case, the GPU won't be able to be used for decoding, so a beefier CPU is more important.

Posted on 2021-01-09 00:07:13

Hey guys, just found a little typo: on the test platform, the TR 3970X is marked as 24-core.
It should be 32-core.

Posted on 2021-01-09 19:21:04

Thanks for pointing that out, all fixed now!

Posted on 2021-01-11 18:06:33

Hi @Matt Bach,

Will you be making a 3060 12GB review as well, now that it's been launched?

Posted on 2021-02-27 18:15:07

The 3060 is a bit lower end than we usually worry about - especially for applications like Resolve that are very heavy on the GPU. The 12GB is definitely interesting, but if you are someone who needs that amount of VRAM in Resolve, you probably need a lot more performance than the RTX 3060 is going to get you! We might do a whole article around the RTX 3060, but more likely we will just include it in the next Resolve GPU article we publish.

Posted on 2021-03-01 18:32:48

I wanted to do a PugetBench 0.92.1 run with my RTX 3060 12GB yesterday, but unfortunately the test doesn't work with my Resolve 17 Studio version (17.0, not a beta).

Posted on 2021-03-05 00:11:28

We just updated the benchmark the other day to 0.92.2 which added support for the release version of DaVinci Resolve 17. No changes to the tests, just updated the disk databases as needed by the new version.


Posted on 2021-03-05 17:18:37


That is great to hear... I'll give it a new try...
In the meantime, I have edited a project with some Full HD clips: 4K timeline, super scaled 2x, some color grading, motion blur, sharpening, transitions. The 12GB of VRAM was almost fully utilized; see attached GIF.
However, working smoothly in the 4K timeline was not possible throughout, so I had to work in proxy mode.
More importantly for me: no GPU out-of-memory messages, either when working in the timeline or when rendering in H.265...

System: AMD R9 3900x, 32 GB RAM, RTX 3060 12GB. DaVinci Resolve Studio 17 final.

Posted on 2021-03-05 22:35:16
Nic Harding

Nice results. I'd be looking at the 3060 12GB, as the pricing for anything 3070 and up is just crazy (unless you're making money from this type of work).

Posted on 2021-03-12 12:04:32