Read this article at https://www.pugetsystems.com/guides/1238

DaVinci Resolve 15: NVIDIA GeForce RTX 2080 & 2080 Ti Performance

Written on September 28, 2018 by Matt Bach


In many ways, DaVinci Resolve is the poster child for how video editing applications should leverage the power of the GPU. While other core components like the CPU are certainly important, if you want the best FPS possible while color grading, a significant portion of your workstation budget will likely be spent on either a single powerful GPU or multiple GPUs.

NVIDIA's new RTX series cards are especially interesting since not only should they perform extremely well from the get-go, but DaVinci Resolve is also listed as already implementing (or planning to implement) the new features found on the RTX platform by using "Turing Tensor Cores in Resolve 15 to accelerate AI inferencing for graphics enhancement". This is a fairly generic bit of text, and it really only makes sense if you understand the two major new features found in these RTX cards: Tensor cores and RT cores.

What are Tensor Cores?

While already available on the more expensive Titan V GPU, the RTX line introduces tensor cores at a more reasonable price point. These tensor cores operate alongside the normal CUDA cores that traditionally do the heavy lifting, but are designed specifically for machine learning inference (running already created and trained machine learning models). Blackmagic has already announced that they will be using these cores, but exactly how they do so and where exactly it will improve performance is still to be seen.

What are RT Cores?

RT cores are brand new in this generation of graphics cards, and are specialized for a single type of operation: ray tracing. It is possible that Blackmagic may utilize these cores for ray tracing in the Fusion tab, but if or when they will take advantage of these RT cores is currently unknown.

If you would like to skip over our test setup and benchmark result/analysis sections, feel free to jump right to the Conclusion section.

Test Setup & Methodology

Listed below are the test platforms we will be using in our testing:

Test Hardware  
Motherboard: MSI MEG X399 Creation
CPU: AMD Threadripper 2990WX 3.0GHz (4.2GHz Turbo) 32 Core
CPU Cooler: Corsair Hydro Series H80i v2
RAM: 8x DDR4-2666 16GB (128GB total)
Hard Drive: Samsung 960 Pro 1TB M.2 PCI-E x4 NVMe SSD
OS: Windows 10 Pro 64-bit
Software: DaVinci Resolve 15

To see how the new RTX cards perform in DaVinci Resolve, we tested them against a selection of NVIDIA cards from the previous generation as well as AMD's Vega 64 GPU.

Our testing for DaVinci Resolve primarily revolves around the Color tab and focuses on the minimum FPS you would see with various media and levels of grading. The lowest level of grading we test is simply a basic correction using the color wheels plus 4 Power Window nodes with motion tracking. The next level up is the same adjustments but with the addition of 3 OpenFX nodes: Lens Flare, Tilt-Shift Blur, and Sharpen. The final level has all of the previous nodes plus one TNR node.

We kept our project timelines at Ultra HD (3840x2160) across all the tests, but changed the playback framerate to match the FPS of the media. For all the difficult RAW footage we tested (CinemaDNG & RED), we not only tested with the RAW decode quality set to "Full Res" but we also tested at "Half Res" ("Half Res Good" for the RED footage). Full resolution decoding should show the largest performance delta between the different cards, but we also want to see what kind of FPS increase you might see by running at a lower decode resolution.

| Codec | Resolution | FPS | Bitrate | Clip Name | Source |
|---|---|---|---|---|---|
| H.264 | 3840x2160 | 29.97 FPS | 80 Mbps | | Transcoded from RED 4K clip |
| H.264 LongGOP | 3840x2160 | 29.97 FPS | 150 Mbps | | Provided by Neil Purcell - www.neilpurcell.com |
| DNxHR HQ 8-bit | 3840x2160 | 29.97 FPS | 870 Mbps | | Transcoded from RED 4K clip |
| ProRes 422 HQ | 3840x2160 | 29.97 FPS | 900 Mbps | | Transcoded from RED 4K clip |
| ProRes 4444 | 3840x2160 | 29.97 FPS | 1,200 Mbps | | Transcoded from RED 4K clip |
| XAVC S | 3840x2160 | 29.97 FPS | 90 Mbps | | Provided by Samuel Neff - www.neffvisuals.com |
| XAVC Long GOP | 3840x2160 | 29.97 FPS | 190 Mbps | | Transcoded from RED 4K clip |
| Blackmagic RAW | 4608x1920 | 24 FPS | 210 Mbps | A001_08122231_C008 | Blackmagic RAW |
| RED (7:1) | 4096x2304 | 29.97 FPS | 300 Mbps | A004_C186_011278_001 | RED Sample R3D Files |
| CinemaDNG | 4608x2592 | 24 FPS | 1,900 Mbps | Interior Office | Blackmagic Design [Direct Download] |
| RED (7:1) | 6144x3077 | 23.976 FPS | 840 Mbps | S005_L001_0220LI_001 | RED Sample R3D Files |
| RED (9:1) | 8192x4320 | 25 FPS | 1,000 Mbps | B001_C096_0902AP_001 | RED Sample R3D Files |

With the addition of the "Fusion" tab in Resolve, we are also including some basic tests for that tab. At the moment these are relatively easy projects that specifically test things like particles with a turbulence node, planar tracking, compositing, and 3D text with a heavy gaussian blur node. These projects are based on the following tutorials:

If you have suggestions on what we should test in the future, please let us know in the comments section. Especially if you are able to send us a sample project to use, we really want to hear from you!

Color Tab FPS - Raw Benchmark Results

Color Tab FPS - Benchmark Analysis

To analyze our benchmark results, we are going to break it down based on the three different levels of color grading we tested. The easiest - a basic grade with 4 power windows - is not too difficult, and every GPU we tested should be able to give full playback FPS in everything but RED 8K (Full Res Premium). However, each level up should show more and more of a difference between the different cards.

The "Score" shown in the charts is a representation of the average performance we saw with each GPU for that test. In essence, a score of "80" means that on average, the card was able to play our project at 80% of the tested media's FPS. A perfect score would be "100" which would mean that the system gave full FPS even with the most difficult codecs and grades.
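In other words, the score is just the mean of each clip's playback ratio expressed as a percentage. The exact formula isn't published, so the sketch below is an assumption - in particular the capping at full playback speed, so that exceeding real-time on easy media cannot hide dropped frames elsewhere:

```python
def resolve_score(results):
    """Average playback speed as a percentage of each clip's native FPS.

    `results` holds (measured_fps, native_fps) pairs, one per test clip.
    Each clip contributes at most 1.0 (full playback speed), so a score
    of 100 means full FPS on every clip tested.
    """
    ratios = [min(measured / native, 1.0) for measured, native in results]
    return round(100 * sum(ratios) / len(ratios), 1)

# Full speed on one clip and half speed on another averages out to 75.0
print(resolve_score([(29.97, 29.97), (12.0, 24.0)]))
```

Under this reading, a card scoring "80" averaged 80% of real-time playback across all the tested media and grade levels.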

While GPU pricing has been especially volatile recently, the best way to evaluate the new RTX cards is to compare them against the similarly priced NVIDIA cards from the previous generation. This isn't a perfect comparison since the older cards give you more VRAM at each price point, but when looking at performance it should be close enough to give us an idea of how these cards perform.

  • NVIDIA RTX 2080 8GB -> NVIDIA GTX 1080 Ti 11GB
  • NVIDIA RTX 2080 Ti 11GB -> NVIDIA Titan Xp 12GB

Starting with the RTX 2080, it comes in at a decent 10% faster than the GTX 1080 Ti, which puts it right on par with the more expensive Titan Xp video card. 10% isn't anything groundbreaking, but combined with the new Tensor cores, which give the card even more potential for the future, it is an easy pick over the GTX 1080 Ti unless you really need that extra 3GB of VRAM.

The RTX 2080 Ti, however, is much more impressive. With that card, we saw a 20-25% performance gain over the Titan Xp. This puts it right in line with the Titan V which is more than twice the cost!

Fusion Tab FPS - Raw Benchmark Results

Fusion Tab FPS - Benchmark Analysis

Fusion is relatively new to our DaVinci Resolve testing, and so far we haven't been too impressed with how well it takes advantage of the GPU. To be fair, we are not using media footage in these projects that is particularly difficult to process, but given the FPS we saw in each project, we doubt that having multiple GPUs would significantly improve performance even if you are using 8K RED media.

Whether it is due to our test projects or simply how much more CPU dependent Fusion is, we really didn't see much of a difference with any of the cards we tested. However, we will again point out the future potential of the RTX cards for Fusion. The new RT cores in particular could be very interesting and while we have no idea if/when they will be used, it is certainly something to keep in mind.

GPU Scaling - Raw Benchmark Results

GPU Scaling - Benchmark Analysis

We have done a number of articles looking at GPU scaling in the past, and while we don't expect things to change much with the RTX cards, we decided to go ahead and test it anyway. Unfortunately, we did not have access to multiple RTX 2080 Ti cards at the time of this article, but even with just the 2080 we should still be able to get a great idea of whether things are the same or if the RTX somehow behaves differently than we expect.

Starting with the Color tab, the results with the RTX 2080 line up almost exactly with the GTX 1080 Ti. The 2080 is of course a bit faster, but the scaling itself is almost identical. While we did not test up to four GPUs, it is pretty clear that two of these RTX cards should be more than adequate for most users, or three if you do especially intensive grading.

Like we stated in the previous section, at the moment Fusion isn't terrific at leveraging the video card and it shows in our multi-GPU testing. In fact, we saw a slight drop in performance when we used multiple RTX cards. It isn't enough to really be too concerned about (will you really notice a fraction of an FPS?), but it does show that if you care about performance in Fusion, adding more GPUs isn't the way to go right now.

Are the RTX video cards good for DaVinci Resolve?

Compared to the previous generation cards of similar cost, in the Color tab we saw up to a 10% performance increase with the RTX 2080 and a 25% performance increase with the RTX 2080 Ti. This makes the RTX cards terrific for DaVinci Resolve, even ignoring the new Tensor and RT cores which may be utilized in the future.

NVIDIA GeForce RTX 2080 & 2080 Ti DaVinci Resolve 15 Benchmark


NVIDIA RTX 2080 8GB vs GTX 1080 Ti 11GB for DaVinci Resolve

While not massively faster, the RTX 2080 does give a very respectable 10% performance increase over the GTX 1080 Ti in the Color tab. This puts it about on par with the more expensive Titan Xp video card, although the 8GB of VRAM may be slightly limiting for those working with 8K footage and multiple high resolution displays.

NVIDIA RTX 2080 Ti 11GB vs Titan Xp 12GB for DaVinci Resolve

Depending on the level of grading you are doing, the RTX 2080 Ti is up to 25% faster than the Titan Xp, or about 30% faster than the RTX 2080, which puts it in line with the much more expensive Titan V video card. Due to how Resolve scales with multiple GPUs, a single RTX 2080 Ti also performs almost exactly the same as a pair of RTX 2080 or GTX 1080 Ti cards, making it a great value even at its relatively high cost.

There isn't really much mystery here: the RTX cards are simply really good for DaVinci Resolve. Especially when you consider that Blackmagic has already stated that they will be taking advantage of the Tensor cores in Resolve 15, these cards are incredibly attractive for use in a color grading workstation. However, not everything is perfect, and there is one big issue that you may have to work around: the cooler most RTX cards use.

To put it bluntly, the style of cooler used on the reference cards from NVIDIA and most 3rd party manufacturers is not good for multi-GPU configurations. They can be excellent for a single GPU, but if you want two or more cards the design is sub-optimal. The issue is that the cooler does not exhaust out the back of the system so the hot air generated by the cards is simply recycled inside the system over and over. We didn't see a significant performance drop in our Resolve testing, but we are also testing in an ideal environment with relatively short clips. In GPU-heavy applications like OctaneRender and Redshift, however, we have seen up to a 30% performance drop over time using multiple reference RTX cards. This doesn't mean you cannot use the RTX cards in multi-GPU configurations, but rather that you should try to use cards with a "blower" style cooler that is designed to vent the heat directly outside of the chassis.

If you are interested in how the RTX cards perform in other applications, be sure to check out our recent Video Card articles as we have (or are working on) a number of other articles for the RTX 2080 and RTX 2080 Ti.

DaVinci Resolve Workstations

Puget Systems offers a range of powerful and reliable systems that are tailor-made for your unique workflow.

Configure a System!

Labs Consultation Service

Our Labs team is available to provide in-depth hardware recommendations based on your workflow.

Find Out More!
Tags: DaVinci Resolve, GeForce, RTX, 2080, 2080 Ti, 1060, 1070, 1070 Ti, 1080, 1080Ti, Radeon, Vega
Håkon Broder Lund

Great work and interesting. Did not expect the 2080 Ti to perform on par with Titan V. A 3k GPU.
Didn't know that they will take advantage of the tensor cores. Promising in the future. Improved TNR? Time will tell.

Posted on 2018-09-28 21:57:07
Alan Gordon

The big thing that I'm curious about is if Resolve will support NV-link to pool the GPU Ram. 11 gigs is a bit anemic for timeline resolutions larger than 4k.

Posted on 2018-09-29 04:24:03

The NV-Link on these RTX cards is not like the one on the Quadro GV100, so there isn't going to be anything like VRAM sharing. Really, it is just SLI over a faster connection. I believe they might have left it open to being used in more ways in the future, but I wouldn't expect that anytime soon. If NVIDIA did that, it would remove one of the major reasons to purchase one of their much more expensive cards.

Posted on 2018-09-29 04:36:25
Alan Gordon

At what data rate do these cards' NVLink run? And at what data rate would it need to run to be useful to pool memory for an 8K frame size? Which, unless my math is wrong, at 24fps would be about 18GB/s uncompressed in 32-bit float

Posted on 2018-10-02 15:47:01

Technically, I believe the connection on these RTX cards is a x8 NVLink channel which should provide 25 GB/s peak bandwidth in either direction. However, there is a ton of confusion right now about whether the RTX cards are true NVLink or just SLI over NVLink. Everything from NVIDIA indicates that it is just SLI with a higher bandwidth, but some big GPU rendering engines from ChaosGroup and OTOY seem to indicate that the RTX card can be used for memory pooling (true NVLink). The hard thing is that even if they do support that, the software has to support it as well and right now, nothing publicly available actually supports true NVLink on the RTX cards. So there is really no way to test and find out ourselves. We are trying to do that now, but we have had issues getting the Quadro NVLink bridges to work on these cards and the actual RTX NVLink bridges won't be in for a few days.

Even if these cards do support real NVLink (which I'm doubtful of), Blackmagic would need to implement support in Resolve. I give that a 50/50 chance to be honest.
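For reference, the uncompressed data rate can be sanity-checked directly (the frame layout below - four 32-bit float channels at 8192x4320 - is an assumption, since the exact format wasn't stated):

```python
# Back-of-the-envelope data rate for uncompressed 8K playback.
# Assumed frame layout: 8192x4320, 4 channels (RGBA), float32, 24 fps.
width, height = 8192, 4320
channels, bytes_per_channel = 4, 4
fps = 24

bytes_per_frame = width * height * channels * bytes_per_channel
gb_per_second = bytes_per_frame * fps / 1e9  # decimal gigabytes

print(f"{bytes_per_frame / 1e6:.0f} MB/frame -> {gb_per_second:.1f} GB/s at {fps} fps")
```

That works out to roughly 13-14 GB/s rather than 18, but either way it is the same order of magnitude and sits uncomfortably close to the 25 GB/s peak figure once overhead and bidirectional traffic are considered.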

Posted on 2018-10-02 16:58:09

Just a heads-up, we've done some testing with NVLink and it looks like the RTX cards do NOT support full NVLink: https://www.pugetsystems.co... . So even though they call it NVLink, it is really just a higher bandwidth SLI bridge using the physical NVLink connection. It's always possible we're wrong and there is some work-around people can use, but using the direct CUDA library we couldn't get peer-to-peer to work between two RTX 2080 cards.

Posted on 2018-10-06 00:48:29
Jakub Badełek

Thanks for the test Matt. I was thinking, is there a way to compare the SOFTWARE? To measure if DaVinci can be faster than Premiere Pro for example, at least in typical actions. Some time ago I decided to learn myself some video editing and I decided to start with the most popular software (Premiere Pro) but now more and more people are mentioning switching to Resolve as it is faster for them. It would be cool if someone could actually test whether this is true. It won't be easy as this depends on the particular scenario (timeline, workflow, computer configuration etc) but... it would be VERY useful ;)

Posted on 2018-10-01 12:53:51

We've considered doing that from time to time, but there are a couple of reasons we end up shying away from it. The primary reason is simply because our focus is really on the hardware and figuring out what is best for the programs our customers already use. Getting into comparing software packages is really not our forte and since our to-do list is already so long we feel it is better to focus on what we really excel at doing right now.

The second reason is that while performance is a consideration when choosing which software to use, I honestly think it shouldn't be the primary one. Using whichever software fits your workflow is really the thing you should mostly be worrying about since there are usually workarounds for performance issues (such as using proxies). For example, Premiere Pro is a terrific editor and has great integration with other Adobe apps like After Effects and Photoshop. Resolve, on the other hand, is amazing for color grading and has the advantage of having everything (editing, motion graphics, audio, and color) all in one application. Resolve is also better IMO if you have multiple people working on a project (or remotely) due to its database structure and collaboration tools.

Posted on 2018-10-01 16:20:14

Bit of a rebel here, I am thinking of buying the 2080 Ti for Magix Vegas and I was wondering if this GPU would help with its general performance, namely preview speed? Is there a significant difference in OpenCL performance from the 1080 Ti to the 2080 Ti or should I stick with the 1080 Tis?

Posted on 2018-10-05 17:07:21

Vegas isn't something we typically look at, so I can't say for sure. My best guess is that you won't see much of a performance difference with a 2080 Ti. Similar to Premiere Pro, I believe Vegas is much more CPU dependent so my guess is that except in limited situations you are going to be CPU bottlenecked even with a 1080 Ti.

Posted on 2018-10-05 17:14:41

I have a 1080 Ti and I frequently get GPU Out of Memory errors when grading 4K RED footage. It seems to happen randomly when I go heavy on noise reduction. I would love to see these cards tested for where their memory limits are and compared to the Quadro RTX 6000 with its 24GB memory to see if it blows away the limits I keep running into, or if it's maybe just a poor memory management issue in Resolve that even 24GB won't fix.

Posted on 2018-10-14 02:00:21

If it happens randomly, you are probably right on the edge of having enough VRAM. So a Quadro RTX with 24GB should be plenty to prevent those errors. One thing you can do is download a program called GPU-Z and use that to keep an eye on the "Memory Used" to see how quickly you are filling up the GPU memory. That might give you an idea of how much VRAM your workflow needs.

Posted on 2018-10-15 15:55:00
Fahim Khan.

Is there real advantage of SLI bridge compared to without in multi GPU scenario in Davinci resolve?

Posted on 2018-10-23 14:05:14

That is actually a much more complicated answer than it was before these RTX cards came out. In the past, the answer was "no". SLI did nothing and in some cases resulted in some bugs so it was always better to have it off. With the new RTX cards it is still a "no" right now, but in the future that may change.

What is going on is that the RTX 2080 & 2080 Ti have NVLink which theoretically allows the cards to talk to each other without having to go through the PCI-E bus. Having SLI enabled is how you also enable this link, but the current issue is that you also have to be using software that is coded to take advantage of it. Right now, pretty much nothing can use NVLink outside of some HPC software so having SLI/NVLink on doesn't do anything. If Blackmagic adds support for NVLink to do something like memory pooling or simply to improve performance, however, then having SLI/NVLink on (if you have an RTX card) could give big advantages. How much completely depends on what Blackmagic actually uses it for.

To sum it up:
Non-RTX card = SLI doesn't do anything, may cause bugs
RTX card = SLI doesn't do anything today, but it may be useful in the future by enabling NVLink.

Posted on 2018-10-23 17:22:51
Fahim Khan.

Thank you Matt for thorough explanation. Much appreciated.

Posted on 2018-11-06 16:11:06
Joe S.

Thanks again for your extensive testings!

2 Questions:

1. 1080ti vs 2080: in the performance lead of the rtx, did you factor in that the rtx are now factory overclocked? Is the gtx overclocked as well?

2. I'm trying to decide if I need better performance or the 11GB of VRAM more. I mostly work with full HD timelines and 4k footage. I use TNR in every shot and sometimes would like to even use 2 separate nodes with NR. But on my 970 I run out of memory quickly and often have to minimize Resolve while rendering. I don't care so much about rendering performance, more about the performance while grading, but there will always be a delay I figured. I think I answered my question already, but I'm curious what you would recommend?

Posted on 2018-10-30 12:41:01

The factory overclock is actually an interesting topic since more and more the term "overclock" means "stock". It is at times really hard for us to get non-overclocked versions of cards simply because the overclocks are so stable that manufacturers are basically treating it like their default option. Now that even NVIDIA is doing that, it really is blurring the lines between what is an overclock and what is simply a higher turbo clock. Either way, generally an overclocked card only improves performance by a few percent in these kinds of applications, so it shouldn't make a big difference. That is why we didn't worry too much about overclocked vs not.

For HD/4K, we generally recommend at least 8GB of VRAM, but I don't think you should need 11GB. It sounds like you are right on the edge of 4GB being enough, so doubling that should be plenty. Going up to 11GB would be more future-proofing if you think you might get into 6K/8K in the near future.

Posted on 2018-10-30 17:33:38
Hamitham Support Team

Thanks for this article.
We have a Resolve workstation and we want to upgrade the GPU. Can you say what your thoughts are on the NVIDIA RTX 2080 Ti 11GB vs 2x GTX 1080 Ti?

Posted on 2018-10-31 10:24:01

The RTX 2080 is a bit faster than the GTX 1080 Ti, and since the RTX 2080 Ti is about the same as a pair of RTX 2080 cards, I would go with a single RTX 2080 Ti. That was a bit more confusing than I meant it to be, so just get a single RTX 2080 Ti. It should be the same performance as a pair of GTX 1080 Ti but the RTX 2080 Ti will be:

1) Less expensive
2) Less wattage/heat/noise
3) More futureproof with the RT/Tensor cores and the possibility of NVLink if you get a second card
4) Easier to upgrade since you can just toss in another if you need to
5) Simpler, which should mean better stability and fewer bugs

Posted on 2018-10-31 16:33:08
Landon Parks

It seems to me that Vega 64 is about on par performance wise with 1080ti, minus the extra 3GB RAM. However, it seems that Vega might actually have a RAM advantage over anything nVidia has right now, with its HBCC (high-bandwidth cache controller) memory. It appears that once the VRAM runs out, it starts dumping to system RAM. If I'm thinking correctly, this should basically prevent VRAM running out?

Myself, I am building a Threadripper 2950x system with dual Vega 64's, both running OC on dual custom liquid loops.

Posted on 2018-11-08 11:19:59

Yea, it is pretty close to the 1080 Ti in Resolve in terms of performance. For HBCC, I don't believe that it allows you to use your system memory as a slower version of video memory if you run out of VRAM. My understanding is that it is just a caching controller that prioritizes the fastest parts of the VRAM to be used by frequently used data. Just like NVIDIA (or older AMD cards), if you run out of VRAM you are still going to get an error in Resolve.

Posted on 2018-11-08 17:49:39
Kævin Sønderberg Hansen

Can you mix GPU's together, 1080Ti and the 2080?

Posted on 2018-11-19 15:33:02

For Resolve, yes you can and it should work pretty well. The biggest thing to be aware of is that you are limited to the smallest amount of VRAM - 8GB in this case.

Something to be aware of, however, is that even though Resolve should work fine, that doesn't mean everything will be fine. I don't think you should run into too much trouble with those cards, but the more different the cards are, the more likely things will break. We've seen Windows updates get confused and break things, Photoshop give errors when it can't figure out how to handle mixed cards, etc. So just keep in mind that you may need to do a bit of troubleshooting a few times a year.

Posted on 2018-11-19 17:34:38

Nevermind. It was answered in the comments.

Posted on 2018-12-06 10:13:33
Charles Unice

Have you done any testing with your heavy color (basic + 4 power windows+3 open fx + TNR) on a feature length time line? Im wondering if 11gb of vRam is enough or if resolve will use up all 11 gig with that much TNR? Do you have plans on testing the Titan RTX?

Posted on 2018-12-30 17:53:15

We will be testing the Titan RTX, although since it will only be available with the dual fan cooler design, it will have to be really amazing for it to make sense for Resolve since you are going to be limited to one (mayyybe two) GPUs. The 24GB of VRAM might be well worth it for some people, however.

As for VRAM usage on longer timelines, we haven't really done that all that much. My understanding is that VRAM usage shouldn't be dependent on the length of clips you are using, but rather the resolution and format of the media as well as the effects applied - especially things like noise reduction.

Posted on 2019-01-02 20:25:55
Richard Milner

How much noisier is the blower fan RTX than the twin fan? Will a single twin fan card stay cool enough under load?

Posted on 2019-02-01 10:56:44

It really depends on the brands. I don't deal too much with that (that is more our product qualification and production teams), but I believe PNY, EVGA, and Gigabyte are brands that have blower style coolers that aren't too loud. Definitely louder than the dual fan models at high load, but idle or low load they aren't too bad at all.

Posted on 2019-02-05 03:26:52

This site is a great value to all of us who are looking at building better video editing systems. Thank you so much!

Posted on 2019-07-02 19:00:28


I have two external Vega 56 (in Breakaway Boxes) connected to a Mac Mini 2018 (on thunderbolt ports 2 and 4).
The two cards are recognized by Resolve, however the rendering is actually slower than it was with just one card connected.
I read in your article that two cards are supposed to give a significant speed up when using temporal noise reduction.
Do you guys have any idea where the problem could be?
The Mac Mini has the fastest CPU configuration and 32 GB Ram.


Posted on 2020-01-27 22:13:08

Hey Armin, it could really be any number of things, so I would recommend asking Apple, Blackmagic, or the manufacturer of the external GPU. A couple guesses though:

1) You may be CPU bottlenecked even with a single GPU, so adding more GPUs isn't going to get you anything
2) I believe there are two Thunderbolt controllers on the Mac Mini, but I don't know if they share PCI-E lanes or not. If they do, then adding a second GPU will make each GPU only get 2 lanes which can make performance overall worse.
3) No idea which ports go to which controller, but if they don't share PCI-E lanes, then you might be just putting both GPUs on the same controller. Try different ports to try to get them onto the separate controllers.
4) Some setting in Resolve isn't set right

Posted on 2020-01-27 23:42:45
Dan Hopkins

This is measured in FPS during playback correct? Not rendering?

Posted on 2020-02-21 02:21:02

Correct, these results are playback FPS in the Color and Fusion tabs. Newer articles test rendering (export) performance instead, however, because testing playback is really, really hard to automate.

Posted on 2020-02-21 02:27:21
Dan Hopkins

Thank you!

Posted on 2020-02-21 02:32:53
Dan Hopkins

Do you know if it's okay to run two GPUs that aren't the same? Right now I'm in between a few options and would love your thoughts... I mainly want to make sure I'm not getting bogged down when playing back / rendering noise reduction and OFX such as sharpening. I'm okay with rendering cache in order to help playback, and I assume a better GPU will speed up the render cache process? It's fairly slow with the 1070.

Files that i'm editing: 4k 24/30/60p 10-bit H265, maybe a few 6k 10-bit H265. All LongGOP

Current setup

i9 10920x clocked to 4.1Ghz
64GB 3200MHz RAM
GTX 1070
Working media on a SATA Sandisk Extreme SSD, render cache on another SATA WD Blue SSD, operating system on a Samsung EVO 850

My thoughts...

1. Add a 1080TI and run that in tandem with the 1070
2. Swap the 1070 for a 1080TI
3. Swap the 1070 for a 2080 TI

I had thought about the 2080, but it doesn't seem to have really any benefit over the 1080TI, and is more expensive. Any guidance based on my needs would be great!


Posted on 2020-02-21 14:57:38

Technically, Resolve should work just fine with mixed GPUs - the only downside is that you are limited to the smallest amount of VRAM between each card. In reality, however, there definitely are issues that come up. Sticking within the same model line (1070 + 1080 Ti, 2070 + 2080 Ti, etc.) should be pretty stable, but the more different the cards are, the more likely Windows Update is going to mess stuff up whenever it pushes a driver update.

That said, for what you described as your workflow, likely the best bang-for-your-buck would be to go with "Add a 1080TI and run that in tandem with the 1070". Just upgrading the GPU to a 1080/2080 Ti would certainly help, but not as much as adding a second GPU.

Posted on 2020-02-21 17:10:35
Dan Hopkins

Hey Matt, thanks for your input. I'm a little confused though. It sounds like issues are likely to come up if I use two different GPUs with driver mismatches and such. Also, I feel like one of the biggest bonuses of a new GPU is adding VRAM so I can export with less of a worry about a GPU memory error. You're saying that if I add the 1080TI and use it in tandem with the 1070, Resolve will only be able to use 8GB of VRAM, correct? Or, when I render out a sequence, can I tell Resolve to just use the 1080TI?

Where do you think having two GPU's will help the most? Playback fps? Render cache?

Posted on 2020-02-21 17:26:32

Pretty much yes to everything.

With a 1070 8GB + 1080Ti 11GB, Resolve is going to be limited to 8GB of usable VRAM unless you tell it to only use the 1080 Ti. But at that point, you might as well remove the 1070 entirely since all it is doing is making your system more complicated (which makes it more prone to random issues). You should run into the same out of memory issues whether you are rendering or playing back during editing (unless you are using lower res optimized media), so generally if something is a problem for the render, it will be an issue for live playback as well.

Multiple GPUs will help pretty much anywhere you have OpenFX or noise reduction nodes applied. So live playback, render cache, exporting, etc. The places where those nodes are not present (creating optimized media, or anything with clips before you have applied OpenFX/NR), are largely going to be CPU-limited, so more GPU power isn't going to make much of a difference.

Posted on 2020-02-21 17:33:51
Dan Hopkins

Thank you. Lastly, what about just adding another 1070, or maybe a 1070TI? Would that allow Resolve to use 16GB of VRAM and be overall faster than one 1080 TI?

Posted on 2020-02-21 17:37:07

Two 1070 would be faster than a single 1080 Ti, but when it comes to VRAM, it doesn't matter if the cards match or not. You will always be limited to the smallest available VRAM capacity since Resolve has to duplicate all the "render data" onto each card. It is possible Blackmagic could try to utilize NVLINK to enable the pooling of VRAM (where VRAM could be additive), but everything I have heard points to them not doing that. From my understanding, it sounds great on paper, but just doesn't work well in the real world for things like video editing.

Posted on 2020-02-21 17:44:05
Dan Hopkins

Aw shucks, I thought I was onto something with the whole pooled VRAM thing. I’m definitely foreseeing 8GB of VRAM being an issue when I start adding NR and sharpening to 50-75% of a 10 minute edit. Sounds like your first option could be worth checking into though. Use dual GPUs (1070 and 1080TI) through the edit and color process, and if I ever get the GPU memory error, just tell Resolve to only use the 1080TI until I’m done that section of the edit / for rendering? So many possibilities.. if the 2080 TI was just a little cheaper I’d go for it in a second

Posted on 2020-02-21 18:02:54