

Read this article at https://www.pugetsystems.com/guides/1382

DaVinci Resolve 15: AMD Radeon VII 16GB Performance

Written on March 12, 2019 by Matt Bach


DaVinci Resolve is one of the few applications in the video editing world that really pushes the envelope in terms of GPU acceleration. Not only will a higher-end GPU almost always give you better performance, but Resolve can take advantage of multiple GPUs at the same time. In fact, many professional Resolve workstations will have three, four, or even more GPUs all working together to provide the user with the smoothest color grading workflow possible.

For quite a while now, NVIDIA GPUs have been our go-to recommendation for DaVinci Resolve. AMD's Radeon video cards were fine for their price, but for the level of performance most of our customers desired, AMD simply couldn't keep up. However, AMD's new Radeon VII 16GB has a number of features that could shake things up.

Right away, we knew that this was a card that was going to be very interesting for DaVinci Resolve due to its 16GB of VRAM. Many of our customers and readers are starting to work with 8K media or are using Resolve for heavy noise reduction where having a high amount of VRAM is incredibly important.


While the 16GB of VRAM alone makes this an interesting option for DaVinci Resolve, raw performance is a major concern as well. To see how the Radeon VII fares, we are going to take a look at how it stacks up against the AMD RX Vega 64 as well as the entire NVIDIA GeForce RTX lineup - plus the Titan RTX - in DaVinci Resolve Studio 15.3. We also have (or will soon have) a number of articles available looking at the performance of the Radeon VII in a range of other applications that you can view by filtering our recent articles to just those about Video Cards.

If you would like to skip over our test setup and benchmark result sections, feel free to jump right to the Conclusion section.

Test Setup & Methodology

Listed below is the system we will be using in our testing:

Our testing primarily revolves around the minimum FPS you would see with various media and levels of grading in the Color Tab. The lowest level of grading we test is simply a basic correction using the color wheels plus 4 Power Window nodes with motion tracking. The next level up is the same adjustments but with the addition of 3 OpenFX nodes: Lens Flare, Tilt-Shift Blur, and Sharpen. The final level has all of the previous nodes plus one TNR node.

We kept our project timelines at Ultra HD (3840x2160) across all the tests, but changed the playback framerate to match the FPS of the media. For all of the difficult RAW footage we tested (CinemaDNG & RED), we not only tested with the RAW decode quality set to "Full Res" but we also tested at "Half Res" ("Half Res Good" for the RED footage). Full resolution decoding should show the largest performance delta between the different cards, but we also want to see what kind of FPS increase you might see by running at a lower decode resolution.
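To make the aggregation concrete, here is a rough sketch (in Python) of how per-clip minimum-FPS measurements could be rolled up into a single score. The scoring formula and clip values are illustrative assumptions on our part, not Puget's actual benchmark math:

```python
def grade_score(min_fps, timeline_fps):
    """Score one clip as minimum playback FPS relative to the timeline FPS,
    capped at 1.0 since Resolve won't play back faster than real time."""
    return min(min_fps / timeline_fps, 1.0)

def overall_score(results):
    """Average the per-clip scores and scale to 100."""
    scores = [grade_score(fps, target) for fps, target in results]
    return round(100 * sum(scores) / len(scores), 1)

# (measured minimum FPS, timeline FPS) for a few hypothetical clips
results = [(29.97, 29.97), (24.0, 24.0), (15.0, 29.97), (9.0, 25.0)]
print(overall_score(results))  # -> 71.5
```

Under a scheme like this, a perfect 100 would mean every clip played back in real time even under the heaviest grade.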

| Codec | Resolution | FPS | Bitrate | Clip Name | Source |
|---|---|---|---|---|---|
| H.264 | 3840x2160 | 29.97 FPS | 80 Mbps | | Transcoded from RED 4K clip |
| H.264 LongGOP | 3840x2160 | 29.97 FPS | 150 Mbps | | Provided by Neil Purcell - www.neilpurcell.com |
| DNxHR HQ 8-bit | 3840x2160 | 29.97 FPS | 870 Mbps | | Transcoded from RED 4K clip |
| ProRes 422 HQ | 3840x2160 | 29.97 FPS | 900 Mbps | | Transcoded from RED 4K clip |
| ProRes 4444 | 3840x2160 | 29.97 FPS | 1,200 Mbps | | Transcoded from RED 4K clip |
| XAVC S | 3840x2160 | 29.97 FPS | 90 Mbps | | Provided by Samuel Neff - www.neffvisuals.com |
| XAVC Long GOP | 3840x2160 | 29.97 FPS | 190 Mbps | | Transcoded from RED 4K clip |
| Blackmagic RAW | 4608x1920 | 24 FPS | 210 Mbps | A001_08122231_C008 | Blackmagic RAW |
| RED (7:1) | 4096x2304 | 29.97 FPS | 300 Mbps | A004_C186_011278_001 | RED Sample R3D Files |
| CinemaDNG | 4608x2592 | 24 FPS | 1,900 Mbps | Interior Office | Blackmagic Design [Direct Download] |
| RED (7:1) | 6144x3077 | 23.976 FPS | 840 Mbps | S005_L001_0220LI_001 | RED Sample R3D Files |
| RED (9:1) | 8192x4320 | 25 FPS | 1,000 Mbps | B001_C096_0902AP_001 | RED Sample R3D Files |

With the addition of the "Fusion" tab in Resolve, we are also going to be including some basic tests for that tab as well. At the moment these are relatively easy projects that specifically test things like particles with a turbulence node, planar tracking, compositing, and 3D text with a heavy gaussian blur node. These projects are based on the following tutorials:

One thing we want to note is that we ran into quite a few GPU driver issues during our benchmarking process. First, we discovered that the NVIDIA GeForce 418.x and 419.x drivers currently give about 10-20% lower performance in Resolve. We have a full write-up on this issue in our Support Guide: DaVinci Resolve 15 - Performance loss with NVIDIA 418 & 419 driver. Due to this, we used NVIDIA driver version 417.71 for our testing.

AMD also had some driver issues that caused image corruption of Blackmagic RAW footage. Even after trying every GPU driver AMD has available for the Radeon VII (ranging from 19.2.1 to 19.3.1) and going back to Resolve 15.0, we were unable to resolve the issue. The benchmark results are in line with what we would expect, however, so to keep this from holding up this article we decided to go ahead with the latest AMD driver. Just be aware that the results with BRAW may not be 100% accurate.


Corruption of Blackmagic RAW footage with current AMD Radeon VII drivers (19.2.1-19.3.1)

Benchmark Results

While our benchmark presents various scores based on the performance of each type of task, we also wanted to provide the individual results in case there is a specific codec or level of grade someone may be interested in. Feel free to skip to the next section for our analysis of these results.

AMD Radeon VII 16GB DaVinci Resolve Studio 15 Benchmark Performance Results

Benchmark Analysis

Green = NVIDIA, Red = AMD. Darker colors indicate dual GPU configuration

Our results are divided into several categories based on the level of grade, an "Overall Color Grading Score" that is a combination of each grading score, and a dedicated result for Fusion. For most users, the Overall Color Grading Score should be a pretty accurate breakdown of how you would expect each GPU to fare in DaVinci Resolve. However, if you tend to do very heavy grades with things like noise reduction, you may want to scroll to the fourth chart which has the results for a grade with a combination of Power Windows, OpenFX, and TNR.

From a color grading perspective, the Radeon VII does extremely well. Even though it is priced right between the RTX 2070 and RTX 2080, it is about 20% faster than the RTX 2070 and 13% faster than the RTX 2080. In fact, since it scored a single point higher than the 2080 Ti, it was technically the fastest single GPU we tested! Keep in mind that the Radeon VII has an MSRP about $500 less than the 2080 Ti and more VRAM than any NVIDIA GeForce video card. Only the Titan RTX 24GB (which isn't technically a GeForce card) has more VRAM, and it costs 3.5x more than the Radeon VII. Yeah... that makes the Radeon VII 16GB a pretty good GPU for Resolve.
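Value comparisons like this are easy to sanity-check yourself from the charts. The sketch below uses placeholder scores and the commonly quoted launch MSRPs ($699 for the Radeon VII, $1,199 for the RTX 2080 Ti Founders Edition) - assumptions on our part, not figures pulled from this article:

```python
def score_per_dollar(score, msrp):
    """Benchmark score delivered per dollar of MSRP."""
    return score / msrp

# Hypothetical overall scores - roughly equal, per the paragraph above
radeon_vii = score_per_dollar(1000, 699)
rtx_2080_ti = score_per_dollar(999, 1199)
print(f"{radeon_vii / rtx_2080_ti:.1f}x the performance per dollar")  # -> 1.7x
```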

While the Radeon VII isn't ideally suited for multi-GPU configurations due to its cooler design (we've talked about this before in regards to the similarly designed RTX card), with proper chassis airflow you should be able to use two of these cards in a full-sized chassis. What is interesting is that in a dual GPU setup, the Radeon VII doesn't scale quite as nicely as the RTX cards. Two Radeon VII are definitely much faster than one, but where the NVIDIA cards saw about a 35-40% performance bump on our hardest grading tests with TNR, the Radeon VII only saw about a 22% improvement.
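The scaling figures quoted here are just the percentage gain from one card to two, which you can verify against any pair of FPS results in the charts. A quick sketch, using made-up FPS values rather than the measured ones:

```python
def scaling_gain(fps_one_gpu, fps_two_gpus):
    """Percent performance gained by adding a second GPU."""
    return (fps_two_gpus / fps_one_gpu - 1) * 100

# e.g. a TNR-heavy grade: 9 FPS on one card, 11 FPS on two
print(f"{scaling_gain(9.0, 11.0):.0f}% faster")  # -> 22% faster
```

Perfect scaling would be a 100% gain; anything well below that means the second card is sitting partially idle.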

Don't get us wrong, the Radeon VII 16GB is still very good in a dual GPU configuration since it is a hair faster (and has twice the VRAM) than a pair of slightly more expensive RTX 2080 8GB cards. But even if you were somehow able to get three or four Radeon VII cards in a system without them overheating, our guess is that NVIDIA would end up taking over in terms of raw price-to-performance once you got up to a triple GPU setup.

Is the AMD Radeon VII 16GB good for DaVinci Resolve?

The short answer is: YES! The AMD Radeon VII 16GB is an excellent GPU for DaVinci Resolve. We had issues with BRAW footage, but the fact that the Radeon VII has 16GB of VRAM and matches the RTX 2080 Ti 11GB in terms of single-GPU performance (with an MSRP that is $500 less) makes it a very, very strong GPU for DaVinci Resolve.

However, nothing is perfect and there are two issues with the Radeon VII that you may want to consider:

  • First, supply is extremely tight at the moment. This usually improves over time, but if you are trying to get your hands on a Radeon VII at the time of this article, don't expect to get one quickly.
  • Second, the cooler design is not very good for multi-GPU setups, which are very common on high-end DaVinci Resolve workstations. You may be able to get two cards in a system that has enough space and airflow, but that is going to be a hard limit. To be fair, the NVIDIA RTX cards have the exact same issue, but there are "rear exhaust" versions available that you can get for multi-GPU setups. Whether a manufacturer will come out with a blower-style Radeon VII at some point in the future is completely unknown, but right now they simply don't exist.


While the Radeon VII is great for Resolve, we do want to point out that if you have a very healthy budget, you can still get better performance from NVIDIA. Two (or three, or four) RTX 2080 Ti will give you more performance in Resolve than a pair of Radeon VII, and if you really need the VRAM you can go up to the Titan RTX or the Quadro RTX 6000/8000. That will dramatically increase the cost of your system, but if you absolutely need the best performance possible, NVIDIA is still king.

Of course, very few workflows involve just one piece of software. Because of this, we highly recommend checking out our recent Video Card articles where we look at - or will be looking at - the performance of the Radeon VII in Premiere Pro, Photoshop, DaVinci Resolve, as well as a number of other applications.


Tags: DaVinci Resolve, Vega 64, Radeon VII, RTX 2060, RTX 2070, RTX 2080, RTX 2080 Ti, Titan RTX
Misha Engel

First of all, thanks for the review and all the time and effort you put into this. Puget Systems is the best and the worst review site for post-production hardware/software - yes, you are the only ones.

I do have some questions/remarks:
What BIOS version did you use in the DaVinci/Radeon VII article?
(First comment Hwgeek in the photoshop/radeon7 article https://www.pugetsystems.co... )

After reading this: "One thing we want to note is that we had quite a bit of GPU driver issues during our benchmarking process. First, we discovered that the NVIDIA GeForce 418.x and 419.x are currently giving about 10-20% lower performance in Resolve. We have a full write up on this issue in our Support Guide: DaVinci Resolve 15 - Performance loss with NVIDIA 418 & 419 driver. Due to this, we used the NVIDIA driver version 417.71 for our testing."
I get the feeling that Puget Systems doesn't care much about security where NVIDIA does https://nvidia.custhelp.com... - NVIDIA doesn't change those drivers for fun, they got 3 months' time to come up with an update before the security flaws would go public.

Security is a big thing these days with Spectre, Meltdown and Spoiler.

How can the Radeon be faster with +3 OpenFX than without on many occasions?

Then this part: "But even if you were somehow able to get three or four Radeon VII cards in a system without them overheating, our guess is that NVIDIA would end up taking over in terms of raw price-to-performance once you got up to a triple GPU setup."
Based on what?

For a company like Puget Systems it should be easy to fit a waterblock ("Bykski AMD RADEON VII Full Coverage GPU Water Block") on a Radeon VII, or any other GPU you put in your systems; it would distinguish Puget Systems from box-building OEMs like HP and Dell.

Thanks again for the article, although it's pretty confusing

Posted on 2019-03-14 15:17:40

Lots of questions here, but I'll try to hit them all:

1) We are using BIOS version 106 which I believe is the newest available.

2) We absolutely care about security! However, when doing performance benchmarks we never want the results to be skewed by a bug that is going to be fixed in the near future. The performance issue with the newer NVIDIA drivers has already been confirmed by NVIDIA/Blackmagic and they are working on a solution. The problem is not being caused by the security updates at all - if it was, then the issue would not be present in the 418 driver since the security stuff was added in 419. If this performance loss was due to a permanent change, then we would of course use the latest driver (just like how we test all Intel CPUs with the meltdown/spectre fixes even though it can cause a drop in performance).

3) Higher FPS with more effects is definitely odd, but actually not that unusual. I can't really comment on the why since I don't have full knowledge of the render pipeline, but my understanding is that sometimes, shifting things around a bit between the different stages in the pipeline can actually make the software slightly more efficient. This actually comes up in Premiere Pro more often where applying a color grade with Lumetri Color can sometimes slightly raise the playback FPS. It seems to be more common with long-GOP codecs, but can happen with others as well. The couple of results you pointed out could also just be due to normal testing variations - we try to minimize those, but we aren't going to run the benchmark over and over until we get the results we expect since what we expect is not always what reality actually is.

4) We actually are very intentional that we do NOT do full system liquid cooling. We used to do it years ago, but what we found is that if a customer is not the kind of person who would build their own system, they are likely not the kind of person who is willing to do the maintenance required with full liquid cooling. There are of course exceptions, but way too many of those liquid cooled systems ended up being a bad time for everyone involved. So not offering liquid cooling is a very conscious choice we made in order to ensure that we are giving our customers the best possible experience.

Posted on 2019-03-14 16:45:32
Misha Engel

1) I think you are right about that.
2) So it's okay to use insecure drivers and leave your system open for known security issues till NVidia and BMD have solved this problem.
The RTX Titan X values look like they are from the patched drivers (Slower than the RTX2080ti with insecure drivers).
3) Will ask Wendell for this.
4) Good point, it's more something for tweakers and data-center to do customloops.
I do however think that AIO of the shelf good CPU coolers and AIO assembled GPU coolers(by puget) would be a good option
(also maintenance free).

I noticed another strange conclusion:

"What is interesting is that in a dual GPU setup, the Radeon VII doesn't scale quite as nicely as the RTX cards. Two Radeon VII are definitely much faster than one, but where the NVIDIA cards saw about a 35-40% performance bump on our hardest grading tests with TNR, the Radeon VII only saw about a 22% improvement."

Looking at Basic + 4 Power Window + 3 OpenFX + TNR (where it makes sense to use multiple GPU's)

8k.red full

Radeon 7 15 fps
RTX2080 9 fps
2x Radeon 7 17 fps
2x RTX2080 9 fps

Long GOP relies on CPU-power when the hardware acceleration in the GPU is not supported by the software (davinci in this case).

Your conclusion could also have been that on average 1x radeon 7 has about the same speed as 2x RTX2080 (Looking at Basic + 4 Power Window + 3 OpenFX + TNR).

and another one:

"But even if you were somehow able to get three or four Radeon VII cards in a system without them overheating, our guess is that NVIDIA would end up taking over in terms of raw price-to-performance once you got up to a triple GPU setup."


2x RTX2080 has on average the same speed as 1 radeon 7 while costing twice as much
2x RTX2080ti is $2400 MSRP where 2x radeon 7 is $1400 MSRP
2x RTX2080ti is $2400 MSRP where 3x radeon 7 is $2100 MSRP
1x RTX Titan X is $2500 MSRP where 3x radeon 7 is $2100 MSRP.

How are you planning to achieve that?

You did a comparison a while ago with GTX 1080 Ti 1x-4x and Titan V 1x-4x in DaVinci Resolve 14 with 3 different CPU setups, and from the scaling in that it's easy to see that it is often useless to use more than 2 GPUs.


DaVinci Resolve improves every year, but that part of the engine had no upgrade going from 14 to 15.

As much as I value your articles (yes, I check them on a regular basis) and insights, you have some confusing conclusions.

Posted on 2019-03-14 19:51:14
Jose Santos

Hi, thanks for another great review. I'm building my own PC; unfortunately I live outside the US and cannot order one of your amazing workstations. While my budget is definitely not small, I was planning on getting 1x 2080 Ti, but with the information in this article, for the same price I could buy 2 Vega VII instead and have better performance. I'm a colorist and the main purpose of the workstation would be color grading.

Would you recommend the dual Vega VII instead of the single 2080 Ti? (For the rest of the build I am basically copying your Resolve workstation configuration.)

Posted on 2019-03-17 09:42:15

Yes, dual Vega VII should be much better than a single RTX 2080 Ti. The only thing you need to be careful of is that you have enough airflow around the cards because they run really hot. At least 1 space between the cards and a side fan blowing over them should be sufficient, but you may want to have that side fan PWM controlled so it can ramp up under load. We are still working on the full qualification to add the Radeon VII to our product line (deciding we want to carry it is just step one, then we have to make sure it is reliable and work out cooling/power concerns), so I unfortunately can't give you much more detail than that.

Posted on 2019-03-18 16:17:18
Jose Santos

Thanks so much for your answer! If I could bother you some more: like I said, I am basically replicating your Resolve workstation, but I've come across another interesting motherboard, the MSI MEG Creation. My question is, why did Puget Systems decide to go with the Designare EX from Gigabyte? Did you test the MSI Creation?

Posted on 2019-03-18 17:20:16

We actually used the X399 version of the MSI MEG CREATION for a bit and it was a fine board, so I'm sure the X299 is good as well. The main reason we use Gigabyte these days is because they seem to be among the most stable, have the best direct engineering/support/RMA for us, and Gigabyte by far has the best track record for Thunderbolt actually working properly. Thunderbolt is pretty important for us since a significant portion of our customers either need or want it, which is why we like the Designare EX in particular - it has Thunderbolt 3.

If you don't need Thunderbolt, the MSI MEG CREATION should work just fine since most of the benefits we see from Gigabyte are not things that you as an individual consumer would be able to take advantage of. I still feel like Gigabyte is one of the better brands currently available for motherboards, but I also don't think you are going to get yourself in any trouble by using MSI.

Posted on 2019-03-18 17:26:52
Jose Santos

wow thanks for answering so quickly. I definitely need and will make use of thunderbolt! Maybe I'll go for the gigabyte then! Thanks again!

Posted on 2019-03-18 17:29:09

When you say dual Vega VII, this means you enabled crossfire?
I think I've read it wouldn't be supported.

Posted on 2019-03-23 17:11:21

No, you don't want to have the GPUs in SLI/Crossfire for Resolve or other apps that use the cards for raw compute. It either does nothing, or can cause it to bug out. For example, Resolve will only see one GPU if you are in SLI/Crossfire.

Posted on 2019-03-23 18:17:08

So I simply connect two GPUs (Radeon VII) to my motherboard, then in Resolve select both cards under Configuration, and that's it?
I currently have a Quadro P4000, and hoped that adding two Radeon VII instead would help with my 1080p timeline, which starts to be sluggish when an NR node is added.

Posted on 2019-03-23 18:31:30

Yep! Do note that mixing NVIDIA and AMD cards may cause driver conflicts. Resolve itself should be fine (although I would deselect the Quadro in Resolve and only use the Radeon VII cards for compute), but anytime we've had a customer do a setup like that it has resulted in issues. Usually fixable issues, but Windows Update pushing drivers may mean you will have to manually uninstall, clean, and reinstall drivers randomly. Also, other apps like Photoshop tend to not like having a mix of GPU brands/models, which can result in them not launching properly.

Posted on 2019-03-23 18:42:06

Thank you so much. My plan is to sell P4000 and use 2 Radeon VII instead.
I was just wondering if I would boost Resolve performance by simply adding a second Radeon VII, or if it would be a waste of money and Resolve would ignore it.

Posted on 2019-03-23 18:46:56
Misha Engel

The Radeon VII has around 3 times more compute power and 4 times more memory bandwidth than the P4000, I would start with 1 if I were you.
The second one doesn't bring that much extra.

Posted on 2019-03-23 20:53:25

Keep in mind that theoretical performance doesn't typically line up with the real-world performance in applications like Resolve. The Quadro P4000 performs a bit under the GTX 1070 which scored about 550 in one of our recent Resolve benchmarks: https://www.pugetsystems.co... . I would estimate that a single Radeon VII will end up being about 75% faster in Resolve versus mirekti's current P4000. Two would be about 85% faster. So not quite double, but starting to get pretty close to it.

Going up from 8GB to 16GB of VRAM will be a nice extra benefit, but since he is just working with 1080p and not currently running out of VRAM (he would be getting errors if he was), that is more of a future-proofing kind of thing rather than something that will immediately benefit his workflow.

Posted on 2019-03-23 21:03:01

Thanks for this review and comparison. There are plenty of game benchmark related reviews out there, and nothing that looks at Davinci Resolve in any depth until this article came out. I really appreciate this as I now know this is the card I have been waiting several years to upgrade to from my gtx1070 and those GPU Memory Full warnings I keep getting while grading anything 4K and up. The Nvidia cards with 8GB and 11GB just don't cut it for color grading outside of basic adjustments. Hats off to AMD for offering such a beast of a card with enough VRAM for 8K editing and grading at such a reasonable price.

Posted on 2019-03-22 11:19:03
Boris D.

This is something that isn't talked about... For enthusiasts who don't have the money for dual RTX 2080 Tis, the Radeon VII gives more VRAM at an incredible price point.

Posted on 2019-03-22 11:49:17

For sure. This is true even for professionals who want to spend their hard earned cash on actually capturing the footage and not just on systems for editing and grading it :)

Posted on 2019-03-22 12:08:29

There is actually an issue that makes the dual Radeon VII results incorrect, and it requires a fix from both AMD and possibly DaVinci Resolve (Blackmagic).

When using dual Radeon VIIs, DaVinci Resolve uses the main GPU (the one the monitors are plugged into) more heavily. So if you are using 2x monitors - in my case a 5K LG ultrawide and a 4K panel in vertical (portrait) mode, both plugged into GPU 1 - Resolve will mainly use just that card for the majority of its processing, and because the bandwidth of the Radeon VII is so fast, Resolve gets little to no chance to use the bandwidth of the second GPU.

In order to fix this and get an even amount of processing on both GPUs, each one must have a monitor plugged into it and DaVinci Resolve must run in its dual-monitor UI setup. This way Resolve splits the processing onto both GPUs.

This is the reason for the poor scaling of 2x Radeon VII in Resolve: the card is just too fast (even 2x 2080 Ti is too fast). Upon breaking the 800GB/s bandwidth mark, Resolve seems to get confused about what to do with the second GPU.

1x Radeon VII is good enough for most workflows; the time you really benefit from the second is with heavy TNR and nodes, but then Resolve has no idea how to split the workload onto GPU 2, 3... Hence the massive diminishing returns we are seeing. I personally see this as an issue in Resolve - bandwidth of this level was unknown until now on a single GPU.

I would love to see a fix for this from Resolve and AMD together, and hopefully (fingers crossed) BM will start to implement AMD's hardware encoding and decoding in Resolve.

The Radeon VII is on par in Resolve's native render mode, but the GeForce wins when CUDA is turned on.

Intel is about to join the party, so BM needs to stop focusing support only on NVIDIA hardware acceleration. AMD has its own hardware acceleration and Intel will also follow suit. OpenCL is what both AMD and Intel are on; if their hardware acceleration were also included, we would start to know if CUDA is as great as it should be.

Posted on 2019-03-25 12:11:49

Nice findings- thanks!
Also, I see that the stock issues are solved now - at least on Newegg, all Radeon VII are in stock and discounts just started - PowerColor's Radeon VII starts at $679; it's hard to beat that value.
Let's hope the Radeon Pro drivers will be ready soon and will improve the performance/compatibility/stability of the card for pro programs.
P.S. Just out of curiosity: does OCing the memory to ~1200MHz improve performance, or is the 1TB/s already under-utilized?

Posted on 2019-03-25 14:04:33

Yes, it brings it to about 1.2TB/s if you overclock the memory to 1200MHz, which is insane. I have not been able to get the GPU above 85% usage on the Radeon VII; I max out the 16GB of VRAM with Fusion comps, NR, and grades on compressed media (H.264 4K in a 6K timeline for upscaling) just to push the GPU. Still, average usage is 50-60% on a 1hr+ timeline.

To answer the question: do overclock the memory. The faster the GPU can move data in and out, the smoother the playback and editing, and it will run silent.

Posted on 2019-03-25 16:20:31

Incredible findings, I hope they do a re-test.

Posted on 2019-06-13 00:07:25

So, the 750 euro Radeon VII is faster in Resolve than any Quadro, and also faster than the 2600 euro Quadro RTX 5000 that has the same amount of ram (16gb).

Posted on 2019-03-26 08:44:39

Comparing the Radeon VII to the Quadro RTX 5000 isn't really an apples-to-apples comparison and neither would be comparing the Radeon VII to the AMD Radeon Pro WX 9100 16GB. Quadro & Radeon Pro are not intended for the average consumer, but rather the professional market where cost is secondary to reliability or where specialty features like 10-bit display support, ECC VRAM, or even larger VRAM capacities are needed. If you don't need any of that, then the Quadro/Radeon Pro cards are not necessary and you will get way more for your money with a GeForce or Radeon card.

Different product lines have very different use-cases and goals, and you really can't look at just the raw price, performance, and VRAM capacity to determine whether a specific card is worth the cost or not.

Posted on 2019-03-26 17:21:24
René Gibson

Hey, thanks for this! I've been struggling with Mojave and NVIDIA, then switched officially to AMD with the WX 9100. But I have been a bit disappointed with performance, mostly in Resolve and AE, especially considering the price of the WX 9100. I'm curious if you think the Radeon VII (once fully supported on 10.14.4 - please let me know if this is now stable?) will outperform the WX 9100. Based on these charts it seems like it would...

Posted on 2019-03-28 04:47:14

Definitely can't help with the MacOS compatibility, but in terms of raw performance the Radeon VII should easily out-perform the WX9100. They are really designed for different use-cases though, with the WX9100 supporting 10-bit displays and having a higher reliability.

Just as a note, Ae doesn't really use the GPU much since pretty much anything real-world that you do is going to be CPU-limited. So no matter what GPU you use, you likely won't see much of a difference there.

Posted on 2019-03-28 17:15:59
René Gibson

Good to know. Are you saying the Radeon VII doesn't support 10-bit? That will be a factor for me for sure. Do you think it would be possible to run the WX 9100 and VII together via two eGPUs?

As for AE definitely something I had to learn the hard way, It will be nice when software design catches up to the hardware flux that's happening.

Posted on 2019-03-28 18:41:21

No, the normal Radeon line doesn't support 10-bit out with OpenGL applications - just like how NVIDIA's GeForce line doesn't. You can still work with 10-bit footage no problem, but actually sending out a 10-bit signal to a monitor requires either a Radeon Pro or Quadro card. Alternatively, you can use something like a Blackmagic Decklink card to power a dedicated display. The color accuracy is better than what you can get through a GPU, so people doing dedicated color work tend to use that kind of a card.

You can mix and match GPUs in Resolve and it tends to work OK - although you are limited to the smallest amount of VRAM on the cards (not a concern for you since both have 16GB). The bigger issue is general stability. I can't speak for MacOS, but at least on Windows things like automatic updates tend to break things, requiring manual re-installation of the drivers at random times. Other applications like Photoshop tend to get confused as well and can just break. So in general, we recommend against mixing different product lines if you can at all help it.

Posted on 2019-03-28 18:51:59
Misha Engel

"AMD provides the same driver support to Radeon VII that is available on other Radeon consumer hardware as listed in the table below. To be specific, workstation performance, application certifications, and features do not apply to Radeon consumer hardware when using Radeon Pro Software. The explicit purpose of our 'One Driver' program is to simplify implementation for businesses that use Radeon consumer and Radeon Pro products across their install base."


Posted on 2019-03-28 19:27:11

This is the image that was linked (it was broken for some reason): https://www.techpowerup.com...

That doesn't talk about 10-bit display in OpenGL applications, just performance, compatibility, etc. Although AMD probably considers 10-bit display support to be a part of the "Workstation Feature Set" that is only for the Radeon Pro line. Also, I really wouldn't believe anything you might find on the web unless it is an official statement from AMD that it supports "10-bit out in OpenGL" or something along those lines. Most of the info you will find is from people who don't understand the difference between DirectX and OpenGL apps, don't realize that many apps let you enable "10-bit support" even if it doesn't actually work, or don't get that you can work with 10-bit (or even 16-bit) footage in applications like Resolve or Premiere Pro without the card being able to display that color depth on your display.

Posted on 2019-03-28 19:40:09

Maybe the Radeon VII Pro driver will be out during "AMD Radeon Pro at NAB 2019".

Posted on 2019-03-29 12:47:35
Alvaro Robles

First of all, thank you for this one and for all the amazing reviews/analysis that you do.
My question follows up a little bit on what's been said: out of the AMD Radeon VII vs the Pro WX 9100, which one would you recommend? I mean, you can almost buy two Radeon VIIs for the price of one WX 9100.
Also, any chance of adding the WX 9100 to the benchmarks? It's the theoretical top-of-the-line GPU AMD makes for the professional market, right?

Posted on 2019-04-10 06:08:15
Misha Engel

There is not much benefit in using 2 Radeon VIIs over just one. I will probably ghetto-mod them with 2 Noctua NF-A12x25 PWM fans to give them better cooling and quiet them down.

The WX9100 is about the same speed as the RX Vega 64 and the Vega FE.

Posted on 2019-04-13 01:12:06
Alvaro Robles

Thank you for your reply.
Do you mean the Radeon VII is faster than the WX9100? If that's the case, I guess it's better to get the Radeon VII for DaVinci, which costs half the price.

Posted on 2019-04-13 01:21:06
Misha Engel

Yes, and 1 Radeon VII is fast enough for realtime editing at full-res premium in Resolve. You'd be better off spending the extra money on a DeckLink card and a good monitor for color correction (even if it's only 1080p, color is more important than resolution for color correction) than on a second GPU.

Posted on 2019-04-13 15:01:34

So is Blackmagic RAW still an issue for the Radeon VII? I was considering getting one until I saw that.

Posted on 2019-04-16 16:42:01

I don't believe it has been fixed yet. We're also still having stability problems (mostly when using dual displays) with the Radeon VII, which is why we are not offering it on our systems quite yet.

Posted on 2019-04-16 16:43:52

Darn! Thanks a ton for the update! :)

Posted on 2019-04-16 17:25:23

Do you think that's a Resolve issue that could potentially be fixed as v16 comes out of beta or is it an AMD driver issue?

Posted on 2019-04-28 05:39:16
Misha Engel

It was an AMD driver issue, already solved.

Posted on 2019-05-03 17:22:56
Thatcher Kelley

I just upgraded an old system to the Radeon VII from a Titan Black. In a 4K project on the Titan, it didn't take many nodes for the playback speed to slow down, but with very little correction I was able to get 4K playback out of my 4K Mini Monitor.
When I switched to the Radeon VII I got much more power than the Titan with the Mini Monitor disabled, but as soon as I enabled the Mini Monitor and tried 4K playback, the FPS took a big hit, and even with zero node corrections I still don't get realtime playback. If I switch down to HD I get full-speed playback.

Has your testing with the Radeon VII had any results like that, where 4K playback is great with the Mini Monitor disabled, but with it enabled playback slows to a crawl (only in 4K)?

Also wondering what driver version you're currently on and if you've found a good stable driver yet.

Posted on 2019-04-21 18:51:43

We haven't done testing yet looking at performance with multiple displays (including with video monitoring cards), but that is on the to-do list for us. As for stable drivers for the Radeon VII, the biggest issue we currently have is with bluescreens when using multiple displays. Every driver we've tried has this issue, however, so we aren't sure if it is a driver, firmware, or something else that is causing it. We've reported it to AMD, but until they can get us a fix we aren't offering the Radeon VII in our product line.

Posted on 2019-04-25 04:40:08
Connor Tarabocchia

Do you know how much of a difference the 16GB of VRAM actually makes? It would be interesting to see the performance difference between the consumer Vega cards, the Radeon VII, and the Vega Frontier Edition. Considering the Frontier Edition also has 16GB of VRAM, it would be an interesting middle ground between the Vega 64 and Radeon VII, and we could see how much of a difference the raw power of the VII gives over the extra VRAM. At this point the Frontier's pro drivers are pretty mature (the gaming drivers haven't gotten much love). Last year I ended up picking one up with an eBay coupon for 425 USD, and now they sell for that normally on eBay, so I could definitely see it being a great option for a more budget-oriented system working with higher resolutions. Otherwise, for lower resolutions, a Vega 56/64 seems like a great deal for DaVinci Resolve when looking at them used on eBay. At this time a Vega 56 costs 260 USD shipped on eBay and offers close to identical performance to the Vega 64 once you consider the lower power consumption allowing it to boost higher than the 64.

Posted on 2019-04-22 17:51:12

VRAM (and RAM for that matter) is almost purely about simply having enough. If you have more than you need, that doesn't make anything faster, but not having enough can cause errors or other problems. With Resolve, it usually comes down to how much noise reduction you are doing and the resolution of your media. Even 4K media can need quite a bit of VRAM if you are doing lots of noise reduction, while 8K can need quite a bit even if you only do moderate noise reduction.
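To put rough numbers on why resolution and noise reduction drive VRAM use, here is a hypothetical back-of-envelope sketch in Python. This is not how Resolve actually allocates memory (real usage involves caches, per-node buffers, and temporal NR windows); it only shows how per-frame size scales with resolution:

```python
# Hypothetical sketch: bytes for one uncompressed frame, assuming a
# 32-bit float RGBA working pipeline (4 channels x 4 bytes each).
# Resolve's real memory behavior is more complex; this shows the scaling only.

def frame_bytes(width, height, channels=4, bytes_per_channel=4):
    """Bytes for one uncompressed frame at the given precision."""
    return width * height * channels * bytes_per_channel

MB = 1000 * 1000
print(frame_bytes(3840, 2160) / MB)  # 4K UHD: ~133 MB per frame
print(frame_bytes(7680, 4320) / MB)  # 8K: ~531 MB per frame

# Temporal noise reduction that holds several frames in flight multiplies
# this - e.g. a 5-frame window at 8K is already multiple gigabytes of
# buffers, which is where 8GB cards start running out.
```

At 8K, even a handful of working buffers eats a large fraction of an 8GB card, while 16GB leaves comfortable headroom.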

Posted on 2019-04-25 04:37:54
Misha Engel

We have 3 Frontiers. At the same clock speed they are about the same speed as the consumer RX Vega 64.
With heavy noise reduction it's nice to have more than 8GB of VRAM (BRAW 4.6K and above).
We borrowed a Radeon VII for testing, and it's a lot faster with noise reduction than the Frontier (in Resolve).
All GPUs are limited in noise reduction by their memory bandwidth, but I got the impression that the Radeon VII is not.

Posted on 2019-05-03 17:31:14
Elliot Hoffman

Given the time since this article was written, has the BRAW issue been resolved on the Radeon VII? And can you also describe the nature of the issue? If there is still a problem, then is it safe to assume that for a single card, the Nvidia GeForce 2080Ti performs best of all the single GPUs with Resolve?

Posted on 2019-05-12 21:21:17

I've seen some people reporting that a newer GPU driver fixed the Radeon VII issue, but to be honest I haven't verified it. We are still having unrelated stability issues with that card, so we are not offering it in our workstations until those issues are resolved.

I would say overall that the NVIDIA cards are more stable than AMD at the moment for Resolve. Specific models don't matter as much, although I will tell you that the 2080 Ti is easily the most common GPU we use for our Resolve workstations and it has always been a solid card.

Posted on 2019-05-13 17:41:07
Jeremy Hilderbrand

For 4K encoding and editing, I am curious how Resolve would like an older GPU + a Tesla K80 accelerator, versus a single, monstrous 16GB VII. A refurbished K80 can be grabbed for $450 on Newegg today. How would Resolve do when it has a weaker GPU that is still able to spit out 4K to monitors? Depending on how the software is written, shouldn't it have all the GPU power it needs from the accelerator, letting the weaker card display the results that were calculated by the mightier part? (Pardon my ignorance, but I am preparing to build a 4K encoding/editing machine, so I'm doing a lot of research.)

Posted on 2019-06-07 10:09:37

You can set which GPU Resolve will use for processing in "Preferences -> Memory and GPU -> GPU Configuration". You have to change GPU selection to manual, then just select the cards you want to do the heavy lifting. That said, just running the GUI really doesn't take much processing, so you will almost always get better performance from a single, slightly more expensive card than from trying to split the work between a low-end card for the GUI and another card for processing.

I definitely wouldn't use a Tesla K80 for Resolve. It might work (I've never used a Tesla for Resolve), but it is a 5-year-old card at this point. In terms of raw horsepower, it is only going to be roughly on par with an RTX 2070 or Vega 56. Driver optimization makes a big difference, but that is only going to make a newer card even better relative to an old Tesla. You should get much better performance out of a newer GeForce or Radeon card - although if you are going AMD, I would wait for the new cards rather than use the Radeon VII, as we've had quite a few stability issues with that card.

Posted on 2019-06-07 18:37:50
Jeremy Hilderbrand

Thank you so much for the super-fast response, and your helpfulness in general. Based on your review and benchmarking of the 16GB VII, it seems the ideal candidate. What do you mean by "stability issues"? (That could run the gamut, from bad drivers to software interactions to RAM problems to power supply problems to cooling problems to . . . ) Did the problems occur when using a workstation running only one VII card? Regarding upcoming AMD super GPUs: when do you expect them to be both out and actually purchasable (commonly in stock on Newegg, for example)?

Posted on 2019-06-08 08:03:24

I believe most of the issues we've had come from bluescreens with multiple 4K monitors. Something else to note, however: due to the cooler design of the Radeon VII and its power draw, I would never use more than one Radeon VII unless you have some crazy airflow through the system. Cards like that (and the reference GeForce RTX cards) don't vent any of the heat directly outside the system, so you are relying on chassis fans to get rid of the heat. If you don't have enough airflow, the cards - and everything else in the system - are going to cook and likely start throttling.

Last I heard, the new AMD cards are slated for sometime in Q3, but no firm date has been announced. I believe there is supposed to be more information at E3 later today, however, so we'll probably get a more accurate timeline then. Of course, no idea if the new cards will be faster or slower than the Radeon VII - that card always felt like a beta product to me, so it is possible the new cards won't have the same price/performance ratio.

Posted on 2019-06-10 17:04:43
Jeremy Hilderbrand

I plan to run the powerful GPU (Radeon VII or whatever) in an E-ATX, rackmounted Rosewill 4U case with 6x 120mm case fans + 2x 80mm case fans, with a dual Socket 2011-3 motherboard, 2x E5-2698 v3 CPUs, either 128GB or 256GB of DDR4-2400, a PCIe x4 1TB OS/editing drive, and 4x 2TB SATA data drives. Although I don't foresee having any cooling problems, if that were to become a problem, I'd just install a water cooling loop with dual 120mm x 360mm radiators and go ahead and cool both CPUs and the GPU with water, making the whole thing quieter anyway. Am I heading in the right direction, do you think?

Posted on 2019-06-16 06:08:55

Maybe my question is a bit off topic.

I set up a hackintosh with a Z390 + i9-9900K + 64GB Corsair 3000 + RX 580 + M.2 Pro drive.

It turns out that I have tried everything to improve the loading of large photo files; a 1.5GB PSD takes on average 10s. Before, I had an i7 and it took more time, around 20-25s.

I already tested with RAID, including the M.2 in RAID, and I cannot get below those 10s of loading.

Is the problem the video card or the processor?

Or am I already at the limit? The 100% loading I'm talking about is in LR.

Could a Radeon VII improve it?

Posted on 2019-06-08 15:48:37

Highly doubtful it is GPU limited - LR really doesn't use the GPU all that much. Your best bet is to watch your system resource monitor to see if there is a bottleneck - specifically, keep track of per-core CPU load. It could also be a combination of factors including CPU, RAM, storage, or simple program limitations.
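For watching per-core load, Task Manager (Windows) or Activity Monitor (macOS) is the usual route. As an illustration of what "per-core load" means, here is a minimal, hypothetical Linux-only sketch that diffs the CPU tick counters in /proc/stat between two snapshots (the field layout follows the proc(5) format; on Windows/macOS use the built-in monitors instead):

```python
# Minimal sketch: per-core CPU busy percentage from /proc/stat (Linux-only).
# A core pinned near 100% while others idle suggests a single-threaded bottleneck.

import time

def parse_cpu_lines(text):
    """Map core name -> (busy_ticks, total_ticks) from /proc/stat content."""
    cores = {}
    for line in text.splitlines():
        # Per-core lines look like "cpu0 ...", "cpu1 ..."; skip the aggregate "cpu" line.
        if line.startswith("cpu") and line[3:4].isdigit():
            fields = line.split()
            ticks = [int(x) for x in fields[1:]]
            idle = ticks[3] + (ticks[4] if len(ticks) > 4 else 0)  # idle + iowait
            cores[fields[0]] = (sum(ticks) - idle, sum(ticks))
    return cores

def per_core_percent(before, after):
    """Busy percentage per core between two snapshots."""
    out = {}
    for name, (b_busy, b_total) in before.items():
        busy = after[name][0] - b_busy
        total = after[name][1] - b_total
        out[name] = 100.0 * busy / total if total else 0.0
    return out

if __name__ == "__main__":
    with open("/proc/stat") as f:
        first = parse_cpu_lines(f.read())
    time.sleep(1)
    with open("/proc/stat") as f:
        second = parse_cpu_lines(f.read())
    for core, pct in sorted(per_core_percent(first, second).items()):
        print(f"{core}: {pct:.0f}%")
```

If one core sits at 100% during the PSD load while the rest are idle, the operation is serialized on the CPU and a faster GPU won't help.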

Posted on 2019-06-10 17:06:36