Read this article at https://www.pugetsystems.com/guides/1242

V-Ray: NVIDIA GeForce RTX 2070, 2080, & 2080 Ti GPU Rendering Performance

Written on November 16, 2018 by William George

Introduction

V-Ray, from Chaos Group, is made up of a pair of rendering engines: one that uses CPUs (processors) and another that focuses on GPUs (video cards). We have already tested modern CPUs in V-Ray, using the benchmark utility Chaos Group makes publicly available, but since GPUs can be used for rendering as well, it is important to look at how various video cards perform too.

Recently, NVIDIA released a new graphics architecture (Turing) with both mainstream GeForce and professional-grade Quadro video card models. Since GeForce cards are much more affordable, and multiple GPUs are often desired, they are popular for use with GPU based rendering. We are going to take a look at how the latest GeForce RTX series cards perform individually in V-Ray Benchmark.

Test Setup

The current version of the V-Ray Benchmark (1.0.8) tests the CPU and GPU(s) separately, even though the latest version of V-Ray Next GPU itself can use both the CPU and video cards at the same time. Because of the benchmark's limitation, it doesn't really matter what CPU we use to test the new GeForce cards - but still, we decided to stick with the same platform for all GPUs to ensure a fair test.

Full details of the test platform, with links to the various part pages, are available at the article URL above.

Individual GPU Benchmark Results

Here are the results, in seconds, from V-Ray Benchmark 1.0.8 (using V-Ray 3.57.01) for the different video cards we tested. The new GeForce RTX series cards are in a darker shade of green, to make it easy to pick them out:

V-Ray Benchmark 1.0.8 GPU Comparison with GeForce RTX 2070, 2080, and 2080 Ti

Analysis

Surprisingly, the RTX 2070 actually outperformed the more expensive 2080 - just by a hair, but I had expected it to be slower. To verify this result I looked up several systems we have sold with the 2070 and 2080 in recent weeks, and sure enough: the RTX 2080 came in at 68 to 70 seconds, while the 2070 ranged from 67 to 69 seconds. There is some variance in there from one system to another because of things like the CPU and other specs, as well as different variants of these video cards which may be clocked slightly higher or lower... but the trend of the 2070 being as fast or slightly faster than the 2080 is solid. I am not sure why, though, and it flies in the face of the relative specs of the two cards. It makes me want to go back and test the 2070 in other GPU rendering engines like OctaneRender and Redshift, since we did our last round of testing with the RTX 2080 and 2080 Ti on those applications before the 2070 was available.
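Since the benchmark reports render time in seconds (lower is better), the easiest way to compare cards is as relative speed, where speed is proportional to 1/time. Here is a minimal sketch using the midpoints of the ranges quoted above; the numbers are illustrative, not a formal re-test:

```python
# Convert V-Ray Benchmark render times (seconds, lower is better)
# into relative speed versus a baseline card.
def relative_speed(times, baseline):
    """Return each card's speed as a percentage of the baseline card."""
    base = times[baseline]
    return {card: round(100 * base / t, 1) for card, t in times.items()}

# Midpoints of the 68-70 s and 67-69 s ranges observed above.
times = {"RTX 2080": 69.0, "RTX 2070": 68.0}
print(relative_speed(times, "RTX 2080"))
# → {'RTX 2080': 100.0, 'RTX 2070': 101.5}
```

A one-second gap on a roughly 68-second render works out to only about 1.5% either way, which is why card-to-card variance between systems can blur the ranking.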

Beyond that strange outlier, the rest of the results line up pretty logically. The RTX 2080 Ti outperforms the similarly-priced Titan Xp, and the 2070 & 2080 are basically tied with the older GTX 1080 Ti. There is not much reason to get older video cards now, though in some places they do offer more VRAM. Topping it off, Titan V is still the fastest single card - at least on this version of V-Ray Benchmark.

I bring up the version of V-Ray because the engine used in the current benchmark is rather dated now. It is based on V-Ray 3.57.01, but both V-Ray 3.6 and V-Ray Next have been out for some time now. Chaos Group has been saying for a while that an updated benchmark is in the works which will demonstrate the newer features and performance of V-Ray Next, but I have not seen an ETA. Additionally, these new GeForce RTX cards feature hardware-based ray tracing cores - so if V-Ray is updated to utilize those in the future it will swing performance even more in favor of the RTX cards, likely pushing them ahead of the Titan V (which lacks RT cores). Newer versions of V-Ray may alter how video cards are utilized, potentially putting the RTX 2080 back in front of the 2070.

Dual GPU Benchmark Results

To further investigate the situation with the RTX 2070 outperforming what should be a faster 2080, we re-ran the V-Ray Benchmark on another platform with each of the three GeForce RTX series cards - both individually, as before, and in pairs. This gave us yet another point of reference on individual card performance along with a chance to see if, perhaps, the 2070 was great as a single card but wouldn't scale as well with multiple GPUs. Here are the results, with dual GPU configurations highlighted in a darker shade of green:

V-Ray Benchmark 1.0.8 GPU Comparison with Single vs Dual GeForce RTX 2070, 2080, and 2080 Ti

Both as a single card and in a pair, the RTX 2070 outperformed the 2080 - against all reason that I can come up with. The less expensive, less powerful 2070 came in two seconds faster in both configurations. All three RTX series cards scaled perfectly from 1 to 2 GPUs, with render time being cut almost exactly in half in each case. Since the 2070 and 2080 both have 8GB of memory as well, for versions of V-Ray that match this benchmark there seems to be no reason to opt for the more expensive card. Please note, as discussed above, that the V-Ray Benchmark is older now - newer versions of V-Ray, like 3.6 or Next, may behave differently.
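The "render time cut almost exactly in half" observation above can be quantified as parallel efficiency: single-card time divided by (number of cards × multi-card time). A quick sketch, with hypothetical times standing in for the chart values:

```python
def scaling_efficiency(t_single, t_multi, n_gpus):
    """Parallel efficiency: 1.0 means render time dropped by exactly 1/n."""
    return t_single / (n_gpus * t_multi)

# Hypothetical example: one card takes 68 s, two cards take 35 s.
eff = scaling_efficiency(68.0, 35.0, 2)
print(f"{eff:.0%}")  # → 97%
```

Values close to 100% indicate the engine is keeping both cards fully busy, which matches what the V-Ray Benchmark showed for all three RTX models.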

An interesting side note: on the 2080 and 2080 Ti, we ran the dual GPU tests both with and without SLI / NVLink enabled. In most programs where we have tried this, and where NVLink support was not present, performance actually went down - but in the V-Ray Benchmark, it was identical. Both pairs of cards took the exact same amount of time to render whether they were connected via SLI / NVLink or not... almost as if V-Ray just ignored that feature entirely. It could also be evidence that V-Ray is NVLink aware, but in that case I would have expected similar but slightly different performance - rather than exactly the same result, down to the second. Hopefully the next version of the V-Ray Benchmark will show a difference with NVLink, since that tech should be beneficial for rendering. The RTX 2070s do not support NVLink, so that is something which could potentially alter the performance landscape between them in the future.

Conclusion

As of right now, and for versions of V-Ray which mirror the performance of the 1.0.8 benchmark, only three video cards make much sense: the GeForce RTX 2070 8GB, GeForce RTX 2080 Ti 11GB, and Titan V 12GB. Lower-end cards don't save enough money to justify the loss in speed, and other high-end cards are either slower at similar price points or cost more for the same rendering speed.

For the best performance from a single workstation, though, go for a system with multiple video cards!

Recommended Systems for V-Ray

- 1 CPU / 1-2 GPU: Compact
- 1 CPU / 1-4 GPU: Tower
- 2 CPU / 1-4 GPU: Tower
- 1 CPU / 1-4 GPU: 1U Rackmount
Tags: V-Ray, RT, GPU, Rendering, Benchmark, Performance, NVIDIA, GeForce, RTX, 2070, 2080, 2080 Ti, Turing, Video, Card

Thank you for continuing to do this sort of testing. It is invaluable to the community and hard to find elsewhere.

Posted on 2018-11-18 16:12:55
Jose Paulo Caldeira

Hi,

First of all, thank you for the really great articles.

Secondly, sorry for my English. I'm from Brazil, you know, doing my best.. rs

Well, this might be a stupid question, probably it is, but how do you manage to put together two RTX 2070 cards without SLI/NVLink connectors? Is there any other way to do it? I'm in the process of buying a new graphics card, mainly to work with DaVinci but also with the Adobe programs. I was intending to choose the new RTX 2070, but this (no SLI) limitation has left me insecure. I don't have enough money to buy two video cards right now (not even one of the really top ones). However, I feel that I should keep the possibility of a second card open for future upgrades, perhaps choosing the Vega 64, for instance. Not sure that I'm doing the right thing, but if the RTX 2070 does support multi-card systems, I believe my problem is solved.

Posted on 2018-11-19 23:16:37

When doing GPU based rendering you don't need to connect them at all; most GPU rendering engines (V-Ray RT, Octane, Redshift, etc.) will simply use however many compatible GPUs you have installed in the system. In fact, SLI was something you wanted to *avoid* in the past, because it was designed to combine two cards for displaying real-time 3D graphics (usually in games), which is very different from path- or ray-tracing for rendering.

NVLink changes things a bit, since that could potentially be used to combine two cards in order to improve certain aspects of rendering... but it requires not only a physical bridge, but also software support. In the current benchmarks for V-Ray, Octane, and Redshift that does not seem to be present - but with how new NVLink is, I think it is just a matter of time before most GPU rendering engines are updated to support it.

You are correct that the RTX 2070 does not support NVLink, so if / when these renderers add support for it the 2070 will be left out. It can still be used in multi-GPU configurations, though - just without any bridge or link connecting the cards. That is how it is normally done now, and will likely still be the norm for a while. Besides, the main advantage of adding NVLink support for rendering is likely going to be sharing of the video memory - to allow larger scenes to be rendered. That would be a nice feature, certainly, but it isn't going to keep multiple cards from being worthwhile even when not linked together :)
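As a rough illustration of why no bridge is needed: bucket-style GPU renderers treat each card as an independent worker pulling image tiles from a shared queue, so any mix of installed GPUs contributes. A toy sketch of that scheduling idea (not V-Ray's actual code; the device names and speeds are made up):

```python
def distribute_buckets(num_buckets, gpus):
    """Assign render buckets greedily to whichever GPU is free soonest,
    simulating independent cards pulling tiles from a shared queue.
    `gpus` maps a device name to a relative speed factor."""
    finish_time = {g: 0.0 for g in gpus}   # when each GPU next becomes idle
    assignment = {g: [] for g in gpus}
    for bucket in range(num_buckets):
        # Pick the GPU that will be idle first; ties broken by name.
        g = min(gpus, key=lambda d: (finish_time[d], d))
        assignment[g].append(bucket)
        finish_time[g] += 1.0 / gpus[g]    # faster GPU -> shorter per-bucket time
    return assignment

# Two hypothetical cards of equal speed split the work evenly.
gpus = {"GPU0": 1.0, "GPU1": 1.0}
work = distribute_buckets(8, gpus)
print(len(work["GPU0"]), len(work["GPU1"]))  # → 4 4
```

Because each card only ever needs its own copy of the scene and a list of tiles, no inter-card link is required; that is also why VRAM is not pooled without NVLink support.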

Posted on 2018-11-19 23:23:03
Jose Paulo Caldeira

Thanks!

I see that I still have a lot to understand.

One last question, please. Is this also true for DaVinci Resolve?

If I install two graphics cards, despite having no SLI bridge or NVLink, will they improve DaVinci's overall performance, or just rendering? Or neither?

Posted on 2018-11-20 00:35:58

Yes - in fact, putting two cards in SLI or NVLink will *reduce* performance in Resolve. It will cause that application to only "see" one card (the primary one, which the monitor is attached to) and be unable to utilize the other. Matt did an article about that which you may want to look over: https://www.pugetsystems.co...

Posted on 2018-11-20 17:11:44
TheVeryFineNerd

Strange how the 2070 outperforms the 2080. Would really like to see an updated Octane and Redshift test. And thanks for the great work!

Posted on 2018-11-20 09:13:19

I did a quick test on that after publishing this article, and the 2070 is *not* faster than the 2080 in either of those renderers. I'm still trying to decide how to present that info, since an article just about that one isolated topic seems too focused.

Posted on 2018-11-21 17:35:40
Mattias Graham

Well, this definitely makes me feel validated in going for the 2070! Thanks for testing this tech in real workstation applications, it's really cool.

Posted on 2018-11-21 02:55:56
Kassem

Aren't RT cores supposed to extensively accelerate rendering in V-Ray? I guess the improvements we see in these results come mostly from the additional CUDA cores in these new cards. I don't know, I thought RTX would perform better :(

Posted on 2018-11-22 05:05:47
Cordell Hughes

Ray tracing cores are new and supposedly designed to process ray-traced data primarily, if not exclusively. The holdup is that NVIDIA's CUDA programming language works with NVIDIA products only.

NVIDIA is the first company known to develop hardware exclusively for ray tracing, and it uses the CUDA language only, requiring all hardware and software companies to conform to their standard.

If this were AMD Radeon with ray tracing cores, AMD would use the OpenCL and Vulkan languages, which are public domain and free to use, making their RT cores compatible with all CPUs, GPUs, software, hardware, and operating systems from day one.

Posted on 2018-11-22 07:52:14
smn_lrt

In fact, if you inform yourself just a bit before sentencing:
YOU CAN USE THEM IN VULKAN
through the NVIDIA extensions. Why an extension/library? (Elementary.) Because other accelerators are not capable of doing it, so it cannot be a standard now (maybe in the future)... so it needs to be in a library, like every library shared by peripheral producers for every programming language.
A little article with some code snippets:
https://devblogs.nvidia.com...

Posted on 2018-12-11 10:31:54
Cordell Hughes

I'm waiting for a hardware statistics program like GPU-Z, or something similar, that shows ray tracing core usage. Until then it's hard to know how much the RT cores, much less the tensor cores, are being utilized at present.

Posted on 2018-12-11 21:31:46

When support is added, RT cores have the potential to greatly speed up ray-tracing in GPU based render engines. The current V-Ray benchmark does not support them, though, and I don't think V-Ray 3.6 or Next do either - at least not yet, so far as I am aware. I do expect a future version or update to add support, though, and hopefully at some point Chaos Group will put out a more modern benchmark that includes that functionality.

Posted on 2018-11-24 23:39:11
konstantin stavrogin

The 2080 is not actually slower than the 2070 in V-Ray; it's an error of the benchmark. The same error shows the 1070 as faster than the 1080 in all benchmark results on the Chaos Group forum. There is actually a comparison of the 2080 and 2080 Ti vs 1080 Ti cards across 3-5 different scenes on the Chaos Group forum, and it gives a clearer understanding of the performance of the cards. The V-Ray Benchmark can give an idea, but it's not the right tool to evaluate GPUs. In different scenes, CUDA cores perform differently.

Posted on 2018-11-23 04:20:05

Hmm, that is fascinating. I certainly could see it being a side effect of the specific scene that is included in the V-Ray Benchmark, which somehow favors the 2070 over the 2080. Hopefully the next version of V-Ray Benchmark, whenever it comes out, will include a more complex scene which better matches real-world performance... along with an updated version of the V-Ray engine, of course, since the current one in the benchmark is a couple versions old now.

Posted on 2018-11-24 23:58:48
Turing

https://nvidianews.nvidia.c...

https://www.nvidia.com/tita...

Puget, can you please benchmark the NVIDIA TITAN RTX?

Posted on 2018-12-03 13:22:10

We definitely will, when we get some in. That may be a while, though - this card was announced today, but is not yet available for purchase.

Posted on 2018-12-03 16:57:00