Read this article at https://www.pugetsystems.com/guides/1242

V-Ray: NVIDIA GeForce RTX 2070, 2080, & 2080 Ti GPU Rendering Performance

Written on November 16, 2018 by William George


V-Ray, from Chaos Group, is made up of a pair of rendering engines - one that uses the CPU (processor) and another which focuses on GPUs (video cards). We have already tested modern CPUs in V-Ray, using the benchmark utility Chaos Group makes publicly available, but since GPUs can be utilized too, it is important to look at how various video cards perform as well.

Recently, NVIDIA released a new graphics architecture (Turing) with both mainstream GeForce and professional-grade Quadro video card models. Since GeForce cards are much more affordable, and multiple GPUs are often desired, they are popular for use with GPU based rendering. We are going to take a look at how the latest GeForce RTX series cards perform individually in V-Ray Benchmark.

Test Setup

The current version of the V-Ray Benchmark (1.0.8) tests the CPU and GPU(s) separately, even though the latest version of V-Ray Next GPU itself can use both the CPU and video cards at the same time. Because of the benchmark's limitation, it doesn't really matter what CPU we use to test the new GeForce cards - but still, we decided to stick with the same platform for all GPUs to ensure a fair test.

Full details of the test setup, with links to the various part pages, are available on the original article page.

Individual GPU Benchmark Results

Here are the results, in seconds, from V-Ray Benchmark 1.0.8 (using V-Ray 3.57.01) for the different video cards we tested. The new GeForce RTX series cards are in a darker shade of green, to make it easy to pick them out:

V-Ray Benchmark 1.0.8 GPU Comparison with GeForce RTX 2070, 2080, and 2080 Ti


Surprisingly, the RTX 2070 actually outperformed the more expensive 2080 - just by a hair, but I had expected it to be slower. To verify this result I looked up several systems we have sold with the 2070 and 2080 in recent weeks, and sure enough: the RTX 2080 came in at 68 to 70 seconds, while the 2070 ranged from 67 to 69 seconds. There is some variance in there from one system to another because of things like the CPU and other specs, as well as different variants of these video cards which may be clocked slightly higher or lower... but the trend of the 2070 being as fast or slightly faster than the 2080 is solid. I am not sure why, though, and it flies in the face of the relative specs of the two cards. It makes me want to go back and test the 2070 in other GPU rendering engines like OctaneRender and Redshift, since we did our last round of testing with the RTX 2080 and 2080 Ti on those applications before the 2070 was available.
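For a sense of scale, the gap between the two cards is tiny. The snippet below is my own arithmetic (not part of the V-Ray Benchmark): it converts the render-time ranges quoted above into a rough percent difference, using the midpoint of each range.

```python
# Rough comparison of the render times quoted above (lower is better).
# The 67-70 second figures are the ranges reported in the article; the
# percent difference is simple arithmetic, not an official benchmark metric.

def percent_faster(slower_s: float, faster_s: float) -> float:
    """Return how much faster (in %) the lower render time is."""
    return (slower_s - faster_s) / slower_s * 100

# Midpoints of the observed ranges: RTX 2080 = 68-70 s, RTX 2070 = 67-69 s
rtx_2080 = (68 + 70) / 2   # 69.0 s
rtx_2070 = (67 + 69) / 2   # 68.0 s

print(f"RTX 2070 is {percent_faster(rtx_2080, rtx_2070):.1f}% faster")
```

In other words, the 2070's lead is on the order of one percent - real, repeatable across systems, but small.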

Beyond that strange outlier, the rest of the results line up pretty logically. The RTX 2080 Ti outperforms the similarly-priced Titan Xp, and the 2070 & 2080 are basically tied with the older GTX 1080 Ti. There is not much reason to get older video cards now, though in some cases they do offer more VRAM. Topping it off, the Titan V is still the fastest single card - at least on this version of the V-Ray Benchmark.

I bring up the version of V-Ray because the engine used in the current benchmark is rather dated now. It is based on V-Ray 3.57.01, but both V-Ray 3.6 and V-Ray Next have been out for some time now. Chaos Group has been saying for a while that an updated benchmark is in the works which will demonstrate the newer features and performance of V-Ray Next, but I have not seen an ETA. Additionally, these new GeForce RTX cards feature hardware-based ray tracing cores - so if V-Ray is updated to utilize those in the future it will swing performance even more in favor of the RTX cards, likely pushing them ahead of the Titan V (which lacks RT cores). Newer versions of V-Ray may alter how video cards are utilized, potentially putting the RTX 2080 back in front of the 2070.

Dual GPU Benchmark Results

To further investigate the situation with the RTX 2070 outperforming what should be a faster 2080, we re-ran the V-Ray Benchmark on another platform with each of the three GeForce RTX series cards - both individually, as before, and in pairs. This gave us yet another point of reference on individual card performance along with a chance to see if, perhaps, the 2070 was great as a single card but wouldn't scale as well with multiple GPUs. Here are the results, with dual GPU configurations highlighted in a darker shade of green:

V-Ray Benchmark 1.0.8 GPU Comparison with Single vs Dual GeForce RTX 2070, 2080, and 2080 Ti

Both as a single card and in a pair, the RTX 2070 outperformed the 2080 - against all reason that I can come up with. The less expensive, less powerful 2070 came in two seconds faster in both configurations. All three RTX series cards scaled perfectly from 1 to 2 GPUs, with render time being cut almost exactly in half in each case. Since the 2070 and 2080 both have 8GB of memory as well, for versions of V-Ray that match this benchmark there seems to be no reason to opt for the more expensive card. Please note, as discussed above, that the V-Ray Benchmark is older now - newer versions of V-Ray, like 3.6 or Next, may behave differently.
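The "cut almost exactly in half" observation can be expressed as a scaling-efficiency calculation. This is a generic sketch with placeholder timings, not the article's exact measurements:

```python
# Multi-GPU scaling efficiency from render times (illustrative values only;
# the per-card timings below are placeholders, not the article's data).

def scaling_efficiency(single_gpu_s: float, multi_gpu_s: float, n_gpus: int) -> float:
    """1.0 means perfect linear scaling: n GPUs -> 1/n of the render time."""
    return single_gpu_s / (n_gpus * multi_gpu_s)

# e.g. a card that renders in 68 s alone and 34 s in a pair scales perfectly
eff = scaling_efficiency(68, 34, 2)
print(f"Scaling efficiency: {eff:.0%}")  # → 100%
```

An efficiency near 100% is what these benchmark runs showed for all three RTX cards; anything well below that would suggest a bottleneck outside the GPUs.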

An interesting side note: on the 2080 and 2080 Ti, we ran the dual GPU tests both with and without SLI / NVLink enabled. In most programs where we have tried this, and where NVLink support was not present, performance actually went down - but in the V-Ray Benchmark, it was identical. Both pairs of cards took the exact same amount of time to render whether they were connected via SLI / NVLink or not... almost as if V-Ray just ignored that feature entirely. It could also be evidence that V-Ray is NVLink aware, but in that case I would have expected similar but slightly different performance - rather than exactly the same result, down to the second. Hopefully the next version of the V-Ray Benchmark will show a difference with NVLink, since that tech should be beneficial for rendering. The RTX 2070s do not support NVLink, so that is something which could potentially alter the performance landscape between them in the future.


As of right now, and for versions of V-Ray which mirror the performance of the 1.0.8 benchmark, only three video cards make much sense: the GeForce RTX 2070 8GB, GeForce RTX 2080 Ti 11GB, and Titan V 12GB. Lower-end cards don't save enough money to justify the loss in speed, and other high-end cards are either slower at similar price points or cost more for the same rendering speed.

To get the most performance from a single workstation, though, go for a system with multiple video cards!

V-Ray Workstations

Puget Systems offers a range of powerful and reliable systems that are tailor-made for your unique workflow.

Configure a System!

Labs Consultation Service

Our Labs team is available to provide in-depth hardware recommendations based on your workflow.

Find Out More!
Tags: V-Ray, RT, GPU, Rendering, Benchmark, Performance, NVIDIA, GeForce, RTX, 2070, 2080, 2080 Ti, Turing, Video, Card

Thank you for continuing to do this sort of testing. It is invaluable to the community and hard to find elsewhere.

Posted on 2018-11-18 16:12:55
Jose Paulo Caldeira


First of all, thank you for the really great articles.

Secondly, sorry for my English. I'm from Brazil, you know, doing my best.. rs

Well, this might be a stupid question, probably it is, but how do you manage to put together two RTX 2070 cards without SLI/NVLink connectors? Is there any other way to do it? I'm in the process of buying a new graphics card, mainly to work with DaVinci but also with the Adobe programs. I was intending to choose the new RTX 2070, but this (no SLI) limitation has left me insecure. I don't have enough money to buy two video cards right now (not even one of the really top ones). However, I feel that I should keep the possibility of a second card open for future upgrades, perhaps choosing the Vega 64, for instance. Not sure that I'm doing the right thing, but if the RTX 2070 does support multi-card systems, I believe my problem is solved.

Posted on 2018-11-19 23:16:37

When doing GPU based rendering you don't need to connect them at all - most GPU rendering engines (V-Ray RT, Octane, Redshift, etc.) will just use however many compatible GPUs you have installed in the system. In fact, SLI was something you wanted to *avoid* in the past, because it was designed to combine two cards for displaying real-time 3D graphics (usually in games), which is very different from path- or ray-tracing for rendering.
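If you want to confirm what a render engine will see, counting the installed GPUs is straightforward. Below is a small illustrative helper (my own sketch, not part of V-Ray) that counts devices from `nvidia-smi -L` style output; the sample string and UUIDs are made up for the example.

```python
import re

def count_gpus(nvidia_smi_l_output: str) -> int:
    """Count GPUs from `nvidia-smi -L` style output (one 'GPU N: ...' line per card)."""
    return len(re.findall(r"^GPU \d+:", nvidia_smi_l_output, flags=re.MULTILINE))

# Example output for a dual-card system (UUIDs shortened for illustration):
sample = (
    "GPU 0: GeForce RTX 2070 (UUID: GPU-1111)\n"
    "GPU 1: GeForce RTX 2070 (UUID: GPU-2222)\n"
)
print(count_gpus(sample))  # → 2
```

On a real system you would feed it the output of `subprocess.check_output(["nvidia-smi", "-L"], text=True)` instead of the sample string.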

NVLink changes things a bit, since that could potentially be used to combine two cards in order to improve certain aspects of rendering... but it requires not only a physical bridge, but also software support. In the current benchmarks for V-Ray, Octane, and Redshift that does not seem to be present - but with how new NVLink is, I think it is just a matter of time before most GPU rendering engines are updated to support it.

You are correct that the RTX 2070 does not support NVLink, so if / when these renderers add support for it the 2070 will be left out. It can still be used in multi-GPU configurations, though - just without any bridge or link connecting the cards. That is how it is normally done now, and will likely still be the norm for a while. Besides, the main advantage of adding NVLink support for rendering is likely going to be sharing of the video memory - to allow larger scenes to be rendered. That would be a nice feature, certainly, but it isn't going to keep multiple cards from being worthwhile even when not linked together :)

Posted on 2018-11-19 23:23:03
Jose Paulo Caldeira


I see that I still got a lot to understand.

One last question, please. Is this also true for Da Vinci Resolve?

If I install two graphic cards, despite having no SLI bridge or NVlink, they will improve DaVinci overall performance, or just for rendering? Or neither of them?

Posted on 2018-11-20 00:35:58

Yes - in fact, putting two cards in SLI or NVLink will *reduce* performance in Resolve. It will cause that application to only "see" one card (the primary one, which the monitor is attached to) and be unable to utilize the other. Matt did an article about that which you may want to look over: https://www.pugetsystems.co...

Posted on 2018-11-20 17:11:44

Strange how the 2070 outperforms the 2080. Would really like to see updated Octane and Redshift tests. And thanks for the great work!

Posted on 2018-11-20 09:13:19

I did a quick test on that after publishing this article, and the 2070 is *not* faster than the 2080 in either of those renderers. I'm still trying to decide how to present that info, since an article just about that one isolated topic seems too focused.

Posted on 2018-11-21 17:35:40
Mattias Graham

Well, this definitely makes me feel validated in going for the 2070! Thanks for testing this tech in real workstation applications, it's really cool.

Posted on 2018-11-21 02:55:56

Aren't RT cores supposed to extensively accelerate rendering in V-Ray? I guess the improvements we see in these results come mostly from the additional CUDA cores in these new cards. I don't know, I thought RTX would perform better :(

Posted on 2018-11-22 05:05:47
Cordell Hughes

Ray tracing cores are new and supposedly designed to process ray-traced data primarily, if not exclusively. The holdup is that NVIDIA's CUDA programming language works on NVIDIA products only.

NVIDIA is the first company known to develop hardware exclusively for ray tracing, and it uses the CUDA language only, requiring all hardware and software companies to conform to its standard.

If this were AMD Radeon with ray tracing cores, AMD would use the OpenCL and Vulkan languages, which are public domain and free to use, making its RT cores compatible with all CPUs, GPUs, software, hardware, and operating systems from day one.

Posted on 2018-11-22 07:52:14

Inform yourself just a bit before passing judgment: NVIDIA exposes this through extensions. Why an extension/library? (Elementary.) Because other accelerators are not capable of doing it, so it cannot be a standard now (maybe in the future). So it needs to live in a library, like every library that peripheral producers share for every programming language.
A little article with some code snippets...

Posted on 2018-12-11 10:31:54
Cordell Hughes

I'm waiting for a hardware statistics program like GPU-Z or similar to show ray tracing core usage. Until then it's hard to know how much the RT cores, much less the tensor cores, are being utilized presently.

Posted on 2018-12-11 21:31:46

When support is added, RT cores have the potential to greatly speed up ray-tracing in GPU based render engines. The current V-Ray benchmark does not support them, though, and I don't think V-Ray 3.6 or Next do either - at least not yet, so far as I am aware. I do expect a future version or update to add support, though, and hopefully at some point Chaos Group will put out a more modern benchmark that includes that functionality.

Posted on 2018-11-24 23:39:11

The Chaos Group blog has some information on the RTX cards with V-Ray:
(1) https://www.chaosgroup.com/...
(2) https://www.chaosgroup.com/...

Posted on 2019-02-07 15:54:47

Yeah, I've seen those before - they came out around the launch of the GeForce RTX series. I've also heard from Chaos Group that they are working on an updated version of their benchmark, which will hopefully incorporate some of the features in these new GPUs, but last I checked (earlier this week) they were still listing the old 1.0.8 version for download.

Posted on 2019-02-07 20:35:11

Well at least something is in the works. Let us just hope they release it soon, so you can do some proper testing.

Posted on 2019-02-08 18:30:09
konstantin stavrogin

The 2080 is not actually slower than the 2070 in V-Ray; it's an error in the benchmark. The same error shows the 1070 being faster than the 1080 in all the benchmark results on the Chaos Group forum. There is actually a comparison of the 2080 and 2080 Ti vs the 1080 Ti across 3-5 different scenes on the Chaos Group forum, which gives a clear understanding of the performance of the cards. The V-Ray Benchmark can only give an idea, but it's not the right tool to evaluate GPUs - CUDA cores perform differently in different scenes.

Posted on 2018-11-23 04:20:05

Hmm, that is fascinating. I certainly could see it being a side effect of the specific scene that is included in the V-Ray Benchmark, which somehow favors the 2070 over the 2080. Hopefully the next version of V-Ray Benchmark, whenever it comes out, will include a more complex scene which better matches real-world performance... along with an updated version of the V-Ray engine, of course, since the current one in the benchmark is a couple versions old now.

Posted on 2018-11-24 23:58:48



Puget, can you please benchmark the NVIDIA TITAN RTX?

Posted on 2018-12-03 13:22:10

We definitely will, when we get some in. That may be a while, though - this card was announced today, but is not yet available for purchase.

Posted on 2018-12-03 16:57:00

So, something interesting I found out yesterday: the number of displays (monitors) plugged into the card(s) affects render times by a few seconds (fewer is better).

3 displays, 4K render = 12 min 45 sec.
1 display, 4K render = 12 min 11 sec.

Curious to see what happens if you plug into integrated graphics while rendering; I wasn't able to because I'm running a Ryzen 1700X.
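Spelling out those numbers (straight arithmetic on the times quoted above, nothing more):

```python
# Convert the quoted render times to seconds and compute the impact of
# the extra displays. These are the commenter's figures, not new data.

def to_seconds(minutes: int, seconds: int) -> int:
    return minutes * 60 + seconds

three_displays = to_seconds(12, 45)  # 765 s
one_display = to_seconds(12, 11)     # 731 s

savings = three_displays - one_display
print(f"{savings} s (~{savings / three_displays:.1%} faster with one display)")
```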

Posted on 2019-02-06 15:01:44

Hi, I need a high-end rendering PC which will mainly run 3ds Max, V-Ray, and AutoCAD... Kindly suggest one.

Processor - i9 9900K, 64GB RAM. Which graphics card would be best? Most of my systems are running the GTX 1080. Also, one concern: it's noted that the 7820X renders faster than the 8700K, so is it the high core count that makes it faster? Please suggest.

Posted on 2019-02-26 07:30:47

The 9900K is a great CPU for 3ds Max and AutoCAD! For rendering, though, it depends on exactly what you are doing. V-Ray has two different render engines, one of which uses the CPU and the other uses the video card(s). The names have changed over the years, depending on which generation of V-Ray you are using. At one point it was V-Ray Adv and V-Ray RT, then it became V-Ray Next CPU and V-Ray Next GPU. Anyhow, if you are using one of the CPU versions then a higher core count processor could be faster... but you'd have to go quite a bit higher, since the 9900K already has 8 cores at very high clock speeds. I think you'd need to get at least a 12-core Intel chip like the 9920X to be measurably faster, and that is much more expensive. AMD's Threadripper processors are also really good at CPU based rendering, but those are slower in low-thread workloads like 3ds Max and AutoCAD.

As for what video card to get, both 3ds Max and AutoCAD are from Autodesk - and they tend to only certify their software to work on "professional" video cards, like NVIDIA's Quadro series. Those are pricey, though, so it is up to you if you want to stick with a certified card or break that rule to save money. If you are using one of the GPU-accelerated versions of V-Ray, that also changes things: it would benefit from one or more fast video cards. So it is hard to recommend a specific video card without knowing what your usage of V-Ray is like and what your budget is.

Posted on 2019-02-26 16:53:53

Thanks, William, for the reply. I just want to know if the RTX 2080 Ti will work fine with these programs. I have used a GTX 1080 Ti and it renders perfectly and fast. We are actually using V-Ray Adv and not RT. Also, will software like SolidWorks also work with this graphics card? I am purchasing this for one user at my office.

Posted on 2019-02-27 13:32:14

Solidworks has some specific features, like RealView, which will not work on GeForce cards. In order to ensure those features are available, you need to pick a card from their official certified list (available on their website).

The Autodesk programs you mentioned before are less strict - in my experience they will work with GeForce cards, even though those aren't on their official certification lists.

The video card won't matter for V-Ray Adv, since that is entirely CPU-based.

Posted on 2019-02-27 21:56:17

I'd like to buy an i9 9900K and dual RTX 2070 GPUs - is that a fine, fast setup for work?

Posted on 2019-04-25 16:46:52

Dual RTX 2070 cards should do very nicely in rendering workloads, and the 9900K is a great CPU for applications that need high clock speed and moderate core count :)

Posted on 2019-04-25 17:42:28
Avrushchenko Grigori

Hi, will V-Ray Sketch work with an i7 9700F and one RTX 2080? And what would the difference in rendering time be with the same configuration but two RTX 2080s, for a 4K image? Thanks in advance.

Posted on 2019-11-16 08:52:35

I'm not familiar with "V-Ray Sketch"... did you perhaps mean the V-Ray plug-in for SketchUp? If so, I haven't used that particular version of the V-Ray plug-in but I can't think of any reason it wouldn't work with a modern CPU and video card like those you listed. Whether a second RTX 2080 would help or not depends on which side of the V-Ray rendering engine you use, though: if you use the CPU side then the video card(s) won't matter, but if you use V-Ray GPU (formerly RT) then having a second card will cut render times almost in half.

Posted on 2019-11-18 17:24:37

RTX 2070 8GB = 68
RTX 2080 8GB = 69

Strange results.
The RTX 2080 should be better because it has more CUDA cores.

But I think I know the reason:
the amount of video memory in the 2080 is not enough to exploit the full power of the CUDA cores.

Posted on 2019-11-25 13:54:57

The V-Ray Benchmark isn't rendering a big enough scene for VRAM to be a limitation on these cards, so I don't think that is what is going on. I suspect it just comes down to something weird in the way that the V-Ray engine works with video cards. Interestingly, it still happens with the newer version of their engine (V-Ray Next) and its benchmark: https://www.pugetsystems.co.... This behavior is *not* seen in the other GPU rendering engines we test (Octane and Redshift), which is why I think it is just a quirk of V-Ray.

Posted on 2019-11-26 21:05:45