Read this article at https://www.pugetsystems.com/guides/1688

Unsupported: How to Make Dual NVLink Work on Windows 10

Written on March 4, 2020 by William George


With the current Turing-based generation of GeForce and Quadro cards, NVIDIA offers a method of physically connecting pairs of cards to enable direct communication between them. This facilitates SLI for gaming and similar applications, and can allow direct memory access between the cards for scientific computing and rendering (if the software supports it). NVIDIA calls these physical connectors "NVLink Bridges", and outside of gaming (where SLI is the more common term) they use the NVLink name to refer to both the technology and the connection itself. I have written a lot about NVLink in the past, including how to enable and test it in Windows, as well as which bridges will work with which cards.

But what if you want to have more than two video cards? In my early testing I included a setup with four cards in two NVLinked pairs - and it worked just fine. Ever since that time, I had assumed this was the way things were supposed to work - and we even sold such configurations on occasion! - until recently we had an order come through for a setup with four GeForce RTX 2080 Ti cards, and it wasn't working as expected. I got involved because of my past experience with testing this stuff, but the latest NVIDIA drivers just would not cooperate at all. We got it working on an older driver revision, but then found that Windows would immediately update the driver... and while we had some leads to preventing that, it wasn't really something we could stand behind a customer using in the field.

In light of that experience, I've taken time in the last couple of weeks to go through and test several different NVIDIA drivers on two configurations: four GeForce RTX 2080 Ti cards as well as four Quadro RTX 6000s, both set up in two physically bridged pairs. In this article I will chronicle what I found worked, what didn't work or behaved oddly, and where we are at with this as a company as a result.

Test Hardware

Here is the hardware platform I used for testing, a rare motherboard with enough PCI-Express slots to actually set this up:

Test Platform

  • CPU: Intel Xeon W-3245
  • CPU Cooler: Stock Intel Xeon SP 92mm Cooler
  • Motherboard: ASUS PRO WS C621-64L SAGE/10G
  • RAM: 6x DDR4-2666 16GB ECC (96GB total)
  • Video Cards: 4x ASUS GeForce RTX 2080 Ti 11GB / 4x PNY Quadro RTX 6000 24GB
  • NVLink Bridges: 2x NVIDIA Quadro 2-slot NVLink Bridge
  • Hard Drive: Samsung 960 Pro 1TB
  • Software: Windows 10 Pro 64-bit, NVLink Test Utility

Four NVIDIA Quadro RTX 6000 video cards in two NVLink pairs

That's a lot of horsepower!

Test Methodology

My test process was fairly simple:

  • Perform a clean installation of the NVIDIA driver to be examined (using DDU if needed to properly clean up first)
  • After installation, reboot the system
  • Open NVIDIA Control Panel and check what the default SLI configuration was after driver installation
  • If not already in SLI, attempt to switch the configuration to that mode and note what cards it showed would be linked
  • Apply settings
  • See if the links shown in NVIDIA Control Panel matched what was shown before applying
  • Run our NVLink test utility and record the result
  • Disable SLI and proceed on to the next driver
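
The bandwidth measurement in that last NVLink test step is what actually distinguishes a working bridge from plain PCI-Express communication. As a rough sketch of that logic (the threshold here is my own assumed cutoff, not a value taken from our utility): PCI-E 3.0 x16 tops out around 12-13 GB/s each way, while a Turing NVLink bridge should measure roughly 40+ GB/s.

```python
# Rough sketch of how a measured P2P transfer rate can distinguish NVLink
# from plain PCI-Express. The threshold is an assumption based on typical
# numbers: PCIe 3.0 x16 peaks around 12-13 GB/s, while a Turing NVLink
# bridge should show roughly 40+ GB/s unidirectional.

PCIE3_X16_CEILING_GBPS = 14.0   # assumed: anything above this implies NVLink

def link_type(bandwidth_gbps: float) -> str:
    """Classify a measured GPU-to-GPU transfer rate."""
    if bandwidth_gbps > PCIE3_X16_CEILING_GBPS:
        return "NVLink"
    return "PCIe (or slower)"

if __name__ == "__main__":
    # Example readings: a bridged pair vs. an unbridged pair.
    for gbps in (48.5, 10.2):
        print(f"{gbps:5.1f} GB/s -> {link_type(gbps)}")
```

The exact cutoff matters less than the gap: the two cases are far enough apart that any sensible threshold between them works.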

The expected behavior, for a driver which should allow the system to use both pairs of cards in SLI / NVLink, is for NVIDIA Control Panel to show the four cards being in two SLI pairs, as in the screenshot below:

Two pairs of GeForce RTX 2080 Ti cards in SLI / NVLink in Windows 10 Pro


Something else worth noting on that screenshot is that one card in each pair has a monitor connected to it. That is required in Windows for a pair of cards to be put into SLI, so a setup like this requires either two monitors, two separate connections to a single monitor, or one real monitor plus a dummy dongle that fakes the presence of another monitor.

Test Results - GeForce

Here is a table showing the driver versions I tested and their behavior. To save space, I have grouped together sequences of drivers that functioned the same:

Driver Versions | Dual NVLink? | Description

  • Non-functional - These older drivers install just fine, and look like they will allow both pairs of cards to enter SLI / NVLink, but when you actually click "Apply" they give warnings about programs running in the background. After those are closed, and you click "Continue", the process appears to work for a moment but then reverts back to being disabled.
  • Functional - These are the last Game Ready driver before the Creator / Studio drivers started to come out, the lone "Creator" driver, and the first "Studio" driver (respectively), and they all work as expected. In fact, SLI is enabled by default when they are installed, and NVLink tests show it is working immediately.
  • Non-functional - These Studio drivers default to one pair of cards being in SLI (and functioning in NVLink) upon installation, and no amount of adjusting settings was able to improve that. I could never get both pairs to even look like they were going to be enabled, though sometimes it would switch which of the two pairs of cards was in SLI.
  • Non-functional - These were the most recent Studio drivers when I was testing, and in both cases I had to physically remove the NVLink bridges in order to get the drivers to even install properly. If that wasn't done, then upon rebooting after driver installation one or two of the four cards would show up with errors in Device Manager and the NVIDIA Control Panel would not run. Even using DDU did not help. Only removing the bridges, then installing drivers and rebooting, then finally shutting down and putting the bridges back on would allow the drivers to function properly. Even once all that was done, only one pair of cards would go into SLI / NVLink at a time.

I was disappointed to find that for several driver revisions this feature has not worked, though it did when these cards were first launched (I have records of the older drivers working, even though they now have trouble with background processes in Windows) and even up through the first Studio driver release it was perfectly functional. One of my coworkers reached out to NVIDIA with this information, to see what they had to say, and we got word back that this feature is not supported on GeForce cards, and the fact that it worked in the past was unintentional. After all the testing this did not surprise me, though again it did disappoint, and it can somewhat be inferred from the fact that no 2-slot NVLink bridges exist that are GeForce branded. To do this testing at all, we had to use Quadro branded bridges - as listed in the Test Hardware section, and as shown in the image below.

Four Asus GeForce RTX 2080 Ti blower-style video cards in two NVLink pairs

One of the cards had a defective LED, but worked fine otherwise

Test Results - Quadro

This brings up a great question, though: does this feature work on Quadro cards? After all, there are 2-slot bridges that NVIDIA sells for them, thus enabling (physically, at least) such a setup without having to go outside of their official branding. To find out, I did similar testing with a handful of Quadro driver releases - sticking mostly to the latest version of each driver family (the first three digits of the driver version). The results are shown in the same style of table:

Driver Version | Dual NVLink? | Description

  • 412.40 - Functional: Dual NVLink could be enabled, and worked, but there was a warning when enabling it (as we saw with the earliest GeForce drivers). Unlike those GeForce drivers, though, this one was able to get the Quadro cards into SLI / NVLink, and the performance was as expected.
  • 426.32 - Functional: Dual NVLink worked perfectly! It didn't default to being in SLI / NVLink immediately after driver installation, as some of the GeForce drivers did, but it showed the correct card pairings when switching to SLI mode and then gave the correct bandwidth when tested.
  • 431.98 - Functional: When attempting to enable SLI in this driver, it defaulted to showing all four cards connected in "4-way SLI" - the only time I saw such behavior with any of the GeForce or Quadro RTX series cards / drivers. After applying that setting, and running our NVLink test, the script didn't properly identify which pairs of cards were in NVLink together - but the actual bandwidth results did show both pairs of cards having the proper communication speeds for NVLink. I think the reason the script got confused is that even the cards which were not physically bridged were still shown as having P2P access to each other, just with lower bandwidth (they must have been communicating over PCI-E). One advantage of this configuration, however, was that it did not require two monitor connections in order for both pairs of SLI to be enabled: because it was all one big, happy SLI family, a single monitor connection was sufficient to allow the full setup! This was definitely the weirdest result out of all my testing, but technically it worked.
  • 442.50 - Non-functional: This is the latest driver at this time, released at the end of February 2020. Unfortunately, dual NVLink did not work with this driver. Only one pair of cards would go into SLI, and often not even a pair that was connected via an NVLink bridge! I tried repeatedly, and sometimes three cards would go into a single SLI triplet... but never two pairs or all four. This odd behavior confused our test program, but measured bandwidth was very low - showing that NVLink was not working.

Until that last driver test, I had high hopes that this type of configuration would be properly supported by NVIDIA on Windows... but it looks like that is not the case. We haven't yet gotten official word from NVIDIA about whether this is supposed to work or not, but I am guessing not given that the latest drivers don't behave (plus the odd results from the version before that). So where does that leave us?


Can You Make Two Pairs of NVLink Cards Work in Windows 10?

Yes, by using older drivers... but Windows may update them at any time, potentially messing up the configuration. You also wouldn't be able to utilize improved performance or added features from newer drivers, and there is no guarantee that future versions of Windows will work with the older drivers. All in all, not a great solution.

Does Puget Systems Offer Dual Pairs of NVLinked Cards in Windows?

No. Because the latest drivers do not behave with this sort of configuration - and because NVIDIA has said they don't support it - we can no longer offer dual pairs of NVLinked graphics cards in Windows. One pair is doable, with the proper motherboard, chassis, and power supply... and we do offer configurations with three or four video cards, just not in NVLink.

I should also note that both GeForce and Quadro cards have worked in NVLink just fine under Linux, and don't even need any special work done to enable that (like putting the cards in SLI, which is required in Windows). This could always change if NVIDIA alters the behavior of their Linux driver, but for now that is the way to go if you absolutely must have this setup.
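
Under Linux, one quick way to confirm the bridges are active is `nvidia-smi topo -m`, which prints a GPU connectivity matrix where bridged pairs show up with `NV1`/`NV2` entries rather than `PHB`/`PIX`. Here is a minimal sketch of picking the bridged pairs out of that matrix; the sample text is illustrative, and the exact labels and layout can vary by driver version:

```python
# Sketch: find NVLink-connected GPU pairs in `nvidia-smi topo -m` output.
# The matrix format here is an assumption based on typical driver output;
# labels like NV1/NV2 indicate one or two NVLink connections between GPUs.

def nvlink_pairs(topo_text: str) -> list[tuple[str, str]]:
    pairs = []
    lines = [l for l in topo_text.strip().splitlines() if l.strip()]
    header = lines[0].split()          # column labels: GPU0, GPU1, ...
    for row in lines[1:]:
        cells = row.split()
        src = cells[0]
        if not src.startswith("GPU"):
            continue
        for col, cell in zip(header, cells[1:]):
            # src < col keeps each pair once (GPU0-GPU1, not also GPU1-GPU0)
            if cell.startswith("NV") and col.startswith("GPU") and src < col:
                pairs.append((src, col))
    return pairs

# Illustrative matrix for two bridged pairs (not captured from a real system)
SAMPLE = """\
     GPU0 GPU1 GPU2 GPU3
GPU0  X   NV2  PHB  PHB
GPU1 NV2   X   PHB  PHB
GPU2 PHB  PHB   X   NV2
GPU3 PHB  PHB  NV2   X
"""

print(nvlink_pairs(SAMPLE))
```

With the sample matrix above, this reports GPU0 paired with GPU1 and GPU2 paired with GPU3, matching the physical bridge placement.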

Tags: NVLink, NVIDIA, GeForce, Quadro, SLI, Windows 10, Video Card, GPU
Luca Pupulin

Hi William,
great and interesting article as usual...
just two questions...

why using four 2080 Ti in NVLink mode instead of two Titans (in NVLink as well)?

I didn't fully understand whether you enabled SLI with the Quadro cards; it shouldn't be necessary, am I correct?


Posted on 2020-03-05 17:35:01

Four 2080 Ti cards together have more horsepower than two Titans, so there are many cases (like GPU based rendering) where you might want to use them and also have access to NVLink. Moreover, the Titan RTX cards are only available with dual fan configurations, which means they are not ideal for more than two cards in a system (so I did not bother testing them in quad configurations, like I did with the 2080 Ti and RTX 6000).

I am trying to remember if I ever tried running my NVLink test utility on the Quadro cards with them *not* in SLI... and I cannot remember now. But as far as I can recall, from all of my prior work with NVLink in Windows, SLI was required on both Quadro and GeForce to enable that feature. The only exception was back with the older Quadro GP100 and GV100, which required a different trick to enable NVLink (and a third video card as well, to handle display output):


Posted on 2020-03-13 18:03:37


Do not use older drivers; they have known security vulnerabilities that are fixed in the latest drivers.

Posted on 2020-03-06 08:11:34

Yeah, normally it is desirable to use the latest drivers for a number of reasons - security not the least of those! - but this is a weird case where a feature that used to work no longer does, so some people may have to balance security against their need for that feature. Thankfully, the two driver vulnerabilities listed on that link you sent both require an attacker to have local system access... and if you've got someone running software on your computer already (either locally or remote controlled) you are already in a heap of trouble :)

Posted on 2020-03-13 17:59:55

Interesting article to say the least. Wonder why Nvidia just killed 3 and 4 way SLI? Can the drivers be "edited" to "force" 4 Way SLI/NVLink to work? There's a guy on Guru3D Forums who has 4 Titan Voltas running in SLI and he is not using any bridge (software SLI) - I think he says he used the Nvidia auto diff utility and edited the drivers but no idea if it works properly.

Posted on 2020-04-05 02:32:01
Vedran Klemen

What would be Quadro dual slot Nvlink for two 2070 Super? Thx.

Posted on 2020-05-04 15:51:27

I haven't specifically tried it, but it looks like the 2070 Super uses the standard size NVLink connector - so the Quadro RTX 6000 / 8000 bridges should fit it, but not the RTX 5000 bridges (which are an odd, smaller size).

Posted on 2020-05-04 20:52:20
Vedran Klemen

Hello, Can You have two rtx cards with nvlink and one or two without? Thanks!

Posted on 2020-07-08 15:35:06

Sort of, though it seems that if you have more than two cards the pairings (which ones are supposed to be in NVLink together) didn't always come out correctly on some of the newer driver revisions... at least when this article was written, which has been a little while now. I've not tried the last few driver releases.

It also may not be beneficial to have only some of the cards in a system be in NVLink, depending on what you are looking to accomplish with it. For example, if you are doing GPU based rendering then the limiting factor in some ways will be the maximum VRAM per GPU, so having two cards in NVLink and one or two that are not would not help in that situation.

Posted on 2020-07-10 00:56:03

Were the cards put in TCC mode?

Posted on 2020-07-09 19:52:41

For some of the older Quadro cards, putting them in TCC mode is how to enable NVLink in Windows (see: https://www.pugetsystems.co...). For GeForce and other Quadro cards, though, the way to do it is through enabling SLI... and I don't believe you can do that with TCC. I'm not even sure you can enable TCC on GeForce; I can't remember if I've tried.

Posted on 2020-07-10 00:51:44

LinusTechTips managed to put some RTX 8000s in NVLink running in TCC mode. (https://youtu.be/l_IHSRPVqwQ) TCC mode is evidenced by them requiring another GPU (TITAN RTX). Does putting the 2nd pair in TCC mode and leaving the first pair in SLI/WDDM work?

Posted on 2020-07-11 06:40:23

I just watched the applicable part of the video (from about 9:30 to 12:00) and I can't tell which approach they took for sure. They showed screenshots indicating both the use of the command line TCC method (which is required for Quadro GP100 and GV100 cards to get into NVLink, and may also work on other Quadro cards... I've just not tried it) as well as the SLI method (which works on GeForce and RTX-series Quadro cards). The SLI method shouldn't disable video outputs from the primary card, so if they used that the Titan might not have been strictly necessary... but they might have wanted it anyway, to separate display output from the cards used for compute workloads and maybe handle multitasking with graphics stuff better. Or maybe they did use the TCC method, in which case a third card for video output is indeed required. I can't tell for sure which was done in this specific case, but since the narration from Linus mentioned the command line I *suspect* it was the TCC mode... I just don't know why they opted for that over just enabling SLI :/

Posted on 2020-07-13 20:25:45

TCC mode bypasses Windows's WDDM graphics model, and CUDA apps are able to address the GPUs without WDDM getting in the way. Usually, enabling TCC mode removes a performance penalty caused by WDDM (https://blog.thepixelary.co...) at least for Blender work. Linus says that getting NVLink to work will disable Windows graphics (WDDM). I don't think that is the case, as evidenced by your testing. What I think Robbie is saying is that instead of the Titan GPU, two Quadro cards will take its place. So, GPU0 and GPU1 will be bridged in WDDM mode, while GPU2 and GPU3 will be bridged in TCC mode.

Posted on 2020-07-19 02:13:12
jeroen b

Hi William,

My PC runs Ubuntu 20.04. I do a lot of rendering in Blender, and the coming Blender 2.90 release will support NVLink.
The four RTX 2080 Ti cards in my PC are connected using two Quadro RTX 6000 HB bridges.

I have tried out the beta of Blender 2.90, and the option to share memory appears in the settings. This suggests that my dual NVLink is indeed active.

However, when I test to find out how much memory a render can take, Blender 2.90 crashes.
This could very well be caused by this version of Blender still being in development, but maybe my NVLink setup is not OK.

How can I test in Ubuntu 20.04 if NVLink is working OK?
(I use the recommended 440.100 driver)


Posted on 2020-08-13 11:44:51

I haven't run this sort of test in Linux before, personally, but you should be able to use the Linux version of the CUDA Toolkit which we utilized in our Windows-based NVLink test script:


You'll probably want to check out the documentation there, as I imagine the programs and commands to use them are different under Linux so I can't provide the exact process to follow - but under Windows, the specific executable we used is called "p2pBandwidthLatencyTest.exe". What we looked for is bandwidth between paired cards that was high enough to indicate the NVLink bridges had to be in use, rather than just communication over PCI-Express.
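
For reference, here is a rough sketch of how output from that program could be post-processed. The binary path and the matrix layout are assumptions on my part (on Linux the sample typically has to be compiled from the CUDA samples source first):

```python
# Sketch: run the CUDA sample p2pBandwidthLatencyTest and pull out a
# bandwidth matrix from its text output. The binary location and the exact
# output layout are assumptions -- adjust for your build of the samples.
import re
import subprocess

def run_p2p_test(binary="./p2pBandwidthLatencyTest"):
    """Run the sample binary and return its raw text output (assumed path)."""
    return subprocess.run([binary], capture_output=True, text=True).stdout

def parse_matrix(section: str) -> dict[tuple[int, int], float]:
    """Parse matrix rows like '   0 522.3  48.5 ...' into {(row, col): GB/s}."""
    result = {}
    for line in section.splitlines():
        m = re.match(r"\s*(\d+)((?:\s+\d+(?:\.\d+)?)+)\s*$", line)
        if not m:
            continue
        row = int(m.group(1))
        for col, val in enumerate(m.group(2).split()):
            result[(row, col)] = float(val)
    return result
```

Once parsed, the off-diagonal entries between a bridged pair should be far above what PCI-Express alone can deliver, which is the same check our Windows script performs.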

Posted on 2020-08-13 17:50:44
jeroen b

Thanks for your reply.
I managed to get these results from all four GPU's;

"sudo nvidia-smi nvlink -s" (Exactly thesame for every GPU)
Link 0; 25.781 GB/s
Link 1; 25.781 GB/s

"sudo nvidia-smi nvlink -c" (Exactly thesame for every GPU)
Link 0, P2P is supported: true
Link 0, Access to system memory supported: true
Link 0, P2P atomics supported: true
Link 0, System memory atomics supported: true
Link 0, SLI is supported: true
Link 0, Link is supported: false
Link 1, P2P is supported: true
Link 1, Access to system memory supported: true
Link 1, P2P atomics supported: true
Link 1, System memory atomics supported: true
Link 1, SLI is supported: true
Link 1, Link is supported: false

I am not sure, but "Link is supported: false" does not look good to me.
What is your opinion?

Posted on 2020-08-13 19:18:39

I've run some of those commands (or at least, the Windows equivalents) before... but it has been a long time, and I don't remember what the results looked like or how to interpret them. However, do know that I didn't settle on them as the definitive way of determining NVLink functionality. Is there anything in the Linux download called "p2pBandwidthLatencyTest" or something similar? If so, that is what I would try running instead.
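
As an aside, those per-link capability flags can be folded into a quick summary. This is just a sketch assuming the exact line format pasted above ("Link 0, P2P is supported: true"), which may differ between driver versions:

```python
# Sketch: fold `nvidia-smi nvlink -c` output into a per-link capability map.
# Line format is assumed from the output pasted above, e.g.
# "Link 0, P2P is supported: true" -- it may vary across driver versions.

def parse_nvlink_caps(text: str) -> dict[int, dict[str, bool]]:
    caps: dict[int, dict[str, bool]] = {}
    for raw in text.splitlines():
        line = raw.strip()
        if not line.startswith("Link") or ":" not in line:
            continue
        head, value = line.rsplit(":", 1)
        if "," not in head:
            continue
        link_part, cap = head.split(",", 1)
        link = int(link_part.split()[1])
        caps.setdefault(link, {})[cap.strip()] = value.strip().lower() == "true"
    return caps

# Abbreviated sample matching the format quoted in the comment above
SAMPLE = """\
Link 0, P2P is supported: true
Link 0, Link is supported: false
Link 1, P2P is supported: true
"""

for link, flags in parse_nvlink_caps(SAMPLE).items():
    print(link, flags)
```

This only restructures what the tool already prints, though; as noted, measured bandwidth is the more definitive check.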

Posted on 2020-08-14 16:02:29
George Jubran

Hi William,
I just built my deep learning station with two RTX 2080 Ti cards on Windows 10, and was very anxious to see if NVLink would be enabled. I had no problem enabling SLI from the NVIDIA control panel, and then I followed through trying to enable NVLink via the smi.exe test utility, but wasn't successful at all. One thing about NVIDIA is that they should have support for this, but I wasn't able to find it anywhere except here.

When I ran the test utility I received the error window mentioned here saying that I don't have support for CUDA 10, although I installed CUDA 10.1 along with the latest drivers for the 2080 Ti (452.06 is the latest driver to this date). I tried to connect a display to each card, but that didn't work either. I noticed that NVLink doesn't like a display to be connected to each card of the pair of 2080 Tis. One thing I haven't tried is to connect both GPUs to the same display through two connections. Hopefully this will work, but I doubt it for some reason. I read somewhere in a related article that a 3rd card must be connected for the display output in order to enable NVLink - is that true?

I truly appreciate your help here. I have invested a fair amount in this build and look forward to starting my research, but wanted this NVLink issue to be resolved before I get started. Is this issue related to Microsoft or NVIDIA, or both for that matter? Does this problem also occur within Ubuntu?

One more thing I should mention about the GPUs: I have one NVIDIA RTX 2080 Ti FE and the second is an EVGA RTX 2080 Ti BLACK EDITION - both are the same chipset. Does this mean they are NOT identical?

My system specs: motherboard: ASRock X399 Phantom Gaming 6, CPU: AMD 1950X Threadripper (16 cores, 32 threads), RAM: 128GB, OS: Windows 10, PSU: EVGA 1200W. Thank you for your help!

Posted on 2020-09-04 12:36:41

On Windows, once you have SLI enabled, that should mean NVLink is also enabled / functional. The test utility we use is just to verify that NVLink is working, not to enable it. For just one pair of cards you also shouldn't need multiple monitors to be hooked up or anything like that... it should just work (once the bridge is installed and SLI is enabled).

If I may ask, how do you know NVLink is *not* working? Have you tried the test utility linked here?


Posted on 2020-09-04 15:55:56
George Jubran

Hi William,
Thanks for your prompt response! I do appreciate that. Here is what I have:
1) The SLI is enabled - as the Nvidia control panel indicates that clearly.
2) I have downloaded the test file you referred to verify that Nvlink is functioning properly. I downloaded the zip file and extracted it in the download folder then ran it. I got a black screen first for maybe 30 seconds, then the screen came back with the following screenshots below:


The "Error - No CUDA v10" window that is displayed in the article, although I installed CUDA Toolkit 10.1 along with the updated drivers for the RTX 2080 Ti (452.06).
I verified that in the device manager and both cards are listed with the proper version of the drivers.

3) When I opened a command prompt (as administrator), I entered the command:
C:\Program Files\NVIDIA Corporation..\v10.1\bin\Debug> nvidia-smi.exe nvlink -s

4) I get an error message when I enter the following commands:
> nvidia-smi.exe -i 0 -dm TCC and > nvidia-smi.exe -i 1 -dm TCC (TCC or 1 - either way I get the same error)

5) When I run the test program after downloading and extracting it I get a black screen, then the screen comes back with the error message: " https://uploads.disquscdn.c... The system does not have CUDA v10 support"

6) I moved the Test folder to the Debug directory and ran the p2pBandwidthLatencyTest, and got this result. I guess my mistake was that I didn't move the executable files to the Debug directory. Why is OUT OF MEMORY displayed?

I look forward to your response. Thank you very much!
George Jubran


Posted on 2020-09-04 17:59:35

1) Good! :)

2) After downloading the zip file from our website, and extracting its contents, what file are you running? It should be the one called "NVLinkTest.exe", and it should just be run by double-clicking on it (no need to go into the command line). I'm surprised that there is any substantial delay or a black screen being displayed at all... it should only show graphical windows like the screenshots shown here:


3) What was the result of running the command you mentioned in this entry? None of the screenshots you included appear to be the results of that particular command.

4) You don't want to be trying to change either of these cards to TCC mode - that is a method for making some other cards (like the Quadro GV100 and GP100) go into NVLink, but it is not necessary or supported for GeForce cards. That is why you are getting errors when you try the commands to put the cards into that mode.

5) I wonder if my test program doesn't play nicely with CUDA 10.1... I'm not in the office at the moment, but next time I am I will check the script code and see if perhaps I need to update it to work better with 10.1.

6) From that output, it looks like the test is stopping prematurely. There should be a lot more output than that, showing transfer speeds and latency between the two cards. I have not seen that out of memory error in this context before, and a quick search online didn't seem to show any results either. I am guessing that something is messed up, though, which is leading to this issue and some of the other strange behavior. It might be related to the attempts to put the cards into TCC mode, even. I think that if I were in your situation I would take the cards out of SLI, reinstall the latest drivers (making sure to select the Clean Install option, which will thoroughly remove the existing drivers & settings first) and then try to start over fresh. I would run the NVLinkTest.exe after the driver install & a reboot, then put the cards into SLI and try it a second time. You should be able to save the output from each run (if there aren't errors mucking it up, at least) and that info could help.

Posted on 2020-09-04 18:47:59