
Multiheaded NVIDIA Gaming using Ubuntu 14.04 + KVM

Written on August 1, 2014 by Matt Bach
Table of Contents:
  1. Introduction
  2. Hardware requirements
  3. Step 1: Edit the Ubuntu modules and bootloader
  4. Step 2: Blacklist the NVIDIA cards
  5. Step 3: Create VFIO config files
  6. Step 4: Create virtual disk(s)
  7. Step 5: Create a script to run each virtual machine
  8. Step 6: Start the virtual machine
  9. Step 7: Add USB support
  10. Congratulations! You are done!

Introduction

We recently published the article Multi-headed VMWare Gaming Setup where we used VMWare ESXI to run four virtual gaming machines from a single PC. Each virtual machine had its own dedicated GPU and USB controller, so there was absolutely no input or display lag while gaming. The setup worked great and the article was very popular, but one limitation we found was that NVIDIA GeForce cards cannot be used as passthrough devices in VMWare ESXI. We received feedback from some readers that GeForce cards should work in Linux with KVM (Kernel-based Virtual Machine), so we set out to make a GeForce-based multiheaded gaming PC using Ubuntu 14.04 and KVM.

What we found is that while it is completely possible, getting GPU passthrough to work in a Linux distribution like Ubuntu was not as simple as following a single guide. Once we figured out the process it wasn't too bad, but most of the guides we found are written for Arch Linux rather than Ubuntu. And while both are Linux distributions, there are some differences that made certain key portions of those guides not directly applicable to Ubuntu. Since we had already spent the effort of figuring out how to get our multiheaded gaming system working in Ubuntu, we decided to write our own guide based on what we were able to piece together from various sources.

Most of what we figured out was based on the guide KVM VGA-Passthrough using the new vfio-vga support in kernel =>3.9 written by user nbhs. However, this guide is intended for Arch Linux, so there were some things we had to change in order for everything to work in Ubuntu. In addition to the guide above, we also heavily used the following sources:

One thing we won't be covering in this guide is the basic installation of Ubuntu and KVM since there are already a number of guides available. Honestly, if you are unable to install Ubuntu and KVM on your own, then this project is likely more advanced than you are ready for. However, one guide we will specifically mention is this KVM/Installation guide that we followed to install KVM.

Hardware requirements

There is actually very little in the way of hardware requirements for doing GPU passthrough with KVM, except that the hardware must be supported by Ubuntu and the CPU and motherboard must support virtualization (Intel VT-x and VT-d, or the AMD equivalents, since an IOMMU is required for passthrough).

One thing we will mention is that our test system is an Intel-based system and that we will be using NVIDIA GeForce GTX cards for passthrough. You can use an AMD CPU and/or GPU but you may have to tweak some of the instructions in this guide. For the sake of completeness, here is our test hardware:

One last thing that we will note is that with Linux there are often many ways to do the same thing. In fact, the methods we will be showing in this guide are very possibly not the most efficient way to do this. So if you have an idea or come across a different way to do something, just give it a shot. If you like it better, be sure to let us know in the comments at the end of this article!

Step 1: Edit the Ubuntu modules and bootloader

As we found on this forum post, since we are using the stock Ubuntu kernel, one thing we will need to do is add a few missing components necessary to load VFIO (Virtual Function I/O). VFIO is required to pass full devices through to a virtual machine, so we need to make sure Ubuntu loads everything it needs. To do this, edit the /etc/modules file with the command sudo gedit /etc/modules and add:

pci_stub
vfio
vfio_iommu_type1
vfio_pci
kvm
kvm_intel 

Next, in order for Ubuntu to load IOMMU properly, we need to edit the Grub cmdline. To do so, enter the command sudo gedit /etc/default/grub to open the grub bootloader file. On the line with "GRUB_CMDLINE_LINUX_DEFAULT", add "intel_iommu=on" to enable IOMMU. On our motherboard, we also needed to add "vfio_iommu_type1.allow_unsafe_interrupts=1" in order to enable interrupt remapping. Depending on your motherboard, this may or may not be necessary. For our system, the boot line looks like: 

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash intel_iommu=on vfio_iommu_type1.allow_unsafe_interrupts=1"

After that, run sudo update-grub to update Grub with the new settings and reboot the system.
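
Although it is not strictly required, a quick sanity check we would suggest after the reboot is to confirm that the VFIO modules loaded and that the IOMMU is actually active (these two commands are our own addition rather than something the guides we followed call for):

lsmod | grep vfio
dmesg | grep -e DMAR -e IOMMU

The first command should list vfio, vfio_iommu_type1, and vfio_pci, and the second should include a line similar to "Intel-IOMMU: enabled". If the IOMMU does not show up as enabled, double-check that VT-d is turned on in your motherboard BIOS before continuing.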

Step 2: Blacklist the NVIDIA cards

NVIDIA cards cannot be used by a virtual machine if the base Ubuntu OS is already using them, so in order to keep Ubuntu from wanting to use the NVIDIA cards we have to blacklist them by adding their IDs to the initramfs. Note that you do not want to do this for your primary GPU unless you are prepared to continue the rest of this guide through SSH or some other method of remote console. Credit for this step goes to the superuser.com user genpfault from this question.

  1. Use the command lspci -nn | grep NVIDIA . If you are using video cards other than NVIDIA, you can simply use lspci -nn and just search through the output to find the video cards. 

    02:00.0 VGA compatible controller [0300]: NVIDIA Corporation GK110B [GeForce GTX Titan Black] [10de:100c] (rev a1)
    02:00.1 Audio device [0403]: NVIDIA Corporation GK110 HDMI Audio [10de:0e1a] (rev a1)
    03:00.0 VGA compatible controller [0300]: NVIDIA Corporation GK110B [GeForce GTX Titan Black] [10de:100c] (rev a1)
    03:00.1 Audio device [0403]: NVIDIA Corporation GK110 HDMI Audio [10de:0e1a] (rev a1)
    04:00.0 VGA compatible controller [0300]: NVIDIA Corporation GK110 [GeForce GTX Titan] [10de:1005] (rev a1)
    04:00.1 Audio device [0403]: NVIDIA Corporation GK110 HDMI Audio [10de:0e1a] (rev a1)

    What we need is actually the ID at the end of each line that we will tell initramfs to blacklist. For our system, the three unique IDs are: 10de:100c, 10de:0e1a, and 10de:1005. Notice that they are not unique IDs per device, but rather per model. Since we have two different models of video cards that we want to pass through to virtual machines (two Titan Blacks and one Titan), we have two different IDs for the GPU. Since both models use the same HDMI audio device, we only have one HDMI ID for all three cards. 

  2. With these IDs in hand, open initramfs-tools/modules with the command sudo gedit /etc/initramfs-tools/modules and add this line (substituting your own IDs):

    pci_stub ids=10de:100c,10de:0e1a,10de:1005
  3. After saving the file, rebuild the initramfs with the command sudo update-initramfs -u and reboot the system.

  4. After the reboot, check that the cards are being claimed by pci-stub correctly with the command dmesg | grep pci-stub. In our case, we should see six devices listed as "claimed by stub". If your devices are not showing up as claimed, first try copy/pasting the IDs directly from the terminal into the modules file, since we found that typing them out sometimes didn't work for some unknown reason.

    [ 1.522487] pci-stub: add 10DE:100C sub=FFFFFFFF:FFFFFFFF cls=00000000/00000000
    [ 1.522498] pci-stub 0000:02:00.0: claimed by stub
    [ 1.522509] pci-stub 0000:03:00.0: claimed by stub
    [ 1.522516] pci-stub: add 10DE:1005 sub=FFFFFFFF:FFFFFFFF cls=00000000/00000000
    [ 1.522521] pci-stub 0000:04:00.0: claimed by stub
    [ 1.522527] pci-stub: add 10DE:0E1A sub=FFFFFFFF:FFFFFFFF cls=00000000/00000000
    [ 1.522536] pci-stub 0000:02:00.1: claimed by stub
    [ 1.522544] pci-stub 0000:03:00.1: claimed by stub
    [ 1.522554] pci-stub 0000:04:00.1: claimed by stub

    Note that all six devices are listed as "claimed by stub".
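
    Another way to double-check (assuming your version of lspci supports the -k flag, which the one shipped with Ubuntu 14.04 should) is to ask lspci which kernel driver each NVIDIA device is currently bound to:

    lspci -nnk | grep -A 3 NVIDIA

    Each device should report a line like "Kernel driver in use: pci-stub". If it instead says nouveau or nvidia, the card is still being claimed by a display driver and the blacklisting did not take effect.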

Step 3: Create VFIO config files

In order to bind the video cards to the virtual machines, we need to create a config file for each virtual machine. To do this, create a .cfg file with the command sudo gedit /etc/vfio-pci#.cfg where # is a unique number for each of your planned virtual machines. Within these files, enter the PCI addresses for the video card you want passed through to that virtual machine. These addresses can be found with the command lspci -nn | grep NVIDIA and are shown at the beginning of each line. Again, if you are not using NVIDIA you can use the messier command lspci -nn and hunt down your video cards. For our setup, we ended up with these three .cfg files:

/etc/vfio-pci1.cfg

0000:02:00.0
0000:02:00.1


/etc/vfio-pci2.cfg

0000:03:00.0
0000:03:00.1


/etc/vfio-pci3.cfg

0000:04:00.0
0000:04:00.1
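
Note that the addresses in these files include the "0000:" PCI domain prefix, which lspci does not print by default. The script in Step 5 feeds each line directly into sysfs, which expects the full domain-prefixed form, so if you want to verify the exact strings you can list the devices there (the bus numbers in the grep below are from our system):

ls /sys/bus/pci/devices/ | grep -e 02:00 -e 03:00 -e 04:00

On our system this returns 0000:02:00.0, 0000:02:00.1, 0000:03:00.0, and so on, which is exactly what goes into the .cfg files.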

Step 4: Create virtual disk(s)

Most of the prep work is done at this point, but before we configure our first virtual machine we first need to create a virtual disk for the virtual machine to use. To do this, repeat the following command for as many virtual machines as you want:

dd if=/dev/zero of=windows#.img bs=1M seek=size count=0


where windows#.img is a unique name for each virtual machine image and size is the size of the image you want in GB * 1000. If you want roughly an 80GB image, enter 80000. We want a 120GB image, so we entered 120000. The .img file will be created in whatever directory you run the command from (your home folder, in our case).
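
For example, the exact command we would run for the first of our 120GB images looks like this (the path simply matches where our script expects the image to live; adjust it for your own setup):

dd if=/dev/zero of=/home/puget/windows1.img bs=1M seek=120000 count=0

Because count=0 is combined with seek, this creates a sparse file, so the image will not actually consume 120GB of disk space until the virtual machine starts writing to it.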

Step 5: Create a script to run each virtual machine

We need to create a very customized virtual machine, which simply is not possible with any GUI-based virtual machine manager for Ubuntu that we know of. Using a script also allows us to bind the video card to VFIO right before running the virtual machine instead of getting into startup scripts like the Arch Linux guide uses. Credit goes to heiko_s on the Ubuntu forums for the nice script below.

What this script does is first bind the video card to VFIO based on the .cfg file we created a few steps back. After that, it launches a virtual machine that uses both the video card we specified and the image we made in the previous step.

To make the script, enter the command sudo gedit /usr/vm# where # is the unique identifier for that virtual machine. Next, copy the script below into this file. Be sure to change anything in bold to match your configuration:

#!/bin/bash

configfile=/etc/vfio-pci#.cfg
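# The config file above lists the PCI addresses to pass through; vfiobind() below
# unbinds each device from whatever driver currently owns it (if any) and registers
# its vendor/device IDs with vfio-pci so that vfio-pci claims it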

vfiobind() {
    dev="$1"
    vendor=$(cat /sys/bus/pci/devices/$dev/vendor)
    device=$(cat /sys/bus/pci/devices/$dev/device)
    if [ -e /sys/bus/pci/devices/$dev/driver ]; then
        echo $dev > /sys/bus/pci/devices/$dev/driver/unbind
    fi
    echo $vendor $device > /sys/bus/pci/drivers/vfio-pci/new_id
}

modprobe vfio-pci
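# vfio-pci is now loaded; the loop below runs vfiobind on every address in the
# config file (lines starting with # in the config file are skipped)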

cat $configfile | while read line;do
    echo $line | grep ^# >/dev/null 2>&1 && continue
    vfiobind $line
done
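
# Launch the VM: q35 chipset, 4GB RAM, host CPU passthrough, and a PCIe root port (ioh3420)
# carrying the passed-through GPU and its HDMI audio function; the raw disk image and the
# Windows install ISO are attached as IDE devices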

sudo qemu-system-x86_64 -enable-kvm -M q35 -m 4096 -cpu host \
-smp 4,sockets=1,cores=4,threads=1 \
-bios /usr/share/qemu/bios.bin -vga none \
-device ioh3420,bus=pcie.0,addr=1c.0,multifunction=on,port=1,chassis=1,id=root.1 \
-device vfio-pci,host=02:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on \
-device vfio-pci,host=02:00.1,bus=root.1,addr=00.1 \
-drive file=/home/puget/windows#.img,id=disk,format=raw -device ide-hd,bus=ide.0,drive=disk \
-drive file=/home/puget/Downloads/Windows.iso,id=isocd -device ide-cd,bus=ide.1,drive=isocd \
-boot menu=on

exit 0


Be sure to edit the # to be the unique identifier for this virtual machine and that the "/etc/vfio-pci#.cfg" file corresponds to the PCI addresses in the "-device vfio-pci" lines. You may also want to edit the amount of RAM the virtual machine will get ("-m 4096" will give 4096MB or 4GB of RAM) and the number of CPU cores and sockets ("-smp 4,sockets=1,cores=4,threads=1" will give a single socket, 4 core vCPU without hyperthreading). 
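
For example, to give a virtual machine 8GB of RAM and a 6-core vCPU instead, you would change "-m 4096" to "-m 8192" on the first qemu-system-x86_64 line and change the -smp line to:

-smp 6,sockets=1,cores=6,threads=1 \

Everything else in the script can stay the same.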

One additional thing you can do is directly mount an ISO of whatever OS you want to install. The ISO we used was named Windows.iso and is located in our Downloads folder. Simply change this location to point to whatever ISO you want to install from.

Once the script is configured how you want it, save it then enter the command sudo chmod 755 /usr/vm# to make the script executable.

Step 6: Start the virtual machine

At this point, everything should be configured to allow the video card to be properly passed through to the virtual machine. Give the system one more reboot just to be sure everything took correctly and plug a monitor into the video card you have set to be passed through. Start the virtual machine with the command sudo /usr/vm# where # is the unique identifier for that virtual machine. If everything was done properly, a black window titled "QEMU" should show up in Ubuntu and you should get a display on your virtual machine's monitor. However, don't be surprised or disappointed if you get an error.

If you get an error, go back through this guide to make sure you didn't miss anything. If you are sure you didn't miss anything, then it is probably a problem unique to your hardware. Unfortunately, all we can really say is "good luck" and have fun googling the error you are getting. Most likely there is something slightly different about your hardware that requires a slightly different setup and configuration. Such are the joys of Linux. Luckily, Ubuntu and Linux in general have a very active community, so you are very likely to find the solution to your error if you do enough digging.

Step 7: Add USB support

Getting an NVIDIA GeForce card passed through to a virtual machine is great, but we still need a way to actually install and use an OS on the virtual machine. To do this, we need to add USB support to the virtual machine. In our opinion, the best way to do this is to simply pass through an entire USB controller, much like what we just did with a video card. However, we have found that some USB controllers simply don't like to be used as a passthrough device. If that happens, you will need to pass through individual USB devices.

USB Controller Pass-through

To pass through an entire USB controller, first use lspci -nn | grep USB to find the PCI address of the USB controller you want to pass through. Then, add the address to your /etc/vfio-pci#.cfg file as a new line just like what we did earlier for the video card. 
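
For example, if we wanted to give our first virtual machine the USB controller at 00:1d.0 (the same address used in the example below; yours will almost certainly differ), /etc/vfio-pci1.cfg would end up looking like:

0000:02:00.0
0000:02:00.1
0000:00:1d.0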

Next, add the controller to your virtual machine script file with the command sudo gedit /usr/vm# . To do this, add the following line:

-device vfio-pci,host=00:1d.0,bus=pcie.0 \

Just replace the 00:1d.0 with the address of your USB controller. If you are lucky, it will work without a hitch. If you are not lucky, there are a number of reasons you may not be able to pass through that specific controller.

If you get an error, you might simply try a different controller. On our system, we were able to pass through the USB 2.0 controllers without a problem, but could not get the USB 3.0 controllers to work due to a problem with there being additional devices in their IOMMU group. We were unable to solve that issue, so for our system we ended up passing through individual USB devices instead of the entire controller.
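
If you want to see what else shares an IOMMU group with a controller (which is usually why a passthrough attempt fails), you can list the group's members through sysfs, assuming the IOMMU was enabled as in Step 1. The address below is just our 00:1d.0 example:

ls /sys/bus/pci/devices/0000:00:1d.0/iommu_group/devices/

If anything other than the controller itself (and perhaps an upstream PCI bridge) shows up in that list, the group will probably give you the same sort of trouble we ran into with our USB 3.0 controllers.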

USB Device Pass-through

If you run into problems passing through an entire USB controller that you cannot solve, the other option is to pass through individual USB devices. This is actually easier in many ways, but USB device addresses like to change randomly so you may find that you need to edit the virtual machine script any time you reboot the machine or add/change a USB device.

To start, use the command lsusb to show the USB devices currently connected to your system. In our case, we are creating three virtual machines so we have three additional sets of keyboards and mice plugged in. The relevant part of our lsusb output looks like:

Bus 002 Device 017: ID 045e:00cb Microsoft Corp. Basic Optical Mouse v2.0
Bus 002 Device 016: ID 045e:00cb Microsoft Corp. Basic Optical Mouse v2.0
Bus 002 Device 015: ID 045e:00cb Microsoft Corp. Basic Optical Mouse v2.0
Bus 002 Device 013: ID 045e:07f8 Microsoft Corp.
Bus 002 Device 014: ID 045e:07f8 Microsoft Corp.
Bus 002 Device 011: ID 045e:07f8 Microsoft Corp.

Most guides for KVM will tell you to use the ID (like 045e:00cb) to pass through USB devices, and since the ID doesn't change between reboots it is the more reliable method, so use it if you can. Note, however, that the ID is unique per model, not per device, so if you have multiple devices of the same model (as we do) you have to use the bus and device numbers instead. To do this, add one of the following lines to your /usr/vm# script for the USB devices you want to pass through to the virtual machine.

By ID:

-usb -usbdevice host:045e:00cb -usbdevice host:045e:07f8 \


By bus and device:

-usb -device usb-host,hostbus=2,hostaddr=17 -device usb-host,hostbus=2,hostaddr=13 \


Be sure to change the parts in bold to match your hardware. If you find that your USB device no longer works either randomly or after a reboot, rerun lsusb to find out if the device number has changed.
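
For example, since all of our extra keyboards and mice are Microsoft devices with vendor ID 045e (that ID is specific to our hardware), a quick way to re-check their bus and device numbers after a reboot is:

lsusb | grep 045e

Then update the hostaddr values in the script to match whatever device numbers are listed.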

Congratulations! You are done!

There are plenty of other options in KVM that you can experiment with, but at this point you should have a virtual machine (or multiple virtual machines) up and running - each with their own dedicated video card and keyboard/mouse. Simply install your OS of choice and enjoy your multiheaded gaming system!

If you are interested in how well this works or want to find out more about how this could be used in the real world, be sure to check out our Multi-headed VMWare Gaming Setup article.

One thing we will say is that after using both VMWare ESXI and Ubuntu+KVM to make a multiheaded gaming PC, VMWare is by far the easier and more reliable method. Things like being able to pass through all of our USB controllers without any problems, and the vSphere client letting us easily administer the virtual machines over the network, made VMWare much easier to use. It is limited to AMD Radeon and NVIDIA Quadro cards, but even with that limitation it is still the method we would recommend if you are planning on building a multiheaded gaming PC.


Tags: Ubuntu, KVM, PCI passthrough, virtualization, virtual machine, multi-head, gaming
deehems

Dope. I can't wait to try this. wish me luck

Posted on 2014-09-21 05:04:43
froyomuffin

First of all, great guide! I'm having a little trouble, however. I'm getting a code 12 in windows. Have you encountered this issue?

Posted on 2014-09-23 11:25:39
mmm

Do you remember which driver version you were using or does it still work with the latest drivers? I'm trying it with very similar hardware - two GTX 780ti though instead of Titans and I keep getting code 43.

Posted on 2014-10-18 18:56:50
Paperino

Quick question: is the BIOS output during boot sent to the QEMU window or to the actual attached monitor?

Posted on 2014-10-21 23:25:24

We don't have this built up anymore, so I can't confirm it, but I believe the BIOS output of the virtual machines is not displayed on the attached monitor (so it should be through the QEMU window). The BIOS POST was so quick, and it wasn't something we paid much attention to, so I'm really not 100% confident in that answer.

Sorry I can't give you a confirmed answer.

Posted on 2014-10-21 23:38:57
MCon

I have a much simpler setup (internal iHD4600 GPU for main win + a single nVidia GTX770), but I'm unable to convince Linux to leave the GTX card alone!
Nouveau is loaded WAY before pci-stub, which manages to "claim" only the audio part of the board.
I also tried setting the iGPU as "primary" in the BIOS, but to no avail.

Can someone, please, help?

Posted on 2014-11-04 16:38:27
Menace

I feel like I'm so close. I get the QEMU window, but no output on the other screen. I've checked the other outputs on the video card, I hear the fans spin up when I launch the VM...

The card works, I had it running Windows 8.1 bare-metal before (bleh) so I know it isn't that. My devices seem to be getting marked as stub (single-headed for my current setup, so I only show two devices..)

Also no error messages to help me find out what's going wrong. Card is a GeForce GTX 780.

Any ideas?

Posted on 2014-12-06 06:28:59
bubba

I was in a similar situation.
Windows didn't know what to do with the GPU that had been passed through. So I added an emulated VGA card and used VNC to view the output of the emulated card, letting me install Windows and get the GPU drivers up and running properly.

I added
-vnc :0 -vga vmware

to the vmcreation script

VNC'd into the server from my laptop (or from the host if you have a GUI installed), downloaded Catalyst, installed it, shut it down, removed the -vga vmware option and then restarted the Windows VM - it loaded up fine with a monitor connected to the GPU being passed through.

all working now

Posted on 2015-02-07 19:43:06
Menace

Thanks! I've torn it down for right now, but I still have the files. I'll have to try this soon. I'll let you all know if it fixes the issue for me.

Posted on 2015-02-08 17:03:43

Thanks for writing this article on VGA passthrough using KVM on Ubuntu. It should work likewise with Linux Mint, so I hope you don't mind if I link to it in my Xen VGA passthrough how-to here: http://forums.linuxmint.com/vi....

With regard to using Nvidia cards for the VM, I would be interested to see if there are Nvidia cards outside the Quadro series that work with KVM. By the way, I believe not all Quadro cards are suitable, for example the Quadro 600 didn't work for me when trying with Xen. I understand that only the "Multi-OS" types will work, that is the Quadro 2000 and higher.

Posted on 2014-12-16 18:49:12
zack

Did everything in the tutorial on a newly setup machine (just built for testing this) but I get this when trying to execute the VM script:

qemu-system-x86_64: -device vfio-pci,host=01:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on: vfio: error no iommu_group for device
qemu-system-x86_64: -device vfio-pci,host=01:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on: Device initialization failed.
qemu-system-x86_64: -device vfio-pci,host=01:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on: Device 'vfio-pci' could not be initialized

Posted on 2014-12-18 11:39:08
Rick Wuijster

Hey guys, I have everything installed now, but when I start the virtual machine I get an error: "could not find /usr/share/qemu/bios.bin". The folder /usr/share/qemu exists, but there is no file called bios.bin; there are a lot of other files though. How can I fix this? Pls help

Posted on 2014-12-19 15:47:18
Bas van Langevelde

Question: How Would I be able to pass through an entire hard disk to a specific vm?
My Ubuntu installation runs on a hard disk but I would like the Win8 VM to run on my SSD.

Posted on 2015-01-11 10:40:50
Justin Swanson

Been looking into this for the wife's photo editing. I'm trying to see what are the requirements for the video cards. Anyone have a quick and simple answer for what to look for in video cards? I was looking at xen and it has a much more restrictive list.

Posted on 2015-01-20 05:24:06
Rick Wuijster

If I start my virtual machine I don't see Windows starting on my other screen, nor do I see SeaBIOS. But I do not get any errors. Does anyone know how this is possible and/or how to fix this?

Posted on 2015-02-05 08:21:55
Foreigner

Hey, I installed 15.04 Alpha with QEMU 2.2 today and got the same problem with the bios.bin in SeaBIOS. I deleted the "-vga none" option in the line "-bios /usr/share/qemu/bios.bin -vga none \", installed Windows 8.1 through the QEMU window and installed the GeForce driver for the GTX 780 Ti.
Then I rebooted the machine and saw the nice yellow warning with fault code 43, but I had already added "-cpu host,kvm=off" before. ... As an experiment I added the "-vga none" option again and there was the video output! I have no idea why, but it works. And the card was fully detected. No code 43.

I will test it with 14.10 tomorrow. That contains QEMU 2.1.x instead of 2.2.
14.04 only has QEMU version 2.0.x.

p.s.
You can only use "-cpu host,kvm=off" if you have installed QEMU 2.1 or later. See here:
https://www.redhat.com/archive...

Or you can use an old driver. See here:
http://lists.gnu.org/archive/h...

Posted on 2015-02-05 23:12:40
Rick Wuijster

Hi, I have the same error and followed the steps you described: delete the "-vga none" option. You spoke about "-cpu host,kvm=off", and you said it only works with QEMU 2.1. I have QEMU 2.0 installed. How can I update QEMU to 2.1 or later? Do I need to completely install a new version of Ubuntu, or do I need to update KVM? Please help me.
Thank you

Posted on 2015-02-21 14:05:20
Achim

Great Guide. Thank You.

I followed all the steps described in the guide but I'm having some problems.
My used hardware is as following:

Motherboard: X9DA7 X9DAE - Supermicro
CPU: 2x Intel Xeon CPU E5-2670
VGA 1: GeForce GTX 670
VGA 2 (Pass through): Nvidia Quadro FX1800 or
VGA 2 (Pass through): Nvidia GeForce 8800 GTS

The OS and kernel version used:
Ubuntu 14.04 64-Bit
Kernel: 3.18.4-031804

I didn't patch the kernel.

the command lspci -nn | grep NVIDIA results in:

02:00.0 VGA compatible controller [0300]: NVIDIA Corporation GK104 [GeForce GTX 670] [10de:1189] (rev a1)
02:00.1 Audio device [0403]: NVIDIA Corporation GK104 HDMI Audio Controller [10de:0e0a] (rev a1)
83:00.0 VGA compatible controller [0300]: NVIDIA Corporation G80 [GeForce 8800 GTS] [10de:0193] (rev a2)

the command: dmesg | grep pci-stub results in:

[ 1.145601] pci-stub: add 10DE:0193 sub=FFFFFFFF:FFFFFFFF cls=00000000/00000000
[ 1.145636] pci-stub 0000:83:00.0: claimed by stub
[ 17.176633] pci-stub 0000:83:00.0: enabling device (0000 -> 0003)

qemu starts without error but I get no signal on the monitor plugged to VGA 2.

My ideas:

- Are the two VGA 2 cards I used compatible with passthrough? (I haven't found a list yet)
- Is it absolutely necessary to patch the kernel with the 'acs override patch'?

Anybody have any ideas?

Posted on 2015-02-17 09:29:33
Jacob Eriksson

So I managed to get it half working.
The VM runs, but it won't use the other screen, and I have to change '-vga none' to '-vga vmware' for it to actually render anything.

Posted on 2015-02-19 17:20:38
audioserf

Giving this a shot today with a GTX 660 and 970. Hope all goes well. Cheers.

Posted on 2015-02-23 23:26:25
hallona

Any luck with a GTX 660?
I would buy one if it works!

Posted on 2015-06-04 09:36:36
Bent

After going through the tutorial I am getting the following error:

qemu-system-x86_64: -device vfio-pci,host=01:00.1,bus=root.1,addr=00.1: vfio: error getting device 0000:01:00.1 from group 1: No such device

Verify all devices in group 1 are bound to vfio-pci or pci-stub and not already in use

qemu-system-x86_64: -device vfio-pci,host=01:00.1,bus=root.1,addr=00.1: vfio: failed to get device 0000:01:00.1

qemu-system-x86_64: -device vfio-pci,host=01:00.1,bus=root.1,addr=00.1: Device initialization failed.

qemu-system-x86_64: -device vfio-pci,host=01:00.1,bus=root.1,addr=00.1: Device 'vfio-pci' could not be initialized

I definitely blacklisted all NVIDIA components:

lspci listed me these:

01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GM107 [GeForce GTX 750 Ti] [10de:1380] (rev a2)

01:00.1 Audio device [0403]: NVIDIA Corporation Device [10de:0fbc] (rev a1)

dmesg | grep pci-stub gives me this:

[ 0.526239] pci-stub: add 10DE:1380 sub=FFFFFFFF:FFFFFFFF cls=00000000/00000000

[ 0.526258] pci-stub 0000:01:00.0: claimed by stub

[ 0.526262] pci-stub: add 10DE:0FBC sub=FFFFFFFF:FFFFFFFF cls=00000000/00000000

[ 0.526266] pci-stub 0000:01:00.1: claimed by stub

So yeah, what am I missing out here?

Posted on 2015-03-18 16:34:54
Carter Hill

I just posted about pretty much the same error as well. Have you had any luck with it yet?

Posted on 2015-04-19 17:39:23
Isaac Peek

Attempted this yesterday. I'm running all AMD (FX 6300, HD 7870 as guest, R7 240 as host), and when I start my VM, even if I don't press F12 for boot options or select anything in the boot menu, I get the following:

qemu-system-x86_64: vfio_bar_write(,0xd2ea, 0x0, 1) failed: Device or resource busy
qemu-system-x86_64: vfio_bar_write(,0xd2e9, 0x0, 1) failed: Device or resource busy
qemu-system-x86_64: vfio_bar_write(,0xd2e8, 0x0, 1) failed: Device or resource busy
qemu-system-x86_64: vfio_bar_write(,0xd2ef, 0x0, 1) failed: Device or resource busy

Anyone else had that? Know what might be going on? I found one other post on the Arch Linux thread about it and they had an issue with their Intel drivers (host CPU/GPU). Didn't really find their solution though.

Posted on 2015-07-13 17:39:22

If anyone is looking for the expired forum thread mentioned in the beginning, "KVM VGA-Passthrough using the new vfio-vga support in kernel =>3.9":

http://webcache.googleusercont...

Posted on 2015-08-08 05:39:10
swe3tdave

I had to blacklist the radeon driver because it was loading before pci-stub... but now it works fine with my Radeon 5450. Now, if only my motherboard didn't auto shut off every time I try using two video cards...

Posted on 2015-09-14 11:53:07
MOV EAX, MANVIR

I've got my R9 280 successfully assigned to the VM and installed the drivers, but the only problem is I get very low fps. Any ideas as to why, or a solution?

Posted on 2015-10-20 23:02:54
Roger Lawhorn

I am trying this. It all seems good except that I cannot get pci_stub to claim the NVIDIA video card. I have tried adding pci_stub to my boot line to get it loading early and it still will not grab anything. Any ideas on this? This is holding up the whole show. I am attempting to run Fallout 4 without rebooting into Windows 7. I need VGA passthrough working for this.

Posted on 2015-11-13 00:09:01
fede

I get these 3 errors, can you help me find a solution?
qemu-system-x86_64: -device vfio-pci,host=01:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on: vfio: error no iommu_group for device

qemu-system-x86_64: -device vfio-pci,host=01:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on: Device initialization failed.

qemu-system-x86_64: -device vfio-pci,host=01:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on: Device 'vfio-pci' could not be initialized

Posted on 2016-04-02 03:05:46
bar

I have the same error.
haven't figured out the solution yet..

Posted on 2016-04-24 11:59:00
pete

Any plans to update this guide for 16.04 server?

Posted on 2016-07-21 19:40:55
Roflrogue

Do you need more than one graphics card for this to work?

Posted on 2016-08-14 13:24:16
Christopher Gibb

Hey, just wanted to say thanks for writing this article. After a couple of days messing around I've managed to get my setup running.
For anybody out there banging their head off of this I include my spec to aid in deciding if this can be done for you - i7-5820K, ASRock X99 Killer Fatal1ty, GeForce GTX 750, GeForce GTX 950. At the moment the 750 is passed through but I'm planning on swapping them around. All the info that was needed was in the article and buried in these comments + links. Again, thanks to you Matt for writing this article and thanks to all those who commented for sharing their fixes. It's also worth a mention that I'm migrating from VMWare Workstation and that I didn't even need to do anything to the .vmdk file - KVM boots it no bother! I'm so happy my VM has now got its own GPU with the correct nVidia drivers. I have some small issues to iron out with networking performance and I need to figure out how to suspend/resume the VM (all the docs I've seen so far are for the XML method). But all in all very satisfied with this.

Posted on 2016-10-02 18:08:17
Wilco Engelsman

Thanks mate, got it working within a few hours. Nvidia 1070 user here, with Ubuntu 16.04.
I had to change some parts of the script and had to follow some different steps in order to get it working.
I had to make sure the nvidia driver was not installed in X (anymore). Otherwise the blacklisting would not work.
I had to change some settings in the BIOS (use EFI BIOS).
I had to change some parameters with QEMU.

Installing windows 10 now.

Posted on 2016-11-28 12:27:41