
Multiheaded NVIDIA Gaming using Ubuntu 14.04 + KVM

Written on August 1, 2014 by Matt Bach

Introduction

We recently published the article Multi-headed VMWare Gaming Setup where we used VMWare ESXI to run four virtual gaming machines from a single PC. Each virtual machine had its own dedicated GPU and USB controller so there was absolutely no input or display lag while gaming. The setup worked great and the article was very popular, but one limitation we found was that NVIDIA GeForce cards cannot be used as passthrough devices in VMWare ESXI. We received feedback from some readers that GeForce cards should work in Linux with KVM (Kernel-based Virtual Machine), so we set out to make a GeForce-based multiheaded gaming PC using Ubuntu 14.04 and KVM.

What we found is that while it is completely possible, getting GPU passthrough to work in a Linux distribution like Ubuntu was not as simple as following a single guide. Once we figured out the process it wasn't too bad, but most of the guides we found are written for Arch Linux rather than Ubuntu. And while both are Linux distributions, there are some differences that made certain key portions of the guide not directly applicable to Ubuntu. Since we already spent the effort of figuring out how to get our multiheaded gaming system working in Ubuntu, we decided to write our own guide based on what we were able to piece together from various sources.

Most of what we figured out was based on the guide KVM VGA-Passthrough using the new vfio-vga support in kernel =>3.9 written by user nbhs. However, this guide is intended for Arch Linux, so there were some things we had to change in order for everything to work in Ubuntu. In addition to the guide above, we also heavily used the following sources:

One thing we won't be covering in this guide is the basic installation of Ubuntu and KVM since there are already a number of guides available. Honestly, if you are unable to install Ubuntu and KVM on your own, this project is likely more advanced than you are ready for. However, one guide we will specifically mention is this KVM/Installation guide that we followed to install KVM.

Hardware requirements

There are actually very few hardware requirements for doing GPU passthrough with KVM, beyond the hardware being supported by Ubuntu and the CPU and motherboard supporting virtualization.
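
If you are not sure whether your CPU and motherboard qualify, the cpu-checker package provides a quick sanity check (a minimal sketch; note that kvm-ok only verifies hardware virtualization itself, while VT-d/IOMMU for passthrough must additionally be enabled in your BIOS):

sudo apt-get install cpu-checker
sudo kvm-ok

On a working system, kvm-ok should report that /dev/kvm exists and that KVM acceleration can be used.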

One thing we will mention is that our test system is an Intel-based system and that we will be using NVIDIA GeForce GTX cards for passthrough. You can use an AMD CPU and/or GPU, but you may have to tweak some of the instructions in this guide. For the sake of completeness, here is our test hardware:

One last thing that we will note is that with Linux there are often many ways to do the same thing. In fact, the methods we will be showing in this guide are very possibly not the most efficient way to do this. So if you have an idea or come across a different way to do something, just give it a shot. If you like it better, be sure to let us know in the comments at the end of this article!

Step 1: Edit the Ubuntu modules and bootloader

As we found on this forum post, since we are using the stock Ubuntu kernel, one thing we will need to do is add a few missing components necessary to load VFIO (Virtual Function I/O). VFIO is required to pass full devices through to a virtual machine, so we need to make sure Ubuntu loads everything it needs. To do this, edit the /etc/modules file with the command sudo gedit /etc/modules and add:

pci_stub
vfio
vfio_iommu_type1
vfio_pci
kvm
kvm_intel 
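
If you would rather stay in a terminal than use gedit, something like this appends the same lines in one shot (a sketch that assumes none of the entries are already present in /etc/modules):

printf '%s\n' pci_stub vfio vfio_iommu_type1 vfio_pci kvm kvm_intel | sudo tee -a /etc/modules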

Next, in order for Ubuntu to load IOMMU properly, we need to edit the Grub cmdline. To do so, enter the command sudo gedit /etc/default/grub to open the grub bootloader file. On the line with "GRUB_CMDLINE_LINUX_DEFAULT", add "intel_iommu=on" to enable IOMMU. On our motherboard, we also needed to add "vfio_iommu_type1.allow_unsafe_interrupts=1" in order to enable interrupt remapping. Depending on your motherboard, this may or may not be necessary. For our system, the boot line looks like: 

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash intel_iommu=on vfio_iommu_type1.allow_unsafe_interrupts=1"

After that, run sudo update-grub to update Grub with the new settings and reboot the system.
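
Once the system is back up, you can sanity-check that IOMMU actually came up by searching the kernel log (on Intel systems the relevant messages mention DMAR; the exact wording varies by kernel version):

dmesg | grep -e DMAR -e IOMMU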

Step 2: Blacklist the NVIDIA cards

NVIDIA cards cannot be used by a virtual machine if the base Ubuntu OS is already using them, so in order to keep Ubuntu from wanting to use the NVIDIA cards we have to blacklist them by adding their IDs to the initramfs. Note that you do not want to do this for your primary GPU unless you are prepared to continue the rest of this guide through SSH or some other method of remote console. Credit for this step goes to the superuser.com user genpfault from this question.

  1. Use the command lspci -nn | grep NVIDIA. If you are using video cards other than NVIDIA, you can simply use lspci -nn and just search through the output to find the video cards.

    02:00.0 VGA compatible controller [0300]: NVIDIA Corporation GK110B [GeForce GTX Titan Black] [10de:100c] (rev a1)
    02:00.1 Audio device [0403]: NVIDIA Corporation GK110 HDMI Audio [10de:0e1a] (rev a1)
    03:00.0 VGA compatible controller [0300]: NVIDIA Corporation GK110B [GeForce GTX Titan Black] [10de:100c] (rev a1)
    03:00.1 Audio device [0403]: NVIDIA Corporation GK110 HDMI Audio [10de:0e1a] (rev a1)
    04:00.0 VGA compatible controller [0300]: NVIDIA Corporation GK110 [GeForce GTX Titan] [10de:1005] (rev a1)
    04:00.1 Audio device [0403]: NVIDIA Corporation GK110 HDMI Audio [10de:0e1a] (rev a1)

    What we actually need is the ID at the end of each line, which is what we will tell initramfs to blacklist. For our system, the three unique IDs are: 10de:100c, 10de:0e1a, and 10de:1005. Notice that they are not unique per device, but rather per model. Since we have two different models of video card that we want to pass through to virtual machines (two Titan Blacks and one Titan), we have two different GPU IDs. Since both models use the same HDMI audio device, we only have one HDMI audio ID for all three cards.

  2. With these IDs in hand, open initramfs-tools/modules with the command sudo gedit /etc/initramfs-tools/modules and add this line (substituting the IDs for the ones from your system):

    pci_stub ids=10de:100c,10de:0e1a,10de:1005
  3. After saving the file, rebuild the initramfs with the command sudo update-initramfs -u and reboot the system.

  4. After the reboot, check that the cards are being claimed by pci-stub correctly with the command dmesg | grep pci-stub. In our case, we should have 6 devices listed as "claimed by stub". If your devices are not showing up as claimed, first try copy/pasting the IDs directly from the terminal into the modules file, since we found that typing them out sometimes didn't work for some unknown reason.

    [ 1.522487] pci-stub: add 10DE:100C sub=FFFFFFFF:FFFFFFFF cls=00000000/00000000
    [ 1.522498] pci-stub 0000:02:00.0: claimed by stub
    [ 1.522509] pci-stub 0000:03:00.0: claimed by stub
    [ 1.522516] pci-stub: add 10DE:1005 sub=FFFFFFFF:FFFFFFFF cls=00000000/00000000
    [ 1.522521] pci-stub 0000:04:00.0: claimed by stub
    [ 1.522527] pci-stub: add 10DE:0E1A sub=FFFFFFFF:FFFFFFFF cls=00000000/00000000
    [ 1.522536] pci-stub 0000:02:00.1: claimed by stub
    [ 1.522544] pci-stub 0000:03:00.1: claimed by stub
    [ 1.522554] pci-stub 0000:04:00.1: claimed by stub

    Note that all six devices are listed as "claimed by stub".

Step 3: Create VFIO config files

In order to bind the video cards to the virtual machines, we need to create a config file for each virtual machine. To do this, create a .cfg file with the command sudo gedit /etc/vfio-pci#.cfg where # is a unique number for each of your planned virtual machines. Within these files, enter the PCI address for the video card you want to have passed through to the virtual machine. These addresses can be found with the command lspci -nn | grep NVIDIA and are shown at the beginning of each line. Again, if you are not using NVIDIA you can use the messier command lspci -nn and hunt down your video cards. For our setup, we ended up with these three .cfg files:

/etc/vfio-pci1.cfg

0000:02:00.0
0000:02:00.1


/etc/vfio-pci2.cfg

0000:03:00.0
0000:03:00.1


/etc/vfio-pci3.cfg

0000:04:00.0
0000:04:00.1

Step 4: Create virtual disk(s)

Most of the prep work is done at this point, but before we configure our first virtual machine we need to create a virtual disk for it to use. To do this, repeat the following command for as many virtual machines as you want:

dd if=/dev/zero of=windows#.img bs=1M seek=size count=0


where windows#.img is a unique name for each virtual machine image and size is the image size you want in MB (your target size in GB multiplied by 1000). If you want roughly an 80GB image, enter 80000. We wanted a 120GB image, so we entered 120000. By default, this .img file will be created in your /home/user folder.
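
For example, the command for our first 120GB image would look like this (using seek with count=0 creates a sparse file, so it does not immediately consume the full 120GB of disk space):

dd if=/dev/zero of=windows1.img bs=1M seek=120000 count=0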

Step 5: Create a script to run each virtual machine

We need to be able to create a very custom virtual machine, which simply is not possible with any GUI-based virtual machine manager in Ubuntu that we know of. Using a script also allows us to bind the video card to VFIO right before running the virtual machine instead of getting into startup scripts like the Arch Linux guide uses. Credit goes to heiko_s on the Ubuntu forums for the nice script below.

What this script does is first bind the video card to VFIO based on the .cfg file we created a few steps back. After that, it creates a virtual machine that uses both the video card we specify and the image we made in the previous step.

To make the script, enter the command sudo gedit /usr/vm# where # is the unique identifier for that virtual machine. Next, copy the script below into this file. Be sure to change anything in bold to match your configuration:

#!/bin/bash

configfile=/etc/vfio-pci#.cfg

# Unbind each passthrough device from its current driver (if any) and
# register its vendor/device ID with vfio-pci so vfio-pci claims it.
vfiobind() {
    dev="$1"
    vendor=$(cat /sys/bus/pci/devices/$dev/vendor)
    device=$(cat /sys/bus/pci/devices/$dev/device)
    if [ -e /sys/bus/pci/devices/$dev/driver ]; then
        echo $dev > /sys/bus/pci/devices/$dev/driver/unbind
    fi
    echo $vendor $device > /sys/bus/pci/drivers/vfio-pci/new_id
}

modprobe vfio-pci

# Bind every PCI address listed in the config file (lines starting
# with # are skipped as comments).
cat $configfile | while read line;do
    echo $line | grep ^# >/dev/null 2>&1 && continue
    vfiobind $line
done

# Launch the virtual machine with the video card passed through.
sudo qemu-system-x86_64 -enable-kvm -M q35 -m 4096 -cpu host \
-smp 4,sockets=1,cores=4,threads=1 \
-bios /usr/share/qemu/bios.bin -vga none \
-device ioh3420,bus=pcie.0,addr=1c.0,multifunction=on,port=1,chassis=1,id=root.1 \
-device vfio-pci,host=02:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on \
-device vfio-pci,host=02:00.1,bus=root.1,addr=00.1 \
-drive file=/home/puget/windows#.img,id=disk,format=raw -device ide-hd,bus=ide.0,drive=disk \
-drive file=/home/puget/Downloads/Windows.iso,id=isocd -device ide-cd,bus=ide.1,drive=isocd \
-boot menu=on

exit 0


Be sure to edit the # to be the unique identifier for this virtual machine, and make sure the "/etc/vfio-pci#.cfg" file corresponds to the PCI addresses in the "-device vfio-pci" lines. You may also want to edit the amount of RAM the virtual machine will get ("-m 4096" will give 4096MB or 4GB of RAM) and the number of CPU cores and sockets ("-smp 4,sockets=1,cores=4,threads=1" will give a single socket, 4 core vCPU without hyperthreading).
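
For instance, a hypothetical variant of those two lines giving the guest 8GB of RAM and 8 vCPUs (4 cores with hyperthreading) would look like:

-m 8192 \
-smp 8,sockets=1,cores=4,threads=2 \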

One additional thing you can do is directly mount an ISO of whatever OS you want to install. The ISO we used was named Windows.iso and is located in the Downloads folder. Simply change this location to point to whatever ISO you want to install from.

Once the script is configured how you want it, save it then enter the command sudo chmod 755 /usr/vm# to make the script executable.

Step 6: Start the virtual machine

At this point, everything should be configured to allow the video card to be properly passed through to the virtual machine. Give the system one more reboot just to be sure everything took correctly and plug a monitor into the video card you have set to be passed through. Start the virtual machine with the command sudo /usr/vm# where # is the unique identifier for that virtual machine. If everything was done properly, a black window titled "QEMU" should show up in Ubuntu and you should get a display on your virtual machine's monitor. However, don't be surprised or disappointed if you get an error.

If you get an error, go back through this guide to make sure you didn't miss anything. If you are sure you didn't miss anything, then it is probably a problem unique to your hardware. Unfortunately, all we can really say is "good luck" and have some fun googling the error you are getting. Most likely there is something slightly different about your hardware that requires a slightly different setup and configuration. Such are the joys of Linux. Luckily, Ubuntu and Linux in general have very active communities, so you are very likely to find the solution to your error if you do enough digging.

Step 7: Add USB support

Getting an NVIDIA GeForce card to pass through to a virtual machine is great, but we still need a way to actually install and use an OS on the virtual machine. To do this, we need to add USB support to the virtual machine. In our opinion, the best way to do this is to simply pass through an entire USB controller, much like what we just did with a video card. However, we have found that some USB controllers simply don't like to be used as a passthrough. If that happens, you will need to pass through individual USB devices.

USB Controller Pass-through

To pass through an entire USB controller, first use lspci -nn | grep USB to find the PCI address of the USB controller you want to pass through. Then, add the address to your /etc/vfio-pci#.cfg file as a new line just like what we did earlier for the video card. 
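For example, if we added the USB controller at 00:1d.0 (the address on our system; yours will likely differ) to our first virtual machine, /etc/vfio-pci1.cfg would end up looking like:

0000:02:00.0
0000:02:00.1
0000:00:1d.0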

Next, add the controller to your virtual machine script by opening it with the command sudo gedit /usr/vm# and adding the following line:

-device vfio-pci,host=00:1d.0,bus=pcie.0 \

replacing 00:1d.0 with the address of your USB controller. If you are lucky, it will work without a hitch. If you are not lucky, there are a number of reasons you may not be able to pass through that specific controller.

If you get an error, you might simply try a different controller. On our system, we were able to pass through the USB 2.0 controllers without a problem, but could not get the USB 3.0 controllers to work because there were additional devices in their IOMMU group. We were unable to solve that issue, so for our system we ended up passing through individual USB devices instead of the entire controller.
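
If you want to see which devices share an IOMMU group (and would therefore have to be passed through together), a quick way is to list the groups out of sysfs; each printed path contains the group number followed by the PCI address of a member device:

find /sys/kernel/iommu_groups/ -type l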

USB Device Pass-through

If you run into problems passing through an entire USB controller that you cannot solve, the other option is to pass through individual USB devices. This is actually easier in many ways, but USB device addresses like to change randomly so you may find that you need to edit the virtual machine script any time you reboot the machine or add/change a USB device.

To start, use the command lsusb to show the USB devices currently connected to your system. In our case, we are creating three virtual machines so we have three additional sets of keyboards and mice plugged in. The relevant part of our lsusb output looks like:

Bus 002 Device 017: ID 045e:00cb Microsoft Corp. Basic Optical Mouse v2.0
Bus 002 Device 016: ID 045e:00cb Microsoft Corp. Basic Optical Mouse v2.0
Bus 002 Device 015: ID 045e:00cb Microsoft Corp. Basic Optical Mouse v2.0
Bus 002 Device 013: ID 045e:07f8 Microsoft Corp.
Bus 002 Device 014: ID 045e:07f8 Microsoft Corp.
Bus 002 Device 011: ID 045e:07f8 Microsoft Corp.

Most guides for KVM will say to use the ID to pass through USB devices (like 045e:00cb), and the ID is indeed more reliable since it does not change between reboots, so use it if you can. However, the ID is unique by model, not by device, so if you have multiple devices of the same model (as we do) you have to use the bus and device numbers instead. Either way, add one of the following lines to your /usr/vm# script for the USB devices you want to pass through to the virtual machine.

By ID:

-usb -usbdevice host:045e:00cb -usbdevice host:045e:07f8 \


By bus and device:

-usb -device usb-host,hostbus=2,hostaddr=17 -device usb-host,hostbus=2,hostaddr=13 \


Be sure to change the parts in bold to match your hardware. If you find that your USB device no longer works either randomly or after a reboot, rerun lsusb to find out if the device number has changed.
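
If you find yourself doing this often, a small hypothetical helper like the sketch below can look up the current bus and device number for a given vendor:product ID at launch time. It only takes the first match, so with several identical devices you would still need to pick the correct line by hand:

# usbaddr: print a usb-host property string for the first USB device
# matching the given vendor:product ID (e.g. 045e:00cb).
usbaddr() {
    lsusb -d "$1" | head -n 1 | awk '{print "hostbus=" $2+0 ",hostaddr=" $4+0}'
}

# Example: expands to something like -device usb-host,hostbus=2,hostaddr=17
echo "-device usb-host,$(usbaddr 045e:00cb)"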

Congratulations! You are done!

There are plenty of other options in KVM that you can experiment with, but at this point you should have a virtual machine (or multiple virtual machines) up and running - each with their own dedicated video card and keyboard/mouse. Simply install your OS of choice and enjoy your multiheaded gaming system!

If you are interested in how well this works or want to find out more about how this could be used in the real world, be sure to check out our Multi-headed VMWare Gaming Setup article.

One thing we will say is that after using both VMWare ESXI and Ubuntu+KVM to make a multiheaded gaming PC, VMWare was by far the easier and more reliable method. Things like being able to pass through all of our USB controllers without any problems, and having the vSphere client to easily administer the virtual machines over the network, made VMWare much easier to use. It is limited to AMD Radeon and NVIDIA Quadro cards, but even with that limitation it is still the method we would recommend if you are planning on building a multiheaded gaming PC.


Tags: Ubuntu, KVM, PCI passthrough, virtualization, virtual machine, multi-head, gaming
deehems

Dope. I can't wait to try this. wish me luck

Posted on 2014-09-21 05:04:43
froyomuffin

First of all, great guide! I'm having a little trouble, however. I'm getting a code 12 in windows. Have you encountered this issue?

Posted on 2014-09-23 11:25:39

Yea, we've seen a code 12 while doing this. For us, it happened when the cards were not properly blacklisted from the Ubuntu OS. So the virtual machine could see the GPU, but couldn't use it since it was already bound to the base Ubuntu OS.

A code 12 I believe is a bit generic of an error though, so it could be caused by something else as well.

Good luck!

Posted on 2014-09-23 16:37:49
froyomuffin

Mmm that's exactly what I thought. Except it looks like that's not the issue. On a fresh boot, the audio & video devices from NVIDIA have the kernel driver set to pci-stub. After binding vfio, both turn to vfio-pci. Can you think of another way I can check that Ubuntu isn't hogging the PCI? :P

Posted on 2014-09-27 18:23:47

Hmm, I'm not sure what it could be. It's a pretty complex process, and if any of your hardware decides it has an issue with it then you may need to completely change how you do it (or it may not work with your hardware at all). I would check out all the forum links we put in the introduction - all of our guide was taken from those posts, so it is possible there is a step or two you can do differently to try to get it to work.

Posted on 2014-09-29 16:51:00
froyomuffin

I got it working in the end. I switched to Arch since but the solutions should be applicable to Ubuntu. All problems were solved after reading the following guide many many times:
https://bbs.archlinux.org/v...

What I ran into and the solutions:
1. Black screen/no output/code 12:
May happen due to kernel being too old, missing the ACS override patch or missing the i915 VGA arbiter fixes. It's also important to be doing primary passthrough (i.e. no video except directly from the graphics card). In the latest kernels (at least on 3.18+), I did not need the ACS override patch and did not need to use the arbiter fixes as I used OVMF. I would recommend using OVMF to anyone as the ACS override patch breaks DRI and cripples the graphics on the host (software rendering).

2. Code 43:
Bonus issue here after I got the passthrough working. This is likely caused by a "bug" nVidia introduced. If the drivers detect that the OS is virtualized, they fubar. It's known and nVidia won't fix it. Interestingly, this "bug" is not present on their quadro line cards that have passthrough supported officially. To work around that, you need to hide the virtualization from the OS (which, I understand prevents it from some optimizations :C). You do this with -cpu [type],kvm=off. Good document here: http://www.linux-kvm.org/wi...

Posted on 2015-02-14 04:05:36
toxicdav3

It would be interesting to try sli between 2 or more cards within a kvm instance.

Posted on 2014-10-11 15:23:04
mmm

Do you remember which driver version you were using or does it still work with the latest drivers? I'm trying it with very similar hardware - two GTX 780ti though instead of Titans and I keep getting code 43.

Posted on 2014-10-18 18:56:50

I'm not 100% sure, but I believe it was 337.88, although it may have been one or two revisions newer than that. 337.88 was the driver I know we used on the VMWare Gaming article (http://www.pugetsystems.com...) since we did more benchmarking on that system and still had the logs. Since that was only a week or two before we started working on this article, it should be pretty close.

Good luck, I can tell you from experience that getting this to work is tricky. There might be just enough difference between your cards and the ones we used that one or more of our steps might need to be tweaked. The really hard part is figuring out what those tweaks are.

Posted on 2014-10-20 17:46:14
Koadic

Code 43 occurs when Nvidia cards detect that they are in a virtual machine, to fix this you need to use "kvm=off" in your script; for example, "-cpu host,kvm=off". This should be recognized in Qemu versions 1.7 and above iirc.

Posted on 2014-10-27 16:08:46
Paperino

Quick question: is the BIOS output during boot sent to the QEMU window or to the actual attached monitor?

Posted on 2014-10-21 23:25:24

We don't have this built up anymore, so I can't confirm it, but I believe the BIOS output of the virtual machines is not displayed on the attached monitor (so it should be through the QEMU window). The BIOS POST was so quick, and it wasn't something we paid much attention to, so I'm really not 100% confident in that answer.

Sorry I can't give you a confirmed answer.

Posted on 2014-10-21 23:38:57
Koadic

It's on the attached monitor in Ubuntu 14.10 with Qemu 2.1.

Posted on 2014-10-27 16:02:29
Koadic

Anyone else having this issue when you close the VM and then try to start it again?

qemu-system-x86_64: vfio-pci: Cannot read device rom at 0000:03:00.0
Device option ROM contents are probably invalid (check dmesg).
Skip option ROM probe with rombar=0, or load from file with romfile=

The only fix I've discovered is rebooting the host system. I'm assuming the output isn't detaching from the VM upon shutdown.

dmesg says:

vfio-pci 0000:03:00.0: Invalid ROM contents

----edit----
I currently use a PCIe slot that isn't the primary (meaning it is my 2nd graphics card that I use in the VM). Does anyone have this issue when using the primary card? If not then I'll just switch which cards I'm using for the time being.

Posted on 2014-10-27 16:57:19
nbhs

Boot the vm, dump your bios with gpu-z, then use rombar=

Posted on 2014-11-15 10:41:49
nbhs

Sorry, I meant romfile=

Posted on 2014-11-15 10:42:44
MCon

I have a much simpler setup (internal iHD4600 GPU for main win + a single nVidia GTX770), but I'm unable to convince Linux to leave the GTX card alone!
Nouveau is loaded WAY before pci-stub, which manages to "claim" only the audio part of the board.
I also tried setting the iGPU as "primary" in BIOS, but to no avail.

Can someone, please, help?

Posted on 2014-11-04 16:38:27
John

@MCon - you could try to blacklist nouveau in /etc/modprobe.d/blacklist.conf
Example: echo "blacklist nouveau" >> /etc/modprobe.d/blacklist.conf
this should prevent the kernel from loading the module :)

hope this helps..

Posted on 2014-11-07 09:11:44
Crackerjackthe4th

Nouveau is the Open Source driver, isn't it? If so, that means Nouveau loads as soon as the kernel itself loads; unlike the closed drivers, it's built into the kernel itself, which means two things for you:
1) You can't blacklist Nouveau, which means
2) you need to recompile the kernel without support for Nvidia cards.

Posted on 2015-08-30 15:23:12

Do I need 2 monitors for this setup? I'd really like to try it, but i'm lacking one screen if that's required

Posted on 2014-11-19 00:03:03
Dycius

There is a bit of a hackish way to use one monitor.
1. Set the VM to start at boot up of your host.
2. Have monitor plugged into your pass through card to the VM.
3. Pass through your mouse and keyboard.
4. Install putty and XMing on the VM if Windows.
5. Use X forwarding to bring your Linux apps seamlessly on your VM.
6. Success.

Or

1. If you monitor has two inputs, one cable from each card plugged into it.
2. Install synergy network Keyboard/mouse switch to move control between the two computers.
3. Move cursor to other computer and use button on monitor to switch inputs.
4. Success.

Hope this helps.

Posted on 2014-11-29 15:36:35

Thanks! I'll give the first one a try. Currently I am using the second solution, but it's no fun with a 2k screen, since it does not support hdmi1.4 on that slot.

Posted on 2014-11-29 19:53:40
snarfies

Has anyone had any success with IOMMU on onboard drive controllers and the like? My mb has two different controllers, and I've experimented with it a bit, but I can't get the Windows 7 installer to see the disks.

Posted on 2014-12-03 21:26:05
Koadic

I remember having some syntax issues and not being able to have the installer see the disk. I try to keep my scripts in cloud storage (so I can reference them in case I really mess something up) and I believe this one fixed the issue, if not I'll have to boot into linux and find it.

-drive file=/home/name/Downloads/Windows.iso,id=d0,media=cdrom,if=none -device ide-cd,bus=ide.0,drive=d0 \

-drive file="/home/name/Downloads/Windows.img",id=e0,media=disk,format=raw,if=none,cache=none -device ide-hd,bus=ide.1,drive=e0 \

Posted on 2014-12-04 01:31:54
snarfies

Thanks, but that isn't what I'm after: I'm trying to use VFIO to pass through an entire drive controller (and by extension the attached drives), not point to an image file.

I phrased the original question a little poorly.

Posted on 2014-12-05 12:51:02
Menace

I feel like I'm so close. I get the QEMU window, but no output on the other screen. I've checked the other outputs on the video card, I hear the fans spin up when I launch the VM...

The card works, I had it running Windows 8.1 bare-metal before (bleh) so I know it isn't that. My devices seem to be getting marked as stub (single-headed for my current setup, so I only show two devices..)

Also no error messages to help me find out what's going wrong. Card is a GeForce GTX 780.

Any ideas?

Posted on 2014-12-06 06:28:59
bubba

I was in a similar situation.
Windows didn't know what to do with the gpu that had been passed through. So I added an emulated vga card and used VNC to view the output of the emulated card to let me install windows and get the gpu drivers up and running properly.

I added
-vnc :0 -vga vmware

to the vmcreation script

vnc'd into the server from my laptop (or from the host if you have a gui installed) downloaded catalyst, installed it, shut it down, removed the -vga vmware option and then restarted the window vm - it loaded up fine with a monitor connected to the gpu being passed through.

all working now

Posted on 2015-02-07 19:43:06
Menace

Thanks! I've torn it down for right now, but I still have the files. I'll have to try this soon. I'll let you all know if it fixes the issue for me.

Posted on 2015-02-08 17:03:43

It seems that all went well for me, except that even though my GPU is claimed by stub it still appears in Ubuntu's Additional Drivers, as opposed to the APU that should be used. If I try to run the VM, the host system screen gets weird colour patterns and the system freezes half the time, but windows does boot on the other video output (I see the logo or even language selection when the system didn't freeze). What could be causing my card to be used even with stub claiming it?

Posted on 2014-12-07 22:12:04

Thanks for writing this article on VGA passthrough using KVM on Ubuntu. It should work likewise with Linux Mint, so I hope you don't mind if I link to it in my Xen VGA passthrough how-to here: http://forums.linuxmint.com....

With regard to using Nvidia cards for the VM, I would be interested to see if there are Nvidia cards outside the Quadro series that work with KVM. By the way, I believe not all Quadro cards are suitable, for example the Quadro 600 didn't work for me when trying with Xen. I understand that only the "Multi-OS" types will work, that is the Quadro 2000 and higher.

Posted on 2014-12-16 18:49:12
zack

Did everything in the tutorial on a newly setup machine (just built for testing this) but I get this when trying to execute the VM script:

qemu-system-x86_64: -device vfio-pci,host=01:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on: vfio: error no iommu_group for device
qemu-system-x86_64: -device vfio-pci,host=01:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on: Device initialization failed.
qemu-system-x86_64: -device vfio-pci,host=01:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on: Device 'vfio-pci' could not be initialized

Posted on 2014-12-18 11:39:08
CanadianMonkey

Running into an identical problem, were you able to solve this?

Posted on 2014-12-28 03:45:46
Andrew Collett

Hey, having the same issue. Are you guys using Ubuntu Server? I completed this guide in ubuntu desktop, and everything was working well. Played games on the windows vm even. But when I went to set the thing up "finally", on an ubuntu server with no GUI, I ran into the above issue. No solutions yet... Any luck?

Posted on 2015-02-25 20:37:24
Justin Swanson

VT-D isn't enabled at either the bios or software level.

Posted on 2015-05-22 00:12:09
Roger Lawhorn

intel_iommu is not enabled. add that to your boot line. "intel_iommu=on"

Posted on 2015-11-13 06:07:20
Rick Wuijster

Hey guys, i have everything installed now, but when i start the virtual machine I get an error: "could not find /usr/share/qemu/bios.bin". The folder /usr/share/qemu exists, but there is no file called bios.bin, there are a lot of other files though. How can I fix this? Pls help

Posted on 2014-12-19 15:47:18
Foreigner

hey I had the same problem with 14.10. use /usr/share/seabios/bios.bin instead.
it worked for me.

Posted on 2015-02-04 22:18:45
Rick Wuijster

Okay, thanks. I already fixed it :). But I have another problem at the moment.

Posted on 2015-02-05 08:20:28
Ivan Palau Fernandez

First of all, great guide!!

I am having a little trouble blacklisting devices, and hope you can help me solve it. My system is almost the same as the one you used for this guide, except for the graphics cards. I have 4 R9 280X, and I only want to blacklist three of them, and keep one GPU with its HDMI Audio Device for Kubuntu. How could I proceed to do this?

I don't want to go the VMWare ESXi way, because there is no support for Software RAID (not Fake RAID on the MB chipset), and I have 24 hdds attached to an HP SAS Expander, split into three Software RAID volumes managed by Kubuntu, and it's mandatory for me to keep this untouched.

lspci -nn | grep AMD

04:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Tahiti XT [Radeon HD 7970/8970 OEM / R9 280X] [1002:6798]

04:00.1 Audio device [0403]: Advanced Micro Devices, Inc. [AMD/ATI] Tahiti XT HDMI Audio [Radeon HD 7970 Series] [1002:aaa0]

06:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Tahiti XT [Radeon HD 7970/8970 OEM / R9 280X] [1002:6798]

06:00.1 Audio device [0403]: Advanced Micro Devices, Inc. [AMD/ATI] Tahiti XT HDMI Audio [Radeon HD 7970 Series] [1002:aaa0]

0a:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Tahiti XT [Radeon HD 7970/8970 OEM / R9 280X] [1002:6798]

0a:00.1 Audio device [0403]: Advanced Micro Devices, Inc. [AMD/ATI] Tahiti XT HDMI Audio [Radeon HD 7970 Series] [1002:aaa0]

0b:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Tahiti XT [Radeon HD 7970/8970 OEM / R9 280X] [1002:6798]

0b:00.1 Audio device [0403]: Advanced Micro Devices, Inc. [AMD/ATI] Tahiti XT HDMI Audio [Radeon HD 7970 Series] [1002:aaa0]

Posted on 2015-01-07 20:25:33
Guest

Question: How would I be able to pass through an entire hard disk to a specific vm?
My ubuntu installation runs on a hard disk but I would like the Win8 vm to run on my ssd.

Posted on 2015-01-11 10:40:50
Justin Swanson

You should be able to. Where it talks about disks, link to the actual hard drive /dev/sdX

Posted on 2015-05-22 00:13:17
Bas van Langevelde

How do I get the script to attach a physical drive?

Posted on 2015-01-12 15:19:17
Justin Swanson

Been looking into this for the wife's photo editing. I'm trying to see what are the requirements for the video cards. Anyone have a quick and simple answer for what to look for in video cards? I was looking at xen and it has a much more restrictive list.

Posted on 2015-01-20 05:24:06
Rick Wuijster

If i start my virtual machine i don't see windows starting on my other screen, nor do I see seabios. But I do not get any errors. Anyone knows how this is possible and/or knows how to fix this?

Posted on 2015-02-05 08:21:55
Foreigner

Hey, I had installed 15.04 Alpha with QEMU 2.2 today and got the same problem with the bios.bin in seabios. I deleted the "-vga none" trigger in the row "-bios /usr/share/qemu/bios.bin -vga none \", installed windows 8.1 on the QEMU window and installed the Geforce driver for the GTX780Ti.
Then I rebooted the machine and saw the nice yellow warning with fault code 43, but I had written "-cpu host,kvm=off" before. ... Experimentally, I added the "-vga none" trigger again and there was the video output! Have no idea why, but it works. And the card was fully detected. No code 43.

Will test it with 14.10 tomorrow. That contains QEMU 2.1.x instead of 2.2.
14.04 only has QEMU version 2.0.x.

p.s.
You only can use "-cpu host,kvm=off" if you have installed QEMU 2.1 or later. See here:
https://www.redhat.com/arch...

Or you use a old driver. See here:
http://lists.gnu.org/archiv...

Posted on 2015-02-05 23:12:40
Rick Wuijster

Hi, I have the same error and followed the steps you told me: delete the "-vga none" trigger. You spoke about "-cpu host,kvm=off", and you said it only works with QEMU 2.1. I have QEMU 2.0 installed. How can I update QEMU to 2.1 or later? Do I need to completely install a new version of Ubuntu or do I need to update KVM? Please help me.
Thank you

Posted on 2015-02-21 14:05:20
Achim

Great Guide. Thank You.

I followed all the steps described in the guide but am running into some problems.
The hardware used is as follows:

Motherboard: X9DA7 X9DAE - Supermicro
CPU: 2x Intel Xeon CPU E5-2670
VGA 1: GeForce GTX 670
VGA 2 (Pass through): Nvidia Quadro FX1800 or
VGA 2 (Pass through): Nvidia GeForce 8800 GTS

The OS and kernel version used:
Ubuntu 14.04 64-Bit
Kernel: 3.18.4-031804

I didn't patch the kernel.

the command lspci -nn | grep NVIDIA results in:

02:00.0 VGA compatible controller [0300]: NVIDIA Corporation GK104 [GeForce GTX 670] [10de:1189] (rev a1)
02:00.1 Audio device [0403]: NVIDIA Corporation GK104 HDMI Audio Controller [10de:0e0a] (rev a1)
83:00.0 VGA compatible controller [0300]: NVIDIA Corporation G80 [GeForce 8800 GTS] [10de:0193] (rev a2)

the command: dmesg | grep pci-stub results in:

[ 1.145601] pci-stub: add 10DE:0193 sub=FFFFFFFF:FFFFFFFF cls=00000000/00000000
[ 1.145636] pci-stub 0000:83:00.0: claimed by stub
[ 17.176633] pci-stub 0000:83:00.0: enabling device (0000 -> 0003)

qemu starts without error but I get no signal on the monitor plugged to VGA 2.

My ideas:

- Are the two VGA 2 cards I used compatible with passthrough? (I haven't found a list yet.)
- Is it absolutely necessary to patch the kernel with the 'acs override patch'?

Anybody have any ideas?

Posted on 2015-02-17 09:29:33
Jacob Eriksson

For whatever reason, I end up with an error filling the whole terminal window.
http://pastebin.com/faZQ1QFz

Posted on 2015-02-19 17:20:38
marco

You're using an Intel "k" part. This does not support VT-D which is required for PCI passthrough.

Posted on 2015-03-11 21:35:50
Anon

Intel ARK says differently

Posted on 2015-03-20 03:01:19

Even if the CPU supports VT-d the chipset on the motherboard and the BIOS also need to support it. It may be worth looking at that to see if the source of the problem is there.

Posted on 2015-03-20 04:17:29
SXX

For anyone who looks into the comments:

Haswell "K" CPUs didn't support VT-d/IOMMU, but "Haswell Refresh" aka "Devil's Canyon" supports VT-d even in the "K" versions.

Posted on 2015-07-05 11:21:33
audioserf

Giving this a shot today with a GTX 660 and 970. Hope all goes well. Cheers.

Posted on 2015-02-23 23:26:25
Rick Wuijster

Hey, I'm trying it too with the gtx 970. I'm getting some errors, like code 43. i've read something about not showing windows that you're using KVM hypervisor. If you get everything working, please let me know! Because i'm stuck :(

Posted on 2015-02-24 16:10:36
audioserf

Dude I didn't even get the VM up and running :/ ran into a few QEMU errors during start up. I've heard it's because NVidia's proprietary drivers will hang on to devices that are not in use so blacklisting them is futile. Hope it all works out for you though.

Posted on 2015-03-04 00:55:40
hallona

any luck with a GTX 660 ?
i would buy one if it works!

Posted on 2015-06-04 09:36:36
audioserf

Back when I tried the first time, no. It's been a while, so I'm not really sure if you'd get the same results as I did.

Posted on 2015-06-04 22:27:45
Ty Brown

I'm stuck, I have two GTX 760's. I used to SLI them on Windows, but now I want to use one for Ubuntu and the other for Windows so I can strictly use Windows for its exclusives and I can play games like Dying Light on Linux. Obviously, they have the same ID, so if I try to blacklist one, then both will be blacklisted. I have Intel HD4600 onboard graphics but I REALLY don't want to use them since they're not very good when it comes to OpenGL performance.

Posted on 2015-03-11 18:05:47
Johnny

Hi, I have got exactly the same problem. (Although mine are GTX560's) If you have any solution or anyone else knows what I need to do, I would be very happy :)

Posted on 2015-07-05 16:29:52
Ty Brown

Hey it's been quite a while, but I ended up buying a copy of Windows after that. :S I'm actually going to get rid of one of my 760's simply because my SLI setup runs WAY too hot. Both cards at 85C on load, when on only 1 card I hit 60C on load. I think Single card solutions are the way to go. The 760 alone is a beast of a card, but I need to upgrade so I can get full DirectX12 support rather than partial. Sucks when Nvidia 2013 cards won't have full support but 2012 AMD cards will. Really wanting to get my hands on the R9 Nano when it comes out! It's supposed to be more powerful than the 290X, and that's a HELL of a card, especially for it's $300 mark! So I can expect to see the Nano come out at around the $350 range.

Posted on 2015-07-07 17:27:55
Bent

After going through the tutorial I am getting the following error:

qemu-system-x86_64: -device vfio-pci,host=01:00.1,bus=root.1,addr=00.1: vfio: error getting device 0000:01:00.1 from group 1: No such device

Verify all devices in group 1 are bound to vfio-pci or pci-stub and not already in use

qemu-system-x86_64: -device vfio-pci,host=01:00.1,bus=root.1,addr=00.1: vfio: failed to get device 0000:01:00.1

qemu-system-x86_64: -device vfio-pci,host=01:00.1,bus=root.1,addr=00.1: Device initialization failed.

qemu-system-x86_64: -device vfio-pci,host=01:00.1,bus=root.1,addr=00.1: Device 'vfio-pci' could not be initialized

I definitely blacklisted all Nvidia components:

lspci listed me these:

01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GM107 [GeForce GTX 750 Ti] [10de:1380] (rev a2)

01:00.1 Audio device [0403]: NVIDIA Corporation Device [10de:0fbc] (rev a1)

dmesg | grep pci-stub gives me this:

[ 0.526239] pci-stub: add 10DE:1380 sub=FFFFFFFF:FFFFFFFF cls=00000000/00000000

[ 0.526258] pci-stub 0000:01:00.0: claimed by stub

[ 0.526262] pci-stub: add 10DE:0FBC sub=FFFFFFFF:FFFFFFFF cls=00000000/00000000

[ 0.526266] pci-stub 0000:01:00.1: claimed by stub

So yeah, what am I missing out here?

Posted on 2015-03-18 16:34:54
Carter Hill

I just posted about pretty much the same error as well. Have you had any luck with it yet?

Posted on 2015-04-19 17:39:23
Carter Hill

I've followed the guide very closely, however when I try to run it with the command in Step 6, I get the following errors...

qemu-system-x86_64: -device vfio-pci,host=07:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on: vfio: error, group 6 is not viable, please ensure all devices within the iommu_group are bound to their vfio bus driver.
qemu-system-x86_64: -device vfio-pci,host=07:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on: vfio: failed to get group 6
qemu-system-x86_64: -device vfio-pci,host=07:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on: Device initialization failed.
qemu-system-x86_64: -device vfio-pci,host=07:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on: Device 'vfio-pci' could not be initialized

Can anyone help me with this? I'm using a GTX 760 for the guest server

Posted on 2015-04-19 04:42:21
EquinoxCZ

Hi, is there a way to run 2 VMs with only 2 GPUs? Linux is displayed on the first GPU, and if I try to run a VM using this GPU, the screen just powers off. I am not sure if it is doing anything in the background. The other one works well.

Posted on 2015-05-11 21:42:02

Yea, you can do that, but since you have to blacklist the cards you want to pass through, you wouldn't have a video output for your main Linux installation. If the system is something you can just SSH into, that might work for you, but if you need to actually use the Linux install locally then it won't work well.

Basically, to get the GPU to work in KVM you need to make it so the main Linux OS can't use it at all. I don't know of any way to do this other than blacklisting it during the initial boot of Linux.

Posted on 2015-05-11 21:50:28
EquinoxCZ

Thank you. It was my intention to use SSH. I am able to run the second VM via ssh; the first one should be the same. Can you maybe point me to what to search for to blacklist the first card in the Linux init as well? I only added the IDs of the cards in /etc/initramfs-tools/modules as described in step 2.2. I have the same cards so there is only one ID (plus a second for audio). Linux then runs in low resolution, but it still uses the first GPU, even though it is claimed by pci_stub (both cards).

Posted on 2015-05-12 09:05:16

Huh... I haven't done what you are doing myself, but I can't think of what else you would need to do. You might try uninstalling the GUI (or re-installing using Ubuntu Server) but I'm not 100% sure that would do it. An alternative is to get a really cheap video card you can toss in just so your main Linux install has something it can use. Not a really elegant solution, but that is probably the most sure-fire way to do what you want.

Posted on 2015-05-12 19:03:07
Isaac Peek

Attempted this yesterday. I'm running all AMD (FX 6300, HD 7870 (guest), R7 240 (host)), and when I start my VM, if I don't press F12 for boot options or select anything in the boot options, I get the following:

qemu-system-x86_64: vfio_bar_write(,0xd2ea, 0x0, 1) failed: Device or resource busy
qemu-system-x86_64: vfio_bar_write(,0xd2e9, 0x0, 1) failed: Device or resource busy
qemu-system-x86_64: vfio_bar_write(,0xd2e8, 0x0, 1) failed: Device or resource busy
qemu-system-x86_64: vfio_bar_write(,0xd2ef, 0x0, 1) failed: Device or resource busy

Anyone else had that? Know what might be going on? I found one other post on the ArchLinux thread about it and they had an issue with their Intel drivers (host cpu/gpu). Didn't really find their solution though.

Posted on 2015-07-13 17:39:22

if anyone is looking for the expired forum thread he mentioned in the beginning, "KVM VGA-Passthrough using the new vfio-vga support in kernel =>3.9":

http://webcache.googleuserc...

Posted on 2015-08-08 05:39:10
swe3tdave

i had to blacklist the radeon driver because it was loading before pci-stub... but now it works fine with my Radeon 5450, now, if only my motherboard didn't auto shut off every time i try using two video cards...

Posted on 2015-09-14 11:53:07
Arne Ziegert

hope you are still active on this topic... i run a GTX 680 and a GT 9600 on a Sapphire X58 Pure Black and pci-stub both cards... still the primary gpu (aka the GTX 680) shows the terminal and i am unable to pass it through to a VM. How can i fix this behavior? Using the host just via ssh is no problem.

Posted on 2015-09-29 18:10:18
TLDOM

Blacklist nouveau drivers and control your setup by ssh.
Maybe blacklist nouveaufb too

Posted on 2015-10-01 07:36:09
MOV EAX, MANVIR

I've got my R9 280 successfully assigned to the VM and installed the drivers, but the only problem is I get very low fps. Any ideas as to why, or a solution?

Posted on 2015-10-20 23:02:54
Foreigner

I had a problem with my X79 chipset and my geforce 780ti. Officially the X79 was just "PCI Express 3.0 ready"... so if I wanted to get PCIe 3.0 speed instead of PCIe 2.0 I had to add a flag to my grub.cfg. If you use the X79 chipset, look for a fix on the internet. In my case the nvidia driver shows me the actual PCI speed being used.

Hope it could help.

Posted on 2015-11-03 08:36:19
Roger Lawhorn

I am trying this. It all seems good except I cannot get pci_stub to bind the nvidia video card to itself. I have tried adding pci_stub to my boot line to get it going early and still it will not grab anything. Any ideas on this? This is holding up the whole show. I am attempting to run Fallout 4 without rebooting into Windows 7. I need vga passthrough working for this.

Posted on 2015-11-13 00:09:01
Roger Lawhorn

I have it working now and windows 7 running, but the gaming video card cannot obtain enough resources (code 12). Cannot find a way to tell qemu to give the emulation more resources. Had to turn on the default vga card to get w7 installed and the nvidia driver installed. If I turn off the vga card and boot I get nowhere even though the nvidia card is there and has a driver.

Posted on 2015-11-13 06:15:43
Afifty Footninja

I'm using this guide on an AMD machine. I didn't install any drivers, knowing I'll just be blacklisting. Now, at the point in blacklisting the cards where I run "sudo update-initramfs -u", I keep getting the warnings below, and then the video devices fail to be claimed while the audio devices have no issues.

update-initramfs: Generating /boot/initrd.img-4.2.0-19-generic
W: Possible missing firmware /lib/firmware/radeon/boniare_mc.bin for module amdgpu
W: Possible missing firmware /lib/firmware/radeon/mullins_sdma1.bin for module amdgpu
W: Possible missing firmware /lib/firmware/radeon/kabini_sdma1.bin for module amdgpu
W: Possible missing firmware /lib/firmware/radeon/kaveri_sdma1.bin for module amdgpu
W: Possible missing firmware /lib/firmware/radeon/hawaii_sdma1.bin for module amdgpu
W: Possible missing firmware /lib/firmware/radeon/bonaire_sdma1.bin for module amdgpu
W: Possible missing firmware /lib/firmware/radeon/mullins_uvd.bin for module amdgpu
W: Possible missing firmware /lib/firmware/radeon/hawaii_uvd.bin for module amdgpu
W: Possible missing firmware /lib/firmware/radeon/kaveri_uvd.bin for module amdgpu
W: Possible missing firmware /lib/firmware/radeon/kabini_uvd.bin for module amdgpu
W: Possible missing firmware /lib/firmware/radeon/bonaire_uvd.bin for module amdgpu
W: Possible missing firmware /lib/firmware/radeon/mullins_vce.bin for module amdgpu
W: Possible missing firmware /lib/firmware/radeon/hawaii_vce.bin for module amdgpu
W: Possible missing firmware /lib/firmware/radeon/kaveri_vce.bin for module amdgpu
W: Possible missing firmware /lib/firmware/radeon/kabini_vce.bin for module amdgpu
W: Possible missing firmware /lib/firmware/radeon/bonaire_vce.bin for module amdgpu

Posted on 2015-12-13 21:19:36
Marcus Woods

I think the issue with Nvidia devices and the code 43 can be resolved by using an older driver. I was able to use 340.51 to get my nvidia cards to work in vmware and ubuntu. I believe the "security" updates in 341 are meant to prevent us from having fun. You will read everywhere about the Nvidia code 43 in Windows while doing this. If you have that problem, try the 340.51 driver.

Posted on 2016-02-27 22:48:09
Marcus Woods

Also thanks for the great write up! Things have changed over a year, but not that much. I might use your skeleton to make another, updated article for the new year, with a reference back, if that is okay with you.

Posted on 2016-02-27 22:49:09

The shell script vm# should be placed in the /usr/bin/ folder instead of /usr/;
since it is executed under sudo, the shell script should not call sudo again: "sudo qemu-system-x86_64 -enable-kvm -M ...";
if I want to use the entire /dev/sdc as my harddisk for the virtual machine, how do I specify the -drive/-device line?

Posted on 2016-03-19 05:19:05
fede

I get these 3 errors, can you help me find a solution?
qemu-system-x86_64: -device vfio-pci,host=01:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on: vfio: error no iommu_group for device

qemu-system-x86_64: -device vfio-pci,host=01:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on: Device initialization failed.

qemu-system-x86_64: -device vfio-pci,host=01:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on: Device 'vfio-pci' could not be initialized

Posted on 2016-04-02 03:05:46
bar

I have the same error.
haven't figured out the solution yet..

Posted on 2016-04-24 11:59:00
pete

Any plans to update this guide for 16.04 server?

Posted on 2016-07-21 19:40:55
Hurricane

By the Cyber-Gods, if it works I'll get rid of my last Windows PC in the next few weeks.

Posted on 2016-08-07 21:03:25
Joe

does passing a USB device through to the guest make it unavailable to the host?

Posted on 2016-08-08 22:04:35
Roflrogue

Do you need more than one graphics card for this to work?

Posted on 2016-08-14 13:24:16
Christopher Gibb

Hey, just wanted to say thanks for writing this article. After a couple of days messing around I've managed to get my setup running.
For anybody out there banging their head off of this I include my spec to aid in deciding if this can be done for you - i7-5820K, ASRock X99 Killer Fatal1ty, GeForce GTX 750, GeForce GTX 950. At the moment the 750 is passed through but I'm planning on swapping them around. All the info that was needed was in the article and buried in these comments + links. Again, thanks to you Matt for writing this article and thanks to all those who commented for sharing their fixes. It's also worth a mention that I'm migrating from VMWare workstation and that I didn't even need to do anything to the .vmdk file - KVM boots it no bother! I'm so happy my VM has now got it's own GPU with the correct nVidia drivers. I have some small issues to iron out with networking performance and I need to figure out how to be suspend/resume the VM (all the docs I've seen so far are for the XML method). But all in all very satisfied with this.

Posted on 2016-10-02 18:08:17
Paul O'Rorke

Hi,

thanks for this awesome guide. I am stuck at blacklisting my card because I have 2 x GTX 550 Ti cards and they have the same ID:

paul@paul-desktop:~$ sudo lspci -nn |grep -i nvidia
01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GF116 [GeForce GTX 550 Ti] [10de:1244] (rev a1)
01:00.1 Audio device [0403]: NVIDIA Corporation GF116 High Definition Audio Controller [10de:0bee] (rev a1)
06:00.0 VGA compatible controller [0300]: NVIDIA Corporation GF116 [GeForce GTX 550 Ti] [10de:1244] (rev a1)
06:00.1 Audio device [0403]: NVIDIA Corporation GF116 High Definition Audio Controller [10de:0bee] (rev a1)

How do I specify one for the guest so as to leave the other to the host?

Paul

Posted on 2016-11-09 22:15:22
Wilco Engelsman

thnx mate, got it working within a few hours. Nvidia 1070 user here, with Ubuntu 16.04.
I had to change some parts of the script and had to follow some different steps in order to get it working.
I had to make sure the nvidia driver was not installed in X (anymore), otherwise the blacklisting would not work.
I had to change some settings in the BIOS (use EFI bios).
I had to change some parameters with qemu.

Installing windows 10 now.

Posted on 2016-11-28 12:27:41
Miguel Covarrubias

I'm trying to get it working but I'm getting this message:

qemu-system-x86_64: -device ide-hd,bus=ide.0,drive=disk: Drive 'disk' is already in use because it has been automatically connected to another device (did you need 'if=none' in the drive options?)

Could you please write down your script files?

Thanks!

Posted on 2017-02-25 23:19:21
Wilco Engelsman

sudo qemu-system-x86_64 -enable-kvm -M q35 -m 16384 -cpu host,kvm=off \
-smp 8,sockets=1,cores=4,threads=2 \
-bios /usr/share/ovmf/OVMF.fd \
-vga none \
-device ioh3420,bus=pcie.0,addr=1c.0,multifunction=on,port=1,chassis=1,id=root.1 \
-device vfio-pci,host=0000:01:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on \
-device vfio-pci,host=0000:01:00.1,bus=root.1,addr=00.1 \
-device vfio-pci,host=0000:03:00.0,bus=pcie.0 \
-device virtio-scsi-pci,id=scsi \
-drive id=disk0,if=virtio,format=raw,file=/home/wilco/VM/Windows10.img \
-drive file=/dev/sdb2,if=virtio,format=raw,media=disk,id=disk1 \
--usb -usbdevice host:045e:0719 \

Posted on 2017-02-26 08:04:42
Wilco Engelsman

But if you find it too difficult, you could also use virt-manager to create a working VM with GPU passthrough.

I also have a non-virtio drive for you:

-drive file=/media/disk2/windows1.img,id=disk,format=raw,if=none -device ide-hd,bus=ide.0,drive=disk \

Posted on 2017-02-26 08:10:16
Miguel Covarrubias

Thanks a lot!! It's booting now, but I'm facing another problem...
When the installation program boots, it gets stuck where you have to select the HD where Windows will be installed.
It says that I am missing a DVD, USB or HD driver.

Here is my script:

sudo qemu-system-x86_64 \
-enable-kvm \
-M q35 \
-m 8192 \
-cpu host,kvm=off \
-smp 4,sockets=1,cores=4,threads=1 \
-bios /usr/share/ovmf/OVMF.fd \
-vga none \
-device ioh3420,bus=pcie.0,addr=1c.0,multifunction=on,port=1,chassis=1,id=root.1 \
-device vfio-pci,host=02:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on \
-device vfio-pci,host=02:00.1,bus=root.1,addr=00.1 \
-device virtio-scsi-pci,id=scsi \
-drive file=/home/miguel/VM/windows1.img,id=disk,format=raw,if=none -device scsi-hd,drive=disk \
-drive file=/home/miguel/Downloads/Win10_1607_SingleLang_SpanishMexico_x64.iso,id=isocd,if=none -device scsi-cd,drive=isocd \
-usb -usbdevice host:1532:0110 \
-boot menu=on

FYI Im installing Windows 10 and Ubuntu 16.04 as the Host OS as you did

Please advise
Thanks!

Posted on 2017-02-26 16:49:58
Miguel Covarrubias

Well, I'll respond to myself: I just added the virtio drivers ISO:

-cdrom /home/miguel/VM/virtio-win-0.1.126.iso

I was able to select the scsi driver and it's installing now

Thanks!

Posted on 2017-02-26 17:31:17
Ivan Vargas

Hi!

I have a Quadro M6000. Can I use this solution to pass through the same GPU to 4 or 5 virtual machines at the same time? Thanks!

Posted on 2017-02-09 16:18:54
ElXDi

First of all, thank you very much for the most complete qemu + GPU passthrough guide.
Everything works well for me except two really annoying things. First, once qemu starts, Ubuntu's interface colors are inverted. It helps if I force the qemu monitor to full screen and then back to window mode (ctrl+alt+f).
The second issue is a real problem: after installing the AMD/ATI drivers, Windows goes into a boot loop for some reason.
My specs:
CPU: i7-4790K
MB: Z97x-UD5H-BK
RAM: 32Gb
Video 1: Intel embedded HD Graphics 4600
Video 2: Radeon HD 5870 for guest machine

Posted on 2017-02-12 22:38:15
ElXDi

I forgot to mention, the guest OS is Windows 7 Home Premium

Posted on 2017-02-12 22:43:17
Ignacio Colautti

I'm having trouble booting. I googled and tried everything.

This is my VM file


#!/bin/bash

## DEVICE PASSTHROUGH

configfile=/etc/vfio-pci.cfg
vmname="vmGames"

vfiobind() {
dev="$1"
vendor=$(cat /sys/bus/pci/devices/$dev/vendor)
device=$(cat /sys/bus/pci/devices/$dev/device)
if [ -e /sys/bus/pci/devices/$dev/driver/module/drivers/pci\:vfio-pci ]; then
echo "Skipping $dev because it is already using the vfio-pci driver"
continue;
fi
if [ -e /sys/bus/pci/devices/$dev/driver ]; then
echo "Unbinding $dev"
echo $dev > /sys/bus/pci/devices/$dev/driver/unbind
echo "Unbound $dev"
fi
echo "Plugging $dev into vfio-pci"
echo $vendor $device > /sys/bus/pci/drivers/vfio-pci/new_id
echo "Plugged $dev into vfio-pci"
}

modprobe vfio-pci

if ps -A | grep -q $vmname; then
echo "$vmname is already running." &
exit 1
else
cat $configfile | while read line;do
echo $line | grep ^# >/dev/null 2>&1 && continue
vfiobind $line
done

cp /usr/share/OVMF/OVMF_VARS.fd /tmp/my_vars.fd

## VM INITIALISATION

qemu-system-x86_64 \
-name $vmname,process=$vmname \
-enable-kvm \
-M q35 \
-m 8G \
-cpu host \
-smp 4,sockets=1,cores=2,threads=2 \
-vga none \
-device ioh3420,bus=pcie.0,addr=1c.0,multifunction=on,port=1,chassis=1,id=root.1 \
-device vfio-pci,host=01:00.0,multifunction=on,x-vga=on \
-device vfio-pci,host=01:00.1 \
-boot order=dc \
-device virtio-scsi-pci,id=scsi \
-drive if=pflash,format=raw,readonly,file=/usr/share/OVMF/OVMF_CODE.fd \
-drive if=pflash,format=raw,unit=1,file=/tmp/my_vars.fd \
-drive file=/mnt/HDDSata1/vm-games.img,id=disk0,if=virtio,cache=none,format=raw \
-drive file=/mnt/HDDSata1/Windows10Pro.iso,id=isocd,format=raw,if=none -device scsi-cd,drive=isocd \
-drive file=/mnt/HDDSata1/virtio-win-0.1.126.iso,id=virtiocd,format=raw,if=none -device ide-cd,bus=ide.1,drive=virtiocd \
-usb -usbdevice host:0e8f:0022
-netdev type=tap,id=net0,ifname=tap0,vhost=on \
-device virtio-net-pci,netdev=net0,mac=00:16:3e:00:01:01

exit 0
fi

The monitor gets signal but this error appears
boot failed efi scsi device

What am I doing wrong?

Thanks

Posted on 2017-05-23 00:38:54
DeadChicken

Every time I set up a Qemu environment on Linux I use this guide/forward it to friends, so I thought an update would be useful.
Since Qemu version 2.4 the drive/device commands have changed, so here's an updated script:

#!/bin/bash

configfile=/etc/vfio-pci_7770.cfg

vfiobind() {
dev="$1"
vendor=$(cat /sys/bus/pci/devices/$dev/vendor)
device=$(cat /sys/bus/pci/devices/$dev/device)
if [ -e /sys/bus/pci/devices/$dev/driver ]; then
echo $dev > /sys/bus/pci/devices/$dev/driver/unbind
fi
echo $vendor $device > /sys/bus/pci/drivers/vfio-pci/new_id

}

modprobe vfio-pci

cat $configfile | while read line;do
echo $line | grep ^# >/dev/null 2>&1 && continue
vfiobind $line
done

sudo qemu-system-x86_64 -enable-kvm -M q35 -m 4096 -cpu host \
-smp 4,sockets=1,cores=4,threads=1 \
-bios /usr/share/qemu/bios.bin -vga none \
-device ioh3420,bus=pcie.0,multifunction=on,port=1,chassis=1,id=root.1 \
-device vfio-pci,host=03:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on \
-device vfio-pci,host=03:00.1,bus=root.1,addr=00.1 \
-usb -usbdevice host:05e3:0745 -usbdevice host:0458:003a \
-hda /home/VM_Files/windows1.img \
-cdrom /home/VM_Files/ISO/Win10_Pro_x64_hu.ISO \
-boot menu=on

exit 0

Long story short: you no longer need to add a device and a drive for each ISO/IMG file; just use -hda/-hdb and -cdrom

Posted on 2017-10-29 15:47:30
Linblows99

It would be easier to glue two or three pc cases together!

:)

Posted on 2018-09-16 21:31:45