Read this article at https://www.pugetsystems.com/guides/564

Multi-headed VMWare Gaming Setup

Written on July 9, 2014 by Matt Bach


At Puget Systems, we are constantly trying out new (and sometimes old) technologies in order to better serve our customers. Recently, we were given the opportunity to evaluate desktop virtualization with NVIDIA GRID, which uses GPU virtualization and virtual machines to stream a virtual desktop with full GPU acceleration to a user. NVIDIA GRID is built around streaming the desktop, which requires robust network infrastructure and high-quality thin clients. Even with the best equipment, there is latency, video compression, and high CPU overhead. These can be worked around for many applications, but they are all big turn-offs to gamers.

With that in mind, we set out to build a PC that uses virtualization technologies to allow multiple users to game on one PC, but with no streaming and no additional latency, because all of the user inputs and outputs (video, sound, keyboard, and mouse) are directly connected to the PC. By creating virtual machines and using a mix of shared resources (CPU, RAM, hard drive, and LAN) and dedicated resources (GPU and USB), we were able to create a PC that allows up to four users to game on it at the same time. Since gaming requires minimal input and display lag, we kept the GPUs and USB controllers outside of the shared resource pool and directly assigned them to each virtual OS, which allows the keyboard/mouse input and video output to bypass the virtualization layer. The end result is a single PC running four virtual machines, each of which behaves and feels like any other traditional PC.

Hardware Requirements

Unlike a normal PC, there are a number of hardware requirements that limit what hardware we were able to use in our multi-headed gaming PC. Since we are using virtual machines to run each virtual OS, the main requirement was that both the motherboard and CPU support virtualization, which on Intel-based systems is most commonly called VT-x and VT-d. Checking a CPU for virtualization support is usually easy since it is listed right in the specifications for the CPU. Motherboards are a bit trickier since virtualization support is often not listed in the specs, but typically either the BIOS or the manual will have settings for "VT-d" and/or "Intel Virtualization Technology". If those options (or different wording of the same setting) are available, then virtualization and PCI passthrough should work.
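As a quick sanity check (our suggestion, not something the original build required): if you boot the machine from any Linux live USB, the CPU flags will tell you whether hardware virtualization is present, and the kernel log will mention the IOMMU once VT-d is enabled in the BIOS:

```shell
# A non-zero count means the CPU advertises VT-x (vmx flag) or AMD-V (svm flag):
flags=$(grep -c -E 'vmx|svm' /proc/cpuinfo || true)
echo "virtualization flag count: $flags"

# Once VT-d is enabled in the BIOS, the kernel log should mention DMAR or IOMMU:
dmesg 2>/dev/null | grep -i -e dmar -e iommu || echo "no IOMMU messages yet (check BIOS)"
```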

Also, since we are passing video cards through to each virtual OS, the video card itself needs to actually support PCI passthrough. This was the most difficult hardware requirement we had to figure out since video cards do not list anywhere (at least that we could find) whether or not they support it. In our research and contact with manufacturers, we found that almost all AMD cards (Radeon and FirePro) work, but from NVIDIA officially only Quadro and GRID (not GeForce) support it.

We tried to get a number of GeForce cards to work (we tested with a GTX 780 Ti, GTX 660 Ti, and GTX 560), but no matter what we tried they always showed up in the virtual machine's device manager with a code 43 error. We scoured the internet for solutions and worked directly with NVIDIA but never found a good solution. After a lot of effort, NVIDIA eventually told us that PCI passthrough is simply not supported on GeForce cards and that they have no plans to add it in the immediate future.

Update 8/1/2014: We still don't know of a way to get NVIDIA GeForce cards to work in VMWare, but we have found that you can create a multi-headed gaming PC by using Ubuntu 14.04 and KVM (Kernel-based Virtual Machine). If you are interested, check out our guide.

In addition, we also had trouble with multi-GPU cards like the new AMD Radeon R9 295x2. We could pass it through OK, but the GPU driver simply refused to install properly. Most likely this is an issue with passing through the PCI-E bridge between the two GPUs, but whatever the actual cause, the end result is that multi-GPU cards currently do not work well for PCI passthrough.

While this list is nowhere near complete, we specifically tested the following cards during this project:

Cards that work          Cards that don't work
AMD Radeon R9 280        NVIDIA GeForce GTX 780 Ti
AMD Radeon R7 250        NVIDIA GeForce GTX 660 Ti
AMD Radeon HD 7970       NVIDIA GeForce GTX 560
NVIDIA Quadro K2000      AMD Radeon R9 295x2

For our final testing configuration, we ended up using the following hardware:

Four Radeon R9 280 video cards put out quite a bit of heat - especially when stacked right on top of each other - so we had to have a good amount of airflow in our chassis to keep them adequately cooled. There are a number of chassis available that have enough PCI slots and good fan placement like the Rosewill Blackhawk Ultra or Xigmatek Elysium that would work for this configuration, but for our testing we used a custom acrylic chassis since we were planning on showing this system off in our booth at PDXLAN 24.

Our test system in a custom acrylic enclosure with four Asus Radeon R9 280 DirectCU II video cards. Note the four groups of keyboard/mouse/video cables in the third picture that go to the four sets of keyboard/mouse/monitor.

A very similar system we recently built for a customer (for a completely different purpose) in a Xigmatek Elysium chassis with four XFX Radeon R9 280X video cards.

Virtual Machine Setup

Since we want to have four operating systems running at the same time, we could not simply install an OS onto the system like normal. Instead, we had to run a virtual machine hypervisor on the base PC and create multiple virtual machines inside that. Once the virtual machines were created, we were then able to install an OS on each of them and have them all run at the same time.

While there are many different hypervisors we could have used, we chose to use VMWare ESXI 5.5 to host our virtual machines since it is the one we are most familiar with. We are not going to go through the entire setup in fine detail, but to get our four virtual machines up and running we performed the following:

Step 1
We installed the VMWare ESXI 5.5 hypervisor on the system and assigned a static IP to the network adapter so we could remotely manage it.

VMWare ESXI setup

Step 2
In Configuration -> Advanced Settings -> Edit, we selected the devices we wanted to pass through from the main system to the individual virtual machines. In our case, we passed through two USB 2.0 controllers, two USB 3.0 controllers, and the four AMD Radeon R9 280 video cards (both the GPU itself and the HDMI audio controller).

VMWare ESXI PCI Passthrough

Step 3
Next we created four virtual machines and added one USB controller and one video card to each machine's PCI devices, making sure to include both the GPU itself and the HDMI audio device. Figuring out which USB controller was which on the motherboard was a matter of trial and error, but we eventually got it set so that we knew which USB ports were allocated to each virtual machine.

For each virtual machine we assigned 4 vCPUs, 7GB of RAM, and 180GB of storage space.
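Matching physical USB ports to controllers is still trial and error, but listing the controllers first at least tells you how many there are and which PCI addresses they sit at. One quick way to do that (our suggestion; this works from the ESXi shell or any Linux live environment) is:

```shell
# Each line is one USB controller; the leading bus:device.function address is
# what shows up in the hypervisor's passthrough device list:
lspci 2>/dev/null | grep -i usb || echo "lspci not available in this environment"
```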

Step 4
With the PCI devices added, we next changed the boot firmware from BIOS to EFI. This was really only required to get the USB 2.0 controllers to function properly on our motherboard as a passthrough device, but for the sake of consistency we changed all of the virtual machines to EFI.

Step 5
For the final configuration step, we entered the datastore in Configuration -> Storage, downloaded the .vmx file for each virtual machine, added 

pciHole.start = "1200"
pciHole.end = "2200"

to the file and re-uploaded it to the datastore. This was required for the AMD cards to properly pass through to the virtual machines. If you are doing this yourself and are having trouble, you could also try adding pciPassthru0.msiEnabled = "FALSE", where "0" is the PCI number for both the GPU and HDMI audio device.
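If it helps to visualize the result, the passthrough-related lines of one of our .vmx files ended up looking roughly like the fragment below. This is only a sketch - the pciPassthru numbering, and whether the msiEnabled lines are needed at all, depends on your hardware:

pciHole.start = "1200"
pciHole.end = "2200"
pciPassthru0.present = "TRUE"
pciPassthru0.msiEnabled = "FALSE"
pciPassthru1.present = "TRUE"
pciPassthru1.msiEnabled = "FALSE"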

VMWare ESXI datastore .vmx file


With all this preparatory setup complete, we were able to install Windows 8.1 through the vSphere client console. Once we had the GPU driver installed, we were able to plug a monitor and keyboard/mouse into the appropriate GPU and USB ports, configure the display settings to use the physical monitor instead of the VMWare console screen, and complete the setup and testing as if the virtual machine were any other normal PC.

Performance and Impressions

Since resource sharing makes it really difficult to benchmark hardware performance, we are not going to get into anything like benchmark numbers. Really, the performance of each virtual OS is going to depend entirely on what hardware you have in the system and how you have that hardware allocated to each virtual machine. Instead of FPS performance, what we are actually more concerned about is whether there is any input or display lag. Since gaming is so dependent on minimizing lag, it is also a great way to test the technology in general. If there are no problems while gaming, then less rigorous tasks like web browsing, word processing, Photoshop, etc. should be no problem.

To get a subjective idea of how a multi-headed gaming system performs, we loaded four copies of Battlefield 4 onto the four virtual machines and "borrowed" some employees from our production department to test out our setup. After getting the video settings dialed in (2560x1440 with med/high settings gave us a solid 60 FPS in 48 man servers), we simply played the game for a while to see how it felt. Universally, everyone who tried it said that they noticed absolutely no input or display lag. So from a performance standpoint, we would call this a complete success!

Screenshot of Battlefield 4 running on 1/4 of our test system (one of the four virtual machines)

2560x1440 with a mix of medium and high settings gave us >60 FPS in 48 man servers

With four people playing Battlefield 4 at the same time, our setup was not really bottlenecked by either the CPU or the GPU alone, since both were running near 100%. In fact, we found that the ASUS Radeon R9 280 paired almost perfectly with four shared cores from the Xeon E5-2695 v2. However, keep in mind that the Xeon E5-2695 v2 we used is pretty much the fastest Xeon CPU currently available, so if you are considering doing this yourself you may run into a CPU limitation if you want to have four people gaming at once. Of course, you could very easily use a dual CPU motherboard with a pair of more reasonably priced Xeons to get even more CPU power for your dollar than what we have in our test system.

Conclusion & Use Cases

Getting four gaming machines running on a single physical system was not quite as easy as we originally expected it to be, but it worked very well once we figured out all the little tricks. The only obstacle we were not able to overcome was the fact that NVIDIA GeForce cards do not support virtualization, but the other issues we ran into were all resolved with various small configuration and setup adjustments. With those things figured out, it really was not overly difficult to get each virtual OS up and running with its own dedicated GPU and USB controller.

Performance-wise, we are very impressed with how well this configuration worked. Being a custom PC company, we have plenty of employees who enjoy gaming, and none of them noted any input or display lag. But beyond the cool factor, what is the benefit of doing something like this over using four traditional PCs with specs similar to each of the four virtual machines?

4 monitors, keyboards, and mice off one PC

  • Hardware consolidation - This probably is not terribly important to many users, but having a single physical machine instead of four means that you have less hardware to maintain and a smaller physical footprint.
  • Shared resources - For our testing we only assigned four CPU cores to each of our virtual machines since we knew that we were going to be loading each virtual machine equally, but with virtualization you would typically over-allocate resources like CPU cores to a much greater extent. Instead of equally dividing up a CPU, with virtualization you can assign anything from a single core to all of the cores on the physical CPU to each virtual machine. Modern virtualization is efficient enough that it can dynamically allocate resources to each virtual machine on the fly, which gives each machine the maximum performance possible at any given time. Especially in non-gaming environments, it is rare to use more than a small percentage of the CPU's available power the majority of the time, so why not let your neighbor use it while they use Photoshop or encode a video?
  • Virtual OS management - Since virtual environments are primarily made for use with servers, they have a ton of features that make them very easy to manage. For example, if you want to add a new virtual machine or replace an existing machine that is having problems, you can simply make a copy of one of the other virtual machines. You need to change things like CD keys and the PCI passthrough devices, but the new machine would be up and running in a fraction of the time it would take to install it from scratch. In addition, you can use what are called "snapshots" to create images of a virtual machine at any point. These images can be used to revert the virtual machine back to a previous state, which is great for recovering from things like viruses. In fact, you can even set a virtual machine to revert to a specific snapshot whenever it is rebooted. This means you don't have to worry about what a user might install on the machine, since it will automatically revert to the specified snapshot when the machine is shut down.

As for specific use-cases, a multi-headed system could be useful almost any time you have multiple users in a local area. Internet and gaming cafes, libraries, schools, LAN parties, or any other place where a user is temporarily using a computer would likely love the snapshot feature of virtual machines. In fact, you could even give all users administrator privileges and just let them install whatever they want since you can have the virtual machine set to revert back to a specific snapshot automatically.

In a more professional environment, snapshots might not be as exciting (although they would certainly still be very beneficial), but the ability to share hardware resources to give extra processing power to users when they need it would be very useful. While it varies by profession, most employees spend the majority of their time doing work that requires little processing power, intermixed with periods where they have to wait on the computer to complete a job. By sharing resources between multiple users, you can dramatically increase the amount of processing power available to each user - especially if it is only needed in short bursts.

Overall, a multi-headed system is very interesting but is a bit of a niche technology. The average home user would probably never use something like this, but it definitely has some very intriguing real-world benefits. Would something like this be useful in either your personal or professional life? Let us know how you would use it in the comments below.

Extra: How far is too far?

For this project we used four video cards to power four gaming virtual machines because that was a very convenient number considering the PCI slot layout and the fact that the motherboard had four onboard USB controllers. However, four virtual machines is not at all the limit of this technology. So just how many virtual desktops could you run off a single PC with each desktop still having direct access to its own video card and USB controller?

The current Intel Xeon E5 CPUs have 32 lanes available for PCI-E cards to use. If you used a quad Xeon system you would get 128 PCI-E lanes, which you could theoretically divide into 128 individual PCI-E x1 slots using PCI-E expanders and risers. The video cards would likely see a bit of a performance hit unless you are using very low-end cards, but by doing this you could technically get 66 virtual personal computers from a single quad Xeon system: 66 x1 video cards plus 62 x1 USB cards fill all 128 lanes, with the remaining four virtual machines using the motherboard's four onboard USB controllers.
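The arithmetic behind that 66 number is easy to check (this is our reconstruction of the math, assuming the four VMs on the onboard USB controllers need no USB expansion card):

```shell
lanes=$((4 * 32))   # quad Xeon, 32 PCI-E lanes per CPU (per the figures above)
onboard_usb=4       # motherboard USB controllers that consume no PCI-E lane

# Each VM needs one x1 GPU; all but the first four also need an x1 USB card:
#   vms + (vms - onboard_usb) <= lanes   ->   vms = (lanes + onboard_usb) / 2
vms=$(( (lanes + onboard_usb) / 2 ))
echo "$vms"         # prints 66
```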

Is 66 virtual machines off a single box too far? Honestly: yes. The power requirements, cooling, layout, and overall complexity are pretty ridiculous at that point. Plus, how would you even fit 66 users around one PC (if it could even be called a PC at that point)? USB cables only have a maximum length of about 16 feet, so you would very quickly run out of space to put people. Really, at that point you should probably look into virtual desktop streaming instead of the monstrosity below that we mocked up in Photoshop.

Theoretical quad Xeon PC running 66 video cards and USB controllers

A Quad Xeon system could theoretically run 66 video cards and USB controllers at PCI-E x1.
(Yes, this picture is photoshopped)

What do you think? How many virtual desktops do you think is right to aim for with a setup like this?

Tags: VMWare, ESXI, PCI passthrough, virtualization, virtual machine, multi-head, gaming
gotbliss56

What stops Bluetooth technology from being used here so you don't have to use USB cables? I know some gamers like to be hardwired but I don't notice any lag with my Logitech wireless mouse and keyboard.

Posted on 2014-07-10 01:56:30

Yea, I think wireless keyboards and mice would work pretty well for this. You still have to run a cable for the monitor but HDMI and DisplayPort cables can be something like 40-50 feet long so you would get quite a bit of extra space from that. I'm not sure how many bluetooth devices you can use in a small space before you get interference though, so that would probably be what really limits the number of virtual machines you could run off a single PC if you want to go the crazy route with PCI-E extenders and all that.

Posted on 2014-07-10 02:21:54
DaGeek247

I honestly have no idea how many bluetooth devices can fit in one room, let alone sitting right next to each other on a server.

What I do know is that usb is no longer limited by length if you're willing to spend the money. There is an optical usb cable that can go some pretty crazy lengths without issue.



Posted on 2014-10-28 04:08:02
Aberran Fox

Also, the 15-16 foot limit is for power over USB. If you have a powered hub every 15 feet you can get to 100 foot runs without distortion. Well, you want decent quality hubs at each of those points, of course. But if you have that, the limit greatly expands. The same can be done with HDMI, VGA, and DVI cables over distance. I have worked with both these problems. Also, the limits for USB hit harder when you're working with larger amounts of data. A keyboard and a mouse would likely have no problem over even a 25 foot cable without extra hubs. You may have slight latency added, but electricity moves quickly, and I don't think most mortals would even notice the time it takes to go another 10 feet.

TL;DR: these limits are very conservative, and with powered hubs and optical cables they are no longer actual limits.

Posted on 2016-01-31 17:49:58
Mark Calder

There is a limit to the number of USB hubs that can be connected end-to-end. It's called tiers, and you can have five max.
We use active USB extensions, which are essentially one-port USB hubs (that are hub powered). Judging from a few thousand customers worldwide, using more than two active USB extensions causes an eventual dropout, forcing the connection to be cycled. Our devices require no hub power at all, but we find more than one active extension causes dropouts ranging from once per two weeks to multiple times a day.

Posted on 2016-09-09 19:02:23
Francis Draconis

That's not true. You can only have 128 USB devices connected to a single "port". Depending on the chip design on the card (I'm not sure exactly how all these work), you'd be using a PCI lane for, say, 2 "ports" of USB 2.0, which would be a max of 256 devices per PCI lane. So you actually would most likely eat up AT LEAST an entire PCI lane just for USB device bandwidth. (That's presuming you even had a chip designed to be maxed out, and that all those devices didn't interfere with each other, etc.)

Posted on 2018-04-13 21:49:49
Dan

In my experience USB is a little more finicky with distance than just the 5V... not saying there aren't prepackaged devices that do the job, but from having tried running USB over cat5e (with references from others who have too) with a powered hub on both ends, I found it to be flaky in daily use and eventually gave up on it.

Posted on 2017-04-01 08:39:18
jbanks

I'm not sure why they told you geforce doesn't support PCI passthrough. I have it working nicely currently.

My setup is a linux host + windows guest using qemu+kvm.

The linux host is running some crappy ati card while the guest is given a gtx 760.

You can see other setups on https://bbs.archlinux.org/v...

Posted on 2014-07-10 14:19:25
ScathArach

I am guessing that GeForce doesn't support proper passthrough in a hypervisor environment, which is what they are testing on.

Posted on 2015-06-16 16:39:22
Michael F

KVM is a hypervisor of sorts - it is part of the Linux kernel, so most distributions pull double duty as full OS and hypervisor. The reason it works with KVM is that it is not a commercial project, so there are some legal barriers VMware has that KVM doesn't.

Posted on 2015-12-03 03:09:24
Cypher720

Could use a converter for longer distances with hdmi/usb cables over cat5/6.
hdmi: http://www.hdtvsupply.com/h...
usb: http://tinyurl.com/pu3l3xx

Pros: Reduce the number of physical towers in the building.
Multiple roles. It could be a HTPC/kitchen/gaming/kids pc all in one.

Limiting factors: Higher initial cost as well as upgrades, cable runs, difficulty upgrading parts. Having things break whenever a part is upgraded would be annoying. Technically advanced configuration.

Posted on 2014-07-10 22:31:52

Can I please have that rig? I will donate all my gaming life to it.

Posted on 2014-07-10 22:38:33
hiho

How did you divide 12 cores into 4 cores each across 4 stations? Or did you count hyperthreading as cores, in which case you had 3 machines on physical cores and the 4th on hyperthreading, which is not feasible as far as I know? Could anybody explain, or is there a mistake?

Posted on 2014-07-13 07:47:56

We had Hyperthreading off just to make things a bit simpler and indeed had 4 cores allocated to each virtual machine. That adds up to 16 cores, but with virtualization you can over-allocate resources like CPU cores. The simplified version of what is happening is that the hypervisor (the base VMWare ESXI install) dynamically allocates the CPU cores to the machines that need the processing power. So since there are only 12 cores, VMWare changes which virtual machines get which cores on the fly. This allows the virtual machines to get bursts of extra CPU power when they need it rather than matching the number of physical CPU cores to the total number of vCPUs that are allocated to the virtual machines.

If you want a bit more in-depth answer, check out this section of our recent Virtual Desktop Streaming with NVIDIA GRID article: http://www.pugetsystems.com...

Posted on 2014-07-15 19:26:11
Jeremy Roberts

That is not my understanding of how VMware distributes processing.
If you allocate 4 vCPUs to a VM, it will need to wait until its time slot and for 4 cores to become available before it can do any work. This means that at any one time you will have only 3 virtual machines doing any workload on the physical CPU (actually possibly two, as the host has to use CPU as well). If you have 4 heavily used machines, this will show up as high CPU ready time, or the time the VM needs to wait for CPU resources to become available.
It would be better to either allocate only 3 vCPUs per machine (the host will still be using some anyway) or turn on Hyperthreading so the host sees 24 virtual CPUs; this way the CPU will do some of the balancing act and you will get better overall responsiveness from CPU workloads.
It is a testament to how efficient VMware is at CPU scheduling or how over specified most modern gaming computers are that you did not notice any real lag.

On the plus side, a great article and fabulous build, for which I commend you on some excellent boundary pushing.

Posted on 2014-09-16 22:50:40
Scott Tuttle

I could be wrong, but I believe this was called strict co-scheduling, which hasn't been the case since ESX 2.x.


Posted on 2015-03-24 12:31:30
Joe P

Which would be a better performance an amd r9 280 or a Nvidia Quadro 4000? Not the K4000 since that's about 100 dollars more expensive, but the r9 280 and the Nvidia Quadro 4000 are about the same price, anyone know which one I should get? And can I not just use a r9 290? The article only mentions r9 290x not the r9 290?

Posted on 2014-07-26 11:38:20

It really depends on what you are doing. If you are gaming then a R9-290/R9-290X (either should work) would be much better than a Quadro 4000. If you are making more of a workstation-type system (for AutoCAD, video encoding, etc) then the Quadro would be a better choice in my opinion.

Posted on 2014-07-28 19:12:11
Michael

From what I read at various vmware forums the 290x does not work in passthrough with esxi 5.1 or 5.5. If someone manages to make it work let me know, I went down the route of modding my 780ti to a Quadro K6000, works a treat!

Posted on 2014-08-13 09:14:38
George Todd

I'm looking at this setup for the house and LAN parties and found this: https://calebcoffie.com/amd...

Posted on 2015-08-02 19:08:23
Joe P

Also is it possible to do a youtube tutorial on how you managed to get this working? I've never used ESXi and barely even know how to set it up, but I'm really eager to get a setup like this rolling. I have all the hardware aside from graphics cards (figuring out which I should use, leaning towards 280s based from this article) but I have no idea how to setup ESXi step by step and how to configure the pass through as well as setting up the vms in ESXi. There are no tutorials for setting up this kind of setup on youtube, so I'm begging you to please make one :(

Posted on 2014-07-26 14:01:23

There are a couple youtube guides for installing ESXI and creating virtual machines (https://www.youtube.com/wat... and https://www.youtube.com/wat.... For the PCI passthrough, I did find this video (https://www.youtube.com/wat... although it doesn't deal with editing the .vmx file from the datastore. If you follow those three videos and check our setup steps periodically, I think you should be able to get it working even if you don't have much ESXI experience.

Good luck!

Posted on 2014-07-28 19:19:18
Simo

Any idea if you can use a Radeon R9 280 from any vendor? e.g. MSI, AMD, Sapphire...

Posted on 2014-08-07 04:48:29

Yea, any brand should work fine.

Posted on 2014-08-07 19:48:46
Michael F

I know this is a year old, but I'll leave it here because I just found this today so others probably will in the future. VMware and Red Hat (primary KVM contributor) both recommend reference cards from Sapphire. Any card should work, but there are potential card BIOS issues.

Posted on 2015-12-03 03:14:09
Joe P

Would there be any way to take 1 graphics card, perhaps a Quadro, and divide it up so that multiple guests can draw from 1 graphics card instead of each guest having its own individual graphics card? I was hoping to maybe split a Quadro 6000 and assign about 256 MB from the GPU to multiple guests.

Posted on 2014-08-10 16:59:43

I don't believe you can do that with a Quadro card. As far as I understand, NVIDIA GRID cards are the only cards from NVIDIA that can do something like that. Unfortunately, they are headless cards (no ports), so you have to do virtual desktop streaming. Plus, they are pretty expensive. We did an article on it if you want to read about them though: http://www.pugetsystems.com...

Most likely just getting multiple cheaper cards would be the best route for a situation like yours.

Posted on 2014-08-11 18:56:35
Joe P

Hi, thanks for responding, I am very grateful. Could you please take a look at this 40-second YouTube video demonstrating sVGA on ESXi with a Quadro 6000? The author wrote this about what he was doing:


Tim Federwitz

1 year ago,

"I will have to answer this in multiple parts because of character limitation per comment. We are giving each VM 512MB of video memory. This reserves 256MB on the GPU and 256MB on the Host in RAM (so this will need to be taken into consideration in host sizing). The video card we are using is an Nvidia Quadro 6000 which has 6GB of VRAM. So in theory, and depending on the application and it's GPU load, you could assign 24 VMs to the card with 512MB video memory each."

Isn't that an option? I don't care too much about how the games look on a VM, I just need the games to function (20 fps average or a little lower is fine) and need about 30 of them. Is sVGA with a Quadro 6000 what I'm looking for? Maybe RemoteFX with Hyper-V? I heard there was a 12 VM limit with RemoteFX on Hyper-V, or did I misunderstand? And lastly, I hear that ESXi sVGA doesn't support DirectX past 9, whereas RemoteFX on Hyper-V supports DirectX 11?

Posted on 2014-08-11 19:02:51

Huh, it looks like my understanding was wrong - you can do sVGA with Quadro cards. It looks like the limitations are almost exactly the same as with GRID cards though, so the article I linked should actually apply almost directly in terms of usability and requirements (although we used Citrix XenServer + XenDesktop instead of VMWare ESXI + Horizon View).

I don't have much experience with RemoteFX, so I can't give you too much advice there. And I believe you are right that sVGA on ESXI only supports DX9.

The one thing I can tell you is that if you go the virtual desktop streaming route, you are going to need a lot of CPU power on the host to serve 30 clients and still have enough processing power left over to run 30 instances of a game. How much processing power you need is going to depend on exactly what you are doing, of course.

Posted on 2014-08-11 19:28:00
Joe P

By your guesstimate, would dual E5-2665 and 64 gb of 1600 ecc ram with a 1tb samsung evo ssd suffice for about 24-30? Mobo would be a supermicro x9da7. I already bought all these parts, just waiting to buy a gpu for this project. Do you think I'm on the right track?

Posted on 2014-08-11 19:30:59

It is really hard to estimate, so take this with a grain of salt. I *think* that will be enough to do the streaming itself OK, but I'm not sure how much CPU power you are going to have left over to run the game. My gut feeling is that dual E5-2665 might be enough for 15-20 clients, but not for 30. Again, it is really hard to estimate, especially not knowing exactly what game you will be running.

Posted on 2014-08-11 19:43:02
Joe P

Alright, thanks a lot Matt! That's all I ever wanted to hear

Posted on 2014-08-11 19:44:52

From what I can tell, the FX AMD processors should be able to do this, correct? Obviously the correct motherboard would be needed as well; I'm just thinking about making a 2, maybe 3 headed system out of an FX 8 core. Do you know if it's possible to, say, have an odd number (3?) of vCPUs, with the cores being "modules" and not a true 8 core? Thanks!

Posted on 2014-09-09 12:42:54

So long as it has the proper VMware support (including VT-d for PCI-passthrough), it seems to me like it should work fine. No problem with odd numbers of cores per VM. We actually ran our setup that way for a while!

Posted on 2014-09-10 04:18:56

VT-d is an Intel technology, so I don't think that will show up on an AMD processor / motherboard. They may have an equivalent technology, of course, but I haven't kept up on AMD chips as well the last couple of years... so you'd want to look into that beforehand.

Posted on 2014-09-10 04:50:40

Ahh good catch!

Posted on 2014-09-10 05:19:27

Thanks guys! In AMD speak it's referred to as AMD-Vi or IOMMU, from what I can tell. Also, in my forum trolling and research I've found that most AM3 and newer CPUs from AMD (Phenom IIs, the FX series, Opterons) will work as long as the motherboard has the functionality (the 970-990 chipsets seem to be your best bet, though some Asus boards have some issues). Once I found out that older Phenom IIs work (I've even seen some people use Regor-core Athlon X2s), I decided I'll run an FX series chip (not set on which CPU) in my main system, and run just a dual-headed system as a server/gaming station with my current Phenom II X4 830. Will update if you guys want.

Posted on 2014-09-10 12:43:04
Avatar Michael F

I would make sure to allocate each VM a multiple of 2 cores and not to over-provision, so that the hypervisor can grant it a monopoly over a module. If 2 VMs were competing for one of the shared FPUs, that could really kill performance in games. In a server that's not really an issue, because servers rarely do heavy FP calculations. The other concern is PCIe bandwidth: you absolutely must have a 990FX chipset, and even then it will only have 4 PCIe 2.0 x8 slots (x16 physical, x8 electrical), so high-end GPUs will be bandwidth starved.

Posted on 2015-12-03 03:25:42

Awesome computer, I wish I could afford it.

Posted on 2014-10-23 00:34:41

How much would it cost to build one of these with current hardware? I'm considering a 3 head setup for my 3 kids to use for making Minecraft videos and depending on the price, I may even look to have you guys build it.

Posted on 2014-12-11 19:55:50
Avatar Aleksandar Prokic

Matt, Where can I pick up sketches for this plexiglas case?

Posted on 2014-12-18 16:30:05

That chassis was just something we threw together to make it more attractive and eye catching at events, it's not really something we are planning on making publicly available. Honestly, it is a huge pain to assemble and work in since we didn't spend a ton of time making it more than pretty.

If you want to make your own, all we really did was take our test bench (http://www.pugetsystems.com... and make it an enclosure instead of an open air bench.

Posted on 2014-12-18 19:45:44
Avatar Setesh

In France we have the Bull Bullion S x86 servers.

This server can have 8 modules of 2-processor servers (they are stacked). It can have 24 TB of memory and 56 PCIe x8 slots.

In fact you could make a 56-headed gaming platform out of the box.

Last point... it's really too expensive...

Posted on 2014-12-24 07:34:55
Avatar Paul

Did you guys ever encounter issues with the USB passthrough? For example did the USB ever stop working and force you to reboot the VM or even have to reboot the host to get it working again?

Posted on 2014-12-26 20:49:24
Avatar inolen

Great article! You mentioned that a dual-Xeon setup could perhaps be more cost effective; which processors did you have in mind? From the numbers on cpubenchmark.net, it seems a dual Intel Xeon E5-2630 setup could perform similarly, but I'm not at all familiar with how the performance of a dual-CPU setup scales for real-world work (in my case, building a 4-headed gaming rig).

Posted on 2014-12-30 07:29:14

Usually the thing you have to be careful of with high core count systems (whether it be single CPU or dual) is that the software you are using will actually use all the cores. Sometimes you run into problems with software only using a single CPU instead of both, but usually you run into a core limit before that happens. However, since you will be dividing the cores among multiple virtual machines, it shouldn't be a problem.

A pair of E5-2630's should be about the same performance as what we used, so it should work fine. What I would recommend doing is to first determine your budget, then figure out how many cores you want per virtual machine (for gaming, 2-4 cores should be about right). After that, simply get the highest-frequency CPU(s) you can with those two considerations in mind.

Posted on 2014-12-30 19:30:45
Avatar George

Hello! Happy New Year!!

You mention the AMD Radeon R7 250 GPU. Did you actually try it? If yes, what were the results, both for video and audio via the HDMI output? Thank you

Posted on 2015-01-10 18:31:19
Avatar Robert Cipriani

I can't get mine to work; it's an XFX card. Running ESXi 6U2 on a DL380 G7 with an x16 PCIe riser. I've tried Windows 7 and 8, 64-bit. The AMD drivers (I've tried older versions of Catalyst and the newest Crimson) will install, but Radeon Settings says that no AMD drivers are installed, I have no OpenGL, etc. I've tried just about everything, no luck so far. I bought the card based on this article...

Posted on 2016-04-07 18:19:25
Avatar Matt S.

How did you guys do this part of your configuration?

"configure the display settings to use the physical monitor instead of the VMWare console screen"

I haven't found anything useful that describes how to do this. If you could point me in the right direction, that would be greatly appreciated.

Posted on 2015-01-12 21:41:28

That was just done through the Windows screen resolution GUI (like this http://winsupersite.com/sit.... It should show two monitors, one for the VMWare console screen and one for the monitor you actually have attached to the card. You just have to make sure the actual monitor is the primary display device.

Posted on 2015-01-12 23:32:18
Avatar Matt S.

Thanks for the quick response. Unfortunately your link does not work, but it seems straightforward. I'll give it a try. Also, did you disable the VMWare console screen or leave it active?

Posted on 2015-01-14 17:27:08

Huh, that's what I get for just linking to an image I found on Google I suppose.

We usually disabled the console screen, but only did so once we knew for certain that the machine was up and running completely. It is nice to have the console screen as a backup in case something goes weird with the virtual machine, but it is also really annoying to have the mouse be able to go off onto another screen that you can't even see.

Posted on 2015-01-14 19:56:53
Avatar Alan Latteri

How did you disable the console screen?

Posted on 2015-11-26 08:31:28
Avatar Seba

OK, I installed the Radeon and I see two monitors... but when I set the display to the monitor attached to my Radeon, I get the output on the card's video out on the host, not on the client's console. On the client I have to use the monitor attached to the VMware SVGA adapter. Where did I go wrong?

Posted on 2015-11-13 18:44:05
Avatar Rob

What version VMs are y'all using (8, 9, 10)? Also, how are y'all managing the VMs - Do y'all have a vCenter running somewhere or are y'all able to manage via the full client connected directly to the ESXi host?

After enjoying the benefits of virtualization at work the past few years, I decided to do something along similar lines, started testing things on a home setup, and ran across this article. The issue I'm seeing is that I'll need a separate machine for management, and will possibly need a vCenter, which isn't too cheap, even for the Essentials edition (or requires you to keep redeploying the free trial, bleh).

Posted on 2015-01-15 06:09:01
Avatar mic

I've been running a similar rig for a year with two HD 7970 OC cards. One VM uses a triple-screen setup and serves as my main workstation. The other is connected via HDMI over Cat6 to the living room TV, which results in a lightning-fast, invisible, and noiseless media PC. As the shared part: an AMD 8-core, 32 GB RAM, and a small 128 GB SSD as cache. The data stores are on a separate rig and provided via NFS. Besides the two virtualized workstations with physical screens, keyboards, mice, and USB DACs, I'm also running up to 10 virtual servers with stuff like Splunk, Nagios, and all kinds of security tools.

Now to the questions:

I'm running ESXi 5.5.0, build 1331820. Some versions break USB, other versions break PCI passthrough; either you get lucky and everything works, or it's a real pain to get it working.
Most likely some configs (e.g. more than 8 GB RAM per VM) will freeze the VM.
There are also other things that don't work with PCI passthrough.

The VM version is 8; I migrated at one point in time from 5.0 (it was a scary moment).
I manage the setup via my Windows 7 workstation that is running on top of ESXi.
As a fallback I'm using my notebook via SSH. I don't run vCenter because it's not freely available.

One more hint: the ESXi box runs 24x7, so it helps to manually put the GPUs into low-speed mode when no GPU power (gaming, etc.) is required. It makes a big difference in noise, heat, and power consumption.

Stability: doing this kind of thing is a stretch, and the hypervisor doesn't run as stably as it would without PCI passthrough. On average my rig crashes once every 60 days or so, which is kind of acceptable.

Posted on 2015-02-02 21:06:25
Avatar patpro

Hi, have you tried to upgrade to ESXi 6? I'm planning to build my own multi-headed workstation but I'm not sure about recent ESXi version (and hardware support).
Also, how do you manually put GPUs in low-speed mode? Is that a setting inside guest OS?

Posted on 2016-05-07 06:38:20
Avatar nacer

How about turning a GeForce card into a Quadro by hardware hacking? Has anyone tested that with ESXi?


Posted on 2015-01-31 10:39:33
Avatar shubham

Does it also work on the AMD Radeon 8670M series?

Posted on 2015-01-31 15:18:29
Avatar Todd Sirois

Ever since I first really grasped what a VM was, meddling with early VirtualBox builds, I've wanted to toss my HTPC and two gaming systems (one for me and the GF) and throw it all in one maintainable box. Rather than dealing with conflicting software, I just "embed" a VM in a working condition. Simply put, I don't have to worry if my printer driver causes me to crash, or brings me out of a game (as one example). I can leave one lean VM for games and only meddle with the software that is relevant; further, it is backed up by the underlying kernel running the VM server. Essentially that becomes your BIOS, only easier to deal with and a lot harder to "brick."

Scale it back to something like the $300 6-core i7 w/ HT:

1 gaming system with 2x GFX cards in CrossFire,

2 gaming systems, or

1 gaming system and an HTPC image

The Linux OS running the HV would be able to perform routine and needed tasks like OS backups, anti-virus, etc. As impressive as your achievement was, I think it only seemed superfluous because of the hardware involved. Simple hardware that supports virtualization need not be that expensive, nor the goals that lofty, for this to be inherently useful to the average user. Just running on VMs alone simplifies moving images from one system to another. Throw in remote streaming services (Splashtop, Steam In-Home Streaming) and you basically have wireless desktops on whatever display you want.

On mobile parts, a laptop can be hastily split into two workstations. CPU cores and especially memory are often under-utilized by the average user, at least efficiently; let's use them.

Posted on 2015-03-03 03:29:56

I have this project on Hackaday:

6 heads with OS X & Android-x86

Plus PCIe switching

Posted on 2015-03-21 14:15:32
Avatar L.A.B.

With a matrix video switch you could route the video and USB just about anywhere over thousands of feet. I'd love to set this up in a Crestron system, using the DM switch.

Posted on 2015-04-09 02:16:31
Avatar Robert Cipriani

How well would this work over RDP or another remote display protocol?

Posted on 2015-04-23 19:39:13

It depends on what program you are using for remote access and what performance you need. A lot of remote access applications are not GPU accelerated which kind of negates the whole point of passing through the GPU. Off the top of my head, Splashtop is one I can think of that utilizes the GPU (I know there are others, just can't think of them right now). The other problem is that most remote access applications will be limited to 30FPS or lower. If you are OK with that it should work just fine though.

Alternatively, you could just use Steam Streaming for games. We've tested it and it works great for games even up to 60FPS. The only problem is that Steam Streaming needs a monitor plugged into the system. If you don't want to do that, though, you can just get an HDMI monitor emulator (we've used this in the past: http://www.amazon.com/Compu...).

If you need to stream a bunch of virtual machines with GPU acceleration, a better choice would be to use an NVIDIA GRID card (https://www.pugetsystems.co...). It is more expensive and complex to set up, but the performance is really, really good and it is much more stable than doing PCI passthrough like we did in this article.

Posted on 2015-04-23 20:21:56
Avatar Robert Cipriani

I'm playing with a trial of Horizon View, which supports PCoIP, hopefully the R7 will work in vDGA mode.

Posted on 2016-04-07 20:59:01
Avatar Steven Lee

I plan on doing a setup like this, except with two heads instead of four. If anyone can answer this for me: I know that the ESXi hypervisor has to be managed by an external PC (finally a use for that old netbook). How does it cold boot? Does it go straight to the virtual OSes, or do you have to manually launch them with the laptop? In fact, how is power management handled, i.e. putting the PC into hibernate or sleep, etc.? What happens if a user shuts down their VM's "side"? Does that mean it has to be restarted from the vSphere client? Is this setup practical for home use?

Posted on 2015-06-20 22:00:35
Avatar Kevin Smith

I use SoftXpand for this. It's way cooler; you can use AMD and NVIDIA GPUs at the same time.

Posted on 2015-06-21 17:35:38
Avatar George Todd

I may have missed it but how was sound passed to the VM users?

Posted on 2015-08-02 19:12:54

I believe we just used the HDMI audio from the video card and then plugged headsets into the monitor's audio out when we did this project. You could also use either USB headsets or USB sound cards, and that should work great, although you may need to start using USB hubs to get enough USB ports.

Posted on 2015-08-03 17:44:01
Avatar Chris Smit

Was the sound also clear, so no stuttering at all?

Posted on 2015-12-10 23:00:00

The sound was actually very good, I couldn't tell any difference between normal HDMI audio and passed-through HDMI audio.

Posted on 2015-12-11 18:37:30
Avatar Steve J

Nice job. I've been working on a similar setup without using VMWare, which runs multiple games on a single host with multiple GPUs. My approach is different from yours in many ways and requires management software which I developed myself. I suppose using VMWare makes it more robust in the sense that its hypervisor is a proven technology (so to speak) and you don't have to worry about the details of it. Currently my setup runs four games with four GPUs. I just wanted to say you guys did a very fine job. Maybe someday we can have a discussion about the differences between our approaches. Cheers.

Posted on 2015-08-09 01:45:51
Avatar Bill Lindsay

Hi, sorry to comment on such an old thread, but I am trying to build a similar setup myself and ran into some issues. I will only be running 1 gaming VM off of it instead of 4 (I will be running other VMs, but just for various things I will be hosting).

The setup consists of:
Asus z9pe-d8 ws
2x E5-2660
AMD R5 220 (since there is no onboard video)
AMD R9 390 (to be passed through to the gaming VM)

I got ESXi installed and running, and got the 390 configured for passthrough, but as soon as I assign it to a VM, the VM refuses to power on (it tries to start, then after about 10s it just gives a very generic "failed to start" error).

VT-d is enabled and I made sure to add the above mentioned pcihole and passthrough commands to the vmx file.
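(For reference, the pciHole lines I mean look something like the following; these are the values commonly suggested for passthrough VMs with more than 2 GB of RAM, so treat them as a sketch rather than my exact config:)

pciHole.start = "2048"
pciHole.end = "4096"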

Not sure if it makes any difference but I have the R5 220 in PCIE 1, a PCIE SSD in slot 3 and the R9 390 in Slot 5.



Posted on 2015-08-10 23:27:39
Avatar Bill Lindsay

So after battling with this for a while I got the GPU working; it turned out removing the R5 220 fixed it. The downside is that I'm now passing through my only GPU.

USB is actually putting up a fight now, and I get "Failed to register the device pciPassthru2 for 0:29.0 due to unavailable hardware or software support". I tried both of the USB 2.0 controllers on my board, and it doesn't matter if the VM is EFI or not. I even tried a completely fresh VM with only the USB controller passed through, and it doesn't work.

Posted on 2015-08-16 21:48:29
Avatar Alexis Grassette

Would it be possible, instead of running 4 games at the same time, to have something like 2 games running at the same time plus a media server for streaming purposes?

Posted on 2015-08-24 21:21:00

Sure, that shouldn't be a problem. Just configure it as a normal virtual machine (it shouldn't need a GPU) and it should work just fine.

Posted on 2015-08-24 21:51:15
Avatar kira kira

I know this is an old post, but I'm lost reading this article; maybe you could clear something up for me. How did you manage to have the ESXi server also work as the client? I've been trying to figure out how to somewhat replicate this, and everyone seems to tell me this is not possible, so please HELP!!

Posted on 2015-09-20 17:53:53

ESXi isn't really a client per se; it is simply what is installed on the bare hardware. All the Windows operating systems are then installed onto virtual machines on the ESXi install.

Posted on 2015-09-21 16:58:27
Avatar kira kira

I understand that's how it works, but, for example, would one be able to have a hypervisor installed on a laptop and have multiple OSes running, with the ability to save snapshots or add an extra layer of security?

Posted on 2015-09-21 17:06:12

You could, but you would need a second system to remotely do any administrative tasks (like creating a snapshot).

Posted on 2015-09-21 18:38:14
Avatar kira kira

I see, hmmm, I'll have to try this out. Random question kinda related to this topic of gaming: VMware offers the ability to pass a device through, so are there any virtual machine setups that pass through Thunderbolt for an eGPU?

Posted on 2015-09-21 18:47:29
Avatar EnKay Kay

Am I missing something here? I can see all the passthrough setup on ESXi; however, I don't think it's possible to directly connect to VMs on the host terminal. Where are the client machines in this setup?

Posted on 2015-09-25 16:04:04
Avatar Butch Pornebo

I'm setting up something similar, BUT not as a gaming system. I've got a couple of questions.

1) The mouse and keyboard are connected through the USB passthrough, right?

2) I've got 1 video card with 2 VGA monitors attached to it, and I plan to install 2 guest operating systems under vSphere. Can I dedicate a connection of 1 VGA display to each guest OS, even though the monitors are physically attached to 1 video card?


Posted on 2015-10-04 22:28:03
Avatar Wesley

Did you guys have to inject any drivers to get the ASMedia USB 3.0 controllers to populate in the Advanced Settings (for setting up passthrough)? I've got a Gigabyte X99-UD4 mobo that has a Renesas USB 3.0 controller built in, but it doesn't populate in the passthrough section. However, the devices are detected when I use the lsusb command through PuTTY.

Posted on 2015-10-08 16:28:00
Avatar Sultan Islam

Hm, I was actually interested in this topic, so does virtualization allow threads to be passed through as well? If this is possible and there is no performance loss versus running natively, one could use a high-core-count Xeon and emulate two individual 8-core systems: run Windows for gaming, Mac OS X for editing, and the host Linux for HPC computing like OpenFOAM and FEA simulation. Since it's possible to pass through a video card, you can have a monitor array with a different VM per monitor. I also assume you give each VM its own physical hard drive to keep the systems separate?

Posted on 2015-10-20 05:24:59
Avatar Somethingsomething

ESXi does binary translation, and there is no performance loss or difference between running bare metal and running on top of the hypervisor inside a virtual machine. When you create a virtual machine you assign it resources: how many vCPUs, how much vRAM, hard drive capacity, etc. These are taken from the host system. A good way to think about this is 1 vCPU = 1 logical core, although over-subscription of the CPU is possible depending on the workloads (i.e. you can give away 2x-3x more vCPUs than you have logical cores); cores are not statically assigned to a virtual machine, as VMware uses a scheduler to manage the cores and processes.

RAM is harder to over-subscribe, since RAM usage in most cases is fairly constant. I suggest a 1:1 ratio between physical RAM and vRAM, but mileage will vary.

Let's say your system has a single SSD: when you create a VM, a folder gets created on the storage, and inside that folder is a .vmdk file, which is basically the virtual disk for the VM. You can create many VMs on a single drive without the need to partition.

Posted on 2016-01-15 14:23:44

Can you recommend an affordable card that can drive two 2560x1440 displays? They're MST compatible, so an older Radeon 6000-series card might work.
Should I get something like an NVIDIA NVS 310 or a FirePro W2100?

Oh, and I'm going to pack this into a pretty small case, so power consumption shouldn't be big.

Posted on 2015-10-28 01:53:24

GeForce cards do work with passthrough on ESXi 6: I just tested it using a GeForce Titan Z and a Titan X installed on the same system, which allowed me to share THREE GPUs (the Titan Z is actually 2 Titan Xs, so I had 3 Titan Xs pointing at 3 virtual machines). It was great, but then I tried something even BETTER: well, maybe not so much for gaming, but for really slick multimedia workstations. I use an i7 5960X with 64 GB of DDR4-3000, overclocked to 4.5 GHz, and a RAID 0 consisting of an LSI MegaRaid .8i and a total of 8 Samsung 850 512 GB SSDs; this RAID 0 array pumps out blistering speed on a 12 Gbps transfer bus. Anyway, using Hyper-V Server 2008 I was able to share my three GPUs with 30 virtual machines! I literally have 30 Windows 7 Enterprise 64-bit VMs with 2 GB of RAM each running on Hyper-V Server 2008 core, and all 30 virtual machines can each play a DVD movie simultaneously with zero frame drops, with the utilization smartly clustered across the cards. 1 server with 30 virtual multimedia machines... HOT!

Posted on 2015-11-30 11:52:58
Avatar habwich

Raytech70, did you directly pass a Titan X and/or Titan Z through into a VM, and get a display and actually game in a VM without getting an Error code 43 in Windows Device Manager? Because if what you say is true, you may have found a way to dedicate a full GeForce graphics card passthrough to a VM, which is unheard of in ESXi. I have been trying this for years and gave up. Is this true? Can you make a video on YouTube as proof of you gaming and showing off you sick build in a vm with a titan?

Posted on 2015-12-25 00:23:47
Avatar habwich

typo* your sick build in a vm with a titan?

Posted on 2015-12-25 00:24:44
Avatar Somethingsomething

Passthrough to a virtual machine is possible, but some of the consumer-grade GeForce cards don't support it. I believe the Titan does (at least I have come across many people claiming it does), and I know for sure the GRID cards support it. This passthrough feature was released a few years ago mainly as a Horizon View feature, to allow high-performance 3D support in a virtual machine for CAD, medical imaging, and other use cases that required 3D.

I tried passing through my GTX 970 on ESXi 6.0 and got the error code you discussed, but like I said, I have come across a few people who were successful with the Titan line and AMD consumer cards.

Source: I am a VMware presales engineer.

Posted on 2016-01-15 14:15:41
Avatar patpro

Titan GPUs are off limits for me, too expensive. I'm really interested in the GTX 1070 that has just been announced. Feel free to share any news about NVIDIA GPU passthrough in ESXi 6 :)

Posted on 2016-05-07 06:54:55
Avatar Kazim Naim


Did you find a solution for the Titan X?

Posted on 2016-06-02 14:47:21
Avatar Kazim Naim

Hi Ray

I am trying to run a TITAN X without any success.

VT-d and VT-x are enabled (on the same host I can run a Quadro K6000).

The Titan X won't run; it shows code 43 under Windows.

Have you seen anything similar?



Posted on 2016-06-02 14:46:44
Avatar Jens

Anyone got this working with an R9 380? Can't get a hold of a 280 :(

Posted on 2015-12-07 18:39:44
Avatar Arnaud pam

Easy to go more than 5000 feet; you just need imagination while respecting all the standards :) You can scale up to more than 1000 users in an entire building. Hahaha, how? Pay me a license :)

My first test was in April 2011, and it has worked perfectly since then.

Good luck in your project

Posted on 2016-02-20 18:44:27
Avatar deadlydeadly

Hi, I know this is an old post, but can you still answer a question?
Can I use my onboard VGA as the secondary adapter? I'm poor as hell, and my friend only plays MOBAs like Dota 2.

Posted on 2016-02-29 09:37:08

This is a very interesting article. Thanks for sharing.

Posted on 2016-02-29 18:19:13
Avatar Pitágoras Rooglas

Is it possible to assign more than one monitor/user per video card, allowing for more than 4 users? Thanks.

Posted on 2016-03-13 17:26:23

Not with standard video cards. To do that, you need to go up to a card like the NVIDIA GRID cards - although those are designed for network streaming, not local access. We have a similar article to this one about GRID if you are interested: https://www.pugetsystems.co...

Posted on 2016-03-14 16:59:22
Avatar Robert Cipriani

DL380 G7 running ESXi 6.0.0 U2
VT-d and other virtualization related stuff is enabled in BIOS
Windows 7 64 bit VM (also tried 8.1)
8 GB ram allocated to VM and is "all locked"
Virtual hardware is version 11 but I also tried version 8
XFX Radeon R7 250 in an x16 riser card
Video and HDMI audio are passed through to the VM
pci hole and other recommended params set in the .vmx
VM is set to EFI boot

The AMD drivers will install but don't work properly. Radeon Settings tells me that no AMD drivers are installed. If I go to Advanced Settings in the Screen Resolution dialog, it says the adapter is Microsoft Basic (in Windows 8), even though Device Manager shows the card as a Radeon R7 200 Series (and there's no yellow exclamation point or anything). Minecraft won't run; it says there's no OpenGL. I've tried the latest Crimson and also Catalyst 14.4, same results.

Any suggestions? I bought this card after struggling to get a GeForce to work, and it's behaving essentially the same way. I've got a Raspberry Pi with the Horizon client waiting to connect to this thing :D, but it's driving me nuts.

Posted on 2016-04-07 18:27:39
Avatar Robert Cipriani

Hmmm...I should also point out that I'm trying to get the drivers set up via the VM console. Do I need a physical monitor attached for this to work? My ultimate goal is to use the Pi as a thin client with PCoIP. Could the lack of a real monitor be the issue?

Posted on 2016-04-07 18:42:19
Avatar Robert Cipriani

Update: Yes, not having a display physically attached to my R7 was a problem - the driver isn't active when there's no actual display. When I connect one I get output from the physical card, and the VMware SVGA console goes black. Horizon, however, wants to connect to that display instead of the Radeon. My next step is to attach a local USB keyboard/mouse to the server so I can log in, make sure 3D acceleration works, then disable the VMware console adapter. If that works, I can either just leave the monitor attached and try the Horizon client again (it doesn't even have to be powered on), try finding an EDID emulator (software or hardware), or try an Avocent KVM dongle on one of the DVI ports.

Posted on 2016-04-08 13:53:38
Avatar Robert Cipriani

Ok, so the Radeon, along with a passed-through USB controller and keyboard/mouse, works beautifully if I sit in front of the server. I cannot get Horizon View to connect to the GPU-accelerated display, even after disabling the VMware console. I just get a black screen in Horizon, and it eventually disconnects. I'm not even sure if this is possible; it's almost certainly not supported. I'm going to see if RemoteFX can be made to work with the physical GPU. I installed NoMachine as a test but wasn't impressed, and it borked RDP completely.

Posted on 2016-04-11 18:55:23
Avatar jon jon goufema liames zenbin

Are you from India? I don't believe you... sorry.

Posted on 2016-10-11 02:27:10
Avatar Brandon Bridges

I would like to try something similar, only for retro gaming purposes. I am thinking of getting an older Xeon with a motherboard that has multiple PCI-E and PCI slots, and putting in GPUs from different eras in order to play games that you would normally never be able to play all on the same system. So I'd have a Windows 98 VM with a Voodoo5 PCI attached to it, a Windows 98 VM with one of the very first PCI-E cards attached to it, then a Windows XP machine with a more powerful GPU, and maybe a DOS VM with a 2D card and a Voodoo 1 or 2. I was curious if anyone has tried anything like this.

Posted on 2016-12-02 19:21:06
Avatar Hai Tu


Do you have any recommendations for someone who's just passing through one GPU (a Radeon 390) while using the onboard GPU from the motherboard? Whenever I boot up, I get a BSOD (Windows 10) that references atikmpag.sys.

Posted on 2017-01-03 21:12:02
Avatar Eric Volker

Well, thanks for confirming one of the problems I was having. I was seeing that Error 43 message when trying to get a GTX 670 to work in a VM. Kind of moot anyway, since the positioning of the PCIe power cables meant the card didn't quite fit in the case. The system currently has an ancient Quadro FX 3800 that works quite nicely, but is a poor performer. It can run classic Skyrim at playable framerates, though. Do newer GeForce cards work, say from the Maxwell or Pascal architectures?

Linus of Linus Tech Tips reported that he had problems with AMD cards not working after soft resetting the VM. He would have to physically reboot the entire VM server if he had to reboot any one of his 7 VMs with Fury Nanos. I take it that this is not a problem with the older R9/R7 series, correct? Or was it because Linus was using Unraid instead of ESXi?

I'm also having a lot of issues with USB passthrough. Passing through the host's keyboard and mouse does not work, and if I pass through a PCI USB controller, USB hotplug doesn't work. In other words, if a USB device is attached *at boot* to the VM, it is recognized and usable. However, if I "unplug" the device (i.e., switch away with a KVM switch), the device will *not* be recognized when it's reattached to the VM. If I remote into the VM, go to Device Manager, and "Scan for hardware changes", the VM will detect the USB device. It's rather frustrating and inconvenient. Anyone know of a fix?

I know these comments are old and dusty, but I'm hoping someone can give me some insights.

Posted on 2017-01-31 04:58:00
Avatar Eric Volker

Finally got my GeForce GTX 670 to work in a VM. The secret is to add this parameter to your VM's *.vmx config file:

hypervisor.cpuid.v0 = "FALSE"

I'm not exactly sure what this does, but it apparently tricks the guest OS into believing it's running on bare metal instead of in a VM. This also fools the NVIDIA driver into not giving the Error 43 message we all know and love. One odd consequence of this config file parameter is that Task Manager now shows CPU usage permanently at zero percent, and it doesn't show usage for individual processes either; they're all zero. It's also been implied that this setting can introduce a performance penalty, but I'm getting near-native performance levels from my GTX 670.

Posted on 2017-02-03 00:45:23
Avatar denyw

Is there any known method to pass a single GPU through to multiple VMs? I've googled this but can't find anything. Most video cards nowadays have more than a single output...

Posted on 2017-03-02 14:01:02
Avatar Raefaldhi Amartya Junior

I want to try this innovation in my internet cafe, but I wonder about the electricity bill.
Basically the PCs won't run 24/7, so which is cheaper: a 4-PC rig, or 1 PC (4 VMs)?

Posted on 2017-06-16 00:34:32
Avatar Artemiy

Sorry for commenting on such an old post, but maybe someone can answer my questions please? :)
Is it possible to set up something like this without an external monitor, keyboard, and mouse? I just need to pass a GPU through to a virtual machine and be able to load it in a separate window from the host, like it's done in VirtualBox or VMware Workstation. Is that possible?

Posted on 2017-09-09 12:25:21
Avatar Jim Copeland

If you used the Steam In-Home Streaming feature, I wonder if it would be possible to stream up to 4 different gaming sessions to different tablets or devices within the LAN. This way, if some friends each had a basic laptop but wanted to come over and play, they would just need to connect to your LAN, have Steam installed, and stream away.

Posted on 2018-08-13 22:17:05
Avatar cheff0r

Code 43 in ESXi: add the setting hypervisor.cpuid.v0 = "FALSE"
Do make sure you use the "" quotes around FALSE, even if no other setting has them. It only works WITH the quotes.

Posted on 2018-11-02 10:43:03

Very nice article, thanks a lot.

Posted on 2019-02-06 14:03:14

Would remote desktop connections cause significant lag on the VMs?

Posted on 2019-02-26 04:00:35

Nice article


Posted on 2020-07-25 13:52:22

Your website is a great one, and this is a really nice article, my brother; keep sharing more.

Posted on 2021-01-18 15:40:44