
Multi-headed VMWare Gaming Setup

Written on July 9, 2014 by Matt Bach
Table of Contents:
  1. Introduction
  2. Hardware Requirements
  3. Virtual Machine Setup
  4. Performance and Impressions
  5. Conclusion & Use Cases
  6. Extra: How far is too far?

Introduction

At Puget Systems, we are constantly trying out new (and sometimes old) technologies in order to better serve our customers. Recently, we were given the opportunity to evaluate desktop virtualization with NVIDIA GRID, which uses GPU virtualization and virtual machines to stream a virtual desktop with full GPU acceleration to a user. NVIDIA GRID is built around streaming the desktop, which requires robust network infrastructure and high quality thin clients. Even with the best equipment, there is latency, video compression, and high CPU overhead. These can be worked around for many applications, but they are all big turn-offs to gamers.

With that in mind, we set out to build a PC that uses virtualization technologies to allow multiple users to game on one PC, but with no streaming and no additional latency because all of the user inputs and outputs (video, sound, keyboard and mouse) are directly connected to the PC. By creating virtual machines and using a mix of shared resources (CPU, RAM, hard drive and LAN) and dedicated resources (GPU and USB), we were able to create a PC that allows up to four users to game on it at the same time. Since gaming requires minimal input and display lag, we kept the GPUs and USB controllers outside of the shared resource pool and assigned them directly to each virtual OS, which allows the keyboard/mouse input and video output to bypass the virtualization layer. The end result is a single PC running four virtual machines, each of which behaves and feels like any other traditional PC.

Hardware Requirements

Unlike a normal PC, there are a number of hardware requirements that limit what hardware we were able to use in our multi-headed gaming PC. Since we are using virtual machines to run each virtual OS, the main requirement is that the motherboard and CPU both support virtualization, which on Intel-based systems is most commonly called VT-x and VT-d. Checking a CPU for virtualization support is usually easy since it is listed right in the specifications for the CPU. Motherboards are a bit trickier since virtualization support is often not listed in the specs, but typically either the BIOS or the manual will have settings for "VT-d" and/or "Intel Virtualization Technology". If those options (or different wording of the same setting) are available, then virtualization and PCI passthrough should work.
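
If you want a quick sanity check before committing to hardware, booting a Linux live USB on the system and looking at what the CPU and kernel report is a rough but effective test. A minimal sketch of that kind of check (output varies by distribution, and the VT-d check only reports anything if it is enabled in the BIOS):

# A non-zero count means the CPU advertises hardware virtualization (vmx = Intel VT-x, svm = AMD-V)
grep -E -c '(vmx|svm)' /proc/cpuinfo

# VT-d / IOMMU support shows up in the kernel log once it is enabled in the BIOS
dmesg | grep -i -e DMAR -e IOMMU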

Also, since we are passing video cards through to each virtual OS, the video card itself needs to actually support PCI passthrough. This was the most difficult hardware requirement to figure out since video cards do not list anywhere (at least that we could find) whether or not they support it. In our research and contact with manufacturers, we found that almost all AMD cards (Radeon and FirePro) work, but from NVIDIA only Quadro and GRID cards (not GeForce) officially support it.

We tried to get a number of GeForce cards to work (we tested with a GTX 780 Ti, GTX 660 Ti, and GTX 560), but no matter what we tried they always showed up in the virtual machine's device manager with a code 43 error. We scoured the internet for solutions and worked directly with NVIDIA but never found a good solution. After a lot of effort, NVIDIA eventually told us that PCI passthrough is simply not supported on GeForce cards and that they have no plans to add it in the immediate future.

Update 8/1/2014: We still don't know of a way to get NVIDIA GeForce cards to work in VMWare, but we have found that you can create a multiheaded gaming PC by using Ubuntu 14.04 and KVM (Kernel-based Virtual Machine). If you are interested, check out our guide.

In addition, we also had trouble with multi-GPU cards like the new AMD Radeon R9 295x2. We could pass it through OK, but the GPU driver simply refused to install properly. Most likely this is an issue with passing through the PCI-E bridge between the two GPUs, but whatever the actual cause, the end result is that multi-GPU cards currently do not work well with PCI passthrough.

While this list is nowhere near complete, we specifically tested the following cards during this project:

Cards that work:
  • AMD Radeon R9 280
  • AMD Radeon R7 250
  • AMD Radeon HD 7970
  • NVIDIA Quadro K2000

Cards that don't work:
  • NVIDIA GeForce GTX 780 Ti
  • NVIDIA GeForce GTX 660 Ti
  • NVIDIA GeForce GTX 560
  • AMD Radeon R9 295x2

For our final testing configuration, we ended up using the following hardware:

Four Radeon R9 280 video cards put out quite a bit of heat - especially when stacked right on top of each other - so we had to have a good amount of airflow in our chassis to keep them adequately cooled. There are a number of chassis available that have enough PCI slots and good fan placement like the Rosewill Blackhawk Ultra or Xigmatek Elysium that would work for this configuration, but for our testing we used a custom acrylic chassis since we were planning on showing this system off in our booth at PDXLAN 24.

Our test system in a custom acrylic enclosure with four Asus Radeon R9 280 DirectCU II video cards. Note the four groups of keyboard/mouse/video cables in the third picture that go to the four sets of keyboard/mouse/monitor.

A very similar system we recently built for a customer (for a completely different purpose) in a Xigmatek Elysium chassis with four XFX Radeon R9 280X video cards.

Virtual Machine Setup

Since we want to have four operating systems running at the same time, we could not simply install an OS onto the system like normal. Instead, we had to run a virtual machine hypervisor on the base PC and create multiple virtual machines inside that. Once the virtual machines were created, we were then able to install an OS on each of them and have them all run at the same time.

While there are many different hypervisors we could have used, we chose to use VMWare ESXI 5.5 to host our virtual machines since it is the one we are most familiar with. We are not going to go through the entire setup in fine detail, but to get our four virtual machines up and running we performed the following:

Step 1
We installed the VMWare ESXI 5.5 hypervisor on the system and assigned a static IP to the network adapter so we could remotely manage it.

VMWare ESXI setup
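
Assigning the static IP is normally done through the console UI on the host itself, but the same thing can also be scripted from the ESXi shell. A rough sketch with placeholder addresses (vmk0 is the default management interface; substitute your own IP, netmask and gateway):

# Set a static address on the management interface
esxcli network ip interface ipv4 set -i vmk0 -t static -I 192.168.1.50 -N 255.255.255.0
# Add the default gateway
esxcli network ip route ipv4 add -n default -g 192.168.1.1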

Step 2
In Configuration -> Advanced Settings -> Edit, we selected the devices we wanted to pass through from the main system to the individual virtual machines. In our case, we passed through two USB 2.0 controllers, two USB 3.0 controllers, and the four AMD Radeon R9 280 video cards (both the GPU itself and the HDMI audio controller).

VMWare ESXI PCI Passthrough
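
If it is hard to tell which entry in the vSphere Client corresponds to which physical device, the ESXi shell can help. A rough sketch (the exact output format differs between ESXi versions):

# List every PCI device on the host with its address and description
esxcli hardware pci list

# A shorter view, filtered down to the devices we care about for passthrough
lspci | grep -i -e display -e usb -e audio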

Step 3
Next we created four virtual machines and added one USB controller and one video card to each machine's PCI devices, making sure to include both the GPU itself and the HDMI audio device. Figuring out which USB controller was which on the motherboard was a matter of trial and error, but we eventually got it set up so that we knew which USB ports were allocated to each virtual machine.

For each virtual machine we assigned 4 vCPUs, 7GB of RAM, and 180GB of storage space.
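
We made these assignments through the vSphere Client, but for reference they end up as plain keys in each VM's .vmx file, roughly like the sketch below (the pciPassthru entries also carry device and bus IDs that the client fills in automatically, which are omitted here):

numvcpus = "4"
memsize = "7168"
pciPassthru0.present = "TRUE"
pciPassthru1.present = "TRUE"
pciPassthru2.present = "TRUE"

In a layout like ours, the three passthrough entries would correspond to the GPU, its HDMI audio device, and the USB controller, in whatever order they were added.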

Step 4
With the PCI devices added, we next changed the boot firmware from BIOS to EFI. This was really only required to get the USB 2.0 controllers to function properly on our motherboard as a passthrough device, but for the sake of consistency we changed all of the virtual machines to EFI.
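
In the .vmx file this boils down to a single line (the same setting the vSphere Client exposes in the VM's boot options), shown here just for reference:

firmware = "efi"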

Step 5
For the final configuration step, we opened the datastore under Configuration -> Storage, downloaded the .vmx file for each virtual machine, added

pciHole.start = "1200"
pciHole.end = "2200"

to the file and re-uploaded it to the datastore. This was required for the AMD cards to pass through properly to the virtual machines. If you are doing this yourself and are having trouble, you can also try adding pciPassthru0.msiEnabled = "FALSE", where "0" is the passthrough index of the device; add one such line for both the GPU and the HDMI audio device.
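
Putting that together, the passthrough-related additions to one of our .vmx files looked roughly like this. Treat the pciPassthru indexes as placeholders (they depend on the order the devices were added), and the msiEnabled lines are only needed if you run into problems:

pciHole.start = "1200"
pciHole.end = "2200"
pciPassthru0.msiEnabled = "FALSE"
pciPassthru1.msiEnabled = "FALSE"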

VMWare ESXI datastore .vmx file

 

With all of this preparatory setup complete, we were able to install Windows 8.1 through the vSphere client console. Once we had the GPU driver installed, we were able to plug a monitor and keyboard/mouse into the appropriate GPU and USB ports, configure the display settings to use the physical monitor instead of the VMWare console screen, and complete the setup and testing as if the virtual machine were any other normal PC.

Performance and Impressions

Since resource sharing makes it very difficult to benchmark hardware performance, we are not going to get into benchmark numbers. The performance of each virtual OS depends entirely on what hardware is in the system and how that hardware is allocated to each virtual machine. Instead of FPS, what we are really concerned about is whether there is any input or display lag. Since gaming is so dependent on minimizing lag, it is also a great way to test the technology in general: if there are no problems while gaming, then less demanding tasks like web browsing, word processing, and Photoshop should be no problem.

To get a subjective idea of how a multi-headed gaming system performs, we loaded four copies of Battlefield 4 onto the four virtual machines and "borrowed" some employees from our production department to test out our setup. After getting the video settings dialed in (2560x1440 with med/high settings gave us a solid 60 FPS in 48 man servers), we simply played the game for a while to see how it felt. Universally, everyone who tried it said that they noticed absolutely no input or display lag. So from a performance standpoint, we would call this a complete success!

Screenshot of Battlefield 4 running on 1/4 of our test system (one of the four virtual machines)

2560x1440 with a mix of medium and high settings gave us >60 FPS in 48 man servers

With four people playing Battlefield 4 at the same time, our setup turned out to be nicely balanced: neither the CPU nor the GPUs were a clear bottleneck, with both sitting near 100% utilization. In other words, an ASUS Radeon R9 280 paired almost perfectly with four shared cores from the Xeon E5-2695 v2. However, keep in mind that the Xeon E5-2695 v2 we used is pretty much the fastest Xeon CPU currently available, so if you are considering doing this yourself you may run into a CPU limitation with four people gaming at once. Of course, you could easily use a dual CPU motherboard with a pair of more reasonably priced Xeons to get even more CPU power for your dollar than what we have in our test system.

Conclusion & Use Cases

Getting four gaming machines running on a single physical system was not quite as easy as we originally expected, but it worked very well once we figured out all the little tricks. The only obstacle we could not overcome was the fact that NVIDIA GeForce cards do not support PCI passthrough; the other issues we ran into were all resolved with small configuration and setup adjustments. With those figured out, it really was not overly difficult to get each virtual OS up and running with its own dedicated GPU and USB controller.

Performance-wise, we are very impressed with how well this configuration worked. Being a custom PC company, we have plenty of employees who enjoy gaming, and none of them noticed any input or display lag. But beyond the cool factor, what is the benefit of doing something like this over using four traditional PCs with specs similar to each of the four virtual machines?

4 monitors, keyboards, and mice off one PC

  • Hardware consolidation - This probably is not terribly important to many users, but having a single physical machine instead of four means that you have less hardware to maintain and a smaller physical footprint.
  • Shared resources - For our testing we assigned only four CPU cores to each virtual machine since we knew we would be loading all of them equally, but with virtualization you would typically over-allocate resources like CPU cores to a much greater extent. Instead of equally dividing up a CPU, you can assign anything from a single core to every core on the physical CPU to each virtual machine. Modern virtualization is efficient enough to dynamically allocate resources to each virtual machine on the fly, which gives each machine the maximum performance possible at any given time. Especially in non-gaming environments it is rare to use more than a small percentage of the CPU's available power the majority of the time, so why not let your neighbor use it while they run Photoshop or encode a video?
  • Virtual OS management - Since virtual environments are primarily designed for servers, they have a ton of features that make them very easy to manage. For example, if you want to add a new virtual machine or replace an existing machine that is having problems, you can simply make a copy of one of the other virtual machines. You still need to change things like CD keys and the PCI passthrough devices, but the new machine will be up and running in a fraction of the time it would take to install it from scratch. In addition, you can use "snapshots" to capture an image of a virtual machine at any point. These images can be used to revert the virtual machine back to a previous state, which is great for recovering from things like viruses. In fact, you can even set a virtual machine to revert to a specific snapshot whenever it is rebooted, so you don't have to worry about what a user might install - the machine will automatically revert to the specified snapshot when it is shut down (a rough command-line sketch of working with snapshots follows this list).
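
Snapshots are normally managed through the vSphere Client, but ESXi's shell exposes the same operations, which is handy if you want to script a kiosk-style reset. A rough sketch (the VM ID and snapshot ID come from the first and third commands; the name and description are placeholders):

# List the registered virtual machines and their IDs
vim-cmd vmsvc/getallvms

# Take a snapshot of VM 1: name, description, include-memory flag, quiesce flag
vim-cmd vmsvc/snapshot.create 1 "clean" "fresh Windows install" 0 0

# List the VM's snapshots, then revert to one of them by ID
vim-cmd vmsvc/snapshot.get 1
vim-cmd vmsvc/snapshot.revert 1 <snapshotId> 0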

As for specific use-cases, a multi-headed system could be useful almost any time you have multiple users in a local area. Internet and gaming cafes, libraries, schools, LAN parties, or any other place where a user is temporarily using a computer would likely love the snapshot feature of virtual machines. In fact, you could even give all users administrator privileges and just let them install whatever they want since you can have the virtual machine set to revert back to a specific snapshot automatically.

In a more professional environment, snapshots might not be as exciting (although they would certainly still be very beneficial), but the ability to share hardware resources to give extra processing power to users when they need it would be very useful. While it varies by profession, most employees spend the majority of their time doing work that requires little processing power, intermixed with periods where they have to wait on the computer to complete a job. By sharing resources between multiple users, you can dramatically increase the amount of processing power available to each user - especially if it is only needed in short bursts.

Overall, a multi-headed system is very interesting but is a bit of a niche technology. The average home user would probably never use something like this, but it definitely has some very intriguing real-world benefits. Would something like this be useful in either your personal or professional life? Let us know how you would use it in the comments below.

Extra: How far is too far?

For this project we used four video cards to power four gaming virtual machines because that was a very convenient number given the PCI slot layout and the fact that the motherboard had four onboard USB controllers. However, four virtual machines is not at all the limit of this technology. So just how many virtual desktops could you run off a single PC, with each desktop still having direct access to its own video card and USB controller?

The current Intel Xeon E5 CPUs have 32 available lanes that PCI-E cards can use. A quad Xeon system therefore gives you 128 PCI-E lanes, which you could theoretically divide into 128 individual PCI-E x1 slots using PCI-E expanders and risers. The video cards would likely see a bit of a performance hit unless you used very low-end cards, but by doing this you could technically get 66 virtual personal computers from a single quad Xeon system: assuming the motherboard has four onboard USB controllers, four of the desktops only need a lane for their GPU, and the remaining 124 lanes give you 62 more GPU-plus-USB-controller pairs.

Is 66 virtual machines off a single box too far? Honestly: yes. The power requirements, cooling, layout and overall complexity are pretty ridiculous at that point. Plus, how would you even fit 66 users around one PC (if it could even be called a PC at that point)? USB cables only have a maximum length of about 16 feet, so you would very quickly run out of space to put people. Really, at that point you should probably look into virtual desktop streaming instead of the monstrosity we mocked up in Photoshop below.

Theoretical quad Xeon PC running 66 video cards and USB controllers

A Quad Xeon system could theoretically run 66 video cards and USB controllers at PCI-E x1.
(Yes, this picture is photoshopped)

What do you think? How many virtual desktops do you think is right to aim for with a setup like this?

Tags: VMWare, ESXI, PCI passthrough, virtualization, virtual machine, multi-head, gaming
gotbliss56

What stops Bluetooth technology from being used here so you don't have to use USB cables? I know some gamers like to be hardwired but I don't notice any lag with my Logitech wireless mouse and keyboard.

Posted on 2014-07-10 01:56:30
jbanks

I'm not sure why they told you geforce doesn't support PCI passthrough. I have it working nicely currently.

My setup is a linux host + windows guest using qemu+kvm.

The linux host is running some crappy ati card while the guest is given a gtx 760.

You can see other setups on https://bbs.archlinux.org/view...

Posted on 2014-07-10 14:19:25
ScathArach

I am guessing that GeForce doesn't support proper passthrough in a hypervisor environment, which is what they are testing on.

Posted on 2015-06-16 16:39:22
Michael F

KVM is a hypervisor of sorts - it is part of the Linux kernel, so most distributions pull double duty as full OS and hypervisor. The reason it works with KVM is that it is not a commercial project, so there are some legal barriers VMware has that KVM doesn't.

Posted on 2015-12-03 03:09:24
Cypher720

Could use a converter for longer distances with hdmi/usb cables over cat5/6.
hdmi: http://www.hdtvsupply.com/hdto...
usb: http://tinyurl.com/pu3l3xx

Pros: Reduce the number of physical towers in the building.
Multiple roles. It could be a HTPC/kitchen/gaming/kids pc all in one.

Limiting factors: Higher initial cost as well as upgrades, cable runs, difficulty upgrading parts. Having things break whenever a part is upgraded would be annoying. Technically advanced configuration.

Posted on 2014-07-10 22:31:52
DLConspiracy//

Can I please have that rig? I will donate all my gaming life to it.

Posted on 2014-07-10 22:38:33
hiho

How did you divide 12 cores into 4 cores each across 4 stations? Or did you count hyperthreading as cores, in which case you had 3 machines on physical cores and the 4th on hyperthreading, which is not feasible as far as I know? Could anybody explain, or is there a mistake?

Posted on 2014-07-13 07:47:56
Joe P

Which would be a better performance an amd r9 280 or a Nvidia Quadro 4000? Not the K4000 since that's about 100 dollars more expensive, but the r9 280 and the Nvidia Quadro 4000 are about the same price, anyone know which one I should get? And can I not just use a r9 290? The article only mentions r9 290x not the r9 290?

Posted on 2014-07-26 11:38:20

It really depends on what you are doing. If you are gaming then a R9-290/R9-290X (either should work) would be much better than a Quadro 4000. If you are making more of a workstation-type system (for AutoCAD, video encoding, etc) then the Quadro would be a better choice in my opinion.

Posted on 2014-07-28 19:12:11
Michael

From what I read at various vmware forums the 290x does not work in passthrough with esxi 5.1 or 5.5. If someone manages to make it work let me know, I went down the route of modding my 780ti to a Quadro K6000, works a treat!

Posted on 2014-08-13 09:14:38
Joe P

Would there be any way to take 1 graphics card, perhaps a Quadro, and divide it up so that multiple guests can draw from 1 graphics card instead of each guest having their own individual graphics card? I was hoping to maybe split a Quadro 6000 and assign about 256 MB from the GPU to multiple guests.

Posted on 2014-08-10 16:59:43

I don't believe you can do that with a Quadro card. As far as I understand, NVIDIA GRID cards are the only cards from NVIDIA that can do something like that. Unfortunately, they are headless cards (no ports) so you have to do virtual desktop streaming. Plus, they are pretty expensive. We did an article on it if you want to read about them though: http://www.pugetsystems.com/la...

Most likely just getting multiple cheaper cards would be the best route for a situation like yours.

Posted on 2014-08-11 18:56:35
Glenn Thompson

From what I can tell, the FX AMD processors should be able to do this, correct? Obviously the correct motherboard would be needed as well. I'm just thinking about making a 2, maybe 3 headed system out of an FX 8 core. Do you know if it's possible to, say, have an odd number (3?) of vCPUs with the cores being "modules" and not a true 8 core? Thanks!

Posted on 2014-09-09 12:42:54

So long as it has the proper VMware support (including VT-d for PCI-passthrough), it seems to me like it should work fine. No problem with odd numbers of cores per VM. We actually ran our setup that way for a while!

Posted on 2014-09-10 04:18:56

VT-d is an Intel technology, so I don't think that will show up on an AMD processor / motherboard. They may have an equivalent technology, of course, but I haven't kept up on AMD chips as well the last couple of years... so you'd want to look into that beforehand.

Posted on 2014-09-10 04:50:40
Glenn Thompson

Thanks guys! In AMD speak its referred to as AMD-Vi or IOMMU from what I can tell. Also in my forum trolling and research I've found that most AM3 and newer CPUs (Phenom II's, FX Series, Opterons) from AMD will work as long as the motherboard has the functionality. (970-990 seem to be your best bet, though some Asus boards have some issues) Once I found out that older Phenom IIs work (I've seen some people use Regor core Athlon X2s even) I'm thinking I will run an FX series (not set on which CPU) in my main system, and run just a dual headed system for server/gaming station with my current Phenom II X4 830. Will update if you guys want.

Posted on 2014-09-10 12:43:04
Michael F

I would make sure to allocate each VM a multiple of 2 cores and not to over-provision, so that the hypervisor can grant it a monopoly over a module. If 2 VMs were competing for one of the shared FPUs, that could really kill performance in games. In a server that's not really an issue because they don't really ever do FP calculations. The other concern is PCI bandwidth - you absolutely must have a 990FX chipset, and even then it will only have 4 PCIe 2.0 x8 slots (x16 physical, x8 electrical) so high end GPUs will be bandwidth starved.

Posted on 2015-12-03 03:25:42

Awesome computer, I wish I could afford it.

Posted on 2014-10-23 00:34:41
Setesh

In France we have Bull Bullion S x86 Servers.

This server can have 8 modules of 2-processor servers (they are stacked). It can have 24 TB of memory and 56 PCI-E 8x slots.

In fact you could make a 56-headed gaming platform out of the box.

http://www.bull.com/download/n...

Last point.... It's really too expensive....

Posted on 2014-12-24 07:34:55
inolen

Great article! You mentioned that a dual-Xeon setup could perhaps be more cost effective, which processors did you have in mind? From the numbers on cpubenchmark.net, it seems a dual Intel Xeon E5-2630 could perform similar, but I'm not at all familiar with how the performance on a dual CPU setup scales for real world work (in my case, building a 4 headed gaming rig).

Posted on 2014-12-30 07:29:14

Usually the thing you have to be careful of with high core count systems (whether single CPU or dual) is whether the software you are using will actually use all the cores. Sometimes you run into problems with software only using a single CPU instead of both, but usually you hit a core limit before that happens. However, since you will be dividing the cores among multiple virtual machines, it shouldn't be a problem.

A pair of E5-2630's should be about the same performance as what we used so it should work fine. What I would recommend doing is first determine your budget then figure out how many cores you want per virtual machine (for gaming 2-4 cores should be about right). After that, simply get the highest frequency CPU(s) you can with those two considerations in mind.

Posted on 2014-12-30 19:30:45
George

Hello! Happy New Year!!

You mention the AMD Radeon R7 250 GPU. Did you
actually try it? If yes, what were the results both for video and audio
via the HDMI output? Thank you

Posted on 2015-01-10 18:31:19
Robert Cipriani

I can't get mine to work, it's an XFX card. Running ESXi 6U2 on a DL380 G7 with an x16 PCI-E riser. I've tried Windows 7 and 8, 64 bit. The AMD drivers (I've tried older versions of Catalyst and the newest Crimson) will install, but Radeon Settings says that no AMD drivers are installed, I have no OpenGL, etc. I've tried just about everything, no luck so far. I bought the card based on this article...

Posted on 2016-04-07 18:19:25
Rob

What version VMs are y'all using (8, 9, 10)? Also, how are y'all managing the VMs - Do y'all have a vCenter running somewhere or are y'all able to manage via the full client connected directly to the ESXi host?

After enjoying the benefits of virtualization at work the past few years, I decided to do something along similar lines, started testing things on home setup, and ran across this article. The issues I'm seeing is that I'll need a separate machine for management, and will possibly need a vCenter, which isn't too cheap, even for the essentials (or requires you to keep redeploying the free trial, bleh).

Posted on 2015-01-15 06:09:01
nacer

how about turning geforce card into Quadro by hardware hacking .. any one tested it with esxi?

http://www.eevblog.com/forum/c...

Posted on 2015-01-31 10:39:33
Todd Sirois

Ever since I first really grasped what a VM was, meddling with
early VirtualBox builds I wanted to toss my HTPC, and two gaming systems
(one for me and the GF) and throw it all in one maintainable box.
Rather than dealing with conflicting software, I just "embed" a VM in a
working condition. Simply put, I don't have to worry if my printer
driver causes me to crash, or brings me out of a game, (as one
example.) I can leave one lean VM for games and only meddle with the
software that is relevant, further it is backed up by the underlying
kernel running the VM server. Essentially that becomes your BIOS, only
more easy to deal with and a lot harder to "brick."

Scale it back to something like the $300 i7 6-core w/ HT:

1 Gaming system 2x GFX cards in Xfire

reboot

2 Gaming systems, or

1 Gaming System and an HTPC image

The Linux OS running the HV would be able to perform routine and needed tasks like OS backups, anti-virus, etc. As impressive as your achievement was, I think it only seemed superfluous because of the hardware involved. Simple hardware that supports
virtualization need not be that expensive or the goals that lofty to make them inherently useful to average user. Just running on VMs alone simplifies moving images from one system to another. Throw in remote streaming services (splashtop, steam in-home-streaming) and you basically have wireless desktops to whatever display you want.

On mobile parts, a laptop can be hastily split into two workstations. CPU cores and especially memory are often under-utilized to the average user, at least efficiently, lets use them.

Posted on 2015-03-03 03:29:56
Frank Rizzo

I have this project on Hackaday:

https://hackaday.io/project/10...

6 heads with OSX & Android x86

Plus PCIE switching

Posted on 2015-03-21 14:15:32
L.A.B.

With a matrix video switch you could route the video and USB just about anywhere over thousands of feet. I'd love to set this up in a Crestron system, using the DM switch.

Posted on 2015-04-09 02:16:31
Robert Cipriani

How well would this work over RDP or another remote display protocol?

Posted on 2015-04-23 19:39:13
Steve J

Nice job. I've been working on a similar setup without using VMWare which runs multiple games in a single host with multiple GPUs. My approach is different from yours in many ways, and requires management software which I developed myself. I suppose using VMWare makes it more robust in the sense that its hypervisor is a proven technology (so to speak) and you don't have to worry about the details of it. Currently my setup runs four games with four GPUs. I just wanted to say you guys did a very fine job. Maybe someday we can have a discussion to talk about the differences between our approaches. Cheers.

Posted on 2015-08-09 01:45:51
Alexis Grassette

Would it be possible to instead of running 4 games at the same time to just have something like 2 games running at the same time plus running a media server for streaming purposes?

Posted on 2015-08-24 21:21:00

Sure, that shouldn't be a problem. Just configure it as a normal virtual machine (it shouldn't need a GPU) and it should work just fine.

Posted on 2015-08-24 21:51:15
EnKay Kay

Am I missing something here? I can see all the pass-through setup on ESXi, however I don't think it's possible to directly connect to VMs on the host terminal. Where are the client machines in this setup?

Posted on 2015-09-25 16:04:04
Виктор Ковыршин

Can you recommend an affordable card that can drive two 2560x1440 displays? They're MST compatible, so an older Radeon 6000 series card might work.
Should I get something like an Nvidia NVS 310 or a FirePro W2100?

Oh, and I'm gonna pack this into a pretty small case, so power consumption shouldn't be big.

Posted on 2015-10-28 01:53:24

GeForce cards do work with pass through on ESXI 6: I just tested it using a GeForce Titan Z and a Titan X installed on the same system, which allowed me to then share THREE GPU cards (the Titan Z is actually 2 Titan X's)... so I had 3 Titan X's pointing at 3 virtual machines. It was great, but then I tried something even BETTER: well, maybe not so much for gaming, but for really slick multimedia workstations. I use an i7 5960x, with 64GB of DDR4 3000mhz, overclocked to 4.5GHZ, and a RAID 0 consisting of an LSI MegaRaid .8i and a total of 8 Samsung 850 512GB SSDs -- this RAID 0 array pumps out blistering speed on a 12Gbps transfer bus... anyway, using Hyper-V Server 2008 I was able to use it to share my three GPUs with 30 virtual machines! I literally have 30 Windows 7 Enterprise 64-bit VMs with 2GB of RAM running on Hyper-V Server 2008 core, and all 30 virtual machines can each play a DVD movie simultaneously with zero frame drops, and the utilization is smartly clustered across the cards -- 1 server with 30 virtual multimedia machines... HOT!

Posted on 2015-11-30 11:52:58
habwich

Raytech70, did you directly passthrough a titan X and / or titan Z into a VM, and getting a display and actually game in a VM without getting an Error code 43 in Windows Device Manager? Because if what you say is true you may have found a way to dedicate a full Geforce graphics card passthrough into a VM, which is unheard of in ESXi. I have been trying this for years and gave up. Is this true? Can you make a video on youtube as proof of you gaming and showing off you sick build in a vm with a titan?

Posted on 2015-12-25 00:23:47
habwich

typo* your sick build in a vm with a titan?

Posted on 2015-12-25 00:24:44
Somethingsomething

Passthrough is possible to a Virtual Machine but some of the consumer grade GeForce cards don't support it. I believe Titan does (at least I have come across many people claiming it does) and I know for sure the Grid cards support it. This passthrough feature was released a few years ago mainly as a Horizon VIEW feature to allow high performance 3D support in a virtual machine for CAD, medical imaging and other use types that required 3D.

I tried passing through my GTX 970 to ESXi 6.0 and got the error code you discussed, but like I said I have come across a few people who were successful with the Titan line and AMD consumer cards.

Source: I am a VMware PreSales Engineer.

Posted on 2016-01-15 14:15:41
patpro

Titan GPU are off limit for me, too expensive. I'm really interested in the GTX 1070 that has just been announced. Feel free to share any news about NVidia GPU passthrough in ESXi 6 :)

Posted on 2016-05-07 06:54:55
Robert Cipriani

Specs:
DL380 G7 running ESXi 6.0.0 U2
VT-d and other virtualization related stuff is enabled in BIOS
Windows 7 64 bit VM (also tried 8.1)
8 GB ram allocated to VM and is "all locked"
Virtual hardware is version 11 but I also tried version 8
XFX Radeon R7 250 in an x16 riser card
Video and HDMI audio are passed through to the VM
pci hole and other recommended params set in the .vmx
VM is set to EFI boot

AMD drivers will install but don't work properly. Radeon Settings tells me that no AMD drivers are installed. If I go to Advanced Settings in the Screen Resolution dialog, it says the adapter is Microsoft Basic (in Windows 8), even though Device Manager shows the card as a Radeon R7 200 Series (and there's no yellow exclamation point or anything). Minecraft won't run, says no OpenGL. I've tried the latest Crimson and also Catalyst 14.4, same results.

Any suggestions? I bought this card after struggling to get a GeForce to work, and it's behaving essentially the same way. I've got a Raspberry Pi with the Horizon client waiting to connect to this thing :D, but it's driving me nuts.

Posted on 2016-04-07 18:27:39
Robert Cipriani

Hmmm...I should also point out that I'm trying to get the drivers set up via the VM console. Do I need a physical monitor attached for this to work? My ultimate goal is to use the Pi as a thin client with PCoIP. Could the lack of a real monitor be the issue?

Posted on 2016-04-07 18:42:19
Robert Cipriani

Update: Yes, not having a display physically attached to my R7 was a problem - the driver isn't active when there's no actual display. When I connect one I get output from the physical card, and the VMware SVGA console goes black. Horizon, however, wants to connect to that display instead of the Radeon. My next step is to attach a local USB keyboard/mouse to the server so I can log in, make sure 3D acceleration works, then disable the VMware console adapter. If that works, I can either just leave the monitor attached and try the Horizon client again (it doesn't even have to be powered on), try finding an EDID emulator (software or hardware), or try an Avocent KVM dongle on one of the DVI ports.

Posted on 2016-04-08 13:53:38
Robert Cipriani

Ok, so the Radeon, along with a passed-through USB controller and keyboard/mouse, works beautifully if I sit in front of the server. I cannot get Horizon View to connect to the GPU-accelerated display, even after disabling the VMware console. I just get a black screen in Horizon, and it eventually disconnects. I'm not even sure if this is possible; it's almost certainly not supported. I'm going to see if RemoteFX can be made to work with the physical GPU. I installed NoMachine as a test but wasn't impressed, and it borked RDP completely.

Posted on 2016-04-11 18:55:23
jon jon goufema liames zenbin

Are you from india?i dont believe in you...sorry..

Posted on 2016-10-11 02:27:10
Brandon Bridges

I would like to try something similar only for retro gaming purposes. I am thinking of getting an older Xeon with a motherboard that has multiple PCI-E and PCI slots and putting GPUs from different eras in it, in order to play games that normally you would never be able to play all on the same system. So I'd have a Windows 98 VM with a Voodoo5 PCI attached to it and a Windows 98 VM with one of the very first PCI-E cards attached to it, then a Windows XP machine with a more powerful GPU, then maybe a DOS VM with a 2D card and a Voodoo 1 or 2. Was curious if anyone has tried anything like this.

Posted on 2016-12-02 19:21:06