ESXi Upgrade on Home Server
My home computer setup was starting to show its age. My previous setup was a pair of Core2Duo-class systems, each with 4 GB of DDR2 RAM. One functioned as a server running ESXi 5, and the other as a gaming rig with a pretty slick ATI Radeon 6850. The gaming rig did just fine for most modern games with that video card, but the ESXi server was over-taxed when trying to run anything more than a couple of small Linux VMs.
I read about folks having success with "whitebox" ESXi servers at thehomeserverblog.com, and figured I'd follow suit. I purchased a Gigabyte 970A-UD3P motherboard, an AMD FX-8350 CPU, and 32 GB of G.Skill DDR3 RAM from Newegg. In retrospect I probably should have forked over the extra dough for the 990 chipset, but c'est la vie.
Once I had everything hooked up and my VMs migrated over, I noticed that my CPU and memory stats were nearly idle, even after installing a couple of Windows 2k8 R2 VMs that had struggled before. I looked over the thehomeserverblog.com article again; it talks about using AMD IOMMU passthrough to hand the physical video card over to a VM for desktop-like performance. I thought, "With all this overhead, could I virtualize my gaming rig?"
It turns out it's not as easy as it looks. IOMMU (and the matching Intel feature, VT-d) is neither widely supported nor well documented. I figured it should be as easy as enabling the feature in the BIOS and passing the device through. Ha ha...nope.
While configuring IOMMU passthrough in ESXi is no challenge in and of itself, getting the VM to recognize and use it is something else.
I started out simply enough with a spare video card (an Nvidia GeForce 210) and passed it through. The VM saw it in the hardware list, and I could install the drivers for it, but it was never able to actually use it. It's been a few days now, so I don't remember the exact error code, but try as I might it never worked. I decided that since thehomeserverblog.com had used ESXi 5.0 instead of the latest 5.5, I might start there. I had also read in numerous places about broken IOMMU support in ESXi 5.1.
I reverted my ESXi box to 5.0 and installed a fresh Windows 7 VM with 2 GB of RAM. This RAM value is important, as you shall see later. I also moved the ATI Radeon 6850 from the gaming rig to the server to give this thing the best possible shot. With my fresh install done and VMware Tools installed, the VM could indeed see the physical card. Now, could it use it? After installing the latest ATI drivers, YES! It could!
Ok, onto the next steps. 2 GB of RAM isn't going to get me very far, so I upped it to 4 GB. Uh oh! VMware spits out an error saying it needs a parameter "pciHole.start=XXXX" inserted into the .vmx file or it can't start the VM. As it turns out, the number they give you is freaking wrong! I searched around and around the interwebs for someone doing the same thing as me and found this VMware community post: communities.vmware.com. If you read down the forum a while, it mentions that to get it to work correctly with >= 4 GB of RAM you actually need two parameters in your .vmx file:
- pciHole.start=1200
- pciHole.end=2200
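For reference, a minimal sketch of what the relevant lines in the .vmx file might look like with those parameters added (the memsize value and the passthrough entry are illustrative; only the two pciHole lines come from the forum post):

```ini
; Excerpt from the VM's .vmx file -- illustrative, not a complete config.
memsize = "4096"
pciPassthru0.present = "TRUE"
; The two lines below are what let the VM boot with >= 4 GB of RAM:
pciHole.start = "1200"
pciHole.end = "2200"
```

The .vmx file lives alongside the VM's other files in its datastore folder; edit it while the VM is powered off so ESXi doesn't overwrite your changes.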
I threw those in the .vmx file and voila! It works! I rebooted it a few times and finally settled on 4 CPU cores and 16 GB of RAM for said Windows box. I quickly installed a copy of Battlefield 3 and gave it a whirl. (A side note: I also passed through a 4-port USB 2 PCI card in addition to the graphics card, so the keyboard, mouse, etc. are all "native" to the VM.) The game played just fine with no appreciable lag. It also helps that the Windows .vmdk disk files reside on SSD storage. :-)
Now that I had a functional example, how would I get my already-installed games into the VM? Enter VMware Converter. This is a free download that can be used to migrate a live machine into a VMware environment. I shut down my clean VM and ran the converter on my live gaming rig. Once it was in ESXi, I removed the PCI passthrough from the clean VM, attached it to the migrated one, added the pciHole parameters, and booted it up. Sweet! It works! Since the migrated VM already had the drivers installed, the card was recognized immediately.
So far I've played a bit of Battlefield 3 and Bulletstorm, and checked a few other games, with no problems whatsoever. I'm pleased that I'll be able not only to make everything faster, but also to reduce the number of boxes under my desk.