Gaming with XDMA CrossFire Passthrough on Ubuntu 17.04
Posted on December 24, 2016
I have used Windows as my primary OS ever since I was a teenager because I am a gamer. Ironically, my gaming hobby also introduced me to my programming/Linux hobby (and now career), and the two are always at odds with one another. I can either game, which unfortunately requires Windows, or hack, which usually requires Linux. This meant I had to constantly dual boot between Windows and Linux or suffer through a virtual machine with poor video performance. Not ideal.
However, a few years ago the Linux Kernel Virtual Machine (KVM) project enabled near-native Windows guest performance by passing a host video card through to a guest. I was extremely interested in trying this out but was deterred by the lack of proof that SLI or CrossFire worked, since my other, other hobby is being a hardware enthusiast.
All that changed when I discovered a post authored by Duelist detailing that he had successfully got his XDMA Radeons to run in CrossFire! I gave it a try myself and found that documentation was sparse and mostly geared towards Arch Linux. I have a preference for Ubuntu and couldn’t really find anything modern that would help me. All in all, I was successful and am extremely happy with the performance of my box. Hopefully this guide will also help others that have multiple Radeons that they wish to CrossFire!
Enable VT-d virtualization in your BIOS and enable your integrated GPU.
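If you want to sanity-check this from the Linux side first, the cpu-checker package on Ubuntu provides kvm-ok, which reports whether KVM acceleration is usable:

```
sudo apt install cpu-checker
kvm-ok
# Should print: INFO: /dev/kvm exists
#               KVM acceleration can be used
```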
Ubuntu Setup
Install Ubuntu 16.10 or 17.04 if you want virt-manager, because they ship a more modern libvirt (2.1+) than Ubuntu 16.04 LTS.
After you have installed Ubuntu you will be running the stock kernel, which doesn’t load the VFIO (Virtual Function I/O) modules by default. We will need to enable them to allow us to pass full devices through to the guest machine.
Add the following to the bottom of the file:
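A minimal sketch, assuming the file in question is the standard Ubuntu /etc/modules:

```
# /etc/modules: kernel modules to load at boot time
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
```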
Next we will need to enable Intel IOMMU via the boot loader:
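Assuming the stock GRUB boot loader, that means appending intel_iommu=on to the kernel command line in /etc/default/grub and regenerating the config:

```
# In /etc/default/grub:
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash intel_iommu=on"

# Then apply it:
sudo update-grub
```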
Now we will need to identify the PCI-Bus IDs of the hardware devices that we wish to pass through. In this case I’m passing through my R9 290s and their HDMI audio devices which have PCI-Bus IDs 1002:67b1 and 1002:aac8.
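The -nn flag makes lspci print the [vendor:device] IDs. Note both the IDs and the bus addresses (01:00.0 and so on) for each GPU and its audio function, since we’ll need the bus addresses later:

```
lspci -nn | grep -iE 'vga|audio'
```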
Next, add these IDs to a special vfio.conf that is read when the vfio-pci module loads, replacing the PCI-Bus IDs with your own:
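Something along these lines, assuming the conventional /etc/modprobe.d/vfio.conf location:

```
# /etc/modprobe.d/vfio.conf
# vfio-pci will claim every device matching these vendor:device IDs
options vfio-pci ids=1002:67b1,1002:aac8
```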
Blacklist the open-source radeon driver so it can’t claim the cards before vfio-pci does. Add the following to the file:
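For example, assuming a new file at /etc/modprobe.d/blacklist-radeon.conf:

```
# /etc/modprobe.d/blacklist-radeon.conf
blacklist radeon
```

Then rebuild the initramfs so the blacklist and the vfio options both apply at early boot:

```
sudo update-initramfs -u
```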
Reboot your machine and verify that the video cards have been stubbed out via vfio-pci.
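One way to check, using the device IDs from earlier. The /dev/vfio group numbers below are illustrative; yours will differ, and they come back later in the libvirt section:

```
lspci -nnk -d 1002:67b1
#   Kernel driver in use: vfio-pci

ls /dev/vfio
#   1  2  vfio
```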
QEMU Setup
Next install the Open Virtual Machine Firmware (OVMF) and QEMU so that we can run a virtual machine.
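On Ubuntu both come straight from the archive:

```
sudo apt install qemu-kvm ovmf
```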
OVMF is a project to enable UEFI support for virtual machines. It’s the firmware needed to boot our Windows guest, and it needs a variable store that we’re going to create now:
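Ubuntu’s ovmf package ships a template variable store at /usr/share/OVMF/OVMF_VARS.fd. Copy it somewhere writable, one copy per VM; the destination name here is just my convention:

```
cp /usr/share/OVMF/OVMF_VARS.fd ~/win10_VARS.fd
```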
Next up, create your Windows image for the guest to use. If you’re interested in why these settings are used, this should help clear things up.
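For example; the raw format and the 100G size are assumptions, chosen because raw images tend to perform better than qcow2 for gaming:

```
qemu-img create -f raw ~/win10.img 100G
```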
QEMU Test Drive
Try running the VM using just QEMU after you alter the necessary fields to your machine’s specification: the number of cores/threads, the amount of memory, and the paths to the installation media. I wouldn’t suggest changing too much otherwise. Also, please note that Windows 10 will BSOD unless QEMU emulates a Core 2 Duo-class processor.
In this example I have also mounted the Windows virtio drivers, which you should download beforehand.
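What follows is the shape of the command rather than something to copy verbatim: the host bus addresses (01:00.0 and friends), memory, SMP layout, and every path are placeholders to replace with your own values:

```
# One unique ioh3420 PCI-e root port per GPU (root.1 and root.2), with each
# Radeon and its HDMI audio function placed behind its own port.
sudo qemu-system-x86_64 \
  -enable-kvm \
  -M q35 \
  -m 8192 \
  -cpu core2duo \
  -smp 4,sockets=1,cores=4,threads=1 \
  -drive if=pflash,format=raw,readonly,file=/usr/share/OVMF/OVMF_CODE.fd \
  -drive if=pflash,format=raw,file=/home/you/win10_VARS.fd \
  -device ioh3420,bus=pcie.0,addr=1c.0,multifunction=on,port=1,chassis=1,id=root.1 \
  -device ioh3420,bus=pcie.0,addr=1c.1,port=2,chassis=2,id=root.2 \
  -device vfio-pci,host=01:00.0,bus=root.1,addr=00.0,multifunction=on \
  -device vfio-pci,host=01:00.1,bus=root.1,addr=00.1 \
  -device vfio-pci,host=02:00.0,bus=root.2,addr=00.0,multifunction=on \
  -device vfio-pci,host=02:00.1,bus=root.2,addr=00.1 \
  -vga qxl \
  -drive file=/home/you/win10.img,format=raw,if=virtio \
  -drive file=/home/you/windows10.iso,media=cdrom \
  -drive file=/home/you/virtio-win.iso,media=cdrom
```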
If you look at the configuration above, the trick to getting CrossFire working is to create a unique PCI-e root port switch per GPU (i.e. ioh3420).
If you were able to run this command then you should have seen the TianoCore UEFI boot screen on your Radeons. If so, you successfully completed the first step of GPU passthrough. Unfortunately, this doesn’t mean things will work right away. You will need the QXL emulated video card as a substitute until everything works.
Install Windows on the Guest using QXL Video
I had to set up my guest at this stage because there is a bug that prevents Windows 7 guests from booting QEMU KVMs on libvirt. I installed Windows 7 first (and then upgraded to 8 and then 10) because I don’t own a Windows 10 retail copy and can still upgrade to 10 for free with a valid version of Windows 7 or 8. You might want to skip this stage and go directly on to the libvirt section to avoid having to re-activate your OS (it seems libvirt changes something that Windows’ activation system doesn’t like).
Fully install Windows 10 and get it working before you install the Radeon drivers. It just worked for me after this. If you run into a weird GPU issue, a reboot will usually fix the problem. Please note that I was also unable to get CrossFire working in Windows 7, so it’s likely that only Windows 8/10 are supported.
(Optional: 16.10 and above only) libvirt and virt-manager Setup
Now we’re going to get this working with libvirt and virt-manager in order to allow our VM to autostart on boot.
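On 16.10/17.04 the package names are still libvirt-bin and virt-manager:

```
sudo apt install libvirt-bin virt-manager
```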
Next up, we’re going to create an XML file that defines the libvirt domain.
Alter the following XML similarly to what you did above for the QEMU command line and paste it in. The necessary bits that load the unique PCI-e root port switches are passed straight through to QEMU in the qemu:commandline block at the bottom.
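A trimmed sketch of such a domain. The name, memory, paths, and host addresses are the same placeholders as before, not canonical values:

```
<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  <name>win10</name>
  <memory unit='GiB'>8</memory>
  <vcpu placement='static'>4</vcpu>
  <os>
    <type arch='x86_64' machine='q35'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/OVMF/OVMF_CODE.fd</loader>
    <nvram>/home/you/win10_VARS.fd</nvram>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <cpu mode='custom' match='exact'>
    <model fallback='allow'>core2duo</model>
    <topology sockets='1' cores='4' threads='1'/>
  </cpu>
  <devices>
    <emulator>/usr/bin/qemu-system-x86_64</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw'/>
      <source file='/home/you/win10.img'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <!-- QXL stays as the fallback display until passthrough works -->
    <graphics type='spice'/>
    <video>
      <model type='qxl'/>
    </video>
  </devices>
  <!-- The important part: unique ioh3420 root ports, passed straight
       through to QEMU exactly as on the command line above -->
  <qemu:commandline>
    <qemu:arg value='-device'/>
    <qemu:arg value='ioh3420,bus=pcie.0,addr=1c.0,multifunction=on,port=1,chassis=1,id=root.1'/>
    <qemu:arg value='-device'/>
    <qemu:arg value='ioh3420,bus=pcie.0,addr=1c.1,port=2,chassis=2,id=root.2'/>
    <qemu:arg value='-device'/>
    <qemu:arg value='vfio-pci,host=01:00.0,bus=root.1,addr=00.0,multifunction=on'/>
    <qemu:arg value='-device'/>
    <qemu:arg value='vfio-pci,host=01:00.1,bus=root.1,addr=00.1'/>
    <qemu:arg value='-device'/>
    <qemu:arg value='vfio-pci,host=02:00.0,bus=root.2,addr=00.0,multifunction=on'/>
    <qemu:arg value='-device'/>
    <qemu:arg value='vfio-pci,host=02:00.1,bus=root.2,addr=00.1'/>
  </qemu:commandline>
</domain>
```

Once saved, register it with virsh define win10.xml or import it through virt-manager.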
We will now need to edit the qemu configuration for libvirt.
Change the user that the virtual machine runs as to root, and also extend cgroup_device_acl to include the VFIO paths we found above.
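Assuming the stock /etc/libvirt/qemu.conf, the relevant settings end up looking like this; the /dev/vfio group numbers are mine, so substitute your own:

```
# /etc/libvirt/qemu.conf
user = "root"

cgroup_device_acl = [
    "/dev/null", "/dev/full", "/dev/zero",
    "/dev/random", "/dev/urandom",
    "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
    "/dev/rtc", "/dev/hpet",
    "/dev/vfio/vfio", "/dev/vfio/1", "/dev/vfio/2"
]
```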
Next we’ll need to modify the AppArmor profile for libvirt to give it access to our VFIO devices. The KVM Manage Ubuntu guide was useful here.
Add the following to the file, updating your VFIO paths as needed:
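Assuming the abstraction file lives at /etc/apparmor.d/abstractions/libvirt-qemu, the rules look like this (again, substitute your own IOMMU group numbers):

```
# /etc/apparmor.d/abstractions/libvirt-qemu
/dev/vfio/vfio rw,
/dev/vfio/1 rw,
/dev/vfio/2 rw,
```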
Restart AppArmor and libvirtd:
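On these Ubuntu releases the libvirt unit is still named libvirt-bin:

```
sudo systemctl restart apparmor
sudo systemctl restart libvirt-bin
```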
Try to start the Windows 10 guest now using virt-manager.
Connect to QEMU/KVM and hit the play button. It’s also somewhat convenient to do USB passthrough here.
If you notice things aren’t working, keep an eye on dmesg and then edit the AppArmor profile further to give libvirt additional access.
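For example, follow the kernel log in a second terminal while hitting play; AppArmor denials show up with a DENIED marker:

```
dmesg -w | grep -i denied
```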
Benchmark
My machine seems to take a minor 2 frames-per-second hit versus bare metal in benchmarks. If you would like to analyze my benchmark results further, take a look here: http://www.3dmark.com/compare/fs/11182166/fs/8229002
Windows 10 Guest Autostart
If everything was successful, you should now be able to autostart your virtual machine on boot with:
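Assuming the domain name from the XML above:

```
virsh autostart win10
```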