Sean's Virtual Desktop
How To Install an NVIDIA vGPU in ESXi Hosts
Deliver better graphics to your virtual desktop infrastructure users.
After reading about what NVIDIA GRID vGPU can do in a recent article, you may think that it would be difficult to install and configure. But while installing the GRID vGPU manager software requires a little vSphere command-line comfort, it's not as difficult as setting up other methods of virtual desktop 3D acceleration.
There are three components needed to configure NVIDIA GRID vGPU on a vSphere 6 host:
- The NVIDIA GRID vGPU Manager software. (More on that below.)
- An SSH client to access the vSphere host command line.
- Either the vSphere Client, the Web Client, or an SCP program to upload the NVIDIA software to the host.
Downloading the Software
Before configuring an ESXi 6.0 host to utilize vGPU, the NVIDIA software needs to be retrieved from the NVIDIA site and put in a location where all hosts can access it. This can be the local /tmp directory on a host or a VMFS datastore accessed by multiple hosts.
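Either location is reachable from the ESXi shell, since datastores are mounted under /vmfs/volumes. A quick sketch (the datastore name below is a placeholder):

```shell
# List the datastores mounted on this host; each appears by name and by UUID
ls /vmfs/volumes/
# A shared VMFS datastore (hypothetical name) and the host's local /tmp directory
ls /vmfs/volumes/shared-datastore/
ls /tmp/
```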
The software package for NVIDIA GRID vGPU is available from the NVIDIA website, and this link will take you to the page to accept the license agreement and download the software.
The download is a ZIP file that contains the VMware components, Windows components, and a "getting started" guide in PDF form. You'll need to extract the Windows driver installer for your version of Windows, as well as the NVIDIA VIB file. The VIB file contains the GRID drivers and management software for the ESXi host, and this is what you'll need to upload.
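As a sketch, unpacking the download on a workstation might look like this (the ZIP filename here is a placeholder; actual names vary by GRID release):

```shell
# Extract the NVIDIA GRID package and locate the ESXi VIB file
unzip NVIDIA-GRID-vSphere-6.0-package.zip -d grid-package/
# The VIB is what gets uploaded to the host; the Windows drivers stay on the desktop side
ls grid-package/*.vib
```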
Uploading to the ESXi Host
Once you've extracted the VIB file, it needs to be placed in a spot that the ESXi hosts can access. There are a couple of methods for doing this:
- Upload the VIB file to a shared datastore using the C# or vSphere Web Client. This is the preferable option if you plan to install GRID vGPU on multiple hosts.
- Upload the VIB to a local datastore using the C# or vSphere Web Client. This option is nice if you're deploying to a single host.
- Upload the VIB to a local folder on the ESXi installation volume, such as /tmp. This requires SSH to be enabled on the host before uploading the file, along with a file transfer application that supports the SCP protocol, such as WinSCP.
Installing GRID vGPU
Once the file's been uploaded, you can start installing the GRID vGPU software on your ESXi hosts. This is a multi-step process, and most of it is done from the ESXi command-line shell. SSH access is normally disabled, but before you enable it, you need to put the host into maintenance mode
and either move all VMs to another host or shut them down.
Once the host is in maintenance mode, you'll need to enable SSH on each ESXi host that you'll be installing the GRID vGPU software on. You can do this either from the Troubleshooting Options menu in the DCUI or from the Services list in the host's Security Profile section of the C# or Web Client.
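If you already have console or shell access, the same preparation can be done from the command line, and the VIB upload itself can then run from a workstation. The host name and VIB filename below are placeholders:

```shell
# On the ESXi host (DCUI console or an existing shell session):
esxcli system maintenanceMode set --enable true   # enter maintenance mode
vim-cmd hostsvc/enable_ssh                        # start the SSH service

# From a workstation, upload the VIB over SCP (WinSCP does the same via a GUI):
scp NVIDIA-vGPU-Host_Driver.vib root@esxi01.example.com:/tmp/
```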
You'll then need to remote into the host and take the following steps:
- Change directories to the folder where you uploaded the VIB file. If the VIB file was uploaded to a VMFS datastore, enter the following command:
cd /vmfs/volumes/[name of datastore]/
- Type ls to list the contents of the directory. Verify that the NVIDIA VIB file is listed.
- Type in
esxcli software vib install -v /[full path to VIB file]/NVIDIA[version].vib. You can use tab to autocomplete folder and file names.
- If successful, you'll see the driver listed in the VIBs installed section (shown in Figure 1).
- Reboot the system.
- Re-enable SSH and connect back into the system.
- Verify that vGPU components are installed and functioning properly with the
nvidia-smi command. This should show all GRID GPUs and their current temperatures (Figure 2).
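Taken together, the installation steps above might look like this in an SSH session (the datastore name and VIB filename are placeholders for your environment):

```shell
# Change to the datastore where the VIB was uploaded and confirm it's there
cd /vmfs/volumes/shared-datastore/
ls
# Install the vGPU Manager VIB (tab completion helps with the long filename)
esxcli software vib install -v /vmfs/volumes/shared-datastore/NVIDIA-vGPU-Host_Driver.vib
# Reboot the host to load the driver
reboot
# After the host is back up and SSH is re-enabled:
esxcli software vib list | grep -i nvidia   # the driver should appear in the list
nvidia-smi                                  # shows each GRID GPU and its temperature
```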
In the next article in this series, you'll configure VMs and View desktop pools to utilize GRID vGPU.
Sean Massey is a systems administrator from Appleton, Wisc. He blogs about VDI, Windows PowerShell, and automation at http://seanmassey.net, and is active on Twitter as @seanpmassey.