Take Five With Tom Fenton

Using NVMe Devices in Your Datacenter

Five use cases where it makes the most sense.

NVM Express (NVMe) devices, with their sub-millisecond latency and high bandwidth, are the best-performing commodity storage devices available in the datacenter today. Even though they're now readily available, their performance comes at a price, and they should be used judiciously.

Despite being more expensive than SATA and SAS SSDs, many shops are acquiring NVMe devices and starting to work with them in their datacenters. Below are five use cases for NVMe devices that you may want to begin with, to determine whether they're a worthwhile investment for your own datacenter.

  1. Server-side caching. Experienced NVMe users will likely agree that server-side caching is the most basic use case for the datacenter. RAM is limited and relatively expensive on a server, and an NVMe drive can act as a large and relatively inexpensive cache that can greatly improve the performance of a server's workload. Most modern hypervisors and OSes can use NVMe devices for basic server-side caching, and third parties offer add-on products that enable server-side caching with more advanced features.

  2. SDS storage caching. Many software-defined storage (SDS) solutions, such as VMware vSAN, can use NVMe devices directly (and others indirectly) to increase performance. The benefit of allocating an NVMe device to the SDS layer rather than to a generic server-side cache is that the SDS can make more intelligent use of the resource.

  3. NVMe PCIe as virtual machine (VM) storage. Red Hat, VMware, and Microsoft all allow NVMe devices to be formatted and used as primary storage for VMs. VMs that are stored on NVMe drives will have great performance; be aware, however, that using NVMe drives for a VM's primary storage will be wasteful in all but the most niche cases.

  4. Pass-through of NVMe devices to VMs. In some cases, you will want a VM to use an NVMe device directly, and a pass-through driver is a way to optimize this resource. With pass-through, the NVMe device is exposed directly to the VM, but the VM will need to run its own NVMe driver. Storage-intensive applications will get the biggest bang from this topology.

  5. Shared NVMe drives. While many users still believe that NVMe drives need to be plugged into the PCIe bus of the server that is using them, this is no longer the case. A newer technology, NVMe over Fabrics (NVMe-oF), allows a pool of NVMe drives to be aggregated, abstracted and shared within a datacenter. All the major hardware vendors are currently working on this technology, as are startups like Mangstor, Zstor, Apeiron and E8.
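To make the first use case concrete, here is one way server-side caching might look on a Linux host using lvmcache. This is a minimal sketch with hypothetical device names (/dev/sdb as a large, slow drive and /dev/nvme0n1 as the NVMe cache) and sizes; your hypervisor or OS will have its own equivalent.

```shell
# Hypothetical devices: /dev/sdb is a large HDD, /dev/nvme0n1 is the NVMe cache
pvcreate /dev/sdb /dev/nvme0n1
vgcreate vg_data /dev/sdb /dev/nvme0n1

# Main logical volume on the slow device, cache pool on the NVMe device
lvcreate -n data -L 900G vg_data /dev/sdb
lvcreate --type cache-pool -n cpool -L 100G vg_data /dev/nvme0n1

# Attach the NVMe-backed cache pool to the data volume
# (lvmcache defaults to writethrough mode)
lvconvert --type cache --cachepool vg_data/cpool vg_data/data
```

Reads and writes to vg_data/data now transparently land on the NVMe cache first, which is exactly the "large, inexpensive cache in front of slower storage" pattern described in use case 1.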
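The pass-through arrangement in use case 4 can be illustrated on a Linux/KVM host with the VFIO framework. The PCI address below is hypothetical; the same idea applies to other hypervisors through their own pass-through (e.g. DirectPath I/O) mechanisms.

```shell
# Hypothetical PCI address; find yours with: lspci -nn | grep -i nvme
DEV=0000:3b:00.0

# Load the VFIO driver
modprobe vfio-pci

# Detach the device from the host's nvme driver
echo "$DEV" > /sys/bus/pci/devices/$DEV/driver/unbind

# Bind it to vfio-pci so the hypervisor can hand the whole device to a VM
echo vfio-pci > /sys/bus/pci/devices/$DEV/driver_override
echo "$DEV" > /sys/bus/pci/drivers_probe
```

The VM then receives the device (for QEMU/KVM, via something like `-device vfio-pci,host=3b:00.0`) and, as noted above, must run its own NVMe driver to use it.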
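For use case 5, the open source nvme-cli tool gives a feel for how a host attaches to remote NVMe drives over a fabric. The target address and subsystem NQN below are placeholders for whatever your NVMe-oF target exports.

```shell
# Hypothetical target address and NQN -- substitute your fabric's values
TARGET=10.0.0.10

# Discover NVMe subsystems exported by the target over RDMA
nvme discover -t rdma -a $TARGET -s 4420

# Connect to a discovered subsystem by its NQN
nvme connect -t rdma -n nqn.2016-06.io.example:nvme-pool -a $TARGET -s 4420

# The remote namespace now appears as a local block device (e.g. /dev/nvme1n1)
nvme list
```

Once connected, the remote drive is used like any local NVMe device, which is what makes pooling and sharing NVMe across a datacenter practical.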

About the Author

Tom Fenton works in VMware's Education department as a Senior Course Developer. He has a wealth of hands-on IT experience gained over the past 20 years in a variety of technologies, with the past 10 years focused on virtualization and storage. Before re-joining VMware, Tom was a Senior Validation Engineer with The Taneja Group, where he headed their Validation Service Lab and was instrumental in starting up its vSphere Virtual Volumes practice. He's on Twitter @vDoppler.

Virtualization Review
