Take Five With Tom Fenton

Using NVMe Devices in Your Datacenter

Five use cases where it makes the most sense.

NVM Express (NVMe) devices, with their sub-millisecond latency and high bandwidth, are the best-performing commodity storage devices available in the datacenter today. Even though they're now readily available, their performance comes at a price, and they should be used judiciously.

Despite being more expensive than conventional SATA and SAS SSDs for general use, many shops are acquiring NVMe devices and starting to work with them in their datacenters. Below are five use cases you may want to begin with, to determine whether they're a worthwhile investment for your own datacenter.

  1. Server-side caching. Experienced NVMe users will likely agree that server-side caching is the most basic use case for the datacenter. RAM is limited and relatively expensive on a server, and an NVMe drive can act as a large, relatively inexpensive cache that can greatly improve the performance of the workloads on a server. Most modern hypervisors and OSes can use NVMe devices for basic server-side caching, and third parties offer add-on products that enable server-side caching with more advanced features (a minimal sketch of the caching idea appears after this list).

  2. SDS storage caching. Many software-defined storage (SDS) solutions, such as VMware vSAN, can use NVMe devices directly (and others indirectly) to increase performance. The benefit of dedicating an NVMe device to the SDS layer, rather than to generic server-side caching, is that the SDS can make more intelligent use of the resource.

  3. NVMe PCIe as virtual machine (VM) storage. Red Hat, VMware, and Microsoft all allow NVMe devices to be formatted and used as primary storage for VMs. VMs stored on NVMe drives will perform extremely well; be aware, however, that using NVMe drives as a VM's primary storage is wasteful in all but the most niche cases.

  4. Pass-through of NVMe devices to VMs. In some cases, you will want a VM to use an NVMe device directly, and a pass-through driver is the way to optimize this resource. With pass-through, the NVMe device is exposed directly to the VM, but the VM needs to run an NVMe driver in its guest OS. Storage-intensive applications will get the biggest bang from this topology (a quick check from inside a Linux guest is sketched after this list).

  5. Shared NVMe drives. While many users still believe that NVMe drives need to be plugged into the PCIe bus of the server that uses them, this is no longer the case. A newer technology, NVMe over Fabrics (NVMe-oF), allows a pool of NVMe drives to be aggregated, abstracted and shared within a datacenter. All the major hardware vendors are currently working on this technology, as are startups such as Mangstor, Zstor, Apeiron and E8 (a sketch of discovering and connecting to an NVMe-oF target follows this list).
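
To make the server-side caching idea in use case No. 1 concrete, below is a minimal, hypothetical Python sketch of a read-through block cache: reads are served from a directory on an NVMe-mounted filesystem when possible and fall back to a slower backing file otherwise. The mount points, file names and block size are assumptions for illustration only; in practice this job is handled by the hypervisor, the OS or an add-on caching product.

    import os
    import hashlib

    BLOCK_SIZE = 64 * 1024                      # cache granularity (assumption)
    CACHE_DIR = "/mnt/nvme_cache/blocks"        # directory on an NVMe-backed filesystem (assumption)
    BACKING_FILE = "/mnt/slow_store/data.img"   # slower primary storage (assumption)

    def _cache_path(block_no: int) -> str:
        # One small file per cached block, named by a hash of the block number.
        name = hashlib.sha1(str(block_no).encode()).hexdigest()
        return os.path.join(CACHE_DIR, name)

    def read_block(block_no: int) -> bytes:
        """Return one block, preferring the NVMe cache over the backing store."""
        path = _cache_path(block_no)
        if os.path.exists(path):                # cache hit: served at NVMe latency
            with open(path, "rb") as f:
                return f.read()
        with open(BACKING_FILE, "rb") as f:     # cache miss: read from slow storage
            f.seek(block_no * BLOCK_SIZE)
            data = f.read(BLOCK_SIZE)
        os.makedirs(CACHE_DIR, exist_ok=True)   # populate the cache for the next read
        with open(path, "wb") as f:
            f.write(data)
        return data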
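
For pass-through (use case No. 4), a quick sanity check is to confirm that the guest's own NVMe driver actually sees the device. The sketch below assumes a Linux guest and reads the standard sysfs entries that the kernel's NVMe driver populates; it is illustrative only.

    import os

    SYSFS_NVME = "/sys/class/nvme"   # populated by the Linux NVMe driver

    def _read(path: str) -> str:
        try:
            with open(path) as f:
                return f.read().strip()
        except OSError:
            return "unknown"

    def list_nvme_controllers():
        """Print the model and serial of each NVMe controller the guest can see."""
        if not os.path.isdir(SYSFS_NVME):
            print("No NVMe driver/devices visible in this guest.")
            return
        for ctrl in sorted(os.listdir(SYSFS_NVME)):
            base = os.path.join(SYSFS_NVME, ctrl)
            print(f"{ctrl}: model={_read(os.path.join(base, 'model'))} "
                  f"serial={_read(os.path.join(base, 'serial'))}")

    if __name__ == "__main__":
        list_nvme_controllers()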
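
For shared NVMe drives (use case No. 5), a Linux initiator typically discovers and attaches NVMe-oF targets with the nvme-cli tool. The sketch below simply wraps two common nvme-cli invocations (nvme discover and nvme connect) from Python; nvme-cli is assumed to be installed, and the transport, target address, port and subsystem NQN shown are placeholders to be replaced with your fabric's real values.

    import subprocess

    # Placeholder fabric details -- replace with your environment's values.
    TRANSPORT = "tcp"                  # could also be "rdma" or "fc", depending on the fabric
    TARGET_ADDR = "192.0.2.10"         # example/documentation address
    TARGET_PORT = "4420"               # conventional NVMe-oF service port
    SUBSYS_NQN = "nqn.2014-08.org.example:shared-nvme-pool"  # hypothetical NQN

    def discover_targets():
        """List NVMe-oF subsystems advertised by the discovery controller."""
        subprocess.run(
            ["nvme", "discover", "-t", TRANSPORT, "-a", TARGET_ADDR, "-s", TARGET_PORT],
            check=True)

    def connect_target():
        """Attach the remote subsystem; it then appears as a local /dev/nvme* device."""
        subprocess.run(
            ["nvme", "connect", "-t", TRANSPORT, "-a", TARGET_ADDR,
             "-s", TARGET_PORT, "-n", SUBSYS_NQN],
            check=True)

    if __name__ == "__main__":
        discover_targets()
        connect_target()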

About the Author

Tom Fenton has a wealth of hands-on IT experience gained over the past 30 years in a variety of technologies, with the past 20 years focusing on virtualization and storage. He currently works as a Technical Marketing Manager for ControlUp. He previously held Staff- and Senior-level positions at VMware. He has also worked as a Senior Validation Engineer with The Taneja Group, where he headed the Validation Service Lab and was instrumental in starting up its vSphere Virtual Volumes practice. He's on X @vDoppler.
