Hands-On With an NVMe Drive

Does the reality live up to the speed hype?

I've been bullish on NVM Express (NVMe) for the last year. Thanks to its efficient design, NVMe can transport more data with lower latency while consuming fewer CPU cycles than SATA or SAS. Even though the technology is still relatively new, NVMe drives are available from many vendors, and they're supported by Microsoft, Linux, VMware and other OSes.

Until recently, price and availability kept NVMe drives out of many datacenters, but over the past six months prices have dropped substantially and availability has increased to the point where many datacenter professionals are starting to consider deploying them. In this article, I'll detail my own experience installing a Micron NVMe drive in one of my servers.

For a primer on NVMe technology, you can reference my previous article on NVMe drives. And check out Trevor Pott's insightful article on how NVMe technology changed his datacenter.

Micron HHHL Format 9100 NVMe Drives
Micron was kind enough to supply me with a 9100 PRO Enterprise HHHL NVMe drive for this article. Micron NVMe drives have been getting good reviews from independent testers for their speed, reliability, availability and price. This drive has been spec'd out by Micron, and independently verified, to achieve a sequential read/write rate (128KB I/O size) of 2.05/0.69 GB/s, random read/write (4KB I/O size) of 525,000/50,000 IOPS, and random read/write latency of 120/30 microseconds.

It should be noted that the 800GB drive is the smallest-capacity and least performant drive in the 9100 line; the larger drives perform better. The largest drive is 3.2TB, and the most performant drive in the line can deliver 3.2/2.2 GB/s sequential read/write and 750,000/300,000 IOPS while maintaining the same 120/30 microsecond random read/write latency.
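As a quick back-of-the-envelope cross-check on these figures (my own arithmetic, using only the numbers quoted above), multiplying the random IOPS rating by the 4KB I/O size gives the implied throughput. For the 800GB drive, random reads work out to roughly the same bandwidth as the sequential read spec, while 4KB random writes fall well short of the sequential write rate:

```python
# Sanity check of the published 800GB 9100 PRO specs quoted above.
# Decimal units (1 GB = 1e9 bytes) are assumed throughout.

def iops_to_gbps(iops, io_size_bytes):
    """Convert an IOPS figure at a given I/O size to GB/s."""
    return iops * io_size_bytes / 1e9

read_gbps = iops_to_gbps(525_000, 4096)   # 4KB random reads
write_gbps = iops_to_gbps(50_000, 4096)   # 4KB random writes

print(f"4KB random read:  {read_gbps:.2f} GB/s (sequential spec: 2.05 GB/s)")
print(f"4KB random write: {write_gbps:.2f} GB/s (sequential spec: 0.69 GB/s)")
```

In other words, the drive appears bandwidth-limited on reads even at small block sizes, while small-block writes are IOPS-limited.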

I installed the NVMe drive into a Dell PowerEdge R610 to see if an older system would be able to take advantage of an NVMe drive. This system was running VMware ESXi 6.5.0, had two Xeon X5660 CPUs running at 2.80GHz (12 physical/24 logical processors), and 96GB of RAM. The R610 has three PCIe Gen2 slots: two x8 and one x4.

Figure 1. The NVMe drive in the ESXi host.

PCIe 2.0 signals at 5 GT/s per lane, which after 8b/10b encoding works out to roughly 500 MB/s of usable throughput per lane. Consequently, an eight-lane PCIe 2.0 connector should support an aggregate throughput of up to 4 GB/s, and an x4 slot up to 2 GB/s. In theory, then, the PCIe 2.0 x8 slots on the R610 should be able to handle the transfer rates of the NVMe drive.
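The headroom calculation is simple enough to sketch (my own arithmetic, assuming the standard 8b/10b overhead of PCIe 2.0; none of this comes from Dell or Micron documentation):

```python
# Approximate usable bandwidth of a PCIe 2.0 slot.
# PCIe 2.0 runs at 5 GT/s per lane; 8b/10b encoding puts 10 bits on the
# wire for every 8 bits of data, leaving ~500 MB/s usable per lane.

def slot_bandwidth_gbps(lanes, gt_per_s=5.0):
    """Usable PCIe 2.0 slot bandwidth in GB/s for a given lane count."""
    return lanes * gt_per_s * 8 / 10 / 8

for lanes in (4, 8):
    print(f"x{lanes}: ~{slot_bandwidth_gbps(lanes):.1f} GB/s")
```

The 800GB 9100's 2.05 GB/s peak sequential read fits comfortably within an x8 slot's ~4 GB/s, while an x4 slot's ~2 GB/s would be right at the limit.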

After powering down the R610 and inserting the Micron NVMe drive, I powered the system back on, and the ESXi host recognized the drive (Figure 1).

Figure 2. The NVMe drive, selected for the VMFS filesystem.

The NVMe drive was formatted using the VMware ESXi Web client. Using the wizard, the NVMe drive was selected (Figure 2), and a VMFS 6 filesystem was written to it (Figure 3).

Figure 3. The VMFS filesystem placed on the NVMe device.

After refreshing the Web client, the NVMe datastore appeared as a valid datastore (Figure 4).

Using the NVMe Datastore
Once a datastore was placed on the NVMe device, I was able to use it like any other datastore. I'll run more exhaustive tests on the NVMe drive in the future, but I did conduct a casual test: cloning a 100GB virtual machine (VM) and comparing how long it took against the same clone on an SSD and on an HDD.

The results were enlightening, if not unexpected. Cloning the 100GB VM took 2,639 seconds on a 7,200 RPM HDD, 552 seconds on an SSD, and only 156 seconds on the NVMe drive.
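Those timings translate into effective clone throughput as follows (simple arithmetic on the numbers above, assuming a decimal 100 GB = 1e11 bytes and ignoring any caching effects):

```python
# Effective throughput implied by the (admittedly casual) clone timings.
CLONE_BYTES = 100e9  # 100GB VM, decimal units assumed

timings_s = {"HDD (7,200 RPM)": 2639, "SSD": 552, "NVMe": 156}

for media, secs in timings_s.items():
    mb_per_s = CLONE_BYTES / secs / 1e6
    speedup = timings_s["HDD (7,200 RPM)"] / secs
    print(f"{media:>16}: {mb_per_s:6.0f} MB/s ({speedup:4.1f}x vs HDD)")
```

That works out to roughly 38 MB/s for the HDD, 181 MB/s for the SSD, and 641 MB/s for the NVMe drive: about a 17x speedup over spinning disk, and 3.5x over SATA flash.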

Figure 4. The NVMe datastore.

Again, this was a casual, informal comparison, but the results align with the performance measurements that other independent analysts and labs are reporting for NVMe drives.

A Skeptic No More
I dreaded this exercise because, in my experience, new technologies seldom deliver on their promises; moreover, they often require a high level of effort, and support from the vendor, to get into operation.

I was extremely relieved, for once, to have my skepticism and doubt proven wrong: I was able to power down my R610 server, plug in the Micron 9100 NVMe drive, have ESXi recognize it, and have it function correctly immediately after booting the system back up. I didn't have to fool around with drivers, configuration, jumper settings or any of the other myriad maladies that have caused me grief in the past.

My initial impression of the Micron 9100 NVMe drive using the VMware NVMe driver is that it's easy to install and, to my surprise, works just as advertised. With the big caveat that my tests were limited and off-the-cuff, I feel safe concluding that NVMe drives live up to their performance expectations.

About the Author

Tom Fenton has a wealth of hands-on IT experience gained over the past 30 years in a variety of technologies, with the past 20 years focusing on virtualization and storage. He currently works as a Technical Marketing Manager for ControlUp. He previously worked at VMware in Staff and Senior level positions. He has also worked as a Senior Validation Engineer with The Taneja Group, where he headed the Validation Service Lab and was instrumental in starting up its vSphere Virtual Volumes practice. He's on X @vDoppler.
