
The Birth of Faster Storage

NVMe is new, but has a lot of promise in the all-flash market.

I recently had the chance to attend a debriefing on Mangstor's NVMe over Fabric (NVMeF) all-flash array. The debriefing was hosted by Paul Prince, Mangstor CTO. Even though NVMeF is still in its infancy, I'm intrigued by the early implications for its use in the virtual datacenter. Let's take a look at how NVMeF came to be, then cover some current NVMeF products and use cases.

NVMe (shortened from NVM Express) is a standard protocol for SSD devices over a PCI Express (PCIe) bus. It came about because SAS and SATA interfaces were designed decades ago for spinning disks, and they simply can't efficiently handle the increased capabilities of SSD devices.

The NVMe interface was designed from the ground up for use with flash memory. Currently, there are two ways to connect an NVMe device to a PCIe bus: plugging it directly into a standard-sized PCI Express slot, or using a U.2 interface to connect 2.5-inch devices.

The U.2 interface allows SSD devices to use standard drive caddies that can be swapped without opening up a server's case, which is otherwise required for a device that plugs directly into a PCIe slot. NVMe devices can work with -- and are currently shipping with -- native drivers for Linux, Windows and VMware. (I've previously written about NVMe technology.)
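On a Linux host, an NVMe device shows up the same way whether it sits in a PCIe slot or a U.2 bay. As a quick illustration, here's a minimal Python sketch (my own, not any vendor's tooling) that enumerates NVMe controllers through the sysfs entries the native Linux driver exposes:

```python
# Minimal sketch: enumerate NVMe controllers via Linux sysfs.
# Assumes a Linux host with the native nvme driver loaded; devices
# in a PCIe slot and in a U.2 bay both appear under /sys/class/nvme.
from pathlib import Path

def read_attr(dev: Path, name: str) -> str:
    attr = dev / name
    return attr.read_text().strip() if attr.exists() else "n/a"

def list_nvme_controllers() -> None:
    sysfs = Path("/sys/class/nvme")
    if not sysfs.exists():
        print("No NVMe controllers found (is the nvme driver loaded?)")
        return
    for dev in sorted(sysfs.iterdir()):
        print(f"{dev.name}: model={read_attr(dev, 'model')}, "
              f"serial={read_attr(dev, 'serial')}")

if __name__ == "__main__":
    list_nvme_controllers()
```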

Why NVMeF?
NVMeF allows a datacenter to aggregate and abstract a pool of NVMe flash storage. This pool can be delivered in its native form or via a front-end file server. I like to think of NVMeF all-flash arrays as "just a bunch of flash," or JBOF.

A pool of NVMe flash can be carved out for use by different servers, which are no longer constrained by the amount of NVMe storage that can be physically housed in a server. With the right architecture and software, enterprise features -- high-availability, failover, deduplication and thin provisioning, for example -- can be incorporated. Figure 1 shows an example of how NVMe connects to servers.
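To make the carving idea concrete, here's a hypothetical Python sketch of a thin-provisioned pool allocator. It illustrates the concept only; the class and method names are invented and don't correspond to any shipping product's API:

```python
# Hypothetical sketch of "carving" a shared NVMe flash pool into
# thin-provisioned volumes for individual servers. Concept only;
# all names here are invented, not any vendor's API.
class FlashPool:
    def __init__(self, capacity_gb: int):
        self.capacity_gb = capacity_gb
        self.volumes: dict[str, int] = {}  # server -> provisioned GB

    def provisioned_gb(self) -> int:
        return sum(self.volumes.values())

    def carve(self, server: str, size_gb: int) -> None:
        # Thin provisioning: the total provisioned size may exceed
        # physical capacity, since volumes rarely fill up all at once.
        self.volumes[server] = self.volumes.get(server, 0) + size_gb
        print(f"Presented {size_gb} GB to {server} "
              f"({self.provisioned_gb()} GB provisioned of "
              f"{self.capacity_gb} GB physical)")

pool = FlashPool(capacity_gb=32_000)    # e.g., a 32TB JBOF
pool.carve("oltp-host-01", 8_000)
pool.carve("analytics-host-02", 16_000)
pool.carve("dw-host-03", 16_000)        # over-provisioned; fine when thin
```

The over-provisioning on the last line is the point: a thin-provisioned pool can promise more logical capacity than physically exists, reclaiming the slack that stranded per-server SSDs would otherwise waste.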

In September 2014, NVM Express Inc. members proposed a protocol standard for NVMeF that would enable remote access to NVMe devices over RDMA fabrics, eliminating the latency of the TCP/IP network stack. Later, Mellanox, the InfiniBand and Ethernet interconnect behemoth, joined the technical working group and is currently helping to shape the standard. As of June 2016, the standard is about 90 percent complete. This progress has allowed companies to start working on actual NVMeF arrays.

Figure 1. An example of how NVMeF can connect to a server.
NVMeF Products
There are several emerging NVMeF array and component products. EMC (with its DSSD D5), Mangstor and Zstor are established companies working on NVMeF arrays, while Apeiron and E8 are in the early development stages of theirs. X-IO Technologies is taking a different approach with its Axellio system, selling it through OEMs, ODMs and system integrators.

NVMeF arrays can't stand alone, though; the connecting servers need RDMA-capable network adapters to enable data transfer. Mellanox Technologies' ConnectX-3 Pro and ConnectX-4 product families implement RDMA over Converged Ethernet (RoCE), Chelsio's adapters support iWARP, and QLogic's support both RoCE and iWARP.
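To give a feel for what attaching fabric-hosted NVMe storage looks like from a Linux initiator, here's a hedged sketch that shells out to the standard nvme-cli tool; the address, port and subsystem NQN are placeholders, and it assumes an nvme-cli build with fabrics support plus an RDMA-capable NIC:

```python
# Sketch: attach a remote NVMeF namespace from a Linux initiator by
# shelling out to nvme-cli. The target address and NQN below are
# placeholders; 'rdma' covers both RoCE and iWARP transports.
import subprocess

def connect_nvmef(traddr: str, nqn: str, trsvcid: str = "4420") -> None:
    subprocess.run(
        ["nvme", "connect",
         "-t", "rdma",    # transport type (RoCE or iWARP fabric)
         "-a", traddr,    # target IP address
         "-s", trsvcid,   # transport service ID (port)
         "-n", nqn],      # NVMe Qualified Name of the remote subsystem
        check=True,
    )

connect_nvmef("192.0.2.10", "nqn.2016-06.io.example:jbof-pool-1")
```

Once connected, the remote namespace shows up as an ordinary local block device (such as /dev/nvme1n1), which is much of NVMeF's appeal: the fabric is invisible to applications.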

Use Cases
Although NVMeF is still in its infancy, many use cases are being explored that exploit the huge amount of fast, efficient NVMe storage NVMeF arrays can deliver. For example, there are big potential benefits in having a pool of very fast storage available for latency-sensitive applications. I can also see how online transaction processing (OLTP), data warehousing and analytics-intensive applications would benefit greatly from an enormous pool of reliable, low-latency storage.

The verticals that will have the most interest in, and likely be the early adopters of, this technology will be the usual suspects: media and entertainment, energy exploration and the financial sector.

Mangstor NX6320 NVMe
As near as I can tell, Mangstor was the first to market with an NVMeF all-flash array. Its NX6320 aggregates expensive NVMe resources into a single pool that can then be divided among many servers. The benchmark numbers Mangstor has released (which have been verified by reliable third parties) show class-leading performance, and the array is priced comparably, per unit of capacity and performance, with non-NVMeF storage solutions. Administrators can centrally manage and service their NX6320 arrays through Mangstor's TITAN storage software. TITAN can also integrate with OpenStack Cinder, and it optimizes the data path to decrease latency and increase performance.

Under the hood of the NX6320 array is a Dell PowerEdge R730 server, which can host 8, 12, 16 or 32TB of NVMe storage using Mangstor's own MX6300 NVMe SSDs. Mangstor currently supports the following OSes: RHEL, SLES, CentOS, Ubuntu, Windows, and VMware ESXi 5.5/6.0 via VMDirectPath. To deliver the bits to the servers, Mangstor supports RoCE, InfiniBand and iWARP.

A Promising Start
NVMeF may be the next big thing in storage technology, and the Mangstor NX6320 2U all-flash array is proving out in the real world that NVMeF is more than a concept, delivering outstanding performance over fabric to disparate physical hosts. NVMeF is still in its early days: its standard isn't yet finalized, and driver support is limited. I'll be very interested to watch how this category develops over the next few years.

About the Author

Tom Fenton has a wealth of hands-on IT experience gained over the past 30 years in a variety of technologies, with the past 20 years focusing on virtualization and storage. He currently works as a Technical Marketing Manager for ControlUp. He previously worked at VMware in Staff and Senior level positions. He has also worked as a Senior Validation Engineer with The Taneja Group, where he headed the Validation Service Lab and was instrumental in starting up its vSphere Virtual Volumes practice. He's on X @vDoppler.
