Everyday Virtualization

Storage Race Continues with 16 GFC

There's one thing we all know about virtualization: storage is your most important decision. I've said a number of times that making virtualization the core of my IT practice also meant learning a lot about shared storage technologies. As I've built up my storage practice to support my virtualization practice, one thing I've become good at is breaking down the details of transports for virtual machine storage systems.

That can include individual drive performance, usually gauged with measurements such as IOPS and rotational speed. Note that I didn't mention capacity, because capacity isn't a way to gauge performance. I also focus a lot on disk interconnect options, such as a SAS bus for the drives. Lastly, I spend some time considering the storage protocol in use. For virtual machines, I have used NFS, iSCSI and Fibre Channel over the years. I've yet to use Fibre Channel over Ethernet, which is different from Fibre Channel as we've known it.
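To make the rotational-speed point concrete, here's a back-of-envelope sketch of how spindle speed and seek time translate into random IOPS per drive. The RPM and seek-time figures are illustrative assumptions, not specifications for any particular drive.

```python
# Rough per-drive random IOPS estimate: 1 / (average seek + average rotational latency).
# The RPM and seek-time figures below are illustrative assumptions only.

def estimate_iops(rpm: float, avg_seek_ms: float) -> float:
    avg_rotational_latency_ms = (60_000 / rpm) / 2  # half a revolution, in milliseconds
    return 1000 / (avg_seek_ms + avg_rotational_latency_ms)

for rpm, seek_ms in [(7_200, 8.5), (10_000, 4.5), (15_000, 3.5)]:
    print(f"{rpm:>6} RPM, {seek_ms} ms seek: ~{estimate_iops(rpm, seek_ms):.0f} IOPS")
```

Notice that capacity never enters that math, which is exactly why it isn't a performance metric.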

When we design storage for virtual machines, many of these decision points influence the performance and supportability of the infrastructure. I recently took note of 16 Gigabit Fibre Channel (16 GFC) interfaces, in particular the Emulex LPe16000 series of devices (available in 1- or 2-port models). They caught my attention because, when we calculate the speed capabilities of a storage system for virtual machines, the storage protocol matters. The communication type is one decision (Fibre Channel or Ethernet); then the rate comes into play. Ethernet is honestly pretty simple, and it has a lot of benefits (especially in supportability).

Ethernet for virtual machines usually runs on 1 and 10 Gigabit networks; slower networks aren't really practical for data center applications. Fibre Channel networks are the mainstay in many data centers, and there are a lot of speeds available: 1, 2, 4, 8 and 16 Gig. It's important to note that Ethernet and Fibre Channel are materially different, and speed is only part of the picture. Fibre Channel carries SCSI commands natively over fiber optics, while the Ethernet options run over TCP/IP: iSCSI encapsulates SCSI commands, and NFS works at the file level.

Protocol wars aside, I do like the 16 GFC interfaces that are now readily available. Speed is measured differently for Fibre Channel and Ethernet: Ethernet is 10 Gigabit per port in most data center situations, while 16 GFC signals at 14.025 gigabaud, which works out to roughly 13.6 Gb/s of usable bandwidth per port after encoding overhead. So, per port, Fibre Channel now has a mainstream option faster than 10 Gigabit Ethernet. Of course, there are switch infrastructure and multipathing considerations, but those apply to both Fibre Channel and Ethernet.
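For a side-by-side look at the per-port arithmetic, here's a quick sketch that multiplies each transport's line rate by its encoding efficiency. The line rates and encoding schemes are the published ones for each standard, but this is raw link math only; it ignores protocol framing, switch hops and multipathing.

```python
# Per-port usable bandwidth: line rate (gigabaud) x encoding efficiency.
# Raw link arithmetic only; protocol framing and fabric effects are ignored.

ports = {
    "10 GbE": (10.3125, 64 / 66),  # 10 Gigabit Ethernet, 64b/66b encoding
    "8 GFC":  (8.500,   8 / 10),   # 8 Gb Fibre Channel, 8b/10b encoding
    "16 GFC": (14.025,  64 / 66),  # 16 Gb Fibre Channel, 64b/66b encoding
}

for name, (gbaud, efficiency) in ports.items():
    usable_gbps = gbaud * efficiency
    print(f"{name:>6}: ~{usable_gbps:.1f} Gb/s usable per port (~{usable_gbps * 1000 / 8:.0f} MB/s)")
```

The encoding change from 8b/10b to 64b/66b is also why 16 GFC can double the throughput of 8 GFC even though its line rate goes up by less than 2x.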

In my experience with larger virtualization infrastructures, I'm still a fan of Fibre Channel storage networks. I know storage is a topic people are passionate about, and this may be only a momentary milestone as 25, 40 and 100 Gigabit Ethernet technologies emerge.

What's your take on 16 GFC interfaces? Share your comments here.

Posted by Rick Vanover on 03/25/2013 at 3:33 PM

