Windows Insider

The Poor Man's VMFS

Greg walks you through ESX v3.0's new, lower-cost options for connecting virtual machine storage: iSCSI and NFS.

How can you compress 200 servers down to 20? How can you automatically restart a failed server on a healthy hardware chassis? How can you go back in time with disk snapshots? The answer to all those questions is system virtualization. Impressive capabilities like that often come with an equally impressive price tag, though, and virtualization is no exception.

Historically, if you bought into the vision of VMware's Virtual Infrastructure, purchasing the VMware license was just the first step. You then had to buy the mandatory high-end server hardware, a storage area network (SAN) for housing the data, and the Host Bus Adapters (HBAs) and Fibre Channel switches that connect the servers to the SAN. It's easy to understand the return on investment for virtualizing your environment, but it's hard to accept the six-figure check you have to write just to get in the door.

Fortunately, VMware's Virtual Infrastructure 3, built around ESX Server v3.0, offers some new, less-expensive mechanisms for providing disk space to low-use, non-production virtual machines. If you've soured on the $2,000-per-server cost of the HBA cards required for SAN connectivity, you'll be glad to know that ESX v3.0 now supports IP-based connections to VMware's proprietary VMFS file system via iSCSI. You'll also want to learn more about this new version's support for datastores hosted on NFS shares.

How Much Do You Love iSCSI?
Why should this get you pumped up? Because while iSCSI is a universally accepted protocol that has been around for years, it's only now getting the recognition it deserves for its interoperability. VMware's iSCSI implementation supports a hardware-based connection using a special iSCSI card, but you can also install a software-based initiator on the VMware host. The latter approach lets you use the cheaper network cards already in your server to connect to remote disk resources. Just as Fibre Channel encapsulates SCSI commands inside a fibre connection, the iSCSI software initiator tunnels SCSI commands over regular Cat-5 cable through a run-of-the-mill gigabit network card.

The result isn't fast, and obviously the SAN you're attempting to connect to needs to support iSCSI. However, the ability to reuse existing network cards for disk connections can be a money saver for an ESX chassis that doesn't host high-performance applications. The connection to the virtual machine's disk will be slower over iSCSI, on the order of 1Gb/sec instead of Fibre Channel's typical 4Gb/sec. You can always team NICs together to wring out more performance, though.
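If you want to experiment with NIC teaming from the ESX service console, a minimal sketch looks something like the lines below. It assumes the vSwitch carrying your storage traffic is named vSwitch1 and that vmnic2 is the spare gigabit NIC; both names are placeholders for whatever your host actually reports.

esxcfg-vswitch -L vmnic2 vSwitch1   # link a second physical uplink to the vSwitch
esxcfg-vswitch -l                   # list vSwitches and confirm both uplinks are attached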

As of this writing, there are no hardware initiators that officially support ESX v3.0. In the forthcoming v3.1 release, though, there should be support for the QLogic QLA4050 adapter. To use this adapter to create a disk connection, first ensure that the QLogic adapter has an IP route to the iSCSI storage on your SAN, and that the SAN is configured to expose an iSCSI LUN via that interface. Then log in to your Virtual Infrastructure Client and select the ESX host. On the Configuration tab, select the Storage Adapters link and click to view the adapter properties.

Once that screen comes up, you'll need to enter the iSCSI Target Name, an alias for the link, and any necessary networking information such as IP addresses, gateway and DNS servers to route to the iSCSI disk target. Then click the Dynamic Discovery tab to initiate a discovery of the available disks. If you've enabled CHAP authentication (a must if you're using your production Ethernet network as your storage network), enter the assigned username and password to complete the connection. If you've done this correctly, the Storage Adapters screen in the Virtual Infrastructure Client should populate with the new LUNs. Finally, select the Storage link to create a new VMFS partition on the exposed LUN.
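If you'd rather handle that last step from the service console, you can format the new LUN with vmkfstools along these lines. This is only a sketch: the device path (vmhba40:0:0:1) and the volume label (iscsi-vmfs01) are placeholders, and it assumes a partition already exists on the LUN.

# create a VMFS3 volume labeled iscsi-vmfs01 on the LUN's first partition
vmkfstools -C vmfs3 -S iscsi-vmfs01 /vmfs/devices/disks/vmhba40:0:0:1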

Using the software-only initiator involves essentially the same configuration steps, except that you will connect the network cable to an unused gigabit NIC on the server instead of the special QLogic card.
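If you'd rather enable the software initiator from the service console than the GUI, a minimal sketch follows. The adapter name vmhba40 is what the software initiator typically registers as on ESX v3.0, but treat it as an assumption and confirm the name on the Storage Adapters screen first.

esxcfg-swiscsi -e       # enable the software iSCSI initiator on the host
esxcfg-swiscsi -q       # query the initiator to confirm it's enabled
esxcfg-rescan vmhba40   # rescan the software iSCSI adapter for newly exposed LUNs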

NFS for the Rest of Us
Got a lot of extra Windows and Linux direct-attached storage lying around? For those who don't have an iSCSI-compatible SAN, ESX v3.0 can also place virtual machine datastores on Unix- or Linux-hosted NFS shares. If you're a Windows shop, you can fudge an NFS share by using Microsoft Services for Unix (SFU). Either approach lets you put that excess Unix- and Windows-based direct-attached storage to work hosting virtual machines when performance isn't critical.

Configuring NFS on Unix/Linux or using Microsoft Services for Unix is fairly straightforward. If you want two servers at 10.0.0.1 and 10.0.0.2 to see an NFS share called /vmdata, create an export on an existing Unix server by adding the following line to /etc/exports:

/vmdata 10.0.0.1(rw,no_root_squash) 10.0.0.2(rw,no_root_squash)

The "rw" in the string above enables read and write access to the export. You also need to enable "no_root_squash" because of how Unix defaults to giving the root user the least amount of access to an NFS volume. Enabling "no_root_squash" reverses this behavior by granting UID 0 to the VMKernel.

After updating /etc/exports, you'll need to start or restart the NFS daemon. On many Linux systems that means entering either "service nfs start" or "service nfs restart," though the exact command and the export syntax may differ slightly based on the type of Unix you're using.
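On a typical Linux NFS server you can also push the updated exports live and verify them without a full restart; the two commands below are standard, though their names can differ on other Unix variants.

exportfs -ra            # re-read /etc/exports and re-export everything
showmount -e localhost  # list the exports the server is actually offering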

Figure 1. In order to connect ESX to an NFS share, you'll need the FQDN or IP address of the NFS server, the export name and a Datastore name of your choosing.

Once this is complete, create a VMkernel network port on a virtual switch inside the Virtual Infrastructure Client. To make the connection, you'll need the FQDN or IP address of the NFS server, the name of the export and a name for the Datastore you'll be creating. As before, once you've created the connection, the last step in the process is to add the new NFS Datastore.
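For the command-line inclined, a rough service-console equivalent might look like the sketch below. The vSwitch name, port group name, uplink NIC and Datastore label are all placeholders, and it assumes the /vmdata export from the earlier example lives on an NFS server at 10.0.0.50, with 10.0.0.1 as the ESX host's VMkernel address (matching the first entry in /etc/exports).

esxcfg-vswitch -a vSwitch1                                   # create a new virtual switch
esxcfg-vswitch -A VMkernel-NFS vSwitch1                      # add a port group for the VMkernel port
esxcfg-vswitch -L vmnic3 vSwitch1                            # attach a physical uplink to the vSwitch
esxcfg-vmknic -a -i 10.0.0.1 -n 255.255.255.0 VMkernel-NFS   # create the VMkernel interface on that port group
esxcfg-nas -a -o 10.0.0.50 -s /vmdata vmdata-nfs             # mount the export as a Datastore named vmdata-nfs
esxcfg-nas -l                                                # list NAS Datastores to confirm the mount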

The process is essentially the same when using Windows Services for Unix, but the clicks and commands necessary to create and enable the export are slightly different. Consult the SFU manual for specifics.
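If you want a starting point on the Windows side, SFU exposes NFS exports through its nfsshare command. The line below is only illustrative; the share name and path are placeholders, and the option syntax varies between SFU versions, so verify it against the documentation before relying on it.

rem share C:\vmdata as the NFS export "vmdata" with read-write access
nfsshare vmdata=C:\vmdata -o rw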

All in all, this added support for lower-performance disk subsystems only serves to increase VMware's palatability as a cost-effective data center solution. Though the up-front costs of moving to a virtualized data center are still high, the feature sets available to administrators, along with the added uptime gained by making the jump, will pay for those costs in a surprisingly short period of time.

About the Author

Greg Shields is Author Evangelist with PluralSight, and is a globally-recognized expert on systems management, virtualization, and cloud technologies. A multiple-year recipient of the Microsoft MVP, VMware vExpert, and Citrix CTP awards, Greg is a contributing editor for Redmond Magazine and Virtualization Review Magazine, and is a frequent speaker at IT conferences worldwide. Reach him on Twitter at @concentratedgreg.
