Everyday Virtualization


5 Design Considerations for vSphere 5

With vSphere 5 announced, a number of new features and requirements may prompt the typical vSphere administrator to revisit the design of the vSphere installation. The new features are paving the way for even more efficient virtual environments, but the new licensing model has been discussed as much as, if not more than, the features themselves. The new pricing model accounts for the amount of RAM provisioned to powered-on virtual machines (vRAM). Both the new features and the new licensing paradigm change how we go about designing vSphere environments. Here are some tactical tips to consider:

1. Consider large cluster sizes.
The ceiling of 48 GB of vRAM per CPU at Enterprise Plus (with lower entitlements at the lower editions) works best when pooled together. The vRAM entitlement is pooled across all licensed CPUs, and the RAM allocated to powered-on virtual machines is what counts against that pool. With a larger vSphere cluster, more CPUs contribute to the vRAM pool. Historically, many administrators found themselves stopping at eight hosts for production cluster sizes, and it was a good number; with pooled vRAM, larger clusters become more attractive.
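The pooling math above can be sketched quickly. This is a back-of-the-envelope illustration, not a VMware tool; the 48 GB/CPU figure is the Enterprise Plus entitlement mentioned above, while the host counts and VM allocations are hypothetical:

```python
# Sketch: compare a cluster's pooled vRAM entitlement against the vRAM
# consumed by powered-on VMs. Host/VM numbers below are hypothetical.

VRAM_PER_CPU_GB = 48  # Enterprise Plus entitlement per licensed CPU


def pooled_entitlement_gb(cpus_per_host, host_count):
    """Total vRAM pool contributed by every licensed CPU in the cluster."""
    return cpus_per_host * host_count * VRAM_PER_CPU_GB


def vram_consumed_gb(vm_allocations_gb):
    """vRAM counted against the pool: RAM allocated to powered-on VMs."""
    return sum(vm_allocations_gb)


# A hypothetical 8-host cluster with 2 CPUs per host:
pool = pooled_entitlement_gb(cpus_per_host=2, host_count=8)  # 768 GB pool
used = vram_consumed_gb([8] * 60 + [16] * 10)                # 640 GB consumed
print(f"pool={pool} GB, consumed={used} GB, headroom={pool - used} GB")
```

The point the sketch makes: a single oversized VM or a small, isolated cluster can blow past its own entitlement while a larger pooled cluster absorbs it with room to spare.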

2. Combine development, test and production vSphere clusters.
This may seem a bit awkward at first, but hard lines of separation between environments may need to soften once the costs are weighed against the benefit of pooling all CPU vRAM entitlements into fewer environments. Further, this has always been the driving idea behind the core functionality: pool all hardware resources and let the management tools ensure CPU and memory resources are delivered, including across production and development zones.

3. Get more out of storage with Storage DRS.
Storage DRS is one of my favorite features of vSphere 5. It is not to be confused with the storage tiering solutions that may be available from a SAN vendor: Storage DRS is a solution for like tiers of storage. Basically, a pool of similar storage resources is logically grouped, and vSphere manages latency and free space across the group automatically. Storage DRS isn't the solution for mixing tiers (such as SAS and SATA drives); it is intended to balance latency and free space across a number of similar resources. I see Storage DRS saving administrators a lot of the time currently spent watching datastore latency and free space and then performing Storage vMotion tasks by hand; this will be a big win for the administrator.
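To make the balancing idea concrete, here is a toy placement rule in the spirit of what Storage DRS automates. This is purely illustrative: the latency threshold, the datastore records and the selection logic are my own simplification, not VMware's actual algorithm:

```python
# Toy sketch of latency/free-space-aware placement across a pool of
# similar datastores. Thresholds and datastore data are hypothetical.

def pick_datastore(datastores, latency_threshold_ms=15.0):
    """Prefer datastores under the latency threshold; among those,
    pick the one with the most free space."""
    healthy = [d for d in datastores if d["latency_ms"] <= latency_threshold_ms]
    candidates = healthy or datastores  # fall back if every datastore is hot
    return max(candidates, key=lambda d: d["free_gb"])


pool = [
    {"name": "ds01", "free_gb": 400, "latency_ms": 9.2},
    {"name": "ds02", "free_gb": 650, "latency_ms": 22.5},
    {"name": "ds03", "free_gb": 520, "latency_ms": 11.8},
]
# ds02 has the most free space but is over the latency threshold,
# so the pick falls to ds03.
print(pick_datastore(pool)["name"])  # ds03
```

The value of the real feature is exactly that this kind of decision, plus the follow-up Storage vMotion, happens without an administrator eyeballing datastore charts.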

4. VMFS-5 unified block size makes provisioning easier.
There are a number of critical improvements in VMFS-5 as part of vSphere 5. The most underrated is that there is now a unified block size (1 MB). This will prevent the accidental small-block formats we had with VMFS-3, where a 1 MB block size capped VMDK files at 256 GB. The more visible feature of VMFS-5 is that a single volume can now be 64 TB. That's huge! This will make provisioning much simpler for large volumes on capable storage processors and will greatly simplify the design of new vSphere 5 environments, but it also means upgrades require some consideration. I recommend reformatting all VMFS-3 volumes to VMFS-5, especially those at block sizes other than 1 MB. A VMFS-3 volume at a 2, 4 or 8 MB block size can be upgraded in place to VMFS-5, but the upgraded volume retains its original block size; if you can move the resources around, I recommend a reformat.
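For reference, the old VMFS-3 trade-off looked like this: the block size chosen at format time capped the maximum VMDK size at roughly the block size in MB times 256 GB. A quick sketch of that table (approximate ceilings, ignoring the small per-file overhead):

```python
# VMFS-3 block size vs. approximate maximum VMDK size; VMFS-5's unified
# 1 MB block removes this format-time trade-off entirely.
VMFS3_MAX_VMDK_GB = {1: 256, 2: 512, 4: 1024, 8: 2048}

for block_mb, max_gb in sorted(VMFS3_MAX_VMDK_GB.items()):
    print(f"VMFS-3 {block_mb} MB block size -> max VMDK ~{max_gb} GB")
```

This is why a 1 MB-formatted VMFS-3 volume forced a reformat the day someone needed a 300 GB VMDK, and why the unified block size is such a quiet win.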

5. Remote environments not leveraging virtualization? Consider the VSA.
The vSphere Storage Appliance (VSA) is actually the only net-new product in the vSphere 5 (and related cloud technologies) launch. While the VSA has a number of version 1 limitations, it may be a good fit for environments that have been too small for a typical vSphere installation. The VSA simply takes local storage resources and presents them as an NFS datastore; two or three ESXi hosts are leveraged to provide this virtual SAN on their local storage. The big limitation is that vCenter cannot run on that special datastore, so for a remote environment, consider leveraging a central vCenter instance rather than a local installation.

Do you see major design changes coming due to the new features of vSphere 5 or due to the new vRAM licensing paradigm? Share your strategies here.

Posted by Rick Vanover on 08/02/2011 at 12:48 PM

