vStorage VMFS Version Notes

In a prior post, I mentioned that vStorage VMFS is one of my favorite elements of a VMware-based virtualization solution. With vSphere, the good news is that the upgrade path requires no action on the administrator's part. For Fibre Channel and iSCSI storage systems, here are some notes on VMFS versions to consider in your upgrade and deployment scenarios. And for the NFS camp out there, I'll get to you guys another day!

With ESX 4, the main thing you need to know is that the VMFS versions that started with ESX 3.x are forward compatible with ESX 4. If we look a little closer, some subtle details emerge. Most VMFS volumes for ESX 3.x host systems were provisioned as VMFS version 3.21 volumes. When ESX 3.5 was introduced, we started to see VMFS version 3.31 assigned to newly created LUNs. Chances are that if you have LUNs that have never been reformatted, you have a mix of VMFS versions in your environment.
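
If you're not sure what you have, you can check each datastore's VMFS version from the ESX service console with vmkfstools. A minimal sketch, assuming a datastore named datastore1 -- substitute your own volume names:

vmkfstools -P -h /vmfs/volumes/datastore1

The first line of output reports the file system version, along the lines of "VMFS-3.21 file system spanning 1 partitions." Run it against each volume and you quickly have an inventory of versions across the environment.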

With ESX 4, we can still take advantage of the main benefit on the storage side -- thin provisioning of the virtual disks -- even for VMFS 3.21 volumes. With the older VMFS versions, however, we miss out on some of the performance enhancements. Specifically, a VMFS 3.21 volume will not provide the same results as a VMFS 3.29 or higher volume. vSphere and ESX 4 will format new LUNs at VMFS 3.33. For LUNs already at VMFS 3.31, the difference between 3.31 and 3.33 is minor and is noted as an internal change to VMFS.

The takeaway here is that if you can afford the time, it may be a good idea to evacuate each VMFS 3.21 volume that was originally provisioned under ESX 3.x and reformat it as 3.31 or 3.33. Enhanced Storage VMotion makes this easy, and it also allows a fully provisioned VM to be migrated to a thin-provisioned disk. You have VMs to move anyway, so you might as well reformat the LUN upward while the opportunity is there.
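
Once a volume has been evacuated, the reformat itself is a single command from the service console. A hedged sketch, assuming a hypothetical LUN device name and using its first partition -- substitute your own device and label, and double-check it, because this destroys whatever is left on the volume:

vmkfstools -C vmfs3 -S NewDatastore /vmfs/devices/disks/naa.60060160c1d02100a1b2c3d4e5f60111:1

ESX 4 lays the file system down at the current VMFS version. If you would rather stay out of the console, deleting and re-adding the datastore in the vSphere Client gets the same result.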

VMFS is still the bombdizzle in the competitive landscape for virtualization filesystems from the hypervisor perspective. This is just a small note on versioning that can get everything in line as part of your upgrade planning process, either to vSphere or current versions of ESX 3.5.

Posted by Rick Vanover on 06/01/2009 at 12:47 PM


DRS Cluster Limits Introduced with vSphere

Recently, I wrote about how certain vSphere configurations introduce limits to consolidation ratios on HA-enabled clusters. Along the same lines, I thought it a good idea to share another item from the vSphere Configuration Maximums document.

For a DRS-enabled cluster managed by vCenter, there is a supported maximum of 1,280 virtual machines powered on concurrently. In VI3, there was no limit other than the 2,000 VMs per vCenter server. Why introduce a limit? Well, that's complicated. DRS has advanced in vSphere, with two big additions. The first is full support for Distributed Power Management, which was experimental in VI3. The other key addition is the vApp, a collection of VMs that can be managed as a single object, including its resource management.

Like the HA per-host VM limits that apply to certain clusters, this per-cluster DRS limitation can seriously impact design planning. VDI implementations can easily hit 1,280 VMs in a DRS cluster, which forces the decision of whether to make it a DRS cluster at all. For server consolidation, DRS is a must, yet scaling out to additional clusters is something many administrators will groan about.

This isn't earth-shattering, just a reminder that we should all review the current Configuration Maximums document for vSphere.

Posted by Rick Vanover on 05/28/2009 at 12:47 PM


vSphere CLI Quick Look: Storage

Now that VMware has released vSphere, I'm taking a long, hard look at using ESXi as the de facto hypervisor. ESX has been good to me, but going forward I think ESXi is the better choice, mainly so that managed systems (with vSphere licensing for features) and unmanaged systems can run the same bare-metal hypervisor.

For many environments, storage and networking are the biggest areas of planning and administration outside of the virtual machines themselves. For storage, I frequently find myself wanting to check on the multipath capabilities of ESX. That is difficult with ESXi 4 in most default configurations. One tool that can help without circumventing the ESXi 4 configuration is the vSphere Command Line Interface, or CLI. The vSphere CLI allows the esxcfg- series of commands to be run against an ESXi host, including standalone systems using the free ESXi license.

Installing the vSphere CLI is quick and easy. You can run this type of command for multipath information on your storage system:

esxcfg-mpath.pl --server 10.187.187.175 -list -b

Figure 1 shows the command's output and the corresponding entry in the vSphere Client for comparison.

Figure 1. The multipath command is executed from a Windows system to the ESXi host.

It's quite handy to be able to run this and other commands from a workstation, especially against an unmanaged hypervisor. VI3 administrators who have used the esxcfg-mpath command before will notice that the multipathing display is quite different on ESXi 4.
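
As another quick example of the same approach, esxcfg-scsidevs can map VMFS volumes back to their underlying devices, which pairs nicely with the multipath output above. A sketch using the same example host address; the -m switch should print the volume-to-device mapping:

esxcfg-scsidevs.pl --server 10.187.187.175 -m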

Is this helpful? Let me know what commands you want to see and I'll see if I can script them.

Posted by Rick Vanover on 05/28/2009 at 12:47 PM


Rockstar Consolidation At Risk?

For any virtualization solution, it is imperative to design for your needs within the guidelines of the products you use. With the recent vSphere release, we can now dig into some of the fine-point details that will shape how we approach the new platform. Last week, the Configuration Maximums document went online at the VMware Web site. Here is the VMware Infrastructure 3 Configuration Maximums document if you want to compare the major releases.

In many of my posts here, I've referred to VI3 and vSphere as having "rockstar" consolidation capacity. The VMware products deliver this with DRS and the memory management technologies; you can really pack a lot of virtual machines into your implementation. vSphere introduces some complications to this strategy. I first stumbled upon this at Duncan Epping's Yellow Bricks blog. Duncan's posts are top-notch, and you should keep his material on your short list. On his site he connects the details -- basically, that there is a smaller limit on VMs per host when using VMware HA. Without VMware HA, the published maximum is 320 guest VMs per host. When HA is introduced, things get vUgly. For HA-enabled clusters with eight or fewer ESX servers, there can be up to 100 VMs per host. For HA-enabled clusters with nine or more ESX servers, the maximum drops to 40 VMs per host.

The 40-VM limit is a real shock to me for HA clusters with nine or more hosts. Current and future server hardware is capable of running far more than 40 VMs, but the management overhead becomes quite heavy at this tier. The way to solve this is to run multiple HA clusters -- a nine-host cluster tops out at 360 VMs, while the same hardware split into two smaller HA clusters could reach 100 VMs per host -- but that is somewhat less than satisfactory. It's mainly because HA costs at least one host's worth of CPU and RAM that you can't use, since it is reserved for failover, yet you still have to pay the licensing for it.

For me, VMware HA has been more of a burden than a benefit. This is primarily due to the functionality issues that occurred in the ESX 3.5 releases, and the fact that I've never needed it to save me. I'm still excited about vSphere -- don't get me wrong. But my ugly little friend, VMware HA, pops his head up again. Where does this information put you on vSphere?

Posted by Rick Vanover on 05/26/2009 at 12:47 PM


Everyday Virtualization Shifts Into Sixth Gear

2009 has been a wild ride in the virtualization world so far. Last week's vSphere release, the Citrix XenServer 5.5 beta and Windows Server 2008 R2 with Hyper-V going into release candidate status all make for a busy landscape. I haven't even touched the partner products and management space.

With this busy state of virtualization, I had a call with Virtualization Review Editor Keith Ward and laid out some ideas for new content. The good news for you is that he liked all of it! So, get ready for additional, highly technical material on the Everyday Virtualization blog. I'm formulating a lot of concepts, but if you have topics you want covered, let me know. Drop me an e-mail or post a comment below.

Posted by Rick Vanover on 05/26/2009 at 12:47 PM


Stone Cold Clones

In the course of any server virtualization project, there will undoubtedly be a conversion that just won't seem to work correctly. While there are many ways to troubleshoot a system for a P2V or V2V conversion, one strategy is to perform a cold clone. A cold clone is actually the cleanest way to convert a system, because the source is in a zero-transaction state. Most conversion tools are merely crash-consistent for Windows systems.

While the consistent part of that qualification is good, the crash part is not. A cold clone converts a system to a virtual machine very cleanly by booting it into a special environment to perform the conversion. This approach works well for highly sensitive systems, such as those with proprietary databases or heavy file share traffic that would be out of sync if brought up crash-consistent.

As part of my beta work with the upcoming vSphere release, one of the areas where I have been focusing my efforts is the vCenter Converter product. The newest release includes a cold clone option that is for the most part indistinguishable from the prior offerings, including its much slower conversion speed compared to the online conversion mechanisms. The main point is that ESX 4 and vSphere are now supported as a destination for a VM in the cold clone environment.

One thing that caught my eye is that the newest vCenter Converter uses a boot environment that is not Vista- or Windows Server 2008-based. Administrators can add storage and network drivers for more modern equipment with the included peTool.exe utility, but this is not exactly the most pleasant experience. My conversions thus far, on a few physical systems and Sun xVM VirtualBox VMs, have gone seamlessly with the cold clone utility of vCenter Converter for vSphere.

I'm putting the finishing touches on materials for the Advanced P2V session at the upcoming TechMentor training event in Orlando. I'll also be presenting two other sessions: one on building a business case for virtualization, and one on when it makes sense to start paying for virtualization in smaller environments.

Has a cold clone ever gotten you out of a sticky conversion? If so, tell your story below -- I may just replay it at TechMentor!

Posted by Rick Vanover on 05/21/2009 at 12:47 PM


Competition Coming for Virtual Switches

Fresh from the recent Citrix Synergy event, there are mounting mumblings that Citrix will provide a virtual switch for the XenServer and KVM hypervisor platforms. Citrix may be circling around an open source virtual switch which, like the XenServer hypervisor, would have its roots in an open source project that eventually evolves into a revenue-generating product.

Chris Wolf, Virtualization Review's Virtual Advisor columnist, mentions the possibility of a Citrix-backed virtual switch on his personal blog. Wolf is dead-on about why a virtual switch is important, and about how the competitive landscape may give Cisco a run for its money.

This comes on the heels of Cisco's own offering, the Nexus 1000V for VMware virtual environments, which is currently in beta.

It's worth speculating on whether an open source virtual switch could be released as a virtual appliance. If so, could a Citrix-sponsored open source virtual switch run on VMware's ESX? Imagine an open source virtual switch competitive with Cisco's offering on multiple hypervisor platforms. I still think, though, that Cisco will have an overall edge, simply by offering tighter integration with existing physical networking infrastructure built on Cisco equipment.

Wolf mentions that details for the concept product are not out yet, but we should hear about this soon. Competitively it's a must-do, since many network administrators are excited about the Nexus 1000V. Citrix XenServer environments should be excited about the start of this movement, especially if there is corresponding activity in Xen, where many of the features start out before being folded into the revenue product.

Are you excited about the upcoming virtual switches? Share your comments below or shoot me a message.

Posted by Rick Vanover on 05/19/2009 at 12:47 PM


Expanded OS Support With vSphere

One of the strong points of VMware's upcoming vSphere release is support for an expanded range of guest operating systems (OSes). With VI3, the inventory of supported OSes is around 15 across various Linux, Microsoft, Novell and Sun offerings. vSphere will support more than 24 OSes. Among the new offerings are FreeBSD, OS/2, SCO OpenServer 5, SCO UnixWare 7, DOS, Windows 98, Windows 95 and Windows 3.1.

While that is a good thing in general, there are downsides. One tradeoff has to do with the potential for increased memory overhead. One of the key features enabling ESX's rockstar consolidation ratios is "transparent page sharing" technology, which stores memory pages that are identical across multiple VMs only once; it's effectively de-duplication of active memory pages. Of course, a wider mix of OSes in a datacenter means fewer identical memory pages and less efficient use of memory.

However, VMware has other ways to make memory use more efficient, including memory ballooning. Available when VMware Tools is installed in the guest OS, ballooning moves memory from inactive VMs to active ones, which also helps in obtaining high consolidation ratios.
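
If you want to see what page sharing and ballooning are actually saving on a given host, the memory view of esxtop (or resxtop in the vSphere CLI for ESXi) reports it directly. A minimal sketch, assuming a host named esx01.example.com:

resxtop --server esx01.example.com

Once connected, press m for the memory screen; the PSHARE/MB line shows the savings from transparent page sharing, and the MEMCTL/MB line shows how much memory the balloon driver is currently reclaiming.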

The other potential negative of additional platform support is the "enabling" of non-infrastructure groups to keep outdated OSes around forever. While there are legitimate needs for older OSes, such as test environments or archived systems, the virtualization administrator walks a thin line in most situations involving older OSes. It is generally a good idea to keep current on all platforms, but that is not always possible.

Is the additional OS support good news for you? Or is this something you might not want to advertise to your internal customers? Share your comments below.

Posted by Rick Vanover on 05/14/2009 at 12:47 PM


Virtual Appliance Grazing

Virtual appliances are a great thing, if you find one you like. It is truly amazing how many appliances are available, and the VMware Appliance Marketplace is a nice starting point for the hunt. Many appliances are free, but with the old warning that “you get what you pay for,” administrators should proceed with caution in the virtual appliance space.

One of my favorites is the VKernel SearchMyVM appliance, which is the quickest way to do query-based searches on VMware VI3 installations. I use a smattering of others, and play with a bunch more. But getting into the appliance space got me thinking a bit about the management side of appliances.

The free products are interesting and worth investigating. There are also plenty of evaluation appliances available, so don’t get too attached if you don’t have the funds to use one legitimately, and don’t become dependent on a product you can’t procure.

Where I get concerned about appliances -- especially the free ones -- is with the practical issues of using them. A virtual appliance contains an operating system configured by the publisher of the appliance. Usually they're Linux-based for licensing reasons, and most ship with a published root password. Should these passwords be changed? Do you know exactly what these devices do on the network, or to the systems they connect to? Just because they are quick and easy to download doesn't mean you can skip your infrastructure management and policy homework.
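
That homework doesn't have to be elaborate. A minimal first pass on a Linux-based appliance, run from its console and assuming the publisher documented root access, might look like this:

passwd            # replace the published root password with one of your own
netstat -tlnp     # list listening services and the processes behind them

Two minutes of that kind of poking answers most of the questions above, and anything surprising in the listening-port list is worth chasing down before the appliance gets anywhere near production.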

I don’t want to bash virtual appliances; I am a big fan of them. But I do want to issue a few warnings.

  • If you are not using the appliance, turn it off. Most virtual environments are too expensive to be running idle VMs.
  • Determine whether the appliance is really needed or provides a benefit you think is worth its resources.
  • Refine the security. Set the password(s) the way you need them up front, and find and fix problems before you get too far along with the appliance.

Maybe a rant; maybe I’m on to something. What is your opinion on virtual appliances? Do they fall out of policy in your environment? Share your comments below.

Posted by Rick Vanover on 05/12/2009 at 12:47 PM


Decision Time: ESX vs ESXi

I am really excited about the upcoming product from VMware. In my opinion, this is a good time to make a final decision on ESX vs. ESXi. For current VI3 environments, ESXi has most of the functionality of ESX, and all of the licensed features are available. This KnowledgeBase article has a good breakdown of the differences as they exist today. For anything that you can’t do on ESXi directly, chances are you can do it via either the Remote Command Line or the VI Toolkit.
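
As a small example of what the Remote Command Line covers, most of the familiar esxcfg- commands can be pointed at an ESXi host over the network. A sketch, assuming an ESXi host at 10.187.187.175 as in the CLI post above; this one simply lists the host's physical NICs:

esxcfg-nics.pl --server 10.187.187.175 -l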

At this point in the technology, I'm pro-ESXi. To start with, the lack of a full console operating system does make the host more efficient. ESXi has a learning curve compared to ESX, but given how far we have come in virtualization, that should be an easy obstacle to overcome. Didn’t ESX seem a little daunting compared to a simple physical server a few years ago?

One reason to go ESXi is its long-term viability. With ESXi now being released in tandem with ESX, one has to wonder whether both products will be around for the long haul. It's safe to assume that only one of them will be, and VMware makes a strong case for ESXi in this FAQ, where ESXi is clearly recommended for all scenarios. We also get occasional nuggets from moderator posts on VMware Communities to the same effect. But for the base vSphere release, both ESX and ESXi will be available and fully supported.

ESXi is my choice for vSphere’s hypervisor, in part because of its importance for our cloud framework. What's your take on ESX vs. ESXi? E-mail me your thoughts on the matter or share them below.

Posted by Rick Vanover on 05/06/2009 at 12:47 PM


Application Virtualization Galore

For many environments, jumping into application virtualization is an ‘as-needed’ endeavor. There are a few big players in the space, including VMware ThinApp, Citrix XenApp, Microsoft App-V and Symantec EndPoint. If you didn’t see the February/March issue, Ken Da Silva did a great review of ThinApp and its smooth interface. In my opinion, application virtualization is the boutique segment of the larger virtualization market, and many organizations find it tough to identify a use case.

The exception is XenApp, since Citrix has been in the application and presentation virtualization space for quite a long time. Further, many organizations have already invested heavily in Citrix installations. These have also made natural transitions to server virtualization, with Web front-end servers and presentation servers being converted to VMs. It is also darn cool to virtualize a server that is already a virtualization solution -- a "double-dip," if you will.

What got me thinking about application virtualization was this whitepaper comparing the four biggest products in the space. What I like is that it gives a really good breakdown of the architectural differences between the various solutions. The material puts ThinApp on top for performance reasons, but it definitely made me want to poke around the other solutions. A major criterion for choosing application virtualization is cost, since usually only the largest environments need the technology -- and it could be applied to a large number of systems.

Where are you with application virtualization? Have pricing models gotten in the way of using this technology? Share your stance on this slice of virtualization below, or e-mail me your comments on the products and how you use them.

Posted by Rick Vanover on 05/05/2009 at 12:47 PM


What We Can Learn From the Big Boys

On Virtualization.info, Alessandro Perilli has several good posts on how Microsoft and VMware use virtualization internally. The first is a peek into both VMware's and Microsoft's use of virtualization, and the other is a response to some criticism of Microsoft's adoption strategy. Beyond this, Virtualization Review editor Keith Ward and I had access to some of VMware's inner workings in a recent call, which Keith highlights in his blog.

I am an infrastructure guy, and I find this information incredibly interesting. While I am not involved in an environment anywhere near the size and scale of VMware or Microsoft, I do take away some important information. First of all, the consolidation ratio is more important to the "real world" than it appears to be in the internal practices of Microsoft and VMware.

According to the posts, Microsoft's consolidation ratios are fewer than 23 VMs per host, averaging 10.4 server VMs per host. The VMware ratio is 10 server VMs per host. Linked in this material is a Vinternals post in which a production Hyper-V environment is hosting only 5.7 server VMs per host.

With a similar host configuration, I regularly see ratios in the range of 25-35 VMs per host and am very happy with the environment. New environments with Nehalem processors and more RAM may let me reach 50-90 server VMs per host. While all VMs are not created equal, a consolidation ratio is a comparable statistic, in my opinion, when looked at in aggregate.

The next thing that caught my interest is Microsoft Hyper-V's continued reliance on (or, in some cases, avoidance of) Microsoft clustering services -- something I still shake my head at. I hinted at this recently when I mentioned that VMFS just makes this easy for us. I am simply not a fan of using a non-virtualization solution to manage access to the disk that contains the VMs, and in Hyper-V R2 the reliance continues.

The final point is that there is nothing VMware and Microsoft are doing that most organizations can't do. Whichever side you choose -- in my case, VMware -- you can do it.

The peek into how the big boys play in their own sandbox is amazing. What's your take on it? Send me your thoughts or share your comments below.

Posted by Rick Vanover on 04/30/2009 at 12:47 PM

