Keeping Sales a Priority -- at the Expense of Free Tools

One of the best things about virtualization is that it lends itself to so many other great things, including partner products that supplement the larger landscape. Virtualization Review Editor Keith Ward hinted at this in a recent post. Today, Veeam Software announced that it will discontinue the Veeam Backup product's ability to back up the free edition of ESXi. The recommended path going forward will be vSphere Essentials paired with the enhanced Veeam offerings.

The bad news is that this is at VMware's request, clearly to protect revenue. Sure, terms and conditions exist for these scenarios, but a backup tool? I'm a little disappointed, as free tools round out any solution nicely, especially for the small and medium business -- which is exactly where Veeam Backup has been targeted.

Your thoughts on the matter? Share your comment below or drop me an e-mail.

Posted by Rick Vanover on 06/03/2009 at 12:47 PM


Odd Processor Assignments

For most server consolidation implementations, running server operating systems with one or two virtual CPUs addresses most situations. Rarely do I see a need for more processing power than two virtual CPUs provide, but there are always exceptions that make for good examples.

With vSphere, we can now provision VMs with up to eight virtual CPUs. Further, we can assign an odd number of processors to the guest VM, as in Fig. 1, which shows the option to assign three, five or seven processors during the VM creation process.

Figure 1. VMs created with vSphere can now be provisioned with up to eight processors, including an odd-numbered inventory.
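For those curious about what the wizard actually writes out, the vCPU count ends up as a single entry in the VM's .vmx file. As a rough illustration (the memory size here is just a placeholder), a three-vCPU VM at hardware version 7 would carry entries along these lines:

virtualHW.version = "7"
memsize = "4096"
numvcpus = "3"

You would normally let the creation wizard or the VM's settings dialog manage this rather than edit the file by hand, but it is handy to know where the value lives.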

VMware's documentation recommends provisioning VMs with one virtual CPU unless more are truly needed. This is because the host can schedule a request for a single CPU's worth of cycles more efficiently than a request that must be co-scheduled across two or more CPUs at once.

One quick note on compatibility: ESX 4 introduces virtual machine hardware version 7. VMs carried forward from ESX 3.x remain at hardware version 4 and can still only be assigned one, two or four virtual CPUs.

In the case of odd-numbered CPU inventories, I remember back in the day when vendors used to offer three-CPU servers. That was tacky then -- and it is probably tackier now. One of the few requirements-based reasons to assign a three-, five- or seven-CPU VM would be licensing. Beyond that, we venture into configurations that may spell trouble down the road. I can just imagine a vendor saying "we don't support odd-numbered CPU counts -- except one."

I am curious about your thoughts on the issue. Share your comment below or drop me an e-mail.

Posted by Rick Vanover on 06/01/2009 at 12:47 PM


vStorage VMFS Version Notes

In a prior post, I mentioned that vStorage VMFS is one of my favorite elements of a VMware-based virtualization solution. With vSphere, the good news is that the upgrade path requires no action on the administrator's part. For Fibre Channel and iSCSI storage systems, here are some notes on VMFS versions that you can consider in your upgrade and deployment scenarios. And for the NFS camp out there, I'll get to you guys another day!

With ESX 4, the main thing you need to know is that VMFS volumes that started life under ESX 3.x are forward compatible with ESX 4. If we look a little closer, some subtle details start to emerge. Most VMFS volumes for ESX 3.x host systems were provisioned as VMFS version 3.21 volumes. When ESX 3.5 was introduced, we started to see VMFS version 3.31 assigned to newly created LUNs. Chances are that if you have LUNs that have never been reformatted, you have a mix of VMFS versions in your environment.
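If you're not sure which version a given volume is at, vmkfstools can tell you. From the ESX service console (the datastore name below is just a placeholder), a quick check looks something like this:

vmkfstools -P /vmfs/volumes/datastore1

The output reports the file system version -- VMFS 3.21, 3.31 and so on -- so a quick pass across your datastores will show just how mixed the environment really is.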

Once ESX 4 is introduced, we can still take advantage of the main benefit on the storage side -- thin provisioning of the virtual disks -- even for VMFS 3.21 volumes. With older versions of VMFS, however, we miss out on some of the performance enhancements; specifically, a VMFS 3.21 volume will not provide the same results as a VMFS 3.29 or higher volume. vSphere and ESX 4 will format new LUNs at VMFS 3.33. For LUNs at VMFS 3.31, the difference between 3.31 and 3.33 is minor and marked as an internal change for VMFS.

The takeaway here is that if you can afford the time to evacuate each VMFS 3.21 volume that was initially provisioned under ESX 3.x and reformat it as 3.31 or 3.33, it may be a good idea. This can be made easy with enhanced Storage VMotion, which allows a fully provisioned VM to be migrated to a thin-provisioned disk. You have VMs to move anyway, so you might as well reformat the LUN upward while the opportunity is there.
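Once a volume has been evacuated, the reformat itself can be done by removing and re-creating the datastore in the vSphere Client, or from the service console with vmkfstools. A rough sketch, with a made-up label and device path, would be something like:

vmkfstools -C vmfs3 -b 1m -S MyDatastore /vmfs/devices/disks/naa.600601601234:1

Just be certain the LUN really is empty first -- this wipes whatever is left on it -- and a datastore created under ESX 4 will come up at the current VMFS version.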

VMFS is still the bombdizzle in the competitive landscape for virtualization filesystems from the hypervisor perspective. This is just a small note on versioning that can get everything in line as part of your upgrade planning process, either to vSphere or current versions of ESX 3.5.

Posted by Rick Vanover on 06/01/2009 at 12:47 PM


DRS Cluster Limits Introduced with vSphere

Recently, I wrote about how certain vSphere configurations introduce limits on consolidation ratios for HA-enabled clusters. With that in mind, I thought it a good idea to share another item from the Configuration Maximums document for vSphere.

For a DRS-enabled cluster managed by vCenter, there is a supported maximum of 1,280 virtual machines powered on concurrently. In VI3, there was no limit other than the 2,000 VMs per vCenter server. Why introduce a limit? Well, that's complicated. DRS has advanced in vSphere, with two big additions. The first is the fully supported addition of Distributed Power Management, which was experimental in VI3. The other key addition is the vApp, a collection of VMs that can be managed as a single object, including its resource management.

Like the HA limit on VMs per host that applies to certain clusters, this per-cluster DRS limitation can seriously impact design planning. VDI implementations can easily hit 1,280 VMs, at which point you may decide not to make it a DRS cluster at all. For server consolidation, DRS is a must, yet scaling out to additional clusters is something many administrators will groan about.

This isn't earth-shattering, just a reminder that we should all review the current Configuration Maximums document for vSphere.

Posted by Rick Vanover on 05/28/2009 at 12:47 PM


vSphere CLI Quick Look: Storage

Now that VMware has released vSphere, I'm taking a long, hard look at using ESXi as the de facto hypervisor. ESX has been good to me, but going forward I think ESXi is the better choice, mainly so that managed systems (with vSphere licensing for features) and unmanaged systems run the same bare-metal hypervisor.

For many environments, storage and networking are the biggest areas of planning and administration outside of the virtual machines themselves. On the storage side, I frequently find myself wanting to check on a host's multipathing, which is difficult with ESXi 4 in most default configurations. One tool that can help without circumventing the ESXi 4 configuration is the vSphere Command Line Interface, or CLI. The vSphere CLI allows the esxcfg- series of commands to be run against an ESXi host, including standalone systems using the free ESXi license.

Installing the vSphere CLI is quick and easy. You can run this type of command for multipath information on your storage system:

esxcfg-mpath.pl --server 10.187.187.175 -list -b

Figure 1 shows the command's output and the corresponding entry as it appears in the vSphere Client.

Figure 1. The multipath command is executed from a Windows system against the ESXi host.

It's quite handy to be able to run this and other commands from a workstation, especially in the case of an unmanaged hypervisor. VI3 administrators who have used the esxcfg-mpath command before will notice that the multipathing display is quite different on ESXi 4.
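For a broader look at the storage picture from the same toolkit, esxcfg-scsidevs can give a compact listing of the LUNs the host sees. Something along these lines should do it, though check the command's built-in help if the switches differ on your vSphere CLI build:

esxcfg-scsidevs.pl --server 10.187.187.175 -c

Like the multipath command, it runs from the workstation without ever touching the unsupported ESXi console.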

Is this helpful? Let me know what commands you want to see and I'll see if I can script them.

Posted by Rick Vanover on 05/28/2009 at 12:47 PM


Rockstar Consolidation At Risk?

For any virtualization solution, it is imperative to design for your needs within the guidelines of the products you use. With the recent vSphere release, we can now dig into some of the fine-point details that will shape how we approach the new platform. Last week the Configuration Maximums document went online at the VMware Web site. Here is the VMware Infrastructure 3 Configuration Maximums document if you want to compare the major releases.

In many of my posts here, I've referred to VI3 and vSphere as having "rockstar" consolidation capacity. The VMware products deliver this with DRS and the memory management technologies; you can really pack a lot of virtual machines into your implementation. vSphere introduces some complications to this strategy. I first stumbled upon this at Duncan Epping's Yellow Bricks blog. Duncan's posts are top-notch, and you should keep his material on your short list. On his site he connects the dots -- basically, there is a smaller limit for VMs per host when using VMware HA. Without VMware HA, the published maximum is 320 guest VMs per host. When HA is introduced, things get vUgly. For HA-enabled clusters with eight or fewer ESX hosts, there can be up to 100 VMs per host. For HA-enabled clusters with nine or more ESX hosts, the maximum drops to 40 VMs per host.
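To put numbers on it: an eight-host HA cluster tops out at 8 x 100 = 800 powered-on VMs, while a nine-host HA cluster tops out at 9 x 40 = 360. Adding that ninth host actually cuts the supported ceiling by more than half.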

The 40-VM limit is a real shock to me for HA clusters with nine or more hosts. Current and future server hardware is capable of much more than 40 VMs, but the management overhead becomes quite heavy at that tier. The way around it is to run multiple HA clusters, but that is somewhat less than satisfactory -- mainly because each HA cluster costs you at least one host's worth of CPU and RAM that you can't use, since it is held in reserve for failover, yet you still have to pay the licensing for it.

For me, VMware HA has been more of a burden than a benefit. This is primarily due to the functionality issues that occurred in the ESX 3.5 releases, and to the fact that I've never actually needed it to save me. I'm still excited about vSphere -- don't get me wrong. But my ugly little friend, VMware HA, pops his head up again. Where does this information put you on vSphere?

Posted by Rick Vanover on 05/26/2009 at 12:47 PM


Everyday Virtualization Shifts Into Sixth Gear

2009 has been a wild ride in the virtualization world so far. Last week's vSphere release, the Citrix XenServer 5.5 beta and Windows Server 2008 R2 with Hyper-V going into release candidate status all make for a busy landscape. I haven't even touched the partner products and management space.

With this busy state of virtualization, I had a call with Virtualization Review Editor Keith Ward and laid out some ideas for new content. The good news for you is that he liked all of it! So, get ready for additional, highly technical material on the Everyday Virtualization blog. I'm formulating a lot of concepts, but if you have topics you want covered, let me know. Drop me an e-mail or post a comment below.

Posted by Rick Vanover on 05/26/2009 at 12:47 PM


Stone Cold Clones

In the course of any server virtualization project, there will undoubtedly be a conversion that just won't seem to work correctly. While there are many ways to troubleshoot a system for a P2V or V2V conversion, one strategy is to perform a cold clone. A cold clone is actually the cleanest way to convert a system, as the source is in a zero-transaction state; most conversion tools are merely crash-consistent for Windows systems.

While the consistent part of that qualification is good, the crash part is not. A cold clone converts a system to a virtual machine very cleanly by booting the source into a special environment to perform the conversion. This approach works well for super-sensitive systems -- proprietary databases, say, or heavy file share traffic -- that would be out of sync if brought up crash-consistent.

As part of my beta work with the upcoming vSphere release, one of the areas I have been focusing on is the vCenter Converter product. The newest release includes a cold clone option that is for the most part indistinguishable from the prior offerings, including a much slower conversion speed than the online conversion mechanisms. The main point is that ESX 4 and vSphere are now supported as destinations for the converted VM in the cold clone environment.

One thing that caught my eye is that the newest vCenter Converter uses a boot environment that is not Vista- or Windows Server 2008-based. Administrators can add storage and network drivers for more modern equipment with the included peTool.exe utility, but this is not exactly the most pleasant experience. My conversions thus far on a few physical systems and Sun xVM VirtualBox VMs have gone seamlessly with the cold clone utility of vCenter Converter for vSphere.

I'm putting the finishing touches on materials for the Advanced P2V session at the upcoming TechMentor training event in Orlando. I'll also be presenting two other sessions: one on building a business case for virtualization, and one on when it makes sense for smaller environments to start paying for virtualization.

Has a cold clone ever gotten you out of a sticky conversion? If so, tell your story below -- I may just replay it at TechMentor!

Posted by Rick Vanover on 05/21/2009 at 12:47 PM


Competition Coming for Virtual Switches

Fresh off the recent Citrix Synergy event, there are mounting mumblings that Citrix will provide a virtual switch for the XenServer and KVM hypervisor platforms. Citrix may be circling around an open source virtual switch which, like the XenServer hypervisor itself, would have its roots in open source before eventually evolving into a revenue-generating product.

Chris Wolf, Virtualization Review's Virtual Advisor columnist, mentions the possibility of a Citrix-backed virtual switch on his personal blog. Wolf is dead-on about why a virtual switch is important, and about how the competitive landscape may give Cisco a run for its money.

This comes on the heels of Cisco's own offering, the Nexus 1000V for VMware virtual environments, which is currently in beta.

It's worth speculating on whether an open source virtual switch could be released as a virtual appliance. If so, could a Citrix-sponsored open source virtual switch run on VMware's ESX? Imagine an open source virtual switch competitive with Cisco's offering across multiple hypervisor platforms. I still think, though, that Cisco will have an overall edge, simply by offering tighter integration with existing physical networking infrastructure built on Cisco equipment.

Wolf mentions that details of the concept product are not out yet, but we should hear about this soon. Competitively, it's a must-do, since many network administrators are excited about the Nexus 1000V. Citrix XenServer shops should be excited about the start of this movement, especially if there is corresponding activity in the open source Xen project, where many features start out before being folded into the revenue product.

Are you excited about the upcoming virtual switches? Share your comments below or shoot me a message.

Posted by Rick Vanover on 05/19/2009 at 12:47 PM


Expanded OS Support With vSphere

One of the strong points of VMware's upcoming vSphere release is support for an expanded range of guest operating systems (OSes). With VI3, the inventory of supported OSes is around 15 across various Linux, Microsoft, Novell and Sun offerings. vSphere will support more than 24. Among the new additions are FreeBSD, OS/2, SCO OpenServer 5, SCO UnixWare 7, DOS, Windows 98, Windows 95 and Windows 3.1.

While that is a good thing in general, there are downsides. One tradeoff is the potential for increased memory overhead. One of the key features enabling ESX's rockstar consolidation ratios is "transparent page sharing" technology, which stores memory pages that are identical across multiple VMs only once; it's effectively a de-duplication of active memory pages. Of course, a wider variety of OSes in a datacenter means fewer identical memory pages, and less efficient use of memory.

However, VMware has other ways to make memory use more efficient, including memory ballooning. Available when VMware Tools is installed in the guest OS, ballooning moves memory from inactive VMs to active ones, which also helps in obtaining high consolidation ratios.

The other potential negative of additional platform support is the "enabling" of non-infrastructure groups to keep outdated OSes around forever. While there are legitimate needs for older OSes, such as test environments or archived systems, the virtualization administrator walks a thin line in most situations involving them. It is generally a good idea to keep current on all platforms, but that is not always possible.

Is the additional OS support good news for you? Or is this something you might not want to advertise to your internal customers? Share your comments below.

Posted by Rick Vanover on 05/14/2009 at 12:47 PM


Virtual Appliance Grazing

Virtual appliances are a great thing, if you find one you like. It is truly amazing how many appliances are available, and the VMware Appliance Marketplace is a nice starting point for the hunt. Many appliances are free, but with the old warning that “you get what you pay for,” administrators should proceed with caution in the virtual appliance space.

One of my favorites is the VKernel SearchMyVM appliance, which is the quickest way to do query-based searches on VMware VI3 installations. I use a smattering of others, and play with a bunch more. But getting into the appliance space got me thinking a bit about the management side of appliances.

The free products are interesting and worth investigating. There are also plenty of evaluation appliances available, so don't get too attached if you don't have the funds to use one legitimately, and don't become dependent on a product you can't procure.

Where I get concerned about appliances -- especially the free ones -- is with the practical issues of using them. A virtual appliance contains an operating system configured by the publisher of the appliance. Usually they're Linux-based for licensing reasons, and most provide a root password. Should these passwords be changed? Do you know exactly what these devices do on the network or to the systems they connect to? Simply because they are quick and easy to download doesn't mean you can skip your infrastructure management and policy homework.

I don't want to bash virtual appliances; I am a big fan of them. But I do want to issue a few warnings:

  • If you are not using the appliance, turn it off. Most virtual environments are too expensive to be running idle VMs.
  • Determine whether the appliance is really needed and provides a benefit worth the resources it consumes.
  • Refine the security. Set the password(s) to values you control up front, and find and fix problems before you get too far along with the appliance (see the quick example after this list).
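On most Linux-based appliances, that first pass is as simple as logging in on the console as root with the published credentials and setting a password of your own:

passwd

Anything beyond that -- extra accounts, listening services, phone-home behavior -- is worth a few minutes with the appliance's documentation before it gets anywhere near production.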

Maybe a rant; maybe I’m on to something. What is your opinion on virtual appliances? Do they fall out of policy in your environment? Share your comments below.

Posted by Rick Vanover on 05/12/2009 at 12:47 PM


Decision Time: ESX vs. ESXi

I am really excited about the upcoming product from VMware. In my opinion, this is a good time to make a final decision on ESX vs. ESXi. For current VI3 environments, ESXi has most of the functionality of ESX, and all of the licensed features are available. This KnowledgeBase article has a good breakdown of the differences as they exist today. For anything that you can't do on ESXi directly, chances are you can do it via either the Remote Command Line or the VI Toolkit.
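As a quick illustration of that last point, listing an ESXi host's virtual switches from a management workstation with the Remote Command Line looks something like this (the server address is a placeholder):

esxcfg-vswitch.pl --server <esxi-host> -l

Most of the esxcfg- tasks you would have done at an ESX console can be driven remotely the same way.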

At this point in the technology, I'm pro-ESXi. To start with, the lack of a full console operating system does make the host more efficient. ESXi has a learning curve compared to ESX, but given how far we have come in virtualization thus far, that should be an easy obstacle to overcome. Didn't ESX seem a little daunting compared to a simple physical server a few years ago?

One reason to go ESXi is long-term viability. With ESXi now being released in tandem with ESX, one has to wonder whether both products will be around for the long haul. It's safe to assume that only one of them will be, and VMware makes a strong case for ESXi in this FAQ, where ESXi is clearly recommended for all scenarios. We also get occasional nuggets to the same effect from moderator posts on the VMware Communities forums. For the base vSphere release, though, both ESX and ESXi will be available and fully supported.

ESXi is my choice for vSphere’s hypervisor, in part because of its importance for our cloud framework. What's your take on ESX vs. ESXi? E-mail me your thoughts on the matter or share them below.

Posted by Rick Vanover on 05/06/2009 at 12:47 PM

