Dan's Take

Decision Points for Your Hypervisor Choice

Trying to select one and only one hypervisor to create virtual environments can be a fool's errand.

Just Another Rock-Fetch?
Back in my days at DEC, may it rest in pieces, this was called a rock-fetch. A staff member was tasked with sifting through mounds of data to produce the information needed to support an executive decision. The staff member would return, after days or perhaps weeks of research and analysis, with a well-researched case, a concise summary and suggestions for the decision. At that point, the individual was told that the decision had already been made, and that he/she should now gather information on something else -- that is, fetch a different rock.

Something similar occurs when IT operations staff are told to select one -- and only one -- virtualization stack, because this approach "will reduce complexity and overall costs."

Different Strokes for Different Workloads
The challenge IT faces when given directives like this is that different workloads need support from different sets of tools. As long as the enterprise is supported by workloads on mainframes, midrange UNIX systems and industry standard systems hosting 32-bit and 64-bit operating environments, trying to select a single virtual machine (VM) software package is trickier than you might think.

Why? Consider that the enterprise workloads are executing on at least four different computing architectures, and the tools needed to create virtual environments differ for each. Mainframe, RISC/UNIX, x86-32 and x86-64 environments are based upon different microprocessor architectures (although the two x86 variants are clearly quite similar). They offer different numbers of machine registers; some of the registers have specific uses and can't be used for another purpose; and the instruction sets differ from microprocessor to microprocessor, as do the memory and internal bus architectures. Their I/O models are quite different as well.

In short, what it takes to enfold and isolate an operational environment is quite different from computing environment to computing environment.

Dan's Take: No "Right" Answer
Proprietary mainframe and midrange systems are often not documented well enough (for an external developer, anyway) to allow for the easy creation of system software that executes directly on the hardware and makes best use of its capabilities. Because of this, enterprises are directed to the virtual processing software -- clustering software, workload management software, VM software, and operating system virtualization and partitioning software -- offered by the supplier of the system and its operating system.

Systems based on well-documented architectures often present more opportunity for open source communities and for suppliers of proprietary solutions. It's clear that the x86 family of microprocessors falls into this category. That's why there are many available options, including VMware's ESX/ESXi, Microsoft's Hyper-V and the open source projects Xen and KVM.

Although I've banged this drum before, an enterprise really must decide what it is trying to accomplish before the best option can be selected. Here are a few bits of advice to help guide the decision-making process:

  • If the enterprise uses industry standard, x86-based applications on a platform made up of software created by a single supplier, such as Microsoft, Oracle or VMware, selecting that supplier's VM software offering will, in all likelihood, make it easier to create and support a virtual environment.

  • If the enterprise uses industry standard, x86-based applications with a mix of computing environments built on operating systems, databases and applications from many different suppliers, its needs are different. Selecting VM software based on an open source project, such as Xen or KVM, or from a supplier trying to support multi-vendor computing environments, such as VMware, might be the best approach. This, by the way, is a common datacenter environment after an acquisition or merger.

  • If open source is a key decision criterion, then the focus narrows to Xen and KVM. The next key decision point is the enterprise's mix of systems: Does it contain older 32-bit x86 systems, newer 64-bit x86 systems, and x86 systems with hardware virtualization extensions?

    KVM relies on the hardware virtualization extensions provided by Intel VT-x and AMD-V. Xen, on the other hand, can also support (through paravirtualization) x86 microprocessors that don't offer those special instructions and features, though it will use the extensions when they're present. A minimal way to check for the extensions is sketched after this list.

  • Open source VM software offerings are made available by many suppliers, each of which has integrated the technology into its own computing environment. The preferred technology supplier in this case will direct the choice of VM software as well. IBM, Oracle, Red Hat, SUSE and other open source suppliers will offer either Xen, KVM or both. If one of those suppliers is the "senior supplier" in the enterprise environment, its advice should be strongly considered.
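
As a practical aside to the Xen/KVM point above, here is a minimal sketch of how an administrator might check whether a host exposes the x86 hardware virtualization extensions KVM depends on. It's a sketch under a stated assumption: it only works on a Linux system, where the CPU flags appear in /proc/cpuinfo, with "vmx" marking Intel VT-x and "svm" marking AMD-V.

    # Minimal sketch: detect x86 hardware virtualization extensions on a Linux host.
    # Assumption: /proc/cpuinfo is available (Linux only); "vmx" indicates Intel VT-x,
    # "svm" indicates AMD-V. No third-party libraries are required.

    def virtualization_flags(cpuinfo_path="/proc/cpuinfo"):
        """Return whichever of the 'vmx'/'svm' CPU flags are present on this host."""
        wanted = {"vmx", "svm"}
        found = set()
        with open(cpuinfo_path) as cpuinfo:
            for line in cpuinfo:
                # Each logical CPU has a "flags" line listing its feature flags.
                if line.startswith("flags"):
                    found |= wanted.intersection(line.split())
        return found

    if __name__ == "__main__":
        flags = virtualization_flags()
        if flags:
            print("Hardware virtualization extensions found:", ", ".join(sorted(flags)))
            print("KVM (and hardware-assisted Xen) can run full virtual machines here.")
        else:
            print("No vmx/svm flags found; KVM is not an option on this host,")
            print("though Xen paravirtualization may still be possible.")

If the flags are absent on otherwise capable hardware, it's also worth checking whether the extensions have simply been disabled in the system's BIOS/firmware settings before ruling KVM out.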

In the end, each company has to make the best decision it can, based on the best information it has about its business requirements today and what's likely to be needed in the future. This must include knowledge of the must-have applications, the cost of software (licenses, support, training and so on), and the machine requirements (e.g., processor performance, memory, I/O, storage). It must also be based upon both the extent and depth of the staff expertise available.

About the Author

Daniel Kusnetzky, a reformed software engineer and product manager, founded Kusnetzky Group LLC in 2006. He's literally written the book on virtualization and often comments on cloud computing, mobility and systems software. He has been a business unit manager at a hardware company and head of corporate marketing and strategy at a software company.
