Virtual Architect

Square One

To properly use virtualization, you've got to first understand it.

We've all heard the classic examples before: Virtualization reduces your data center power and cooling costs. Virtualization instantly brings green computing to your business. With it, you'll rapidly deploy new servers and workstations -- it's all just copy and paste!

All of this is classic virtualization hype, and in a lot of ways all of it's true. Once you bring virtualization into your environment, you'll likely reduce your server chassis count, which will have an impact on your power consumption and green status. As we like to say in IT, "nothing's a panacea," but at first blush, virtualization sure seems to give that old saw a run for its money.

That said, anything that promises such an immediate, hard return on investment is inevitably going to attract competition. In a market once home to only a few major vendors (hat tip, VMware), we now find ourselves awash in options, with a wildly complex set of ways in which we can virtualize our environments.

That's one of the major reasons for this magazine. Virtualization Review is the world's first trade magazine dedicated entirely to all things virtual, and to how those products and toolsets impact IT and the computing environment as a whole.

I've been consulting and writing about IT topics for a long time, and I've taken a close look at the technology and the tools in the virtualization space as well as in IT as a whole. I've been involved with a number of virtualization projects, integrating the technology with the needs of the business to find the best fit. So in this column, I'll try to function as your virtualization architect: we'll look at the technology, but also at how and where it fits best within your business.

Back to Basics
Considering all this, for our first issue I figured we should start with the basics. Virtualization has come a long way in the last few years, and there are a number of new and exciting manifestations of this concept of which you may not be aware. So let's go back to square one and talk a little about its concepts and its products.

First up is the idea of hardware virtualization, also called hypervisor-based virtualization. With hardware virtualization we make use of a thin layer of code, called a hypervisor, that resides between the physical resources on a server and the virtual machines (VMs) that make use of those resources. It's the responsibility of the hypervisor to manage and schedule VMs and their access to physical resources. Hypervisor-based virtualization solutions were among the earliest entrants into the field, and today they enjoy the largest market share of all the architectures.
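To make the layering concrete, here's a minimal Python sketch of the idea. The VM and Hypervisor classes and the round-robin time-slicing are purely illustrative stand-ins, not drawn from any real product:

    class VM:
        def __init__(self, name):
            self.name = name

        def run(self, milliseconds):
            # A real guest would execute instructions here; we just report.
            print(f"{self.name}: ran {milliseconds} ms on the physical CPU")

    class Hypervisor:
        """The thin layer between physical resources and the VMs above it."""

        def __init__(self, vms, timeslice_ms=10):
            self.vms = vms
            self.timeslice_ms = timeslice_ms

        def schedule(self, rounds=2):
            # Round-robin: each VM gets a slice of the one physical CPU in turn.
            for _ in range(rounds):
                for vm in self.vms:
                    vm.run(self.timeslice_ms)

    Hypervisor([VM("web01"), VM("db01"), VM("mail01")]).schedule()

Real hypervisors schedule far more than CPU time, of course -- memory, storage and network access all flow through that same thin layer.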

What's interesting about hardware virtualization is that even with its widespread distribution, it may not necessarily provide the highest performance. Virtual machines in native virtualization environments require the use of driver emulation. These emulated drivers are installed onto each residing VM such that the "hardware" composition of each VM in the environment is effectively the same. Translating requests from these emulated drivers to the physical drivers on the host consumes resources, which can reduce performance.
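A toy example may help show where that translation cost comes from. In this hypothetical Python sketch, every VM sees the same generic emulated network card, and each request must be converted into a call against the host's physical driver -- that extra hop is the overhead:

    class PhysicalNIC:
        """The host's real network driver."""

        def send(self, frame: bytes):
            print(f"physical NIC: sent {len(frame)} bytes")

    class EmulatedNIC:
        """The identical, generic 'hardware' every VM sees."""

        def __init__(self, host_nic: PhysicalNIC):
            self.host_nic = host_nic

        def send(self, frame: bytes):
            # This translation step is the overhead described above: every
            # guest request must be converted into a host-driver request.
            translated = bytes(frame)  # stand-in for real format conversion
            self.host_nic.send(translated)

    vm_nic = EmulatedNIC(PhysicalNIC())  # each VM gets the same emulated card
    vm_nic.send(b"\x00" * 1500)          # one Ethernet-sized frame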

A major benefit of this approach is that VMs in native virtualization environments are functionally equivalent from the perspective of hardware. This similarity makes them easier to manage than physical systems.

Paravirtualization
Paravirtualization is another architecture with a concept somewhat similar to hardware virtualization, though with a subtle twist. Paravirtualized machines operate as independent entities in the same way as with hardware virtualization. But with paravirtualization, residing VMs are actually "aware" that they've been virtualized. This is done through the use of paravirtualized device drivers rather than emulated drivers. Paravirtualization does not emulate hardware, but instead provides a set of application programming interfaces (APIs) for certain virtualization tasks. Because of this more streamlined approach and a reduction in the need to translate certain communication in and out of the VM, there's a potential for better VM performance.
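Continuing the hypothetical sketch from the previous section, a paravirtualized guest skips the emulated device entirely and calls a virtualization API directly. The class and method names here are again invented for illustration:

    class PhysicalNIC:
        def send(self, frame: bytes):
            print(f"physical NIC: sent {len(frame)} bytes")

    class HypervisorAPI:
        """Stand-in for the API set a paravirtualization layer exposes."""

        def __init__(self, host_nic: PhysicalNIC):
            self.host_nic = host_nic

        def hypercall_send(self, frame: bytes):
            # No emulated device in the middle: the guest hands its request
            # straight to the hypervisor, which uses the real driver.
            self.host_nic.send(frame)

    class ParavirtualizedGuest:
        def __init__(self, api: HypervisorAPI):
            self.api = api  # the guest "knows" it has been virtualized

        def send(self, frame: bytes):
            self.api.hypercall_send(frame)  # one hop fewer than emulation

    ParavirtualizedGuest(HypervisorAPI(PhysicalNIC())).send(b"\x00" * 1500)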

One of the early problems with paravirtualization was that the VM operating system itself had to be modified to support this awareness, which limited its use with proprietary OSes like Windows. Today, paravirtualization can be supported without OS modification through hardware extensions encoded into processor instruction sets. These extensions, such as Intel's VT and AMD's AMD-V, are now common in most new hardware.

OS Virtualization
Our third architecture is OS virtualization, sometimes referred to as partitioning. This way of abstracting resources works much differently than the other two, with a far greater level of resource sharing between VMs atop the same host. With OS virtualization, every VM on a host shares with that host any files common to both; only when differences occur are those files stored separately. The VMs also share the host's drivers, file system buffers and cache.

This high level of sharing, as well as the use of real drivers (as opposed to emulated or paravirtualized), results in very high performance, but at the cost of OS homogeneity. Each residing VM must be the same OS as the host, which limits its efficacy in heterogeneous environments.
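Here's a rough Python sketch of that sharing model, with simple dictionaries standing in for the host's real file system machinery:

    def read_file(vm_delta: dict, host_files: dict, path: str) -> str:
        """Resolve a file for one VM: its private copy first, then the
        copy shared with the host."""
        if path in vm_delta:       # the VM changed this file, so it owns a copy
            return vm_delta[path]
        return host_files[path]    # unchanged files are shared, not duplicated

    host_files = {"/etc/hosts": "127.0.0.1 localhost",
                  "/bin/sh": "<the host's shell binary>"}
    vm1_delta = {"/etc/hosts": "127.0.0.1 localhost vm1"}  # only the difference is stored

    print(read_file(vm1_delta, host_files, "/etc/hosts"))  # VM's own copy
    print(read_file(vm1_delta, host_files, "/bin/sh"))     # shared with the host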

[Sidebar: Mapping Virtualization Architectures to Products]

Application Virtualization
Last is the idea of application virtualization. Different from the other concepts discussed here, application virtualization doesn't deal with whole machines. It instead concerns itself with the abstraction of applications, encapsulating all the files, Registry keys, drivers and other configurations associated with an application into (usually) a single file. This abstraction means that applications are no longer so much "installed" to target machines as "copied" to them.

IT organizations benefit from application virtualization because the encapsulation process eliminates the need for repeated, complicated manual installations for each instance of an application. If a user requires an application, that application is automatically targeted to the user's machine. When the user's finished, the uninstall is as simple as deleting the associated file. Encapsulated apps are usually managed through a central service, which enables central control of licensing and distribution. Fully realized application virtualization solutions can nearly eliminate the need for technicians to physically visit desktops for software installations.
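Here's a simplified Python sketch of the encapsulation idea. The .apppkg extension and the package layout are invented for illustration; real products capture an application's files and Registry keys by monitoring an actual installation:

    import json, shutil, tempfile, zipfile
    from pathlib import Path

    def package_app(name: str, files: dict, registry: dict, out_dir: Path) -> Path:
        """Encapsulate an app's files and settings into one package file."""
        pkg = out_dir / f"{name}.apppkg"   # invented extension, for illustration
        with zipfile.ZipFile(pkg, "w") as z:
            for rel_path, content in files.items():
                z.writestr(f"files/{rel_path}", content)
            z.writestr("registry.json", json.dumps(registry))
        return pkg

    work = Path(tempfile.mkdtemp())
    pkg = package_app("editor", {"app.exe": "<binary goes here>"},
                      {r"HKCU\Software\Editor\Version": "1.0"}, work)

    desktop = work / "desktop01"
    desktop.mkdir()
    shutil.copy(pkg, desktop)        # "installing" is now just a copy
    (desktop / pkg.name).unlink()    # uninstalling is just deleting one file

Everything the app needs travels in that one file, which is what makes central control of distribution and licensing practical.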

The world's a big place, and in it are a lot of products; some will fit your environment better than others. Finding the virtualization solution that works best, delivers the greatest return on investment, and provides the performance and features you need is critical to ensuring success. In future columns, we'll talk about just these topics, as well as others of interest in the virtualization world.

About the Author

Greg Shields is Author Evangelist with PluralSight, and is a globally-recognized expert on systems management, virtualization, and cloud technologies. A multiple-year recipient of the Microsoft MVP, VMware vExpert, and Citrix CTP awards, Greg is a contributing editor for Redmond Magazine and Virtualization Review Magazine, and is a frequent speaker at IT conferences worldwide. Reach him on Twitter at @concentratedgreg.
