Dan's Take
Virtualization: Much More Than Virtual Machines
Old ideas of virtualization still hold, even in 2016.
- By Dan Kusnetzky
- 01/04/2016
Here we are in 2016, and the industry is still having a discussion about what virtualization is. I thought we were long past that, and that we were starting to talk about how virtual environments could be more agile, more manageable and offer enterprises many benefits over hosting every function on physical hardware. I guess I was wrong.
In the final days of 2015, several of my clients pointed to statements made by representatives of certain software suppliers that essentially equated virtualization with the use of virtual machine (VM) software, or hypervisors, to create a more dynamic computing environment on industry-standard machines. They asked me to explain (for the gazillionth time) why I thought virtualization was a much larger topic. I believe that I won them over to a more expansive view. Do you still think that virtualization and the use of virtual machine software are the same thing?
What Is Virtualization, Anyway?
Virtualization is the use of excess machine capacity to create a logical, artificial environment that offers features, functions and capabilities beyond those offered by the underlying physical computing environment alone. The concept applies to many different areas of computing, including:
- How individuals interact with systems
- How applications interact with operating systems
- How operating systems interact with systems
- How systems interact with both networks and storage
- How the computing environment can be managed and made secure
There, I said it. While virtual machine software certainly fits in the "how operating systems interact with systems" category, it is only one part of how this concept can be applied to processing.
For the moment, I'll just focus on processing. We can look at the other areas of virtualization technology in future articles.
Breaking Down Processing Virtualization
As I pointed out in my book, Virtualization: A Manager's Guide (O'Reilly), there are several different types of processing virtualization in use in today's enterprise datacenters; they range from making one system seem like many to making many systems appear to be a single computing resource. Cloud service providers are, of course, using this technology as well.
Here's a snippet from my book that briefly examines each processing virtualization segment:
Making One System Appear To Be Many
Virtual machine software allows the entire stack of software that makes up a system to be encapsulated into a virtual machine file. A hypervisor can then run one or more complete virtual systems on a physical machine. There are two types of hypervisors: a type 1 hypervisor runs directly on the physical hardware, while a type 2 hypervisor allows guest systems to run as processes under a host operating system. Each guest system runs as if it has total control of its own machine, even though it may only be using a portion of the capabilities of a larger physical system.
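To make that concrete, here's a minimal sketch (my illustration, not any vendor's recommended approach) that asks a hypervisor which encapsulated guest systems it's managing. It assumes a Linux host running KVM/QEMU with the libvirt-python bindings installed:

```python
# A minimal sketch, assuming libvirt-python and a local QEMU/KVM hypervisor.
import libvirt

# Connect to the local hypervisor; qemu:///system is the usual URI for KVM/QEMU.
conn = libvirt.open("qemu:///system")

# Each domain is a complete, encapsulated virtual system the hypervisor manages.
for dom in conn.listAllDomains():
    state = "running" if dom.isActive() else "stopped"
    print(f"{dom.name()}: {state}, {dom.maxMemory() // 1024} MB configured")

conn.close()
```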
Operating system virtualization and partitioning allow many applications to run under a single operating system and give each a completely isolated, protected environment. Each application functions as if it is running on its own system and is managing its own resources.
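Today's container engines build on exactly this kind of kernel-level partitioning. As a rough illustration, assuming a Linux host with util-linux's unshare command and root privileges, the sketch below drops a process into its own PID namespace, where it believes it is the only thing on the system; real container engines add cgroups, isolated filesystems and network namespaces on top of this idea:

```python
# A minimal sketch of OS-level isolation using Linux namespaces via the
# util-linux `unshare` command (requires root).
import subprocess

# Run a shell in its own PID namespace with a private /proc mount:
# inside, it believes it is PID 1 and sees only its own processes.
subprocess.run([
    "unshare", "--fork", "--pid", "--mount-proc",
    "sh", "-c", 'echo "I am PID $$"; ps -e',
], check=True)
```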
Making Many Systems Appear To Be One
Parallel processing monitors make it possible for many machines to execute the same applications or application components, with the goal of reducing the processing time of the application. Each system is asked to process one segment of data or run a single application component; as it finishes its task, the parallel processing monitor passes it another task to complete. This approach allows applications to complete hundreds, thousands or perhaps even tens of thousands of times faster.
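Here's a minimal sketch of that pattern using nothing but Python's standard library; the workload and segment sizes are illustrative, and a real parallel processing monitor would spread the work across many machines rather than the cores of one:

```python
# A minimal sketch of the parallel-processing pattern: split the data into
# segments and hand each finished worker the next segment until all are done.
from concurrent.futures import ProcessPoolExecutor

def process_segment(segment):
    # Stand-in for real work on one slice of the data.
    return sum(x * x for x in segment)

if __name__ == "__main__":
    data = list(range(1_000_000))
    segments = [data[i:i + 50_000] for i in range(0, len(data), 50_000)]

    # The executor plays the monitor's role: as each worker finishes a
    # segment, it is passed another task to complete.
    with ProcessPoolExecutor() as pool:
        total = sum(pool.map(process_segment, segments))
    print(total)
```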
Workload management monitors (also known as load balancing monitors) make it possible for multiple instances of a single application to run simultaneously on many machines. As requests arrive, the workload monitor sends each one to the system with the most available capacity. While each instance of the application may run no faster than before, many more people can be served.
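A toy dispatcher, with made-up server names and capacities, shows the routing decision at the heart of this approach:

```python
# A minimal sketch of the workload-management idea: route each request to the
# server with the most available capacity. All names here are illustrative.
from dataclasses import dataclass

@dataclass
class Server:
    name: str
    capacity: int      # how many concurrent requests it can handle
    active: int = 0    # requests currently in flight

    @property
    def available(self) -> int:
        return self.capacity - self.active

def dispatch(servers: list[Server]) -> Server:
    # Pick the instance with the most headroom, as the monitor would.
    target = max(servers, key=lambda s: s.available)
    target.active += 1
    return target

pool = [Server("app-1", 10), Server("app-2", 20), Server("app-3", 10)]
for request in range(5):
    print(f"request {request} -> {dispatch(pool).name}")
```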
High availability/failover/disaster recovery monitors make it possible to protect the people using a computing service from an application, system or system component failure. The monitor detects a failure and restarts the application on a surviving system.
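Stripped to its essence, such a monitor is a health-check loop. The sketch below simulates one with stand-in probe and restart functions; real HA products also fence failed nodes, replicate state and guard against split-brain:

```python
# A minimal sketch of a failover monitor. The cluster state is simulated here;
# a real monitor would probe nodes over the network (heartbeats, TCP checks).
HEALTH = {"node-a": True, "node-b": True, "node-c": True}

def is_healthy(node: str) -> bool:
    return HEALTH[node]

def start_app(node: str) -> None:
    print(f"starting application on {node}")

def monitor(active: str, standbys: list[str]) -> str:
    if not is_healthy(active):
        # Failure detected: restart the application on a surviving system.
        active = next(n for n in standbys if is_healthy(n))
        start_app(active)
    return active

active = "node-a"
HEALTH["node-a"] = False                      # simulate a failure
active = monitor(active, ["node-b", "node-c"])
print(f"service now runs on {active}")
```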
Memory virtualization, or distributed cache memory, makes it possible for many systems to share their internal memory. This capability is at the heart of many of the non-relational databases collectively known as NoSQL databases.
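The core mechanism is simple enough to sketch: hash each key to decide which node's memory holds it. The node names below are illustrative, and real systems such as memcached add replication and rebalancing on top:

```python
# A minimal sketch of the distributed-cache idea: many nodes pool their RAM,
# and a hash of the key decides which node's memory holds each value.
import hashlib

class DistributedCache:
    def __init__(self, nodes: list[str]):
        # Each "node" dict stands in for another machine's memory.
        self.names = nodes
        self.nodes = {name: {} for name in nodes}

    def node_for(self, key: str) -> str:
        digest = hashlib.sha256(key.encode()).digest()
        return self.names[int.from_bytes(digest[:4], "big") % len(self.names)]

    def put(self, key: str, value) -> None:
        self.nodes[self.node_for(key)][key] = value

    def get(self, key: str):
        return self.nodes[self.node_for(key)].get(key)

cache = DistributedCache(["mem-1", "mem-2", "mem-3"])
cache.put("user:42", {"name": "Ada"})
print(cache.get("user:42"), "lives on", cache.node_for("user:42"))
```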
Dan's Take: Virtualization Isn't a One-Trick Pony
Virtualization technology is allowing the industry to make better use of hardware resources and to reconfigure computing environments on the fly to improve application performance, networking and storage utilization, and the overall security and manageability of computing environments. This technology is at the heart of cloud computing, the Internet of Things (IoT), software-defined storage, software-defined datacenters, and converged systems.
Restricting our thinking to the narrow view that virtualization is nothing more than the use of VM software makes it much harder to see all of the many ways this concept can be put to use.
Instead, we need to move on to talking about how to build and deploy IT workloads so that they are modular, easy to use, easy to manage, and can be hosted and re-hosted in the best place.
About the Author
Daniel Kusnetzky, a reformed software engineer and product manager, founded Kusnetzky Group LLC in 2006. He's literally written the book on virtualization and often comments on cloud computing, mobility and systems software. He has been a business unit manager at a hardware company and head of corporate marketing and strategy at a software company.