Hypervisors and Microvisors: A Primer
Understanding the distinctions is crucial.
Virtualization is no longer merely the realm of the hypervisor and its virtual machines. The microvisor and its containers are firmly a part of virtualization today, and this isn't going to change. The following is a quick primer on what virtualization administrators need to know.
First up: there is no "hypervisor vs. microvisor". It is not a war. These technologies are complementary, not competitive. To understand why, one must first understand the difference between the two technologies.
An OS-Free Zone
If you're reading Virtualization Review, you probably know what a hypervisor is and how it works. For brevity's sake, I'm going to skip that. So where does the microvisor differ? The first thing to note is that, as a general rule, containers don't have an operating system inside them.
Like a hypervisor, the microvisor runs as part of an operating system. Also like a hypervisor, one typically doesn't run workloads on the parent instance. Unlike a hypervisor, only part of the parent instance's resources are virtualized, and which parts get this treatment is configurable.
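To make that configurability concrete, here's a minimal sketch using Linux namespaces, the kernel facility most microvisors build on. It virtualizes exactly one resource, the hostname, and shares everything else with the parent instance. This assumes Linux, root privileges and Python 3.12 or newer (for os.unshare); it's an illustration, not any particular microvisor's implementation.

import os
import socket

pid = os.fork()
if pid == 0:
    # Child: virtualize only the hostname (UTS namespace). The
    # filesystem, network and process table stay shared with the
    # parent instance.
    os.unshare(os.CLONE_NEWUTS)
    socket.sethostname("container-1")
    print("child sees:", socket.gethostname())
    os._exit(0)

os.waitpid(pid, 0)
# The parent's hostname is untouched; only the child's copy changed.
print("parent sees:", socket.gethostname())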
The goal of a hypervisor is to provide complete isolation between guest instances. Each operating system is its own world. If you want to run instances of Windows, Linux, BSD and DR-DOS on top of a VMware hypervisor, you can do that. Each guest has its own kernel, its own RAM address space, its own completely isolated disk... the works.
Strategic Sharing
Microvisors are instead the art of strategic sharing. The parent instance has a kernel and a filesystem with basic tools. If you were going to install an application on that parent instance, that application would deposit files and configurations in various places. Executing the application would spawn a process, which would grab some RAM, have its threads managed by the operating system and so on.
Install another application and there are more files in different places. The different applications might be able to read one another's files, or interfere with one another's threads. The more applications, the messier it becomes.
Containers lie to applications. They present the application with a version of the parent instance's filesystem, but one in which none of the applications living in other containers are visible. Similarly, the application in the container can't see threads, network traffic, etc. from other containers.
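Here's a hedged sketch of that filesystem lie, with the same Linux/root/Python 3.12+ assumptions as above. The directory /srv/app1-root is hypothetical, standing in for a container's pre-populated root; production microvisors use pivot_root and layered mounts rather than a bare chroot, but the effect on the application is the same.

import os

pid = os.fork()
if pid == 0:
    # Child: private mount table, then a new root. Anything outside
    # /srv/app1-root -- including other containers' files -- is now
    # invisible to this process and its descendants.
    os.unshare(os.CLONE_NEWNS)
    os.chroot("/srv/app1-root")
    os.chdir("/")
    print("container sees at /:", os.listdir("/"))
    os._exit(0)

os.waitpid(pid, 0)
print("parent still sees at /:", os.listdir("/"))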
Multiple applications running in multiple containers still share a kernel. Changes made to the parent instance still affect all applications in all containers. It's just that the different applications can't talk to one another. Unless you let them. Microvisors can be quite configurable as to just how much isolation occurs, and more modern ones allow different levels of isolation for different containers.
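As a sketch of those differing isolation levels: below, one child process gets a private network stack and so sees only a loopback device, while its sibling keeps the host's full networking. Same assumptions as before, plus the iproute2 "ip" tool on the host.

import os

def show_links(label, flags):
    pid = os.fork()
    if pid == 0:
        print("---", label, flush=True)
        if flags:
            os.unshare(flags)
        # iproute2's "ip -brief link" lists the network devices this
        # process can see.
        os.execvp("ip", ["ip", "-brief", "link"])
    os.waitpid(pid, 0)

show_links("isolated container", os.CLONE_NEWNET)  # sees only "lo"
show_links("shared container", 0)                  # sees every host NIC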
Some microvisors allow individual containers to have filesystem overrides that, for example, let a container "install" specific libraries at specific virtual filesystem locations that would normally be outside its container space. An update to the parent instance might then update libraries for some containers, but not for others.
This can be useful. For example, it can ensure specific applications have the exact versions of the libraries they need to operate without holding up patching on the rest of the system (and thus the rest of the hosted containers).
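Here's roughly what such an override looks like under the hood, sketched as a bind mount made through libc (Python ships no mount wrapper of its own). Both paths are hypothetical: /opt/openssl-1.0 stands in for the pinned library version, /srv/app1-root for the container's root filesystem. Linux and root privileges assumed.

import ctypes
import os

libc = ctypes.CDLL("libc.so.6", use_errno=True)
MS_BIND = 4096  # from <sys/mount.h>

# Overlay the pinned libraries onto this one container's view of
# /usr/lib/ssl; the parent instance and every other container keep
# whatever version they already had.
if libc.mount(b"/opt/openssl-1.0", b"/srv/app1-root/usr/lib/ssl",
              None, MS_BIND, None) != 0:
    errno = ctypes.get_errno()
    raise OSError(errno, os.strerror(errno))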
The Risks
It can also go horribly, horribly wrong. For example, a developer might package vulnerable SSH libraries into their container, and those libraries won't get updated when the parent instance is patched.
Hypervisors and microvisors do different things, and they do them in different ways. Microvisors allow more applications in less RAM, though if you install a microvisor on bare metal, you get only one kernel for all applications. A hypervisor running a handful of VMs with a microvisor inside each VM, however, can still achieve very high application densities while allowing a diversity of operating systems and/or kernels on a single machine.
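Some back-of-the-envelope arithmetic makes the density argument concrete. The per-guest overhead figures below are assumptions for illustration, not benchmarks, so treat the exact numbers as hand-waving.

host_ram_gb = 64               # assumed host capacity
vm_overhead_gb = 1.0           # assumed: a full guest OS per VM
container_overhead_gb = 0.05   # assumed: per-container bookkeeping
app_ram_gb = 0.5               # the application itself, either way

print(int(host_ram_gb / (vm_overhead_gb + app_ram_gb)), "apps, one VM each")          # 42
print(int(host_ram_gb / (container_overhead_gb + app_ram_gb)), "apps as containers")  # 116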
About the Author
Trevor Pott is a full-time nerd from Edmonton, Alberta, Canada. He splits his time between systems administration, technology writing, and consulting. As a consultant he helps Silicon Valley startups better understand systems administrators and how to sell to them.