Take Five With Tom Fenton
5 Ways Virtualization Has Reinvented the Datacenter
The transformation has been staggering.
It's been said that Miles Davis, an American jazz musician, bandleader and composer, reinvented jazz music at least five times. Few musicians can claim such an illustrious career. In this sense, virtualization is similar to Miles Davis: it has reinvented the datacenter at least five times. Let's take a brief look at each.
- The x86 hypervisor. In 1998, VMware kicked off the virtualized datacenter when it developed a way to run multiple guest OSes on a single x86 host as virtual machines (VMs). Yes, other companies had virtualized computer resources before, but they had done so on big, expensive, proprietary servers, not on commodity x86 servers running Windows or Linux. This made server consolidation possible: many underutilized physical machines could be collapsed onto a single host.
- x86 hypervisor manager. Once a single server was able to run multiple OSes, the next logical step was to manage multiple virtualized servers as a single pool of resources. This, along with shared storage, allowed VMs to run on any physical host in a datacenter. It also allowed running VMs to be moved from one physical host to another. This paved the way for business continuity, as OSes no longer needed to be stopped to perform routine maintenance on physical hardware. It could be argued that x86 hypervisors like ESXi, Xen and Hyper-V, due to their business continuity features, actually made Windows and Linux OSes datacenter-capable platforms.
- Disaster management and mitigation. Once datacenters had the ability to move workloads from server to server within the datacenter, the next logical progression was to support the replication and movement of a datacenter from one geographical location to another. If a disaster affected one particular site, another site could easily take up the load and the business would continue to function. A few products that enable this process are VMware's Site Recovery Manager (SRM), Veeam Replication and Zerto.
- The Cloud. All the previous technologies allow the abstraction of the OS from a physical server, a physical server from a single cluster of physical servers, and a cluster of physical servers from a single geographical location. The cloud allows a further abstraction: a VM freed from any physical constraints. By running VMs in the cloud, a workload is totally decoupled from any physical attributes. Businesses do not own the hardware on which the VMs run or the physical location in which they reside; they simply purchase the ability to run a VM. The cloud allows for a secure, robust environment that's OpEx-based rather than CapEx-based. Amazon is the leader in this field, but Google, Microsoft, VMware, IBM and many others offer cloud solutions.
- Containers. Once we had abstracted the OS away from any physical constraints, what was left to do? Abstract the application away from the OS. By using containers, an application can run in a safe sandbox in an OS without affecting any other applications running on that particular OS. These sandboxes give applications a secure, robust and manageable home, without the overhead of running an entire OS per application. All the major players, along with some startups such as CoreOS, Docker and Rancher, are working on technology to assist with containers.
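To make the sandboxing idea above concrete, here is a minimal container definition in Docker's Dockerfile format; the base image, script name and paths are illustrative assumptions, not details from the article:

```dockerfile
# The application and a thin userland layer are packaged together;
# no full guest OS is booted, which is where the low overhead
# described above comes from. (alpine:3.19 is an assumed base image.)
FROM alpine:3.19

# Copy in a hypothetical application script (illustrative name).
COPY app.sh /app/app.sh
RUN chmod +x /app/app.sh

# The container's entry point; at runtime it is isolated from other
# applications on the same host while sharing the host's kernel.
CMD ["/app/app.sh"]
```

Built with `docker build` and started with `docker run`, the application runs in its own isolated environment, so it cannot affect other applications on the host, yet it shares the host kernel rather than booting an entire OS of its own.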
About the Author
Tom Fenton has a wealth of hands-on IT experience gained over the past 30 years in a variety of technologies, with the past 20 years focusing on virtualization and storage. He previously worked as a Technical Marketing Manager for ControlUp, and before that held Staff- and Senior-level positions at VMware. He has also worked as a Senior Validation Engineer with The Taneja Group, where he headed the Validation Service Lab and was instrumental in starting up its vSphere Virtual Volumes practice. He's on X @vDoppler.