The Evolution of VMware's vMotion
How it started, where it is, where it's going.
The defining feature that elevated VMware from being just about server consolidation to the realm of business continuity was vMotion. With vMotion, a datacenter no longer needed to suffer an application outage in order to replace or update its infrastructure components.
Instead, vMotion allows applications running on one server, or stored on one storage array, to be seamlessly migrated to another server or storage device without incurring expensive downtime. vMotion works even when these components are hundreds, or even thousands, of miles apart.
But vMotion technology as we know it today didn't suddenly appear in the datacenter; it evolved over time and over a number of releases (see Figure 1). Let's take a brief look at the history and evolution of vMotion.
By 2003, VMware had captured the hearts, minds and wallets of IT professionals with a simple concept: they could save datacenters wads of cash by consolidating many physical servers onto a single physical server. Not only did this consolidation cut down on the expense of purchasing physical servers, it also decreased infrastructure costs (electricity, heating, cooling, maintenance agreements, floor space and so on) and administrative costs.
VMware could have existed and survived solely based on this simple concept of consolidation; but the next level was achieved when it implemented its first business continuity feature, vMotion.
The First VMotion
The first iteration of VMotion (yes, it originally had a capital V) was released with VMware VirtualCenter 1.0 in 2003. Everyone was simply blown away by the fact that you could migrate a virtual machine (VM) running an application from one ESX host to another without suffering any downtime.
The earliest demonstrations showed 3D Pinball for Windows running. The test was whether or not you could tell when the VM migrated from one server to the other while playing the game. The answer was that you couldn't. It was amazing.
Another selling point was the ease with which this migration occurred. To vMotion a VM, you simply right-clicked the VM and chose "Migrate" to initiate the process. It's still just as simple; Figure 2 shows the vCenter Web client being used to migrate a VM from one host to another.
Storage vMotion
Four years later, at VMworld 2007, VMware co-founder Mendel Rosenblum demonstrated how it was possible to move a running VM from one storage device to another, regardless of the underlying storage protocol. This liberated administrators from being tied to a single storage device, technology or vendor.
Although this didn't have the same initial "wow" factor as moving a VM from server to server, the flexibility further abstracted the operating system away from the underlying hardware. Storage vMotion was officially released with VMware Infrastructure 3.5 (which consisted of VMware ESX 3.5 and VirtualCenter 2.5) in 2008.
Multiple NIC vMotion
The next two major changes to vMotion arrived in vSphere 5.0: support for vMotion over multiple NICs, and Stun During Page Send (SDPS). These changes weren't as visible or as widely discussed in the community, but they were still important. Multi-NIC support sped up migrations by spreading the vMotion traffic across several NICs. SDPS slows down a VM's vCPUs during a vMotion, making extremely active VMs slightly less active during the final stages of the migration so the move can complete successfully.
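The idea behind SDPS can be sketched in a few lines: a live migration repeatedly pre-copies memory while the guest keeps running and re-dirtying pages, and if the dirty rate outpaces what the network can transfer, the vCPUs are slowed so the copy can converge. The following is a purely illustrative simulation with made-up page counts and rates, not VMware's actual implementation:

```python
# Illustrative simulation of iterative pre-copy live migration with an
# SDPS-style vCPU throttle. All names, rates and page counts here are
# hypothetical; this is a conceptual sketch, not VMware's code.

def live_migrate(total_pages, dirty_rate, transfer_rate, max_rounds=30):
    """Simulate pre-copy rounds; return (rounds taken, final dirty set size).

    dirty_rate: pages the guest re-dirties per round
    transfer_rate: pages the network can copy per round
    """
    remaining = total_pages
    for round_no in range(1, max_rounds + 1):
        # Copy as many dirty pages as the network allows this round.
        copied = min(remaining, transfer_rate)
        remaining -= copied
        # The guest keeps running and re-dirties pages during the copy.
        remaining += dirty_rate
        # SDPS-style throttle: if dirtying outpaces the network, slow
        # the vCPUs so the remaining dirty set can shrink.
        if dirty_rate >= transfer_rate:
            dirty_rate = transfer_rate // 2
        # Once the dirty set fits in one round, briefly stun the VM,
        # copy the rest and switch over.
        if remaining <= transfer_rate:
            return round_no, remaining
    return max_rounds, remaining
```

Without the throttle, a VM that dirties memory faster than the network can copy it would never converge; the throttle trades a little guest performance for a successful switchover, which is exactly the trade-off SDPS makes.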
One of the limitations of vMotion at this point was that the VM being transferred needed to reside on shared storage, and both hosts needed access to that storage. With the release of vSphere 5.1, vMotion without shared storage became available. Figure 3 shows the current wizard that allows the migration of both a VM's compute and storage.
Administrators were getting comfortable with vMotion and were starting to do some very interesting things with it in the datacenter. With vSphere 5.5, vMotion broke out of the datacenter and into a larger arena. In 2011, EMC announced that its VPLEX Metro product had been qualified to support vMotion over twice the previously possible distance. A VM could now be transferred approximately 125 miles, as long as the round-trip time (RTT) was less than 10 ms.
This made it possible for a datacenter in one city to move its mission-critical application workloads to another city 125 miles away without disruption.
Really Long Distance vMotion
A vMotion distance of 125 miles was quite remarkable. But VMware customers demanded more, and in 2015 they got it in spades with vSphere 6.0 and its Long Distance vMotion feature; by long distance, VMware meant a supported RTT of up to 150 ms.
That allows vMotion from the west coast of the U.S. to the east coast, as the RTT from San Francisco to New York is 99 ms. It's highly unlikely that the vSphere infrastructure in such distant locations would be identical, so VMware also made it possible to vMotion between vCenter Servers and across vSwitches. It also allowed the transfer to take place over L3 networks.
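The latency limits above boil down to a simple feasibility check: measure a link's RTT and compare it against the supported ceiling. Here's a small, hypothetical Python helper that does exactly that; the link names and RTT values are illustrative (only the 99 ms San Francisco-to-New York figure comes from the article, and the 10 ms/150 ms ceilings from the limits discussed previously):

```python
# Hypothetical check of which site pairs fall within a vMotion RTT limit:
# 10 ms for the earlier metro-distance support, 150 ms for vSphere 6.0
# Long Distance vMotion. Link names and most RTT values are made up.

LINKS_RTT_MS = {
    ("San Francisco", "New York"): 99,   # figure cited in the article
    ("New York", "Boston"): 8,           # illustrative value
    ("San Francisco", "London"): 140,    # illustrative value
}

def feasible_links(limit_ms):
    """Return the site pairs whose measured RTT fits within limit_ms."""
    return [pair for pair, rtt in LINKS_RTT_MS.items() if rtt <= limit_ms]

print(feasible_links(10))    # only the short metro hop qualifies
print(feasible_links(150))   # all three fit under Long Distance vMotion
```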
The Future of vMotion
VMware will continue to make evolutionary changes to make vMotion more performant, reliable and useful in the datacenter. But what will be the next huge obstacle for vMotion to tear down? At VMworld 2015, VMware demonstrated vMotion to the cloud, which it referred to as "Project Skyscraper."
VMware said its goal was to let customers "extend their datacenter to the public cloud and vice versa by seamlessly operating across boundaries while providing enterprise-level security and business continuity." Now that we have vMotion from practically any datacenter to any other, I'd predict that vMotion to the public cloud will be the next barrier to fall.
vMotion has evolved over the last decade to be an indispensable feature in a system administrator's toolbox. I'm not sure anyone could have guessed 10 years ago, while watching a VM running Windows Pinball being transferred from one server to another, what a huge impact it would have on the datacenter. It freed IT professionals from the constraints of physical hardware by allowing a VM to be migrated to the best hardware for its use. Here's to the continued evolution of an already mind-blowing technology.
Tom Fenton has a wealth of hands-on IT experience gained over the past 30 years in a variety of technologies, with the past 20 years focusing on virtualization and storage. He currently works as a Technical Marketing Manager for ControlUp. He previously worked at VMware in Staff and Senior level positions. He has also worked as a Senior Validation Engineer with The Taneja Group, where he headed the Validation Service Lab and was instrumental in starting up its vSphere Virtual Volumes practice. He's on X @vDoppler.