In-Depth

What's Next for Virtualization?

Virtualization is evolving into a ubiquitous presence that will permeate technology infrastructures both large and small.

We've come a long way since the birth of x86 virtualization. In 10 short years, virtualization has transformed the way workloads are managed in the data center. So what's next? Where's virtualization heading? What will our data centers look like 10 years from now? Will we finally arrive in the futuristic realm of "mist computing," where ubiquitous virtualization ushers in a new age of pervasive compute resources and advanced workload optimization? What exactly will it mean to move into the mist?

The Birth of VDI
By 2006, virtualization had arrived, and the sales and marketing engines at VMware Inc. were hungry for expansion into new domains. Some forward-looking financial companies alerted VMware that they were using server virtualization technology to run Windows XP desktops for various purposes, including disaster recovery for traders and the elimination of multiple PCs under the desks of various workers. As a result, Virtual Desktop Infrastructure (VDI) was born. VDI continues to gain ground, maturing and capturing market interest.

Not yet a mainstream strategy, VDI is poised to become a popular and efficient means of serving diverse desktop users. Initially targeted at niche use cases such as pandemic planning and offshore development, VDI has branched out and is now being considered by many companies as a primary enterprise desktop strategy.

Note the similarities in the maturity curves of server and desktop virtualization. Both have pulled away from the initial market perception that they were risky, special-purpose and too costly to be mainstream. The abstraction of workloads from their underlying hardware constraints has proven more successful than many expected, and virtualization continues to move toward the center of industry norms and practices. Ever seeking new domains, hypervisor technology is also appearing on mobile devices through efforts such as the VMware Mobile Virtualization Platform, which has demonstrated the Google Android OS running side-by-side with Windows Mobile.

Amid these many advances and feature improvements, virtualization is starting to actively reform and revise the ways in which data centers serve virtual workloads. Because virtualization initially targeted server consolidation, traditional centralized storage was accepted as the proper way to service hypervisors and their workloads. Those assumptions carried over into VDI reference architectures, with negative consequences for VDI cost models. The reasons typically given for why virtual machines (VMs) must rely on scarce resources like storage area networks (SANs) or network-attached storage (NAS) to deliver I/O include:

  • Data is only safe on centralized storage
  • There is nowhere else to put data
  • Nothing but SAN/NAS can deliver needed performance
  • Only SAN/NAS have the advanced features needed to support virtualization

In short, all of these assumptions contain partial truths but are ultimately false. They reflect the reality that virtualization was meshed onto existing data center architectures rather than data centers being optimized for the needs of virtualized workloads. Emerging virtualization models will facilitate new storage architectures optimized for virtual workloads.

As virtualization continues to make inroads into both data center and edge infrastructures, we will see infrastructures adapt to the specific needs of virtualization rather than simply layering virtualization on top of existing legacy infrastructure. This transformation will manifest at every layer of existing infrastructure, and it's already underway. To illustrate this further, consider the following areas of traditional infrastructure:

  • CPU/memory designs
  • Storage/cache designs
  • Server identity management
  • Networking and I/O management
  • OS changes
  • Workload placement

Each of these infrastructure design pillars already shows signs of adaptation to the specific needs of virtualization. Whereas early-stage virtualization simply attached itself to legacy infrastructures, the major changes ahead will be intertwined with an infrastructure upheaval driven directly by the evolution of virtualization technologies. Figure 1 represents past and future virtualization milestones. Dates are approximate, and milestones are based on educated predictions.


Figure 1. Virtualization Milestones

Hypervisor Types
Initial virtualization efforts are described as type-2 hypervisors -- where the virtualized workload sits above an existing OS. Microsoft Virtual PC and VMware Server are examples of this architecture. While this model continues to show value in particular cases, hypervisor design quickly moved to a type-1 model, where the hypervisor itself sits directly on the hardware and all guest OSes access hardware through the hypervisor. VMware ESX broke new ground for x86 workloads by ushering in this type-1 design as "enterprise class," providing superior performance and reliability. Key to the success of early ESX versions was the ability to trick the underlying CPUs by intercepting privileged "Ring 0" instructions from guest OSes and performing the code substitution -- binary translation -- needed to maintain the integrity of CPU/hypervisor communication. Although effective, this design adds to virtualization overhead.

Both Intel Corp. and AMD Inc. responded to the spread of virtualization by introducing new CPU features that made Ring 0 translation unnecessary. Today, Intel VT and AMD-V allow hypervisors to be more efficient, removing the compute-heavy components of workload management and leaving hypervisors to manage workloads without in-path code translation. Memory densities have advanced significantly, in large part to accommodate memory-hungry hypervisors, which are the chief consumers of memory-dense hardware platforms. The Intel Nehalem architecture improved memory density as a key objective, and innovations like Cisco Unified Computing System (UCS) memory multiplexing extend such gains with advanced memory controllers. Additional processor and memory changes related to virtualization are in the works, but hardware is already adapting to virtualization's pervasive presence.
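
Intel VT and AMD-V announce themselves through CPU feature flags, so a host can confirm hardware assist before loading a type-1 hypervisor. Here's a minimal sketch for a Linux host, reading /proc/cpuinfo; the "vmx" flag indicates Intel VT-x and "svm" indicates AMD-V:

```python
# Minimal sketch: detect hardware virtualization support on a Linux host by
# scanning CPU feature flags. The "vmx" flag indicates Intel VT-x; "svm"
# indicates AMD-V. Without either, a hypervisor must fall back to techniques
# like binary translation.

def detect_virt_extensions(cpuinfo_path: str = "/proc/cpuinfo"):
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                flags = set(line.split(":", 1)[1].split())
                if "vmx" in flags:
                    return "Intel VT"
                if "svm" in flags:
                    return "AMD-V"
                return None
    return None

if __name__ == "__main__":
    print(detect_virt_extensions() or "no hardware virtualization extensions")
```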

Extending Storage
Mainstream storage designs had been organized chiefly around direct-attached storage (DAS) and, since the mid-1990s, around SAN and NAS. Pre-virtualization, SAN and NAS were the only effective means of aggregating diverse storage resources for consumption by an ever-expanding physical server base. This centralized storage paradigm has eclipsed DAS within the data center, and to date virtualization has largely layered itself onto the SAN/NAS method of organizing and distributing I/O. It should be noted, however, that SAN/NAS architecture arose without consideration for virtualization's specific needs.

As with other infrastructure layers like CPU/memory, storage will similarly reorganize and optimize itself around virtualization's pervasive influence. An early-stage trend illustrating this transformation is the emergence of distributed virtual NAS architecture (DVNA). By leveraging a hypervisor's ability to host many VMs, it's possible to place a VM-based NAS atop the hypervisor -- or even a grid-based storage cluster spanning a hypervisor cluster -- fully co-resident with the guest OSes to which the storage is served. In this architecture, the hypervisor cluster serves storage I/O to its own consumers from within its own resources. Use cases suited to this strategy include the distribution of repetitive, low-risk datasets such as shared-image OSes. While reference data may still be pulled from central storage, the primary I/O distribution workload is driven by VMs, which serve datastores back to the hypervisor clusters on which they reside. Interesting local disk-based strategies are also possible within this model. Figure 2 depicts this strategy.


Figure 2. Distributed Storage Design

The low latency of network I/O across the hypervisor's virtual switch yields excellent performance between the VM-based NAS appliances and the VMs they serve. At most one network hop is needed to serve storage I/O to VMs elsewhere in the same physical rack or hypervisor cluster. By offloading overburdened centralized storage -- and often improving performance in the process -- DVNA is optimized for the needs and capabilities of virtualized workloads. Because there's no need to provision storage at the outset of a virtualization project, DVNA allows I/O operations per second (IOPS) and capacity to scale out in tandem with the project's expansion. The error-prone estimation of storage requirements at the start of a project disappears, and the dual risks of excessive upfront capital expenditure or degraded performance during ramp-up are avoided.
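
The provisioning arithmetic behind that claim is simple: each node added to the cluster contributes local capacity and IOPS, so aggregate storage grows in lockstep with the rollout. A toy model (the per-node figures below are illustrative assumptions, not benchmarks):

```python
# Toy model of DVNA scale-out: each hypervisor node added to the cluster
# contributes local capacity and IOPS, so storage grows with the project
# instead of being sized (and paid for) upfront. Per-node figures are
# illustrative assumptions, not benchmarks.

PER_NODE_IOPS = 2_000        # assumed IOPS from a node's local disks
PER_NODE_CAPACITY_GB = 500   # assumed usable local capacity per node
REPLICATION_FACTOR = 2       # assume each block is kept on two nodes

def cluster_storage(nodes: int) -> dict:
    return {
        "aggregate_iops": nodes * PER_NODE_IOPS,
        "usable_capacity_gb": nodes * PER_NODE_CAPACITY_GB // REPLICATION_FACTOR,
    }

for n in (4, 8, 16):             # capacity and IOPS scale with the rollout
    print(f"{n:>2} nodes -> {cluster_storage(n)}")
```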

Similarly, a rethinking of storage caching strategies goes hand-in-hand with the spread of virtualization. Traditionally, the cache resides in front of centralized storage and functions as the primary accelerator of I/O in the data center. As I/O distribution moves upstream into the hypervisor, the cache will become more distributed, delivering accelerated I/O from the memory of VM-based NAS appliances. DVNA moves the cache closer to the data consumers, locating the entire I/O supply chain within the hypervisors. Slowly but surely, the role of centralized storage will shift from today's primary I/O workhorse to primary guardian of centralized data. Security, business continuity, disaster recovery and enterprise functionality will hold center stage, while distribution of I/O moves upstream and becomes inseparable from the clusters of machines hosting virtualized workloads. A transition to storage architectures that fully mesh with virtualization is unavoidable. Cost reductions and performance efficiencies will motivate storage infrastructure changes that move legacy SAN/NAS architectures toward new virtualization-driven designs.
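
Mechanically, the appliance's RAM behaves like any read-through cache, just relocated one virtual-switch hop from its consumers. A minimal sketch of the idea (the names and sizes are illustrative):

```python
from collections import OrderedDict

class ReadThroughCache:
    """Toy LRU read-through cache, standing in for the RAM of a VM-based
    NAS appliance that fronts slower backing storage."""

    def __init__(self, backing_read, capacity: int = 1024):
        self.backing_read = backing_read   # function: block_id -> bytes
        self.capacity = capacity           # max cached blocks
        self.cache = OrderedDict()

    def read(self, block_id) -> bytes:
        if block_id in self.cache:
            self.cache.move_to_end(block_id)   # hit: served from appliance RAM
            return self.cache[block_id]
        data = self.backing_read(block_id)     # miss: fetch from central storage
        self.cache[block_id] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)     # evict least recently used block
        return data
```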

The Depersonalization of Virtualization
The march of virtualization makes itself felt in the way we understand physical servers. Traditionally, servers were perceived as dedicated resources for particular applications and functions. Virtualization brought a new vision of data center servers -- one in which servers are generic managers of workload capsules that float freely among a large pool of physical servers. As Moore's law hit the limits of silicon clock speeds, CPU evolution moved in the direction of multi-core expansion. We stand poised to see 8- and 12-core CPUs soon, and if GPU development is any indicator, CPUs with hundreds of cores are not too far off. Perhaps more than any other application, virtualization is uniquely equipped to take advantage of this move toward multi-core CPUs. The inherent parallelism of multitenant virtualization fuels and extends the trend toward multi-core CPU development.

The anonymity of the physical server brought about by virtualization is steering the data center toward layers of abstraction in the management of physical server identity. Just as VMs can be dynamically assigned to underlying servers, a physical server can be separated from its logical identity. Cisco UCS allows the seamless migration of server identities between physical servers. Identifying characteristics -- such as the number of host bus adapters and network interface cards (HBAs/NICs), along with specific worldwide names (WWNs) and MAC addresses -- can now be instantiated as a mobile logical identity that migrates between physical servers at will. This abstraction between physical hardware and logical identity enables spare servers to quickly take on the identity of any failed server within the data center, so N+1 pools of servers can be serviced by common spares, lowering the overhead of maintaining resilient clusters. Post-virtualization, this mobile identity management strips away the remaining elements that comprise the personality of a physical server, extending and completing the depersonalization begun by virtualization.
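
Conceptually, such an identity is just a small, portable data structure. The sketch below is a hypothetical model of the idea -- it is not the UCS API -- showing an identity being re-bound from one blade to a spare:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ServerIdentity:
    """Hypothetical model of a mobile logical server identity, in the
    spirit of a UCS service profile. Not the real UCS API."""
    name: str
    macs: List[str] = field(default_factory=list)   # NIC MAC addresses
    wwns: List[str] = field(default_factory=list)   # HBA worldwide names
    bound_blade: Optional[str] = None               # current physical home

    def bind(self, blade_id: str) -> None:
        # Stamp the logical identity onto a physical blade; on hardware
        # failure, the same identity can be re-bound to a spare.
        self.bound_blade = blade_id

web01 = ServerIdentity("web01",
                       macs=["00:25:b5:00:00:01"],
                       wwns=["20:00:00:25:b5:00:00:01"])
web01.bind("chassis-1/blade-3")
web01.bind("chassis-2/blade-5")  # failover: identity moves, hardware stays anonymous
```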

The freedom from vendor lock-in and the general workload mobility brought about by virtualization are now being extended to and absorbed by all aspects of supporting infrastructure. The multiplexing of workloads through common network pipes and the advent of fully virtual switches have spurred the move to extend network management deeper into the virtual infrastructure. The Cisco Nexus 1000V virtual switch brings the full functionality of data center switching to the hypervisor's software-based virtual switch. Network admins now have a level of visibility into the VM that mimics what they're accustomed to in the physical domain. Cisco UCS and other technologies like HP Flex-10 and the Xsigo I/O Director allow granular control over quality of service (QoS) between competing traffic streams within common trunks. The ability to extend QoS policies all the way down to individual VMs is the key to implementing service-level-agreement-driven performance management. As IT transforms itself into service-based architectures, virtualization-driven enhancements to the underlying infrastructure fuel and facilitate the realization of governance, granular control and Information Technology Infrastructure Library (ITIL) compliance.
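
Enforcing a per-VM rate typically comes down to policing or shaping each traffic stream, and the token bucket is the classic algorithm for the job. A minimal sketch (the rates are illustrative):

```python
import time

class TokenBucket:
    """Classic token-bucket policer: the kind of mechanism a virtual switch
    can apply to an individual VM's port to enforce a QoS rate."""

    def __init__(self, rate_bps: float, burst_bytes: float):
        self.rate = rate_bps / 8.0     # refill rate, bytes per second
        self.burst = burst_bytes       # bucket depth (max accumulated credit)
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, packet_bytes: int) -> bool:
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True                # packet conforms to the VM's rate
        return False                   # over the rate: drop or queue it

# Example: cap one VM's stream at 100 Mbit/s with a 64 KB burst allowance.
vm_policer = TokenBucket(rate_bps=100e6, burst_bytes=64 * 1024)
```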

Forming Functionality
Virtualization's presence is felt further up the technology stack, even inside OS design and functionality. Guest tools installed inside the OS already add functionality and performance tied to the underlying hypervisor; examples include time synchronization, better memory management, and enhanced NIC and video performance. Tighter integration is on the way with new features like dynamic memory sizing and resizing, improved paging and better multiprocessing performance. It's likely the future will bring hypervisor management of guest memory sizing -- a set-it-and-forget-it model in which memory is never over- or under-provisioned to the guest OS.
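
One way to picture hypervisor-managed guest memory is as a simple control loop: measure each guest's demand, then compute how much idle memory to reclaim (balloon) from each. The sketch below is a toy illustration of that idea, not any vendor's algorithm:

```python
# Toy balloon-target calculator illustrating hypervisor-managed guest memory.
# Given each guest's configured memory and current demand, compute how much
# idle memory to reclaim. A control-loop sketch only, not a vendor algorithm.

def balloon_targets(guests: dict, reserve_fraction: float = 0.1) -> dict:
    """guests maps name -> {"configured_mb": int, "demand_mb": int}.
    Returns MB to reclaim from each guest, keeping a headroom reserve."""
    targets = {}
    for name, g in guests.items():
        headroom = int(g["configured_mb"] * reserve_fraction)
        idle = g["configured_mb"] - g["demand_mb"] - headroom
        targets[name] = max(0, idle)   # never balloon below demand + reserve
    return targets

print(balloon_targets({
    "db01":  {"configured_mb": 8192, "demand_mb": 7000},  # busy: little to give
    "web01": {"configured_mb": 4096, "demand_mb": 1024},  # idle: big balloon
}))
# -> {'db01': 373, 'web01': 2663}
```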

Today, the hypervisor brings fault-tolerance functionality to the VM, a capability previously associated with hardware or proprietary functionality inside the OS. Anti-virus and backup functions that previously ran within the OS will be abstracted away and replaced by dedicated virtual appliances that inspect memory and file systems through hypervisor mechanisms, freeing the guest OS to service applications. The lines between OS and hypervisor will continue to blur as ultra-thin OSes emerge that are designed to place the application workload as close to the hypervisor as possible and eliminate traditional OS licenses; Google Chrome OS and ultra-thin Linux variants can perform these functions. The VMware acquisition of SpringSource signaled a commitment to aid the seamless migration of apps from the developer's desktop to virtual infrastructure and into the cloud.

The Data Center and Beyond
We've looked at many ways in which the pervasive influence of virtualization finds expression in infrastructure transformations that support and deepen the paradigm shift in workload management. Some of the greatest changes the future holds for virtualization, though, relate to physical location. Today, virtualization is almost synonymous with the data center. As the birthplace of enterprise-grade virtualization, the data center still tightly binds leading platforms to legacy infrastructure and assumptions. However, the very mobility and agility of virtualization beg for expansion beyond the confines of the data center and out into the wild, so to speak. The attributes that have aided virtualization within the data center stand ready to create even more radical transformations in how compute workloads are managed once the data center becomes just one location in a portfolio of virtualization venues.

Hosted virtualization has been with us since the beginning in products like VMware Workstation and Microsoft Virtual PC, which allow select workloads to be virtualized on top of existing OSes. The next major event in this dispersion of virtualization will be the launch of client-side hypervisors. By bringing the benefits of a true type-1 hypervisor to the PC, they will make it possible, from within a single management framework, to manage virtual desktops with an agility unmatched even by virtualized server workloads. Familiar benefits such as common virtual hardware, improved provisioning and lifecycle management will reinvigorate endpoint computing.

State-of-the-art desktop strategies will be hybrid, incorporating both data center and endpoint placement in real-time workload decisions. The OS and its attendant applications will be decoupled, allowing applications to be placed at the edge, near the edge, in the local data center or in cloud resources. Local resource availability, application profiles and infrastructure "weather" conditions will all contribute to placement decisions. The inherent mobility provided by virtualization will reach a new zenith of efficiency, optimizing workload placement across a broader geographical domain and against more complex criteria.
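
Reduced to code, such a placement decision is a scoring problem: weight each criterion, score each candidate location, pick the best. The following sketch is hypothetical -- the criteria, weights and sample values are invented for illustration:

```python
# Hypothetical workload-placement scorer. The criteria, weights and sample
# values are all invented for illustration; a real broker would feed in live
# telemetry ("infrastructure weather") instead.

WEIGHTS = {
    "latency_ms": -1.0,        # lower latency is better
    "cost_per_hour": -50.0,    # cheaper is better
    "free_capacity_pct": 0.2,  # headroom is good
}

def score(candidate: dict) -> float:
    return sum(weight * candidate[key] for key, weight in WEIGHTS.items())

candidates = {
    "endpoint":   {"latency_ms": 1,  "cost_per_hour": 0.00, "free_capacity_pct": 20},
    "datacenter": {"latency_ms": 15, "cost_per_hour": 0.08, "free_capacity_pct": 40},
    "cloud":      {"latency_ms": 60, "cost_per_hour": 0.12, "free_capacity_pct": 90},
}

best = max(candidates, key=lambda name: score(candidates[name]))
print("place workload at:", best)   # endpoint wins for this latency-sensitive mix
```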

As Moore's law drives more compute power into smaller footprints, machines the size of today's cell phones will be capable of hosting x86 workloads equivalent to those of data center servers. The future holds a world where small-footprint hypervisors inhabit the common working environment -- a hidden resource fabric woven into ordinary office real estate. The ability to move workloads across this extended fabric will open opportunities for real-time optimization of workload placement, as well as predictive models that proactively relocate workloads based on past consumption patterns.

'Mist Computing'
Today's dialogue is framed in terms of the local data center and the cloud, with little attention paid to the obvious benefits of dispersing virtualization capacity into working spaces. As a concept, the cloud is far away from the consumer of resources. A new metaphor is needed for a model in which virtualization spans the space between data center and desktop. I call it "mist computing" -- compute resources that are all around us, nearby and omnipresent. Mist computing is the future of virtualization. Workloads requiring ultra-low-latency data transfers will continue to benefit from data center concentration, while the many benefits of dispersed placement will move other workloads out onto the resource grid. As the amount of compute continues to grow exponentially, many workloads won't be able to justify the thermodynamic tax of data center concentration once fabric-based alternatives exist. With new placement options available, data centers will host the most latency-sensitive applications while many others migrate closer to end users, or to points in between.

The Virtual Economy
Today's cloud alternatives are the tip of the iceberg in terms of provisioning options. Future virtualization will foster granular resource grids as commonplace within corporate campuses as electrical outlets. Peer-to-peer, real-time leasing of compute resources will emerge as an alternative to today's remote-cloud offerings. Latency-based optimization will factor heavily into resource assignments for many workloads, so the proximity of the resource provider will weigh heavily in the real-time pricing of resources.
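
A toy formula makes the idea concrete: price a leased core-hour as a base rate plus a proximity premium that scales with the workload's latency sensitivity. Everything below is speculative illustration:

```python
# Speculative sketch of proximity-weighted spot pricing for leased compute.
# Nearer (lower-latency) providers command a premium from latency-sensitive
# workloads; all numbers and the formula itself are illustrative.

def spot_price(base_rate: float, latency_ms: float, latency_sensitivity: float) -> float:
    """base_rate: provider's asking price per core-hour.
    latency_sensitivity: 0.0 (indifferent) to 1.0 (highly sensitive)."""
    proximity_premium = latency_sensitivity * (10.0 / (latency_ms + 1.0))
    return base_rate * (1.0 + proximity_premium)

for ms in (1, 20, 100):
    print(f"{ms:>3} ms away -> ${spot_price(0.05, ms, 0.8):.4f}/core-hour")
```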

Look for more advanced economic models to emerge to arbitrate the inevitable competition for compute resources. Financial instruments such as options and futures will be applied to the arbitration of compute resources, allowing participants to mesh their own predictive models with limited budgets. Micro-insurance policies will be available to hedge against various classes of resources becoming unavailable. As companies fluidly borrow from the resource grids of neighboring companies, corporate campuses and upstream cloud providers, block buying, just-in-time leasing, hedging and risk-mitigation strategies will emerge as pivotal areas of innovation within the virtualization ecosystem.

Today's rapid provisioning will seem antiquated, and today's simplistic performance-based cluster optimizations will yield to multifaceted resource allocations with all the complexities of portfolio optimization theory. Risk, cost, performance and transactional overhead will all participate in workload placement decisions, producing a virtual economy of sorts in which business and technical objectives find equal voice. Resource grids will be realized both in static capacity nodes embedded in common workspaces and in mobile clouds that borrow from the ebb and flow of nearby mobile devices, whose resources will be available for micro-leasing. Consider the collective compute power of a thousand workers, each with an iPhone 3GS on hand. Tapping this massive nearby resource effectively will require mobile hypervisors and special software agents written to allow controlled contribution to nearby resource grids for specific functions.

Far from being the passive overlay of yesteryear, virtualization now blazes the trail that supporting technologies must follow. Fomenting change within the data center, at the edge and everywhere in between, virtualization stands poised to make its most radical contribution yet: ushering in a new age of pervasive compute resources and advanced workload optimization. Tomorrow's virtualization will be everywhere: on most static and mobile devices, in the data center, and in the walls and structures of tomorrow's office buildings. Its ubiquitous presence will render it invisible, even forgettable. Like electricity, its functions will be indispensable, but its novelty will be gone as previous ways of managing compute workloads recede into the mists of computer history.
