Virtual Observer

Building a VDI Environment on Shared Storage

Storage is at the heart of the problems some companies have in deploying VDI.

Last time, I talked about the challenges that remain around desktop virtualization: too many platform choices, the explosion in the number of end-user devices, the confusion caused by cloud offerings, and the lack of a breakout management solution for desktop virtualization.

Let's dive deeper into the granddaddy solution for desktop virtualization: server-hosted virtual desktops, traditionally called "VDI." VDI has often promised more than it delivered, due to stubborn complexity, performance and cost challenges. Chief among these have been the high up-front capital costs and subsequent inefficiencies of the storage platforms deployed to support it--and storage costs are at the heart of the VDI-ROI equation.

At first glance, it seems obvious that the consolidation, efficiency and mobility benefits of server virtualization should translate easily to VDI. But storage costs complicate that picture. Essentially, it all comes down to density. In order to justify VDI, desktop VMs must be deployed at much higher densities than server VMs--at least five to 10 times higher in many cases. But dense VM deployments tend to expose performance problems and edge conditions throughout the virtual IT stack, from memory and CPU contention to I/O bottlenecks.

Of course, high densities in VDI are only possible because desktop workloads are intermittent and variable. Unfortunately, this also makes them difficult to model and prone to wild load spikes. Desktop workloads were designed for local, dedicated disks that are generally much faster than users need them to be. With VDI, tens or hundreds of workloads access shared disks simultaneously, and demand fluctuates throughout the work day. This makes it very difficult to guarantee user experience levels without over-provisioning storage capacity and throughput for the worst case.

The hypervisor complicates matters further. I/O from multiple workloads is consolidated and concentrated, which transforms potentially sequential I/O requests into random ones. Randomized I/O severely degrades the efficiency of traditional storage caching algorithms, leaving VDI performance highly sensitive to disk latency.
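To make that blending effect concrete, here's a toy simulation in Python (a sketch only; the VM count, region size and request counts are arbitrary values chosen for illustration). Each desktop VM issues a perfectly sequential stream of block requests within its own region of a shared datastore, but once the hypervisor interleaves those streams, almost none of the requests arriving at the array are sequential anymore.

import random

# Toy illustration of the hypervisor "I/O blender": each desktop VM issues a
# purely sequential stream within its own region of the shared datastore, but
# the hypervisor interleaves them, so the array sees a near-random pattern.
# VM count, region size and request count are arbitrary assumptions.

NUM_VMS = 50
REQUESTS_PER_VM = 200
REGION_SIZE = 100_000          # blocks reserved for each VM's virtual disk

# Sequential stream per VM: block n, n+1, n+2, ... inside its own region.
streams = [
    [vm * REGION_SIZE + i for i in range(REQUESTS_PER_VM)]
    for vm in range(NUM_VMS)
]

# The hypervisor services whichever VM is ready next; model that as a random
# ordering of per-VM turns while preserving each VM's internal request order.
turn_order = [vm for vm in range(NUM_VMS) for _ in range(REQUESTS_PER_VM)]
random.shuffle(turn_order)

merged, cursors = [], [0] * NUM_VMS
for vm in turn_order:
    merged.append(streams[vm][cursors[vm]])
    cursors[vm] += 1

def sequential_fraction(blocks):
    """Fraction of requests that land on the block right after the previous one."""
    hits = sum(1 for a, b in zip(blocks, blocks[1:]) if b == a + 1)
    return hits / (len(blocks) - 1)

print(f"Per-VM streams:   {sequential_fraction(streams[0]):.0%} sequential")
print(f"Blended at array: {sequential_fraction(merged):.0%} sequential")

Run it and the per-VM streams come out 100 percent sequential, while the blended stream at the array drops to a few percent--exactly the pattern that defeats read-ahead and other caching optimizations tuned for sequential access.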

Variable and consolidated desktop workloads also give rise to intermittent load spikes. These I/O "storms" can occur when many users simultaneously log in, open the same application, run a virus scan, or perform some other common disk-intensive operation. I/O storms, when combined with the generally sporadic nature of desktop data access patterns, have stalled many VDI projects in the early proof-of-concept stage. In addition to requiring high levels of sustained aggregate I/O throughput, consolidated desktop workloads often exhibit wide variability in storage I/O read/write ratios.
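A rough back-of-the-envelope calculation shows why the worst case, not the average, ends up driving storage sizing. In the Python sketch below every figure--desktop count, per-desktop IOPS, read/write mix, RAID 5 write penalty--is an assumption chosen for illustration, not a benchmark, but the shape of the result is typical: a login storm can demand several times the steady-state backend IOPS, and a write-heavy steady-state mix multiplies the disk I/O the array must actually perform.

# Illustrative VDI storage sizing sketch. All figures are assumptions for the
# example, not measured values; real per-desktop IOPS and read/write mixes
# vary widely by workload.

def required_backend_iops(desktops, iops_per_desktop, write_fraction, raid_write_penalty):
    """Estimate backend disk IOPS using the common rule of thumb:
    backend IOPS = reads + (writes * RAID write penalty)."""
    reads = desktops * iops_per_desktop * (1 - write_fraction)
    writes = desktops * iops_per_desktop * write_fraction
    return reads + writes * raid_write_penalty

DESKTOPS = 500                # assumed pool size
STEADY_IOPS = 10              # assumed steady-state IOPS per desktop
STORM_IOPS = 50               # assumed per-desktop IOPS during a login storm
WRITE_FRACTION_STEADY = 0.7   # steady-state desktop I/O is often write-heavy
WRITE_FRACTION_STORM = 0.2    # boot/login storms skew heavily toward reads
RAID5_WRITE_PENALTY = 4       # 4 backend I/Os per front-end write on RAID 5

steady = required_backend_iops(DESKTOPS, STEADY_IOPS, WRITE_FRACTION_STEADY, RAID5_WRITE_PENALTY)
storm = required_backend_iops(DESKTOPS, STORM_IOPS, WRITE_FRACTION_STORM, RAID5_WRITE_PENALTY)

print(f"Steady-state backend IOPS: {steady:,.0f}")
print(f"Login-storm backend IOPS:  {storm:,.0f}")
print(f"Peak-to-steady ratio:      {storm / steady:.1f}x")

With these assumed numbers, the storm needs roughly two and a half times the steady-state backend IOPS. Size the array for the steady state and logins crawl; size it for the storm and most of that capacity sits idle the rest of the day--which is the over-provisioning trap described above.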

In the end, delivering consistent and predictable performance for high-density VDI--at reasonable cost--demands innovation in both the hypervisor and storage layers, and deep integration between the two. Only a centralized storage infrastructure designed to handle these challenges without massive over-provisioning can hope to compete on cost against the current PC desktop status quo.

Next time: Storage innovators capitalize on VMware View's storage optimizations for higher-density VDI.

About the Author

A senior analyst and virtualization practice lead at Taneja Group, Dave Bartoletti advises clients on server, desktop and storage virtualization technologies, cloud computing strategies, and the automation of highly virtualized environments. He has spent more than 20 years at high-profile infrastructure software and financial services companies, and has held senior technical positions at TIBCO Software, Fidelity Investments, Capco and IBM. Dave holds a BS in biomedical engineering from Boston University and an MS in electrical engineering and computer science from MIT.
