On Desktop Virtualization: The Protocol Is Not a Panacea

Five key parameters for benchmarking desktop virtualization platforms.

(This article is the third in a series on desktop virtualization; see the first and second installments for background. Full disclosure: This is a vendor-contributed article.)

As with every IT initiative, even a strong business case for desktop virtualization is no guarantee of real-world success: users must adopt the solution for the organization to realize any benefits. To win over users, the solution must match or exceed the user experience of a traditional PC. From IT's perspective, it must also be able to scale, supporting an optimal number of users per server and making efficient use of existing resources.

While identifying the high-level criteria for effective desktop virtualization is straightforward enough, determining how to systematically test and compare various desktop virtualization solutions against those criteria can be more challenging. Product demos or even lab trials that are performed in tightly controlled scenarios will not provide the type of analysis required to effectively evaluate the myriad desktop virtualization solutions available today.

As an example, demos may show high-quality streaming video over a WAN, but not the vast amount of bandwidth it consumes or the impact of those video-biased settings on text-based content like spreadsheets and documents. When evaluating the best solution for the organization, IT must do an in-depth analysis of how well the solution scales to large numbers of users and how it impacts and consumes network bandwidth and resources. Without such an analysis, IT may end up with a solution that addresses only a small set of users with merely "good enough" user experience.

In this light, it's essential to take a holistic approach to the evaluation of desktop virtualization solutions. The underlying delivery technology (for example, Citrix HDX™, VMware PCoIP or Microsoft RDP) determines the balance of bandwidth, CPU utilization and performance, but it's not the whole story. Equally important are the complementary technologies that address multimedia, USB peripherals, real-time collaboration and 3D graphics, along with network bandwidth optimizations (LAN and WAN) and secure access. Rather than looking at one or two factors in isolation, IT should examine comprehensive benchmarks across the full spectrum of workloads and use cases to make a fully informed decision.

Preparing the Benchmark
A variety of freely available, non-proprietary tools and methodologies exist for benchmarking desktop virtualization platforms. Whichever approach is used, the goal should be to replicate a variety of end-user workloads as accurately as possible. Applications should load in random order, and should include common productivity and line-of-business applications as well as more demanding Flash-based workloads. Application and streaming video tests should incorporate background network traffic to simulate real-world conditions. Multiple repetitions of each test are needed to yield a good estimate of its real-world bandwidth and CPU utilization, as well as the quality of the resulting user experience in terms of responsiveness, screen clarity and audio/video synchronization. In cases where tests are conducted by vendors, third-party validation is essential to confirm that the results are accurate and repeatable.
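
To make the harness concrete, here is a minimal sketch of the kind of driver such tools implement, launching applications in random order and repeating the run several times. It is purely illustrative: the application commands, sample files and repetition count are placeholder assumptions, not any particular tool's API.

```python
import random
import subprocess
import time

# Hypothetical workload: the commands and sample files below are
# placeholders standing in for productivity and multimedia
# applications; substitute your own launch scripts.
WORKLOAD = {
    "spreadsheet": ["soffice", "--calc", "sample.xlsx"],
    "document":    ["soffice", "--writer", "sample.docx"],
    "pdf":         ["evince", "sample.pdf"],
    "video":       ["vlc", "--play-and-exit", "sample.mp4"],
}

REPETITIONS = 5  # repeat to estimate real-world variance, not a one-off


def run_once(order):
    """Launch each application once, in the given order, timing each."""
    timings = {}
    for name in order:
        start = time.monotonic()
        subprocess.run(WORKLOAD[name], check=False)
        timings[name] = time.monotonic() - start
    return timings


for rep in range(REPETITIONS):
    order = list(WORKLOAD)
    random.shuffle(order)  # applications load in random order
    print(f"run {rep + 1}: {run_once(order)}")
```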

To gain a complete picture of the strengths, weaknesses and general effectiveness of a desktop virtualization platform, make sure to consider each of these five areas:

1. Bandwidth
Tests should demonstrate the average bandwidth consumed by a real-world workload accessed through the virtual desktop platform. A sample moderate workload might include general business applications such as Microsoft Excel, Word and PowerPoint; Flash-based Web sites and multimedia, including streamed Flash video; PDF viewing; and printing. If specific line-of-business applications are in common use, most of the available tools can be customized to include them in the test scripts. To reflect both LAN and WAN environments, measurements should be taken in scenarios simulating ample bandwidth as well as constricted bandwidth.

Tests should also take into account network traffic outside of the virtual desktop. For example, branch offices may continue to support traditional desktops generating packets on the network, and there may be VoIP or video applications that require predictable network bandwidth.

Ideally, testing will include individual sessions as well as multiple concurrent sessions to see the impact of congestion on the user experience. Also look at whether consumption rises to a peak and stays there, or ebbs and flows, leaving "space" for other traffic on the network. While averages over time may look the same, the behavior may affect what and how many other tasks (print jobs, file transfers and so on) can occur simultaneously.
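
To make the peak-versus-average distinction concrete, the sketch below reports the average, the peak and the ratio between them; it assumes you have exported per-second byte counts for a session (for example, from a packet capture), and the sample numbers are illustrative only.

```python
# Per-second byte counts for a virtual desktop session, e.g. exported
# from a packet capture; the figures here are illustrative only.
bytes_per_second = [12_000, 95_000, 480_000, 470_000, 30_000, 15_000]


def to_kbps(nbytes):
    """Convert bytes transferred in one second to kilobits per second."""
    return nbytes * 8 / 1000


samples = [to_kbps(b) for b in bytes_per_second]
average = sum(samples) / len(samples)
peak = max(samples)

# Two sessions can share the same average while behaving very
# differently: a high peak-to-average ratio means the traffic ebbs
# and flows, leaving "space" for print jobs and file transfers.
print(f"average: {average:.0f} kbps, peak: {peak:.0f} kbps, "
      f"peak/average ratio: {peak / average:.1f}")
```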

2. Server-Side CPU Utilization
The success of a desktop virtualization initiative has a direct correlation to scalability: the more users you can support per server, the fewer servers you'll need to own and maintain. An evaluation based on streaming HD-quality video can provide a view of the peak consumption of server resources per user by a given desktop virtualization platform. How often and how high does CPU utilization spike? Consistently high consumption will often have a significant impact on the quality of video playback and the number of users per server. Tests can be conducted using HD-quality video or very demanding line-of-business applications, such as graphics-intensive applications or those performing statistical analysis functions.

One protocol technique that IT can leverage to conserve server resources is offloading multimedia rendering to the endpoint in its native format, sometimes referred to as multimedia redirection. Tests should be conducted with this feature both enabled and disabled while monitoring CPU utilization to compare the impact.
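
As a rough illustration of the measurement side, the sketch below samples server-wide CPU utilization while a video test runs. It assumes the third-party psutil package; toggling multimedia redirection itself is platform-specific and happens outside the script.

```python
import statistics
import time

import psutil  # third-party: pip install psutil


def sample_cpu(duration_s=60, interval_s=1.0):
    """Sample server-wide CPU utilization while the video test runs."""
    samples = []
    end = time.monotonic() + duration_s
    while time.monotonic() < end:
        # cpu_percent blocks for interval_s and returns utilization
        # over that window as a percentage.
        samples.append(psutil.cpu_percent(interval=interval_s))
    return samples


# Run once with multimedia redirection enabled and once disabled
# (toggled in the platform's policy settings, outside this script),
# then compare the two sets of figures.
samples = sample_cpu(duration_s=60)
print(f"mean: {statistics.mean(samples):.1f}%  "
      f"peak: {max(samples):.1f}%  "
      f"spikes over 80%: {sum(s > 80 for s in samples)}")
```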

3. Workload Types
While many typical use cases for desktop virtualization center on general productivity applications, the ability to handle multimedia and other high-end workloads is essential to support the full range of user profiles and tasks, enable widespread deployment and capture the full benefits of a virtualized architecture. As such, testing should include a wide variety of workload types that represent all the different use cases across the organization and should not be limited to just one class of application.

4. Network Scenarios
Much of the value of desktop virtualization lies in its ability to support anywhere, anytime productivity. To ensure that users can access a consistently high-quality experience through any network connection, tests should simulate the full spectrum of LAN, WAN and remote access environments for both general and high-end workloads.

A WAN emulator can be used to restrict bandwidth and introduce packet loss and latency to simulate the kind of impaired network conditions real-world users will likely encounter. Even the best-performing platform in a LAN environment is of limited value if its performance suffers significantly under sub-optimal conditions. Ideally, you want one desktop virtualization solution that can address all of the scenarios.
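
Where a commercial WAN emulator isn't at hand, Linux's built-in netem queueing discipline can approximate impaired links. The sketch below is one way to wrap it; it assumes a Linux machine routing the test traffic, root privileges and an interface named eth0, and the impairment profiles are illustrative rather than prescriptive.

```python
import subprocess

IFACE = "eth0"  # assumption: the interface carrying session traffic

# Hypothetical impairment profiles (delay, jitter, loss); tune these
# to match the WAN conditions your users actually face.
PROFILES = {
    "branch_office": ["delay", "40ms", "10ms", "loss", "0.5%"],
    "poor_wan":      ["delay", "150ms", "30ms", "loss", "2%"],
}


def apply_profile(name):
    """Attach a netem qdisc adding latency, jitter and packet loss."""
    subprocess.run(
        ["tc", "qdisc", "add", "dev", IFACE, "root", "netem",
         *PROFILES[name]],
        check=True,
    )


def clear_profile():
    """Remove the netem qdisc, restoring the unimpaired link."""
    subprocess.run(
        ["tc", "qdisc", "del", "dev", IFACE, "root", "netem"],
        check=True,
    )


# Typical use: apply a profile, run the benchmark workload against the
# virtual desktop, record the results, then clear and repeat.
if __name__ == "__main__":
    apply_profile("poor_wan")
    try:
        input("Impairment active; run the workload, then press Enter.")
    finally:
        clear_profile()
```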

5. General Performance
More subjective than the other four parameters, the assessment of general performance comes down to an individual value judgment. In essence, you should consider the quality of the user experience delivered by the desktop virtualization solution and ask whether your organization's users would accept it. All the benchmarking in the world can be performed, but at the end of the day, does the solution pass the user test?

Does the screen display maintain a local-like experience when running business applications like Microsoft Excel, or does it blur excessively and distractingly while scrolling, making it difficult to read (often described as "build to lossless")? Evaluate video display with Flash optimizations both enabled and disabled, keeping in mind that these typically consume greater bandwidth and CPU resources. How do bandwidth constraints, latency and dropped packets impact video performance? Is playback "bursty," distorted, out-of-sync between audio and video, or otherwise degraded compared with the source stream? A frustrating, low-quality user experience can doom user adoption and leave the entire initiative in jeopardy.

Beyond the Protocol
Comparisons of desktop virtualization solutions often get bogged down in highly technical debates about the theoretical virtues of their underlying protocols. It's true that these protocols play a vital role in the effectiveness of each platform, but it's important not to lose sight of the forest for the trees.

Ultimately, what matters most is how well and how efficiently the solution can deliver virtual desktops for the organization. The most relevant data will come from testing that most closely simulates users' real-world workloads and scenarios. It's not hard to find a solution that excels in a single area, but to deploy virtual desktops across a large set of users, organizations need a solution that performs well across the broad spectrum of criteria: WAN optimization, secure access, the best user experience for high-end applications and on-demand application delivery. Only then will enterprises see the strategic value and overall performance desktop virtualization can bring to their IT operations and bottom line.

About the Author

Calvin Hsu is director of product marketing at Citrix Systems.
