In-Depth

Planning Primer: End Users

Virtualization should be about one thing: helping your users be more productive. Too often, though, that rule is forgotten.

Editor's note: This article was written by virtualization management vendor Aternity Inc.

Today's IT world is moving toward consolidation. The trend is clearly visible in the data center, where server consolidation has already become a reality. A similar shift is under way on the desktop: desktops are being moved into the data center alongside the servers, either through Terminal Services (known in earlier days as server-based computing, or SBC) or through virtual desktop infrastructure (VDI).

Recent analyst research from Gartner Inc. projects that more than 4 million virtual machines (VMs) will be installed on x86 servers by next year, nearly matching the number of virtual PCs in operation today. Clearly, virtualization is a major disruptive technology, and it requires radical changes in thinking and operating procedures to better plan, manage, provision and orchestrate resources throughout the enterprise.

In SBC and VDI environments, traditional metrics such as CPU, memory and network utilization are less accurate performance indicators, and visibility into the actual end-user experience becomes crucial. Capturing how users really experience application performance, together with application-usage trends and the resource consumption of each desktop application, provides real-time, right-time decision support for virtualization deployment projects.
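As a rough illustration of what capturing that data can look like, here's a minimal sketch that times a business transaction from the client side and records the result per user and application. It isn't Aternity's implementation; the transaction names and the in-memory sample store are hypothetical.

    import time
    from dataclasses import dataclass

    @dataclass
    class TransactionSample:
        user: str
        application: str
        transaction: str
        response_time_s: float

    samples = []

    def timed_transaction(user, application, transaction, action):
        # Time one business transaction the way the end user experiences it.
        start = time.perf_counter()
        result = action()  # e.g., submit an order or open a report
        samples.append(TransactionSample(user, application, transaction,
                                         time.perf_counter() - start))
        return result

    # Hypothetical usage: wrap the call that represents one business process.
    timed_transaction("jdoe", "OrderEntry", "submit_trade", lambda: time.sleep(0.2))
    print(samples[-1])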

Fact-based assessments of the resources required to support multiple users, each running a variety of applications on a virtual desktop server, greatly diminish the risk of choosing the wrong solution.

Organizations can use real-time analytics and correlation to preemptively detect performance issues, isolate affected users and determine probable cause, and then apply the same end-user experience metrics to guide virtualization provisioning and orchestration.

A strategy that combines real-world virtualization assessment and planning with real-time detection of performance deviations will help you optimize your VDI environment. This will ultimately make it easier to add new applications and desktops to the VDI pool. It will also save you money by cutting the hardware and management costs typically associated with labor-intensive resource allocation.

To that end, this article offers advice on planning, evaluating, managing and orchestrating desktop consolidation projects. It discusses the effect virtualization can have on business performance and productivity, and how to accurately determine the impact on the end-user experience. It also looks at methods and tools for selecting the appropriate desktop virtualization technology and for evaluating the optimal virtualization strategy.

New Complexity Brings New Challenges
With the growing adoption of virtualized environments, IT organizations are quickly discovering the challenges involved in effectively monitoring large numbers of VMs and sessions on physical servers. Because VDI uses both virtualization and Terminal Services technologies at the same time, it's very difficult to get a true sense of the performance the user actually sees.

Additionally, sharing resources across multiple users raises even more complex issues. Running under a hypervisor makes monitoring a machine's CPU and memory less precise, because the hypervisor divides those resources across multiple VMs. It's not uncommon, for example, for a VM to report 90 percent CPU utilization while the host machine is barely affected.
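To see why the in-guest number misleads, consider a simplified, hypothetical calculation: the guest reports utilization relative to the vCPU time the hypervisor grants it, so the same workload can look like 90 percent inside the VM and only a small fraction of the host.

    # Hypothetical numbers showing the guest vs. host view of CPU under a hypervisor.
    host_cores = 32                 # physical cores on the virtualization host
    vm_vcpus = 1                    # vCPUs assigned to this desktop VM
    guest_busy_fraction = 0.90      # what the guest OS reports: "90% CPU utilization"

    # The guest's 90 percent is 90 percent of *its share*, not of the host.
    host_fraction_used_by_vm = guest_busy_fraction * vm_vcpus / host_cores
    print(f"Guest-reported CPU:      {guest_busy_fraction:.0%}")
    print(f"Share of host consumed:  {host_fraction_used_by_vm:.1%}")  # roughly 2.8%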

This increased complexity raises new challenges around evaluating, planning and managing the performance of virtual environments in general, and virtual desktops in particular.

Security, problem detection and analysis, and the demand for automated provisioning of VMs within the physical infrastructure based on granular performance metrics make the job even more difficult. As a result, existing system management and monitoring tools are effectively obsolete in this new environment, and monitoring the real end-user experience is becoming more important to managing virtualized environments efficiently.

Figure 1. The three factors that affect the end-user experience: desktop performance, application performance and user productivity.

Understanding Real End-User Experience
There are three primary components that dynamically interact to define, and constantly affect, the end-user IT experience in both physical and virtual desktop environments (see Figure 1):

  • Desktop performance. This includes vital indicators such as running processes, CPU and memory utilization, non-responding processes, crashed applications, error messages and process latency.
  • Application performance. This includes response time, throughput, latency and end-to-end transaction time for any and all business processes.
  • User productivity. This measures mission-critical application usability. Important factors include activity level, usage trail and the number of business processes performed in a given period for each user. That may include everyday tasks such as the number of e-mails sent or received, the number of trades completed or the number of support tickets closed.
Figure 2. To get a thorough picture of the end-user experience, a number of processes must be examined, as depicted here.
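To make those three dimensions concrete, the sketch below combines them into a single per-user record; the field names and sample values are illustrative, not a prescribed schema.

    from dataclasses import dataclass

    @dataclass
    class EndUserExperienceSnapshot:
        user: str
        # Desktop performance
        cpu_percent: float
        memory_mb: float
        non_responding_processes: int
        # Application performance
        avg_response_time_s: float
        # User productivity (e-mails sent, trades completed, tickets closed)
        business_actions_per_hour: int

    snapshot = EndUserExperienceSnapshot("jdoe", 12.5, 900.0, 0, 1.8, 22)
    print(snapshot)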

End-User Experience Management
Because virtualization essentially disrupts the traditional relationships among PC hardware, the client operating system and desktop applications, it's crucial that enterprises understand how to plan for and support the changes.

IT and Line of Business (LOB) management need precise, comprehensive metrics describing real end-user experience, before and after going "virtual," in order to support strategic desktop virtualization and consolidation decisions. Some areas that should be considered:

  • End-to-End Application Coverage for performance monitoring of packaged and/or custom Web/HTTP, rich Internet application (RIA), client/server and Java applications running in a virtualized desktop environment.
  • Comprehensive Performance Metrics for every business process running in a virtualized environment, including response time, throughput, latency and end-to-end transaction time, from end users' perspectives.
  • Application Usage and User Productivity for gaining in-depth insight into end-user application usage, usability and quality of service, as well as user productivity.
  • Advanced Analytics and Correlation of virtual desktop, application and user-performance metrics. This will enable preemptive problem detection and probable-cause analysis across all three primary components of user experience.

By monitoring and analyzing these end-user experience metrics in real time, an organization can map an accurate "snapshot" of the real user experience onto the VDI architecture it's planning to implement.
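One practical use of that snapshot is sizing. The hypothetical sketch below turns measured per-user averages into an estimate of how many virtual desktops a host might support; every number, including the headroom factor, is an assumption for illustration rather than vendor guidance.

    # Rough VDI sizing from measured per-user resource usage (illustrative numbers).
    avg_cpu_ghz_per_user = 0.3        # measured average CPU demand per desktop
    avg_memory_gb_per_user = 1.5      # measured average memory footprint per desktop
    host_cpu_ghz = 2.6 * 16           # 16 cores at 2.6 GHz
    host_memory_gb = 192
    headroom = 0.7                    # keep 30% spare for bursts and the hypervisor

    users_by_cpu = int(host_cpu_ghz * headroom / avg_cpu_ghz_per_user)
    users_by_memory = int(host_memory_gb * headroom / avg_memory_gb_per_user)
    print(f"Desktops per host (CPU-bound):    {users_by_cpu}")
    print(f"Desktops per host (memory-bound): {users_by_memory}")
    print(f"Plan for the smaller of the two:  {min(users_by_cpu, users_by_memory)}")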

Once an organization implements its chosen virtualization environment, it will need an end-user experience management tool that can monitor, analyze, correlate and clearly visualize how the key metrics of the virtualized application environment's major components relate to and affect one another.

Provisioning and Orchestration
Acceptable levels of end-user experience must be a core consideration for decisions regarding higher density of guest machines on the virtualized hosts. In addition, end-user experience metrics can also provide insight into how to best perform automated orchestration of services.

In server virtualization environments, where resource usage is high and the goal is to preserve fairness between machines, hardware metrics work fairly well. In contrast, the cost structure of VDI and the number of VMs required in desktop virtualization, combined with the low utilization of many end-user machines, render those metrics almost useless for effective orchestration. In such environments, automating problem detection and orchestration could bring significantly higher virtualization-density levels.

For example, business applications such as Microsoft Word and Microsoft Excel are mostly idle, with short bursts of higher CPU usage, yet they're notorious for behaving poorly in virtual environments with limited resources. Compare that with memory- and CPU-hungry call center client/server applications: because they're largely network-bound from the user's point of view, the end-user experience is far less affected by throttled CPU resources. Clearly, these hardware-related counters may be of little help in deciding on the best physical server mapping and VM settings. That's why mapping them to the productivity metrics of real users, such as comparing the number of trades a trader completes on a virtualized desktop versus a physical machine, is crucial; in that case, there's a clear and compelling financial impact.
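With hypothetical numbers, the trading comparison can be stated directly in business terms; both the trade rates and the revenue figure below are made up for illustration.

    # Illustrative productivity comparison: the same trader on physical vs. virtual desktops.
    trades_per_hour_physical = 12
    trades_per_hour_virtual = 10
    avg_revenue_per_trade = 150.0     # hypothetical figure

    drop = 1 - trades_per_hour_virtual / trades_per_hour_physical
    lost_per_hour = (trades_per_hour_physical - trades_per_hour_virtual) * avg_revenue_per_trade
    print(f"Productivity drop: {drop:.0%}")
    print(f"Lost revenue per hour: ${lost_per_hour:.0f}")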

To reveal user-experience deviations and detect problems early, organizations can use clustering and correlation analytics to establish acceptable user-experience service levels and productivity targets; combined with automated mapping of VMs to physical servers and insight into VM and OS configurations, this makes it easier to quickly determine the cause of performance problems.
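As a minimal sketch of the deviation-detection piece, assuming a per-user baseline of response times already exists, a simple z-score test stands in here for the richer clustering and correlation analytics described above.

    import statistics

    def detect_deviation(baseline_times, current_time, threshold=3.0):
        # Flag a response time that deviates sharply from a user's historical baseline.
        mean = statistics.fmean(baseline_times)
        stdev = statistics.pstdev(baseline_times) or 1e-9
        z = (current_time - mean) / stdev
        return z > threshold, z

    # Hypothetical baseline: a trader's recent transaction times, in seconds.
    baseline = [1.1, 1.3, 1.2, 1.0, 1.4, 1.2]
    flagged, score = detect_deviation(baseline, 3.8)
    print(f"deviation={flagged}, z-score={score:.1f}")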

To achieve higher density of guest machines on the virtualized host, organizations need the ability to tune VM parameters and automatically adjust resource allocations in response to problems affecting the user experience.
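A hypothetical orchestration loop might tie those pieces together: when a desktop's measured response time breaches its service level, request more resources for its VM. The set_vm_resources call below is a placeholder; real hypervisor and VDI broker APIs vary by platform.

    RESPONSE_TIME_SLA_S = 2.0

    def set_vm_resources(vm_id, vcpus, memory_gb):
        # Placeholder for a hypervisor or VDI broker call; real APIs vary by platform.
        print(f"[orchestrator] {vm_id}: set {vcpus} vCPUs, {memory_gb} GB RAM")

    def remediate(vm_id, avg_response_time_s, current_vcpus, current_memory_gb):
        # Grow the VM's allocation when measured end-user response time breaches the SLA.
        if avg_response_time_s > RESPONSE_TIME_SLA_S:
            set_vm_resources(vm_id, current_vcpus + 1, current_memory_gb + 1)

    remediate("vdi-jdoe-042", avg_response_time_s=3.4, current_vcpus=2, current_memory_gb=4)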

It's All About the Users
Virtualization is a key disruptive technology for IT, requiring radical changes in thinking and operating procedures to better plan, manage, provision and orchestrate resources throughout the enterprise. In an environment where system metrics are less reliable than in a physical-only world, user experience becomes the best measurement to manage against.

Organizations that want to avoid "virtualization lock-in" need the ability to evaluate the benefits of different virtualization approaches and platforms, and to select the appropriate platform for their unique environments.

Through real-time analysis and correlation of what end users really do and what they need, companies can go a long way toward determining the best virtualization platform for the application mix used by the organization's diverse user groups.
