A lot has changed in the IT industry over the past few years. Virtualization has been adopted by organizations of all sizes and across all industries, and the recession has forced companies to refocus priorities and spend precious IT dollars only on technologies that deliver real value. Most recently, Microsoft has released Windows 7, its most significant desktop operating system release in nearly a decade.
The release of Windows 7 leaves companies with the daunting task of determining when and how to best migrate to this new OS. Many of them are electing to postpone this migration as long as possible due to the cost and effort involved with testing, upgrading and replacing applications incompatible with Windows 7. Others are aligning the OS upgrade cycle with their PC refresh cycle and leveraging desktop virtualization technologies to aid the migration.
In either case, companies are interested in the benefits that Windows 7 and virtualization can provide in their efforts to reduce desktop management and support costs. Prior to the release of Windows 7, many companies were already considering various virtualization technologies as part of their overall plan to reduce the cost and complexity of managing the desktop environment, while taking advantage of virtualization's flexibility.
Virtualization technologies can be implemented in a phased approach to minimize risk and expense to the organization. In addition, not all of the virtualization technologies described in this article are required for a successful migration to Windows 7. This is not a one-size-fits-all solution: The key is identifying the types of users that exist within your organization and adopting the virtualization technologies that meet their needs. For example, the desktop requirements of task workers are very different from those of mobile workers or power users.
Application Management and Compatibility
Application compatibility has been cited by companies as the No. 1 concern in the upgrade from Windows XP or Windows Vista to Windows 7. Traditional desktop OS upgrades have proven time consuming and often require a significant amount of planning and IT resources to complete the project. At the same time, businesses must attempt to minimize the risk of lost productivity.
Application virtualization now provides an excellent alternative to the traditional approach by providing complete application management and compatibility between XP, Vista and Windows 7 OSes. Similar to the way that server virtualization abstracts server OSes from the underlying server hardware, application virtualization abstracts a program's executables, configuration files and dependencies from the underlying OS. Because the application is encapsulated in its own environment, it's possible to run multiple versions of the same program -- for example, Office 2003 and Office 2007 -- on different Windows desktop OS platforms with zero modification required to the application itself. Companies can also leverage this technology to virtualize applications in server-based desktop environments such as Citrix Presentation Server or Virtual Desktop Infrastructure (VDI) deployments.
The strategic benefit of this approach is that once an organization undertakes the effort to virtualize its applications, it can then enjoy:
- Increased speed at which applications can be packaged and deployed to end users
- Reduced application-compatibility test cycles
- Reduced maintenance costs from supporting multiple versions of the same or different OSes (only one application package is required for XP, Vista or Windows 7)
- The ability to patch, upgrade or install new versions of an application without having to regression-test it against other applications or OSes
- The ability to introduce application patches seamlessly from a centralized location without requiring application-maintenance windows
- Deployment of new desktop hardware with zero impact on the virtualized application
- The ability to consolidate to a single-image deployment strategy
Application virtualization also gives organizations the opportunity to adopt a desktop-virtualization strategy. Companies are often conflicted over how best to integrate a Windows 7/desktop virtualization solution in a way that minimizes the costs and risks of running these projects separately. Application virtualization addresses the software-compatibility concerns and gives companies a strategic platform from which they can distribute the same virtualized application to either physical or virtual desktops; the choice between the two becomes irrelevant once the application is virtualized. More importantly, organizations are not under any time constraints, as the same virtualized application can be deployed to desktops irrespective of the underlying OS.
Which Applications to Virtualize?
Virtualizing applications is similar to server virtualization in that specific characteristics determine whether a given application is a good or bad candidate for virtualization, just as hardware- and software-based features define a server's suitability. Being able to identify and profile your application portfolio is therefore essential: it's a key factor in determining how successful and cost-effective your deployment of this technology will be.
Applications that have low-level drivers and services that are tightly integrated with the OS aren't good candidates for virtualization. These drivers and services are installed or run at OS boot time and therefore can't be virtualized; these include anti-virus programs, print drivers and VPN clients. In our estimation, approximately 80 percent of all applications are capable of being virtualized, although deeper analysis and risk assessment of each application is required before a deployment strategy is adopted.
Not all manufacturers support their applications in a virtualized state, and some will request that you reproduce a problem by installing the application manually. This is similar to the early days of server virtualization, when manufacturers would require problems on virtual servers to be recreated on a physical server before they would provide support. The ability to run a non-virtualized version of an application inside a VDI session was a big step toward meeting such vendor requirements. Companies will need to perform a risk assessment of these applications to determine whether the risk of virtualizing and possibly losing support is worth the benefit of having the application virtualized.
Understanding the interdependencies among applications will also help determine the best deployment strategy. For example, one application might interact with several other applications or add-on products. Virtualizing each of these as a separate package shifts the IT administrator's focus from managing applications to managing application-package dependencies; too many dependencies can increase inefficiencies and raise costs. Conversely, virtualizing all of the applications into a single package may not be the best approach either. While it does provide seamless integration and eliminates the overhead of managing package dependencies, it can introduce other inefficiencies and costs. Some departments may not need all of the components delivered by the single package, which results in increased demands on the desktop resources required to run it, or in incremental license fees. Additionally, software management tasks such as patching and upgrades become more time-consuming, because the entire package may have to be re-virtualized after a patch or upgrade.
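One simple way to compare packaging plans is to count how many dependency edges would cross package boundaries under each candidate grouping. The sketch below illustrates the idea; the application names and dependency map are hypothetical, and real data would come from profiling your portfolio.

```python
# Sketch: score a candidate packaging plan by counting the inter-package
# dependencies IT would have to manage. App names and the dependency map
# are hypothetical examples, not data from a real deployment.

def cross_package_deps(dependencies, plan):
    """Count dependency edges that cross package boundaries."""
    package_of = {app: pkg for pkg, apps in plan.items() for app in apps}
    crossings = 0
    for app, deps in dependencies.items():
        for dep in deps:
            if package_of.get(app) != package_of.get(dep):
                crossings += 1
    return crossings

deps = {
    "OrderEntry": ["ReportViewer", "PDFAddOn"],
    "ReportViewer": ["PDFAddOn"],
    "PDFAddOn": [],
}

# One package per app: every edge becomes a package-level dependency.
per_app = {app: [app] for app in deps}
# One combined package: no inter-package dependencies, but every user
# receives every component (the licensing/resource trade-off above).
combined = {"suite": list(deps)}

print(cross_package_deps(deps, per_app))   # 3
print(cross_package_deps(deps, combined))  # 0
```

A plan that minimizes crossings without bundling unneeded components for any department sits between these two extremes.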
The costs of application packaging are also a major consideration when undertaking an application-virtualization initiative. These costs will be determined by several attributes, including the skill set of the IT staff, whether automation tools will be used, the level of software management already in place and the level of documentation that exists for in-house-developed applications. Organizations should plan on four hours per application, although it's customary to gain some efficiency when moving from one application to the next.
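For planning purposes, the four-hour-per-application baseline can be turned into a rough effort model. The learning-curve discount and the floor below are illustrative assumptions, not figures from any particular deployment:

```python
# Rough sizing model for an application-virtualization packaging effort.
# The 4-hour baseline comes from the article; the 2%-per-app learning-curve
# discount (floored at half the baseline) is an illustrative assumption.

def packaging_hours(num_apps, base_hours=4.0, learning_rate=0.02, floor=0.5):
    """Total packaging hours, discounting each successive application."""
    total = 0.0
    for i in range(num_apps):
        per_app = max(base_hours * (1 - learning_rate) ** i,
                      base_hours * floor)
        total += per_app
    return total

for portfolio in (50, 200, 500):
    print(f"{portfolio} apps: ~{packaging_hours(portfolio):.0f} hours")
```

Even a crude model like this makes it easy to see how staffing needs scale with portfolio size, and where automation tooling starts to pay for itself.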
Client-Side OS Virtualization
Companies looking for a simpler way of supporting incompatible applications in Windows 7 can utilize Windows Virtual PC (WVPC) or XP Mode, two add-on tools provided by Microsoft. In addition to these add-on tools, companies can also deploy the Microsoft Enterprise Desktop Virtualization (MED-V) product. Client-side hypervisors provide another alternative to these, and will become more widespread as they mature.
WVPC is client virtualization software that can be used on Windows 7 to create multiple virtual machines (VMs), each running a different OS. It's capable of running in two modes: a Virtual Applications mode to run legacy applications seamlessly, and a Full Desktop mode that gives the user full desktop exposure to the guest OS.
XP Mode specifically targets XP application compatibility. It leverages WVPC and a preconfigured XP image to create a virtual XP environment that allows an application to appear as if running directly on Windows 7.
Both WVPC and XP Mode are specifically designed to help small business users run XP applications on Windows 7. Larger-scale deployments should adopt an application-virtualization solution or MED-V.
MED-V provides a central management solution for provisioning customized, pre-configured XP VMs to end devices. With MED-V you can centrally manage and adjust the settings of your WVPC environment, as well as monitor and troubleshoot all the virtual PCs deployed in your environment.
While not currently an option for migrating to Windows 7, client-side hypervisors will become available in the near future. They'll allow a single Windows 7 OS image to be delivered to end-user devices irrespective of the underlying desktop hardware platform. They'll also enable a more flexible virtual desktop architecture by allowing some applications to execute locally while others execute in a server-based computing model. Citrix Systems Inc., VMware Inc., Neocleus Inc. and other manufacturers already have, or are close to having, these Type-1 desktop hypervisors available.
Client-side hypervisors will be a significant benefit to IT shops that manage the laptops of mobile users who often work offline and have different laptop hardware models. The ability to manage a single OS image and provide all of the capabilities users need while in offline mode -- including remote access -- will reduce the overall cost for managing these workers and make them more productive.
This could also lead to companies adopting a "bring your own computer to work" standard, whereby employees can bring any desktop hardware platform into the workplace and connect securely to the corporate network. The corporate desktop image and their personal image would each run in an isolated desktop environment, allowing the user full access to corporate applications and data while enabling IT to fully control that image. More work has to be done in this area concerning legal issues and corporate policies, such as:
- Do you compensate employees for their use of a personal computer?
- What security policies do you establish to address lost or stolen computers, terminated employees and isolation between the personal and corporate images?
Profile Virtualization
Introducing new hardware as part of a desktop-refresh project can complicate matters, as more time, planning and human capital are required to migrate user-specific settings, data and profiles from the old PC to the new one.
Profile virtualization enables users to separate their documents and profile information from the Windows OS, making it easy to get working again on a new machine or from a remote location. Working in conjunction with application virtualization, it lets companies create a desktop experience that follows the user, regardless of which PC is in use.
Most use cases for profile virtualization have been in the server-based computing space (Citrix and VDI), but companies should evaluate the use of this technology as part of their migration to Windows 7.
Server-Based Computing
Server-based computing continues to be an important part of application management and can aid a Windows 7 migration strategy. Applications that don't perform well under MED-V and resist application virtualization can be made available to users through traditional Citrix or Terminal Services technologies. More recently, VDI-based application publishing has become an option, further broadening the choices for remote application publishing.
Terminal Services is a centralized application deployment and remote-access solution that utilizes presentation virtualization, which separates where the application is used from where it's run. This helps to accelerate application deployments, improve remote worker efficiency and secure critical data and applications. It's a good choice for simple, well-behaved applications, and for cases where many users are running the same application.
VDI provides IT with the ability to centralize a user's desktop instead of a server session. VDI-based application publishing provides superior application stability and governance and gives true isolation between different application instances. Companies looking to carefully govern user experience and limit performance risk should consider VDI-based application publishing tools.
Server-based computing moves application workloads to the data center, reducing hardware requirements at the endpoint and potentially enhancing application performance by placing select applications closer to relevant datasets.
By separating some applications from the base Windows 7 OS, users have the ability to expand their computing power in an elastic manner, drawing from centralized resources when necessary. Whether solving application-compatibility, performance or capacity issues, server-based computing is a key part of a complete Windows 7 migration strategy.
Storage Considerations
Storage is a major consideration in a VDI deployment, and sizing it is not as simple as a physical-to-virtual migration of the desktop into the data center. Two main factors determine the role storage plays in a VDI solution.
The first factor is the amount and class of storage that must be provisioned to each virtual desktop. From an IT budgeting point of view, this is where a vast majority of VDI deployments never get beyond the proof-of-concept or pilot phase. Put off by high storage costs, many companies will opt to continue purchasing inexpensive new desktops and go forward with their current infrastructures. While storage-provisioning solutions from software and storage vendors such as VMware, Citrix, NetApp and Data Domain LLC assist in reducing the number of images, they still require high-end storage in an effort to deliver an adequate user experience.
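A back-of-envelope capacity calculation shows why shared-image (linked-clone) approaches matter so much to VDI economics. All of the figures here (desktop count, image size, per-GB cost) are illustrative assumptions:

```python
# Back-of-envelope VDI storage capacity cost. All figures (desktop count,
# image size, user-data allotment, per-GB SAN cost) are illustrative
# assumptions, not vendor numbers. The shared-image case ignores the small
# per-desktop delta disks that linked clones also require.

def storage_cost(desktops, image_gb, user_data_gb, cost_per_gb,
                 shared_image=False):
    """Raw capacity cost with full clones vs. a single shared master image."""
    if shared_image:
        capacity = image_gb + desktops * user_data_gb  # one master image
    else:
        capacity = desktops * (image_gb + user_data_gb)
    return capacity * cost_per_gb

full = storage_cost(1000, image_gb=20, user_data_gb=5, cost_per_gb=10)
linked = storage_cost(1000, image_gb=20, user_data_gb=5, cost_per_gb=10,
                      shared_image=True)
print(f"Full clones:  ${full:,.0f}")    # $250,000
print(f"Shared image: ${linked:,.0f}")  # $50,200
```

Even with image sharing collapsing the capacity bill, the remaining spend is concentrated in the high-end arrays needed for performance, which is the second factor.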
The second factor -- which is ultimately the most important in the success of VDI -- is the storage I/O required to provide a better-than-PC or equal-to-PC end-user experience. From a business point of view, a VDI initiative won't get past the pilot phase if it becomes clear that the design doesn't provide a satisfying, scalable user experience. With the exception of the most expensive storage solutions, storage serving VDI is often incapable of providing the necessary I/O required to deliver a solid end-user experience.
However, there are a few solutions in this space that assist in offloading much of the I/O from the storage array, increasing the responsiveness of the virtual desktop and the quality of the user experience. This I/O virtualization brings storage resources into an upstream caching layer, delivering more than 80 percent of I/O requests without taxing back-end storage arrays. This approach transforms back-end storage into a reference device rather than a primary workhorse while providing a superb user experience. The Atlantis Computing ILIO product family is leading the charge in this new storage paradigm by providing new levels of VDI performance without the need for top-tier storage arrays.
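The impact of such a caching layer is easy to quantify. The 80 percent offload figure comes from the discussion above; the per-desktop IOPS values and the boot-storm multiplier are illustrative assumptions:

```python
# How an upstream caching layer changes back-end storage requirements.
# The 80% cache-offload ratio is from the article; per-desktop IOPS and
# the boot-storm figure are illustrative assumptions.

def backend_iops(desktops, iops_per_desktop, cache_hit_ratio):
    """IOPS the back-end array must absorb after the caching layer."""
    offered = desktops * iops_per_desktop
    return offered * (1 - cache_hit_ratio)

steady = backend_iops(1000, iops_per_desktop=10, cache_hit_ratio=0.80)
storm = backend_iops(1000, iops_per_desktop=50, cache_hit_ratio=0.80)
print(f"Steady state: {steady:,.0f} back-end IOPS")  # 2,000
print(f"Boot storm:   {storm:,.0f} back-end IOPS")   # 10,000
```

Cutting the offered load by a factor of five is often the difference between needing a top-tier array and being able to serve VDI from midrange storage.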
Deployment and Management
Organizations are looking to consolidate efforts and costs associated with desktops. While exploring new cutting-edge technologies such as virtualization, organizations should look for solutions that integrate with their current and future management platform and processes. Otherwise, the benefits realized from virtualization solutions could very quickly become offset by the cost of managing and maintaining disparate solutions.
Desktop resources are siloed by nature, with each user having their own dedicated hardware and OS instance. In contrast, data center resources are shared and typically too costly to dedicate to users in the same manner as desktops. To optimize the use of both shared and dedicated resources, management tools and frameworks are needed that allow IT to minimize the management burden while preserving the scarce resources of the data center.
OS image-sharing technologies are available for use both in the data center and at the endpoint. In the data center, technologies that allow multiple VMs to spawn from a single shared OS image are useful in reducing the infrastructure churn associated with traditional patching -- and also in dramatically reducing the cost of VDI storage. OS streaming is a network-based approach to image sharing that has found success both in the data center and on the LAN. Diskless endpoints can obtain a fully updated OS image in real time over the network, minimizing data risk and ensuring compliance with the latest updates. Many of the benefits associated with thin-client devices can also accrue to the new class of ultra-micro Intel Atom-based desktops, if integrated with OS streaming.
Stateless endpoints with streaming OS delivery are one of the many exciting new choices available today as companies look to upgrade. Few companies will be able to deploy such devices exclusively, however, so traditional desktop-management frameworks that provide deployment and patching intelligence to each endpoint will likely remain part of the picture for some time.
Windows 7 and the continuing economic downturn are providing the catalyst for change from the traditional approach to desktop upgrades, and this change will accelerate the adoption of virtualization technologies.
There are many options for companies to consider in architecting a migration strategy. Cloud computing and other hybrid desktop models hold the promise of optimizing workload placement in real time on the most appropriate infrastructure platform. Some application workloads are better positioned to run locally at the desktop; others are better suited to run from the data center; still others might be a good fit for a private- or public-cloud infrastructure.
Such optimized workload placement will reduce the application-management burden, allowing applications and IT infrastructure to run on cruise control and providing the most cost-effective way of delivering services to end users.
Continuing advancements in virtualization and cloud computing will address many of the desktop challenges that have plagued the industry, giving companies new options for deployment of Windows 7.
John Premus is director of Infrastructure Services at GlassHouse Technologies. With more than 20 years of executive-level IT expertise, Premus has been instrumental in developing technical directions and standards at numerous organizations.
Daniel Beveridge is recognized as an industry expert with more than 15 years of industry experience. Beveridge is a four-time speaker at VMworld, and has also spoken at cloud conferences regarding the role that desktops will play in the cloud space.