Desktop Virt: When To Bake Apps in the Image and When To Virtualize Applications
There is an application virtualization riddle out there, wrapped in a VDI (Virtual Desktop Infrastructure) enigma, inside a vendor frenzy. Everyone wants to virtualize applications, and every vendor tells you to do it differently. So what is the right way to determine when to bake an application into the image, virtualize it, stream it, or deliver it via Terminal Services?
I will not give you the standard consultant line of "it depends." Instead, I will start by saying that baking the application into the image should be the last resort -- the last-ditch effort when all else fails. I bring this up first because I have seen many vendors recommend, right off the bat, baking it into the image. NO. OK, Eli, but saying no is easy -- why no? Well, for starters, you have now significantly increased the size of that image. Assuming you are working off a master image technology (Linked Clones, Machine Creation Services, Provisioning Services, etc.), that may not seem like a huge deal. Or is it? If you install all applications locally, don't your application updates become more frequent? Don't you then have to constantly update that master image? Won't you be transferring the same issues you have on physical desktops over to the virtual desktops -- application conflicts, compatibility, testing, and so on?
In addition, you now require significantly more resources on the virtual infrastructure and on the virtual machine itself to support this image, because all processing happens locally. On the other hand, if you take the approach of VDI with Windows 7 and use technologies like Terminal Services to deliver applications, you significantly reduce the memory and processor requirements. With this approach, you can most likely get away with a Windows 7 VM with 2 GB of memory or less and 1 or 2 vCPUs, as opposed to at least 4 GB of memory and 2 vCPUs if you bake everything into the image. Those heavier VMs reduce the number of VMs you can fit per host, which leads to a bloated project, which leads to a failed project.
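To put rough numbers on the density argument, here is a back-of-the-napkin sketch. The host capacity figures are illustrative assumptions, not benchmarks; only the per-VM sizings (2 GB / 1-2 vCPUs lean versus 4 GB / 2 vCPUs baked) come from the discussion above.

```python
# Back-of-the-napkin VM density comparison for a single virtualization host.
# HOST_RAM_GB and HOST_VCPU are assumed values for illustration only.

HOST_RAM_GB = 192   # assumed usable memory on one host
HOST_VCPU = 48      # assumed schedulable vCPUs (including overcommit)

def vms_per_host(vm_ram_gb, vm_vcpus):
    """VMs that fit on the host, limited by whichever resource runs out first."""
    return min(HOST_RAM_GB // vm_ram_gb, HOST_VCPU // vm_vcpus)

# Apps delivered via Terminal Services: lean Windows 7 VM
lean = vms_per_host(vm_ram_gb=2, vm_vcpus=1)

# Apps baked into the image: heavier VM
baked = vms_per_host(vm_ram_gb=4, vm_vcpus=2)

print(f"Lean VMs per host:  {lean}")   # 48 (CPU-bound)
print(f"Baked VMs per host: {baked}")  # 24
```

Under these assumptions the baked-in approach halves your density per host, which is exactly the "bloated project" effect described above.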
The right approach depends on the situation. Once again, let's take the example of implementing VDI: the ideal scenario would be an operating system image with nothing on it, just a shell, and then layering applications on top of it using Terminal Services / Citrix XenApp / Ericom or other solutions. This gives you a rich user experience, all the benefits of VDI, maximum user density, and extreme application delivery flexibility. But we all know that is not always the case: sometimes applications do not run properly from Terminal Server, or they lack support for certain hardware peripherals.
So now do I bake it into the image, Eli? Possibly. If you don't have any application virtualization technologies, then yes. If you have application virtualization or streaming, then that would be your second choice, and it would liberate you from having to bake the application into the image. But why is streaming my second choice? Why can't it be my first choice? I don't want to invest in Terminal Services. Because application virtualization, or streaming for that matter, consumes the same resources as an application installed in the OS would. Remember, the application is running locally; it is just not installed locally. So again, this significantly increases resource requirements, whereas Terminal Server uses a shared application model, requires fewer resources, and delivers higher density.
So to summarize things, here is a decision matrix on how to go about determining which technology to use:
- If you can use Terminal Server, that should be your first choice. Terminal Server offers easier application installation and management, higher density, and lower resource requirements at both the endpoint device and the Terminal Server.
- Your second choice is Application Virtualization / Streaming. This approach is more complex than Terminal Server: the application packaging process takes effort, as does the ongoing update process. It does offer easy management and application isolation, but it requires the same amount of resources at the endpoint as locally installed applications.
- If neither solution gets the job done, then, and only then, bake the application into the image.
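The decision matrix above can be sketched as a simple function. The two boolean inputs are hypothetical flags you would determine by testing each application yourself:

```python
def delivery_method(works_on_terminal_server: bool,
                    works_virtualized: bool) -> str:
    """Pick a delivery method per the decision matrix:
    Terminal Server first, app virtualization/streaming second,
    baking into the image as the last resort."""
    if works_on_terminal_server:
        return "Terminal Server"
    if works_virtualized:
        return "Application Virtualization / Streaming"
    return "Bake into the image"

print(delivery_method(True, True))    # Terminal Server
print(delivery_method(False, True))   # Application Virtualization / Streaming
print(delivery_method(False, False))  # Bake into the image
```

The ordering encodes the resource argument made earlier: the shared Terminal Server model wins whenever the application tolerates it, and the image is touched only when nothing else works.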
Now that we have laid all this out, it is important to keep in mind that some applications will not work on Terminal Server but will work virtualized (and vice versa). Some applications will not work virtualized or on Terminal Server, and as a result you have no choice but to install them as part of the image. This is why we use desktop virtualization in the first place: it provides the flexibility of combining multiple technologies to solve a particular challenge. Terminal Server alone does not always work, VDI alone does not make sense, and application virtualization does not cover every circumstance. It really is a blend.
Posted by Elias Khnaser on 03/17/2011 at 12:49 PM