VMware vCenter Now Manages Microsoft Hyper-V

Is VMware starting to feel the heat from Microsoft's Hyper-V, or is the release of VMware vCenter XVP Manager a masterstroke aimed at containing the competition?

If you have not heard yet, VMware recently released vCenter XVP Manager and Converter, a plugin for vCenter that allows you to manage Hyper-V from the same centralized console. VMware killed two birds with one stone. On the one hand, it is tacitly admitting that Hyper-V deployments are taking place and that it has to deal with them. On the other hand, it is also using this tool as a way of winning back those deployments.

The way to look at it is this: if users are managing Hyper-V via the vCenter console, they are already a bit biased towards VMware, and if while using this tool they feel that Hyper-V is not doing what they expect, the conversion process is built right into it. It's brilliant; VMware is playing defense and offense at the same time.

Now on to the technical stuff. The vCenter XVP Manager and Converter can manage the following flavors of Microsoft Hyper-V:

  • Microsoft Hyper-V Server 2008 and Hyper-V Server 2008 R2
  • Microsoft Windows Server 2008 with Hyper-V Role
  • Microsoft Windows Server 2008 R2 with Hyper-V Role

VMware is positioning this tool as a cross-platform virtualization management tool, and I would not be surprised if it expanded to support different flavors of Xen and KVM. This approach reflects a maturity in the VMware thought process and also a sense of reality. There will be different hypervisors in an organization, so instead of fighting it, VMware seems to be embracing it intelligently and competing on features. May the best hypervisor win.

Posted by Elias Khnaser on 03/01/2011 at 12:49 PM


Absolutely, Virtualize Citrix Provisioning Server!

Citrix senior architect Nicholas Rintalin wrote this blog entry today. A few minutes after it hit Twitter, customers started e-mailing me the link: "Eli, have you read this? Citrix says not to virtualize Provisioning Server (PVS); why did we virtualize ours?" My immediate reply to these customers, before even reading the blog, was: "Are you having any technical issues with PVS right now, or have you had any since it was installed as a virtualized server?" The answer was, "Well no, but Citrix...," and my answer was, "Well, there is your answer to Citrix."

Then customers started e-mailing me, asking whether they should invite Citrix to see how PVS works virtualized, streaming to 800 physical machines and more than 1,000 virtual machines. My answer: absolutely, e-mail the author; he is from Citrix Consulting Services.

When I finally got around to reading the article, I started to giggle, because you can tell the author (whom I have never met, by the way) is a consultant, and like a consultant he doesn't give a straight answer. But he did, evasively and eloquently, say the following (paraphrased from his blog, so these are my words):

  1. You can virtualize PVS, just not on XenServer, because we don't support LACP yet and there is a CPU bottleneck in Dom0. But considering we have a hypervisor that will not support this properly, we can't flat-out recommend it. However, you might get away with virtualizing it using ESX.
  2. The author then quotes Ron Oglesby's book and finally says, "It depends."

As for the second point: Ron I do know, and I salute him. But quoting Ron's book is not fair. That book was written many years ago, when we were still recommending against virtualizing Exchange, SQL Server or XenApp, for that matter. Today, unless there is a hardware requirement that prevents you from virtualizing a server, or the hardware specs are completely unacceptable (such as loading a VM with 64GB of memory), there should be no viable reason not to virtualize anything. With VAAI, StorageLink and the ability to attach Raw Device Mappings directly to VMs, these instances are extremely rare.

Furthermore, as a consultant, I always recommend testing before ruling something out. Instead of making an assumption, test it. I'm not sure what Citrix Consulting is seeing, but in all the deployments we have done, PVS is virtualized on vSphere and it runs like a champ.

Now consider this scenario: what if I am deploying a large XenDesktop-on-vSphere model and I want to place a PVS server on each ESX host and stream locally, thereby keeping the streaming traffic off the physical network -- is that a bottleneck too? No, that is perfect; that is my version of Citrix IntelliCache without the hypervisor lock-in.

What if I virtualize PVS today and I dedicate a physical NIC to it and isolate it on its own vSwitch? Would that provide adequate performance? Absolutely, and if you would like to see it in action, drop me a note.
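
For those wondering what that setup looks like in practice, here is a minimal sketch -- my own illustration, not anything Citrix or VMware publishes -- of carving out a dedicated standard vSwitch and port group for PVS streaming traffic, using the pyVmomi Python bindings for the vSphere API. The vCenter name, host name, NIC and labels are all placeholders.

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()            # lab use only: skip cert checks
    si = SmartConnect(host="vcenter.example.com",
                      user="administrator", pwd="secret", sslContext=ctx)
    content = si.RetrieveContent()

    # Locate the ESX host that will run the virtualized PVS server.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    host = next(h for h in view.view if h.name == "esx01.example.com")
    net_sys = host.configManager.networkSystem

    # New standard vSwitch bound to a dedicated physical NIC (vmnic3).
    vss_spec = vim.host.VirtualSwitch.Specification(
        numPorts=128,
        bridge=vim.host.VirtualSwitch.BondBridge(nicDevice=["vmnic3"]))
    net_sys.AddVirtualSwitch(vswitchName="vSwitch-PVS", spec=vss_spec)

    # Port group that the PVS VM's streaming NIC will attach to.
    pg_spec = vim.host.PortGroup.Specification(
        name="PVS-Streaming", vlanId=0, vswitchName="vSwitch-PVS",
        policy=vim.host.NetworkPolicy())
    net_sys.AddPortGroup(portgrp=pg_spec)

    Disconnect(si)

Point the PVS server's streaming NIC at that port group and the isolation described above is in place.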

As consultants, we are obligated to give the pros and cons of the products we recommend, but I can see how, when you are a consultant working for a vendor, those obligations can become difficult. So I sympathize with Nick. What is he going to say -- don't use XenServer, put it on vSphere? That would not work. But nonetheless, please don't blog about it; you confuse people if you are unable to be neutral.

I know, I know, you did mention that it will work with vSphere. You said that nicely, and you did say that everyone has a different opinion on this, but the reaction I got was that Citrix is not recommending we virtualize PVS.

But I say it works wonderfully!

Posted by Elias Khnaser on 02/24/2011 at 12:49 PM


How Will VMware Respond to RemoteFX in Hyper-V?

VMware announced several upcoming features of its next-generation virtualization infrastructure, vSphere 5, at Partner Exchange this month. While the announced features are cool and were more or less expected, I was hoping to see what vSphere 5 was going to offer as a competitive edge against Microsoft's integration of RemoteFX with Hyper-V.

Back in January, Microsoft released Service Pack 1 for Windows Server 2008 R2 and Windows 7. As part of this release, Hyper-V received a "nitro" boost with the integration of RemoteFX. This is huge for many reasons.

First, RemoteFX allows you greater control over and more direct access to the graphics processing unit, which will allow you to virtualize servers that may not have been candidates for virtualization before. But more importantly, RemoteFX is huge for virtual desktop infrastructure, as it significantly enhances the user experience.

How will VMware respond, and when? Can VMware extend support for RemoteFX to vSphere 5? Is that even possible from a licensing perspective? Or will VMware follow Microsoft's lead and integrate its own protocol, PCoIP, into ESX?

Since the majority of workloads being virtualized are Windows servers and desktops, VMware would be well served by supporting RemoteFX at the hypervisor level. While the majority of virtualized workloads -- servers and desktops alike -- are still on vSphere thus far, in order for VMware to maintain pole position it has to extend this type of support to RemoteFX. Otherwise, virtualizing desktops could potentially become an area where Microsoft -- and Citrix, for that matter -- can chip away at VMware's lead in the hypervisor space.

Posted by Elias Khnaser on 02/22/2011 at 12:49 PM


You Want To Do Converged Infrastructure, But What About Your Existing One?

Last time, we discussed converged infrastructure and how new data centers could take advantage of it. What I've proposed so far may make it seem as though you will need to immediately rip and replace components of your data center.

Surely, no one -- including me -- is suggesting you simply take all your existing hardware and do away with it in favor of a converged infrastructure. What we are suggesting is that, as this hardware reaches end of life and it is time to replace it, you take a serious look at how you acquire and build your data center.

That being said, using existing servers, storage and networking infrastructure is absolutely possible, keeping in mind the manual, labor-intensive nature of such a task. Of course, with this approach you will miss out on the centralized administration and monitoring capabilities of converged infrastructures.

Network Virtualization
Going down this route, however, will require that you at least entertain the idea of converging the different types of fabrics in your network.

Today, you likely have at least Ethernet and Fibre Channel. Many organizations will also have 10Gb Ethernet, and some may also have FCoE. Managing them all can be cumbersome, expensive and difficult, even down to minute details that may seem irrelevant, such as the different types of cables that have to be managed, the different types of adapters in each server and so on.

There are technologies that alleviate the problem. Companies like Xsigo, which specializes in virtual I/O, allow you to buy a sort of insurance policy against any type of fabric you might introduce into the network later. Xsigo has an appliance that you install between the servers and all the different types of fabrics available. It accepts all these fabrics as inputs and presents them in a standard format, which means that if a new technology is developed tomorrow, you can simply virtualize it behind the Xsigo appliance and keep the same cabling and the same adapters in your servers.

This type of technology will obviously be included in most converged infrastructures, but you can also purchase it stand-alone for specifically these use cases.

Storage Virtualization
We have been talking about storage virtualization for several years now, through products like the NetApp V-Series, HDS USP-V (Universal Storage Platform-Virtualization) and IBM SVC (SAN Volume Controller). These platforms are very flexible in terms of integrating various vendors' storage behind their front-end controllers, and they use the native virtualization features of those controllers to deliver flexibility, elasticity and ease of use.

There are still environments where storage is bought and managed by different business units based on business requirements. The most logical approach is to consolidate all storage purchases, storage management and operations, moving away from these islands and toward a single approach.

As you venture into the private cloud and enable automation, you are absolutely not killing your legacy environments on day one. Running virtualized environments in parallel with legacy environments is typically how customers have found success in adopting private clouds.

The end result, with a completely virtualized stack across these layers, is an easier flow of information, a substantial reduction in operational expenses (including the migration and management of these assets) and a path to bringing automation into the datacenter.

Custom Hardware, Custom Software
You might also consider custom hardware -- networking, hosts and storage -- paired with custom software that includes the orchestration and management layers, to achieve the same end result. It's absolutely not necessary to purchase a converged stack, but management and orchestration might then require custom programming and the development of tools for deployment and provisioning.

If you are a service provider like Google, Amazon, Salesforce and many others out there, and IT services are your business, taking the customized hardware and software route to build your next-generation data centers is possible. But if IT is not your core business, you might want to look at the predefined solutions available in the market.

There are private clouds built today with Supermicro, HP or Dell blades, coupled with Xsigo, HP Virtual Connect or Cisco virtualized networking. On the back end, storage is provided by HDS, EMC, NetApp or others, and virtualization is provided by VMware, Microsoft, Citrix or others. The problem you'll run into is providing centralized management and orchestration of all these disparate assets from various vendors. Though it is not impossible to custom-create your own management and orchestration tools, this approach can be costly and time-consuming.

The End Game!
The private cloud concept is a stepping stone, a necessary hop towards organizations embracing complete public clouds. In the technology world of tomorrow, organizations will no longer build data centers, and we will no longer worry about acquiring hardware and software. We will simply use technology the way we use our electrical and water utilities today: pay as you go for what you use. Once all the security and regulatory compliance requirements have been met, and once the communications pipes are capable of supporting the traffic and I/O, organizations will be at a point where public clouds are their data centers and they provision their resources accordingly.

Posted by Elias Khnaser on 02/15/2011 at 12:49 PM


Automated Workload Shifts and Configuration

Last time, we talked about the efficiencies that public clouds can gain through converged infrastructures. What we learned there applies as well to private clouds.

One of the most appealing ideas behind building a private cloud is the ability to automatically and dynamically shift workloads to achieve target SLAs. As mentioned earlier, private clouds are highly virtualized environments, which allows allocation of the necessary storage I/O, network I/O and compute resources to meet SLAs.

Take storage I/O as an example: if a particular workload requires more IOPS, it can be automatically moved to a different LUN with more spindles to satisfy the demand. Better yet, take the example of virtual I/O: what if a particular workload requires more virtual HBAs or more virtual NICs? You can automatically deliver the necessary components with no interruption in service.
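
To make the storage I/O example concrete, here is a hedged sketch -- my own illustration, not a feature of any particular cloud stack -- of how that move could be scripted against vSphere using the pyVmomi Python bindings: a Storage vMotion of a busy VM to a faster LUN with no downtime. The vCenter, VM and datastore names are placeholders.

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVim.task import WaitForTask
    from pyVmomi import vim

    def find_by_name(content, vimtype, name):
        # Return the first managed object of the given type with a matching name.
        view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
        return next(obj for obj in view.view if obj.name == name)

    ctx = ssl._create_unverified_context()            # lab use only
    si = SmartConnect(host="vcenter.example.com",
                      user="administrator", pwd="secret", sslContext=ctx)
    content = si.RetrieveContent()

    vm = find_by_name(content, vim.VirtualMachine, "busy-sql-vm")
    fast_ds = find_by_name(content, vim.Datastore, "FC-15K-LUN07")

    # Storage vMotion: relocate the VM's disks to the faster LUN while it keeps running.
    spec = vim.vm.RelocateSpec(datastore=fast_ds)
    WaitForTask(vm.RelocateVM_Task(spec=spec))

    Disconnect(si)

In a private cloud, the same call would simply be triggered by the orchestration layer when an IOPS threshold is crossed, rather than by an administrator.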

Infrastructure Monitoring and Application Monitoring
The oldest challenge in any IT department is infrastructure monitoring. Server manufacturers have their suite, networking vendors their own, storage vendors, of course, have theirs, and so on. This is not to say there is no product out there that can monitor all of them, but third-party products always lack one feature or another. The advantage of a converged infrastructure is also in its monitoring capabilities: the management software comes bundled with all the modules needed to monitor your infrastructure and all its components, and to alert you in the event of warnings, errors or failures.

Gone are the days of several different infrastructure monitoring packages; here are the days of simplicity and consolidated packages.

The primary focus of admins at every level is to verify that the infrastructure is operational -- servers, VMs, HBAs, storage, switches -- and that is typically achieved through infrastructure monitoring tools. In the race to monitor the infrastructure, though, not a lot of emphasis is given to application monitoring. Sure, DBAs monitor query times and the response times of the attached storage and components, but the overall focus should be application uptime, including failover and any additional resources that are needed.

With converged infrastructure and private clouds, the focus now shifts to application uptime and meeting required SLAs. With the use of CMDB (configuration management database) systems, applications can now be managed in real time and resources provisioned based on their requirements.

All the large virtualization providers are working on tools that will enable application monitoring within VMs. We should see these tools as part of their next-generation strategies, enabling a much more granular view at the application layer. This will allow you to drill down from infrastructure monitoring tools right into application monitoring tools, giving the converged infrastructure admin an end-to-end view and the detailed, much-needed insight into applications required to verify uptimes, SLAs, failover, issues, dependencies, response times, resource issues, real-time patching and so on.

Utility-Based Computing
Utility-based, on-demand computing has emerged at the core of private, public and hybrid clouds: pay as you go, and pay for the resources that you utilize. Being able to automate your data center so that workloads can either run within a private cloud or move to a public cloud as required is the future, and it will help hybrid clouds evolve.

Private clouds typically sit inside a firewall, with customer-managed resources. As private clouds become a standard within the industry, we should see the next big wave move towards hybrid clouds that utilize both private cloud resources and public cloud resources on an as-needed basis. This is where the flexibility factor comes in: you are able to shift your applications, virtual machines, storage and compute resources automatically between private and public clouds to satisfy SLAs, or to expand your infrastructure on a utility-based compute model.

Next-generation applications are increasingly being designed and deployed using standard cloud-based programming frameworks. This will enable the use and movement of data both inside and outside of your firewalls, across private and public clouds.

Today, the primary reason for not running business-critical applications on public clouds is secure multi-tenancy (SMT). With time, this will be resolved and addressed appropriately, thereby unlocking the full potential of the public cloud.

Posted by Elias Khnaser on 02/10/2011 at 12:49 PM


Converged Infrastructures Enable The Private Cloud

There is nothing technologically groundbreaking about a converged infrastructure. It is, very simply put, an elegant solution that contains all the resources you need -- servers, storage and network -- connected by orchestration software that allows you to manage the entire stack from a centralized console. This reduces the need for racking, stacking and inter-component configuration (e.g., configuring servers to connect to the network and storage), and it reduces cabling. If you need to grow your infrastructure, you add another rack of converged infrastructure, which is then added to the pool of hardware resources available in your cloud.

No longer are physical resources siloed to applications; rather, all physical resources are part of a larger pool that is aggregated on an as-needed basis to satisfy workloads.

Acquisition
Converged infrastructures change the way we acquire equipment for our datacenters, and they simplify that acquisition. Typically, building a datacenter requires that you acquire each component independently. You purchase your servers from one manufacturer, your network equipment from another, your storage from a third and all the other miscellaneous components from a fourth. These components arrive separately and require that you put them all together, physically and logically.

Converged infrastructures simplify the acquisition process -- you no longer think of the components from a physical perspective but from a workload perspective. You have set requirements; for example, you dictate the need for a specific number of virtual machines with a certain storage IOPS level, and so on.

You are then presented with different options from which to choose. The beauty here is that you can easily acquire more converged infrastructure and expand if you need to. Once you have made your selection, your rack arrives prepopulated with all the necessary components, ready to be powered on and connected to your private cloud.

It is extremely difficult to ignore converged infrastructures when building or refreshing your datacenter. The automation they bring to the table is a key enabler of the private cloud.

Deployment
Once the acquisition process is complete, we are faced with the deployment challenge. Let's examine some of the things we deal with today when deploying resources in our datacenter. As I said, we acquire hardware from multiple vendors, which prompts us to run tests to ensure software and hardware compatibility. These tests can sometimes lead us to update, upgrade or reinstall software and firmware, and in some instances to replace hardware because of incompatibilities.

How many times have you had to ask a manufacturer to write special code to enable compatibility with other hardware or software components? How many times have you purchased a storage area network only to learn from the manufacturer that you cannot install it yourself without voiding the warranty? So now you need outside engineers to do the initial deployment. This process can take weeks or months to accomplish, all at the expense of the business.

Compare this to a converged stack: it ships pre-tested, preconfigured and ready to be racked and stacked. Power it up, connect it to your network and, voila, you have the perfect turnkey solution, up and running in a matter of hours rather than weeks or months. All of its components are verified against a compatibility matrix for the virtualization platform you have selected.

Provisioning
After deployment comes provisioning. Today, provisioning is a manual or, at best, a semi-automated process. Typically, requests to provision virtual machines and/or storage get queued with the virtual infrastructure admin or storage admin, and they require multiple layers of approval before they can be implemented. These are almost bureaucratic delays, caused by the fact that our IT departments are functionally divided the same way our acquisition strategy is. Storage, network and server teams are typical at any organization, and requests will usually cross all of these layers before they are completed.

A Converged IT Dept. is a Better IT Dept.
A converged infrastructure completely automates the provisioning process down to a few mouse clicks by allowing the orchestration software that came with your infrastructure to dynamically allocate the resources you requested. This naturally leads to the collapse of these layered teams within the IT department under the single banner of "datacenter administrator," which forces cross-training and blends the teams -- an ambitious endeavor very few organizations have been able to achieve. Today, technology forces the change.

This level of automation and IT department consolidation directly affects an organization's operational expenditures. It forces a more efficient use of IT resources by breaking down isolated team functions.

Posted by Elias Khnaser on 02/08/2011 at 12:49 PM


Orchestration Layer: Skyway To The Private Cloud

Virtualization is the key enabler of the private cloud, but how do we make the transition from server virtualization to a private cloud? The answer to this question lies with the public cloud, and the first step is to understand the difference between a virtualized data center and the private cloud.

The public cloud, much like our utilities, functions as transactional, pay-as-you-go access to resources. This means that users can access their resources on demand through a self-service portal. This is one of the defining models of the private cloud that sets it apart from the virtualized data center, where IT still needs to manage resources in a manual or semi-automated manner that does not fully optimize time or even the resources being used. Virtualized data centers do abstract resources, but they are not able to pool those resources for transactional access. The question, therefore, is how do we gain this functionality? The answer is through Virtual Lab Automation (VLA), which plugs one of the most crucial gaps on the road to the private cloud: an orchestration layer capable of managing pooled resources and users from a centralized console.

VLAs were originally built for development and test environments but are now being leveraged as cloud management solutions because of their ability to provide VM management, monitoring, lifecycle management and self-service portals.

If your servers are already virtualized, adding the orchestration layer to manage the flow of your applications may create an easy migration to the private cloud for your applications.

VLAs are crucial because they are management software that serves both the user community, through self-service portals, and IT administrators, by automating the virtual machine lifecycle. VLAs can track a VM's progress within the computing infrastructure from inception all the way to decommissioning. However, the most valuable function that VLAs provide is a self-service system that users can leverage at any time to satisfy their IT needs in a timely, more agile and more responsive manner, while completely adhering to IT policies.

VLAs also provide an amazing capacity for policy design, monitoring and enforcement. Anytime there is a self-service portal where users can provision their own resources, a well-defined policy is required to avoid sprawl and to enforce restraints on user consumption. You can configure policies that enforce lease times; that way, the development team that is provisioning VMs knows that its lease on those resources will expire in, say, three weeks unless renewed. Once the lease expires, the resources are added back to the pool. Another form of policy is quotas, where a specific group of users can only run a set number of concurrent VMs at a time. This in turn makes it easier to generate consumption reports, which can then be used for chargeback purposes and to justify infrastructure expansion.
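
To make those two policy types concrete, here is a minimal, self-contained Python sketch -- my own illustration, not any vendor's VLA API -- of how lease expiration and per-group quotas might be modeled behind a self-service portal.

    from dataclasses import dataclass, field
    from datetime import datetime, timedelta

    @dataclass
    class Lease:
        vm_name: str
        owner_group: str
        expires: datetime

    @dataclass
    class PolicyEngine:
        quotas: dict                      # group name -> max concurrent VMs
        leases: list = field(default_factory=list)

        def request_vm(self, vm_name, group, lease_days=21):
            # Enforce the quota before granting a new three-week lease.
            active = [l for l in self.leases if l.owner_group == group]
            if len(active) >= self.quotas.get(group, 0):
                raise RuntimeError(group + " has reached its concurrent VM quota")
            lease = Lease(vm_name, group, datetime.now() + timedelta(days=lease_days))
            self.leases.append(lease)
            return lease

        def reclaim_expired(self):
            # Return expired VMs to the pool, as described above.
            now = datetime.now()
            expired = [l for l in self.leases if l.expires <= now]
            self.leases = [l for l in self.leases if l.expires > now]
            return [l.vm_name for l in expired]

    # The dev team gets five concurrent VMs on three-week leases.
    engine = PolicyEngine(quotas={"dev-team": 5})
    engine.request_vm("dev-web-01", "dev-team")
    print(engine.reclaim_expired())       # empty until a lease passes its expiration date

A real VLA wraps exactly this kind of logic in a portal, and feeds the reclaimed VMs back into the shared resource pool automatically.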

VLAs are available from many software vendors. VMware has its vCenter Lab Manager, Microsoft its Visual Studio Lab Management, VMLogix its LabManager, and Surgient (now acquired by Quest) has fantastic software to accomplish this task. And there are many others.

VLAs are the bridge that leads an organization from the virtualized datacenter to the private cloud while incorporating all the flagship features of the public cloud. This will automatically lead to significant IT cost reductions through increased use of existing resources and streamlined, automated management that responds better and more quickly to the needs of the business.

Posted by Elias Khnaser on 01/31/2011 at 12:49 PM


Kinect, Gaming Converging on Technology

At the end of 2010, I wanted to make a set of predictions but got preoccupied and missed my chance. One prediction I wanted to make was that the gaming industry would significantly influence the technology sector in a positive way.

Take, for instance, the very successful gaming franchise that Microsoft built in the Xbox. Microsoft came late to the game, yet it managed to climb the charts and become number one, even against rivals as big as Sony. Microsoft's gaming division keeps reinventing itself. Its latest creation, Kinect, allows you to interact with games through cameras and sensors that track your movements. The prediction I was going to make is that at some point in the near future, Microsoft will incorporate the Kinect technology into its core operating system.

OSes have been very static, very 2D, for a very long time. It's almost inevitable that to stay ahead, Microsoft would have to come up with something very creative, and what better than interacting with your computer with your hands, without a keyboard or mouse, a la Tom Cruise in Minority Report?

You have Kinect on one hand, and on the other we're seeing many new gaming companies, such as Gaikai, building gaming clouds. These companies want to deliver any game, no matter how graphically intensive, via the cloud. This means no portion of the game would be installed locally; it would run completely centralized. That requires a very robust remote protocol, something more advanced than Citrix's ICA/HDX. However, if the likes of Gaikai are investing in ways to deliver graphically intensive games without any local installs, that sounds a lot like desktop virtualization infrastructure to me. The advantage is that you now have the creativity of the gaming industry investing in research and development to create a very low-latency remote protocol. If they succeed, whether in collaboration with technology companies like Citrix or without them, it will have a significant effect on technologies like desktop virtualization.

Since some of the challenges of desktop virtualization are dealing with graphics-intensive applications and latency issues, the advent of the cloud is now lending a helping hand in the creation of a more robust remote protocol. If Gaikai can deliver a game like Activision's Call of Duty: Black Ops without installing any portion of the game locally, then we surely can use desktop virtualization to deliver any application, intensive or not, to any device with a phenomenal user experience.

Cloud computing certainly started as a buzzword, just another marketing term, but it has since evolved into an identifiable framework that is bound to change the way we use technology in every aspect. Google's push for browser-based bare-metal computers, Microsoft's RemoteFX integration with Internet Explorer and cloud gaming companies like Gaikai will surely reshape and change our industry, for better and forever. The only question is, how quickly will these changes materialize?

Posted by Elias Khnaser on 01/26/2011 at 12:49 PM


Why XenDesktop on vSphere?

I am constantly asked this question when designing a virtual environment: Why vSphere and not XenServer? Isn't "Citrix on Citrix" a better choice? The answer is simple: when designing a virtual infrastructure, I take into account not just what that infrastructure will do for desktops, but what it will do for servers as well. I am looking to simplify and maximize the investment for the organization I am working for. vSphere delivers unparalleled performance, coupled with all the features that an enterprise needs for both servers and desktops.

I am a strong believer in combining best-of-breed software. That's how we have always designed and built systems for our organizations. But let's break it down:

vShield Endpoint
This feature is crucial for desktop virtualization. It allows us to offload the anti-virus functions from the individual VMs to a virtual appliance. Without vShield Endpoint, you would have to load an anti-virus agent in each VM, a method that requires significant storage horsepower, especially when the anti-virus software is updating or scanning.

There have been attempts by various anti-virus vendors to randomize how the scans take place and how the updates are applied, in order to minimize the effect on storage and maximize the user experience. These efforts, while very welcome, are not enough. vShield Endpoint liberates these VMs, which translates into a significant performance increase and a significant cost reduction from a storage perspective. Sure, the anti-virus appliance comes at a cost, but it's nowhere near the cost of the storage you would need without it.

Memory Management
vSphere 4.1 has four different memory management techniques: transparent page sharing, ballooning, memory compression and hypervisor swapping. Now granted, when designing a DVI environment, we don't design with memory management in mind. Still, it does help to know that you can always count on these technologies in the event that a host goes down or memory is scarce for any reason. It is also important to be able to provision memory temporarily or unexpectedly.

Security
VMware places significant importance on security. Its software has been EAL4+ certified since June 2008, and its ESX 2.5 product was certified EAL 2 back in 2004. The importance of EAL, which stands for Evaluation Assurance Level, is that it certifies that a product was methodically designed, tested and reviewed in compliance with the international standard for computer security. XenServer 5.6 is EAL 2 as of 2010. This certification does not mean XenServer is less secure than vSphere (I am not implying that by any means), and you still have to take all the necessary measures and follow best practices, but it does show that VMware emphasizes security a great deal.

The importance of security with virtual desktops is twice that of servers, given the number of virtual desktops that could potentially exist. Furthermore, one should note that while physical desktops were less secure, they were decentralized, so compromising a single desktop may not have been a big issue. Virtual desktops are centralized in the datacenter; thus, properly securing them is imperative. It is worth noting here that Microsoft's Hyper-V has been EAL4+ certified since 2009, which demonstrates that Microsoft also takes security very seriously, especially from a hypervisor perspective.

In addition to all this, VMware also has the VMsafe APIs, which partners can leverage to build security applications that integrate with vSphere.

Storage Integration
Almost every storage array in existence either already has, or will soon have, support for VMware's vStorage APIs for Array Integration (VAAI). VAAI offloads many resource-intensive tasks to the storage array, thereby significantly enhancing performance while reducing host overhead. Citrix also has a similar technology, known as StorageLink. While some may argue that StorageLink is better than VAAI, the fact remains that only a handful of storage arrays support StorageLink. I do think this number is bound to go up, especially if Citrix extends StorageLink to Microsoft's Hyper-V.

Better Virtual Networking
vSphere's virtual networking is rock-solid, with many features that are missing from other hypervisors, notably network traffic shaping, per-VM resource shares, QoS and support for high I/O scalability via direct drivers, among others.

All this being said, I don't want this to sound like I am slamming Citrix XenServer. I am merely stating why I typically recommend XenDesktop on vSphere. I get asked this question more often than not, and I felt compelled to share my reasoning. XenServer is a fine product with a bright future, but I have to recommend to my customers a solution that will address not only their desktop virtualization needs but also their server virtualization needs, while integrating as tightly as possible with storage and leveraging virtual networking to the fullest.

Posted by Elias Khnaser on 01/24/2011 at 12:49 PM


Citrix XenDesktop 5: Some Questions!

Citrix XenDesktop 5 is arguably the most powerful desktop virtualization solution on the market today. Compared to its predecessor, XenDesktop 4, it has some fantastic enhancements. I'm a big fan of the product, but by the same token I have many questions about why it was released with some important features completely missing.

Where is XenDesktop Setup Wizard? When Citrix introduced XenDesktop 5, I was looking forward to seeing how much more tightly Provisioning Services would be integrated with XenDesktop. Instead, I noticed that Citrix sort of brushed off Provisioning Services in favor of Machine Creation Services. Really? You ignored the integration of the most powerful technology in XenDesktop in favor of MCS? This is apparent because not only did Citrix not integrate the XenDesktop Setup Wizard into the Desktop Studio console, it eliminated the wizard altogether. How difficult would it have been to integrate it into Desktop Studio? It should have been very simple: while creating a catalog, if I chose streaming, Desktop Studio would run through the initial wizard and establish all the connections and baselines, and after that, creating VMs would be simple. It would have been perfect. Why in the world would you remove this wizard?

Today, when I want to use PVS with XD5, it is literally a manual process. Actually, a better workaround is to build an XD4 controller, run the XenDesktop Setup Wizard, create all your VMs and then move them to XD5. That, or write a script. But why do I need to waste my time writing a script and validating it, when maybe it works and maybe it doesn't? The tool was there. Now I hear that the PVS team is releasing the tool soon. I am very disappointed that this was not integrated from the get-go.

Where is the Active Directory Integration Tool? Here's another example of a very handy tool that was in XD4 but completely disappeared in XD5. Instead, they want us to use PowerShell. I think the open source effect of the XenSource acquisition is taking its toll on Citrix -- so many features are now command-line-based. While I don't mind the command line, why in the world would this not be under the properties of the controller in Desktop Studio? Why do I have to search for it, run commands and figure out the proper syntax? It's a complete waste of time and adds unnecessary complexity.

Where is the Logoff Behavior? This one is my favorite. It was sitting in the GUI, happy and minding its own business. Then someone intentionally said, let's make it command-line only instead; we don't have enough command-line stuff and this will make us geeky. Come on, Citrix, why move this? It was there already; what is the benefit? This feature was useful when building an environment and running tests, when you did not want VMs to reboot every time a user logged off.

In the final analysis, XenDesktop 5 is a major and very welcome improvement over XenDesktop 4; its features, functionality and performance are impressive. However, I urge Citrix to address the concerns above and not to sideline Provisioning Services. It is the heart of XenDesktop and its star feature. Machine Creation Services is a nice-to-have, but it cannot scale or perform anywhere near what PVS can.

My other wish for Citrix is that it not integrate PVS into XenServer, thereby making it proprietary to that hypervisor. The power Citrix holds today is that PVS is hypervisor-agnostic. It should keep it that way -- that is a position of power, and Citrix shouldn't give it up.

Posted by Elias Khnaser on 01/20/2011 at 12:49 PM


Citrix, VMware, Storage Vendors Invited To Talk

Without a doubt, desktop virtualization is the hottest topic in virtualization today. What keeps fueling this interest in desktop virtualization infrastructure technologies is the number of use cases you can now introduce to justify it.

Take the Motorola Atrix 4G example that I wrote about in this blog entry. There is a lot of excitement about DVI, but there are also a lot of challenges with implementing it -- technical challenges such as boot-up storms, login storms, anti-virus storms ... let's just say the weather is frightful in DVI. Everyone is trying to come up with solutions to these challenges, and you will see storage vendors running like chickens with their heads cut off trying to squeeze out every last IOPS to make DVI financially viable to deploy in enterprises.

Don't get me wrong. Storage vendors have been very innovative in some of the solutions that have come to market, such as dynamic tiering and the ability to detect and dynamically move workloads that get hot between the different tiers of disk to improve performance. Nonetheless, while the technology is cool and very promising, it cannot detect and react fast enough to avert the challenges presented by the different DVI storms we mentioned.

On the other hand, you will find that DVI vendors are also trying hysterically to come up with solutions to address these issues. Take Citrix, for instance. Its IntelliCache allows you to cache a copy of a centrally stored and managed virtual disk locally on the hypervisor host and then stream that image to all the resident VMs on that host. In theory, this sounds great -- you are now using cheap local disk, caching a copy of the centrally stored and managed VHD, you avoid boot-up storms and login storms, etc. ...

Sounds perfect, right? Not quite. In a world moving more and more toward cloud computing, does Citrix really expect us to build these islands of siloed hosts that stream to locally resident VMs? What if a particular host is experiencing heavy utilization and I want to migrate some VMs? What if I want to dynamically live migrate VMs in order to rebalance the load? Am I expected to ask the user to log out and log back in? This is supposed to be cloud computing. I want automation, and Citrix's Simon Crosby is one of the most enthusiastic people out there about the cloud. So how does this fit in?

IntelliCache is cool, but with all due respect, local disk is dead. Find me another solution where I don't have to give up any of the features or flexibility gained with virtualization. When building DVI, a lot comes into play from an architecture and design perspective. What if I want to use affinity rules to make sure that certain VMs are never present together on the same host? What if I want to separate VMs across different hosts in different blade chassis? After all, in large deployments, using blades is inevitable.

So what is the answer? How about storage vendors and DVI vendors creating a task force, a group of smart people who can sit at a roundtable and explain the challenges to one another? Instead of the storage guys trying to find metrics and trying to understand DVI, and the DVI guys trying to understand storage and building technologies around it, how about: hey, storage guys, we have this issue, we can give you this data, how can we resolve it? Storage vendors have dynamic sub-LUN tiering, but it is not enough for DVI because it does not react fast enough. Great -- can the DVI folks maybe provide more information so that the storage vendors can build technologies that detect a certain pattern and react quickly when they see it? Maybe the storage array could detect a signature in the data that causes login storms or boot-up storms and move it immediately to a faster tier of disk for processing.

Can we, for example, enhance VAAI and StorageLink so that they give the array more information, which would allow it to react faster? What if XenDesktop or VMware View had APIs similar to VAAI and StorageLink that tie directly into the array? In the event of a login storm, could XenDesktop or VMware View slow the number of VMs being powered on or logged into based on how much the array and the underlying storage can handle?
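
No such API exists today -- that is exactly my point -- but purely as an illustration, here is a short Python sketch of what that kind of feedback loop might look like. Both helper functions are hypothetical placeholders standing in for broker and array calls that would have to be invented; they are not real XenDesktop, View or array interfaces.

    import random
    import time

    def array_write_latency_ms():
        # Hypothetical stand-in for a latency metric the array would expose.
        return random.uniform(5, 40)      # simulated milliseconds

    def power_on_next_desktop():
        # Hypothetical stand-in for a broker call that powers on one more VM.
        print("powering on one more desktop")

    LATENCY_CEILING_MS = 20               # assumed threshold the array can sustain

    def boot_storm_throttle(pending_desktops):
        # Power on desktops only as fast as the array says it can absorb them.
        while pending_desktops > 0:
            if array_write_latency_ms() < LATENCY_CEILING_MS:
                power_on_next_desktop()
                pending_desktops -= 1
            else:
                time.sleep(5)             # back off and let the storm subside

    boot_storm_throttle(pending_desktops=3)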

The bottom line is, virtualization created a marriage between storage vendors and software vendors that will not be dissolved anytime soon. So, while the technologies have married and have children already, the in-laws are still not getting the message that they need to get along.

I invite Citrix and VMware CTOs Harry Labana, Simon Crosby and Steve Herrod to form these task forces that can reach out to the EMCs, HDSs, HPs and IBMs of the world and talk to their storage folks, give them data they can use so that when they build these systems, they can support the innovations that you are developing.

Posted by Elias Khnaser on 01/18/2011 at 12:49 PM


Integrating Citrix XenDesktop 5 with VMware vCenter

I am a big believer in combining best-of-breed technologies to achieve the best possible solution. Citrix XenDesktop 5 is very powerful desktop virtualization software, packed with features and functionality. However, XenDesktop requires a virtual infrastructure, and VMware's vSphere is without a doubt the best of the bunch. But what happens when you try to combine these two best-of-breed technologies? Well, the end result is great, but not without some heartache.

In order to integrate them, XenDesktop needs to communicate with VMware vCenter's SDK service. Simple enough, right? Not quite; the caveat comes into play when you try to access the SDK over HTTPS. By default, vCenter creates a self-signed SSL certificate for the hostname "vmware." In most vCenter installations the name of the server is not "vmware," and as such the secure HTTPS connection fails when connecting from the XenDesktop 5 controller. This is in addition to the fact that the certificate was signed by an untrusted certificate authority to begin with, as is the case with most self-signed SSL certificates.
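
If you want to see for yourself which hostname the certificate was actually issued for, here is a quick sketch -- my own addition, not part of any vendor procedure -- that pulls the certificate off the vCenter server and prints its subject. It assumes Python with the third-party cryptography package installed, and the server name is a placeholder.

    import ssl
    from cryptography import x509
    from cryptography.hazmat.backends import default_backend

    VCENTER = "vCenterServer.domain.com"

    pem = ssl.get_server_certificate((VCENTER, 443))   # fetch the certificate vCenter presents
    cert = x509.load_pem_x509_certificate(pem.encode(), default_backend())

    print("Subject:", cert.subject.rfc4514_string())   # look for CN=vmware here
    print("Issuer: ", cert.issuer.rfc4514_string())    # matches the subject if self-signed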

To resolve this issue, you can do one of three things. Option one is to purchase an SSL certificate for your vCenter server from a third-party certificate authority. Option two is to issue a certificate from your own enterprise certificate authority. Option three is to trust the existing self-signed certificate. To do that, you can follow these steps:

  1. If you are logged in as a local administrator, open Internet Explorer and navigate to https://vCenterServer/
  2. If you are not logged in as a local administrator, or as a user with sufficient permissions, it is very important that you Shift-right-click Internet Explorer, run it as an Administrator and then navigate to https://vCenterServer/
  3. You will get a warning screen that the SSL certificate is not trusted; select Continue to this web site (not recommended)
  4. Click the Certificate error in the Security Status bar and select View Certificate
  5. Click Install Certificate
  6. When the Certificate Import Wizard launches, select Place All Certificates in the following store and click Browse
  7. When the Select Certificate Store window comes up, make sure you select the check box for Show physical stores
  8. Find and expand Trusted People, select Local Computer and click OK
  9. It is important to note that if you don't see the Local Computer option under Trusted People, you are not logged in with a user that has sufficient rights; therefore, you must run Internet Explorer as an Administrator.
  10. Click Finish to complete the certificate import process
  11. Click OK when you receive the import successful window
  12. Close your browser, re-open it and browse to your vCenter server. The browser should now trust your vCenter server, so you should not receive a certificate error. That is how you can verify that the process was successful
  13. Configure the hosting infrastructure settings on the XenDesktop 5 controller to point to https://vCenterServer.domain.com/sdk
  14. Voila, it works like a charm and there are no errors when creating catalogs. (If you want an independent sanity check of the SDK endpoint itself, see the sketch below this list.)
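
As an optional, hedged sanity check -- completely independent of XenDesktop -- you can confirm that the /sdk endpoint from step 13 answers over HTTPS with a few lines of Python using the pyVmomi vSphere bindings. The hostname and credentials below are placeholders, and since Python does not consult the Windows Trusted People store, certificate validation is simply disabled for this test.

    import ssl
    from pyVim.connect import SmartConnect, Disconnect

    ctx = ssl._create_unverified_context()             # skip cert checks for this test only
    si = SmartConnect(host="vCenterServer.domain.com",
                      user="administrator", pwd="secret", sslContext=ctx)

    print("SDK is answering, server time:", si.CurrentTime())
    print("API version:", si.content.about.apiVersion)

    Disconnect(si)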

The previous version of XenDesktop, version 4, had another workaround that involved modifying the proxy.xml file on the vCenter server and setting SDK communications to HTTP instead of HTTPS. Unfortunately, as of this writing, this method is no longer supported with XenDesktop 5. That being said, I know that some of you are spirited technologists and will try it anyway, so we tried it as well.

While I was able to get the XenDesktop 5 controller to successfully communicate with vCenter, I experienced some issues. They started when I tried to create a pooled virtual machine catalog. The process begins and successfully communicates with vCenter, creating certain tasks, but then fails with an error that reads: "The catalog has the following errors, failed to create the virtual machine." This error occurs if you did not configure the virtual infrastructure in XenDesktop 5 to communicate over HTTPS. Once I got over this wrinkle, the integration of these best-of-breed solutions worked, and it continues to work like a charm.

By now, some of you might be wondering why I keep referring to XenDesktop and VMware vSphere as best-of-breed -- why run XenDesktop on VMware vSphere? The answer will appear in this space soon. 

Posted by Elias Khnaser on 01/11/2011 at 12:49 PM

