The vSphere 5 countdown continues, and at number 8 is Profile-Driven Storage. It's a new feature of vSphere 5 that allows you to easily profile the capabilities of your datastores in order to deploy your VMs on the most appropriate storage.
Today, most virtualization administrators gather a lot of information about what a VM's function will be and what kind of load it could potentially generate against the datastore. Is this a database server? Is this a Microsoft Exchange server? And so on. After that, we scramble to find the most appropriate datastore. We then try to follow up to make sure those virtual disks have not moved off that datastore.
Profile-Driven Storage aims to simplify this process and automate it to some extent. You can now profile a datastore by manually assigning an identifier, or "tag," which would mark the datastore as, say, RAID-10 or RAID-5, or note some other characteristic. Once you do that, you can link VMs to this profile, thereby ensuring that a linked VM always lives on the right type of datastore. You can also link the VM to the right datastore during provisioning.
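The tag-and-check idea can be sketched in a few lines. This is a conceptual model only, not the vSphere API; all names (datastores, profiles, the `compliant` helper) are illustrative:

```python
# Minimal sketch of Profile-Driven Storage: datastores carry capability
# "tags," VMs are linked to a profile, and compliance is simply "does the
# VM's current datastore satisfy every tag in the profile?"

datastores = {
    "DS-Gold":   {"raid": "RAID-10", "replicated": True},
    "DS-Silver": {"raid": "RAID-5",  "replicated": False},
}

profiles = {
    "Tier-1": {"raid": "RAID-10"},   # e.g., database or Exchange servers
    "Tier-2": {"raid": "RAID-5"},
}

def compliant(vm_datastore: str, profile: str) -> bool:
    caps = datastores[vm_datastore]
    return all(caps.get(k) == v for k, v in profiles[profile].items())

# A Tier-1 VM that has drifted onto the RAID-5 datastore gets flagged:
print(compliant("DS-Gold", "Tier-1"))    # True
print(compliant("DS-Silver", "Tier-1"))  # False
```

The same check run periodically is, in essence, the "validate the VM is where it is supposed to be" step described below.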
VMware has also introduced a new set of APIs with vSphere 5, known as VASA or vSphere Storage APIs for Array Awareness, which expose the capabilities of the storage to vCenter. In other words, you can see what type of datastore this is, what RAID level, etc. These APIs make it easier to profile the different types of storage.
PDS significantly reduces a lot of the manual work that one had to go through when provisioning VMs by automating it and allowing you to adhere to different levels of SLAs depending on the application profile of the VM. Not only are you assigning the VM to the right storage, but you are also validating that the VM is where it is supposed to be.
Depending on the environment you are in and the level of process, SLs and automation you are required to have, the PDS feature may or may not be useful to you. In larger enterprises, I can see how it can be of significant help.
Posted by Elias Khnaser on 07/26/2011 at 12:49 PM | 1 comment
Last time, I started counting down the top 10 best features of vSphere 5. VMware View Accelerator was number 10. This time, at number 9, is AutoDeploy and Image Builder. Technically, AutoDeploy and Image Builder are two distinct features, but I find that they are very complementary to one another, so I converged them.
Let's start with Image Builder. Now that ESXi is the only version of the hypervisor available and ESX has officially been retired, there is a lot more that goes into installing the hypervisor than was the case with ESX. ESXi has a smaller footprint and is a very thin install. As a result, drivers and software packages for the different hardware that you may be using are not baked into the image.
If you were to install the plain vanilla ESXi 5 image that you download from VMware, odds are some of your hardware will not work, such as network interface cards or Fibre Channel cards, because they don't have a corresponding driver. Image Builder allows you to customize the ESXi install image by adding the necessary drivers, software packages and any other relevant software bits that are needed for the hardware you are deploying on. In a nutshell, Image Builder streamlines the installation of ESXi.
Now, AutoDeploy streams the installation of ESXi into the memory on the host, so you no longer need to install it on a USB drive, have it burnt on a chip on the motherboard or boot from SAN. Instead, the installation is streamed into memory. The way it works is by PXE booting a stateless host and downloading the assigned copy that is relevant to the hardware you are booting from. When a host's hardware fails, you can very easily swap that hardware for a new set of hardware, PXE boot again and in seconds you have restored that failed host back to productivity. So with AutoDeploy, the personality layer that is traditionally associated with a server is abolished, replaced by a memory-resident install of the OS that can be repeatedly downloaded to new hosts. And because ESXi has such a small footprint, it is easily loadable in memory.
By now you are probably wondering how AutoDeploy and Image Builder interconnect. Think about it this way: If you have IBM, HP and Cisco blades in your environment, you would use Image Builder to create a customized ESXi install with all the necessary components for each hardware platform. Then you'd use AutoDeploy to automatically deploy the right ESXi install to the right hardware. As a result you have streamlined your host provisioning operation and automated the process significantly, thereby getting you one step closer to true private cloud computing.
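The interconnect described above boils down to rule matching: host attributes in, image profile out. Here is a hypothetical sketch of that idea; the rule patterns and profile names are made up for illustration and are not AutoDeploy's actual syntax:

```python
# Conceptual model of AutoDeploy rules: a PXE-booting host reports its
# attributes (here, just the vendor string), and the first matching rule
# decides which Image Builder profile gets streamed to it.

rules = [
    ({"vendor": "HP"},    "esxi5-hp-custom"),
    ({"vendor": "IBM"},   "esxi5-ibm-custom"),
    ({"vendor": "Cisco"}, "esxi5-ucs-custom"),
]

def pick_image(host_attrs: dict) -> str:
    for pattern, image in rules:
        if all(host_attrs.get(k) == v for k, v in pattern.items()):
            return image
    return "esxi5-vanilla"  # no match: fall back to the stock image

print(pick_image({"vendor": "Cisco", "model": "B200"}))  # esxi5-ucs-custom
```

Swap a failed Cisco blade for another Cisco blade and the same rule fires again, which is why the rebuilt host comes back identical.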
The AutoDeploy feature bears a striking resemblance to Citrix Provisioning Services. PVS is an awesome feature and I believe that the VMware community will love AutoDeploy as much as the Citrix community loves PVS.
Posted by Elias Khnaser on 07/21/2011 at 12:49 PM | 2 comments
VMware vSphere 5 is packed with great new features, many more than we are about to cover in this feature countdown. I have chosen the top 10 features that I believe most people will be most interested in, and this time we kick it off with the number 10 slot: VMware View Accelerator.
At first glance, one would think the VMware View Accelerator is a licensing package, since VMware currently has Accelerator packages. But make no mistake: The View Accelerator is one of the coolest features in vSphere 5, and since I am particularly focused on and interested in desktop virtualization, I could not help but start with this feature.
For years now, desktop virtualization technology has been plagued with different types of "storms," from bootup storms to login storms all the way through anti-virus storms and others. vSphere 5 takes a giant leap forward in addressing the issue of boot storms using what is now known as the View Accelerator.
Even though certain aspects of View Accelerator, such as the cache, are configured from the VMware View Composer, View Accelerator is really more of a hypervisor feature, not a VMware View product feature.
The accelerator works by caching bits of the master image in memory on each ESXi host. When VMs start to boot, their common read requests are de-duplicated and served from those cached bits. This approach significantly reduces, if not completely eliminates, the boot-storm issue. A typical density is about 60 VDI VMs per ESXi host, which can very easily be served from the local ESXi memory cache.
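A toy model shows why this works so well: clones of the same master image read identical blocks, so only the first reader ever touches the datastore. This is a rough conceptual sketch, not how the hypervisor cache is actually implemented:

```python
# Rough model of a host-side, content-addressed read cache: identical
# blocks of the master image are stored once (keyed by content hash), so
# a boot storm of many clones is served mostly from host memory instead
# of hammering the datastore.
import hashlib

cache = {}           # content digest -> block bytes
datastore_reads = 0  # how many reads actually hit shared storage

def read_block(block: bytes) -> bytes:
    global datastore_reads
    key = hashlib.sha1(block).hexdigest()
    if key not in cache:
        datastore_reads += 1   # only the first reader hits the datastore
        cache[key] = block
    return cache[key]

# 60 clones booting from the same two-block "master image":
for _ in range(60):
    read_block(b"bootloader")
    read_block(b"kernel")
print(datastore_reads)  # 2
```

120 logical reads, two physical reads: that ratio is the whole point of the feature.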
It is worth noting here that Citrix's XenServer has a similar feature known as IntelliCache. It basically does the same thing by caching the bits of the master image in memory and booting the VDI VMs from this local cache, thereby significantly reducing the IOPS dependency while maintaining centralized control and management.
So, take a guess what number 9 will be...
Posted by Elias Khnaser on 07/19/2011 at 12:49 PM | 8 comments
I have been fortunate enough to be part of the beta program testing and evaluating VMware vSphere 5 for the past few months, one of the many perks of the VMware vExpert program. The whole time, I have been very excited about the features I was testing, counting the days to this big announcement. Now that the day is here, I can very happily say, "The best just got better." vSphere 5 is definitely a very feature-rich upgrade to its predecessor, satisfying most of the items on our wish list.
In this series of blogs, I'll examine all the features of vSphere 5. Today, however, let's dispose of the disappointing licensing news and take a look at what this crown jewel will cost.
While I gave vSphere 5 two thumbs up from a feature standpoint, I cannot but voice my utmost disappointment at the licensing changes that are being introduced with vSphere 5. It's as if someone at VMware took a look at these features and told the technical team, "Awesome job!" and then started whispering into the business ears, "We can charge premium for these quality features and make a killing!"
What is vRAM?
Prior to vSphere 5, the licensing model for vSphere was based on the processor, with the Enterprise Edition having a maximum of six cores per processor and 256GB of RAM, while Enterprise Plus was licensed for 12 cores per processor with virtually unlimited memory.
The new licensing is still based on the processor, except all limitations on cores and memory have been removed. Instead, licensing is now per-processor with memory entitlements as follows:
- Standard Edition with 24GB of vRAM
- Enterprise Edition with 32GB of vRAM
- Enterprise Plus Edition with 48GB of vRAM
vRAM, or virtual memory, refers to the memory assigned to a virtual machine. Each edition of vSphere entitles you to a specific amount of vRAM, so if you buy the Enterprise Edition of vSphere with its 32GB of vRAM, you can power up eight virtual machines, assuming each virtual machine is assigned 4GB of RAM. That being said, vCenter will pool all the available vRAM from all the ESXi hosts that it manages and is able to compensate for one host's lack of vRAM.
So, if you have two ESXi hosts with Enterprise Edition licensing, you are entitled to 64GB of vRAM. If one host is low on vRAM and the other host has vRAM available, the pool compensates for it. All licenses are aggregated into a pool in vCenter, which manages licensing needs based on the total available vRAM. When you buy and add an Enterprise license, you increase the pool by 32GB.
I completely understand why VMware changed the licensing model from per-processor-with-core-limitations to the new model: Given Intel's forecast that CPUs with 12 or more cores will become standard, the old licensing model won't work. But what VMware failed to understand is that it will cost me about three times as much to upgrade to vSphere 5 or to roll out a new environment.
Let's take an example: A 2-socket, 6-core server with 96GB of RAM needs two vSphere 4 Enterprise licenses. To do the same thing with vSphere 5, you will need three vSphere 5 licenses, assuming that all the memory in the server is allocated to VMs that are powered on.
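The arithmetic behind that example is simple enough to check, assuming (as stated above) that all of the host's RAM is allocated to powered-on VMs:

```python
# Back-of-the-envelope license count for the 2-socket, 96GB example.
# vSphere 4 Enterprise: one license per CPU socket.
# vSphere 5 Enterprise: still per socket, but each license only entitles
# you to 32GB of vRAM, so allocated memory can force extra licenses.
import math

sockets, allocated_vram_gb = 2, 96
vram_per_license_gb = 32                 # Enterprise Edition entitlement

v4_licenses = sockets
v5_licenses = max(sockets, math.ceil(allocated_vram_gb / vram_per_license_gb))
print(v4_licenses, v5_licenses)  # 2 3
```

Push the same math to a 2-socket host with 192GB allocated and the vSphere 5 count climbs to six licenses, which is where the "three times as much" complaint comes from.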
VMware also must have realized by now that CPU is not the most valuable resource in a virtualized environment. Memory is. So, Intel can increase the number of cores all they want, but we still have about 50 percent or less CPU utilization, even with virtualization. I wish VMware would have left the price the way it was and would encourage enterprises to invest in additional tools that VMware offers, like vShield Endpoint, vCloud Director, SRM and others. Why put a premium on a cloud OS?
Unfortunately, VMware gave Microsoft -- and Citrix, for that matter -- ammo to strike at the cost of vSphere 5. I hope that VMware does not suffer from the "greatness" syndrome and think no one will be able to topple it as "cloud king." Citrix just acquired Cloud.com, so you can rest assured XenServer will get a significant feature boost. And Microsoft is investing heavily in the cloud.
Aside from licensing, VMware should be very warmly congratulated on hosting the first real virtual event and generating enough buzz and social interaction, which is only a reflection of how much the community appreciates the hard work and quality software that VMware delivers.
Now, if only at VMworld 2011 in Las Vegas, Stephen Herrod would stand up and announce that, based on popular demand, VMware is modifying the licensing model to reflect what was in vSphere 4. I suggest VMware keep the vRAM concept, but adjust it as follows:
- Standard Edition with 64GB vRAM
- Enterprise Edition with 256GB vRAM
- Enterprise Plus Edition with 1TB of vRAM
This more accurately resembles what we are deploying today, makes the customer base happy and encourages an upgrade to vSphere 5.
In my next blog, I'll finally get down to highlighting all the great new features of vSphere 5. Stay tuned!
Posted by Elias Khnaser on 07/12/2011 at 12:49 PM | 23 comments
If you are like me, you were probably dragged kicking and screaming into understanding and learning storage as you started your journey on "Route Virtual." Had you asked me six years ago about storage, I would have answered, "You mean the LUN I ask the storage guy to provision?" Really, that would have been it. I knew nothing of Fibre Channel Arbitrated Loops, SAS, NAS/NFS or iSCSI, for that matter; I just understood, "I need a LUN for my Windows servers."
Since then, I have evolved with the technology and today, storage makes the virtual world go round. If you don't know storage, you cannot properly design and architect your environment across many technologies from server virtualization to desktop virtualization.
Storage is important to our careers, so I'm starting a series of storage blogs specifically around SSD, which is the hottest topic in storage right now. Please note these blogs will be geared toward the virtualization admin: just enough technical depth to matter, such as what you need to know when evaluating SSDs, what you will use them for, and so on.
Let's start with a brief overview of the different types of SSD that exist, specifically, Single-Level and Multi-Level Cells, the pros of each and what different companies are doing to enhance the technology.
SLC SSDs are typically enterprise-class technology with the following characteristics:
- Higher Cost per Bit
- Higher Endurance
- Low Power Consumption
- Higher Write/Erase Speeds
- Higher Write/Erase Endurance
- Can endure operations at higher temperatures
MLC SSDs are similar, but provide higher capacities and lower cost per bit -- that's from a high-level, very basic overview. Now, let's look at some technical numbers:
| Specification | SLC | MLC |
| --- | --- | --- |
| Density | 16Mbit | 32Mbit to 64Mbit |
| Read Speed | 100ns | 120ns to 150ns |
| Block Size | 64Kbyte | 128Kbyte |
| Endurance (write/erase cycles) | 100,000 | 10,000 |
| Operating Temperature | Industrial | Commercial |
At first glance, you are probably thinking that SLC is the only way to go and MLC is just not yet enterprise-ready. You probably came to this conclusion by looking at the MLC's Endurance -- you just can't afford such a low life expectancy with MLC SSDs.
While the numbers I have provided in the table above are what I could find on the internet as somewhat of a standard, there really is not enough data to qualify the exact life expectancy of an MLC SSD. The number is a good guess at best. It doesn't take into consideration all the enhanced techniques that vendors are incorporating in their solutions to extend the endurance and reliability of MLC SSDs.
The fact of the matter, however, is that MLC technology has been advancing significantly and is now found in products that are enterprise-ready. Even IBM OEMs MLC technology from STEC and makes it available in their enterprise arrays. Manufacturers like Xiotech, Tintri, Whiptail and others all leverage MLC-based SSDs and make them available in the enterprise. But you may be wondering, "How do they do that? How can an MLC which is 10 times less reliable than an SLC from an endurance and longevity standpoint be viable and safe in the enterprise?"
MLC manufacturers and vendors are using a variety of techniques to compensate for the shortcomings in MLC, especially around endurance and longevity, including:
- Error Correction Codes (ECC)
- DRAM cache / Write Coalescing
- Wear leveling (write leveling), which spreads out the write distribution
- Compression
- Write Amplification
Note that all these techniques can be used with either SLC or MLC SSDs, but they are more commonly used with MLC to compensate for and extend its endurance and longevity. These techniques minimize the amount of data that has to be written to the flash memory. MLC cells wear out as a function of write/erase cycles, so if there is a way to minimize the number of writes, we inevitably extend the life of the MLC SSD.
There is significant effort under way to enhance MLC technology and it is getting better every day. Next time, I'll discuss the effects of using MLC SSDs in VDI and what you can expect from them.
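Wear leveling, the most important technique on the list, is easy to illustrate. The sketch below is a deliberately simplified model (real controllers combine it with write coalescing, compression and ECC), with all names invented for the example:

```python
# Toy illustration of wear leveling: each logical write is steered to the
# least-worn physical block, so no single cell absorbs all the erase
# cycles even when the workload hammers one logical address.

wear = [0] * 8    # erase counts for 8 physical flash blocks
mapping = {}      # logical block -> current physical block

def write(logical: int) -> None:
    phys = min(range(len(wear)), key=lambda b: wear[b])  # least-worn block
    wear[phys] += 1
    mapping[logical] = phys   # remap the logical block to its new home

for _ in range(80):           # hammer the same logical block 80 times
    write(0)
print(max(wear) - min(wear))  # 0: the 80 writes spread evenly, 10 per block
```

Without the remapping, one physical block would have eaten all 80 cycles; with it, an MLC part rated for 10,000 cycles effectively gains the combined endurance of the whole pool.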
Posted by Elias Khnaser on 07/05/2011 at 12:49 PM | 1 comment
While managing your Citrix XenApp 6 farm from the comfort of the Citrix Delivery Services Console GUI tool is a very powerful thing, you should know that XenApp 6 also has a subset of command-line tools that are at your disposal and can help with advanced farm management and troubleshooting:
1. Altaddr: If you're a Citrix old-timer, you probably remember using this tool before Secure Gateway was available, in order to extend published applications to users outside the secure network. In essence, Altaddr allows you to give your XenApp server an alternate IP address, traditionally an outward-facing public IP address or some kind of NAT. Of course, the downside is that if you have 100 XenApp servers, you would need 100 public IP addresses. Altaddr is no longer in common use, but it is still there for the distinct situation where it makes sense.
2. App: Typically used for scripting or customizing an application's behavior. You can use App in your scripts to control the application's environment or to satisfy its prerequisites before the application runs.
3. Auditlog: A handy utility that, for the most part, dumps the security event log from Windows and allows you to pipe that log into a text file. You can use the output to audit who logged in or out of the server and when. This could come in handy if you are troubleshooting, for example, if your print service crashes; you can see who logged in during that time and if they are mapping a bad driver, etc.
4. Change client: Allows you to change the client device mappings in an ICA session: LPT, COM, USB mappings, etc.
5. Ctxkeytool: You can use this utility to generate an encryption key that can be used when enabling IMA encrypted communication in the Citrix farm.
6. Ctxxmlss: This handy little utility allows you to modify the XML port on the XenApp servers should you need to change that port for any reason.
7. Dscheck: I use this tool frequently to check for consistency on the Citrix IMA data store. It is particularly handy in conjunction with the /clean switch, which you would use to clear out any inconsistencies in the database.
8. Dsmaint: All interaction with your data store can be manipulated using this tool. For example, if you want to change data store database servers, you can use Dsmaint to do so. You can also use it to verify the local host cache on the XenApp server. The Local Host Cache or LHC is a subset of the data store database that runs on each XenApp server. Sometimes, as a troubleshooting step, you may want to verify the accuracy of that data or even refresh it altogether.
9. Enablelb: This utility restores XenApp servers into the load balancing mix after they have failed Citrix health monitoring tests.
10. Icaport: Here's a tool that allows you to change the default port that the ICA protocol typically runs on.
11. Imaport: Allows you to change the default port that the IMA protocol runs on.
12. Query: Arguably the most useful command-line tool, I use the query command for troubleshooting, for verifying load on the server, for just about anything administration related. If I need to know, I always start with the query command. It has a subset of values:
- Query Farm or QFARM returns information on the XenApp servers in the farm and which one is a data collector
- Query Session returns information about the sessions that are running on the XenApp server
- Query Process returns all the running processes on the server
- Query User or QUSER returns information about all the users on the server
All these tools are available to you with the default installation of XenApp 6. However, as you can see from the list above, the number of Citrix XenApp command-line tools has been shrinking with each new version, and that is not because of insufficient development. Instead, it's because Citrix is following Microsoft's lead and porting most of its command-line tools to PowerShell. Even so, XenApp 6 has a very rich PowerShell footprint, which I plan to explore more in future blogs.
Posted by Elias Khnaser on 06/30/2011 at 12:49 PM | 10 comments
The answer to that question is, "Perhaps." Let's go through the benefits and then I would love to hear your comments and insights.
But first, why I'm writing this blog: About a year ago I blogged about Windows 8 and Hyper-V 3 in InformationWeek. I was discussing news that leaked from a French source about upcoming features. And then last week, more news was leaked about Windows build 7989, which showed Hyper-V 3.0 in the Windows Features section of the product. This is exciting on many levels, and it reinforces the excitement I and others have been “drumming” about for a while now that type-1 client hypervisors will change the desktop and laptop market for better and forever.
Windows 8 will leverage a technology, codenamed "MinWin," that was introduced with Windows Vista. MinWin is slated to replace the parent partition approach that Hyper-V currently uses on the server side. MinWin is a very thin layer of software that installs on bare metal and occupies a footprint smaller than Windows Core. MinWin will also shake off the resource hog that is the Windows Shell and will have the bare necessities to run the hypervisor. MinWin will have the following benefits:
- A true type-1 client hypervisor means you can host multiple VMs on the same client device including Windows XP, Vista, Windows 7 or 8 and maybe even Windows Mobile. The last one would be cool if you could run Windows Mobile Phone and its apps on your client device.
- The removal of the parent partition reduces the attack surface on the hypervisor, rendering it BIOS-like.
- A client hypervisor significantly reduces deployment, troubleshooting and repair time for laptops and desktops
- As Windows 8 begins to ship, type-1 client hypervisors will become standard on desktops and laptops with a single VM being deployed across different hardware profiles.
And that's not all. Also leaked was information on App-V, which will have tight integration with Windows 8, thereby, further reinforcing the notion of application virtualization. What would be exciting is if we could run App-V applications directly on the hypervisor without requiring a VM in the middle. This would be wishful thinking, of course, considering the registry and other DLLs that MinWin may or may not support without a VM. What I am asking for most likely requires applications to be written specifically for the hypervisor rather than traditional Windows.
Also rumored is a new virtual disk format, with the extension .vhdx, which raises the 2TB capacity limit at which .vhd is currently capped.
What is Microsoft trying to accomplish? Well, it's trying to hit many targets at once when it releases Windows 8. Microsoft desperately needs to do something to boost Hyper-V's value proposition. If the leaked features are real, they are significant and serve that purpose very well. Microsoft has historically won back technologies in which it trails by leaning on its traditional stranglehold on client devices. If Microsoft can trigger mass adoption of Windows 8's client hypervisor, that will inevitably lead to winning back market share on the server side. It's an approach that Microsoft used successfully against Novell.
What will be the effect on Citrix? Bittersweet. On the one hand, the mass adoption of type-1 client hypervisors is a sweet spot for Citrix, considering its relationship with Microsoft and the fact that it is virtually a guarantee that XenDesktop will support Windows 8 type-1. In that regard, it's a big win for Citrix, as they already support .vhd. So, porting and tweaking to support new technology is easy. The unknown part will be around XenServer -- if Microsoft's ultimate goal is to win back the server market, it will most likely not extend support for XenServer. What happens in this space will be interesting.
And how will it shake out for VMware? What happens to VMware is completely up to VMware. No doubt, vSphere is and will continue to be the platform of choice, at least for the foreseeable future. Where VMware will struggle is in the end-user space. Windows 8 will deliver a master blow, and unless VMware is willing to be a bit flexible and possibly support .vhd and .vhdx, I cannot see how it can fight back. As long as the endpoint is running Windows, it will be next to impossible.
But VMware can turn around and concede a bit, which would put Microsoft back into the position of having to find another way to win back the market. If vSphere supported .vhd and .vhdx, and organizations are currently on vSphere, it would be less appealing for those organizations to switch even as Windows 8 takes hold of the client devices.
And finally, what happens to current third-party type-1 vendors? They will evolve. Some will maintain their products and try to compete head-to-head with Microsoft by offering richer and better features; others will accept the new landscape and evolve into offering management functionality, advanced synchronization and server-based features that extend Windows 8 type-1 capabilities.
I'll be giving this topic full coverage in a future issue of Virtualization Review. In the meantime, what do you think? Am I full of it or am I making sense?
Posted by Elias Khnaser on 06/28/2011 at 12:49 PM | 4 comments
One of the many benefits of server virtualization is that it makes BC/DR easier by transforming physical servers into files that can then be easily transported over different media. Well, that takes care of the DR portion of the acronym. But if organizations want to address the BC (Business Continuity) portion, they need some form of replication, and while many solutions exist on the market today, none of them was designed for virtualization. Instead, almost all were built for an era of physical machines or LUN-based storage replication.
Don't get me wrong, lots of BC solutions work and get the job done with some caveats. Some of them rely on snapshots, which is a fine approach except that it is a single-point-in-time replication. While you can constantly take snapshots and replicate them, you can see how the process becomes frustrating and does not feel like it is a "clean" way of approaching a solution in which the customer wants or aspires for real-time or near real-time replication. LUN-based replication also has some challenges. Some of the solutions here require the same hardware on the other end and a number of other prerequisites that, again, take away from the virtualization value-proposition of abstraction (hardware agnostic) and simplicity.
What I like about Zerto is that it was built specifically for virtualization. Zerto does not care what storage you are replicating from or to; it is completely agnostic, which is in line with the virtualization value proposition of abstraction (hardware independence) and simplicity: A virtualization admin can administer replication without involving the storage administrator in the process. The storage admin goes back to simply provisioning storage on both ends and doesn't worry about anything else. Oh, and by the way, you can have any combination of storage on each end -- NFS to FC, iSCSI to local disk, whatever you want.
The way Zerto works is actually pretty cool. It interacts with the vSCSI controller, which passes the I/O from the VM to the hypervisor. Zerto taps into that stream, makes a copy of it and redirects it. Because you are constantly tapped into the stream, copying the traffic to a different target, your replication is effectively continuous and in real time.
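The tap mechanism can be sketched conceptually as follows. This is a toy model of the idea (intercept each write, commit it locally, mirror it into a stream), with all names invented; it is not Zerto's actual implementation:

```python
# Conceptual model of tap-based replication: every write passes through
# the tap, which commits it to local storage AND appends a copy to a
# replication stream, so the replica tracks the source in near real time
# without snapshots or array-level LUN replication.

local_disk = {}       # offset -> data, the VM's normal storage
replica_stream = []   # ordered copies that would ship to the recovery site

def tapped_write(offset: int, data: bytes) -> None:
    local_disk[offset] = data              # the VM's normal write path
    replica_stream.append((offset, data))  # the intercepted copy

tapped_write(0, b"block-a")
tapped_write(4096, b"block-b")
print(len(replica_stream))  # 2: every write was mirrored into the stream
```

Contrast this with snapshot-based replication, which only ships the state that existed at the moment each snapshot was taken; here the stream carries every write as it happens.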
As for components, Zerto requires a master management server that manages the environment and deploys Zerto appliances onto each ESXi host in the virtual infrastructure that needs to participate in the replication.
I am sure you're asking about performance impact, and that's a good question. Given that Zerto loads a driver into the hypervisor, it must be taxing resources somehow, right? Something's gotta give. With Zerto appliances on each ESXi host and the Zerto driver in the hypervisor, there has to be some performance hit. The question is, how much?
I am going to run a demo in the lab and report back in a blog post soon. However, I wanted to test the waters and see how many of you would be interested in hypervisor-level replication that puts you back in the driver's seat from a replication standpoint. Think of Zerto as Site Recovery Manager on steroids: it tackles the storage replication itself, so you don't need to rely on third-party replication, and it also tackles the logical pieces of SRM.
Posted by Elias Khnaser on 06/23/2011 at 12:49 PM | 4 comments
I have been working with Citrix products for a very long time and while I appreciate the technology in most of the products, I can't help but look at EdgeSight and wonder: At what point will Citrix make a change?
EdgeSight is a good monitoring tool, but it is a very complex and difficult tool to work with. Frankly, it's past its time. There are software packages on the market today that can do everything that EdgeSight does without requiring an MBA in software development to get things done.
Companies like eG Innovations and Splunk are fantastic alternatives to EdgeSight that extend the ease of use and monitoring capabilities down to the application layer without EdgeSight's complexity. Citrix charges a significant fee for customers who want the Citrix Platinum licensing package, and one of the selling points for upgrading to Platinum is EdgeSight. Even so, is anyone really using EdgeSight?
Furthermore, companies want to be able to report on simple things, not just complex things. If I've deployed a NetScaler Access Gateway in my environment and I want a report of which users logged in at what time, can EdgeSight give me that? The answer is no. Granted, the NetScaler appliance can't give me that either, at least not without an extensive exercise in software development. In either case, how is this acceptable?
The bottom line: With XenDesktop catering to an enterprise audience, the ability to have a more comprehensive monitoring solution is imperative, especially when customers are paying a premium for the Platinum edition of the software. We would also expect that this monitoring platform is now hypervisor-aware, such that the monitoring understands the hypervisor layer and can also report any performance degradation or bottleneck at every junction. And any enterprise customer can benefit from being able to see the different hops that packets travel through and where a potential bottleneck exists. Such built-in, fully integrated solutions just do not exist today.
I would love to hear from anyone that has implemented EdgeSight and how you are using it.
Posted by Elias Khnaser on 06/21/2011 at 12:49 PM | 7 comments
When you examine the landscape of desktop virtualization solutions that are being implemented at organizations, two such solutions immediately stand out: Citrix XenDesktop and VMware View. I have written a lot on XenDesktop and I most certainly love the product, but I also have a secret love affair with VMware View.
In this article, I will show you how you can tweak VMware View and its PCoIP remoting protocol for optimal performance and the best possible user experience. Let's get started:
1. Back to basics: Tweak the User Interface Visual Effects
- If possible, use the Windows Thin PC version of the OS
- Set Visual Effect to Best Performance
- Disable Desktop Wallpaper
- Disable Screen Saver or set it to None
- Revert back to the classic Start menu
- Disable Themes (if possible)
- Disable additional fading
- System icon and text changes
- Disable any unnecessary Windows services -- Help and Support, Windows Audio (if you don't need sound), Wireless, Remote Registry (be careful, though: some applications need this service, so make sure you properly test), Error Reporting -- and any other service that is not needed
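The service-disabling step above can be scripted into the master image. Here is a minimal sketch that generates the `sc config` commands for the services mentioned; the short service names (`helpsvc`, `AudioSrv`, and so on) are assumptions that vary by Windows build, so verify them against your image before use.

```python
# Sketch: generate "sc config" commands to disable the Windows services
# listed above in a master image. The service short names are illustrative
# guesses -- confirm each against your Windows build before running them.
services = {
    "helpsvc": "Help and Support",
    "AudioSrv": "Windows Audio (skip if sound is needed)",
    "WZCSVC": "Wireless Zero Configuration",
    "RemoteRegistry": "Remote Registry (test first: some apps need it)",
    "ERSvc": "Error Reporting",
}

def disable_commands(svcs):
    """Return batch-file lines: a '::' comment followed by the sc command."""
    lines = []
    for name, desc in svcs.items():
        lines.append(f":: {desc}")
        lines.append(f"sc config {name} start= disabled")
    return lines

for line in disable_commands(services):
    print(line)
```

Redirect the output to a .cmd file and run it once on the master image, rather than clicking through services.msc on every build.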
2. QoS PCoIP
If left without any QoS, PCoIP can consume up to 20Mbps per session, causing significant bandwidth usage on the LAN or WAN. Therefore, I strongly suggest you apply QoS to PCoIP and prioritize it immediately after VoIP in your environment.
3. PCoIP MaxLinkRate (Kbps)
This is equivalent to the duplex speed and defaults to 1Gbps. Unless there is a need to change it, I would leave it at that:
HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Teradici\PCoIP\pcoip_admin_defaults\pcoip.max_link_rate
4. PCoIP MTU Size (bytes)
This is the Maximum Transmission Unit for the PCoIP session. Keep in mind when configuring this setting that both endpoints' MTU settings must match to maximize performance. If they differ, the lower MTU value is used:
HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Teradici\PCoIP\pcoip_admin_defaults\pcoip.mtu_size
5. PCoIP Bandwidth Floor (Kbps)
Defines the minimum bandwidth a session will consume when network congestion is detected. The default value is 1000Kbps; increasing it gives users a better experience at the cost of reserving more bandwidth per session:
HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Teradici\PCoIP\pcoip_admin_defaults\pcoip.device_bandwidth_floor
6. PCoIP Minimum Image Quality (0-100)
The default value here is 50, and in my deployments it has worked well. This setting defines the lowest image quality PCoIP will drop to under constrained bandwidth: a lower minimum lets the protocol sacrifice image quality to preserve frame rate and responsiveness, while a higher minimum keeps images crisp at the expense of session responsiveness. Find a good medium that works for you:
HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Teradici\PCoIP\pcoip_admin_defaults\pcoip.minimum_image_quality
7. PCoIP Maximum Initial Image Quality (0-100)
The default value is 90. In my deployments values have varied between 70 and 90. This setting behaves as follows:
- A higher initial image quality means larger bandwidth bursts will be used when refreshing or updating a large end-user screen change
- A lower initial image quality means smaller bandwidth bursts will be used when refreshing or updating a large end-user screen change:
HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Teradici\PCoIP\pcoip_admin_defaults\pcoip.maximum_initial_image_quality
8. PCoIP Maximum Frame Rate (0-30)
The default value is 30, but I have seen better results when dropping it to 15. This setting controls how many frames per second your end-user screen refreshes at. The higher the rate, the better the experience; the lower the rate, the less data you send across the wire. So, be sure to test:
HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Teradici\PCoIP\pcoip_admin_defaults\pcoip.maximum_frame_rate
9. PCoIP Audio Policy (1/0)
Unless audio is explicitly needed, this setting should be disabled to save significant bandwidth and improve the user experience:
HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Teradici\PCoIP\pcoip_admin_defaults\pcoip.enable_audio
10. PCoIP Encryption (1/0)
I have had better performance setting this to Salsa20-256 than to AES-128, but be sure to run your own tests and validate:
HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Teradici\PCoIP\pcoip_admin_defaults\pcoip.enable_salsa20_256_round12
HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Teradici\PCoIP\pcoip_admin_defaults\pcoip.enable_aes128
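Rather than editing each of these registry values by hand, you can seed the master image with a single .reg import. Below is a minimal sketch that renders the PCoIP tweaks discussed above into a registry file; the specific values are illustrative starting points from this article, not recommendations, so test them in your own environment.

```python
# Sketch: emit a .reg file consolidating the PCoIP registry tweaks above
# so a master image can be seeded in one import. Values are illustrative
# examples, not recommendations -- validate in your own environment.
BASE = r"HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Teradici\PCoIP\pcoip_admin_defaults"

settings = {
    "pcoip.max_link_rate": 1000000,          # Kbps (the 1Gbps default)
    "pcoip.mtu_size": 1400,                  # bytes; must match both endpoints
    "pcoip.device_bandwidth_floor": 1000,    # Kbps
    "pcoip.minimum_image_quality": 50,
    "pcoip.maximum_initial_image_quality": 80,
    "pcoip.maximum_frame_rate": 15,
    "pcoip.enable_audio": 0,                 # disable unless audio is required
}

def to_reg_file(values, base=BASE):
    """Render DWORD values as a Windows Registry Editor 5.00 import file."""
    lines = ["Windows Registry Editor Version 5.00", "", f"[{base}]"]
    lines += [f'"{name}"=dword:{val:08x}' for name, val in values.items()]
    return "\n".join(lines) + "\n"

print(to_reg_file(settings))
```

Save the output as pcoip.reg and import it during image build; settings you instead manage via GPO should be left out of the file.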
11. WAN Considerations
- Ensure VPN solution supports UDP tunneling, not just encapsulation.
- Beware of double NAT'ed client IPs.
- PCoIP requires 1Mbps of peak bandwidth per session. When planning, consider allocating 500Kbps per user even though the typical user will consume about 200-300 Kbps.
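The planning numbers above reduce to simple arithmetic. Here is a small sketch that sizes a WAN link using the 500Kbps-per-user planning figure; the 30-user branch on a 10Mbps link is a made-up example.

```python
# Sketch: rough WAN sizing from the figures above -- plan 500 Kbps per
# user (typical usage is 200-300 Kbps) and allow for 1 Mbps session peaks.
PLANNING_KBPS = 500   # per-user planning allocation
PEAK_KBPS = 1000      # per-session peak bandwidth

def wan_sizing(concurrent_users, link_kbps):
    """Return the planned load and how many users the link supports."""
    planned = concurrent_users * PLANNING_KBPS
    return {
        "planned_kbps": planned,
        "fits": planned <= link_kbps,
        "max_users": link_kbps // PLANNING_KBPS,
    }

# Hypothetical example: 30 branch users on a 10 Mbps link
result = wan_sizing(30, 10_000)
print(result)  # 15000 Kbps planned -> does not fit; link supports 20 users
```

Remember this ignores the 1Mbps peaks, which is exactly why the QoS policy from tip 2 matters: without it, a few bursting sessions can starve the rest of the link.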
OK, so it was really 11 tips, but hey, 10 tips sounds better. You will notice that some of these settings can be deployed via Microsoft GPO, while others you will have to modify in the Windows master image prior to deployment. Just be sure to spare no detail: Whatever you can disable in Windows to make it run faster, don't hesitate to disable -- every little bit counts, and if you can tweak the environment well enough, you will save on vCPU and memory usage and provide the best possible user experience.
As an old school Citrix guy, I chuckle when I remember the good ol' days of MetaFrame 1.8 or MetaFrame XP, when this type of tweaking and more was done. We used to have a complete cheat sheet of MetaFrame Tips and Tricks. This goes to show that no matter how much VDI is glorified, at the end of the day it is a server-based computing model. All the performance tweaks that apply to a Terminal Server apply to VDI as well.
I will leave you with this: Tweaking VDI is an art, not a science. I can write books (and I have) on the subject and it would still be different in your environment when running your applications over your network. Use my tips as a framework -- not a bible -- for tweaking and enhancing your VMware View deployments.
Posted by Elias Khnaser on 06/14/2011 at 12:49 PM
More and more mobile devices are being used in enterprises everywhere, from tablets to phones to netbooks and laptops. This mobility frenzy comes with the added hassle of having to support the miscellaneous components of day-to-day computing. A good example of this is printing: When users are roaming through the enterprise with their cool devices, it is only when they try to print or do other similar tasks that IT takes a step back and says, "Oops, I did not think of that...." For this reason and many more, I always stress that it is crucially important, when choosing a desktop virtualization solution, to pick one that is flexible, enterprise-driven and can accommodate user computing.
One of the coolest but not-so-popular features of Citrix XenApp and XenDesktop is the concept of proximity printing. Here's an example: An iPad user is roaming the campus of the organization and bumps into a manager. That manager wants the iPad user to print a document for one reason or another from where they both stand. Well, the iPad user has a printer configured, but that printer is on the second floor, where the user is usually stationed. When he bumps into his manager, he just so happens to be on the 96th floor. Certainly, one option is to go back down 94 floors or call the helpdesk and have them configure his printer. So, what happens if that user lands on the 56th or 32nd floor? Citrix recognizes this as an issue and as a result supports proximity printing.
Proximity printing basically detects a user's current IP address and then maps the user to a printer in the user's subnet. Granted, one would hope that maybe every floor has its own scope or some kind of an identifier that would make it unique so that you can map a printer accordingly. Assuming that is the case, you can configure Citrix print policies to map network printers for users based on the IP address of their device.
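The mechanism is straightforward to picture: take the client's IP address and find the printer serving that subnet. The sketch below illustrates the idea in Python (Citrix implements this through print policies, not code you write); the subnets, floor assignments and printer share names are entirely hypothetical.

```python
# Sketch of the proximity-printing idea: resolve a client's IP address to
# the printer serving that subnet. The addressing plan (one /24 per floor)
# and the printer share names are hypothetical examples.
import ipaddress

printer_map = {
    ipaddress.ip_network("10.1.2.0/24"): r"\\printsrv\Floor02-HP",
    ipaddress.ip_network("10.1.56.0/24"): r"\\printsrv\Floor56-HP",
    ipaddress.ip_network("10.1.96.0/24"): r"\\printsrv\Floor96-HP",
}

def nearest_printer(client_ip, mapping=printer_map):
    """Return the printer whose subnet contains the client's IP, or None."""
    ip = ipaddress.ip_address(client_ip)
    for subnet, printer in mapping.items():
        if ip in subnet:
            return printer
    return None

# The iPad user from the example, now on the 96th floor:
print(nearest_printer("10.1.96.25"))
```

This is exactly why the per-floor addressing assumption matters: if floors share a subnet, the IP address alone cannot tell the policy which printer is physically nearest.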
To configure proximity printing, follow these steps:
- Create a separate Citrix Policy for each subnet or each geographic location
- Enable proximity printing through the session printing policy rule in Citrix Policies
- Add the printers for that subnet or geographic location in the policy
- Set the Default printer policy to use the Do not adjust the user’s default printer setting
- Filter the policy by Client IP address
Now there is another method of which you should be aware, called Workspace Control. This feature is useful when users disconnect from their existing session, move to a different location and a different device, and connect back to that same session. Typically, without Workspace Control, if they resume the same session the printers will not change. With Workspace Control, when they connect again, it automatically detects that they are connecting from a different location and a different device and maps their printers accordingly. Cool, no?
Posted by Elias Khnaser on 06/09/2011 at 12:49 PM
When I first heard that VMware was acquiring Socialcast, I quickly dismissed it and went on with my day. Still, it stayed in the back of my mind and it did not really hit me until I was preparing a presentation on the Consumerization of IT (CoIT) for a client. Then it hit me: How could I have missed that? And now, it makes perfect sense to me what VMware is doing.
The problem is, no matter how much you want to evolve, the knee-jerk reaction is always a technical one until you take a step back and see the larger picture. With that in mind, think about this: Global companies spend millions of dollars every year on collaboration and team-building activities, believing that if employees know one another and understand what each does, increased productivity will follow. Knowing who your colleagues are and being able to contact them directly -- without the process of introductions, the awkward moments, the he-or-she-looks-unapproachable scenario -- saves the business wasted time and expense. If team-building activities can avoid all that, can they improve and increase productivity? This approach is valid, and companies that encourage these activities are usually among the best companies to work for and seem to always be among the most successful ones.
Facebook took the whole social experience to the next level, breaking barriers and building a billion dollar company before they even started selling anything. People by nature like to socialize and Facebook made it easy to break many of those barriers on a global scale. So, what if companies could take the Facebook idea into an organization? It'd be brilliant. But in doing so, would companies have to stop holding company team-building events? Absolutely not--those activities are still important, but something like Socialcast gives you some sort of mechanism for following through those team-building events. Rather than having those events happen in isolation and only effective at that moment, the team-building process can continue over time and as an ongoing process.
Socialcast isn't meant to out-social Facebook, but it does take a direct, full swing at Microsoft's SharePoint. SharePoint is IT's approach to team-building and collaboration, while Socialcast is a consumer approach to team-building and collaboration. Think about it this way: People love Facebook, so let them use something like it internally as a business tool. After all, that is the essence of embracing CoIT rather than fighting it, right? I frequently send e-mails to colleagues about an interesting article, which then means replies and counter-replies end up cluttering e-mail. (I've been on a thread that had over 100 e-mails on what laptop brand the company should adopt; then the VP stepped in and said, "DUDE, you're getting a Dell!")
The point is, e-mail today is used to communicate, to store files, to do everything, but finding anything in e-mail takes time, things get forgotten, and so on. What if we decomposed how e-mail, file sharing and storage are handled? Socialcast is like the new SharePoint, except now I want to use it because it is fun and easy to use.
I can see Socialcast as a way of significantly improving employee collaboration while embracing the immense success of social networking. What's more, Socialcast is a SaaS product: no huge CapEx investments in hardware, clustering, SQL databases or index engines. Just configure it and use it. That being said, Socialcast is missing some enterprise features, like the ability to webify files and directories that reside on your internal network or to connect to cloud storage, but those features should be relatively easy to implement and may in fact be part of the product at some point.
I welcome your comments and thoughts, but please note that before you respond, I fully understand that SharePoint offers more to the enterprise today, and I understand its full-featured potential. So, my questions to you are: Do you think Socialcast can grow to replace SharePoint, and do you think SharePoint desperately needs a social networking look and feel that would make it a bit more interactive and a lot more consumer-friendly?
Posted by Elias Khnaser on 06/07/2011 at 12:49 PM