I recently worked with a customer that was having an issue with ESXi 5. The client had unpresented a physical LUN from ESXi 5 by simply deleting the datastore and physically removing the LUN from the storage array software. That led to the condition known as APD (All Paths Down).
While the datastore was deleted, ESXi continued to try to access that device, and since ESXi uses hostd to access devices and also uses hostd for ESXi-to-vCenter communications, that led to a slew of issues. In this blog, I want to focus on the correct way to delete and unpresent a LUN in ESXi 5. Here are the steps:
- Unregister all objects from the datastore including VMs and templates.
- Ensure that Storage DRS and Storage I/O are not configured to use this device.
- Detach the device from the ESXi host, which will also initiate an automatic unmount. To do this, click Configuration, then Storage, find the datastore you wish to unmount, right-click it and select Unmount.
- To avoid doing this on every ESXi host individually, press Ctrl+Shift+D in vCenter to switch to the Datastore Clusters view, execute the unmount there and choose which hosts you want to unmount the datastore from.
- Now, while still under the Storage node, switch your view to Devices, right-click the NAA ID of the device and click on Detach. For more info on finding the NAA ID look up VMware KB2004605.
- Now double-check that the LUN has been properly unmounted by checking the operational status of the device, which should read "unmounted."
- Physically unpresent the LUN from the storage array controller software.
- Perform a rescan of the ESXi host.
Now if you are trying to unmount an RDM, first remove the RDM from the virtual machine and delete it from disk, then follow steps 5 through 8.
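If you prefer to script these steps, the unmount, detach, verify and rescan operations map to esxcli commands in ESXi 5. The sketch below only assembles the command strings; the datastore label and NAA ID are placeholders, and you would run the output on the host yourself:

```python
def lun_removal_commands(datastore_label, naa_id):
    """Return the esxcli equivalents of steps 5, 7, 8 and 10, in order.

    datastore_label and naa_id are placeholders -- substitute your own values.
    """
    return [
        # Step 5: unmount the datastore by its volume label
        "esxcli storage filesystem unmount -l %s" % datastore_label,
        # Step 7: detach the device by its NAA ID
        "esxcli storage core device set --state=off -d %s" % naa_id,
        # Step 8: verify the device status (should show off/unmounted)
        "esxcli storage core device list -d %s" % naa_id,
        # Step 10: rescan all adapters after the LUN is unpresented
        "esxcli storage core adapter rescan --all",
    ]

for cmd in lun_removal_commands("old_datastore", "naa.600508e0000000001234"):
    print(cmd)
```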
Posted by Elias Khnaser on 01/25/2012 at 12:49 PM
It's interesting to discuss mobility with the different departments in an organization. They each define mobility differently and want to address it separately.
Talk to the networking team and all they want to talk about is wireless and how to enhance their wireless infrastructure and what features they can offer for a better user experience. For them, BYOD is a matter of enabling their users to use these devices on wi-fi securely and effectively.
The systems group will talk to you about mobile device management: how they want to control these devices, secure them, push content to them, remotely wipe them and so on.
When you talk to the virtualization group, they want a mixture of both. They start off talking about desktop virtualization, and to them BYOD is about enabling users to access apps and desktops on any device. Some of the more savvy virtualization technologists will bring up VMware Horizon Mobile and Citrix CloudGateway, and some will ask about Microsoft System Center Configuration Manager 2012 capabilities.
The truth, folks, is that mobility is about all the above. In a nutshell, it is about:
User Experience -- This enables BYOD, desktop virtualization, mobile applications, SaaS applications, etc. It is important to note that when developing a BYOD strategy we can enable applications that were meant to run on mobile devices (I call these Post-PC era applications), but it is equally important to enable Windows-based resources (apps and desktops) on these devices as well. We could stretch this further and talk about enabling Dropbox-like technologies for data, but I think you get the idea.
Mobile Device Management -- The influx of mobile devices has created a nightmare for IT. Using MDM is absolutely imperative, but make no mistake: You don't want to manage the device here, folks. That creates a support nightmare and also opens your organization to legal issues. When adopting BYOD, manage the enterprise resources, not the devices. Don't worry about upgrading iOS; that is not your concern anymore. Do not approach BYOD as if it were a desktop from 10 years ago. Manage the enterprise resources and respect the user's privacy, data and applications.
Wireless -- No mobility project will ever be successful without a solid wireless infrastructure, so always remember when deploying any of these technologies that wireless is a critical component, and make sure your infrastructure is capable of delivering these services.
Now you can approach these pillars independently if you have a specific project, but my advice is before you take on any aspect of mobility, make sure you have communicated and collaborated with the other departments within your enterprise. These pillars are interdependent and collaboration is needed to make certain projects work properly. For example, if you are revamping or enhancing your wireless network and there is a desktop virtualization project going on, it is imperative that your wireless team is included in design and planning meetings. That way, they can adequately prepare for the different remote protocols that will be used, so they can address session drops when changing wireless access points, etc.
Communication goes a long way. It is high time we broke these silos that we have built in enterprise IT if we plan on delivering a proper service to the enterprise.
I welcome your thoughts!
Posted by Elias Khnaser on 01/23/2012 at 12:49 PM
In previous blog posts, I covered some new features of XenDesktop 5.5 and XenApp 6.5. I want to continue with that, and this time, let's look at the new QoS support via Multi-stream ICA/HDX.
First, some background: The ICA packet today is composed of multiple virtual channels that carry different types of data. For instance, each ICA packet could be carrying virtual channels of the following types: graphics, keyboard, mouse, audio, printing, clipboard, drive mappings, etc. The challenge for QoS today is that if I want to classify ICA/HDX traffic from a network perspective, I would have to apply the same classification to all the virtual channels inside that ICA packet.
Keep in mind that QoS has been available for optimizing the traffic within a single ICA packet for a while now, so you can prioritize virtual channels within the same ICA packet if you want to. But you are still carrying all the virtual channels, and network administrators cannot apply QoS inside the ICA protocol; they can only apply a class of service to the entire protocol.
Let's take an example where a remote office using a VoIP application is complaining of poor audio quality. Let's also assume that there is non-ICA traffic on the WAN as well, and you have determined that the WAN link is significantly congested. Our options here are limited. We can certainly play around and optimize the virtual channel responsible for audio, or we can engage the network administrator to raise the class of service of the entire ICA/HDX protocol running on port 1494 or 2598. By doing that, however, you are now prioritizing all the additional virtual channels inside the protocol, even though users were not suffering from any performance issues on those other channels. While you may have fixed the VoIP issue, you more than likely caused another one, because the ICA protocol is transporting so much more than just VoIP, and raising the class of service might affect other non-ICA traffic.
You can see how this is very limiting, and this is where multi-stream ICA comes into play: It allows you to establish multiple TCP connections (ICA runs on TCP) between the client and the server, each carrying different types of data. As a result, you can now associate different ICA virtual channels with different ICA streams and, in turn, with different classes of service so that you can easily prioritize them on the network. Furthermore, with XenDesktop 5.5, you can even enable an optional UDP stream if an application can take advantage of it.
Of course, using multiple TCP connections between the client and the server requires an architectural change in the way you design and deploy your Citrix technologies. Additional ports will need to be opened and configured on both the network and the server side, but multi-stream ICA gives you the granularity that you need to better control and classify the traffic on your network to give your users the best possible user experience.
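To make the stream-to-class-of-service idea concrete, here is a sketch of how four multi-stream ICA priorities could be mapped to TCP ports and DSCP markings. The port numbers and DSCP values are my own example assumptions, not Citrix defaults; you define the real ones via the Citrix Multi-Port policy and your network's QoS scheme:

```python
# Example mapping only -- the ports and DSCP values are assumptions, not defaults.
ICA_STREAMS = {
    "very_high": {"port": 2598, "dscp": 46, "carries": "real-time audio"},
    "high":      {"port": 2599, "dscp": 34, "carries": "graphics, keyboard, mouse"},
    "medium":    {"port": 2600, "dscp": 26, "carries": "clipboard, drive mapping"},
    "low":       {"port": 2601, "dscp": 10, "carries": "printing, bulk transfers"},
}

def dscp_for_port(port):
    """Look up the DSCP marking the network team would apply to a stream's port."""
    for stream in ICA_STREAMS.values():
        if stream["port"] == port:
            return stream["dscp"]
    raise ValueError("port %d is not one of the configured ICA streams" % port)
```

With a mapping like this, each TCP port can be classified independently, so raising the audio stream's priority no longer drags printing traffic up with it.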
If anyone is using this technology, I would love to hear how that is working out for you.
Posted by Elias Khnaser on 01/18/2012 at 12:49 PM
I was meeting with a new customer right before the holidays, and part of the conversation was that they were very unhappy with the performance of their blade system in a virtualized environment. They told me that they had benchmarked the same workload on rack-mount servers versus blades and saw CPU Ready Time drop significantly on the rack-mount servers. They asked if I had seen this with other customers because they were considering moving away from blades. This was a large customer with over 2,000 VMs in production.
A week later, in an amazing coincidence, I had another customer call and complain about the exact same thing, except this time the customer said that the workload would vary depending on the blade the VM was on. They were seeing this performance degradation on their VMware View environment running Windows XP with a single vCPU.
I recalled reading something of the sort on the VMware Communities forum about processor power management causing VMs to run sluggishly, so naturally we checked that first. In both cases, processor power management was set to Dynamic at the blade BIOS level. Keep in mind that processor power management can be managed in two locations: in ESX or in the hardware. It's also worth mentioning that the only time ESX can actually manipulate processor power management settings is if the hardware is configured to "allow OS-controlled power management." During our inspection we found that this was not the case, so we started looking at the hardware. There, we found that it was set to Dynamic. As soon as we disabled that setting, the environment started to function perfectly again.
The processor power management setting mostly affected workloads that are CPU latency-sensitive, such as a VDI or XenApp/Terminal Server environment. Still, I strongly recommend that you disable this feature unless there is a compelling reason to use it. VMware also published a knowledge base article on this issue, found here.
If you have experienced this, drop a note in the comments section so that we can help others.
Posted by Elias Khnaser on 01/09/2012 at 12:49 PM
As we bid farewell to 2011 and welcome 2012, I figured we would end this blogging year with a reading of the crystal ball "Eli style." It has been a great year and I have enjoyed all the comments and the responses that I have received from all of you on all the social media channels I am connected on. That being said, let's see if we can end this year with a bang of predictions:
I am sure you were expecting this one from me, so I will keep it brief. In 2012, the adoption of desktop virtualization will continue to grow, and enterprises will recognize that the effects of consumerization will force them at some point to start rolling out Windows 8 even as they've yet to complete, or even start, their migrations to Windows 7. Couple that with the end of support for Windows XP in 2014, and I think 2012 will be the year of the desktop.
Most of my clients tell me that they won't move to Windows 8, but my take is that it is really not their choice to dictate anymore. Windows 8 will make a splash in the tablet arena, the Metro-style OS will take off, and consumers who are buying these new devices, getting used to Metro and not wanting to go back to Windows 7-style computing, will force IT to make the move. Desktop virtualization will be used to deliver a choice of Windows 7 or Windows 8 and a slew of virtualized applications.
Mobile Device Management
MDM in 2012 will most certainly take off. I am predicting that cloud-based MDM solutions will dominate, but as the number of mobile devices increases and their use for accessing enterprise resources solidifies, most enterprises will be seriously looking at an MDM solution -- not necessarily to manage the endpoint, but rather to manage the enterprise resources on that device.
My favorites to watch in the MDM space are OpenPeak, MobileIron, Citrix CloudGateway and VMware's Horizon Mobile. Keep in mind these solutions do a lot more than just MDM, but since the term MDM has come to summarize a lot of other things, like mobile application management and security, I figured I would just use that term.
The increased number of mobile devices automatically translates into an increased volume of user-generated data and automated application-generated data. Couple that with the amount of data generated out of social media -- especially as that data makes its way into the enterprise -- and you end up with an enormous amount of data that will overwhelm the current enterprise infrastructure from a hardware perspective as well as from an information management perspective.
The increasing volume of unstructured data will inevitably lead companies to start investigating options not only around how to contain and manage this volume of structured and unstructured data, but also how to mine it and leverage it for competitive advantage. We will see this accelerate in the enterprise as Microsoft and Oracle adopt Hadoop behind their databases. Currently, IBM offers Big Data through InfoSphere.
While I think that cloud computing in general will accelerate significantly in 2012 with IT organizations offloading more IT tasks like collaboration, mail and others to the public cloud, I see 2012 as being the year of the hybrid cloud. I think IT will finally come to terms with the fact that expanding the enterprise infrastructure does not make much sense moving forward.
Challenges like Big Data and other technologies will make it very expensive for companies to continue to invest in enterprise infrastructure when a cheaper alternative is available. Take something like VDI, for instance, which at some point will end up in the cloud -- probably not in 2012, but it sure makes financial sense to have it there, especially if your current colocation datacenter provider offers cloud solutions; it's then simply a matter of deploying compute resources to the public cloud. This and many other reasons lead me to believe that hybrid clouds will be the talk of the town in 2012, with products from Citrix, F5 and Cisco leading the charge. VMware will be there, of course, but in VMware's case I think you can only extend to a vCloud and not any cloud.
The Facebook generation is starting to make its way into higher positions in the enterprise, and as that trend accelerates, so will the adoption of new collaboration methods. I personally think that email's usefulness will be lessened in the enterprise, with Facebook-type solutions taking over. That, coupled with Dropbox-like solutions for the enterprise, will almost wipe out enterprise intranets. Microsoft will significantly enhance SharePoint or face stiff challengers in the enterprise space. Here, I really like VMware Socialcast and the Citrix GoTo portfolio, in addition to the work Cisco is doing in this space and, to some extent, what Microsoft is doing with Lync.
Automation & Orchestration
Key enablers of private clouds, automation and orchestration tools will be another highlight of 2012. We will see significant consolidation in this space, with potential acquisitions of companies like Cloupia and Gale Technologies. Citrix's message with its acquisition of Cloud.com and CloudStack is spot on, and I like what VMware is doing with vCloud Director. I think Microsoft has a strong solution in SCCM, but I do believe that at some point it has to be broken into a separate product, as the SCCM suite has grown significantly in size and complexity.
Finally, the biggest innovation in the coming years will happen around storage. In 2012 we will see larger adoption of flash in the enterprise as the cost becomes reasonable and the technical barriers are abolished. I also believe that ever-increasing drive capacities and the volume of current and expected data growth will force enterprises to demand either a replacement technology for RAID or an evolution of RAID, and I believe trends in both directions will rise. My more specific prediction is that RAID will probably evolve into some form resembling IBM's RAID-X implementation; an iteration of that type of RAID can accommodate larger drive capacities with significantly lower rebuild times. I would keep an eye on EMC, HDS and IBM in this space. I also think that technologies like erasure codes will be given serious consideration.
As always I would love to hear your comments and perspectives on these predictions, where you think I'm spot on and where you disagree with me. I also want to take this opportunity to thank you for reading the blog this past year; I hope I was able to positively contribute to your knowledge, give you insight and a different perspective on things. If you have suggestions for topics you want to hear more about in 2012, send me e-mail or tweet or Facebook me -- the social revolution is at your disposal and I am available and always online.
Posted by Elias Khnaser on 12/29/2011 at 12:49 PM
With the release of VMware vSphere 5, VMware added GUI capabilities that allow you to tweak how many virtual CPUs, and how many cores per virtual CPU, each VM is configured with. This is very useful because you often come across operating systems and even applications that are limited in the number of CPU sockets they support; conversely, the ability to add cores per virtual CPU socket can increase performance quite a bit. The GUI is a cool new addition, but the technology exists in vSphere 4.1 as well, albeit not very apparent, and it requires manual configuration.
Before I go any further and delve into how to configure this in vSphere 4.1, it is worth noting that this technology was ported from VMware Workstation, where it has existed for many versions now. VMware tends to release features and functionality into its Type-2 hypervisors first as a way of vetting stability and functionality before releasing them in the enterprise products -- not a bad approach at all.
For those of you that want to configure virtual machines with multiple vCPUs and multiple cores per vCPU on vSphere 4.1, follow these steps:
- Right-click a VM and click on Edit Settings
- Click on the Options tab
- Choose General in the Advanced Options list
- Click Configuration Parameters
- Click Add Row
- Add cpuid.coresPerSocket in the name column
- Enter the number of cores you want in the value column. A value of 2, 4 or 8 is valid; note that 8 cores requires Enterprise Plus licensing.
- Click OK and Power On your VM
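Under the hood, those steps simply add a line to the VM's .vmx file. For example, a VM with four vCPUs presented as two dual-core sockets would carry entries like these (the values here are just an example; numvcpus is the total vCPU count, which the GUI sets for you):

```
numvcpus = "4"
cpuid.coresPerSocket = "2"
```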
It is worth noting here that when using this feature, CPU Hot Add/Remove is disabled. This is a really cool feature, and for those who are not ready to go to vSphere 5 just yet, I wanted to make sure that you were aware of it.
Posted by Elias Khnaser on 12/20/2011 at 12:49 PM
Virtualization has changed the game as far as how we troubleshoot issues in our environment. It is no longer as simple to determine as it was before. Now that resources are shared, it is sometimes tricky -- is it a VM problem? Or is it the network? Or compute? Or all of the above?
That being said, at the heart of every virtualization infrastructure today is storage. Some have large, enterprise-shared storage of different sorts, while others have smaller ones, but we all have shared storage one way or another. Monitoring the latency between VMs and the storage that they're running on is crucial. SolarWinds for its part has released a really cool and free storage response time monitoring tool that plugs directly into your VMware infrastructure and reports back on virtual machines' IO.
The cool thing about this tool is that it will also break down the latency metrics, separating the time spent in the host (kernel) versus the time spent on the device (SAN). This information can prove very valuable to any virtualization administrator when dissecting a technical issue.
Here are some of the things you can monitor and measure:
- Host to datastore enumerated by the worst response times
- The busiest VMs from an IO perspective
- Kernel versus device latency metrics
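The kernel-versus-device breakdown in that last item follows the same arithmetic esxtop uses: the latency a VM sees (GAVG) is roughly the time spent in the ESXi kernel (KAVG) plus the time the device took to service the IO (DAVG). Here is a small sketch of that triage logic; the thresholds are rule-of-thumb assumptions of my own, not SolarWinds or VMware numbers:

```python
def diagnose_io_latency(kavg_ms, davg_ms):
    """Attribute IO latency to the host kernel or the storage device.

    GAVG = KAVG + DAVG. The thresholds below are rule-of-thumb assumptions.
    """
    gavg_ms = kavg_ms + davg_ms
    if kavg_ms > 2:
        culprit = "host kernel (queuing -- check HBA/LUN queue depths)"
    elif davg_ms > 25:
        culprit = "storage device (look at the array and the SAN fabric)"
    else:
        culprit = "none -- latency looks healthy"
    return gavg_ms, culprit
```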
This is just a subset of what you can do with this free tool, which I have already added to my toolbox.
SolarWinds has a broader portfolio of products and its acquisition of Hyper9 not too long ago gave it a significant foothold in the virtualization market. I will cover the other SolarWinds products in a later blog but I wanted to share this tool with you. You can download it and a slew of other SolarWinds free tools here.
Posted by Elias Khnaser on 12/15/2011 at 12:49 PM
One of the coolest features of Citrix XenApp 6.5 has got to be the new Instant App Access feature, which dramatically reduces the amount of time it takes to launch published applications. Instead of the traditional login-and-wait while your profile is created locally, settings are applied, policies processed and so on, Instant App Access gives instant access to your applications -- as soon as you click on an application you are ready to use it.
The magic behind this functionality is quite simple. As soon as you log in to Citrix Receiver and your applications are enumerated, an empty session is opened for you in the background on the XenApp server. That empty session handles the entire pre-launch process, so that when you click on a published resource, you get it instantly without waiting. It is a very smart way of masking the process and significantly enhancing the user experience.
I'm sure you have a ton of questions. Here's one you might be wondering about: What happens if I silo my applications on servers? If each application gets its own set of servers and I have 10 silos, does that mean the user will have an empty session in each silo? The answer is simply yes. If you use silos, that is exactly what happens, and there is really no way around it.
Now, of course if the application silo contains 10 servers, it will open up one session on the least busy one, not a session on every server. That being said, your second question might be: Will that not consume resources on my XenApp servers? Again, the answer is yes it will. But in this case, the resources are minimal and considering XenApp 6.5 is a 64-bit only application, I don't see the resource issue being as big a deal as it is in the 32-bit versions. And if you are worried that this might consume a license, you are correct -- it does! However, keep in mind that the user logged in to launch applications. The good thing is if the user does not launch an application within a given time, it puts that empty session in disconnect mode, which then releases the Citrix license.
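The license behavior is easier to picture as a tiny state machine. This is a toy model of my own, not Citrix code; the timeout value in particular is an assumption (the real timeout is configurable):

```python
class PrelaunchSession:
    """Toy model of an Instant App Access pre-launch session."""

    IDLE_TIMEOUT_MIN = 15  # assumed value; the real timeout is configurable

    def __init__(self):
        # The empty session is opened as soon as the user logs in to Receiver,
        # which is when the Citrix license is consumed.
        self.state = "active"
        self.license_consumed = True

    def idle(self, minutes):
        """If no app is launched within the timeout, the session is
        disconnected and the Citrix license is released."""
        if self.state == "active" and minutes >= self.IDLE_TIMEOUT_MIN:
            self.state = "disconnected"
            self.license_consumed = False
```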
This is just one of the features of the new XenApp 6.5. In the future, I'll cover some other new features such as the multi-stream ICA protocol. Stay tuned!
Posted by Elias Khnaser on 12/12/2011 at 12:49 PM
It's an exciting time to be in technology. As IT professionals, we get to say we are the generation that witnessed how Big Data transformed the world. We saw how virtualization revolutionized IT and gave birth to the cloud. But more importantly, we get to say that we are the generation that witnessed how technology helped us feed the world quicker, warm the world quicker, cure the world and maybe even heal the world.
You see, everything we do on a daily basis as IT professionals means something if we can put it in context. They say every action generates a reaction; an event could lead to a sequence of events or have a trickle effect. But there is so much data out there, so much information, and our brains can only process small nuggets, can only address information from a very narrow perspective.
Bottom line is, we are slow, and most research takes years. It takes years because it is necessary to correlate data and make sense of it. So, what if there was a way to correlate data and all sorts of information in real time? If that were possible, would you not be able to make better decisions for your business? Would that not enable you to find cures for disease faster? Or maybe even avert epidemics altogether? Maybe even avert wars? Stop terrorist attacks? The possibilities are limitless. They say society is knowledge, and knowledge is power. Well, Big Data promises to deliver information of all sorts, correlate it and analyze it so that you can make a better, more informed decision in real time.
Enough philosophy -- let's look at an example. What if a shortage of certain foods that are high in certain vitamins were responsible for people getting sick? Getting the flu? This type of data research and analysis would take years to gather, understand and conclude. The data would be so old and after-the-fact that it would only be useful in university studies and research papers. So, what if that information were available in real time? What could government do with it? How about alerting the communities most affected and suggesting certain vitamin intakes, or perhaps the consumption of other foods high in that vitamin? What about pharmaceutical companies? They could use that data to develop a cure for a flu variation that has not spread yet. You may say that is all for the large enterprise; I say Big Data is for enterprises of all sizes. If you owned a pharmacy, would this information help you? Absolutely it would. Based on Big Data analysis, you might stock more of a certain type of medicine.
That's just one example of many of how Big Data can impact and better our lives. But processing that much data requires a platform capable of handling the volume, computing it fast enough and scaling as fast as we generate data. Today, there are many platforms capable of doing that. Hadoop is one such platform: Capable of ingesting structured and unstructured data, it is quickly becoming the platform of choice and has garnered support from the three largest database developers: Microsoft, Oracle and IBM.
The power of Hadoop is that you can deploy it on standard x86 computers and scale it by adding more nodes -- a true implementation of grid computing. As a result, Big Data can be leveraged by large and small enterprises alike. And as the data grows and your ability to manage it becomes more challenging, there is always the cloud, ready and able to address these concerns: able to scale, absorb large quantities of data and supply vast computational resources.
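To illustrate the programming model that makes that scaling possible, here is the canonical word-count example in map/reduce form. This is a single-process Python toy, not actual Hadoop code, but the mapper/reducer split is exactly what lets Hadoop farm work out across nodes:

```python
from collections import defaultdict

def mapper(line):
    # Runs independently on each node: emit a (word, 1) pair per word.
    return [(word.lower(), 1) for word in line.split()]

def reducer(pairs):
    # Hadoop shuffles pairs to reducers by key; each reducer sums its keys.
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

lines = ["big data is here", "big data is big"]
pairs = [pair for line in lines for pair in mapper(line)]
print(reducer(pairs))  # prints {'big': 3, 'data': 2, 'is': 2, 'here': 1}
```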
Make no mistake, Big Data is here, and enterprises that appreciate it and leverage it will reap the benefits and grow substantially. Big Data is the first time I can appreciate the technology beyond the geeky aspect of engineering or putting it all together, knowing that our efforts make a difference in the world we live in.
What are your thoughts on Big Data? Comment here or send me an e-mail.
Posted by Elias Khnaser on 12/05/2011 at 12:49 PM
As you plan your migration from ESX to ESXi, you should be pleased to learn about the ESX System Analyzer, a tool from VMware that helps you gather information about your ESX environment to ensure a successful transition to ESXi. As consultants, admins and engineers, our most valuable resource is information; once we have all the information we need, we can design the best solutions. ESX System Analyzer collects the following information for your review, design and planning exercise.
First, it does the following to collect ESX Host Information:
- Evaluates the hardware that ESX is installed on and determines if it is compatible with ESXi. This is an important first step: If your current hosts don't meet the minimum hardware requirements, your planning and design exercise turns into a procurement exercise.
- Spots dependencies or modifications made to the Service Console. Since with ESXi there is no Service Console where these modifications can live, it is important to know that your current environment has that dependency so that you can plan accordingly for an alternative solution. It will scan for files that have been added or removed, cronjobs, users, etc.
- Analyzes VM datastore locations. This is an important exercise because it determines where the VMs are located and whether or not you need to migrate them off to a temporary or permanent location as you go through the upgrade.
Now for VMs, the tool collects the following data:
- Virtual machine virtual hardware version. This is not a showstopper, since the newer ESXi version is backward-compatible, but it is good information to have for future upgrades.
- VMware Tools version is important, again for putting together a proper upgrade plan for these VMs and a reboot schedule for the upgrade to successfully complete.
The tool can also collect the VMFS version on the datastore; again, this is information that is useful depending on how you plan to do the migration to ESXi.
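Taken together, the tool's output amounts to a migration readiness report. The sketch below mimics that logic with made-up field names of my own; the actual ESX System Analyzer gathers and presents this data for you in its own format:

```python
def migration_flags(host):
    """Flag items an ESX-to-ESXi migration plan must address.

    `host` is a dict with illustrative keys -- not the tool's real schema.
    """
    flags = []
    if not host.get("hardware_on_esxi_hcl", False):
        flags.append("hardware not on the ESXi HCL -- involve procurement first")
    for mod in host.get("service_console_mods", []):
        flags.append("Service Console dependency needs an alternative: " + mod)
    for vm in host.get("vms", []):
        if vm["tools_version"] < host["target_tools_version"]:
            flags.append(vm["name"] + " needs a VMware Tools upgrade (schedule a reboot)")
    return flags
```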
VMware is diligently facilitating the migration to ESXi by releasing tools like the ESX System Analyzer. I recommend you download it here and give it a shot.
Posted by Elias Khnaser on 12/01/2011 at 12:49 PM
Mobility is defined mainly by whom you ask to define it. Ask Citrix, and mobility is all about apps, desktops and data on any device. Ask Cisco, and mobility might be about wireless. Ask other folks in the industry, and you will learn that mobility is about mobile device management. Ask me, and I say mobility is about all three combined, not separated, as they fulfill complementary roles to one another.
Desktop virtualization is definitely a building block when it comes to mobility. The influx of mobile devices, from smartphones to tablets, has invaded enterprises, and users have demanded that their applications, desktops and data be accessible on these devices. Desktop virtualization directly addresses these concerns and enables users to access applications, desktops and data on any device. Still, that is not the sole definition of mobility. How do we control these devices? How do we separate personal data from business data, and how do we secure it, encrypt it, remotely wipe it? All these questions and more lead me to Mobile Device Management, aka MDM.
While desktop virtualization is very important in an enterprise's mobile device strategy, you will quickly find that while not every user needs access to apps, desktops and data, every user most definitely has one or more mobile devices at any given time. These devices are now going to be connected to the enterprise network and could pose a potential security and regulatory risk. So how do we control them? There are two popular approaches. There is the traditional premise-based approach of installing software and using it to manage these devices. There are also now plenty of choices in the cloud, in the form of SaaS, which allow you to bring order to this chaotic spread of consumerization.
The premise-based approach is a traditional model where software like Microsoft's upcoming System Center 2012 can be deployed in the enterprise to manage mobile devices of different makes and models. This approach of course comes at a premium -- you have to invest in acquisition costs for software and hardware, in training, in ongoing management costs and of course in upgrades and upkeep of the environment. The solution, however, is quite impressive and very feature-rich. I have a detailed article coming up in the print version of Virtualization Review specifically about Microsoft SCCM 2012, so make sure you pick up a copy.
And then you have the cloud! SaaS offerings in the MDM space have become very popular and very feature-rich. Companies like MobileIron, OpenPeak, AirWatch and others are offering alternatives that don't require a CapEx investment and make for a very compelling OpEx play, all the while shortening the learning curve and time to deployment. But even in the SaaS approach you have choices. After all, it is in the cloud, so the sky is the limit. When selecting a cloud-based SaaS for MDM, it is important to know what type of platform the vendor is offering. There are two approaches:
- Native management of the device--This approach manages the device natively, which means it does not isolate the enterprise data in any container. Instead, it manages the device and the applications installed on it, and secures the entire device.
- Secure container--This approach is exactly what the name implies: Vendors deploy a secure container that holds the corporate data and applications, and they manage that container on the phone rather than the entire phone.
Now, with the second approach I am speaking in general terms -- there are vendors who will offer a feature or two here and there that goes a bit beyond this definition, but in general the idea is to manage the container. VMware's Project Horizon Mobile, for instance, does just that: It deploys a virtual phone, complete with applications and enterprise security policies, and allows you to control and manage that portion of the phone, leaving the user's personal side of the device completely intact. It is worth mentioning here that Project Horizon Mobile is strictly a smartphone solution. As a result, you cannot use it on other devices yet, and support for operating systems other than Android does not exist just yet.
Now in closing, I want to leave you with wireless. Many organizations forget the effect of consumerization on the organization's wireless infrastructure. As you craft your mobility strategy, it is imperative that you assess your current wireless infrastructure, understand the current and expected load, and then design accordingly, especially if you will be deploying desktop virtualization and extending it to mobile devices. Remember, if your infrastructure is not solid, everything else you build atop it will also not be solid.
I would love to hear your comments on how you are going about dealing with mobility in your enterprise, what some of your challenges are, and how you propose to address them. Post here or send me e-mail.
Posted by Elias Khnaser on 11/29/2011 at 12:49 PM | 6 comments
While browsing through the Citrix downloads section, I noticed that the Citrix Universal Print Server (UPS) had snuck up on me, poked its head out and said hello. Well, hello back! You are finally here? No, not yet, I am still a technology preview. Darn it.
So what the heck is the Universal Print Server? UPS brings the Universal Print Driver's Enhanced Metafile (EMF) format functionality to network print servers. (In English, Eli, English, you might be saying...)
Let's take a stroll down memory lane and see what we have today. Citrix has had a Universal Print Driver (UPD) for a while now. The UPD delivers a stable print driver that works across printers regardless of make or model. Why? Well, for those of you who don't know, printer drivers are the source of all evil when it comes to server-based computing, especially Remote Desktop Session Host, a.k.a. Terminal Server, and Citrix XenApp. Incompatible print drivers can cause services to crash, applications to open or perform very poorly, and so on.
Using Citrix's UPD ensures a stable environment. The Citrix UPD is based on Windows EMF, which means you no longer have to install any native or third-party drivers on the client device in order to print to a printer. It provides for true device independence. The problem, however, is that this technology was only available for client printing, not for printing through a network print server. As a result, printers that were attached to the client device could use UPD, whereas printers going through a network print server still needed the printer driver for printers installed on XenApp servers.
Here's where the problem lies: In many organizations, the Citrix team does not control or have input into which drivers get deployed to the network print server, and these drivers do not get tested for stability or compatibility with XenApp, or RDS for that matter. The result is a printer driver that gets automatically installed on the XenApp server and may or may not be compatible with XenApp, causing all sorts of issues, some apparent and some very devious.
Of course, one can take any number of precautions: mapping different print drivers through XenApp, working with the other teams to test and deliver stable print drivers, and so on. There is a lot one can do to address and mitigate the issues, but wouldn't it be nice if there were a way to print to network print server-based printers independent of native and third-party print drivers? That is exactly what the Citrix Universal Print Server gives you -- device independence. You no longer need to install or test any drivers on XenApp; you can use the Windows EMF format to print directly to the printers.
The installation and configuration of the Citrix UPS is very straightforward. You can install the server component on a XenApp 6.5 server or XenDesktop 5.5 VM. It installs two Windows services, UPS and CPG. After the installation is complete, you will notice that the Citrix (JDX) policies have been updated and you now have the ability to configure UPS. Configuration is very simple: One option is to enable or disable UPS; the other is to enable UPS with native failback, which means that if the UPS policy is enabled but the server components are not detected on the endpoint, printing reverts to the old native approach of requiring a driver. There are other configurable parameters that deal with port numbers and so on, but from a functionality standpoint, you only have to worry about configuring two settings.
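After installing the server component, it is worth confirming that the two Windows services it adds are actually running before troubleshooting policy settings. A minimal sketch from an elevated command prompt follows; note that the exact service names below are my assumptions, not documented names, so verify them in the Services console (services.msc) first.

```shell
:: Sketch: check the state of the two services the Citrix UPS server
:: component installs (UPS and CPG). The service names here are
:: assumptions -- confirm the real names in services.msc.
sc query "CitrixUniversalPrintServer"
sc query "CitrixPrintGateway"
```

If either query reports a state other than RUNNING, enabling the UPS policy on XenApp will simply trigger the native-driver failback (if you configured it) rather than EMF-based printing.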
I am a little disappointed that the technology will only be available for XenApp 6.5. I think many customers could use this functionality in their environments today and not everyone has made the transition to a 64-bit operating system yet. All in all, this is a welcome and overdue technology, so I hope it gets out of preview mode soon.
Posted by Elias Khnaser on 11/17/2011 at 12:49 PM | 1 comment