Disaster Recovery as a Service Considerations

This blog is definitely getting cloudy, and it has been for the last few weeks. I can't help it: the conversations I'm having with customers are all about the cloud.

Many organizations are exporting certain workloads -- like messaging and collaboration -- to the cloud. You can throw disaster recovery in with them. Many of my clients tell me that they cannot justify paying a hefty price for a secondary DR site to protect against disasters that may never happen. For those clients, DR is a workload that is well suited for the cloud.

As with any other "as a service" offering, there are tons of providers out there offering these services. So, this week I offer a few considerations when choosing the best provider to meet your company's expectations:

  • DRaaS assumes a highly virtualized environment; any workload to be restored must meet the prerequisites for virtualization, so don't expect an AS/400 to be supported. Some customers run large SQL clusters that mix physical and virtual nodes, and those can still be replicated. In short, as long as a workload can be virtualized, there is a way.
  • Hopefully you have completed a business impact analysis and prioritized your applications and data; that prioritization determines which servers and data take part in the DRaaS offering. Going through this exercise also makes pricing easier and clearer.
  • Many DRaaS providers offer failover, but very few offer failback. As you weed out providers, make sure you pick one with failback capabilities; without it, you will end up stuck on their DRaaS platform.
  • Make sure that your Service Level Agreements are clear and are aligned with your business expectations.
  • Inquire and understand where the delineation of responsibility occurs between the cloud provider and enterprise IT. Cloud providers will typically stop at the operating system: once the VM is powered on, the OS comes up and the data is connected to the VM, the provider's responsibility ends. It is then up to your team to make sure the applications are functioning properly (a sketch of such a post-failover check follows this list).
  • Assess your own capabilities. If you determine that in the event of a disaster you will need more technical support, ask about the provider's ability to supply systems engineers, and have them specify how quickly such resources will be available to you.
  • Understand clearly how long your cloud service provider will allow you to run on DRaaS before moving you to a different service level. For example, if you fail over and end up running on the DRaaS platform for two months, does it make financial sense to move you to an IaaS offering that would be more cost effective? DRaaS engagements have a timeline, so once again, understand how long you can stay on the platform.
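
Picking up on that delineation of responsibility, a simple script can take over where the provider stops. Below is a minimal post-failover smoke test, a sketch only: the hostnames, ports and health URL are hypothetical placeholders for whatever tiers your business impact analysis flags as critical.

```python
# Post-failover application smoke test -- a minimal sketch.
# All hostnames, ports and URLs are hypothetical placeholders.
import socket
import urllib.request

TCP_CHECKS = [
    ("db01.dr.example.com", 1433),   # database tier
    ("app01.dr.example.com", 8080),  # application tier
]
HTTP_CHECKS = ["http://web01.dr.example.com/health"]

def port_open(host, port, timeout=5):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host, port in TCP_CHECKS:
    print(f"{host}:{port} -> {'OK' if port_open(host, port) else 'FAILED'}")

for url in HTTP_CHECKS:
    try:
        code = urllib.request.urlopen(url, timeout=10).getcode()
        print(f"{url} -> HTTP {code}")
    except OSError as err:
        print(f"{url} -> FAILED ({err})")
```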

Workload after workload is finding the cloud a formidable fit, one that meets a critical business requirement in an easy and cost-effective manner.

Are you having a DR conversation internally? If so, are you considering DRaaS? Please share your comments here.

Posted by Elias Khnaser on 05/15/2013 at 1:30 PM


DaaS vs. IaaS for Desktops

Let's continue the cloud conversation I brought up in last week's blog, but this time on another topic that has gained steam among my customers in the last few weeks: Desktop as a Service. Customers are now asking, why DaaS instead of VDI?

I don't want to turn this blog into a comparison between the two; that fight has been discussed to death in other forums. Still, I'd like to highlight a few things DaaS needs before it is a viable alternative to VDI. The biggest hurdle is Microsoft licensing. At the moment, the company doesn't have a Service Provider License Agreement for its desktop operating system products, which means customers have to provide their own Microsoft licensing to their DaaS provider. I have a problem with that -- without one, it gets very complicated, even more so than VDI. Besides, it is then no longer provided in an "as a service" model.

Here's another hurdle: DaaS providers are delivering Windows Remote Desktop Session Host desktops and accessorizing them with a Windows 7 theme, which presents its own set of challenges with apps and other considerations. There is also the concern over data ownership and compliance. Most important, DaaS would be limited to SaaS applications or Windows applications that are self-sufficient, meaning they don't need access to corporate data or back-end databases. These are just quick examples of some of the show-stoppers I see at the moment.

That being said, I believe DaaS will eventually surpass VDI adoption once some of these obstacles are addressed. However, one alternative I have not heard many discuss yet is IaaS for desktops, which I believe will be the most popular and cost-effective solution. With DaaS, you lose a lot of control over desktop customization and management flexibility, which is understandable: service providers cannot handle an unlimited number of change requests, as that becomes a nightmare to manage and support.

IaaS for desktops brings the cost savings that IT is looking for, liberates IT from the hassle of managing the infrastructure, and remains customizable to meet an organization's requirements at an SLA level. Instead of the service provider delivering the desktop, it provides customization at the IOPS level, the compute level and so on. All IT has to do is deploy and manage its VDI environment atop an IaaS customized for desktop workloads. In this scenario it makes a lot more sense for enterprises to acquire their own Microsoft licensing, as they are simply swapping the physical infrastructure in their datacenter for a cloud-based IaaS designed for desktop workloads. They still have to do everything else.

It's a scenario that can give IT that warm and fuzzy feeling of still being in control while leveraging the cost savings and scale of the cloud. There are still challenges to overcome: if your data and applications are not also in the cloud, preferably on the same IaaS under a different SLA, you will face performance issues. But if you entertain the idea I raised last week, that in five years most enterprises will be on an IaaS platform instead of their own datacenter infrastructure, then this approach makes a lot of sense.

Put all your technical reservations aside for a second when you think of DaaS. We understand the separation of data from compute will not give optimal performance, but then again, I'm assuming in five years IaaS will be the norm. With those caveats, do you see a DaaS or an IaaS for desktops scenario being more relevant? Share your comments here.

Posted by Elias Khnaser on 05/06/2013 at 1:31 PM


Enterprise IT Will Be Out of Infrastructure Biz in 5 Years

If your blood is boiling and you want to skip to the comments section and let me have it, that's fine. Just keep in mind that I purposefully wrote the title to be controversial, to open up a conversation that I believe is very relevant and that will shape enterprise strategies and the entire ecosystem of supporting products and services.

Let's examine the facts, shall we? Today, most (though not all) datacenters are colocated and typically run "lights-off." That is, while we still purchase, lease and manage the infrastructure, it is very much hands-off. Most of our operations are focused on the virtual infrastructure and what is left of legacy systems, like the AS/400. Compare and contrast this with what enterprise IT was like in the late 1990s and you will find it very different. Back then, the focus was heavily on the hardware; I still remember that every job I applied for required experience with specific hardware. Now, that requirement has almost disappeared.

Fast forward to today, and the cloud is finally staking a claim. The explosive growth and success of Amazon Web Services is a good indicator of a market shift. Couple that with Google's plans to enter the public cloud IaaS space, Microsoft's Azure already battling AWS, IBM's already strong cloud presence and VMware about to enter the market -- and that's just to name a few.

Now, I know that when I bring up this subject I am immediately confronted with security, regulation, cost and a whole slew of other very valid concerns or reservations. But remember, I give it five years before enterprise IT exits infrastructure. So, if we assume that in five years most of the regulatory compliance, security, connectivity and other issues will be resolved, what other barriers remain to running your business completely on an IaaS?

You might be asking, what about legacy systems, large data sets or storage requirements? As far as legacy systems are concerned, in five years they'll probably be where they are now, considering that they can't be virtualized.

Storing large data sets, now that's a different story. Take Amazon's storage strategy: one can easily make the case that storing large data sets on Amazon today is already cost-effective, given its sophisticated methods for moving data into its infrastructure. But even if we assume we cannot move large amounts of data today, isn't it reasonable to think that in five years, given the tech landscape and the amount of competition in this particular area, hosting your data in a public cloud will be far more appealing than the cost and operational hassle of managing and refreshing storage on premises? We already have examples of enterprises moving large amounts of data into Amazon. That's happening today; in five years, with all this momentum, I would say it'll be a given.

Let me give you more evidence that this shift is actually happening. Most vendors now -- Cisco, NetApp, EMC and others -- maintain approved cloud service provider lists that they use to compensate their sales teams. That is, if a NetApp rep puts a customer into an IaaS cloud provider, the rep makes a commission. Now, you are probably wondering, why would they do that? For starters, NetApp recognizes that this trend is unstoppable and that it needs to find a way to continue to monetize its hardware. In turn, it finds a new type of customer -- cloud service providers. Vendors make deals with cloud service providers to compensate their teams for bringing customers into the cloud, provided that these cloud providers dedicate some or all of their spend to these manufacturers. Now EMC can sell massive amounts of storage to cloud providers, and the same goes for Cisco and the rest of them.

By now you are at least thinking it's not absurd and I have a point. Now, what happens to current jobs? We have millions of architects, engineers and experienced folks that design these large, complicated networks, storage networks, and so on. Well, their skill sets will evolve to manage everything in a "software-defined" infrastructure, or they will go work for these large cloud providers who will need their skills as they scale out massive infrastructures in multiple regions.

But will we really abolish all infrastructure from enterprise IT? I don't think so. With the advent of the Internet of Things (IoT) and the potential for billions of new smart devices to be connected and brought under the control and governance of IT, the network infrastructure will still exist in the form of wireless and LAN. Management could be cloud-based, but equipment will reside on premises to provide connectivity. In a nutshell: new roles and jobs will be created in IT to manage and govern these smart devices, but there will still be a role for traditional networking engineers. It will just be a bit different from what it is today.

I am eager to hear your thoughts. What would prevent you, as an organization that is 80 percent or more virtualized, from completely moving to an IaaS provider? If there are constraints today, will they have eased enough in five years for infrastructure to be abolished? What have I missed that could be a show-stopper? Please note that I am a huge believer that enterprise IT will never be able to build infrastructures as reliable or secure as those of the public cloud providers, simply because of the sheer size and expertise that goes into building them.

Posted by Elias Khnaser on 05/01/2013 at 1:29 PM


Why Enterprise Content Management Systems Matter

Data is the most important asset an organization has, so giving users immediate access to relevant data is the most important enabler an organization can offer its employees for sustained, competitive success in the marketplace. Today, enterprises are struggling with the amount of data they have to work with and how to properly classify it and deliver it to end users. I recently had a conversation with one of our customers on exactly this topic, and the customer had many valid questions:

  • Should I use SharePoint?
  • Do I need object-based storage?
  • Can I leverage cloud storage?
  • Do I need and can I use enterprise file sharing?
  • What do I do with applications that generate reports which users need access to?

This is just a sample, but from those it was very clear that the customer needed an enterprise content management strategy.

This is truly humorous: if you've followed my blog long enough, you will have noticed that I recommend a strategy for almost every problem I am confronted with -- enterprise mobility strategy, enterprise cloud strategy and so on. It is important to realize that a shift has occurred and that everything is more interconnected than it ever was in the past. As a result, we can no longer address projects independently of an overall strategy, because one project affects another in one way, shape or form.

So back to the problem. The outcome of our engagement was to start with the business owners. As I have always said: there are no IT projects, only business projects. We conducted a proper data assessment and established a governance model, which the customer had already started with the SharePoint project. To that we added regulatory and security considerations.

In the end, SharePoint most definitely has its place in the grand scheme of things. But enterprise file sharing is also incredibly useful as users become more and more mobile, and it is imperative that IT enable them to access their data without using a VPN. That being said, enterprise file sharing is good for personal documents -- good for replacing your "I" or "H" or "O" drive. It is not meant to eliminate file servers in an enterprise. File servers still have a role to play for certain high-IO applications, or even simply to store Outlook PST files.

Data tiering and object-based storage could also play a role in this particular instance. Data tiering can be extended to the cloud using services like EMC Atmos or Amazon S3. The end result is a centralized access point for SharePoint, enterprise file sharing and all the report-generating applications, since those applications happen to be web-based. Aggregating the reports in a single location and giving users controlled access to one or all of these services works just fine.
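
To make the tiering idea concrete, here is a minimal sketch of pushing an aged report out to Amazon S3 using the boto3 library. The bucket name, prefix and file path are hypothetical, and AWS credentials are assumed to be configured in the environment.

```python
# Tier an aged report out to Amazon S3 -- a minimal sketch using boto3.
# Bucket, prefix and file path are hypothetical; credentials are assumed
# to be configured via the standard AWS mechanisms.
import boto3

s3 = boto3.client("s3")

def tier_to_cloud(local_path, bucket="corp-archive-tier", prefix="reports/"):
    """Upload a file to the archive bucket and return its S3 key."""
    key = prefix + local_path.rsplit("/", 1)[-1]
    s3.upload_file(local_path, bucket, key)
    return key

print(tier_to_cloud("/data/reports/q1-sales.pdf"))
```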

The interesting thing here is that this delves into desktop virtualization as well: what is the strategy for files that can be manipulated in a VDI or RDSH session but cannot be downloaded or moved?

In the final analysis, my recommendation is this: before you start working on your archive project, your backup project, your data tiering project or your SharePoint project, take a step back, identify the business objectives and what exactly you are trying to accomplish, and build a strategy or solution that can be implemented in phases to achieve your goal. And that goal? Simply put, it should be to give your users access to relevant data as quickly as possible, to increase business profitability.

Are you undergoing an enterprise content management project? What hurdles are you encountering -- governance, regulatory or otherwise -- that you can share with the rest of us? Please share them here.

Posted by Elias Khnaser on 04/22/2013 at 12:49 PM


Automate Certificate Replacement with vCenter Certificate Automation Tool 1.0

It's time to stop "ignoring" SSL certificates in your VMware vCenter environment and start replacing them with valid, secure SSL certificates. I know that many of you just don't want to deal with SSL certificates; you simply choose to ignore the problem, as the process of replacing these certificates is tricky. With the release of vSphere 5.1, the number of certificates we have to manage or replace has increased. It's a good thing VMware released vCenter Certificate Automation Tool 1.0 to help automate the replacement of the default SSL certificates with your own valid and secure certificates. You can use it with the following systems:

  • vCenter Server
  • vCenter Single Sign On
  • vCenter Inventory Service
  • vSphere Web Client
  • vCenter Log Browser
  • vCenter Orchestrator
  • vSphere Update Manager

As you can see from the list, there are quite a few SSL certificates to replace, and doing it manually can be challenging. What I love about VMware is that it recognizes that a large number of customers ignore best practices, especially in small and medium-sized organizations, just because no one wants to deal with certificate replacement. So instead of ignoring the problem, VMware simplified it with this tool.
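
The tool handles the replacement itself, but you still need to supply a private key and a CA-signed certificate for each service; generating the keys and certificate signing requests is on you. As a rough illustration of that preparation step, here is a sketch that shells out to OpenSSL (assumed to be installed and on the PATH); the service names, subject fields and output directory are illustrative only, not the tool's required naming.

```python
# Generate a private key and CSR for each vCenter service -- a sketch
# that shells out to OpenSSL. Names and subject fields are illustrative.
import subprocess
from pathlib import Path

SERVICES = ["vcenter", "sso", "inventory", "webclient",
            "logbrowser", "orchestrator", "updatemgr"]

out = Path("certs")
out.mkdir(exist_ok=True)

for svc in SERVICES:
    # One 2048-bit key and CSR per service, unencrypted (-nodes).
    subprocess.run([
        "openssl", "req", "-new", "-newkey", "rsa:2048", "-nodes",
        "-keyout", str(out / f"{svc}.key"),
        "-out", str(out / f"{svc}.csr"),
        "-subj", f"/C=US/O=Example Corp/CN={svc}.example.com",
    ], check=True)
```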

Now, as the name implies, this tool is aimed at the vCenter suite of products; it does not support replacing SSL certificates on ESXi hosts, which would be a nice addition at some point. I would love to get your thoughts on this tool and whether you will be using it or find it useful.

Posted by Elias Khnaser on 04/17/2013 at 1:33 PM


Do You Need HP Moonshot?

The answer to that question is no, especially if you're virtualized. In its current iteration, it would not make any sense; besides, it has no support for most hypervisors. Right now it supports only RHEL 6.4, SUSE 11 SP2 and Ubuntu 12.04 from an operating system perspective. That being said, HP's Moonshot is not aimed at virtualization workloads; it is a niche product focused on massive, scale-out computing that suits big data, cloud providers and even grid computing-enabled applications.

I am, however, very excited to see HP back in "invent" mode, and I am hopeful that this is the beginning of a new innovation cycle. Let's be honest: if Windows XP needs to go, then so does the C7000. And while Moonshot will not replace the C7000 yet, and will have little impact on corporate customers for now, I am hopeful that HP builds on this new architecture to introduce products better suited to this era of computing.

Now, granted, many corporate customers will try to find workloads for Moonshot; some might even use it as a replacement for the old blade PCs, while others might use it in financial verticals and the like. But I still believe the use case in corporations will be limited, given that we can provision a virtual machine with more power than a single blade in the Moonshot chassis.

That brings us to the architecture of the platform: a fully loaded Moonshot 1500 packs a whopping 45 ProLiant servers inside a very oddly sized 4.3U chassis. Each ProLiant server consists of an Intel Atom S1260 processor, 8GB of RAM, a 500GB or 1TB SATA drive and dual 1Gbps Ethernet ports. HP says future versions will feature AMD and ARM processors. If you wonder about display functionality, note that the ProLiant servers have no VGA connectivity; all management is handled at the chassis level instead of at the blade level, making management centralized and streamlined. The Moonshot 1500 includes two networking modules for internal server connectivity, each module serving 45 x 1GbE, plus two uplink modules offering 6 x 10GbE each.
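
Running the numbers from those specs gives a feel for the chassis as a whole. A quick back-of-the-envelope calculation, treating the quoted networking figures as per-module:

```python
# Back-of-the-envelope capacity of a fully loaded Moonshot 1500,
# using the figures quoted above (networking assumed per module).
servers = 45
ram_gb_each = 8
server_side_gbps = 2 * servers * 1   # two modules, 45 x 1GbE each
uplink_gbps = 2 * 6 * 10             # two uplink modules, 6 x 10GbE each

print(f"Aggregate RAM:         {servers * ram_gb_each} GB")   # 360 GB
print(f"Server-side bandwidth: {server_side_gbps} Gbps")      # 90 Gbps
print(f"Uplink bandwidth:      {uplink_gbps} Gbps")           # 120 Gbps
```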

I like HP's Moonshot, and I think the future of computing probably lies in these small form factor blades, but I'd argue that it is still far from capable for those working with virtualization. I am interested in seeing what use cases this type of architecture will attract.

Posted by Elias Khnaser on 04/10/2013 at 12:49 PM


Windows XP: The Day After End-Of-Life Support

Windows XP debuted in 2001 and it's still in wide use today. Despite the April 8, 2014 end-of-life support date looming, it is estimated that roughly 40 percent of desktops still run it. That is staggering.

What makes it even scarier is that if organizations have not yet started their Windows XP migrations, it's highly unlikely they will roll out a new OS before the deadline. So what will happen when that date rolls around? Many of my customers say they don't care and that they will continue to use Windows XP until they are ready to migrate. Basically they're saying, "We have been supporting Windows XP for a long time, we know its ins and outs and we don't need Microsoft's support."

The problem with these types of statements is that they are emotional and dangerous. This is a world rampant with cyber security threats, where even the most secure government computers are being compromised. To boldly continue using an aging, soon-to-be-unsupported OS is to invite hackers and malware writers to exploit it and affect your business. Malware writers and hackers are mostly after the glory and the attention; what better opportunity than an installed base of 40 percent of PCs that Microsoft is no longer patching? Yes, Microsoft is ending security patches and updates. Sure, you can probably buy extended premium support, for what Gartner analysts estimate to be $200,000 if you have a Software Assurance agreement, or a mere $500,000 if you don't.
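
To put those estimates in perspective, here is a quick back-of-the-envelope comparison; the fleet size is a hypothetical example, not a quoted figure.

```python
# What first-year extended XP support costs per desktop, using the
# Gartner estimates quoted above. The fleet size is hypothetical.
fleet = 2000                           # XP desktops still deployed (example)
with_sa, without_sa = 200_000, 500_000

print(f"With Software Assurance:    ${with_sa / fleet:,.0f} per desktop")
print(f"Without Software Assurance: ${without_sa / fleet:,.0f} per desktop")
```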

What I have just described is an obvious example of a very possible scenario. From the IT perspective, it's a nightmare to clean up; from a business perspective, it's highly disruptive, especially if your reaction is to start a migration under the pressure of cleaning up a vulnerability. On the flip side, when I ask customers why they have not commenced a migration yet, I get a number of reasons along the following lines:

  • We don't have a budget for a migration.
  • We have a lot of applications that pose compatibility issues with newer operating systems.
  • We don't have the capacity to run through this migration and maintain our business.
  • We need new hardware to support the new operating system.
  • We are not sure which version of Windows to migrate to, Windows 7 or 8.

Budget always comes up and, frankly, most of the time it is IT management's fault. I understand that economic pressures have trimmed budgets, but I am also convinced that IT has not properly presented a business case on the implications of doing nothing or delaying a migration from Windows XP. Unless the organization is unsure whether it will still be in business in 2014, budget must be allocated for this migration. I encourage IT to be vigilant in presenting a business case for this type of project.

I also encourage IT to begin thinking strategically about end-user computing in general instead of always being reactive. Develop a strategy that includes desktop virtualization in its different forms, such as VDI and Terminal Services. There are many ways of addressing application incompatibilities, and there are several discussions to be had about which operating system to migrate to and the implications of doing so.

The bottom line is that the impact Windows XP's end of life will have on your business must be front and center among your projects this year. My advice: do not assume anything on behalf of the business, and do not assume you are taking no risk by leaving your company uninformed of the dangers.

If you're not planning on migrating off Windows XP and expect to be using it past its end of life, I am very interested in learning, in the comments here, what factors brought you to this decision.

Posted by Elias Khnaser on 04/08/2013 at 12:49 PM


Your Thoughts Are Valuable -- And Might Win You a Free Pass to Citrix Synergy 2013

Citrix has given me a free full pass to Citrix Synergy, the company's conference taking place in Anaheim, Calif. from May 21-24 (travel and accommodations are NOT included). I can give it away without any restrictions, but to make it interesting, how about we run a contest? Here's what you need to do to enter:

  • Comment on this blog and tell me how the articles that I have written on Virtualization Review have helped you in your Citrix career.
  • Comment on this blog if you attended my Synergy 2012 session on best practices for running XenDesktop on VMware vSphere; tell me what tips you used from that session (if any) that were helpful to you.
  • Post a comment on the TrainSignal Web site, telling me how you used my video training to get certified, get ahead, or simply get better at administering your Citrix environment.

You can also tweet a comment on the above to me @ekhnaser -- make sure you include the hashtag #citrixSynergy in order to qualify.

If you have never been to Citrix Synergy, now is your chance to participate in a first-class conference of true elites when it comes to cloud and mobility. This is also a wonderful time to network, make new acquaintances, learn from peers and gather information on new products and services from the ecosystem of vendors surrounding Citrix.

You will find that the sessions at Synergy are very educational and informative. You certainly don't want to miss the Geek Speak sessions, typically run by the CTPs -- those are always fun and very informative.

You have until Monday, April 8, 2013, after which I will select either the best tweet or the best comment. I am trying to choose a winner quickly so that you have ample time to register for the labs, which fill up fast.

Posted by Elias Khnaser on 04/03/2013 at 12:49 PM


Confusing Story of the Week: PayPal Dumping VMware for OpenStack?

News that PayPal and eBay were dumping VMware in favor of OpenStack -- and the VMware stock price drop and media frenzy that followed -- makes me wonder about the influence of technology journalists and how easily they can jump on rumors and make fools of themselves. The story was basically leaked by the consultancy firm that was supposedly leading the effort, and that the leak came from an executive makes me wonder about the technical competency of this company. The story went further, claiming that 80,000 servers would move to OpenStack.

While reading these reports, I wondered, "How will PayPal dump VMware, exactly?" OpenStack is not a hypervisor, so what would the virtual infrastructure be replaced with? And how do you make a headline like that? I thought, "Well, maybe they will replace it with Xen or KVM." If that were the case, one of those would probably have been specifically mentioned.

I also wondered, why can't PayPal keep VMware at the infrastructure layer and simply adopt OpenStack at the cloud management layer? After all, OpenStack is really a competitor to VMware's vCloud, not to the vSphere virtual infrastructure. In the cloud era, to think that companies as large as PayPal or eBay will standardize on a single virtual infrastructure is not realistic, and to think that they can afford to dump VMware in favor of any other hypervisor is also unrealistic. Even the staunchest believers in one technology over another now accept that most organizations will run a "collage" of hypervisors, so who cares? Maybe you use certain hypervisors for certain workloads because they work better. And if you layer System Center, OpenStack, CloudStack or vCloud on top, who really cares? They all support them all.

I then started to wonder, does PayPal really have 80,000 virtual servers? Heck, if it does and I have never heard of a virtual environment this large, then something is definitely wrong. Lo and behold, the story started to unravel: the CEO of the consultancy publicly disavowed one of his executives as someone not that familiar with the deal or the technology (I am paraphrasing here). Other reports came in that PayPal did not have 80,000 servers, and the story just crashed from there.

It does beg the questions: What happened to responsible journalism? What happened to fact checking? And triple fact checking? What happened to experts who understand what makes sense before they even start typing away on their keyboards?

And one more thing: this story exposed the quality of that consultancy, one with an executive who cannot differentiate between a virtual infrastructure and a cloud management layer. PayPal should be as careful about whom it does business with as it is with its own sensitive operation and its integration into our virtual lives. I will never visit PayPal again without wondering whether any portion of its infrastructure was planned, designed or implemented by this consultancy firm.

Posted by Elias Khnaser on 04/01/2013 at 12:49 PM


Can You Implement A Private Cloud Without Optimizing Business Processes?

It's been said that cloud is 80 percent business process and 20 percent technology. My customers continue to ask me, "How can we begin to entertain a private cloud before we begin by optimizing or in some cases defining and implementing our business processes and our ITIL framework?"

I agree that commencing a business process optimization, or in some cases an implementation, is critical, and I also believe a good ITIL framework is a very important step in maximizing your private cloud deployment. So what should you do? Hire consultants to begin an expensive and endless business process optimization or decomposition, and wait until that project trickles down to the technology before beginning the private cloud journey?

The answer: everyone's journey to the cloud in general, and private cloud in particular, is different. In practical implementation and adoption of private cloud, the two projects are often started independently, because we all adopt cloud for different reasons.

Some of us are pressed to optimize highly virtualized environments by implementing lifecycle management; others need to address rapid provisioning of VMs on a daily or weekly basis. Still others might be looking to implement standardization, or be driven to provide self-service capabilities. So it is not unusual to find bits and pieces of private cloud taking shape, and that does not hurt.

Of course, nothing beats a properly organized and chronologically implemented project, but private cloud pieces can be modified to fit an overall plan once it is adopted. For example, if you want to implement chargeback but your accounting system does not accept entries of less than $10,000, that is a business rule that needs to change, and such changes are usually the fruit of the business process side of a private cloud project. If you want to enforce standardization, your procurement practices have to change, and that too is driven by changes the business needs to implement and enforce.

In both examples, if you have already built the catalogs, services and offerings and assigned showback values, then changing to chargeback and specifying a different value is easy. Changing or modifying your standards as a result of a business decision is also relatively easy, because you have already done the heavy lifting and deployed the right pieces to enable these changes.
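
To illustrate why that switch is easy, here is a minimal sketch of a service catalog entry in which showback and chargeback differ only by a flag; the entry name and rate are made up.

```python
# A minimal sketch of a service catalog entry: once showback values
# exist, moving to chargeback is a data change, not a redesign.
# The entry name and rate are illustrative only.
catalog = {
    "std-windows-vm": {
        "vcpu": 2, "ram_gb": 8, "disk_gb": 100,
        "billing_mode": "showback",   # flip to "chargeback" when ready
        "monthly_rate": 150.00,
    },
}

def monthly_charge(item, count):
    entry = catalog[item]
    verb = "billed" if entry["billing_mode"] == "chargeback" else "reported"
    return f"{count} x {item}: ${entry['monthly_rate'] * count:,.2f} {verb}"

print(monthly_charge("std-windows-vm", 10))
```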

Do you have to have an Accenture or similar consulting firm begin the private cloud project before you can even touch the technology? My answer is a definitive no, although in very large enterprises these projects may be driven by the business. In most cases you will find that VMware, Microsoft, Citrix and others are pushing their virtualization technology forward, and as a result many organizations will begin adopting private cloud principles independently of the business.

Are you deploying or planning to deploy private cloud this year? Are you approaching it from the business side or the IT side? If from the IT side, do you know of any plans for the business to optimize processes around private cloud? Is your senior IT management aware and involved? Please share in the comments section.

Posted by Elias Khnaser on 03/27/2013 at 12:49 PM


A Peek at Unity Touch in VMware Horizon View 5.2 Feature Pack 1

Not long after Horizon View 5.2 was launched, VMware followed up with Feature Pack 1. Boy, the company wasn't kidding when it said it would focus on EUC this year!

The FP1 release marks the official launch of Project AppShift, the technology that enhances the Windows desktop user experience on mobile devices. As you may well be aware, using a Windows desktop on a tablet can be challenging, given the amount of zooming and scrolling you have to do to execute commands. Project AppShift, now officially named Unity Touch, addresses these challenges by decorating the Windows desktop with enhancements such as larger icons, quicker and easier access to commonly used tasks, and other tweaks that make for a much more pleasant user experience.

Now, for those of you who have been testing and playing around with Unity Touch: if you have run into technical difficulties and it is not working as advertised, note that the Windows Firewall service must be turned on, and that the batch file supplied with the VMware View Optimization Guide for Windows 7 turns that service off by default. I am pretty sure VMware will address this in the next release of the Optimization Guide batch file -- for now, just be aware of it.
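
If you have hit this, turning the service back on is straightforward. Here is a minimal sketch that shells out to the Windows service control tools from Python (MpsSvc is the Windows Firewall service name on Windows 7); run it with administrative privileges inside the affected VM.

```python
# Re-enable and start the Windows Firewall service (MpsSvc) after
# applying the optimization batch file. Requires admin privileges.
import subprocess

# sc requires the space after "start=", hence the separate arguments.
subprocess.run(["sc", "config", "MpsSvc", "start=", "auto"], check=True)

# Start it now; this fails harmlessly if the service is already running.
subprocess.run(["net", "start", "MpsSvc"])
```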

Of course, as with any new feature, you must observe the minimum software requirements for it to function properly:

  • View Agent 5.2 or newer
  • View Connection Server 5.2 or newer
  • Remote Experience Agent

Have you used Unity Touch yet? I am eager to hear from those who have been testing it and can offer feedback on any nuances or drawbacks you have observed. Please share in the comments section here so that others may learn from your experiences.

Posted by Elias Khnaser on 03/25/2013 at 1:36 PM


Who VMware Should Fear in the IaaS Public Cloud (It's Not Amazon)

Amazon is not the only IaaS Public Cloud that VMware should be worried about!

I am sure by now you have all heard the controversy around VMware CEO Pat Gelsinger's comments at the company's recent Partner Exchange. To paraphrase, he cautioned partners that any workload that lands on the Amazon public cloud is lost forever, and that no one wins in that situation -- not VMware, and certainly not its partners.

Some think his comments are short-sighted and unjustified: VMware does not have a public cloud offering, so how can it fault Amazon for having a service it doesn't, and attack it so blatantly? I'm one of those who believe the attacks are short-sighted, and here's why: VMware sees Amazon Web Services as an enterprise IaaS threat, but it is neglecting the fact that Google will most likely launch its own IaaS right around the same time as VMware's public cloud.

Google's Compute Engine is currently in beta testing, or limited preview, and does not support Windows workloads, which is completely unacceptable if Google aims to satisfy consumer or enterprise workloads. However, the consensus is that Google will get it right by launch, and many analysts believe the public cloud will then come down to Amazon vs. Google.

I don't share that analysis; the public cloud is a very crowded space, backed by organizations with deep pockets. IBM, Microsoft, Rackspace, AT&T, Verizon and Savvis are just a few of the companies that will most certainly carve out pieces of the public cloud market for themselves.

As such, VMware's entry into this market is not going to be easy. Google owns fiber-optic networks, while VMware, Amazon and the others all rely on ISPs, which significantly affects performance and SLAs, among other things. All that being said, VMware does have an advantage: if it can properly demonstrate the capabilities of vCloud in the public cloud, and show enterprises what their private cloud deployments should look like, VMware would be uniquely positioned to win on both sides of the spectrum. Enabling the hybrid cloud would be the culmination of its efforts.

VMware vCloud has to evolve more into a services model that focuses on the delivery of a service rather than the current dominant focus on infrastructure. Don't get me wrong -- vApps and other enabling technologies are effective but they have to be further expanded and empowered.

Finally, I think VMware's decision to move into the public cloud is spot-on. It remains to be seen how well the company will execute and how well it will enhance the vCloud suite to convince enterprises that it is the right choice in the cloud era.

What are your thoughts on VMware entering the public cloud IaaS arena? Comment here.

Posted by Elias Khnaser on 03/18/2013 at 1:37 PM

