On the eve of VMworld 2014, I can't help but be excited about the potential number of announcements expected at this year's show. As I consider past VMworlds, I've seen an evolution: the conference has moved past the point of being a place to announce the next version of vSphere, to become the focal point around which the entire industry revolves. Vendors wait for the event to launch new products, new companies come out of stealth mode, and customers leverage it to plot their course of action and plan ahead. That's a long way to come for a company that began with a single product that abstracted server hardware from software.
With that in mind, I'll take a look at some expected announcements and speculate on what I'd like to see at VMworld.
In keeping with tradition, VMware is expected to announce the availability of vSphere 6, an upgrade to its flagship virtualization platform. vSphere 6 promises new features, enhancements to existing features and, of course, higher configuration maximums for hosts and virtual machines to support larger workloads and higher levels of virtualization.
Virtual Volumes, or vVols, are expected to make their debut this year. Many are looking forward to vVols, and for good reason: they'll simplify the provisioning and management of storage in a virtualized environment. I expect vVols to transform the storage industry, specifically as it relates to virtualized environments.
"Project Marvin" is another expected announcement, and one that may well steal the show. This is VMware's venture into the hyper-converged space; if I were a betting man, I'd put money on VMware leveraging SuperMicro hardware to power this solution, in addition to its traditional server OEM vendors shipping SKUs of their servers coupled with Project Marvin. Since we're speculating, I wonder what name this new hyper-converged infrastructure will carry: vPOD? vPAD? Or maybe vHYPER?
I'm also confident we'll hear several announcements around VMware NSX, its software-defined networking (SDN) solution. In addition, I expect VMware to announce expanded geographical coverage for its vCloud Hybrid Service (vCHS). With that, I hope to see expanded features, especially a Disaster Recovery as a Service (DRaaS) integration with Site Recovery Manager (SRM). I also hope to see more competitive pricing with the likes of Amazon Web Services (AWS) and Microsoft Azure as far as the Infrastructure-as-a-Service (IaaS) offering is concerned.
It's hard to imagine VMworld 2014 without a mention of container-based technology, especially after the splash that Docker made. I think it's inevitable that vSphere will eventually support containers, given the many benefits they add, especially when it comes to application mobility and portability in the cloud. I'm hopeful that one surprise will be a demo of container technology running on vSphere.
My other wish list item is around the enhancement of Secure Content Locker, the cloud file sync tool VMware acquired via AirWatch. My hope here is that VMware would acquire or fold EMC's Syncplicity and integrate the two products for a more full-featured solution. I have a lot of expectations around the end user computing announcements, especially regarding integration between VMware products. (More to come on that in a future blog).
A Multi-Front Battle
VMware is tirelessly innovating on many fronts. At the same time, it's taking on significant competitors in numerous areas, including:
- The cloud, where vCHS goes up against Amazon AWS and Microsoft Azure.
- The hypervisor, where the battle with Microsoft is still ongoing.
- End-user computing, where it's in fierce competition with Citrix in every arena.
- SDN, where its thunderous entrance has deteriorated its relationship with Cisco -- a battle so fierce it prompted Cisco CEO John Chambers to say, "We will crush VMware in SDN."
- Hyper-convergence, where it faces competition from OpenStack, Docker and others.
With everything happening in the industry, the question inevitably becomes whether VMware can realistically maintain this aggressive course without prioritizing and choosing its battles. It's crucial for management to understand the impact of prioritization on the quality and innovation of its products.
For those of you attending this year's VMworld, I hope you enjoy the show. For those that can't make it, be sure to follow this blog, as I'll be covering all the new announcements.
Posted by Elias Khnaser on 08/07/2014 at 8:24 AM
Today there is a definite trend -- or, as some like to call it, a "wave" -- toward hyper-converged systems that take a more modular, Google/Facebook-like approach to datacenter build-out. While the concept of converged infrastructure has been around for many years, hyper-converged is a twist on it, with a smaller form factor and true convergence between the different components. I say "true" because some of us -- myself included -- would classify the traditional converged infrastructures as mere reference architectures with a unified management software layer. But that's a controversial topic for another day.
As you may know, VMware is working on a not-so-secret project codenamed "Project Marvin." This is an attempt at pre-packaging hardware and software, accelerating not only the sales cycle but also the deployment and implementation cycle for the customer. VMware is validating the strategy that others like Nebula have already embarked on by simplifying the Software Defined Datacenter story. Project Marvin is a hyper-converged integrated system that combines CPU, memory, network and storage in a small form factor, standard x86 server. The project, in collaboration with EMC, will feature hardware and software from both companies. The possibilities for this solution are vast, and while most of this is still speculation (hopefully to be confirmed at VMworld), just imagine how easily this could be applied to the Horizon View suite as well. VMware's making an excellent bet.
Should Citrix follow suit? Definitely. In fact, I think Citrix should have been out in front of VMware with this approach, especially considering it's already in the hardware business with NetScaler. That's why I wrote a few months back that Citrix would be well served by acquiring Nutanix.
Let's take a closer look. Citrix owns CloudPlatform, the commercialized, productized version of the open source CloudStack project. Citrix has had limited success with CloudPlatform in the enterprise, although CloudStack is widely used in the public cloud sector. CloudStack, just like OpenStack, is realistically impossible to implement in any acceptable time frame, and would require an army of consultants working for extended periods to deploy. This is why companies like Nebula have simplified the process by providing a pre-packaged, hardened version of OpenStack at a reasonable price that can be deployed in a very short period of time.
If Citrix hasn't yet learned that CloudPlatform will be hard to adopt without a similar approach, VMware's venture into this space should be the wakeup call. Citrix needs to understand that adoption won't happen without a reasonable method of implementation. Why not have CloudPlatform pre-packaged on a hyper-converged system, the way NetScaler is packaged as an appliance, or in conjunction with vendors like Nutanix? Even better, it could be an expansion of Citrix's current hardware play, with a system dedicated to CloudPlatform.
That's not the only area or product in which Citrix can leverage this type of solution. During Synergy 2014, Citrix announced a service called Citrix Workspace Services. It's essentially a pre-configured Citrix XenApp/XenDesktop deployment on Microsoft Azure or Amazon Web Services (AWS), effectively abstracting that entire process from the customer. This gives the customer ready-to-be-configured infrastructure; all they need to do is tweak it for their environment and upload their images. It also provides them with ongoing support, and perhaps even patching and updating.
That's great for the cloud, right? Now take this exact same concept, without changing a thing, and instead of deploying on AWS or Azure, pre-package and sell it on a hyper-converged system. This would significantly accelerate XenDesktop deployments as well. You can then slip XenServer into the mix since you're selling a closed solution, and increase adoption of this product, as opposed to everyone deploying your products on top of vSphere.
Citrix, more than any other company, absolutely needs the hyper-converged approach to carve out a piece of the datacenter infrastructure, at least for its products. Going the route of the OEMs is not enough; frankly, it's a 1990s approach. It doesn't need to be scrapped, but it's just not good enough on its own anymore.
Do you agree that a hyper-converged system would benefit Citrix and increase adoption of its products? Let me know in the comments section below.
Posted by Elias Khnaser on 07/29/2014 at 12:04 PM
In our industry, customers have the misconception that software has to work out of the box for everyone. While that expectation is reasonable for the most part, when it comes to a platform that hosts your desktops and applications, it is very difficult for software to work right out of the box in all environments, given the many variables involved. Maybe one day, when big data analytics is truly mastered and matured, it can be incorporated into our software to tune and configure things appropriately. Until then, we're stuck doing the hard work manually.
With that, let's talk about Citrix HDX 3D Pro. The key to a successful Citrix XenDesktop or XenApp deployment with excellent performance is understanding what to configure and how to configure it. Sometimes that is not very easy or apparent.
For instance, if you are looking to improve the performance of HDX 3D Pro, the key would be to tweak some Citrix Studio Policies appropriately, but which policies should you tweak? If you have played around with Citrix Studio, you know that the HDX 3D Pro Policies node is limited in terms of configuration. As a result, you have to tweak some of the other more generic policies in order to get the ideal performance. Here are some policies to consider:
- EnableLossless -- Controls whether users are allowed to use the Image Quality Configuration Tool to control lossless compression.
- Lossy Compression Level -- Controls how much lossy compression is applied.
- Moving Images -- Contains a section for XenApp and one for XenDesktop, and controls compression for dynamic images. An important policy here is Progressive Compression Level, which allows for faster display of images at the expense of detail.
- Visual Display -- A collection of policies that control the quality of images sent from virtual desktops to end user devices. The most important policy here is Visual Quality.
I'm pretty sure what's on your mind at this point: what are the correct configurations for these policies? My answer is ... wait for it ... "It depends." I say this because in all my years working with server-based computing, I have come to the conclusion that tuning these environments is more of an art than a science. It really depends on whether you are configuring for a LAN, a WAN or remote Internet access, the available bandwidth, the number of users, and so on.
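One way to make that "it depends" advice actionable is to keep a baseline policy set per connection type and tune from there. The sketch below does exactly that; the policy values are purely illustrative assumptions on my part, not Citrix-documented recommendations, so validate them against your own bandwidth, user counts and workloads.

```python
# Illustrative starting points only -- these values are assumptions to
# tune from, not official recommendations.
HDX_STARTING_POINTS = {
    "LAN": {
        "Visual Quality": "High",
        "Lossy Compression Level": "Low",
        "Progressive Compression Level": "None",
    },
    "WAN": {
        "Visual Quality": "Medium",
        "Lossy Compression Level": "Medium",
        "Progressive Compression Level": "Medium",
    },
    "Remote Internet access": {
        "Visual Quality": "Low",
        "Lossy Compression Level": "High",
        "Progressive Compression Level": "High",
    },
}

def starting_policies(link_type):
    """Return a baseline HDX policy set for a connection type."""
    try:
        return HDX_STARTING_POINTS[link_type]
    except KeyError:
        valid = ", ".join(sorted(HDX_STARTING_POINTS))
        raise ValueError(f"unknown link type {link_type!r}; expected one of: {valid}")
```

From there, the tuning exercise becomes adjusting one value at a time per scenario and measuring the user experience, rather than guessing at all the policies at once.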
Citrix understands that too, so it developed the Image Quality Configuration Tool, which allows users to tweak their experience in real time and balance image quality against responsiveness. You're probably thinking, "This is a mess of epic proportions -- users don't know how to do this." I somewhat disagree. I've learned over the years that we underestimate users' capabilities; given an easy tool to work with, they're willing to mess with the settings and try to optimize their own experience. Think about it this way: you're getting the call either way, so this tool is a glimmer of hope that they might be able to fix things themselves to some extent.
If you have other policies that have worked for you in the past that would improve the performance of the HDX protocol, please share in the comments section.
Posted by Elias Khnaser on 07/07/2014 at 10:42 AM
This is not really a new tip, but I have been coming across so many customers lately who are not virtualizing vCenter, either because they have had a bad experience or because they believe this is a core infrastructure piece that needs to remain outside the virtual environment it manages. I am of the opinion that vCenter should be virtualized, and that there is only goodness that comes out of virtualizing vCenter Server. That being said, as with everything in life, doing it right makes all the difference in the world.
One customer who was about to migrate vCenter back to a physical machine had suffered an outage and did not know which ESXi host vCenter was running on. From that, they arrived at the conclusion that it was far too risky to continue.
After a bit of conversation, I convinced them of the benefits of keeping vCenter virtualized, as long as some basic best practices were taken into consideration. To address their issue, I suggested they set up a VM-Host affinity rule. That allows them to pin vCenter to a set of ESXi hosts: it continues to participate in DRS balancing and retains all the other benefits, but it can only vMotion among the ESXi hosts they specify.
How is this beneficial? Should they have another outage, they will immediately know that vCenter resides on one of the ESXi hosts specified in the VM-Host affinity rule. Another recommendation I gave them: since they were using blade servers, the hosts they pick to support vCenter Server should not be on the same physical chassis. In other words, picking ESXi hosts from different chassis limits your exposure a bit more.
Now, I know some of you will say that you can prevent vCenter Server from participating in DRS altogether and just pin it to one ESXi host. That definitely works, but I recommend the more elegant solution of letting vCenter partake in all the benefits of DRS while specifying the hosts to which it can move -- simple enough, right?
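To make the chassis-diversity advice concrete, here's a small sketch of the host-selection logic. This is plain Python illustrating the decision, not the vSphere API -- the actual VM-Host affinity rule is still created through vCenter (or PowerCLI) -- and the host and chassis names are hypothetical.

```python
def pick_affinity_hosts(hosts, max_hosts=3):
    """Pick ESXi hosts for the vCenter VM-Host affinity rule, taking at
    most one host per blade chassis so a single chassis failure cannot
    take out every host vCenter is allowed to run on.

    `hosts` is a list of (host_name, chassis_id) pairs.
    """
    chosen = []
    used_chassis = set()
    for name, chassis in hosts:
        if chassis not in used_chassis:
            chosen.append(name)
            used_chassis.add(chassis)
        if len(chosen) == max_hosts:
            break
    return chosen

# Hypothetical inventory: four blades across three chassis.
inventory = [
    ("esx01", "chassis-A"),
    ("esx02", "chassis-A"),  # skipped: chassis-A already represented
    ("esx03", "chassis-B"),
    ("esx04", "chassis-C"),
]
```

Here `pick_affinity_hosts(inventory)` returns `["esx01", "esx03", "esx04"]`; those three hosts become the host group in the affinity rule, so during an outage you know vCenter is on one of them, and no single chassis failure takes them all down.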
This is just one of many best practices around running vCenter Server as a VM, but what I wanted to leave you with is that vCenter can, and absolutely should, be virtualized to reap the benefits that virtualization presents.
Posted by Elias Khnaser on 06/30/2014 at 9:35 AM
One of the complaints I typically hear about Hyper-V is that while Windows performance is quite solid, when it comes to Linux the performance is just not there. Today, I want to introduce you to a tip that will improve your Linux VM performance anywhere from 20 to 30 percent.
Linux kernels 2.6 and newer offer four different I/O elevators -- that is, four different scheduling algorithms that accommodate different workloads and provide optimal performance. These elevators are:
- Anticipatory: This is ideal for an average personal computer with a single SATA drive. As the name suggests, it anticipates I/O and writes in one large chunk to the disk as opposed to multiple smaller chunks. This was the default elevator prior to kernel 2.6.18.
- Completely Fair Queueing (CFQ): This algorithm is ideal for multi-user environments, as it implements a quality-of-service policy that creates a per-process I/O queue. It suits heavy workloads with competing processes, and has been the default elevator since kernel 2.6.18.
- Deadline: A round robin-like elevator, the deadline algorithm produces near real-time performance. This elevator also eliminates the possibility of process starvation.
- NOOP: Aside from its cool name, NOOP stands for "No Operation." Its I/O processing overhead is very low, and it is based on a simple FIFO (first in, first out) queue. NOOP pretty much assumes something else is taking care of the elevator logic -- for example, a hypervisor.
All these algorithms are great, except that in a virtual machine they become bottlenecks, since the hypervisor is now responsible for mapping virtual storage to physical storage. As a consequence, it is highly recommended that you set the elevator to NOOP, the leanest and simplest elevator for virtualized environments. NOOP lets the hypervisor apply the ideal scheduling, which in turn yields better performance for Linux VMs. The only prerequisite for this tweak is that the Linux distribution you are using must be at kernel version 2.6 or newer.
You can make this configuration change by editing the boot loader's configuration file, /etc/grub.conf, and appending elevator=noop to the kernel line.
It is also worth noting that on newer kernels it is possible to set the elevator at a per-disk level, rather than only system-wide.
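On kernels that expose it, the active elevator for a disk shows up in /sys/block/&lt;device&gt;/queue/scheduler, where the bracketed name is the one in effect (e.g. "noop anticipatory deadline [cfq]"). Below is a small sketch, assuming that sysfs format, for inspecting and switching the elevator at runtime; the runtime change does not survive a reboot, so the grub.conf edit above is still needed for persistence.

```python
import re
from pathlib import Path

def parse_scheduler(contents):
    """Parse a sysfs scheduler file such as 'noop anticipatory deadline [cfq]'.
    Returns (active_elevator, available_elevators)."""
    active, available = None, []
    for token in contents.split():
        match = re.fullmatch(r"\[(\w+)\]", token)
        if match:
            active = match.group(1)   # bracketed entry is the active one
            available.append(active)
        else:
            available.append(token)
    return active, available

def set_elevator(device, elevator="noop"):
    """Switch a block device's elevator at runtime (requires root).
    Not persistent across reboots -- keep elevator=noop in grub.conf for that."""
    path = Path("/sys/block") / device / "queue" / "scheduler"
    active, available = parse_scheduler(path.read_text())
    if elevator not in available:
        raise ValueError(f"kernel does not offer {elevator!r}: {available}")
    if active != elevator:
        path.write_text(elevator)
```

Something like `set_elevator("sda")` would switch the first disk over immediately, which is handy for benchmarking the 20 to 30 percent claim in your own environment before committing the boot-parameter change.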
Posted by Elias Khnaser on 06/23/2014 at 4:12 PM
Quick Tip #1: Today I want to cover a quick Microsoft Hyper-V tip that is simple and straightforward, but seems to be overlooked by many customers.
For those of you who weren't aware, there is a Hyper-V Administrators group available locally on every Windows 8, Windows 8.1, Windows Server 2012 and Windows Server 2012 R2 machine, which allows any member to have full Hyper-V administrative control without necessarily belonging to the Local Administrators group on that machine.
Figure 1. A quick tip for those who tend to ignore role-based access settings in Hyper-V.
Like I said, this is a quick tip, but one that seems to go overlooked. I highly recommend that you enforce role-based access within your environment if you don't already, and leveraging these built-in, predefined groups is a first step in the right direction.
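If you'd rather script the group membership than click through Local Users and Groups, the classic `net localgroup` command handles it. Here's a thin Python wrapper as a sketch; the account name is hypothetical, and the command must run elevated on Windows 8/Server 2012 or later.

```python
import subprocess

HYPERV_GROUP = "Hyper-V Administrators"

def add_hyperv_admin(account, dry_run=False):
    """Add `account` to the built-in Hyper-V Administrators local group
    via `net localgroup`. With dry_run=True, return the command that
    would run instead of executing it (handy for review or testing)."""
    cmd = ["net", "localgroup", HYPERV_GROUP, account, "/add"]
    if dry_run:
        return cmd
    subprocess.run(cmd, check=True)  # raises CalledProcessError on failure

# Example (hypothetical account name):
# add_hyperv_admin(r"CONTOSO\jsmith")
```

The same pattern works for removal with `/delete`, so you can fold both into whatever onboarding/offboarding scripts you already maintain.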
Quick Tip #2: Last week, Microsoft released a new tool to help you assess workloads and guide you through how to migrate these workloads to the cloud. The tool is called Microsoft Azure Virtual Machine Readiness Assessment and you can download it here.
Figure 2. Microsoft Azure Virtual Machine Readiness Assessment does more than what the name implies.
Don't let the name of the tool mislead you. While it specifically references virtual machines, once installed, the tool will crawl your infrastructure -- both physical and virtual -- and search for Active Directory, SQL Server and SharePoint servers.
Once identified, it will then give you detailed recommendations and guidance on how to migrate these workloads onto Microsoft Azure.
Posted by Elias Khnaser on 06/16/2014 at 2:43 PM
I have always maintained that the cloud will arrive in bits and pieces, not in a single large chunk. The shift to cloud computing will be slow and selective instead of a complete rip and replace where you migrate everything to the cloud as if you're building a brand new datacenter for your enterprise.
I have also maintained that cloud coupled with consumerization will significantly loosen IT's historical grip on technology, so much so that if left ignored or unchecked it will lead to "shadow IT" and to unsponsored cloud services sprawl within the environment.
That unchecked cloud sprawl is already occurring, and many factors are contributing to it -- primarily that we are taking cloud and its impact too lightly.
At one point, merely saying "cloud" would lead every other Joe to crack a joke about what cloud really means and how irrelevant it is. Meanwhile, users were installing and using consumer cloud services like Dropbox, and business units such as marketing were consuming enterprise-grade services like Amazon AWS IaaS. Even CIOs and IT management began to realize the benefits of cloud, adopting SaaS offerings like Salesforce and services like Office 365, outsourced mail, unified communications, backup and much more.
Take a step back and look at the spectrum of services I just mentioned, keeping in mind that this is a subset of what is really going on out there; the problem is bigger. Now correlate the use of those services with the fact that each one is assessed, designed, deployed, consumed and supported differently, with a separate contract for each, and you will quickly realize that almost every enterprise is experiencing cloud services sprawl, whether it likes it or not.
And yes, the move to cloud shifts spending from a CapEx-heavy model to an OpEx-optimized one, but with all these services in an enterprise, who is really tracking them all, and how? Generating reports and managing Excel spreadsheets is inefficient and passive, as opposed to real-time, up-to-the-minute data that lets you see the big picture immediately. And that assumes you are aware of all the cloud services being consumed -- what about the ones you don't know about?
Organizations that realize this problem exists, and needs to be fixed sooner rather than later, will be able to transform IT from a center that delivers internally built services into a broker that governs cloud services and allows internal IT-built services to compete with public cloud services. Here's an example: today, if your CMO is using Amazon to run campaigns, you will find it very difficult to curb or stop that CMO from doing so without providing an alternative that is just as efficient, just as good and, most importantly, just as fast (from a provisioning perspective). This is one of the reasons I tell my customers that an on-premises private cloud deployment is crucial. By deploying a private cloud, IT provides an alternative it can then take to the CMO or other business units that are consuming external resources, offering an internal solution that may have advantages in security, cost efficiency, privacy and more.
Another example is deploying an enterprise-class cloud file sync solution; you can then go to your users and remove access to Dropbox while providing an IT-sponsored solution that offers the same functionality, with the benefits mentioned before.
This approach earns IT the right to take those services away from the business units and users consuming them. But how do we bring all these services under a single pane of glass? How do we discover the cloud services already running within our environment? Furthermore, how do we present external services from Amazon side by side with similar services delivered by IT, let our internal customers weigh the pricing and the other pros and cons of each, and consume whatever they are willing to pay for? How do we bring all of today's different cloud services under a single pane so we can track billing and chargeback or showback in real time? Basically, how do we transform IT into a broker of services?
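As a toy illustration of the showback idea, the sketch below rolls per-service charges up by business unit. The service names and costs are invented for illustration; a real broker platform would pull usage from each provider's billing API in near real time rather than from a static list.

```python
from collections import defaultdict

def showback(records):
    """Aggregate (service, business_unit, monthly_cost) records into a
    per-business-unit total -- the 'who is consuming what' view that a
    cloud services broker surfaces in real time."""
    totals = defaultdict(float)
    for service, unit, cost in records:
        totals[unit] += cost
    return dict(totals)

# Invented sample data.
records = [
    ("AWS EC2",              "Marketing",   1800.00),
    ("Dropbox for Business", "Sales",        300.00),
    ("Office 365",           "IT",          2500.00),
    ("AWS EC2",              "Engineering", 4200.00),
    ("Salesforce",           "Sales",       1200.00),
]
```

Running `showback(records)` totals Sales at 1500.00 across two services and Engineering at 4200.00 on EC2 alone -- exactly the kind of at-a-glance view of unsponsored consumption that a monthly spreadsheet exercise can't give you.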
One of the companies I have had a "cloud crush" on is Gravitant, whose CloudMatrix platform allows enterprises to govern external cloud services and offer up IT-designed and -built services under a single pane of glass. CloudMatrix can also discover the cloud services running in your environment and bring those under the same platform. Real-time billing, and consolidation of billing, is a huge benefit that organizations will appreciate.
The cloud is already changing the enterprise IT landscape, and we should recognize that our role is no longer restricted to building technology services; it is evolving to also govern the many cloud services consumed by our users, our business units and, lately, IT itself. Platforms like CloudMatrix bring discipline, organization and clarity to a chaotic space that is bound to keep growing.
Posted by Elias Khnaser on 06/03/2014 at 3:20 PM
I have advocated on several occasions that Citrix should buy Rackspace, most recently here in my InformationWeek blog; some time before that, I suggested Cisco should buy both Citrix and Rackspace here in my Virtual Insider blog.
I still stand by my recommendation: if Cisco really wants a chance in hell of "crushing" VMware (John Chambers' words, not mine) in the software-defined networking space, it will need a hypervisor, and unless it plans to acquire Red Hat or Oracle, its only other choice is Citrix. (I'll let that topic marinate for a while and expand on it some other time.)
When I wrote my columns on Citrix or Cisco acquiring Rackspace, I based my opinions on what can be observed from outside Rackspace: a dropping stock price, a CEO vacancy, a sales force incapable of selling to the enterprise, and so on. This past Friday, Rackspace all but put itself up for sale by hiring Morgan Stanley to broker a deal with a potential suitor.
I still maintain that it would be a strategic miss on Citrix's part not to step in and acquire Rackspace. Citrix cannot allow itself to become middleware; it just cannot be satisfied with that position. The acquisition of Rackspace would shoot Citrix to the forefront of the cloud conversation in the public eye and get the attention of enterprises, especially with all the buzz around OpenStack.
Rackspace is the perfect size. It's not too big, not too small, and can be easily digested by Citrix and groomed for growth. Citrix can bring a lot of goodness to Rackspace, especially from a sales perspective and how to sell in the enterprise. Citrix has an army of account reps that focus entirely on the enterprise. Meanwhile, Rackspace has an inside sales army that develops leads, for the most part.
In addition to the sales force component, Citrix can augment Rackspace with its large partner community and its consulting services and expertise. In many ways, it can help Rackspace build the intellectual property needed to support enterprises moving to the cloud. And not to be forgotten: Citrix can also bring education and training capabilities for partners and IT professionals.
Rackspace, on the other hand, puts Citrix front and center in the cloud conversation. Citrix/Rackspace would be to Microsoft Azure what Citrix XenApp was to Microsoft Terminal Server. Think about it: Microsoft Azure is a very generalized cloud infrastructure. Rackspace is anything but generalized -- its claim to fame is "fanatical" support, in addition to value-added services that one cannot obtain from the likes of Azure, AWS, IBM or others. Is that not what Citrix has always done -- added value, features and capabilities on top of a platform? Why can't that platform be the cloud?
Citrix must not sit this one out on the sidelines, merely enabling partners or settling for being middleware to everyone else. The cloud is too big to sit out, and it is not enough to have enterprise software for on-premises use or for other cloud service providers. Citrix needs to enter this game, and Rackspace gives it that opportunity.
Missing out on Rackspace would be a mistake, as Citrix won't come across another CSP with such strong brand recognition, the right size and the potential for growth under the right suitor. And Rackspace owns one of the most talked-about cloud management stacks in the industry today.
Citrix can also then offer Workspace Services on Azure or on its own cloud, just as it ships XenServer while supporting Hyper-V and vSphere.
If not Citrix, Cisco could do wonders with Rackspace in a short period of time. Cisco has announced its Intercloud intentions, and I expect we will hear a lot more about this during the upcoming Cisco Live event in June.
Speaking of Cisco Live, what a great time and venue to announce a Rackspace acquisition. Cisco can bring to Rackspace all the benefits I mentioned Citrix could bring, and then some. The only reason I prefer pairing Rackspace with Citrix rather than Cisco is that in the latter case, Rackspace would quickly melt into the Cisco machine. Rackspace would of course be a strategic project for Cisco, but with Citrix, Rackspace would elevate both companies to a different level.
I have always maintained that cloud is first and foremost about scale, and that's the definite advantage of Cisco acquiring Rackspace: Cisco has the means and the ability to rapidly and "fanatically" expand Rackspace to Amazon and Azure scale.
Let's not dismiss other potential suitors for Rackspace. VMware could use Rackspace to reinforce vCHS. I'd be less excited to see EMC buy it, simply because I think the synergies would be missing. If IBM acquired Rackspace, it would be to add more capacity and to get OpenStack, which is neither exciting nor interesting; I don't see IBM being motivated to create something new or different. The same applies to HP, Dell, Verizon, AT&T and a whole slew of other potentials. I just don't see them being successful.
I truly hope Citrix steps in and picks up Rackspace, as customers would be the ultimate winners.
Posted by Elias Khnaser on 05/22/2014 at 7:00 PM
Last week at TechEd 2014, Microsoft released Azure RemoteApp. While speculation floated for months over whether Microsoft would venture into the desktop-as-a-service space with what was internally known as "Project Mohoro," Microsoft dodged that bullet and released what is essentially a SaaSified version of its Remote Desktop Session Host platform.
Microsoft's infatuation with its client-OS licensing -- and its refusal to relax its Services Provider License Agreement to allow deployment of the Windows client OS as a service (a.k.a. DaaS) and to simplify enterprise VDI deployments -- is truly a riddle wrapped in a mystery inside an enigma.
I can understand that Microsoft wants to capitalize on Windows client OS licensing, as that is a strategic product. What I don't understand is why it needs to be so complicated, and how Microsoft can justify hanging on to this policy when its CEO has clearly set the ship's course to "Cloud First, Mobile First." Seriously, Microsoft: it's time to rethink VDI licensing.
Coincidentally (or not), the Azure RemoteApp announcement comes merely a week after Citrix announced its Workspace Services -- Citrix essentially SaaSifying XenApp and XenDesktop, and thus repeating the synergies that existed in the 1990s between MetaFrame/Presentation Server/XenApp and Terminal Server/Terminal Services/RDSH. These two companies seem to be auto-magically attached, especially around these two products in all their transmutations.
The Azure RemoteApp cloud service will allow customers to deploy preconfigured, pre-provisioned Microsoft applications or customer-provided applications. RemoteApp's advantage is that it removes the complexity of deploying and maintaining that infrastructure.
Let me be clear: by infrastructure, I mean deploying the virtual or physical machines, installing and configuring an operating system, and then maintaining and patching that operating system and all its dependencies. Scaling this infrastructure up and down as the load changes has traditionally also fallen on the IT professional. RemoteApp simplifies all of this by abstracting the platform-related tasks and offloading them to Microsoft, including automatically scaling the environment. By doing this, Microsoft lets the IT professional focus on what matters most: the applications that directly affect the business. IT professionals can very easily upload images containing all the necessary applications to Azure RemoteApp. From there on, it is a true cloud service offering. I must admit I find this really cool.
Now let me give you another tidbit to think about: Since Citrix Workspace Services is also built on Microsoft Azure Infrastructure as a Service, would it be inconceivable to think that there will be some sort of connector allowing Azure RemoteApp to be delivered into CWS? We can accomplish this today with Microsoft App-V configuration within XenApp and XenDesktop, so why not extend it to the cloud?
All in all, Azure RemoteApp will definitely hit home with many enterprises and many IT professionals as we journey into the cloud era. As more enterprise applications and platforms either migrate to the cloud or are integrated into a cloud strategy, it will only be a matter of time before Microsoft relaxes its grip on licensing for VDI. What Microsoft is doing with licensing no longer resembles the times we are in -- that model belongs to an era that is long gone.
Posted by Elias Khnaser on 05/19/2014 at 1:07 PM
Citrix proved at Synergy 2014 last week that mobility and end-user computing are in its core DNA and that the company is still able to innovate and think outside the box. I was completely surprised when it announced Citrix Workspace Services (CWS). It is not even in beta yet, but what Citrix previewed was really cool and very telling as far as where the company will take its flagship products and how it intends to compete in the future.
The keynote announcement around CWS definitely left us all asking a thousand questions. To simplify what Citrix is trying to do here: Citrix is SaaSifying its XenApp and XenDesktop platforms -- as simple as that. Instead of the customer having to install and configure XenApp and XenDesktop, that configuration will be simplified and streamlined on top of Microsoft Azure. There will be connectors available to plug in resources from on-premises environments to public clouds like Amazon and others, I am sure. In essence, Citrix has separated the management plane from the resource plane.
Let's break this down even more. The management plane lives on Azure and is a managed instance that Citrix makes available to customers. As the architect, engineer or admin of your company, you can log in to CWS, select the version of XenApp or XenDesktop that you wish to build and configure your database (hosted on CWS), and maybe even choose the geography where you want to deploy that instance.
Once you've configured your management plane, you can then start connecting resources. If your hypervisor of choice is vSphere and that happens to be on premises, you install a connector on vSphere and attach that to the management plane. Now from within your CWS, you can see vSphere available for you to provision against. You can do the same with others on a geographical scale, where Hyper-V is used in one location and XenServer at another, and so on.
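The management-plane/resource-plane split described above can be sketched as a toy model. The class and method names below are hypothetical, invented for illustration; they are not Citrix's actual API.

```python
class ManagementPlane:
    """Toy model of the CWS split: the management plane lives in Azure,
    and connectors attach on-premises or cloud resources to it."""

    def __init__(self):
        self.connectors = {}  # site name -> hypervisor type

    def register_connector(self, site: str, hypervisor: str) -> None:
        """Install a connector at a site and attach it to the plane."""
        self.connectors[site] = hypervisor

    def provision(self, site: str, workload: str) -> str:
        """Provision a workload against whatever hypervisor that site runs."""
        if site not in self.connectors:
            raise ValueError(f"no connector registered for {site}")
        return f"{workload} provisioned on {self.connectors[site]} at {site}"

plane = ManagementPlane()
plane.register_connector("chicago-dc", "vSphere")    # on-premises vSphere
plane.register_connector("branch-west", "Hyper-V")   # remote office
plane.register_connector("branch-east", "XenServer")

print(plane.provision("chicago-dc", "XenDesktop VDI"))
```

One management plane, three different hypervisors in three locations -- that is the separation that makes the mixed-geography scenario work.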
Once you've identified resources, you can then provision VDI instances, XenApp instances or a mixture of the above. This is definitely a positive evolution of DaaS: it carries the best of both worlds and simplifies upgrades and migrations to new versions or even newer platforms.
What I like even more about this is the ability to plug in additional Citrix products. So, by following this approach, if a customer wants to use ShareFile, they would simply enable it and connect it at the management plane level and that would make it available for immediate use. Think about how simplified licensing would become as well and how quickly you can consume licensing. Unexpected blizzard in Chicago that shuts down the city and you need licenses immediately? No problem, how many do you need?
CWS does not mean that you will no longer be able to install XenApp, XenDesktop and all the other Citrix components the traditional way; you will always have that option, but CWS significantly simplifies it. Heck, I am willing to bet that once CWS is rolled out, we will see Citrix rolling out CWS virtual appliances for on-premises deployment. Just as Microsoft promises that future versions of Windows Server will look like Azure, future versions of Citrix products will also look like CWS in that they will be containerized virtual appliances. This opens up a slew of managed services offerings that Citrix can manage directly or that partners can manage on behalf of customers.
I was pressing Citrix hard for a cloud strategy and I still am -- I don't think one clearly exists yet. Still, I will also admit that CWS is a very interesting take on innovation: Citrix avoids building the infrastructure itself and getting into the DaaS turf war. With CWS, Citrix will be able to carve out a large piece of the pie and, in a single stroke, address questions around deployment complexity, time to deploy, deployment platform and more. You want DaaS? CWS offers it. You want on-premises? No problem. You want both? Sure thing.
Citrix definitely continues to lead and innovate in the end-user computing space and has shown time and again that it can get out of a pinch, out of a corner. I say this because prior to Synergy, that's where VMware had Citrix. Now I feel that not only is Citrix out of the corner, but it will also force VMware to follow this model because it makes perfect sense. It's as if the competition is making these companies feed off each other in this space, and the innovation that results is bringing out the best in each company.
Posted by Elias Khnaser on 05/12/2014 at 1:32 PM
Next week will be an interesting week as I bounce between Citrix Synergy in Anaheim and EMC World in Las Vegas.
For those of you attending EMC World, "Project Liberty" will without a doubt steal the show. Project Liberty is EMC's first foray into software-defined storage. I am very excited about it, and you'll find me hovering around "Area 52," the code name for the location where Project Liberty will be featured.
What is Project Liberty in plain terms? It's a virtualized, software-only VNX appliance that can be deployed on pretty much any kind of hardware, whether that hardware was purchased from EMC, from a competitor or assembled from off-the-shelf components.
I am sure one of you is thinking, "EMC already offers a VNX virtualized appliance, so what's the big deal?" The big deal is that the current offering is not production-ready and is meant for training purposes for the most part. Project Liberty will be fully production-ready on EMC as well as other hardware.
The great thing about this project is that it accelerates and simplifies cloud adoption. So, you will be able to deploy an instance of the VNX appliance in your preferred cloud and manage it just like you would your on-premises deployment. This enhances several cloud solutions, especially DRaaS.
There's more. You can deploy the VNX instance for point solutions like VDI. In the past, you had to acquire hardware and software from EMC or others. With Project Liberty, you'll be able to deploy the VNX appliance and leverage whatever hardware you want. This can also be deployed in remote offices where needed. The idea is to have the same software, and have centralized management of your storage environments whether it's on-premises locally, remotely or in the cloud. This approach will lend itself very well to private cloud deployments because it significantly facilitates automation and orchestration.
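The "same software everywhere, managed centrally" idea can be sketched as a single inventory spanning every deployment. The instance names, locations and capacities below are invented for illustration; they do not represent EMC's actual management tooling.

```python
# One registry covering on-premises, remote-office and cloud instances
# of the same virtual appliance -- centralized management in miniature.
instances = [
    {"name": "vnx-hq",     "location": "on-premises", "capacity_tb": 100},
    {"name": "vnx-branch", "location": "remote",      "capacity_tb": 20},
    {"name": "vnx-cloud",  "location": "cloud",       "capacity_tb": 500},
]

def total_capacity(instances) -> int:
    """One view across every deployment, wherever the hardware lives."""
    return sum(i["capacity_tb"] for i in instances)

def find(instances, location):
    """Filter the fleet by where it runs."""
    return [i["name"] for i in instances if i["location"] == location]

print(total_capacity(instances))  # 620
print(find(instances, "cloud"))   # ['vnx-cloud']
```

Because every instance runs the same software, one query answers questions about the whole fleet -- which is exactly what makes automation and orchestration easier.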
I am sure we will hear about integration with OpenStack and VMware vCloud at some point.
Let's switch focus to Citrix Synergy, where there's also a lot going on. I'll mainly be on the lookout for XenApp 6.5 to 7.5 migrations, feature parity and feature releases. I'm also looking to get a sense of how Citrix will respond to VMware Horizon 6.
Also on my radar at Synergy is Citrix's public cloud strategy from a DaaS perspective in particular, but also what plans they have for CloudPlatform from an on-premises perspective and even more integration with the other Citrix products and suites.
Of course, I cannot attend both conferences without finding that one vendor with some really cool technology to showcase, so I will definitely be seeking them out.
I hope to see those of you that are attending either show out there. For those that can't make it, I promise to bring back as much relevant and interesting content as I can and to cover it in this blog. If there is anything in particular that you are expecting, hit me up in the comments section.
Posted by Elias Khnaser on 04/30/2014 at 3:18 PM
In a data-driven culture, the potential for maximizing business profitability by leveraging Big Data represents a great opportunity. But the hype says that in order to manipulate this Big Data, visualize it and drive benefits from it, enterprises have to hire a new breed of specialist that scarcely exists today: the data scientist.
Microsoft disagrees and wants to empower the average user to be able to manipulate and visualize data without being a data scientist by enhancing its front-end tools like the Office suite and minting them with Big Data and Business Intelligence capabilities. As a prime example, Microsoft aims to enable the average user to use Excel in conjunction with Power BI to translate regular rows of data into visual, actionable assets.
It's truly refreshing. For once, someone is simplifying Big Data and saying, "Look, it does not have to be this complicated. Human beings understand Big Data naturally, so why should technology complicate what we do naturally without knowing?"
Eli? Are you saying humans are Big Data analyzers? Yes, that is exactly what I am saying. Our nature is to absorb and analyze data from different sources, correlate it and then make decisions based on it. We naturally absorb both structured and unstructured data. Here's an example: When driving a car, you are absorbing and analyzing different data sources that are unstructured and unpredictable. First, you need to learn how to drive the car, and that comes through a structured data source: someone took the time to show you the gas pedal, the brake pedal, how to park, how to turn, and so on, and you learned the rules of the road from a book. All of this is structured data.
Now while driving, you absorb unstructured data in the form of pedestrians crossing the road, bikers coming up alongside, potholes and debris. You also have to factor in the weather and adjust your driving based on it. The way you drive in snow and rain is different from how you drive in sunny, 80-degree weather.
So, when you factor in the weather, road conditions and pedestrians and adjust your driving accordingly, are you not correlating different unstructured data sources, which your brain is then visualizing in real time? Is that not Big Data? And if you don't need to be a car scientist to drive a car, you should not need to be a data scientist to analyze data. Of course, I want to keep things in perspective: large, complicated data sets require advanced skill sets, just as driving a car is not the same as piloting a plane. That is where data scientists still come in.
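The driving analogy can even be written as a few lines of Python: one structured rule (the posted speed limit) correlated with unstructured, real-time inputs (weather, pedestrians) into a single decision. The adjustment factors here are purely illustrative.

```python
def driving_decision(speed_limit_mph: int, weather: str,
                     pedestrian_nearby: bool) -> int:
    """Correlate a structured rule with unstructured real-time inputs
    into one decision -- the kind of data fusion our brains do
    without noticing."""
    speed = speed_limit_mph            # structured: rules of the road
    if weather in ("snow", "rain"):    # unstructured: conditions
        speed = int(speed * 0.6)
    if pedestrian_nearby:              # unstructured: events
        speed = min(speed, 15)
    return speed

print(driving_decision(30, "sunny", False))  # 30
print(driving_decision(30, "snow", False))   # 18
print(driving_decision(30, "snow", True))    # 15
```

Three inputs of different kinds, one correlated output -- that is the analysis we all perform every commute.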
With all that in mind, consider the portfolio of Big Data products Microsoft unveiled last week. It started off with SQL Server 2014, which brings in-memory processing to all workloads; that means SQL Server can process workloads up to 30 times faster. That is huge, and so is its potential financial impact on business from a profitability standpoint. What is the most critical aspect of a sale? I will tell you without fail: it is timing. Consumers are likely to change their minds if they are given enough time to reconsider. It has happened to me many times: I was ready to buy something, asked a question, and it took the sales rep five minutes to get me an answer: "Sorry sir, my computer is slow today." No, your computer is not slow; your database probably is, and by the time I get my answer, I may no longer be interested in buying.
Microsoft also announced an appliance-based Big Data analytics solution called the Analytics Platform System, which unifies SQL Server and Hadoop from a software standpoint and leverages Microsoft partners for converged infrastructure on the hardware side.
Of course, no Big Data announcement is complete without a cloud twist of some sort: Microsoft announced Azure Intelligent Systems Service, which will allow you to collect and manage data from the Internet of Things. This is important because, as I constantly tell my customers, if the Internet of Things takes shape and we start seeing sensors, machine-to-machine communication, human-to-machine communication and so on, our private data centers will never be able to keep up with the amount of compute or storage this real-time world needs. We don't have the ability, scale or discipline to rapidly build or expand our private data centers to keep up.
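A quick back-of-the-envelope calculation shows why private data centers struggle at Internet of Things scale. Every number below is an assumption chosen for illustration only.

```python
def daily_ingest_tb(sensors: int, readings_per_sec: float,
                    bytes_per_reading: int) -> float:
    """Terabytes of raw sensor data generated per day, given a fleet
    size, a sampling rate and a payload size (all hypothetical)."""
    bytes_per_day = sensors * readings_per_sec * bytes_per_reading * 86_400
    return bytes_per_day / 1e12  # bytes -> terabytes

# Ten million sensors, one 200-byte reading per second each:
print(round(daily_ingest_tb(10_000_000, 1, 200), 1))  # 172.8 TB per day
```

Even this modest fleet produces well over 100 TB a day -- growth that only a public cloud can absorb elastically.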
As a result, relying on public cloud services for their scale and ability to handle these large data sets is inevitable, and this is where the Azure announcement fits the bill perfectly. Microsoft is positioning Azure as a contender to host these data sets and enable customers to visualize them.
Microsoft's strategy is a smart one: enable the business at the local level with SQL Server on the back end and Office at the front end. Better yet, accelerate adoption by offering appliance-based software, hardware and support and position yourself to take advantage of the Internet of Things with Azure, so that customers that are using your on-premises based solutions will want to migrate to a platform that they are familiar and comfortable with when the time is right.
I definitely like the new Microsoft CEO and his vision. Thoughts? Please share in the comments section.
Posted by Elias Khnaser on 04/21/2014 at 1:10 PM