Microsoft Seeks To Empower Users With Big Data Tools

In a data-driven culture, the potential for maximizing business profitability by leveraging Big Data represents a great opportunity. But the hype insists that in order to manipulate this Big Data, visualize it and derive benefits from it, enterprises have to hire a new breed of specialist that scarcely exists today: the data scientist.

Microsoft disagrees, and it wants to empower the average user to manipulate and visualize data without being a data scientist by enhancing its front-end tools like the Office suite and infusing them with Big Data and Business Intelligence capabilities. As a prime example, Microsoft aims to enable the average user to pair Excel with Power BI to translate regular rows of data into visual, actionable assets.

It's truly refreshing. For once, someone is simplifying Big Data and saying, "Look, it does not have to be this complicated. Human beings understand Big Data naturally, so why should technology complicate what we already do without even knowing it?"

Eli, are you saying humans are Big Data analyzers? Yes, that is exactly what I am saying. Our nature is to absorb and analyze data from different sources, correlate it and then make decisions based on it. We naturally absorb both structured and unstructured data. Here's an example: When driving a car, you are absorbing and analyzing different data sources that are unstructured and unpredictable. First, you need to learn how to drive a car, and that knowledge comes from a structured data source. Someone took the time to show you how to drive the car: the gas pedal, the brake pedal, how to park, how to turn and so on. You learned the rules of the road from a book. All of this is structured data.

Now while driving, you absorb unstructured data in the form of pedestrians crossing the road, bikers coming up on the side, and potholes and debris on the road. You also have to factor in the weather and adjust your driving based on it. The way you drive in snow and rain is different from how you drive in sunny, 80-degree weather.

So, when you factor in the weather, road conditions and pedestrians and adjust your driving accordingly, are you not correlating different unstructured data sources, which your brain then visualizes in real time? Is that not Big Data? And if you don't need to be a car scientist to drive a car, you should not need to be a data scientist to analyze data. Of course, I want to keep things in perspective: large, complicated data sets require advanced skill sets, and driving a car is not the same as piloting a plane. The same distinction applies to data scientists.

With all that in mind, consider what Microsoft unveiled last week in its portfolio of products geared towards Big Data. It started off with SQL Server 2014, which brings in-memory processing to all workloads. That means SQL Server can process workloads up to 30 times faster. That is huge, and so is its potential financial impact on business from a profitability standpoint. What is the most critical aspect of a sale? I will tell you: without fail, it is timing. The consumer is likely to change his or her mind if given enough time to think it over. It has happened to me many times: I was ready to buy something, asked a question, and it took the sales rep five minutes to get me an answer. "Sorry sir, my computer is slow today." No, your computer is not slow, your database probably is, and by the time I get my answer, I may not be interested in buying anymore.

Microsoft also announced an appliance-based Big Data analytics solution called the Analytics Platform System, which unifies SQL Server and Hadoop from a software standpoint and leverages Microsoft partners for converged infrastructure on the hardware side.

Of course, no Big Data announcement is complete without a cloud twist of some sort: Microsoft announced the Azure Intelligent Systems Service, which will allow you to collect and manage data from the Internet of Things. This is important because I constantly tell my customers: if the Internet of Things takes shape and we start seeing sensors, machine-to-machine and human-to-machine communication, and so on, our private data centers will never keep up with the amount of compute or storage this real-time world needs. We don't have the ability, scale or discipline to build or expand our private data centers rapidly enough to keep up.

As a result, relying on public cloud services for their scale and ability to handle these large data sets is inevitable, and this is where the Azure announcement fits the bill perfectly. Microsoft is positioning Azure as a contender to host these data sets and enable customers to visualize them.

Microsoft's strategy is a smart one: enable the business at the local level with SQL Server on the back end and Office on the front end. Better yet, accelerate adoption by offering appliance-based software, hardware and support, and position yourself to take advantage of the Internet of Things with Azure, so that customers using your on-premises solutions will want to migrate to a platform they are familiar and comfortable with when the time is right.

I definitely like the new Microsoft CEO and his vision. Thoughts? Please share in the comments section.

Posted by Elias Khnaser on 04/21/2014 at 1:10 PM


The Sky Is Not Falling On Citrix's Head

One of my favorite comic book series is "The Adventures of Asterix." In it, the fearless Gauls who resisted Julius Caesar's Roman Empire had but one fear in life, that the sky would fall on their heads tomorrow.

Last week, Brian Madden's post, "Cloud platforms diminish Citrix XenDesktop/XenApp's value. *This* is the opportunity for VMware!" essentially implied that the sky was falling on Citrix's head tomorrow and that VMware is on the verge of achieving "Plato's Republic" in software: tight integration among its current and acquired products, remarkable automation and orchestration, perfect agility and mobility between cloud and on-premises cloud-like deployments, and an exemplary, complete feature set across the portfolio.

Madden makes some excellent points, but the post is a bit one-sided and too much doom and gloom. The same argument, with exactly the same points, can be made against VMware if we consider that Microsoft already has all the components it needs.

Microsoft has a very good hypervisor, and I think everyone would agree that at this point it is good enough. The company has an excellent application virtualization technology. It has RDSH and RemoteApp. It has the remote protocol, whether RDP or RemoteFX. It has an excellent management suite in System Center. It has a VDI broker. And it has Azure, a very mature and very large-scale worldwide cloud deployment. Once project "Mohoro" comes to life, Microsoft will also have a DaaS offering.

If all the above were not enough, Microsoft also controls licensing. Considering everyone is trying to virtualize Microsoft's operating systems and applications, that gives Microsoft a definitive edge. Microsoft also has decent integration across all its products and is making great progress on private cloud.

All that being said, the question now becomes: Why would anyone use VMware -- or, ironically, Citrix for that matter? The world, however, is not that black and white, and features matter a great deal. So do performance, security, scalability, maturity and ease of use. This is exactly where Citrix has been playing since its inception, and it is exactly why VMware will be able to compete and why not everyone will just use a Microsoft solution.

VMware Has Its Challenges
VMware has always had excellent vision. Heck, it practically invented an industry that did not exist, and I am a fan of that vision. But its execution has been spotty and slow, especially when it comes to EUC. It took VMware quite some time to collect all the pieces necessary for an end-to-end end-user computing solution. VMware now has most of the components, and if rumors are true, it will soon announce a direct competitor to Citrix XenApp.

That's great, but VMware is now challenged with integrating all these products, and history has shown us that VMware has been slow to integrate technologies. Think ThinApp and profile management. VMware now has to integrate AirWatch with Horizon View. It also has to integrate AirWatch's Secure Content Locker with Horizon Data. It has to figure out how to integrate Desktone with Horizon View and how all of this will tie into vCloud. And at some point, VMware also has to acquire Teradici, something I have been screaming about for years. (VMware, you cannot OEM the heart of your solution, your remote protocol.)

So, VMware has its work cut out for it and will be busy for quite some time with all this integration work. Let's not forget that as it integrates these products, the company has to continue to innovate across all of them to remain competitive.

What's Up, Citrix?
Citrix, on the other hand, has done a wonderful job building a true end-to-end end-user computing suite that spans beyond just Windows desktops and apps to include MDM, MAM, cloud storage and collaboration, topped with a suite of networking products for security, acceleration and optimization. Citrix has built this portfolio through acquisition and development and has been integrating it for quite a while now. While tighter integration and some enhancements are still needed here and there, Citrix is very far along, and ahead in some cases.

Citrix has been so focused on its mobile work styles vision that it is completely missing out on the cloud opportunity. Sure, it has a good cloud portfolio with CloudPlatform and it has been working on integrating that with its mobile work styles vision. And that's exactly the problem: Citrix is integrating its cloud portfolio with the mobile and desktop virtualization product suites instead of going after the cloud from a platform perspective. Citrix absolutely has a lot of catching up to do in this space.

I hope Citrix realizes that its strategy of empowering cloud service providers is a 1990s approach, and that an acquisition in this space would properly position the company to take advantage of the cloud just like Cisco, EMC, VMware, Microsoft and others are doing. Empowering is not enough, Citrix -- you must own a piece of the cloud. That would not only let Citrix deliver its own DaaS solution but also expand its offerings.

My suggestion is that it acquire Rackspace, a company that has built a great brand name for itself and one that I believe Citrix is financially capable of acquiring. Citrix can bring a lot to Rackspace immediately in terms of solutions, but even more important is scale. See, Rackspace has for the most part sold direct to consumers, with very little enterprise sales experience. Citrix can bring an army of salespeople and partners who could immediately sell the portfolio, since they are familiar with it, with only a little training needed. Rackspace brings Citrix a profitable business, a great brand name and the ability to immediately own a piece of the cloud and begin to build and offer Citrix solutions. Some OpenStack/CloudStack bickering aside, a Rackspace acquisition is exactly what Citrix needs.

So, the sky is neither falling on Citrix's head, nor is VMware on the verge of achieving Plato's Software Portfolio Republic. Both companies have excellent vision and excellent product portfolios, with gaps and a lot to improve on. I see VMware competing with Citrix neck and neck, and that means we as customers are poised to benefit from that competition, with reduced pricing on XenApp and better features from both companies' solutions as the competition heats up.

Posted by Elias Khnaser on 04/08/2014 at 9:40 AM


How To Prevent Uncontrolled Use of VMs as Routers or DHCP Servers with Hyper-V R3

Hyper-V R3 has two advanced but somewhat overlooked networking features that can come in very handy. I'm sure administrators will appreciate them and put them to good use, so we'll cover them here.

If you've worked in the enterprise long enough, you've come across rogue DHCP servers and routers that show up on the network and cause headaches. Many years ago, before virtualization and even VMware, I had to deal with these types of problems, especially physical developer workstations acting as DHCP servers (among other things) that our friendly developer colleagues innocently believed were no big deal. Back then, tracking down the machines offering these services was neither easy nor simple. Sure, there are ways of configuring switches and routers to handle this issue, but that is only one aspect of it -- we still need to get to the offenders and turn them off. In later years, software became available to help track them down.

The problem still exists, except now the offenders are virtual machines. There are many ways to control them, depending on how you provision these VMs. So while the problem isn't as widespread as it used to be, I still find it useful to know that safeguards are available should the need arise.

The two features in Hyper-V R3 that address this issue are DHCP Guard and Router Guard. Both are accessible from the Network Adapter's Advanced Features node of a virtual machine's settings. As the names imply, with DHCP Guard enabled, you can prevent a VM from answering DHCP requests and acting as a DHCP server; with Router Guard enabled, you can prevent a VM from advertising itself as a router and redirecting packets.

These features can also be very handy when a VM is connected to multiple virtual networks and you want it to offer a service only on a specific virtual network rather than all of them. You can then enable DHCP Guard or Router Guard on the networks that should not be receiving these broadcasts. This is useful for both server and desktop VM implementations, and the guards don't always have to be about preventing misuse or abuse -- you can leverage them to designate VMs for a specific purpose.
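
For bulk changes, both guards can also be flipped from PowerShell rather than the GUI. Here's a minimal sketch, assuming the Hyper-V PowerShell module is available on the host; the VM names below are placeholders, not values from this post:

    # Turn both guards on for a single VM:
    Set-VMNetworkAdapter -VMName "DevWorkstation01" -DhcpGuard On -RouterGuard On

    # Or sweep every VM on the host except your legitimate DHCP server:
    Get-VM | Where-Object { $_.Name -ne "DHCP01" } |
        Set-VMNetworkAdapter -DhcpGuard On -RouterGuard On

    # Verify the settings took effect:
    Get-VMNetworkAdapter -VMName "DevWorkstation01" |
        Select-Object VMName, DhcpGuard, RouterGuard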

One final thought on these two features: While some of you may want to enable them by default and make them part of your VM provisioning process, keep in mind that both carry a light performance penalty when enabled. So make sure you test, compare and contrast before you decide to use them.

Posted by Elias Khnaser on 04/02/2014 at 11:18 AM


vSphere Mobile Watchlist: Monitoring On The Go

Mobile devices are infiltrating our lives and changing our every behavior and habit -- from the way we shop, to the way we learn, to the way we make decisions. So it's only natural for IT professionals to gravitate toward and welcome mobile applications that allow them to perform certain aspects of their jobs on the go.

VMware's solution for monitoring VMs is vSphere Mobile Watchlist. Available in both the Apple AppStore and the Android Market, the app can monitor and alert, and it has remediation and delegation capabilities. You can configure a watchlist of important VMs and monitor them on the go. In the event of an issue, you can initiate remediation or delegate the task to another team member who can help troubleshoot the issue. The idea here is that you can be made aware of an issue or outage as near to real time as possible, and can respond with some form of action.

From a remediation perspective, the application can be used to initiate all power operations that can be done from a traditional client, such as a restart, shutdown, reset, power on and power off. The application also provides a dashboard with a summarized view of watched VMs, where you can view state and health and other information.

From vSphere Mobile Watchlist you can acknowledge any alerts that the VM presents, and disregard those that are safe for now and act on the ones that need immediate attention. What I also like is that the application is able to suggest VMware KB articles that may be relevant to the alerts, which can be passed along to other team members.

Another cool use case the mobile app enables is "read-only": Mobile users with read-only capabilities can monitor an environment but are limited in their ability to remediate, based on their role or permissions. It should really be called "role-based" access, considering you could modify a user's profile and grant limited remediation capabilities.

On the security end, VMware applies the same security requirements to vSphere Mobile Watchlist that it does to its popular vSphere Web Client, so this is another good reason and opportunity to seriously consider a Mobile Device or Mobile Application Management strategy as the number of mobile applications used by end users and even admins continues to grow. Today, some form of VPN needs to be established for this mobile application to meet its security requirements and function properly.

Something tells me that we will be seeing more mobile applications that are geared towards IT professionals from the major software vendors in the months to come.

Posted by Elias Khnaser on 03/31/2014 at 3:37 PM


ThousandEyes, AppDynamics: New Breed of Application Performance Management

I have yet to come across an enterprise that uses application monitoring in any proactive way. In most cases, admins use systems that show a red light or an up/down symptom indicator, and wait for it before taking any kind of action. In essence, we acquired the software but only scratched the surface of most application monitoring software's capabilities. More often than not, we never fully configured those tools.

To go even further, you've likely been in a situation where an alert went off and the up/down indicator was just that -- an indicator. It didn't help much, so you had to fish around for the reason the application was not working properly. Three days of research later, you figured out that the database had reached its maximum connection limit.

I am sure some of you will throw synthetic monitoring and some other fancy keywords into the mix. Synthetic monitoring is great, but I still insist that most of us never really configured it, or barely got the basic functionality out of it.

Taking all the above into account, we still managed to get by with somewhat acceptable service levels. That was a different time, a different era. It was a time when everything was contained within the boundaries of our data center and we had control over every aspect of the application. A time before SaaS, the cloud, social or mobile.

Today, add all these factors into the mix and you can pretty much render traditional application performance monitoring obsolete. Now you have to consider so many new variables, such as your many different SaaS application providers, your cloud provider, the Internet and, of course, your traditional data center. Troubleshooting by watching the up/down indicator is no longer a strategy that lets you get by and maintain any sort of reasonable service levels.

Imagine having an issue and getting into a finger-pointing contest between your internal IT team, your IaaS cloud provider and possibly your SaaS provider -- not to mention your Internet provider -- over where the slowness is, where the outage occurred or who is responsible for fixing it. Imagine getting that call on Monday morning: "Hey, the application was horribly slow on Sunday around 3 p.m." Anyone care to troubleshoot that for your CFO with all the factors I mentioned weighing in?

You can see why I am excited about ThousandEyes and AppDynamics. There are other good app monitoring solutions, but those two caught my attention because of their ability to monitor and pinpoint issues within the data center and across the application stack, as well as between the data center, the cloud provider and the Internet provider the traffic is passing through. You are essentially able to see, end to end, what is going on with your applications and where you might have impending issues.

In addition, both companies aggregate large amounts of data into visually pleasing, easy-to-navigate dashboards that go beyond traditional up/down monitoring, all the way to exposing the entire landscape of an application with all its interdependencies. IT pros will find themselves actually wanting to navigate, fully configure and use these tools because of the great value they can finally derive from them, without needing a PhD to configure them.

Another nifty feature: After you've detected an issue, you can share and collaborate with coworkers, or maybe even professionals from other companies, to see the problem visually and work together to address it. The share-and-collaborate feature is one that will be most valued simply because of the ease with which you can share that information. (I know, right? It's such a basic feature, but so powerful.)

Route changes, link delays, bandwidth issues, database connectivity problems, user experience enhancement and more are all new features that this new breed of application performance management tools offers. The exciting thing about some of these companies is that they have helped large SaaS providers like Twitter and Citrix GoTo improve their user experience by detecting potential issues, which developers or IT professionals can then address by reconfiguring or enhancing the software code.

How many of you are looking at ThousandEyes or AppDynamics today? What has your experience been in a world that is no longer contained within the boundaries of a data center, be it physical or virtual for that matter?

Posted by Elias Khnaser on 03/26/2014 at 2:03 PM


10 Can't Miss Citrix Synergy 2014 Sessions

Citrix Synergy 2014 is in Los Angeles in a few weeks, and Citrix has released the session catalog online. I took a good look inside, and here are my recommended sessions, hand-picked to help you navigate the sea of information typically presented at these conferences.

A session that needs no introduction, and is year after year one of the best and most interactive sessions of the show, is "Geek Speak Live." You can’t attend Synergy and not participate in it. The panel is top notch, the topics are often timely and the setting is a casual one. For those who want to track me down, don't miss that session, because I'll be there. There are several Geek Speak Live sessions, so be sure to check the catalog.

Next up is SYN251 – Direct from the performance labs: Best practices for VDI, a virtual reality check presented by my friend and fellow CTP, MVP and vExpert Ruben Spruijt, along with Jeroen van de Kamp. Do you think you know VDI best practices? The research Ruben and Jeroen do is deep and extensive, and you are bound to learn a lot from their session. If you are working on a VDI deployment, attend this session. Remember, VDI in a POC or a limited deployment of up to 500 seats is not a big deal, but VDI at scale is a different beast!

Speaking of VDI at scale, are you working for an enterprise with a VDI project in the tens of thousands? Then I recommend SYN119 – How Atlanta Public Schools delivers virtual desktops to 50,000 students, presented by Thomas Gamull. If you are in healthcare, check out SYN250 – Deploying XenDesktop with Cisco UCS for 10,000 healthcare workers, presented by my good friend Jarian Gibson. This will be a technical session, so bring your coloring books and crayons.

Are you looking at the cloud for possible deployment of your VDI environment? We all love showdowns and comparative analyses that can save us a ton of research and heartache, right? I have a session for you: SYN254 – Showdown: AWS vs Azure for desktop delivery, presented by another one of my fellow CTPs, Henrik Johansson. Henrik does an excellent job and is very thorough. I am personally very interested in this session.

Do I need to stress the importance of monitoring and proactively identifying issues or user experience degradation? We have all experienced it. Heck, these are the calls that ruin our day. Check out SYN326 – HDX Insight to identify XenDesktop Bottlenecks, also presented by Henrik.

You can’t attend Synergy and not get a taste of GPU computing in virtual desktops, and for an unbiased, unfiltered, honest-to-God opinion, I can’t think of anyone better than my fellow Chicagoan and friend Shawn Bass and his evil German mad scientist, fellow CTP Benny Tritsch. Think you are technical and can handle this session? Show up and prove me wrong by attending SYN324 - Comparing GPU-accelerated high-end graphics performance of virtual desktop platforms.

If you say you don’t have a Dropbox problem in your environment, you are either not aware of it or you're ignoring it in the hope that it will go away. Either way, you are likely wrong. Come to SYN216 – ShareFile: What’s New and What’s Next. ShareFile is quickly competing with XenApp as my favorite Citrix product.

I have always maintained that enterprises should look at End User Computing holistically and not in a silo -- not just XenApp and XenDesktop, not just ShareFile and XenMobile, but all of them together and how they integrate. SYN308 - How XenMobile integrates with NetScaler, XenDesktop and XenApp for complete enterprise mobility should cover a big chunk of that strategy.

Finally, you can’t go to Synergy and not attend a cloud session that is tied to business. For that, I recommend SYN233 - Achieving business agility with cloud computing in data-intensive, media-rich, web-scale environments. I have high hopes for this session.

As you can see from the session catalog, the show is packed with excellent content and you can’t go wrong with session choices. I just figured I would give you my perspective and some of the sessions that I will be attending. If you have other sessions that you think should be highlighted, add them in the comments section. I hope to see you there!

Posted by Elias Khnaser on 03/12/2014 at 5:13 PM


Can Private Cloud Storage Accelerate Enterprise Cloud Adoption?

The barrier to mass cloud adoption by enterprises is realistically two-fold. First, the amount of data stored on premises would take an exorbitant amount of time to move to the cloud, and in many cases doing so would not be financially beneficial. (We're not even mentioning security and privacy concerns about data and a whole slew of other things.) Second, even if we moved all the data into the cloud, the communications link would dictate that the compute (VMs) be as close as possible to this data, because most enterprise applications and databases are latency-sensitive. That would mean moving your compute and your storage to the same provider, and that leads to vendor lock-in.

There are many options that can address some of these caveats, but there really isn't an elegant solution to this problem yet. Well, what if certain data centers could establish direct, high-speed, low-latency connections to the major cloud service providers, like Amazon, Azure, vCHS and IBM? It would plug one of the two challenges I mentioned earlier, but we would still have the issue of storage.

Let's stretch the "what if" a bit more: What if these data centers also offered customers the ability to host their storage arrays in a fully managed offering? The customer still buys the storage arrays but never sees them. Instead, the storage is managed on the customer's behalf on a pay-as-you-grow basis. So, instead of buying all the storage upfront, customers buy what they need now, and as they need more, it is made available to them. The customer dictates SLAs and the provider bills accordingly. This centralizes the storage in a location that several cloud service providers can access over high-speed, low-latency links, thereby doing several things: avoiding lock-in, addressing the security and privacy concerns, and enabling enterprises to move applications and databases into the cloud without worrying about latency or a degraded user experience.

Now, before every vendor under the sun jumps on my comments and says, "We are already doing this," read carefully what I am suggesting. I understand that today you will place an array on premises or in a customer's co-located space. I understand that you are willing to manage it on the customer's behalf; that is great, too. What I am suggesting is: take the customer out of the data center business and out of the co-location business altogether. The customer buys the array, and where it lives is not their concern as long as certain criteria are met. The storage array need not be in a customer cage; it could be anywhere, as long as the storage is theirs and is managed on their behalf.

Back to the scenario I described earlier: We now have centralized storage in a location that cloud providers can reach at high speed and low latency, which means the compute -- your VMs -- is now free to live on any cloud. The VMs could be on Azure today, and tomorrow VMware introduces a limited-time offer that makes hosting them cheaper. Guess what? Migrating those VMs is now very easy. Maybe the day after tomorrow, Amazon offers cheaper prices, so you move back to that provider. Your data is still in the same place, but you are no longer locked in with your cloud provider.

In this model, we are blending the traditional data center model with the cloud for a best-of-breed solution. In such a scenario, you can enable services like DaaS without worrying about user experience or performance. You can move your tier-1 apps to the cloud and not worry about performance.

What do you think? Is this the way we are going to migrate to the cloud and get out of the data center and infrastructure-owning business altogether? Where do you see challenges? What am I missing? Please share in the comments here.

Posted by Elias Khnaser on 03/10/2014 at 3:11 PM


No Private Cloud Plans? Well, You'll End Up in the Public Cloud...

Enterprise IT undergoes an evolutionary phase every 20 years or so. IT research firm IDC labels these evolutionary phases "platforms": it identifies the first platform as the mainframe, the second as client/server and the third as the cloud. While every evolution diminishes the importance of, and reliance on, its predecessor, no platform has yet completely replaced another. To this day, IBM makes significant money with its AS/400.

Today, there is a lot of talk about cloud and cloud services. Even so, many in enterprise IT have yet to jump on board, trying to ignore the hype and thereby downplaying the importance of the cloud and how it will affect their businesses in the future. I believe those companies are still transitioning from the first to the second platform, and have yet to even consider the third.

Yes, there's still growth in the second platform. Many IT research firms tout 30 percent year-over-year growth in traditional IT spending. What enterprise IT decision-makers fail to notice, or choose to ignore, is that the third platform -- cloud services -- is growing by about 300 percent year over year. Most of them dismiss these numbers as hype, but the third platform will change the face of IT suddenly and without notice.

Enterprise IT does not realize how many cloud services it is already consuming today, whether ERP and CRM systems or even applications that until just yesterday were a pillar of the data center. In the enterprise, there is significant adoption of Office 365, EMC Atmos and many others. While traditional IT spend grows at 30 percent yearly as enterprises go about building systems the traditional way, in the coming months and years this growth will screech to a halt. I see it dropping from 30 percent to 15 percent, and then shrinking even more quickly as the end approaches. So, what will be the reason?

For one answer, let me share my thoughts based on endless customer interactions. Today, when I talk to customers, I put them into two silos. In the first are those who are content with their existing environment but want to make it better: virtualize more, make it a lot more streamlined and efficient. These environments are typically static -- not a lot of changes occur, not a lot of VM requests, no need for a self-service portal or elasticity, and so on. The environment is very predictable.

In the second customer silo, the environment is very dynamic. Those customers have adopted converged infrastructure but are starting to complain about converged infrastructure sprawl and their ability to manage it all efficiently. This silo is a perfect candidate for a private cloud deployment, and these customers are either looking at a serious deployment or already deploying one.

The first silo of customers, those striving to be 100 percent virtualized, don't realize it yet, but they are headed to the public cloud. Right now, they're in no hurry, either because their hardware refreshes are not yet imminent or because culturally they are not ready. But without a doubt, this silo of customers will at some point leverage an IaaS offering to host their already 100 percent virtualized environments. Some of you might be thinking, "Elias, we have large amounts of data and storage on premises that would cost a lot to move to the cloud," or you may bring up security or privacy concerns. Let's tackle both of these obstacles.

Storage capacity is a valid concern, but technology will overcome it in a number of ways. We may see technology that is able to bring up a working set of data and keep it as close as possible to the cloud infrastructure while replicating back to the on-premises storage. It's one way of not moving everything to the cloud.

The other option is that cloud storage gets to a point where it truly rivals on-premises storage on cost. Today, the argument can be made that cloud storage in the long run is not cheaper than on-premises storage, and that may be true, but I don't think it will remain true for long. Network and communications links are also continuously improving, so moving these large sets of data will not be as challenging as most think.

As far as privacy and security are concerned, I hold the view that privacy is dead, and I have written about it here on Virtualization Review many times. I have also argued that cloud service providers have better security than enterprise IT could ever have (also something I've discussed here before).

That being said, from a compliance, security and privacy perspective, we are starting to see the rise of "verticalized" clouds intended to address healthcare, financial and educational requirements, with more to come down the line.

For all these reasons, I believe that enterprise IT shops striving for 100 percent virtualization will soon find themselves asking the question -- or being asked it by the business or a new, savvy CIO: If we're 100 percent virtualized, why are we still spending CapEx on IT infrastructure? Does this mean the on-premises or collocated data center is dead? No, it means its footprint will be significantly reduced. Just as today you virtualize first, tomorrow you will put the workload in the public cloud first and will have to justify moving it on premises, where the cost of hosting will be higher. Eventually, many IT organizations won't own a data center, and many IT departments once known as builders of infrastructure will become brokers of cloud services.

Posted by Elias Khnaser on 02/24/2014 at 4:59 PM


How To Clone Running VMs with Hyper-V and SCVMM

Cloning a virtual machine while it is running is a very handy feature that I have personally used on numerous occasions, especially to test upgrades against a copy of a production virtual machine.

Sure, there are other ways of testing software, but in my meticulous approach, I always like to test against as close to the real thing as possible. There are other use cases for cloning a virtual machine without powering it off; I just gave you one that I have used in the past.

While vSphere has had this capability for a while, it is a very welcome new feature in Windows 8.1, Windows Server 2012 R2 and System Center Virtual Machine Manager 2012 R2.

The action is named differently in Windows 8.1 and Windows Server 2012 R2 than it is in SCVMM 2012 R2. In the latter, the feature is appropriately called "clone." In Windows 8.1 and Windows Server 2012 R2, it's called "export." Same function, different name.

To initiate this action from within Hyper-V Manager on Windows 8.1 or Windows Server 2012 R2, use these steps:

  1. Open Hyper-V Manager.
  2. Find the running virtual machine that you want to clone.
  3. Right-click on it and click on Export.

To initiate this process on SCVMM 2012 R2, use these steps:

  1. Launch your Virtual Machine Manager console.
  2. Select the virtual machine tab.
  3. Locate the running virtual machine that you wish to clone.
  4. Right-click on it, hover over "Create" and select "Clone."

It's that easy. Once you start the task, the time it takes to clone the VM depends on the size of the virtual machine. You should also be aware that once you initiate the cloning process, the new VM is created from the point in time when you click the "clone" or "export" action.
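
If you'd rather script the export flavor, there's a PowerShell equivalent. Below is a minimal sketch, assuming the Hyper-V PowerShell module on Windows 8.1 or Windows Server 2012 R2; the VM name and export path are placeholders, not real values from this post:

    # Export a running VM (the name and path here are examples); the exported
    # copy reflects the VM's state at the moment the command is issued.
    Export-VM -Name "SQL-PROD-01" -Path "D:\Exports"

    # Add -AsJob to run the export in the background while you keep working:
    Export-VM -Name "SQL-PROD-01" -Path "D:\Exports" -AsJob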

Posted by Elias Khnaser on 02/18/2014 at 2:36 PM


After AirWatch, Syncplicity Should Be Next on VMware's List

A few weeks ago, I wrote about VMware's executive hirings and why they will not close the EUC gap. In that same blog, I also outlined my vision for what VMware needs to do to ramp up on EUC.

One of my very, very long-time requests was for VMware to finally acquire a true MDM/MAM company and enhance Horizon Mobile. OpenPeak was always a favorite acquisition target, but so were AirWatch and MobileIron. For whatever reason, at the time I felt that AirWatch would be the right choice.

As I was writing my "rant-athon," I never expected that after several years VMware would finally make that acquisition, so when I heard the news, I said some words that can't be published here, but "About time" sums it up. I will be watching VMware to see how quickly it integrates AirWatch into Horizon.

The AirWatch buy is now behind us, but technology moves fast and there are still some gaps that need to be plugged. So while VMware is in the acquiring mood, let's see if it is listening; maybe it can knock a few more requests off my wish list. In that same post, I suggested that it enhance Horizon Data, and it can do that by combining it with EMC Syncplicity. The combination would make for an excellent gap-filler.

As EMC and VMware continue to collaborate and position products in the right portfolios, I am sure they can see that Syncplicity shouldn't be a stand-alone product. It really belongs in Horizon Data, so that VMware can properly address its lack of a Dropbox-type solution for the enterprise. It can sell the combination as part of an end-to-end end-user computing strategy, where the new Horizon Data (with Syncplicity) integrates easily with AirWatch, View, Mirage and the other components. I'm hoping that in the near future it becomes the platform upon which VMware expands its profile management to leverage Horizon Data.

Dropbox is a true problem in the enterprise, and not because it is a bad solution. (On the contrary, it is a revolutionary offering that created a new market segment.) For one, Dropbox lacks auditing capabilities; it also lacks good security and privacy features.

More important, enterprises are tired of purchasing point solutions that are difficult to integrate. Rather than making isolated purchases -- an MDM solution, then MAM, then cloud file sync, then collaboration, and finally desktop virtualization and physical PC management -- enterprises are after holistic solutions in which each product works well with the others. They absolutely need and want to take a holistic, end-to-end approach to their software and services choices. That's why the combination of Syncplicity and Horizon Data is the right move: the VMware sales force can sell it better, the story is better, and the customer wins with a significantly richer Horizon Data.

Posted by Elias Khnaser on 02/12/2014 at 3:28 PM


Return of the King: Citrix XenApp Back with Version 7.5

I don't think there is a more appropriate title to describe Citrix's announcement of the re-introduction of XenApp, just a few months after it was abolished. I could have used a phrase from the great Tony Stark: "Never has a greater phoenix metaphor been personified in human history." But that's a bit much for a piece of software, no matter how much I like it.

The irony with Citrix is that it has excellent vision and fantastic products, but sometimes it makes decisions that are truly mind-boggling. When Citrix decided to integrate XenDesktop and XenApp, we applauded the effort from an architectural standpoint. It was, and still is, the right thing to do, and that has not changed with XenDesktop and XenApp 7.5.

When the products were being integrated, all the components with the word "desktop" in their names were renamed to "delivery," because the product was no longer just about desktops. That was cool and everything, except -- HEY, CITRIX -- the product you were converging into has the word "Desktop" in it. So, for instance, Citrix renamed the Virtual Desktop Agent to Virtual Delivery Agent, Desktop Studio became Citrix Studio, Desktop Director became Citrix Director, Desktop Groups became Delivery Groups and so on. But you missed one.

Citrix knew back then that it was going to be all about applications. Today, it is justifying the rebirth of XenApp as a stand-alone product with changing market conditions. I am calling Citrix out on this: market conditions have not changed; it is still all about applications. Citrix, you messed up with XenDesktop. Now you are setting the record straight, and that is a welcome step.

Now that I got that off my chest, let's take a look at what the young XenApp 7.5 king has in store for you:

For starters, and just to be clear, IMA is still dead. The architecture is still built around FMA, and none of the consoles have changed. So while the product is re-introduced, this is simply repackaging and product repositioning at this point, not an architectural change.

Hybrid cloud provisioning is probably the most exciting new feature of this release. This version gives you the ability to deploy XenApp or XenDesktop infrastructure on public clouds like Amazon, with Microsoft Azure support to come. In addition, you can leverage Citrix CloudPlatform for on-premises or public cloud deployments. This is exciting news for enterprises that are deploying or thinking about deploying IaaS-like private clouds or want to leverage the public cloud.

Citrix Framehawk also makes it into this release, and it's definitely the most exciting technology of the release. Wait, did I say that already about the hybrid cloud stuff? OK, I lied about the hybrid cloud; Framehawk is cooler. This is the technology Citrix acquired a few months ago from a company called Framehawk. Among its many attributes, the important one is a lightweight protocol that can function exceptionally well over very high-latency links that experience significant packet loss. We have seen videos on YouTube showing 50 percent packet loss, yet the protocol still performs very well.

And, finally, AppDNA is now part of XenApp. Depending on the edition of XenApp you own, you will be able to leverage the ability to pretty much P2V applications. This will significantly streamline the migration process and put you in a better position to tackle mobility.

I can't help it, I am a fan of XenApp. So, I'm more interested in what your thoughts are. Was this a good announcement or a confusing one on Citrix's part, and what do you think of some of the new features?

Posted by Elias Khnaser on 02/03/2014 at 4:19 PM


Who Wins, Now that the Hypervisor Wars Are Over?

Let's face it, the hypervisor wars are over. VMware clearly dominates the market, and Microsoft is closing the gap with every release of Hyper-V, but the focus is no longer on the hypervisor.

We have reached a point where no single hypervisor will reign supreme in any organization. Furthermore, I believe that VMware will come under increased pressure from Citrix, Red Hat and other contenders. What we are beginning to witness in the enterprise is the compartmentalization, or classification, of hypervisors from a workload perspective.

A year or two ago, I probably would not have recommended running Citrix XenDesktop on anything but vSphere, because of all the added benefits VMware's hypervisor can offer, especially from a performance perspective. Today, I can comfortably recommend Hyper-V or even XenServer for XenDesktop workloads. Conversely, I can see situations where certain tier-1 applications run on vSphere, while other tier-1 applications run on Hyper-V, and so on.

The end of any single hypervisor's dominance isn't a bad thing. You are probably thinking right about now that having all these hypervisors deployed will be a support nightmare, maybe even a logistical one. But the shift is already happening, so it's inevitable. What can alleviate the pain, or "soften the blow" as it were, are management tools that allow you to manage these different hypervisors from a single pane of glass. Microsoft System Center Virtual Machine Manager is a great example of a management console that carries a lot of promise.

The public cloud is another factor contributing to the new status quo. Enterprises will without a doubt use services from Microsoft's Azure, VMware's vCHS and others. So it makes total sense for them to also leverage the matching hypervisor locally, to facilitate interchangeability between the public cloud and the private virtual infrastructure.

This adoption of multiple hypervisors could potentially have been avoided had we been smart enough as a virtualization community to develop a standardized virtual machine format that works across hypervisors. But as long as each vendor maintains its own format, the enterprise will find itself using several hypervisors and several public clouds for different workloads, to avoid vendor lock-in and to take advantage of the price wars that will take place.

So, for those of you whose strategy is to standardize your company on a single hypervisor to streamline processes, support and training, I caution you to rethink this approach and take a closer look at the market circumstances. It might be cheaper for your organization to train on and support multiple hypervisors and multiple public clouds.

I am eager to hear from those of you running multiple hypervisors in production. What reasoning and justification did you use to win over your company? Please share your comments here or at mdomingo@1105media.com (my editor's e-mail).

Posted by Elias Khnaser on 01/22/2014 at 11:12 AM

