Enterprise IT undergoes an evolutionary phase every 20 years or so. IT research firm IDC labels these evolutionary phases as platforms, identifying the first platform as the mainframe, the second as client/server and the third as the cloud. While every evolution diminishes the importance of, and reliance on, its predecessor, no platform has yet completely replaced another. To this day, IBM makes significant money with its venerable AS/400 systems.
Today, there is a lot of talk about the cloud and cloud services. Even so, many in enterprise IT have yet to jump on board, trying to ignore the hype and thereby downplaying the importance of the cloud and how it will affect their businesses in the future. I believe those companies are still transitioning from the first platform to the second, and have yet to even consider the third.
Yes, there's still growth in the second platform. Many IT research firms tout 30 percent growth year over year in traditional IT spending. What enterprise IT decision-makers fail to notice, or ignore, is that the third platform -- cloud services -- is growing by about 300 percent year over year. Most of them dismiss these numbers as hype, but this growth will change the face of IT suddenly and without notice.
Enterprise IT does not realize how many cloud services it already consumes today, whether ERP and CRM systems or even applications that until yesterday were a pillar of the data center. In the enterprise there is significant adoption of Office 365, EMC Atmos and many others. Traditional IT spending grows at 30 percent yearly as enterprises go about their business building systems the traditional way, but in the coming months and years this growth will screech to a halt. I see it slowing from 30 percent to 15 percent, and then falling even faster as the end approaches. So what will be the reason?
For one answer, let me share my thoughts based on endless customer interactions. Today, when I talk to customers, I silo them into two categories. The first comprises those who are content with their existing environment but want to make it better: virtualize more, and make it a lot more streamlined and efficient. These environments are typically static -- not a lot of changes occur, not a lot of VM requests, no need for a self-service portal or elasticity, and so on. The environment is very predictable.
In the second customer silo, the environment is very dynamic. These customers have adopted converged infrastructure, but are starting to complain about converged infrastructure sprawl and their ability to manage it all efficiently. This silo is a perfect candidate for a private cloud deployment; these customers are either looking at a serious deployment or are already deploying one.
The first silo of customers, those striving to be 100 percent virtualized, don't realize it yet, but they are headed to the public cloud. Right now, they're in no hurry to do this, because either their hardware refreshes are not imminent or they are not culturally ready yet. But without a doubt, this silo of customers will at some point leverage an IaaS offering to host their already 100 percent virtualized environment. Some of you might be thinking, "Elias, we have large amounts of data and storage on premises that would cost a lot to move to the cloud," or you may bring up security or privacy concerns. Let's tackle both of these obstacles.
Storage capacity is a valid concern, but technology will overcome this easily in a number of ways. We may see technology that is able to bring up a working set of data and keep it as close as possible to the infrastructure, while replicating back to the on-premises storage. It's one way of not moving everything to the cloud.
The other option is that cloud storage will get to a point where it truly rivals on-premises storage on cost. Today, the argument can be made that cloud storage in the long run is not cheaper than on-premises storage, and that may be true, but I think that will not be the case for long. Network and communications links are also continuously improving, so moving these large data sets will not be as challenging as most think.
As far as privacy and security are concerned, I have my own views that privacy is dead and I have written about it here on Virtualization Review many times. I have also discussed security and the fact that I believe that cloud service providers have better security than enterprise IT can ever have (also something I've discussed here before).
That being said, from a compliance, security and privacy perspective, we are starting to see the rise of "verticalized" clouds intended to address healthcare, financial and educational requirements, with more down the line.
For all these reasons, I believe that enterprise IT shops striving for 100 percent virtualization will soon find themselves asking the question, or being asked by the business or a new savvy CIO: If we're 100 percent virtualized, why are we still spending CapEx money on IT infrastructure? Does this mean the on-premises or collocated data center is dead? No, it means its footprint will be significantly reduced. Just as today you virtualize first, tomorrow you will put the workload in the public cloud first and will have to justify moving it on premises, where the cost of hosting will be higher. Eventually, many IT organizations won't own a data center, and many IT departments once known as builders of infrastructure will instead become brokers of cloud services.
Posted by Elias Khnaser on 02/24/2014 at 4:59 PM
Cloning a virtual machine while it is running is a very handy feature that I have personally used on numerous occasions, especially to test upgrade versions of a production virtual machine.
Sure, there are other ways of testing software, but in my meticulous approach, I always like to test against as close to the real thing as possible. There are other use cases for cloning a virtual machine without powering it off; I just gave you one that I have used in the past.
While vSphere has had this capability for a while, it is a very welcome new feature in Windows 8.1, Windows Server 2012 R2 and System Center Virtual Machine Manager 2012 R2.
The action is named differently in Windows 8.1 and Windows Server 2012 R2 than it is in SCVMM 2012 R2. In the latter, the feature is appropriately called "clone." In Windows 8.1 and Windows Server 2012 R2, it's called "export." Same function, different name.
To initiate this action from within Hyper-V Manager on Windows 8.1 or Windows Server 2012 R2, use these steps:
- Open Hyper-V Manager.
- Find the running virtual machine that you want to clone.
- Right-click on it and click on Export.
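If you'd rather script it, the Hyper-V PowerShell module exposes the same action; a quick sketch, where the VM name and export path are placeholders:

```powershell
# Export (clone) the running VM to the given folder; the source VM keeps running.
Export-VM -Name "ProdVM01" -Path "D:\Exports"
```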
To initiate this process on SCVMM 2012 R2, use these steps:
- Launch your Virtual Machine Manager console.
- Select the virtual machine tab.
- Locate the running virtual machine that you wish to clone.
- Right-click on it, hover over "Create" and select "Clone."
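The same clone action can be scripted through the SCVMM cmdlets. Here's a rough sketch; all names, the host and the path are hypothetical, and the exact parameters may vary by environment:

```powershell
# Clone a running VM via SCVMM; the VM name, host and path below are placeholders.
$vm = Get-SCVirtualMachine -Name "ProdVM01"
$vmHost = Get-SCVMHost -ComputerName "HyperVHost01"
New-SCVirtualMachine -Name "ProdVM01-Clone" -VM $vm -VMHost $vmHost -Path "D:\VMs"
```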
It's that easy and simple. Once you start the task, the time it takes depends on the size of the virtual machine. You should also be aware that the new VM is captured at the point in time when you click the "clone" or "export" action.
Posted by Elias Khnaser on 02/18/2014 at 2:36 PM
A few weeks ago I wrote about VMware's executive hirings and how that will not close the EUC gap. In that same blog, I also outlined my vision for what VMware needs to do to ramp up on EUC.
One of my very, very long-time requests was for VMware to finally acquire a true MDM/MAM company and enhance Horizon Mobile. OpenPeak was always a favorite target acquisition, but so were AirWatch and MobileIron. For whatever reason, at the time I felt that AirWatch would be the right choice.
As I was writing my "rant-athon," I never expected that after several years VMware would finally make that acquisition, and so when I heard the news, I said some words that can't be published here, but "About time" sums it up. I will be watching VMware to see how quickly it integrates AirWatch into Horizon.
The AirWatch buy is now behind us, but technology moves fast and there are still some gaps that need to be plugged. So while VMware is in an acquiring mood, let's see if it is listening; maybe it can knock a few more requests off my wish list. In that same post, I suggested that it enhance Horizon Data, which it can do by combining it with EMC Syncplicity. The combination would make for an excellent gap-filler.
As EMC and VMware continue to collaborate and position products in the right portfolios, I am sure they can see that Syncplicity shouldn't be a standalone product. It really belongs in Horizon Data, so that VMware can properly address its lack of a Dropbox-type solution for the enterprise. It can sell the solution as part of an end-to-end End-User Computing strategy, in which the new Horizon Data (with Syncplicity) integrates easily with AirWatch, View, Mirage and the other components. I'm hoping that in the near future it becomes the platform upon which VMware expands its profile management to leverage Horizon Data.
Dropbox is a true problem in the enterprise, and not because it is a bad solution. (On the contrary, it is a revolutionary offering that created a new market segment.) For one, Dropbox lacks auditing capabilities; for another, it lacks good security and privacy features.
More important, enterprises are tired of purchasing point solutions that are difficult to integrate. Rather than making an isolated purchase of an MDM solution, then MAM, then cloud file sync, then collaboration and finally desktop virtualization and physical PC management, enterprises are after holistic solutions in which the products work well together. They absolutely need and want to take a holistic approach to their software and services choices, end to end. That's why the combination of Syncplicity and Horizon Data is the right move. The VMware sales force can sell it better, the story is better, and the customer wins with a significantly richer Horizon Data.
Posted by Elias Khnaser on 02/12/2014 at 3:28 PM
I don't think there is a more appropriate title to describe Citrix's announcement of the re-introduction of XenApp, just a few months after it was abolished. I could have used a phrase from the great Tony Stark: "Never has a greater phoenix metaphor been personified in human history." That's a bit much for a piece of software, no matter how much I like it.
The irony with Citrix is it has excellent vision and fantastic products, but sometimes it makes decisions that are truly mind boggling. When Citrix decided to integrate XenDesktop and XenApp, we applauded the effort from an architectural stand point. It was and still is the right thing to do and that has not changed with XenDesktop and XenApp 7.5.
When the products were being integrated, all the components that had the word "desktop" in them were renamed to "delivery," because the product was no longer about just desktops. That was cool and everything, except, HEY, CITRIX, the product you are converging into has the word "Desktop" in it. So, for instance, Citrix renamed Virtual Desktop Agent to Virtual Delivery Agent; Desktop Studio became Citrix Studio, Desktop Director became Citrix Director, Desktop Groups became Delivery Groups and so on. But you missed one.
Citrix knew back then that it was going to be all about applications. Today, it is justifying the rebirth of XenApp as a standalone product due to changing market conditions. I am calling the company out on this. Market conditions have not changed; it's still about applications. Citrix, you messed up with XenDesktop. Now you are setting the record straight, and that is a welcome step.
Now that I got that off my chest, let's take a look at what the young XenApp 7.5 king has in store for you:
For starters, and just to be clear, IMA is still dead. The architecture is still built around FMA, and none of the consoles have changed. So while the product is re-introduced, this is simply packaging and product positioning at this point, not an architectural change.
Hybrid cloud provisioning is probably the most exciting new feature of this release. This version gives you the ability to deploy XenApp or XenDesktop infrastructure on public clouds like Amazon, with Microsoft Azure support to come. In addition, you can leverage Citrix CloudPlatform for on-premises or public cloud deployments. This is exciting news for enterprises that are deploying, or thinking about deploying, IaaS-like private clouds, or that want to leverage the public cloud.
Citrix Framehawk also makes it into this release, and it's definitely the most exciting technology of the release. Wait, did I say that already about the hybrid cloud stuff? OK, I lied about the hybrid cloud; Framehawk is cooler. This is the technology Citrix acquired a few months earlier from a company called Framehawk. Among many things, what matters most is that Framehawk developed a lightweight protocol that can function exceptionally well over very high-latency links that experience significant packet loss. YouTube videos show it performing very well even at 50 percent packet loss.
And, finally, AppDNA is now part of XenApp. Depending on the edition of XenApp you own, you will be able to pretty much P2V applications. This will significantly streamline the migration process and put you in a better position to tackle mobility.
I can't help it, I am a fan of XenApp. So, I'm more interested in what your thoughts are. Was this a good announcement or a confusing one on Citrix's part, and what do you think of some of the new features?
Posted by Elias Khnaser on 02/03/2014 at 4:19 PM
Let's face it, the hypervisor wars are over. VMware clearly dominates the market and Microsoft is closing the gap with every release of Hyper-V, but the focus right now is no longer on the hypervisor.
We have reached a point where no single hypervisor will reign supreme in any organization. Furthermore, I believe that VMware will come under increased pressure from Citrix, Red Hat and other contenders. What we are beginning to witness in the enterprise is the compartmentalization or classification of hypervisors based on a workload perspective.
A year or two ago I probably would not have recommended Citrix XenDesktop be run on anything but vSphere, because of all the added benefits VMware's hypervisor can offer, especially from a performance perspective. Today, I find that I can comfortably recommend the use of Hyper-V or even XenServer for XenDesktop workloads. Conversely, I can see situations where certain tier-1 applications run on vSphere while other tier-1 applications run on Hyper-V, and so on.
The end of any single hypervisor's dominance isn't a bad thing. You are probably thinking right about now that having all these hypervisors deployed will be a support nightmare, maybe even a logistical one. But the shift is already happening, and it's inevitable. What can alleviate the pain, or "soften the blow" as it were, are management tools that let you manage these different hypervisors from a single pane of glass. Microsoft System Center Virtual Machine Manager is a great example of a management console that carries a lot of promise.
The public cloud is another factor contributing to the new status quo. Enterprises will without a doubt use services from Microsoft's Azure, VMware's vCHS and others. So, it makes total sense for them to also leverage the hypervisor locally to facilitate interchangeability between the public cloud and the private virtual infrastructure.
This adoption of multiple hypervisors could potentially have been avoided had we been smart enough as a virtualization community to develop a standardized virtual machine format that works across hypervisors. But as long as each vendor maintains its own format, the enterprise will find itself using several hypervisors and several public clouds for different workloads, both to avoid vendor lock-in and to take advantage of the price wars that will take place.
So for those of you whose strategy is to standardize your company on a single hypervisor to streamline processes, support and training, I caution you to rethink this approach and take a closer look at the market circumstances. It might be cheaper for your organization to train in and support multiple hypervisors and multiple public clouds.
I am eager to hear from those of you who are running multiple hypervisors in production. What's the reasoning and justification that you used to win over your company? Please share your comments here or at email@example.com (my editor's e-mail).
Posted by Elias Khnaser on 01/22/2014 at 11:12 AM
I'm a fan of VMware, but I must confess that I have been frustrated with its End-User Computing strategy and execution. The only moves I see VMware making in this space are either the wrong ones or "not enough" ones. VMware has hired away a ton of Citrix employees and executives to tighten up its EUC offerings. Having the right team is crucial, but it is far from enough, and far from drastically effective, if the technology the team is in charge of is limited.
VMware has obviously embraced and strongly believes in the end-to-end EUC vision of allowing people to work anywhere, on any device. It has taken steps in completing this vision, but it is still lacking some crucial components. So instead of focusing on how to fill these gaps, VMware goes out and acquires a DaaS company.
Don't get me wrong: DaaS is important, but it is not that important, and it is not on any company's radar in any serious way. I am pretty sure VMware realizes that there really isn't an opportunity for DaaS in the enterprise YET, that most of what's going on today is hype, and that it will stay that way for the foreseeable future.
After the recent Citrix acquisition of Framehawk, VMware definitely needs to make acquisitions to reinforce its position, vision and capabilities for enterprise customers looking for end-to-end solutions that can:
- Unify all resource access from an enterprise app store (Windows, Mobile, SaaS, Data, etc.).
- Deliver Windows apps and desktops to any form factor device.
- Manage or govern mobile devices, especially in the wake of imminent security threats to mobile devices.
- Address the Dropbox problem in the enterprise.
These are just a subset of what enterprises are looking for, and VMware needs to plug the gaps in its portfolio. Here's what I suggest for starters:
Acquire an MDM/MAM company. I've said this a thousand times here and on my other blogs hosted on InformationWeek, and I've been saying it for years. VMware, some good picks are MobileIron or AirWatch; or look at OpenPeak if you don't care about an existing customer base, as the technology is solid. Stop fiddling around and make an acquisition.
Yes, MDM is still very relevant, because enterprises like Home Depot, United Airlines, American, Belly Card and many others are deploying income-generating enterprise applications on company-issued devices that need to be managed. These are not BYOD scenarios, and this segment will continue to grow. What exactly are you waiting for? You also need a way to manage Windows Phone, iOS and Android in a more uniform way, as opposed to maintaining differing strategies.
I am not sure what happened to AppBlast, but after the Citrix acquisition of Framehawk, I suggest that you acquire Mainframe2 and bring AppBlast to life--please! You need a way to deliver seamless applications to devices. The idea that everyone is going to get a desktop is fading, so let's move on and plug the device gap. Pick up Mainframe2 or a similar company.
How much longer will your solution rely on PCoIP when you don't own the intellectual property for the protocol? Really, why would you not pick it up and consolidate it? Why the hesitance? PCoIP also has a connection to RDSH, so you'd at least get RemoteApplications as well. I would love an answer to this one.
Please, please take EMC's Syncplicity product and consolidate it with Horizon Data to reinforce that product's capabilities, so the two don't compete. I know the products are not the same, but Syncplicity belongs in the VMware portfolio anyway.
So, minting your team with executives is not enough. You need to make some serious acquisitions and rearrangements in the EUC business unit in order to position the company to address the needs and requirements of the enterprise. Maybe after you tackle all these gaps, the market will then be primed for DaaS.
I sincerely hope the DaaS move was not a knee-jerk reaction to Amazon entering the DaaS space. VMware and Amazon are very different and cater to different customers. Sure, they intersect in some places, but for the most part, you belong in the enterprise and Amazon is looking in from the consumer side and trying to break into the enterprise.
Posted by Elias Khnaser on 01/13/2014 at 3:35 PM
Service packs are usually a collection of hot fixes, and we typically don't see a lot of user interface changes with them. That's not the case with Microsoft's latest App-V 5 SP2.
When you install the client you will find that the UI has totally disappeared. Even the icon that traditionally showed up on the System Tray is gone, and there's nothing on the Start menu. The idea behind this UI change is to make App-V very transparent, and centrally configured and managed using group policy. In my book that is a very welcome step--technology should be as transparent as possible.
But, what if you wanted to manually configure the client for whatever reason? Microsoft has made available the Application Virtualization 5.0 Client UI Application which you can download here.
Once it's installed, you will again have access to a UI for the client. Here's how to quickly configure it to point to a publishing server so it can enumerate the different App-V 5 application packages that can run.
First things first: Open PowerShell and make sure your execution policy is set to RemoteSigned by issuing this command:
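That command is the standard execution-policy cmdlet:

```powershell
# Allow locally created scripts and signed remote scripts to run.
Set-ExecutionPolicy RemoteSigned
```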
Second, you want to import the AppvClient cmdlets; the command for that is:
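The App-V 5 client ships its cmdlets in the AppvClient module:

```powershell
# Load the App-V 5 client cmdlets into the current session.
Import-Module AppvClient
```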
Third, you want to establish a connection between your App-V client and the publishing server so you will need to use the following command:
Add-AppvPublishingServer -URL http://yourpublishingserverFQDN:portnumber -Name <give the connection a name>
The last step is to synchronize and update the App-V client so that it sees the packages that are published. Use this command to do that:
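Assuming the publishing server was added as in the previous step, a sync can be kicked off by piping the configured server into the sync cmdlet:

```powershell
# Refresh published packages from every configured publishing server.
Get-AppvPublishingServer | Sync-AppvPublishingServer
```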
If all your commands execute successfully, you will see that the packages you have access to are now available for launch from their configured locations (Desktop, Start menu, etc.), and you will also find that the client UI Update and Download boxes, which were previously unavailable, are now lit up and readily accessible.
Posted by Elias Khnaser on 01/07/2014 at 3:45 PM
So, last time, we talked about security, and I got some great feedback from you. This time, let's talk about some of my other predictions at length: flash storage, DaaS, desktop virtualization, big data, and how the role of IT will start to change significantly.
The storage industry will wrestle with all-flash arrays in 2014, with new players like Cisco entering the market and existing players introducing or improving their all-flash arrays. There is room for consolidation here, so I expect to see HP in particular make an acquisition. When the dust settles, hybrid arrays will be the winners, but flash technology will dominate the storage world.
The VMware acquisition of Desktone and the Amazon introduction of Workspaces will no doubt make 2014 the year of DaaS. It will lead to some serious conversation, and then sometime between 2015 and 2016 we might start seeing enterprises look at DaaS as a serious, viable deployment method. Before that happens, other aspects of the technology will need to improve, specifically remote protocols and bandwidth availability.
I also cannot imagine Citrix sitting out the DaaS momentum without an acquisition of its own. What I see is Citrix leveraging its GoTo brand with GoToMyDesktop and possibly GoToMyApps; the latter would presumably be built around an HTML5-based technology for delivering Windows applications. While Citrix's strategy so far has been to develop products and enable ISVs and partners to build solutions, it makes no sense to pass on recurring revenue in such a large market segment, so I expect Citrix to make a DaaS acquisition and to either build or acquire an HTML5 technology for Windows apps in the cloud. I am aware that Citrix supports HTML5 access to apps, so it will either build on this technology or acquire something to make it happen.
What about Desktop Virtualization?
Yeah, what about it? Personally, I think desktop virtualization has matured from a technical perspective, the cost is very realistic and competitive, and the roadblocks that were in the way have for the most part been removed. There is always room for improvement, and as always, desktop virtualization is not about just VDI, or just RDSH, or just application virtualization. It's about all of them, delivering the right resource to users depending on the form factor, location and connection type they are connecting from.
I think Citrix in general is very much on the right track with XenDesktop 7 and more specifically with Project Avalon. Citrix definitely has the right approach and should continue to build on it.
IT Department Restructure
Here's one prediction I am excited about: In 2014, IT management will finally start to take a serious look at the current siloed structure and begin to make the corrective changes to position resources for the transformation that is happening in terms of cloud services, mobility, security and big data analytics. The storage, networking, virtualization and compute silos will start to converge into a datacenter role especially as Software-Defined-Everything starts to become a reality.
This change in IT department structure is part of the transformation to the new IT department, one built around enabling and servicing the business, as opposed to the current perception of IT as a burden on the business and a cost center. The change will also open the way for automation and orchestration in the environment, something IT has been slow to adopt because you need all the different players' consent on the tools to use, the functions to automate, and so on. Everyone is in job-protection mode -- as companies reorganize departments and give clear direction on the company mission moving forward, we'll see rapid technology adoption, especially around redundant tasks.
IT Broker Services Build-Out
Vendors will release products that allow IT departments to become brokers. Here, I am talking about the ability to have one enterprise-wide resources store that can aggregate local applications and resources with external resources, cloud and others and present them in a meaningful way. This is more than just the current self-service portal and app stores. What I'm envisioning is brokerages that will allow departments to branch out and include things they can't provide today.
In-Memory Database Adoption
Big data analytics is a trend that will be talked about for a very long time. I'm a fan and a big believer that big data analytics will change the world. Next year, we will start seeing big data analytics move from concept to implementation and value extraction. I also predict that the adoption of in-memory analytics and processing will significantly increase as we start to deploy and leverage big data analytics. I expect the major database vendors to deliver products specifically geared toward in-memory processing, which is a requirement for generating the real-time analytics that businesses will be looking for.
The year 2014 will definitely be a cornerstone year for big data analytics, cloud and social, with a security wrapper that will lead the conversation for years to come.
Did I miss anything? I would love to hear your thoughts on what else we could potentially see in 2014. Please share them in the comments section.
Posted on 12/17/2013 at 3:46 PM
"Everything is achievable through technology: better living, robust health, and for the first time in human history, the possibility of world peace." That's a quote from Howard Stark, Tony Stark's father in the Iron Man movies. The quote came to mind as I was writing this column, and except for the world peace part, what he says is relevant to technology and to what I see coming in 2014: big data analytics, mobile, cloud, social, and, of course, security.
Later this week, I'll give you my thoughts on the other hot issues, but this time I'd like to talk about security and how it will be a big concern next year (especially in the context of big data) above all the others.
The IT conversation in 2014 is inevitably going to be about security. The proliferation of devices is bound to present security threats at some point. So far we have not seen a major attack on mobile devices, but to think that it won't happen is asking for trouble. I believe the adoption of MxM technology will accelerate, especially if vulnerabilities surface early on.
Security will also be a significant focus as more enterprises adopt and accept the cloud. Now that enterprises are accepting the cloud and using its various services, many security questions are arising, and the need to protect data in the cloud will become even more crucial in 2014.
The security conversation will also bring up the issue of privacy, which will be one big roadblock to cloud adoption for many companies. What I am about to say will not sit well with many, but it is unfortunately a fact that sooner or later everyone will come to terms with: "Privacy" does not exist anymore. Anyone who thinks that in today's age there is room for privacy is holding on to a pipe dream. What's so funny about privacy is that we all want it, but when you explain to someone what it would take to attain it, they no longer want it.
Privacy is a concern, and not just because of the media frenzy around the NSA incidents. Privacy goes beyond that, and here's an example: The new Microsoft Xbox One has a built-in camera with facial recognition technology, and that sounds like a great idea for single sign-on, right? Yes, from a consumer perspective. Can it be misused? I can nearly guarantee that it will be. I can imagine a greater database for the NSA to tap into, or a great target for hackers to go after. Such privacy abuses apply not just to the Xbox One; remember that Sony's newest PS4 also has a camera. Sure, you can unplug the camera from the back, but that's a temporary fix, and you may eventually find that you need it for some gaming.
Even if you address the camera problem, the Xbox One and other consoles can use voice-activated commands, which means the console is always listening to what you're doing. You can disable that as well -- or can you? Did you know that even your cell phone, when it's turned off, can be used by someone smart enough to listen in on your conversations? The capability is there.
Let me take this one step further. Cable and satellite companies will soon be shipping boxes capable of listening in on the conversations in your living room. Based on keywords and context that those boxes pick up, they can show you advertising relevant to your conversation. Now that's a clever use of big data analytics. You can disable that as well if you like, but maybe not completely.
At some point, the only way to have privacy will be to not use any technology whatsoever. That's because everything will be a smart device, and it will be difficult to turn them all off with certainty. And that's why I believe the notion of privacy, for enterprises and for consumers alike, is over. The new thought process is how to create data, content and conversation that is private at the source. It is too early to tell, but you should be thinking about dealing with this, not trying to fix it. I believe it cannot be fixed.
Let me know what you think so far in the comments section. And next time, I'll talk about the rest of my predictions and how your job as an IT professional will change in the coming year.
Posted by Elias Khnaser on 12/16/2013 at 4:30 PM
As we wind down 2013, I began thinking about the 2014 predictions I usually make sometime in December. But Virtualization Review asked me to write a predictions column for the printed edition, which accelerated the process this year. While my predictions in the print edition are a subset of what I will be blogging about online, one prediction stands out from the rest for 2014, and I am betting it will dominate -- and in fact, lead -- the IT conversation.
We are undergoing a massive shift in the way we use technology, so much so that I could say, without exaggeration, that there is a lot of chaos surrounding our projects. We have broken so many previously established best practices and made so many exceptions to cater to the changing landscape. Even so, we have not stopped to establish new security best practices, or to rethink and refine existing strategies. As a result, we are more vulnerable to security attacks than ever before.
The proliferation of smart devices, combined with the adoption of public cloud services, is responsible for this security threat. Things have progressed so fast that we have not taken a step back and asked whether perimeter security is still the best approach. While some enterprises have taken some steps and asked some questions, most have not. And even the enterprises that consider themselves security-aware have taken only limited steps toward rethinking security in today's landscape.
In 2014, I am thinking Mobile Device Management will make a comeback, not because it is necessarily the right tool to use, but because we might see an outbreak of mobile device malware attacks. It only makes sense that the people who have been writing viruses and malware for PCs will naturally attack the devices with the most widespread use and the least security. Heck, let's just say it: there is practically no security on our devices. Is MDM the right solution? Possibly, but there are other approaches as well.
I don't need to tell you about security in the cloud, but I do want to highlight one thing: while I believe security in the cloud is critical -- as is understanding how to leverage it, what to look for, and what to ask your provider for -- I strongly believe that Amazon AWS, Microsoft Azure and others can handle security ten times better than any individual enterprise can. Looking at it strictly from a capability perspective, they have teams of security professionals and the latest and greatest hardware, and they are also in the spotlight as the target of every hacker on the planet. Security specialists should look at the risks and build policies to mitigate them.
Is your company considering a new security strategy? Are you raising concerns around security? What approach are you taking, and how much success have you had so far?
Posted on 12/02/2013 at 4:22 PM
Our industry will latch on to something and use marketing tactics to try to establish a trend, gain mind share and ultimately sell products that consumers don't always need as badly as advertised. That's not a bad thing; it's how the market works. After all, companies are in business to make a profit. That's where analysts like me come in: to dissect some of this hype and present you with the facts.
So, the only word I can find to properly describe the craziness surrounding all-flash arrays, and the amount of hype around them, is "epidemic."
I don't mean to sound anti-all-flash-array, not at all. But I must insist that all-flash arrays are not the be-all and end-all that will fix all of our problems, and they most certainly will not kill spinning disk. Our industry will gravitate more toward hybrid solutions while rethinking existing storage architectures and approaches. So, to be clear: I do believe all-flash arrays are well suited to certain point solutions, but for the majority of workloads, I'm not so sure.
Recently I visited with a large customer who also follows this blog, and learned a lot about how their business is evolving and how IT is closely and intimately aligning with business goals and the business in general. It reinforces what I've been saying and repeating to anyone who will listen: "There are no IT projects, only business projects and initiatives." Back to that conversation: my customer also told me that they now live and die by asking, "What for?" Everything they do, every product they purchase, every project they initiate, they ask, "What for?" The question reminds them of why they are making the decision they are making, and how that decision will translate back into the business.
Now, back to the flash problem. The first thing I want to bring up is warranty, and how flash arrays stack up against traditional and hybrid solutions from a warranty perspective. If you have been following my blog, you know that I place high importance on the role of procurement going forward. Not that its role has been unimportant until now, but in the age of cloud I believe procurement will scrutinize purchases, squeeze them, and ensure the IT department has thought through all the aspects that are not strictly technology-related. So, for starters, if you are considering all-flash for any reason, I would be asking about the warranty.
Most procurement departments expect three to five years of service out of their IT equipment. Vendors that come in with products carrying one- or two-year warranties will hit a bottleneck with the procurement department, not the IT department -- and that is a business factor you should never discount.
My favorite line about all-flash arrays is that they are faster and perform better. While they are undoubtedly fast, a blanket statement that they are faster is misleading. Sure, flash arrays significantly reduce latency, which is crucial for many applications. But have you considered the bandwidth requirements of certain workloads? Flash arrays will give you hundreds of thousands of IOPS, but have you considered the profile of your workloads? Have you taken the time to understand the ratio of sequential reads and writes to random reads and writes? You might be surprised to find that flash arrays perform exceptionally well for reads in general, but can be challenged on random writes, and even sequential writes in some instances. VDI, anyone? Some might be surprised to learn that for VDI I favor hybrid over all-flash, for performance reasons among others.
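The workload-profiling questions above can be sketched as a toy script. This is illustrative only, not a vendor tool: the trace format (tuples of operation, byte offset, size in KB) and the sample data are invented for the example, and a real analysis would come from a tracing tool rather than a hand-built list.

```python
# Rough sketch: classify a workload's I/O mix from a sampled trace before
# assuming an all-flash array is automatically "faster" for it.
# Trace format (invented for illustration): (op, byte_offset, size_kb).

def profile_io(trace):
    """Return read/write and sequential/random percentages for a trace."""
    stats = {"read": 0, "write": 0, "sequential": 0, "random": 0}
    last_end = {}  # last byte offset reached, tracked per op type
    for op, offset, size_kb in trace:
        stats[op] += 1
        # Call an I/O "sequential" if it starts where the previous
        # I/O of the same type ended; otherwise it's random.
        if last_end.get(op) == offset:
            stats["sequential"] += 1
        else:
            stats["random"] += 1
        last_end[op] = offset + size_kb * 1024
    total = stats["read"] + stats["write"]
    return {k: round(100.0 * v / total, 1) for k, v in stats.items()}

# Example: a write-heavy, mostly random profile -- exactly the case where
# an all-flash array may be more challenged than the marketing suggests.
trace = [
    ("write", 0, 4), ("write", 4096, 4),          # sequential write pair
    ("write", 900000, 8), ("read", 500000, 64),
    ("write", 123456, 4), ("read", 565536, 64),   # sequential read pair
]
print(profile_io(trace))
```

Running this on a real trace tells you whether your mix is the read-heavy profile flash loves, or the random-write profile where hybrid may hold its own.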
The second claim, one that had even me going for a while, is the idea that flash requires less power and cooling. That's true if you consider raw flash modules, and it might be the correct choice for small environments or limited deployments. However, since the industry is touting the death of spinning disk and the era of all-flash, I'll assume the all-flash companies want to replace every piece of enterprise storage out there. When you consider that endeavor, you will find that flash arrays at scale require additional components, such as CPUs and cache, that draw more power and require more cooling than hybrid systems. Again, this point is interesting: for a limited deployment or a specific workload it's true; at scale, it's not true at all.
I could go on about other challenges with all-flash. My point is that, in my experience, no technology has ever been able to kill off another. You can provide value, you can improve, but you cannot completely abolish the other technology. So, another "what for" that organizations should be asking is, "What do you really need all that performance for?"
Knowing that average enterprise workloads run anywhere between 50,000 and 100,000 IOPS (and I am simplifying here for the sake of conversation), do you need an array that gives you 1 million IOPS? If you ask me, I'd say yes -- you can never have enough IOPS (j/k) -- but procurement might disagree, and you always have to keep the business needs and requirements in front of you as you tackle this, rather than addressing it purely from a techie's point of view.
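That sizing question is simple arithmetic, and it's worth doing before procurement does it for you. The numbers below are placeholders taken from the ranges above, not measurements:

```python
# Back-of-the-envelope check: what fraction of an array's rated IOPS
# would the workload actually use at peak? Figures are illustrative.

def peak_utilization(workload_iops, array_iops):
    """Percentage of the array's rated IOPS consumed at workload peak."""
    return round(100.0 * workload_iops / array_iops, 1)

print(peak_utilization(100_000, 1_000_000))  # 1M-IOPS all-flash array
print(peak_utilization(100_000, 250_000))    # more modest hybrid array
```

If the all-flash array idles at 10 percent utilization even at peak, "what for?" becomes a very hard question to answer in a procurement meeting.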
I realize that the storage world will converge on me with criticism or praise due to this blog and that's OK; I am really hoping for good conversation in the comments section.
Posted by Elias Khnaser on 11/20/2013 at 1:18 PM
A little over three years ago, I made an insane prediction. I said that enterprise IT would be out of the business of owning and operating physical infrastructure, and out of the data center business, in five to ten years. I was ridiculed for being unrealistic and out of tune with the market. I made these predictions when cloud was just a buzzword with no real definition or application.
Fast forward to today, and you find that anywhere from 15 to 25 percent of workloads are in the cloud already, whether that is e-mail, unified communications, CRM or ERP. The list of workloads already there is long and distinguished. It took three years to get this far, and I think in the next two years we'll see much faster adoption with rapid cloud deployments, especially as we start to address business topics that were a hassle in the past, such as disaster recovery.
Most organizations saw DR as a necessary evil, and also (so to speak) a waste of money. Cloud simplifies DR and makes it affordable enough that it now makes perfect business sense. While DRaaS is traditionally built around VM technology, which means everything should be virtualized, traditional players are starting to realize the benefits of the "as a service" model and to apply it to physical components of the data center that have not been, or cannot be, virtualized, in order to facilitate these implementations.
DR isn't the only example. We are starting to see many organizations migrate test and dev workloads to the cloud as well. Desktop as a Service is starting to become interesting, although in my opinion it's not viable -- yet. But it's definitely part of a larger enterprise-wide cloud strategy.
So if you have not developed a cloud strategy, what are you waiting for? You are going to come under increased pressure from IT leadership, and they will come under increased pressure from business leaders and procurement specialists to look at alternatives before making new purchases, especially the ones with larger dollar signs attached. I would not be surprised to see a procurement specialist asking IT whether they have looked at cloud storage as an alternative -- and it will probably happen right before you ask for that next expensive SAN upgrade or expansion. Nor would I be surprised to see them asking questions about archiving and backup, expanded storage, additional compute, and so on.
CxOs are also being exposed to the cloud at conferences. These conferences are not always IT-related, so you see, the scope of cloud is no longer contained in a world we can control. As a result, do your homework so you're ready to address and answer cloud questions intelligently.
If you have not done so yet, a comprehensive comparison of what it costs to host your virtual infrastructure and its dependencies in-house versus on IaaS is a crucial first step. I advise you not to play games with the numbers -- that can buy you time, but it wins you no points in the organization. If it is cheaper in the cloud, say so and move on it.
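The in-house versus IaaS comparison can be sketched as a simple monthly-cost model. Every figure below is a placeholder -- the hardware price, instance rate, storage rate and egress rate are invented for illustration, and you should substitute your own quotes; the point is comparing total cost over the same horizon, not the specific numbers.

```python
# Minimal sketch of an in-house vs. IaaS cost comparison. All dollar
# figures are illustrative placeholders, not real quotes.

def inhouse_monthly(hw_capex, years, power_cooling, admin, support):
    """Amortize hardware over its depreciation period, then add monthly opex."""
    return hw_capex / (years * 12) + power_cooling + admin + support

def iaas_monthly(vm_count, per_vm, storage_gb, per_gb, egress_gb, per_egress_gb):
    """Simple consumption model: instances + storage + data transfer out."""
    return vm_count * per_vm + storage_gb * per_gb + egress_gb * per_egress_gb

inhouse = inhouse_monthly(hw_capex=250_000, years=5, power_cooling=1_500,
                          admin=6_000, support=1_200)
cloud = iaas_monthly(vm_count=60, per_vm=90, storage_gb=20_000,
                     per_gb=0.10, egress_gb=2_000, per_egress_gb=0.09)
print(f"in-house: ${inhouse:,.0f}/mo  vs  IaaS: ${cloud:,.0f}/mo")
```

Even a crude model like this forces the conversation onto the dependencies people forget: power, admin time, support contracts on one side; storage and egress charges on the other.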
Evaluate DaaS: look at the cost of hosting desktop virtualization internally, with all its dependencies, versus in the cloud, but make sure you take the technical requirements into account. Separating the data from the desktops somewhat negates the idea of DaaS, but if you are embracing IaaS and all your data is in the cloud, then maybe DaaS makes more sense. If not today, why not? And when do you think it would make sense? Those are questions you should be asking, and answering honestly.
You should also factor in what it will cost to deploy desktop virtualization in the meantime. Does it make sense, given when you believe DaaS will be viable for you? Will the organization have depreciated the cost of desktop virtualization by then? Apply the same logic to backup, archiving, e-mail and so on, and as these conversations come up in your organization, you will look polished for having done the due diligence. You will be earning a new level of respect at all levels of the organization. Your bosses will appreciate that you took the initiative, procurement will look at you differently, and your future requests will go through a little more easily. This is true regardless of the size of the organization.
Final thoughts: as you progress through your cloud strategy, start thinking about how the role of IT is going to change from an owner and operator to an aggregator and governor of these cloud services. Again, I already see this happening, at different levels and speeds. Start thinking along these lines and it will change your priorities, and allow you to start looking at different products and solutions that enable this larger vision.
Have you developed a cloud strategy yet? If yes, I am interested to learn what you are doing. If you haven't, I'm interested to know what is stopping you.
Posted on 11/13/2013 at 4:16 PM