2015 Virtualization Predictions

Predictions can be tricky things, especially in the field of virtualization and cloud computing, as everything changes so quickly, and innovation can come out of nowhere. For example, just a few years ago not many folks would've predicted the rapid ascent of containers, with virtual machines so firmly established as a core technology.

Yet here we are, trying again. For this article, Virtualization Review reached out to various industry experts -- writers, vendors, industry analysts -- to get their take on what they expect to see coming in 2015. If you have your own predictions, and would like to see them on the Web site, send an e-mail to Editor in Chief Keith Ward with the subject line: 2015 Predictions.

Chris Wolf, CTO, Americas, VMware Inc.
1. Network Virtualization's Rapid Growth Continues

I have a lot of conversations with senior IT leaders about network virtualization, and the outcome is almost always the same -- network virtualization isn't a matter of if, but when. For many enterprises, "when" will arrive in 2015. We are continually pressed to improve time-to-value through greater agility and automation. For most organizations, networking and security remain the primary bottlenecks to workload provisioning. Many mature enterprises take two weeks or longer to provision all of the necessary network and security resources to bring a new service online, and some enterprises take as long as six to eight weeks. Network virtualization and its associated automation capabilities can reduce provisioning delays from weeks to less than a day. That represents a significant gain in business agility.

Beyond speed, network virtualization can also free IT security professionals to do the kind of work that got them excited to join the industry in the first place. Many network security administrators that I speak with describe a workday that mostly consists of opening a service ticket, creating or modifying firewall rules, and so on, and then closing a service ticket. They feel like factory workers. When network virtualization automates mundane security tasks, security professionals are freed to focus on more important work, like actually getting time to research emerging threats -- a job they might only get a chance to do one week a year at a security conference.

Reading about major IT security incidents is practically a daily event. Most organizations have reached the conclusion that their traditional approach to network security simply cannot scale in the cloud era. Virtual networks can provide dedicated stateful virtual firewalls to every application in a datacenter -- a concept known as microsegmentation. Segmentation at the VLAN or network subnet level has proven ineffective. However, microsegmentation that delivers comprehensive security at the application container (a virtual machine, for example) level provides a level of protection beyond what enterprises have ever been able to achieve.
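To make the per-application idea concrete, here is a minimal, purely illustrative Python sketch of a default-deny, per-VM policy, in contrast to one rule set applied to an entire VLAN. It is not tied to any vendor's API, and the class, tag and rule names are hypothetical.

```python
# Illustrative model of per-VM (microsegmented) firewall policy.
# All names are hypothetical; real platforms expose this through their own APIs.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Rule:
    src: str          # source application tier or VM tag
    dst: str          # destination tier or VM tag
    port: int
    action: str = "allow"

@dataclass
class VMPolicy:
    vm_name: str
    default_action: str = "deny"            # default-deny per VM, not per VLAN
    rules: List[Rule] = field(default_factory=list)

    def evaluate(self, src: str, dst: str, port: int) -> str:
        for r in self.rules:
            if (r.src, r.dst, r.port) == (src, dst, port):
                return r.action
        return self.default_action

# Each application VM gets its own stateful policy, so a compromised web VM
# can reach the database only on the explicitly allowed port.
db_policy = VMPolicy("erp-db-01", rules=[Rule("erp-web", "erp-db", 5432)])
print(db_policy.evaluate("erp-web", "erp-db", 5432))   # allow
print(db_policy.evaluate("erp-web", "erp-db", 22))     # deny
```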

There are many mature network virtualization options on the market today, and as a result 2015 will see network virtualization achieve significant penetration into the enterprise datacenter.

2. Our Building Blocks Will Get Bigger
Most IT professionals are builders at heart. Designing and building complex IT systems lets us flex our brains, but at the same time often slows down business agility. We can still flex our brains, but we need to redefine our value, which is no longer about who can build the best architecture from scratch. Our value is in how quickly and safely we can solve problems.

To that end, more organizations will deploy hyperconverged or converged infrastructure solutions in 2015. It's not because they can't build their own integrated systems. It's because IT administrators have better uses for their time. Take infrastructure integration off the table and suddenly we all have more time to focus on the delivery and optimization of critical business applications. Our inner nerds will always want to build everything, but by the time 2015 comes to an end, more and more organizations will come to realize that simply buying pre-integrated solutions gives them more time for innovation and lets them spend less time on maintenance. IT administrators will lose their obsession with having to build everything and, instead -- like the infamous Walter White -- will obsess over speed.

Andi Mann, Vice President, Office of the CTO, CA Technologies
I hope you didn't come here for the same old 2015 predictions: Docker kills VMware sales; VMware debuts a public PaaS; Docker doesn't survive the year; Amazon launches a DevOps Cloud Platform; or Docker IPOs for <DrEvil.gif> a billion dollars. Instead, I wanted to go with something a little more thought-provoking.

So here it is: I predict 2015 is the year virtualization goes mainstream!

Sure, to many in our industry, this may seem absurd. Virtualization has been mainstream since 1960! Or was it 2005? Yet virtualization has only been "mainstream" within IT, and perhaps among a relative handful of low-level "process" workers. If you want a business exec to glaze over, talk to her about software-defined anything!

In 2015 I predict virtualization moves into broad mainstream business and consumer use cases, not just IT mainstream use cases, in the following five ways:

Virtual devices enable bring your own device (BYOD). Mobile VMs seem to have come and gone, and with good reason. However, in 2015 new virtualization approaches like containers (backed by industry heavyweights like Samsung and CA Technologies) will appear in the hands of millions, perhaps billions, of consumers and business users with more flexible, personal and efficient approaches to enable BYOD.

Virtual security enables BYOD. With new data breaches revealed almost daily, mainstream users are increasingly concerned about password risk, yet annoyed by heavy-handed solutions, especially on mobile devices. In 2015, business users and consumers will increasingly exchange abstracted identity with virtual single sign-on (SSO), powered by technologies like OAuth, OpenID, and WS-Federation (see the token-exchange sketch after this list).

Virtualization goes upstream. At CA Technologies, we already see huge interest in service virtualization, as much for its business impact as its IT impact. Abstracting composite applications from their environment -- infrastructure, middleware, databases, APIs and so on -- will see huge growth in 2015, allowing upstream teams like dev, test and QA to deliver business solutions better, faster and cheaper.

Virtualization goes downstream. As Agile, DevOps and service virtualization accelerate upstream activities in application delivery, businesses will need to remove more bottlenecks. In 2015 we'll see virtual approaches increasingly used by more downstream activities like training, marketing and support, just as we have with documentation.

Virtual catches up to reality. For three days at the recent CA World, there was a constant queue for a "Game of Thrones" "experience" using Oculus Rift VR. In 2015, we'll see augmented and virtual reality (formerly the [non-exclusive] province of truly awful movies) improve UX in manufacturing, real estate, health care, hospitality and many other businesses.
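Mann's second item -- abstracted identity through virtual SSO -- boils down to exchanging a credential for a token and presenting that token instead of a password. Here is a minimal sketch of an OAuth 2.0 client-credentials request using the Python requests library; the endpoint URLs, client ID and secret are placeholders, and this is a generic illustration rather than any specific vendor's SSO flow.

```python
# Minimal OAuth 2.0 client-credentials token request (illustrative only).
# The URLs, client ID and secret are placeholders for a real identity provider.
import requests

TOKEN_URL = "https://idp.example.com/oauth2/token"   # hypothetical endpoint

resp = requests.post(
    TOKEN_URL,
    data={
        "grant_type": "client_credentials",
        "client_id": "my-app",
        "client_secret": "s3cr3t",
        "scope": "desktop.read",
    },
    timeout=10,
)
resp.raise_for_status()
access_token = resp.json()["access_token"]

# The application then presents the token instead of a user password,
# which is the "abstracted identity" idea behind virtual SSO.
api = requests.get(
    "https://api.example.com/v1/profile",             # hypothetical API
    headers={"Authorization": f"Bearer {access_token}"},
    timeout=10,
)
print(api.status_code)
```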

Oh, and one last thing: A certain vendor we all love to hate will fork Docker into a commercial distro. Guaranteed! Remember, you heard it here first.

Simon Crosby, Cofounder and CTO, Bromium Inc.
2015: The Rise and Fall of the Docker Security Ecosystem

Docker is awesome. Developers love its powerful, simple abstractions for packaging and deploying application code. Develop and package once, then deploy anywhere, from your PC to any cloud with many choices for automation frameworks. The Linux distros are fawning over Docker, and it's also a key building block for enterprise PaaS. Of course, something this popular is also a threat to almost every incumbent: Who needs VMs with Docker? Who needs a Linux distro?

But developers dictate the future, and what's not to like about a write-once-run-anywhere developer framework?

Well, we've seen this movie before: Java + Security anyone? In 2015, the giddy flock of Docker devotees will have to start to deliver differentiated products, and Docker security will be high on their list. This is a big mistake, and they will fail.

First, is there a Docker security issue? This is the wrong question. The right way to phrase the question is, "Is Linux/Windows secure?" We all know the answer to that question already. A malicious container can easily compromise a vulnerable host OS, and an already compromised host OS completely owns the container. This is no different from the failed experiments in sandboxing on PCs.

The Docker security challenge therefore relates to the security of the infrastructure that runs Docker containers. Is it multi-tenant safe? Can your container share a Linux instance with containers from other customers of the cloud? Very clearly the answer is no -- not with any confidence, anyway.
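The reason the answer is no comes down to the shared kernel: every container on a host runs on that host's kernel. A quick way to see this, assuming the Docker SDK for Python (the "docker" package) and a local Docker daemon are available:

```python
# Shows that a container runs on the host's kernel rather than its own.
# Requires the "docker" Python package and a running Docker daemon.
import platform
import docker

client = docker.from_env()

# Kernel version as reported from inside an Alpine container.
container_kernel = client.containers.run(
    "alpine:latest", "uname -r", remove=True
).decode().strip()

host_kernel = platform.release()

print("host kernel:     ", host_kernel)
print("container kernel:", container_kernel)
# The two values match: a kernel vulnerability exposed to one container is a
# vulnerability of every container (and the host) on that machine.
```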

In 2015, the Docker acolytes will make many implausible promises, instead of focusing on empowering developers with better tools, and leaving security to credible infrastructure vendors, who understand the full complexity of application isolation.

For enterprises to safely adopt Docker and operate it at scale in private or public clouds, what's needed is a simple extension of the most robust isolation structure in computing today -- the hypervisor. The extension is called micro-virtualization, and it confers on Linux, Windows or even Mac OS X an ability to instantly hardware-isolate an individual application or task.

The Docker ecosystem should focus on adoption of Docker by app developers. The hypervisor and cloud infrastructure vendors will, in 2015, deliver technologies that make Docker deployments safe for app developers and cloud infrastructure operators.

Dave Bartoletti, Principal Analyst, Forrester Research Inc.
1. The App Drives the Hypervisor Choice

This trend has already started, but it'll be the primary way server virtualization platforms are chosen in 2015. With effective parity between the leaders -- VMware and Microsoft -- companies should choose the hypervisor that's easiest to deploy, has the broadest feature set and is easiest to integrate for the app they plan to virtualize. That might mean Oracle VM -- yes! That might mean KVM. With such a mature market, the action moves to the app layer. Pick the VM container that makes the app easier to deploy, manage, support and upgrade.

2. Multi-Hypervisor Management Capabilities Are Table Stakes
Every management tool must support multiple hypervisors -- don't pick one that doesn't in 2015. Virtualization management and cloud management are converging (or already converged, for many products), meaning your future virtualization management toolset must support multiple on-premises and cloud virtual machines today -- not in the next release. Look for vendor tools that support other hypervisors with the same feature breadth they have for their own.
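One concrete way to get a feel for multi-hypervisor management is the libvirt API, which drives KVM, ESXi and other hypervisors through a single interface. A minimal sketch, assuming the libvirt Python bindings and the relevant drivers are installed; the ESXi host name and credentials are placeholders.

```python
# Lists VMs from more than one hypervisor through the same API (libvirt).
# Requires the libvirt Python bindings; host names below are placeholders.
import libvirt

URIS = [
    "qemu:///system",                                   # local KVM
    "esx://admin@esx01.example.com/?no_verify=1",       # VMware ESXi (placeholder)
]

for uri in URIS:
    try:
        conn = libvirt.open(uri)
    except libvirt.libvirtError as err:
        print(f"{uri}: connection failed ({err})")
        continue
    names = [dom.name() for dom in conn.listAllDomains()]
    print(f"{uri}: {len(names)} domains -> {names}")
    conn.close()
```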

3. Containers Will Become a Viable Deployment Option for More Apps
Docker is new, but containers aren't. Still, the huge spike in interest in lightweight, efficient containers as the deployment method of choice for hyper-scale Web apps means every virtualization vendor is either already on board or will be in early 2015. What does it mean? You have another option for app deployment, more granular than a full VM. Start learning about Docker's current strengths and limitations as compared to full VMs -- you'll need to offer them to your dev and apps teams by the end of next year to keep up with your nimble cloud competitors.

Taneja Group Inc.
The analysts at Taneja Group got together and came up with our top predictions for 2015. We believe 2015 will be a pivotal year for the datacenter as some of the nascent technologies we've been watching are starting to mature and become mainstream. We feel that 2015 is just a jumping off point for the datacenter of the future, and that the datacenter of the next decade will bear little resemblance to today's datacenter.

Software-defined (SD) will become a commonly expected feature of (or option available for) most arrays, rather than continuing as a special, separate category of storage.

SD will become less of a resource-specific concept and more of a common technology checkmark. SD will be interpreted as a broader cloud-like concept as a growing number of vendors come to claim SD capabilities simply because their solutions are delivered in the form of software. The original, fundamental idea that there be an identifiably separate "control" plane (that is, master application) that can dynamically define (through software APIs) and configure distributed "data" plane resources (hardware, firmware or software) will only be a secondary concern of the vendors. Despite the obvious value in having an open, remotely programmable infrastructure that can be optimized by third-party intelligent controllers (that might even incorporate knowledge from outside the system itself), we'll continue to see completely closed systems presented as "Software Defined" simply because they come packaged as virtual appliances.

Many IT folks, however, will continue to prefer buying pre-packaged stacks of integrated IT infrastructure, rather than attempt to deploy pure SD solutions. This will muddy up the definition of software-defined even further, because not all converged or hyperconverged solutions are software-defined. In any case, we feel certain that more and more datacenter functionality will become dynamically and elastically provisionable, whatever the name or form of implementation.
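The control-plane/data-plane separation Taneja describes -- a distinct controller that programs distributed data-plane resources through software APIs -- can be sketched in a few lines. This is purely illustrative; the class and method names are hypothetical, and real systems would expose the same split over REST APIs.

```python
# Illustrative separation of a software-defined control plane from data-plane
# resources. Names are hypothetical.
class DataPlaneArray:
    """A storage array that only applies what the controller tells it."""
    def __init__(self, name: str):
        self.name = name
        self.volumes = {}

    def apply(self, volume: str, config: dict) -> None:
        self.volumes[volume] = config        # no policy logic lives here


class Controller:
    """The control plane: owns policy and programs many arrays."""
    def __init__(self, arrays):
        self.arrays = list(arrays)

    def provision(self, volume: str, gb: int, tier: str) -> None:
        # Policy decisions (placement, tiering) are made centrally ...
        target = min(self.arrays, key=lambda a: len(a.volumes))
        # ... and pushed to the data plane through a narrow interface.
        target.apply(volume, {"size_gb": gb, "tier": tier})
        print(f"placed {volume} on {target.name}")


ctrl = Controller([DataPlaneArray("array-a"), DataPlaneArray("array-b")])
ctrl.provision("vmstore-01", 500, "gold")
ctrl.provision("vmstore-02", 200, "silver")
```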

Data protection will increasingly become directly part of primary storage.

Unfortunately, data protection has taken a back seat in the software-defined datacenter. We feel that in 2015, VM-centric and profile-driven data protection will take its rightful place in the datacenter -- directly as part of the storage solution. To enable this, the storage used for data protection can no longer be a discrete component, but must instead be an integrated yet abstracted part of the solution, used to protect a datacenter's most valuable asset: data. Many users are moving to VM-driven snapshots and replication as primary approaches for protecting their data. Several vendors are already offering data protection solutions integrated with primary storage: examples include Microsoft StorSimple, which combines backup, archive and DR with primary storage; and Nimble Storage. As a larger trend, we see more hyperconverged systems and storage vendors adding embedded data protection features that will not require, or rely on, traditional backup software. Data protection will simply be another attribute to set when an object is created, and the storage will implement those attributes.
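The "another attribute set when an object is created" idea can be sketched as a protection profile attached at provisioning time, which the storage layer then acts on without a separate backup product. The attribute and function names below are hypothetical, not any vendor's schema.

```python
# Illustrative profile-driven data protection: protection is declared as
# attributes of the volume at creation time. Attribute names are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ProtectionProfile:
    snapshot_interval_min: int        # how often to snapshot
    snapshot_retention: int           # how many snapshots to keep
    replicate_to: Optional[str] = None  # remote site for DR, if any

@dataclass
class Volume:
    name: str
    size_gb: int
    protection: ProtectionProfile

def create_volume(name: str, size_gb: int, profile: ProtectionProfile) -> Volume:
    vol = Volume(name, size_gb, profile)
    # In a real system the storage platform would schedule snapshots and
    # replication from these attributes.
    print(f"{name}: snap every {profile.snapshot_interval_min} min, "
          f"keep {profile.snapshot_retention}, replicate to {profile.replicate_to}")
    return vol

gold = ProtectionProfile(snapshot_interval_min=15, snapshot_retention=96,
                         replicate_to="dr-site-west")
create_volume("erp-data", 2048, gold)
```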

Marc Malizia, CTO, RKON Technologies
The Move to a Hybrid Model

Cloud providers will make a big effort to support the hybrid cloud model in 2015. Currently, there are three typical cloud deployments: public, private and hybrid cloud. The public cloud is a commodity cloud service, where customers spin up workloads at the cloud provider's datacenter, typically for a specific application.

Little integration exists between those servers and the customer's internal compute platform hosted on-premises, and physical compute resources are shared across all users of the public platform.

On the opposite side of the spectrum is the private cloud, which runs on dedicated hardware for each customer and can exist at a cloud provider's or the customer's premises. The scenario that merges both concepts in an effort to meet real-world challenges is the hybrid cloud. Most companies cannot fully relocate to a public cloud due to technical limitations and security and compliance concerns. A compromise is to leverage the benefits of the public cloud where possible, while using a private cloud to overcome the remaining challenges.

One of the current issues with the hybrid model is that enterprises want a common management platform for their myriad compute resources, wherever they may be. Today, many cloud providers have proprietary interfaces, which makes managing different vendors from a single console a difficult task. In an effort to grow market share, cloud providers must give customers the ability to manage their internal (private cloud) compute with the provider's systems -- and even with competitors' systems.

Signs of this shift can already be seen with the recent purchases of Eucalyptus by HP and Metacloud by Cisco. Both of these acquisitions were aimed at building a management platform with integration hooks to other providers and private cloud networks.
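One existing answer to the "single console over proprietary interfaces" problem is an abstraction library such as Apache Libcloud, which wraps many providers behind one compute API. A minimal sketch, with placeholder credentials and regions:

```python
# Lists instances from two different clouds through one API (Apache Libcloud).
# Credentials and regions are placeholders.
from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver

clouds = [
    (Provider.EC2, ("AKIA-PLACEHOLDER", "secret"), {"region": "us-east-1"}),
    (Provider.RACKSPACE, ("username", "api_key"), {"region": "iad"}),
]

for provider, creds, kwargs in clouds:
    driver_cls = get_driver(provider)
    driver = driver_cls(*creds, **kwargs)
    # The same list_nodes() call works regardless of the provider behind it.
    for node in driver.list_nodes():
        print(provider, node.name, node.state)
```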

The Explosion of the Software-as-a-Service (SaaS) Market
With SaaS being a win-win for both manufacturers and their customers, the SaaS market will grow dramatically in 2015. From the manufacturer's standpoint, hosting the software lets it maintain and support a single code train, greatly reducing support efforts and development costs. Support volume associated with software installation and conflicts caused by local installations also goes away. The manufacturer builds in a recurring revenue stream and ensures the newest code is always in use. This platform also makes it easy for the manufacturer to let customers try new products or modules, because it continuously has a captive audience on its Web site.

Customers also see great benefits in SaaS, because it typically costs less than hosting the application internally. Customers are relieved of the tasks and associated costs of deploying and upgrading the applications, as well as maintaining the systems needed to run the software. The ability to change license counts on a monthly basis is another big advantage.

Doug Hazelman, Vice President of Product Strategy, Veeam Software
The Always-On Business Will Become the Norm Across the Globe

Immediacy will be pivotal across many levels; users will continue to demand unfettered access to applications and data 24x7 from wherever they're located, and this trend will only accelerate as device innovation progresses and the roll-out and proliferation of higher speed networks (4G, LTE, 5G and so on) continues. Business users will have even more aggressive requirements for ubiquitous access to applications and data than consumers. Network agility and reliability will come under the spotlight as virtualization continues to gather pace, but availability will become a strategic concern to business leaders. No longer will recovery time or point objectives of hours or days be acceptable. IT will be expected to deliver recovery time and point objectives in minutes.

Data Explosion Continues to Accelerate
Data volumes will explode even further over the next few years (some commentators expect as much as 400ZB by 2018). Organizations will make new investments and leverage existing modern datacenters in an attempt to manage their exploding data footprint during 2015.

The "Internet of Things" Becomes a Reality... and IT Will Need to Wake up to This Fact
IDC forecasts that the worldwide market for Internet of Things (IoT) solutions will grow from $1.9 trillion in 2013 to $7.1 trillion in 2020. Such is the appetite for IoT that more than two-thirds of consumers plan to buy connected technology for their homes by 2019, and nearly half say the same for wearable technology. This rapid pace of adoption will cause IT departments immense problems over the next 12 to 24 months. Users will expect always-on services, and IT will be under scrutiny to deliver; with the cost of downtime already running into six figures per hour, downtime in the IoT era could be far costlier. 2015 will be the year we'll see IT step up to the plate and embrace availability, or the year we'll see businesses fall on their swords. Only time will tell.

Garret Grajek, Chief Security Officer, dinCloud
In 2014, and even in parts of 2013, we saw hosted virtual desktops gain acceptance, along with many advances in virtualization and graphics in VDI products. This was an important breakthrough in showing both enterprises and users that desktop virtualization can help the enterprise in key pain areas like provisioning, manageability and security. What the market needed to see was that VDI was actually workable (for example, users could actually get real work done on a Windows desktop through a virtual interface).

Enterprise communities in various sectors are begging for a more cost-effective approach to end-user desktop support; VDI has proven that one is possible.

VDI Was the Proving Ground; DaaS Is the Future
But switching from the per-user cost of a PC and software to a thick cloud client on a user's desktop, and then forcing the enterprise to carry the entire network, software and infrastructure burden of VDI, isn't seen as enough of a solution by many enterprises. What they've asked the industry is, "If you can move burdensome software packages such as ERP, CRM, e-mail and collaboration to the cloud, why can't VDI be offered as a cloud service?"

This is where Desktop as a Service (DaaS) comes into play. Enterprises want to realize the cost, manageability and security of the cloud -- for desktop virtualization. They understand that the technology exists to virtualize the user desktop; the problem is they just don't want to do it themselves.

The marketplace is open to the industry players bold enough to offer desktop virtualization as a cloud service. This should consist of a fully hosted and provisioned set of desktops that enterprise users can access with their existing enterprise (read: Active Directory) credentials. In addition, the space is open for vendors who can meet modern authentication and access challenges (for example, two-factor for PCI DSS, NCUA and FFIEC environments; HID/NFC for health care; and HSPD-12 [CAC/PIV] for federal markets).
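A DaaS broker validating existing Active Directory credentials typically comes down to an LDAP bind. Here is a minimal sketch using the ldap3 Python library; the domain controller name, domain, account and password are placeholders, and a production deployment would layer a second factor on top as described above.

```python
# Validates enterprise (Active Directory) credentials with an LDAP bind.
# Requires the "ldap3" package; server, domain, user and password are placeholders.
from ldap3 import Server, Connection, NTLM

def ad_authenticate(username: str, password: str) -> bool:
    server = Server("dc01.corp.example.com", use_ssl=True)   # hypothetical DC
    conn = Connection(
        server,
        user=f"CORP\\{username}",        # DOMAIN\user form for NTLM binds
        password=password,
        authentication=NTLM,
    )
    ok = conn.bind()
    conn.unbind()
    return ok

if ad_authenticate("jdoe", "hunter2"):
    print("credentials accepted; hand off to the DaaS session broker")
else:
    print("bind failed; prompt again or require a second factor")
```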

Access and desktop virtualization need to be offered as a cloud package: bundled and quantified in a manner that allows the enterprise to pick, choose and scale to both local departmental and global requirements.

New Technologies Must be Supported by DaaS Vendors
The move to the cloud will be driven not only by the manageability of cloud resources, but also by the device flexibility it brings to VDI platforms. Of course, support for iOS and Android mobile devices will be required, but the real growth will come from using thin clients as the "window" to the DaaS environment.

The most promising development looks to be support for Chrome extensions across platforms. With the advancement of thin-client technology, users can choose which device they want for VDI access -- including devices from the exploding Chromebook market. Chromebook desktop access looks particularly strong in the education market.

The Complete Package
Enterprises looking to move their desktops to the cloud should also expect cloud vendors to support their storage and server requirements. The cloud vendors savvy enough to meet both desktop virtualization and traditional server and storage requirements will be well positioned to take advantage of market trends.

2015 will be the year for DaaS, cloud growth and adoption. The technologies exist, but the challenge (and opportunity) will be for vendors to put together a complete package for their users.

About the Author

Keith Ward is the editor in chief of Virtualization & Cloud Review. Follow him on Twitter @VirtReviewKeith.
