Virtualization Review Roundtable: 3 Major Obstacles to Proliferation of Virtualization, Cloud Computing
Six CTOs tell us what they think is keeping virtualization and cloud computing from proliferating, and offer solutions to get past those barriers.
Welcome to the Virtualization Review CTO Blogger Roundtable. We asked six of our regular contributors to describe three of the biggest impediments to virtualization and cloud computing, and then spell out their solutions. Participants this time include:
- Simon Rust, vice president of technology, AppSense
- Doug Hazelman, senior director, product strategy, Veeam
- Karl Triebes, CTO, F5 Networks
- Krishna Subramanian, COO, Kaviza
- Jason Mattox, CTO, Liquidware Labs
- Alex Miroshnichenko, CTO, Virsto Software
We plan to make this a regular feature, and welcome Virtualization Review readers to contribute ideas and questions. Send those questions, issues and ideas to me at [email protected] with "CTO Roundtable" in the subject line of your message.
Definition, Distrust, Expenses
By Alex Miroshnichenko
It is difficult to list the top three challenges to the adoption of cloud computing because, above all, I am convinced that before broad adoption can occur, the IT industry has to come to a common definition of "cloud computing."
I think that today the majority of cloud offerings fall into two rough categories:
- Elastic computing, which is great for computational tasks with little transactional persistence.
- Blob storage, which is great for content serving as well as backup and archiving.
There is a whole world that falls in between these two endpoints.
Another factor holding back the proliferation of virtualization and cloud computing is the inherent distrust of the public cloud that is pervasive among IT professionals. Change is hard. With a legacy of building one's own environment, few if any are willing to implicitly trust cloud providers with their mission-critical data and compute resources. I think this is more psychological today than a real risk factor. The actual security in the cloud could in fact be more robust than that found in private clouds or in-house datacenters -- but not everybody is willing to admit it.
For the risk-averse majority, dramatic changes to the status quo are embraced only when the external incentives are overwhelming (orders-of-magnitude cost savings, for example). Cloud solutions need to become dramatically cheaper to drive people to rethink their habits.
Once these formidable obstacles are overcome, I think the next serious challenge to adoption will be moving the industry to a new storage architecture.
A number of cloud vendors today claim to have addressed the need for a new storage architecture, but upon closer scrutiny they all turn out to be little more than glorified colocation facilities that rely on expensive enterprise storage systems to provide the required quality of service.
I do appreciate a competent colocation service, but on their own such services will not change cloud adoption trends. At the end of the day, they employ the same technology as in-house datacenters and therefore cannot change the fundamental economics of the problem.
VR: What are the solutions to those obstacles?
For me, cloud computing is an external service that gives me at least the same level of performance, with at least the same level of reliability, at a significantly lower price than it would cost to implement in-house.
Such a dramatic reduction in cost is absolutely impractical without the new storage architectures I raised as a primary obstacle to widespread adoption of pervasive virtualization and private, public and hybrid clouds. To attain the full potential of cloud computing, a centralized pool of virtualized compute and storage resources is necessary. To optimize these resources, the core issue is the separation of existing storage technology from compute virtualization.
It is simply impossible to meet the economic requirements of large scale clouds with the expensive dedicated enterprise-class devices offered by the storage industry today. And what is called "cloud storage" today simply does not provide the feature set and service levels required. In order to achieve the high levels of storage utilization and concurrent optimized performance to drive these new economics of cloud computing, a new approach has to emerge.
I think the most compelling way to solve this virtualization performance issue will take the form of new storage software that, tightly integrated into the hypervisor layer on the computing side, can manage commodity devices on the storage side.
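As a purely illustrative sketch of that idea -- not any shipping product's design -- the core mechanism is a per-VM block map that lets many thin virtual disks share one pool of commodity storage:

```python
# Toy model of hypervisor-level storage software: thin virtual disks
# backed by a shared pool of commodity blocks. Illustrative only.

class BlockPool:
    """Append-only pool standing in for cheap commodity disks."""
    def __init__(self):
        self.blocks = []

    def write(self, data: bytes) -> int:
        self.blocks.append(data)
        return len(self.blocks) - 1  # physical block number

class ThinVirtualDisk:
    """Per-VM disk: only blocks actually written consume pool space."""
    BLOCK_SIZE = 4096

    def __init__(self, pool: BlockPool):
        self.pool = pool
        self.map = {}  # virtual block number -> physical block number

    def write(self, vblock: int, data: bytes) -> None:
        # Redirect-on-write: every write lands on a fresh pool block.
        self.map[vblock] = self.pool.write(data)

    def read(self, vblock: int) -> bytes:
        if vblock not in self.map:
            return b"\x00" * self.BLOCK_SIZE  # unwritten blocks read as zeros
        return self.pool.blocks[self.map[vblock]]

pool = BlockPool()
vm1, vm2 = ThinVirtualDisk(pool), ThinVirtualDisk(pool)
vm1.write(0, b"vm1 boot block")
vm2.write(0, b"vm2 boot block")
print(vm1.read(0), vm2.read(0))  # each VM sees only its own data
```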
It's All About Application Availability, Infrastructure and Security
By Simon Rust
I see the obstacles to the proliferation of virtualization and cloud computing as application availability and performance, infrastructure availability, and security. To be specific:
- Application availability is concerned with the sheer number of applications that are available today as cloud-delivered applications. An organization typically has a total number of business applications equal to approximately 10 percent of its number of employees (so an enterprise of 10,000 users typically makes use of approximately 1,000 applications). At this time, less than 10 percent of these applications are available as cloud applications, meaning that the enterprise is a long way from being able to cloud-enable its workforce, and integration between cloud providers and local applications remains a constant challenge. We must also be aware that not all applications will ever be available via cloud computing; graphics-heavy applications are always going to require some form of local graphics processing unit (GPU) to render complex content.
- Infrastructure availability is concerned with the availability of a suitable, resilient, always-on network infrastructure between the end user's end-point device and the cloud computing resource. Some cloud-delivered applications can cope with high-latency connections between the two points, since they make use of HTTP, which was purposely designed for long-distance connectivity over potentially poor links. However, as we look to deliver services with a much lower tolerance for latency (desktops, for example), the network link between the two points becomes critical.
- Security is concerned with many things, but to keep things succinct let's consider just data encryption and integrity. The enterprise needs to be confident that its data will not be compromised and find its way into the hands of the competition. There needs to be enterprise-class encryption to ensure that data does not leak between the cloud datacenter and its representation on the screen of the end-point device (see the sketch after this list). Given that the end-point device may well be a kiosk or a non-IT-managed device, we must also ensure that no data is left behind on the end point at any time.
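As a purely illustrative sketch of the transport-encryption half of that requirement, Python's standard library can establish a certificate-verified TLS session between an end point and a service; the host name here is a placeholder, and a real deployment would layer endpoint data-leakage controls on top:

```python
# Minimal sketch: encrypt the channel between end point and cloud service.
import socket
import ssl

host = "example.com"  # placeholder for a cloud service endpoint
context = ssl.create_default_context()  # verifies the server's certificate

with socket.create_connection((host, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        # Everything sent over `tls` is now encrypted in transit.
        print(tls.version(), tls.cipher())
```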
As we look at how these key issue areas are being solved, we find that we are in a waiting game, because much of the issue here is the sheer newness of cloud computing and its inherent market immaturity. The market is not mature enough to have more than 5 to 10 percent of corporate applications available at this time. Security vendors also need to get in the game, so to speak, and application vendors need to deliver solutions that manage the security implications before enterprises can be persuaded to use cloud computing for the delivery of their applications.
If we compare the immaturity of the market to 64-bit desktop computing, we see that things do not happen overnight. For example, Microsoft delivered a 64-bit version of its desktop OS (Windows XP Professional x64 Edition) in April 2005, but the flagship Microsoft desktop application, Office, was not available in 64-bit form until Office 2010 launched in April 2010.
Infrastructure availability is a little more complex, in that it really comes down to network capability at the end point. As the consumerization of IT continues to make devices smaller, cheaper and more attractive to consumers, the network capabilities of these devices become more and more important and ultimately critical. These devices require always-on connectivity, and with today's connectivity, only applications capable of handling high-latency connections are appropriate for them. While there are connectivity options available to the mobile business user on trains and planes, these are similarly unable to support applications that require anything other than perfect, latency-free connections. So, as with the security and application availability aspects, we must wait for the infrastructure to be always on and to exhibit extremely low latency.
Clean Up, Reevaluate the Cloud and Standardize
By Doug Hazelman
For those of us within the virtualization community, it's sometimes hard to understand why everyone's not 100 percent virtualized and why people aren't taking more advantage of the benefits of the cloud. But despite my extreme enthusiasm for virtualization, even I can see barriers that are slowing growth. Here are the three I view as most important:
The virtual environment requires a very different kind of management -- fewer boxes means managing more apps
The initial and biggest push for virtualization so far has been server consolidation. Everyone has been sold on benefits such as maintaining fewer boxes, lower cooling requirements and reclaiming much-needed space in the datacenter. But once consolidation was complete, organizations may have reduced their physical server footprint, but they typically did not reduce the number of operating systems, applications or even guests they needed to manage. In fact, in most cases, all of these things increased thanks to how easy it now is for administrators to create new VMs. This new challenge of managing added application instances in the virtual environment is hindering virtualization's growth.
The virtual environment is not just a super-dense physical environment, and it cannot be managed in the same old way. When setting up a new server required a fat purchase order and the installation of a new box, people thought hard about how to justify it, what would go on it, and whether they could avoid having to buy a new one by freeing up existing servers via the elimination of applications and data. Today, creating a new server is just a few clicks and a keystroke away.
Look at it this way: Thanks to virtualization, everyone has replaced their tiny little hall closets with a smaller number of enormous closets. But instead of reorganizing them to get rid of the junk and rearrange items so they can be found, people are simply cramming back in everything they own, plus a bunch of new stuff, putting new blazers next to old leisure suits. A number of independent software vendors are starting to tackle the new management challenges that virtualization creates. But in many cases, the most effective thing an organization can do to improve management of their virtual environment is simply to clean out the closets.
Implementing the world's best management solution for virtual environments isn't enough, because unless the organization has a process that governs the creation of new VMs, how applications and data are organized among those VMs, and how VMs are retired, it's just garbage in, garbage out. Taking the time to establish policies around VM management and then finding the right tools to monitor and report on the virtual environment to ensure those policies are followed can go a long way toward eliminating the chaos inadvertently caused by the power that virtualization has unleashed.
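As a purely illustrative sketch of such a policy gate (the fields and limits are hypothetical, not any product's API), a VM request might be checked like this before anything is provisioned:

```python
# Hypothetical provisioning policy: every VM needs an owner, a purpose
# and a finite lifetime, so nothing lands in the "closet" untracked.
from datetime import date, timedelta

MAX_LIFETIME = timedelta(days=180)  # force a renewal decision twice a year

def approve_vm_request(owner: str, purpose: str, expires: date) -> bool:
    """Return True only if the request satisfies provisioning policy."""
    if not owner or not purpose:
        return False  # no anonymous, unexplained VMs
    if expires - date.today() > MAX_LIFETIME:
        return False  # indefinite lifetimes are how sprawl starts
    return True

print(approve_vm_request("jsmith", "staging web tier",
                         date.today() + timedelta(days=90)))  # True
print(approve_vm_request("", "scratch box", date.today()))    # False
```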
Viewing the cloud as infrastructure
In IT, there's never time to get comfortable. Just as IT is starting to get its virtual infrastructure under control, here comes the cloud. Unfortunately, few seem to have a clear idea of just what the cloud is, and that's a big obstacle to its widespread adoption.
IT has a perception problem when it comes to the cloud. Too many organizations are chasing the cloud because they fear they'll fall behind if they don't take advantage of it, but they have no clear idea of what they're actually trying to achieve. Organizations would have more success viewing the cloud not as a new kind of infrastructure, but as a tool that can make their applications perform better and cost less.
Many of the big virtualization companies clearly think applications will drive the future of the cloud, judging by their recent investments in the application development space around Java and .NET.
The takeaway here is that before IT jumps into the cloud, it should look at its applications first and determine how they might benefit from the wide variety of emerging technologies and services that fall under the nebulous definition of "cloud." If cloud is treated instead as a general infrastructure upgrade, people are going to be disappointed with the results. Viewing the cloud as an end in itself will only lead to frustration, wasted money and lost opportunities.
Lack of cloud standards
There's a final barrier to widespread cloud adoption, beyond just the general confusion over the meaning of the term. Mass adoption of anything is difficult without standards and, right now, standards are scarce within the cloud. IT organizations will need to have a set of standards they can look to for development, security, and integration. Software developers, for example, need to know what resources they have at their disposal when writing cloud-aware applications, and few will use the cloud for critical applications without strong security and privacy standards.
Sometimes standards are defined by a market leader; other times they're defined by a governing body. Until there are standards and cloud providers adopt them, no one beyond the early adopters will sign on to the cloud in a big way.
There's no quick fix for this problem, unfortunately. Until the myriad cloud vendors organize themselves to establish standards, cloud will not take off in the same way as virtualization already has. That's OK, though -- IT still has quite a bit of work to do just getting its virtualization house in order.
Where to Begin, Which Model to Follow, and Compatibility with Legacy Infrastructures Pose Challenges
By Krishna Subramanian
Contrary to popular belief, recent surveys of business IT leaders reveal that security and compliance are not the biggest impediments to the near-term adoption of cloud computing and virtualization. This makes sense when you consider that over 60 percent of cloud adopters today are using the cloud for non-mission-critical applications. So what, then, are the top three impediments to cloud computing and virtualization today?
Having been involved with virtualization, SaaS and cloud computing for well over a decade, I see the biggest impediments as having to do with the inertia of the status quo:
Where to begin? Everyone has too much on their plate, and there are too many IT priorities vying for attention and budgets. The economics of cloud computing and virtualization are compelling enough to make it to the top of the list, but the next question is: which application, use case or area do you start with? I see companies grappling with this one all the time.
Remedy: Take an incremental approach and start with a well-defined use case or application where the business results can be quantified and immediate. For instance, libraries and educational institutions have adopted cloud-hosted virtual desktops for their public-facing kiosks -- this is a well-defined use case, and the business drivers to reduce costs while increasing public access make it an ideal candidate.
Which model (public cloud, private cloud, hosted private cloud, virtualization, MSP, etc.)? Once you have identified specific use cases, the next impediment to overcome is choosing the right architecture and business model. Often this involves a trade-off between keeping tight control versus convenience, access and the associated costs, and it can be a tough one to resolve, especially if you are trying to pick one consistent model across the organization.
Remedy: Again, not trying to boil the ocean and taking an incremental approach really helps. Pick the right business model and architecture for the particular use case or problem you are trying to solve, and expand from there. For instance, for public-facing applications, a public cloud model may be the right one. With virtualization, a centralized datacenter model may not always be right, especially if you have remote branch offices with poor connectivity; a distributed virtualization solution with centralized management might be the better option.
How does it fit with my legacy infrastructure? This is probably the biggest impediment I hear about from customers -- especially as they consider cloud computing and virtualization for more mission-critical applications. For instance, if you use cloud-hosted virtual desktops, where does your Active Directory sit? How do you integrate with it? If you use many SaaS applications, how do you get single sign-on across them? How do you integrate the islands of data that are created within each application?
Remedy: The good news is that vendors are actively working to address this issue in various ways. The bad news is that there is no clear standard, and there may never be one. In the short term, it is best to pick use cases that least require legacy integration or where the integration is well defined. For instance, we have seen new office deployments where the entire office is hosted and run in the cloud (Active Directory, virtual desktops, key applications, etc.) or SMBs that are adopting an office-in-the-cloud model. This, of course, makes things a lot simpler.
Adoption of cloud computing and virtualization is growing at a rapid pace, but it will be several years before the cloud is the status quo. Meanwhile, companies that are adopting these technologies are taking an incremental approach that is least disruptive and are learning as they grow their usage. I'd love to hear your comments on how you are adopting the cloud and virtualization, and what's holding you back.
Virtualization and Cloud Computing Have Separate Obstacles
By Karl Triebes
There's no shortage of opinions about what's keeping enterprises from adopting virtualization and cloud computing -- just Google "barriers to virtualization (or cloud) adoption" and you'll find enough surveys and opinions to keep you busy for days. Some argue that virtualization and cloud computing adoption have hit a lull, but I believe they're both still gaining momentum -- although perhaps not as quickly as analysts had predicted.
Virtualization
While virtualization and cloud computing share many challenges, those standing in the way of virtualization adoption are complexity, lack of management tools, and poor application performance.
Complexity
For organizations that have chosen to go down the path of virtualization, green IT, cost savings and server (or entire datacenter) consolidation are driving factors, but they inevitably come at the price of complexity. By definition, virtualization adds a layer of abstraction to the infrastructure that can obscure visibility and make the infrastructure more difficult to manage. An organization that previously had 1,000 servers might now have tens of thousands of virtual machines to manage. To avoid compounding server management issues, organizations must be able to successfully manage their traditional IT environments before creating virtualized environments.
Lack of Comprehensive Management Tools
Best practices alone aren't enough. IT managers are crying out for comprehensive management tools -- those that enable them to manage the entire infrastructure, not just virtual machines. Ideally, these tools would help IT automate server provisioning, balance workloads, enable live migration of virtual machines, and aid with capacity planning and configuration management. The goal is to have automated, policy-based management of the entire IT infrastructure. Many vendors are actively working with technology partners to deliver comprehensive and integrated management tools.
Poor Application Performance
Most recent surveys of IT pros show that poorly performing mission-critical applications account for many enterprises slowing down or temporarily putting a hold on their virtualization efforts. This concern was voiced less frequently in earlier surveys because many early adopters tested the waters with non-critical applications. It's one thing to virtualize a few IT applications but quite another to put your business at risk with unpredictable, unreliable performance of key business solutions -- applications that are typically highly I/O-intensive. This begins to get to the heart of the issue, because in the end, what really matters is whether IT is able to meet its SLAs and deliver services securely, reliably and within budget. Many of these issues can be solved at the network level using techniques such as SSL offloading, application acceleration, TCP optimization, caching and compression.
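As a small illustration of why one of those techniques, compression, is worth offloading to the network layer, consider how much a typical text-heavy response shrinks before it crosses the wire:

```python
# Illustrative only: the kind of payload reduction a network device
# performs on behalf of the application servers behind it.
import gzip

payload = b"<html>" + b"row of report data " * 2000 + b"</html>"
compressed = gzip.compress(payload)
print(f"{len(payload)} bytes -> {len(compressed)} bytes over the wire")
```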
Cloud Computing
If moving from traditional IT to a virtualized environment represents one "small step" of added complexity, moving to the cloud represents a "giant leap." In the same way that traditional IT environments must be well managed before being virtualized, virtualized environments must be well managed before they are extended to the public cloud.
In a sense, virtualization is a stepping stone to cloud computing; it gives IT the ability to extend the capacity of existing applications on demand using public cloud resources. For example, the "cloud bursting" model offers potentially significant cost savings to organizations that experience unpredictable or seasonal traffic volume on their web-based applications. Imagine an automatic process that dynamically offloads a portion of portal traffic to the cloud when local datacenter resources reach capacity, and in turn, automatically shuts down those cloud resources as traffic volume levels out.
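A simplified sketch of the decision logic behind such a process might look like the following; the thresholds are illustrative, and the burst itself would be a call into a provider API:

```python
# Hypothetical cloud-bursting controller with hysteresis: burst when
# local utilization is high, release once traffic levels out.
BURST_AT = 0.85    # rent cloud capacity at 85% local utilization
RELEASE_AT = 0.60  # shut cloud instances down below 60%

def rebalance(local_utilization: float, cloud_active: bool) -> bool:
    """Return whether cloud capacity should be active after this check."""
    if not cloud_active and local_utilization >= BURST_AT:
        return True   # spill a portion of portal traffic to the cloud
    if cloud_active and local_utilization <= RELEASE_AT:
        return False  # traffic leveled out: stop paying for cloud VMs
    return cloud_active  # inside the band: leave things as they are

active = False
for load in (0.50, 0.70, 0.90, 0.95, 0.70, 0.55):
    active = rebalance(load, active)
    print(f"load={load:.2f} cloud_active={active}")
```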
Sounds great, but it turns out that's not easily done. Platform compatibility and application performance are two obstacles standing in the way of enterprises realizing this type of on-demand IT model, but hands down, security is still the barrier to cloud computing most often cited by IT professionals.
Platform Compatibility
Among enterprises that are serious about leveraging the cloud, a majority want to employ the cloud-bursting model described above, but that's only possible if the organization and the cloud provider use the same virtualization platform. When that consistency is lacking, the organization's only choice is to invest in solutions that aid portability across dissimilar platforms. That, in turn, adds cost and complexity, increases the existing management burden, and raises operating expenses due to the need to train staff in the use of these tools.
Performance of Migrated Applications
One of the most challenging aspects of the cloud deployment model is that it is dependent on a WAN. Bandwidth and latency issues are inherent with a WAN and can cause application migrations to fail repeatedly. For those applications that are successfully migrated to the cloud, how are they made known to and accessible by the infrastructure so users can be directed to them? How will cloud-based applications ultimately perform across a WAN? For many organizations, these are still unknowns.
Cloud Security
"Cloud security" is a bit of a misnomer because the term doesn't really mean anything without context. What are organizations trying to secure? Data? Applications? The network? When people talk about cloud security, the real issue isn't necessarily about technology or products, it's about organizations being limited in their ability to implement their own datacenter security practices in the cloud; they won't have control over the cloud provider's web application firewalls, data leak prevention solutions, or intrusion detection/prevention systems. And that's a real concern for CIOs who can be held legally accountable for securing (or not securing) corporate applications and data in compliance with various regulatory requirements. What organizations need is the ability to apply internal security policies to the applications that they move into the cloud. With advanced application delivery networking tools, this is already possible for organizations that share a common virtualization platform with their cloud provider.
Overall, these challenges that continue to slow the adoption of virtualization and cloud computing aren't insurmountable. Solving them will require holistic solutions that enable flexibility and simplicity without requiring organizations to give up visibility and control. Such solutions must provide a foundation for creating reusable services that understand context and can provide control regardless of the platform, deployment model, application, user, location, or device.
Chief Offenders: VM, Storage and Backup Management
By Jason Mattox
Management challenges are the biggest obstacle to virtualization and cloud proliferation. These technologies work. They've proven their value. Managing their use -- and controlling their growth -- is what's proving to be most challenging for many enterprises.
In many cases, management has been the victim of virtualization's success. Many enterprises that started small, by virtualizing test and development systems for example, found virtualization was easy and got great results. They then quickly virtualized much more of their environment -- but didn't scale resources and management controls accordingly. That's where some significant detours popped up on the road to greater adoption.
Three specific obstacles are common: VM management, storage management and backup management. You can avoid these obstacles and get on the fast track to greater use of virtualization and cloud computing along with the efficiencies these technologies bring.
VM Management
First, the good news: Provisioning a VM via self service is easier for users than ever before. Now the bad news: Provisioning a VM via self service is easier for users than ever before. Too often these days, the words "virtual machine" are followed by "sprawl." A virtual environment that grows in an uncoordinated, ungoverned, ad-hoc way will waste limited resources and become impossible to manage efficiently.
To keep VM volumes from getting out of control, organizations need automation solutions in place that enforce rules and limits for VM (and private cloud) provisioning while still enabling the convenience of user self-service.
Using a self-service product that commissions and decommissions VMs is key here. You want the private cloud to be self-governing, to avoid overconsumption of your infrastructure. Products that come to mind to help in this space are Quest Cloud Automation Platform and VMware vCloud Director; both are built to solve problems in the space of VM management and to control sprawl.
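As a purely illustrative sketch of the kind of quota enforcement such products provide (the groups and limits here are hypothetical, not either vendor's API):

```python
# Hypothetical self-service gate: commission VMs only while the
# requesting group is under its agreed quota.
QUOTAS = {"engineering": 50, "qa": 20}   # max concurrent VMs per group
active = {"engineering": 48, "qa": 20}   # VMs currently running

def self_service_provision(group: str, count: int = 1) -> bool:
    """Approve the request only if it fits within the group's quota."""
    if active.get(group, 0) + count > QUOTAS.get(group, 0):
        return False  # denied: decommission something or request more quota
    active[group] = active.get(group, 0) + count
    return True

print(self_service_provision("engineering"))  # True: the 49th VM fits
print(self_service_provision("qa"))           # False: quota exhausted
```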
Storage Management
Storage management provides many great examples of the problems that can occur when virtual environments grow too quickly without sufficient management systems and visibility tools in place. Without visibility, admins can't know how many VMs are out there requesting storage resources. Self-provisioning and thin provisioning make it all too easy to create an environment where VMs are requesting storage that simply isn't there.
The overall virtualization management solution should solve the problem of unauthorized or unaccounted-for VMs. A storage management solution should give administrators a convenient view of storage resources and the ability to quickly change allocations to prevent conflicts and ensure reliable system performance. Over-allocation of VM storage is a major problem: because you can't predict how your application storage footprint will grow over time, you end up allocating more storage than needed. To solve this problem, you need a tool that can detect this over-allocation and remediate it; Quest's vOptimizer Pro is the only tool I know of on the market that can do both to help reclaim your storage.
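As a purely illustrative sketch of that detection step (the figures and the 50 percent threshold are hypothetical), comparing what each VM was allocated against what its guest actually uses flags the reclaim candidates:

```python
# Toy over-allocation report: flag VMs using less than half of the
# storage allocated to them. Numbers are made up for illustration.
vms = {
    "web-01":  {"allocated_gb": 200, "used_gb": 35},
    "db-01":   {"allocated_gb": 500, "used_gb": 410},
    "test-07": {"allocated_gb": 100, "used_gb": 8},
}

def reclaim_candidates(vms: dict, slack: float = 0.5):
    """Yield (vm, reclaimable GB) for VMs under the usage threshold."""
    for name, d in vms.items():
        if d["used_gb"] < d["allocated_gb"] * slack:
            yield name, d["allocated_gb"] - d["used_gb"]

for name, gb in reclaim_candidates(vms):
    print(f"{name}: ~{gb} GB over-allocated and reclaimable")
```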
Backup Management
Businesses need confidence that data and systems that are virtualized or put into the cloud will be secure and available. Systems can't be virtualized if they can't be backed up comprehensively and conveniently. Yet recent research found that 77 percent of enterprises are using conventional backup software developed for physical systems to back up their virtual infrastructures. It doesn't have to be that way. Regular readers of this blog already know there are lots of highly effective ways to back up virtual systems to the cloud or other destinations.
Backups are not the problem holding greater cloud and virtualization adoption back -- but the lack of understanding about how to back up effectively in these environments is. No tools yet directly back up the cloud with 100 percent compatibility, but your best bet today is to use a tool that was designed for VM image-based backup from the ground up.
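As a purely illustrative sketch of the image-based idea (the paths are stand-ins, and real products add change tracking, compression and deduplication), the core move is to copy the whole disk image and verify it rather than crawl the guest file system with a physical-era agent:

```python
# Minimal image-based backup: copy the VM disk image, then record a
# checksum so the copy can be verified. Illustrative only.
import hashlib
import shutil
import tempfile

def backup_image(image_path: str, dest_path: str) -> str:
    """Copy a VM disk image and return its SHA-256 for verification."""
    shutil.copyfile(image_path, dest_path)
    digest = hashlib.sha256()
    with open(dest_path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Demo with a stand-in "image"; real paths would point at a datastore.
with tempfile.NamedTemporaryFile(suffix=".vmdk", delete=False) as src:
    src.write(b"pretend this is a VM disk image")
print(backup_image(src.name, src.name + ".bak"))
```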
Better management is what's needed for cloud and virtualization use to grow -- management of what gets virtualized, management of resources within the virtual environment and managing how to back it all up. The more convenient and comprehensive these management functions become, the further cloud and virtualization use can go.