Dan's Take
The Hype, Promises and Challenges of 'Hyper-converged'
Look before you leap into that pool.
- By Dan Kusnetzky
- 02/24/2016
I was chatting with some friends recently about both the promise and the challenges presented by the recent industry trend of re-integrating functions back into system enclosures. It's clear that enterprises want simplicity of management along with reductions in hardware complexity, datacenter floor space and, of course, power consumption and heat production: in other words, they want everything, and they want it now.
The Pendulum Swings
Our discussion started with the fact that enterprises have long segregated individual system functions like processing, networking and storage into separate devices. They did this knowing full well that they were increasing the complexity of their computing environments. They believed, however, that the benefits, such as improved workload performance, agility, reliability and flexibility, would be worth a bit of additional complexity.
After enterprises had worked with highly distributed, multi-tier computing environments for a time, it became clear to many that the separation of functions had increased complexity to an unpleasant degree, that power consumption had risen enough to create concerns about overall cost, and that these "a box for every function" configurations consumed more datacenter floor space, power and air conditioning than a better-integrated platform would.
The Vendors Strike Back
In response, many suppliers, including Dell, HP, IBM and a number of smaller companies, began to re-integrate functions back into the system enclosure. While each function still relied on its own independent processing power, it was possible to improve manageability and maintainability by reducing the number of power supplies required and the need for external networking capacity to link these functions together.
These re-integrated systems typically were built upon the foundation of a fast, reliable, integrated network fabric. These new system configurations were also designed to reduce complexity by introducing proprietary monitoring and management technology. The hope was that enterprises would see these compact configurations also reduce administration costs, floor space, power consumption and heat production.
Buzzword Wars
As usual when these types of changes are made, vendors come up with new buzzwords to describe them. They know that if their buzzword wins out over the others, they have a marketing advantage and can sell more systems.
The problem, of course, is that it takes some time for the buzzword madness to wind its way through the industry before a single term is settled upon. So far, we've seen converged systems, hyper-converged systems and even ultra-hyper-converged systems in the marketing literature.
Although the buzzword battle still rages, it's becoming clear that the industry is slowly standardizing on "hyper-converged" as the name for this reincarnation of the mainframe computer. The new wrinkles are that functions live in a virtual environment, and these systems are built to support a "scale-out" computing architecture.
Dan's Take: Know Thy Infrastructure
While hyper-converged systems offer enterprises many benefits, their adoption also brings a number of challenges. Many suppliers, however, will discuss these challenges only if an educated customer asks for answers.
Unfortunately, quite a few of the new hyper-converged offerings have been designed to support only internal storage, which means there's no way to access external storage already in use in the datacenter. These systems have also been designed to rely on vendor-supplied, proprietary components such as memory cards, processor cards and networking gear.
If we step back and take an enterprise view instead of getting caught up in the hyper-converged hype, it's worth remembering that enterprises deployed SANs, NAS and even cloud storage precisely to cut through the silos of resources that emerged as each department or business unit geared up to support its own workloads, often without much consideration of the overall needs of the enterprise.
Winding the Clock Backward
Looking at these systems as currently designed, we'd be forced to conclude that the approach taken by some of the system suppliers is winding the clock backward, reintroducing separate islands of storage and processing power.
Another concern is that hyper-converged system designs don't appear to have the capability or capacity to maintain large amounts of data locally, which makes large-scale transactional or big data workloads difficult to support.
Key Questions
A key question enterprises should be asking these vendors is whether these systems can be monitored and managed using the tools the enterprise already relies on, or whether the vendor's proprietary tools are the only option.
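One concrete way to approach that question is to check whether a node exposes a standards-based management interface at all, such as DMTF's Redfish API, rather than only a proprietary console. The short Python sketch below illustrates such a probe; the hostname and credentials are placeholders, and whether a given hyper-converged box exposes a Redfish endpoint is exactly the kind of thing to verify with the vendor:

```python
# A minimal sketch of probing a node for a standards-based management
# endpoint (DMTF Redfish). Hostname and credentials are hypothetical;
# not every system exposes this interface.
import requests

HOST = "node01.example.com"    # placeholder management address
CREDS = ("admin", "password")  # placeholder credentials

def redfish_service_root(host, creds):
    """Fetch the Redfish service root, or return None if it's absent."""
    url = "https://{0}/redfish/v1/".format(host)
    try:
        # verify=False skips TLS validation; acceptable for a quick probe only
        resp = requests.get(url, auth=creds, verify=False, timeout=5)
        resp.raise_for_status()
        return resp.json()
    except requests.RequestException:
        return None  # no standard endpoint reachable

root = redfish_service_root(HOST, CREDS)
if root is not None:
    print("Standards-based management found; Redfish version:",
          root.get("RedfishVersion", "unknown"))
else:
    print("No Redfish endpoint; existing tools may not work here.")
```

If the answer comes back empty, that's a signal to press the vendor on exactly which interfaces their systems expose to third-party monitoring and management tools.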
Wouldn't it be better to start the acquisition decision-making process in a different place? Starting with hardware and working out from there often leads to issues of integration, interoperability and function migration.
It seems that a better place to start is with a clear understanding of what these systems should be doing for the enterprise, rather than just the raw hardware capacities of the boxes themselves. Critical concerns include:
- Workloads
- Virtualization tools
- Storage
- Development and runtime tools required
- Management environment
It would be best to purchase systems that fit into your enterprise information architecture, rather than creating a new set of islands of computing.
About the Author
Daniel Kusnetzky, a reformed software engineer and product manager, founded Kusnetzky Group LLC in 2006. He's literally written the book on virtualization and often comments on cloud computing, mobility and systems software. He has been a business unit manager at a hardware company and head of corporate marketing and strategy at a software company.