In the Cloud Era, The Era of Convergence Is Upon Us

What exactly is convergence and what is making vendors scramble to get included in this category?

The era of IT infrastructure convergence is upon us. Every major vendor has some type of offering under this category. Startups and smaller players are also "talking" convergence. But what exactly is convergence and why are all the vendors so interested in getting included in this category? We will explain below the history of convergence, what it is, what it is not, what benefits accrue from such systems, who the players are, and who is leading the pack in true convergence.

Legacy Architectures Gone Wild
The fact is, the fundamental makeup of IT infrastructure has remained the same for thirty years, maybe more. We have the compute layer, the networking layer and the storage layer. Each layer has innovated in its own way and at its own pace, but the layers have stayed intact. One could argue that the compute layer has followed Moore's Law, the networking layer a quasi-Moore's Law, and storage (or at least the HDD performance portion of it) a "flat line" law.

The issues created by the so-called I/O gap are well known and I will assume the reader is up to speed on its impact. As the amount of data kept growing at astronomical rates and the variety of data went from almost pure text to text, audio and video in a variety of formats, we kept throwing more and more hardware at the problem. We did this by simply adding more servers, each with more cores and higher speeds; networks with bigger switches and more bandwidth; and storage with more HDDs and more powerful controllers.

By the middle of the past decade, however, these infrastructures were at a breaking point, in spite of many "surrounding" technologies that surfaced in the 2000-2003 timeframe and kept them from utter collapse. And given the tsunami of data coming at us today, it is only a question of time before the classic infrastructure simply collapses on itself. But before we look at the "revolutionary" alternatives, let's look at some of these surrounding technologies, as they play an important role in the future of computing regardless.

Technologies That Mitigate Infrastructure Issues
Technologies that have made the largest positive impact and allowed the current three-layer infrastructure to stay put, albeit in a wobbling state, include server and storage virtualization, data deduplication, compression, WAN optimization, and flash in a variety of implementations, including hybrid arrays and disk-based backup appliances. Of course, a list such as this would necessarily have to include cloud computing, cloud storage, and Hadoop (along with all its associated products), even if we would be hard pressed to call them "surrounding" technologies. I would also put scale-out architectures on that list.

Each of these technologies, in its own unique way, has helped keep the balance. For instance, server virtualization brought consolidation and agility; storage virtualization delivered improvements in provisioning speed, capacity utilization, and management; data deduplication and compression enabled HDDs to be used economically for backup and restore, with associated improvements in RTOs, RPOs, and DR; and WAN optimization made sure remote office employees no longer felt like second-class citizens of the enterprise. Of course, flash is on its way to revolutionizing application performance; cloud computing is fundamentally changing how we consume compute and storage resources; and Hadoop is helping extract information out of mounds of collected data so better business decisions can be made.

With the exception of cloud computing and Hadoop, however, all these technologies have been bolted on to the traditional three-layer architecture of the 1970s. As a result, the overall IT stack today looks like a mishmash of technologies, essentially compute, networking and storage layers surrounded by the plethora of new technologies mentioned above. This raises the question: Is this the best way to run the railroad? The answer, as you guessed, is a definite no. Two essential approaches have appeared on the horizon: convergence and what we call hyperconvergence. We will look at each in more detail.

Convergence Defined
In a bid to simplify the IT infrastructure, a number of vendors, especially the legacy players, started bundling specific configurations of compute, networking, storage and server virtualization and pre-testing them for interoperability and performance for targeted workloads. The first one to market was VCE, a joint collaboration of VMware, Cisco and EMC, with Cisco providing the compute and networking components. Specific configurations were pre-tested for strict interoperability and performance for workloads such as SAP, MS Exchange or VDI. Management was simplified by adding software that viewed the unit as a whole; however, if a layer was not performing adequately, the regular tools that came with that layer were used to diagnose and change configurations.

I think of this type of configuration as taking three atoms and combining them to create a molecule. You buy, deploy, run, and manage the unit as a molecule. If you buy the right model number for the task, the probability that it will deliver the right SLA for the applications is higher than if you bought these layers separately from three different vendors and put them together yourself. The burden of deciding which models were appropriate to mix together to do a specific job was taken off the buyer. This simplification is far from trivial. With customers looking to deploy cloud-scale infrastructures, one could drop these molecules into place, knowing they work at a specific level of performance. Management became easier and deployment time went down from weeks or months to days. Just as importantly, the TCO was impacted favorably.

Over the past three years, all major players have jumped into this fray that the market calls Convergence. HP offers CloudSystem Matrix; NetApp worked with Cisco and VMware to offer FlexPod; Dell combined PowerEdge servers, EqualLogic arrays and Force10 network switches to deliver their converged solution as Active Infrastructure; and IBM offers PureFlex Systems, which combine IBM POWER and x86 server blades and Storwize V7000 storage with networking, server virtualization and management components.

While these converged systems provided ease of purchase, deployment and use, along with significantly improved sharing of resources, a fundamentally different phenomenon was taking place in the market. At Taneja Group we call this hyperconvergence and consider it to be distinct from convergence. Alternatively, one could think of hyperconvergence as a continuation and maturation of convergence, but we prefer to keep the categories separate on the fundamental assumption that players along the convergence axis cannot simply improve their products and become hyperconverged without serious architectural changes. In other words, we believe that true hyperconvergence can only be achieved by starting with a clean slate and not by mixing existing pieces.

So what is hyperconvergence and how is it different from convergence?

Hyperconvergence Defined
We believe hyperconvergence occurs when you fundamentally architect a new product with the following requirements:

  1. A genuine integration of compute, networking, storage, server virtualization, primary storage data deduplication, compression, WAN optimization, storage virtualization and data protection. No need for separate appliances for disk-based data protection, WAN optimization or backup software. Full pooling and sharing of all resources. A true datacenter building block: just stack the blocks, and they form a larger pool of complete datacenter infrastructure.
  2. No need for separate acceleration or optimization solutions to be layered on or patched in; performance features such as auto-tiering, caching and capacity optimization are all built in. As such, no need for separate flash arrays or flash caching software.
  3. Scale-out to web scale, locally and globally, with the system presenting one image. Manageable from one or more locations. Radical improvement in deployment and management time due to automation.
  4. VM centricity, i.e. full visibility and manageability at the VM level. No LUNs, volumes or other low-level storage constructs.
  5. Policy-based data protection and resource allocation at the VM level (a minimal sketch of what such a policy might look like follows this list).
  6. Built-in cloud gateway, allowing the cloud to become a genuine, integrated tier for storage or compute, or both.
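
To make requirements 4 and 5 more concrete, here is a minimal, hypothetical sketch of what a VM-centric policy could look like. The field names (priority, iops_limit, rpo_minutes and so on) are illustrative assumptions on our part, not any vendor's actual interface; the point is simply that protection and performance intent attach to the VM, not to a LUN or volume.

```python
from dataclasses import dataclass

# Hypothetical illustration of VM-centric, policy-based management: protection
# and resource targets are attached directly to each VM, with no low-level
# storage constructs in sight. All names here are assumptions, not a real API.

@dataclass
class VMPolicy:
    vm_name: str        # policies are keyed by VM, not by LUN or volume
    priority: str       # e.g. "gold", "silver", "bronze"
    iops_limit: int     # performance ceiling the platform should enforce
    rpo_minutes: int    # how much data loss is tolerable
    rto_minutes: int    # how quickly the VM must be restorable
    backup_copies: int  # number of protected copies to retain
    replicate_to: str   # remote cluster used as a DR target, or "none"

policies = [
    VMPolicy("exchange-01", "gold", 20000, rpo_minutes=15, rto_minutes=30,
             backup_copies=3, replicate_to="dr-site"),
    VMPolicy("test-web-07", "bronze", 1000, rpo_minutes=1440, rto_minutes=480,
             backup_copies=1, replicate_to="none"),
]

for p in policies:
    # In a hyperconverged system the platform, not the administrator, would
    # translate this intent into placement, caching and replication actions.
    print(f"{p.vm_name}: {p.priority} tier, {p.iops_limit} IOPS cap, "
          f"RPO {p.rpo_minutes} min, RTO {p.rto_minutes} min")
```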

With today's converged systems one would have to add separate backup appliances, backup software, WAN optimization appliances, flash arrays, flash caching software, cloud gateways, and more, to get to the conceptual equivalent of the above. But one would still not achieve VM-centricity, web scale, space and power savings, or the ability to manage all these pieces as a whole. One could get close, but no cigar. This is why we believe hyperconvergence is a separate and distinct category. According to our definition, three systems in the market fall into this category: Nutanix, SimpliVity and Scale Computing.

Nutanix came to market first with a "built from scratch" hyperconverged system that met most of the requirements from day one. Missing initially were data deduplication and global management capability, which Nutanix added recently in rev 4.0. The Nutanix Virtual Computing Platform is VMware-based, but Hyper-V was added as an option in rev 4.0. One can now build one cluster using VMware-based nodes and a separate cluster using Hyper-V nodes, and manage the whole, globally, as one instance.

SimpliVity took a slightly different tack. Given that many of its developers came from Diligent Technologies (now IBM), the purveyor of in-line data deduplication appliances, SimpliVity started with the premise that data should be reduced to its smallest size at inception and kept that way for its entire lifecycle, whether it is being moved, stored or analyzed, except when it needs to be viewed by a user. To ensure that this capability stood out, SimpliVity developed a special PCIe card to handle the number crunching required by the deduplication algorithms without impacting ingest performance.

Nutanix, on the other hand, wanted to stay true to a 100 percent commodity hardware strategy, so they chose post-processing data deduplication for HDD to ensure zero impact on performance. For main memory and flash storage, Nutanix chose in-line data deduplication, which makes these small capacities effectively much larger.
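
To illustrate the trade-off between the two approaches, here is a minimal, hypothetical sketch of content-hash deduplication. The structure and names are our own illustrative assumptions, not either vendor's implementation; the only point is where the same reduction work happens, on the write path (in-line) or in a later background pass (post-process).

```python
import hashlib
import zlib

# Hypothetical sketch of content-hash deduplication; names and structure are
# illustrative assumptions, not either vendor's actual implementation.

store: dict[str, bytes] = {}   # fingerprint -> one compressed copy of unique data
catalog: list[str] = []        # ordered fingerprints; enough to rebuild the stream

def dedupe_chunk(chunk: bytes) -> None:
    """Fingerprint a chunk and store only one compressed copy of unique data."""
    digest = hashlib.sha256(chunk).hexdigest()
    if digest not in store:
        store[digest] = zlib.compress(chunk)
    catalog.append(digest)

# In-line dedup (the SimpliVity-style choice) runs dedupe_chunk on the write
# path itself, which is why offloading the hashing to a PCIe card matters.
# Post-process dedup (the Nutanix choice for HDD) writes raw data first and
# runs the same reduction as a background scan, so ingest is never slowed.

for block in [b"VM boot image", b"user data", b"VM boot image"]:
    dedupe_chunk(block)

print(f"{len(catalog)} logical chunks stored as {len(store)} unique chunks")
```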

Regardless of the differences, both products meet the essential premise of hyperconvergence; the differences between them are architectural and can only be weighed through hands-on evaluation.

Scale Computing, for its part, is targeted at the lower end of the market and uses KVM as the hypervisor. Given the open source nature of KVM, Scale was able to integrate KVM into its architecture more tightly than one could with VMware or Hyper-V. At least at this point in time Scale does not offer data deduplication, but most other requirements of hyperconvergence are met in full. For smaller IT shops where IT specialists are rare, the ability to buy the whole infrastructure as a unit and manage it as such can be a gift from the heavens.

Benefits of Hyperconvergence
The best way to think of hyperconverged systems is to think of them as "infrastructure in a box." You can start with the minimum number of nodes the vendor requires -- two for Nutanix and SimpliVity and three for Scale Computing. All the functionality we've mentioned as requirements is included in each node. Installation and deployment times are trivial. You decide on the importance of each VM you wish to run and assign each a priority, which determines how much resource is made available to that VM in terms of IOPS, throughput, latency, degree of protection, RTO/RPO, etc. The system does the rest. All relevant data is available on a VM by VM basis.

If more resources are needed, given the mix and resource requirements of all VMs, the system alerts the operator that one or more additional nodes are needed. Adding the nodes is simple: the cluster recognizes the additional nodes automatically, and their storage and compute resources become available instantly. If remote clusters are installed, the clusters can recognize each other and present a single image to the IT administrator. All data is presented at the VM level and there are no low-level storage tasks (provisioning, LUN creation, virtualization, expansion/contraction of volumes, balancing workloads, etc.) to be performed. As such, the server virtualization administrator can easily manage the entire cluster without requiring deep storage expertise.
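
As a rough sketch of that scale-out behavior (with hypothetical class and method names of our own, not any vendor's API), each added node simply contributes its compute and storage to one shared pool:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a hyperconverged cluster absorbing new nodes into a
# single shared pool; names are illustrative, not any vendor's actual API.

@dataclass
class Node:
    name: str
    cpu_cores: int
    ram_gb: int
    storage_tb: float

@dataclass
class Cluster:
    nodes: list[Node] = field(default_factory=list)

    def add_node(self, node: Node) -> None:
        """New nodes are recognized and their resources pooled automatically."""
        self.nodes.append(node)

    def pooled_capacity(self) -> dict[str, float]:
        """Present the cluster as one pool, not as individual boxes."""
        return {
            "cpu_cores": sum(n.cpu_cores for n in self.nodes),
            "ram_gb": sum(n.ram_gb for n in self.nodes),
            "storage_tb": sum(n.storage_tb for n in self.nodes),
        }

cluster = Cluster()
for i in range(3):
    cluster.add_node(Node(f"node-{i + 1}", cpu_cores=16, ram_gb=256, storage_tb=20.0))

# A fourth node simply grows the same pool; no LUNs to carve or volumes to rebalance.
cluster.add_node(Node("node-4", cpu_cores=16, ram_gb=256, storage_tb=20.0))
print(cluster.pooled_capacity())
```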

Management, even at the global level, becomes trivial compared to managing traditional architectures or, for that matter, converged architectures. All data exchange across the WAN happens efficiently, using WAN optimization methods: only unique data is sent across, and even that in compressed form.
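
As a simplified illustration of that WAN behavior (under assumed names, not any product's actual protocol), the sending site checks which chunk fingerprints the remote site already holds and ships only the compressed chunks it is missing:

```python
import hashlib
import zlib

# Simplified, hypothetical sketch of deduplicated, compressed WAN replication.
# Function names and data structures are illustrative assumptions.

def replicate(chunks: list[bytes], remote_index: set[str]) -> list[bytes]:
    """Send only chunks the remote site does not already have, compressed."""
    payload = []
    for chunk in chunks:
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in remote_index:
            payload.append(zlib.compress(chunk))  # unique data travels compressed
            remote_index.add(digest)              # the remote site now holds it
    return payload

# Toy usage: the remote cluster already holds the golden VM image chunk,
# so only today's changed blocks actually cross the WAN.
base = b"golden VM image chunk"
remote = {hashlib.sha256(base).hexdigest()}
to_send = replicate([base, b"today's changed blocks"], remote)
print(f"{len(to_send)} chunk(s) actually cross the WAN")
```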

We believe this level of functionality and integration can only happen if one starts with a clean slate. It is hard, if not impossible, to make this happen with equipment from ten or more vendors, each with its own idiosyncrasy. This is why we believe hyperconvergence may conceptually be viewed as an evolution of convergence, but in reality it is more "revolutionary" than not, even if most components of the technology it uses are well defined and mature.

Where Does Virtual SAN Fit In?
VMware announced Virtual SAN as a product that essentially allows a number of compute nodes with local HDD and flash storage (DAS) to pool their storage resources and make the pool available to all applications running as VMs. vCenter becomes the central place to manage the entire cluster (no separate storage console). Configurations as large as 32 nodes were announced, and the product is being targeted at midsize and large enterprises for all but tier-1 workloads. All vSphere services are available, including vSphere Data Protection Advanced, vCenter Site Recovery Manager, vMotion, Storage vMotion, DRS, etc.

The Future of Hyperconverged Solutions
If the current reception of hyperconverged solutions by midsize and large IT shops is any indication, hyperconverged solutions will cut deeply into traditional architecture-based solutions, and they will do so very quickly. The pain of managing large infrastructures has become so acute that hyperconvergence looks almost like a panacea. The combination of workload unpredictability, the pace at which new data is coming into the enterprise, and the requirement to deliver results instantly all point to a new architecture that adapts and adjusts automatically, with little or no human intervention.

While traditional architectures keep improving in all these dimensions, incremental improvements are just not enough. Hyperconverged solutions could not have arrived at a more opportune time.
