Dan's Take
The Return of Fragmented Computing?
The past may, yet again, be prologue when it comes to datacenter architecture.
- By Dan Kusnetzky
- 10/07/2016
Recently, I spoke to the folks at Kaleao about the company's "hyperconverged" ARM-based servers and server appliances. While I found the discussion and the products the company is offering quite intriguing, I couldn't help but see this announcement within a larger context. I guess that's the burden of being a software archaeologist.
Kaleao KMAX
Our discussion focused on today's market and the desire of many enterprises to swing the pendulum back from complex, overly distributed computing environments and re-converge computing functions on an inexpensive, but very powerful and scalable, platform. The company described how it looked at the same problems everyone sees, but tried to think differently about possible solutions.
The company decided to start with a clean sheet of paper, building a computing environment based on ARM processors and drawing on the best of today's massively parallel hardware and software designs.
Here's how the company describes its offering, KMAX:
KALEAO brought together technologies in the KMAX platform following the design principles of low power consumption, data locality, high density and high performance. The result is a significant gain for customers in terms of:
- Performance density: 1,536 CPU cores, 370TB of all-flash storage and 960Gb/s of I/O in 3U of rack space - up to 10 times the performance density of today's typical hyperconverged offerings, blades and rackmount solutions.
- Energy efficiency: less than 15W for each 8-core server with 10Gb/s I/O, providing over four times the performance per unit of energy spent.
- Cost reduction: KMAX further reduces an organization's cost of ownership by over three times by allowing the adoption of web-scale, flexible and manageable infrastructure, paving the way for enterprises to achieve more efficient and agile IT management that translates into bottom-line savings.
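Those density claims are easy to sanity-check with a little arithmetic. Here's a quick back-of-the-envelope sketch in Python, using only the figures Kaleao quotes above; the derived node count, cores-per-U and power envelope are my own arithmetic, not vendor specifications:

```python
# Back-of-the-envelope check on Kaleao's published KMAX figures.
# Inputs are the company's claims quoted above; the derived values
# are my own arithmetic, not vendor specifications.

cores_per_chassis = 1536   # claimed CPU cores in one 3U chassis
cores_per_server = 8       # claimed cores per ARM server node
watts_per_server = 15      # claimed "less than 15W" per 8-core server
rack_units = 3             # chassis height

server_nodes = cores_per_chassis // cores_per_server   # 192 nodes
cores_per_u = cores_per_chassis // rack_units          # 512 cores/U
max_power_watts = server_nodes * watts_per_server      # under 2,880 W

print(f"{server_nodes} server nodes per chassis, "
      f"{cores_per_u} cores per rack unit, "
      f"under {max_power_watts} W worst case")
```

If the claims hold, that's 192 eight-core nodes drawing less than 2,880W in 3U; the energy-per-core figure, not raw single-thread speed, is the heart of the company's pitch.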
Dan's Take: Is Divergence the New Standard?
Although the KMAX is interesting all by itself, I couldn't help but think about the trends that led up to the existence of both Kaleao and KMAX. Seemingly long ago, I watched the market slowly consolidate on Intel's x86 processor architecture. Software and hardware suppliers appeared to believe (at the time) that it would be easier and much less costly to become part of another vendor's ecosystem, allowing many of them to drop the resource-intensive process of developing and supporting their own processor architecture.
Watching the Market Coalesce
At one point, nearly every systems supplier had its own processor architecture, operating systems, development tools and data management software. The market was highly fragmented, but enterprises could find systems and software that were precisely focused on their needs. We saw 8-bit, 12-bit, 16-bit, 24-bit, 32-bit and even 36-bit designs. Now, we're seeing 64- and 128-bit designs emerge.
The development of a personal computer market back in the late 1970s -- a very high-volume, low-cost market -- drove a consolidation of both hardware and software, with implications we're still dealing with nearly 40 years later.
IBM still dominated the mainframe market and controlled both system architecture and system software. As PCs came to take over the desktop, we saw a similar architecture, based on Intel's x86, move to take over entry-level servers, then workgroup servers, then midrange servers; and today, much of the datacenter.
x86 Dominance
Some suppliers, such as Microsoft, saw this consolidation as an opportunity to displace many software suppliers by offering an increasingly integrated portfolio of software based on this "industry standard" platform. By undercutting the others on software cost, Microsoft moved ahead of them, but it now faces a market that increasingly sees software as a commodity. To secure its revenue stream, Microsoft is well on the way to moving the industry away from the packaged software model to a software-as-a-service, subscription-based model.
x86 Hardware, From the PC to the Datacenter
Initially, the x86 architecture wasn't as robust or powerful as some competing offerings. It had fewer registers, could access only a limited amount of memory, and could address only a limited amount of storage. Furthermore, at the beginning, the hardware couldn't easily support virtual computing environments.
As time went on, enterprises moved to standardize on x86-based systems due to their perceived low cost, and found ways to work around the limitations of the hardware to accomplish their goals.
Intel and the other suppliers of x86 processors enhanced the architecture, allowing it to address larger amounts of memory and process larger pieces of data, and, relatively recently, added the hardware primitives (Intel VT-x and AMD-V) that support virtual environments.
The Downsides of a Monolithic Market Emerge
Experts in the areas of security, technical computing and graphics processing have long spoken about the problems that can emerge when everything from a watch to a toaster to a PC to enterprise servers is based on the same architecture. Some worried that a single architecture would make it far too easy to create malware that works across many classes of devices. Others looked at the unified world of x86 systems and worried about performance, the size of executable software, power consumption and heat production.
Divergence, Like Convergence, Started at the Ends of the Market
As we entered the smartphone/tablet/Internet of Things world, we saw a different center begin to emerge; customers began to worry more about power consumption and battery life than processing power. Since the processors were now more than powerful enough to handle the tasks imposed upon the systems, customers demanded devices that would do the job, but wouldn't run through batteries quickly. Although Intel offered products for this space, ARM became the architecture of choice for Android- and iOS-based devices.
On the other end of the spectrum, the high performance and technical computing markets focused on massively parallel computing architectures, and some discovered that graphics processors, such as those offered by NVIDIA, could be pressed into service as general-purpose compute engines.
Suppliers such as HP and Kaleao are offering ARM-based systems today, and there are hints that web-scale operators such as Google and Facebook are quietly moving some of their operations onto machines in this class.
Suppliers such as Cisco, Exxact and HP are offering NVIDIA GPU-based accelerators and servers.
Are we seeing the market diverge after 30 years of x86 dominance? It's clear that ARM has come to dominate the market for handheld devices. NVIDIA-based processing is starting to make itself known in the worlds of high performance computing and digital content/graphics.
About the Author
Daniel Kusnetzky, a reformed software engineer and product manager, founded Kusnetzky Group LLC in 2006. He's literally written the book on virtualization and often comments on cloud computing, mobility and systems software. He has been a business unit manager at a hardware company and head of corporate marketing and strategy at a software company.