In-Depth

A New Spin On Virtualization

The idea behind server startup Kaleao is that each application gets its own resources, rather than fighting over shared ones.

Server startup Kaleao has come out of stealth mode with its ridiculously dense ARM servers. Kaleao has been described using every term under the sun: converged, hyper-converged, virtualized, physicalized, containerized and who knows what else. Kaleao is trying to do an old thing in a new way, and it may change how all of us end up doing virtualization.

Virtualization using a hypervisor creates a bunch of virtual resources and presents them to workloads. You can massively overcommit these resources (or not) as you choose. The hypervisor then figures out how to execute everything against the physical resources it has available.

Virtualization using a microvisor is the art of putting blinders on applications so that they don't see any of the other applications running on the hardware. Applications believe they have access to the whole system's physical resources. When other containers exist, they can see chunks of host resources are in use, but (hopefully) can't see what is using them, or affect those areas of the hardware in any way.

Why Kaleao Is Different
Kaleao implements a specialized clustered microvisor across a massive array of weak but plentiful nodes. When you want to spin up a workload, Kaleao hives off physical resources and dedicates them to it.
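Kaleao does this partitioning in hardware, but the basic idea of handing a workload its own dedicated cores can be sketched on an ordinary Linux box with CPU affinity. This is a rough analogy, not Kaleao's actual mechanism, and the function name is mine:

```python
import os

def dedicate_cores(pid: int, cores: set) -> set:
    """Pin process `pid` to `cores` and return the affinity actually applied.

    Once pinned, the scheduler will only ever run this process on the
    given cores -- a crude software version of dedicating physical
    resources to a single workload. Linux-only (os.sched_setaffinity).
    """
    os.sched_setaffinity(pid, cores)
    return os.sched_getaffinity(pid)

# Pin the current process (pid 0 means "self") to core 0 alone.
print(dedicate_cores(0, {0}))  # {0}
```

The point of the analogy: with affinity set, nothing else the process does can spill onto other cores, and (combined with cgroups for memory and I/O) nothing else contends for the cores it was given.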

A typical 3U Kaleao chassis contains 12 blades. Each blade has 4 nodes, each node has 4 servers and each server has 8 ARM cores, 128GB of non-volatile cache and 4GB of RAM. A node can have up to 7.7TB of NVMe SSD installed.

The short version is that a rack of Kaleao can have more than 5PB of flash, more than 21,000 cores of compute, 24TB of RAM and over a terabit of network capacity. Little wonder that Kaleao talks up the option to ship racks of its gear pre-configured for immersion liquid cooling.
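Those rack-level figures follow from the chassis spec. Assuming fourteen 3U chassis in a standard 42U rack (the chassis-per-rack count is my assumption; the per-chassis numbers come from the spec above), the core and flash claims check out:

```python
# Per-chassis figures from the spec sheet.
BLADES_PER_CHASSIS = 12
NODES_PER_BLADE = 4
SERVERS_PER_NODE = 4
CORES_PER_SERVER = 8
NVME_TB_PER_NODE = 7.7   # maximum NVMe SSD per node, in TB

# Assumption: a 42U rack holds fourteen 3U chassis.
CHASSIS_PER_RACK = 42 // 3

nodes_per_rack = CHASSIS_PER_RACK * BLADES_PER_CHASSIS * NODES_PER_BLADE
cores_per_rack = nodes_per_rack * SERVERS_PER_NODE * CORES_PER_SERVER
flash_tb_per_rack = nodes_per_rack * NVME_TB_PER_NODE

print(cores_per_rack)       # 21504 -> "more than 21,000 cores"
print(flash_tb_per_rack)    # ~5174 TB -> "more than 5PB of flash"
```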

What's worth thinking about here is how transformational Kaleao could be. Instead of spinning up a squillion containers or VMs and hoping the *visor can cope with everything contending for the same resources, each application can get its own physical resources. Coded right, each thread could get its own resources, further reducing contention.

To the virtual administrator, Kaleao doesn't seem like it will be that odd. It jacks into OpenStack and should behave in practice a lot like a standard microvisor. The difference, of course, is that you're not going to manually create workloads on a Kaleao rack, because you'd probably be there until retirement.

Cattle vs. Pets
Kaleao is -- emphatically -- infrastructure designed for heavy automation: Facebook-style Web farms, Big Data analytics and other forms of hyperscale computation. No "pets" workloads live here; only "cattle."

Kaleao makes me wonder if this may be the underpinning of a practical "private cloud" solution. Legacy "pets"-style workloads won't ever go away, but those can probably be handled with some hyper-converged gear running a traditional hypervisor. The heavy lifting, however, would be moved to heavily automated machine-generated and machine-controlled workloads running on something a lot like what Kaleao is putting forth.

Simple, small, but very, very plentiful resources that can be consumed 1:1 by workloads without contention. I expect Intel's version of this will be based on the Xeon Phi MIC gear it has been bringing to market. A slightly different take, but it could evolve into something much like Kaleao. It's too early to say exactly how this emerging category of workload management will affect the evolution of today's virtualization platforms and best practices. But if history is any guide, it will be in ways we least expect.

About the Author

Trevor Pott is a full-time nerd from Edmonton, Alberta, Canada. He splits his time between systems administration, technology writing, and consulting. As a consultant he helps Silicon Valley startups better understand systems administrators and how to sell to them.
