The Composable Datacenter of the Future

IT infrastructure is always evolving. The next big change will be back toward an old, familiar model, closer to the days of the mainframe than the era of siloed servers that has dominated for decades.

As the progressive abstraction of datacenter resources continues its inexorable march forward, many organizations are turning to converged and hyper-converged infrastructures to meet the needs of a business whose demands on IT are increasing exponentially. Although these infrastructure models are a giant leap forward in moving the datacenter toward simplicity and efficiency, another paradigm looms on the horizon: "composable" infrastructure.

The entirely software-defined, API-driven composable datacenter will offer a level of flexibility beyond anything available to adopters of converged and hyper-converged infrastructure models. Although it will sacrifice some simplicity, many IT business leaders will look to composable infrastructure in the next few years to provide the foundation for their next-generation IT initiatives.

Most people are still catching up on the possibilities that converged and hyper-converged infrastructure can afford them, and see composable infrastructure as so far off as to be unimportant. Although there are still some substantial advances to be made before a fully composable datacenter becomes a reality, it's not as far off as you might think.

What Does It Mean for Infrastructure To Be "Composable"?
Software that enables the abstraction of infrastructure resources transformed the modern datacenter—most notably, the abstraction of physical machines into virtual machines (VMs) via a hypervisor. Spurred on by the benefits many organizations realized by abstracting the machine construct, some datacenter thinkers have been working toward building a "hypervisor for the datacenter."

In essence, it means moving the abstraction of infrastructure resources to an even broader and deeper scope that includes networking and containers, as well as higher-level components like service discovery and resource management.

When we reach the state of a fully composable datacenter, applications will be able to define their resource requirements (as opposed to the administrator defining them); then some datacenter control software will logically provision all the resources the application needs, disregarding underlying hardware boundaries altogether.
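To make the idea concrete, here's a minimal sketch of what an application-supplied requirement spec might look like. The field names and schema are invented for illustration; no standard for such a spec exists yet:

```python
import json

# Hypothetical example: the application itself declares what it needs,
# rather than an administrator sizing a "machine" for it. The schema and
# field names below are illustrative, not part of any real standard.
APP_REQUIREMENTS = {
    "app": "order-service",
    "resources": {
        "cpu_cores": 16,
        "dram_gb": 128,
        "scm_gb": 512,       # storage class memory
        "network_gbps": 25,
    },
}

# What the application would hand to the datacenter control software.
spec_json = json.dumps(APP_REQUIREMENTS)

def validate(spec):
    """A control plane would first sanity-check the request: every
    requested resource amount must be positive."""
    return all(amount > 0 for amount in spec["resources"].values())
```

The control software would then satisfy such a spec from rack-wide pools, with no regard for which physical server any given core or gigabyte lives in.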

In this future version of the datacenter, a low-level control mechanism will disaggregate cores, dynamic random access memory (DRAM), storage class memory, storage and networking so that the physical layout of infrastructure resources is irrelevant to applications. Really, the old notion of a "machine" becomes obsolete.

That's right: the same way that virtual desktop infrastructure (VDI) has made the idea of full desktop computers less relevant, server infrastructure will likely go the way of the Dodo bird. Future datacenters will resemble mainframes much more than they resemble the islands of x86 CPU and memory we have today.

Although thinking about abstracting the datacenter in this way continues to move datacenter design in a helpful direction, there's something still fairly limiting about how datacenters are constructed today: the units for assembling the infrastructure remain rather inflexible. Memory and CPU are tightly bound to the motherboard of a single server, and therefore can't be aggregated and pooled across units the same way that a resource like storage can.

What's Compelling About Composable Infrastructure?
Because the needs of the datacenter have become more dynamic over time, infrastructure design and management need to become increasingly dynamic, as well. If they don't, we'll see the cancer known as operational complexity continue to eat away at the profitability and stability of IT organizations. Composability might be the solution to that particular problem (or at least one of the solutions).

Hewlett Packard Enterprise (HPE) CEO Meg Whitman is quoted as having said, "We're now living in an idea economy where success is defined by the ability to turn ideas into value faster than your competition." Few things pose a more significant threat to quickly turning ideas into value than operational sluggishness on the part of IT. If you think about the cause of slowness, the root of it is people, isn't it? People take too long to approve changes, place orders, complete configurations and so on.

That will always be the case to some degree. We'll never fully remove humans from the equation. However, the growing movement of treating "infrastructure as code" is a step in the right direction, and plays very nicely with the idea of composable infrastructure.

In a composable datacenter, API calls from applications will call into existence, manipulate and destroy underlying infrastructure elements without the need for any human intervention whatsoever. This approach delivers not only more speed, but also greater accuracy, security and task throughput.
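That compose-use-destroy lifecycle can be sketched in a few lines. The `Composer` class below is purely hypothetical, a stand-in for the datacenter control software: it carves logical nodes out of rack-wide CPU and memory pools and returns the capacity when a node is destroyed:

```python
# Hypothetical sketch: a control plane that composes a logical "machine"
# from disaggregated rack-level pools and releases it on teardown.
# The Composer class and its API are illustrative, not a real product.

class Composer:
    def __init__(self, cpu_cores, dram_gb):
        # Free capacity tracked across the whole rack, not per server.
        self.free = {"cpu_cores": cpu_cores, "dram_gb": dram_gb}
        self.nodes = {}

    def compose(self, name, cpu_cores, dram_gb):
        """Carve a logical node out of the shared pools; no human in the loop."""
        if cpu_cores > self.free["cpu_cores"] or dram_gb > self.free["dram_gb"]:
            raise RuntimeError("insufficient pooled capacity")
        self.free["cpu_cores"] -= cpu_cores
        self.free["dram_gb"] -= dram_gb
        self.nodes[name] = {"cpu_cores": cpu_cores, "dram_gb": dram_gb}
        return self.nodes[name]

    def destroy(self, name):
        """Return a logical node's resources to the shared pools."""
        node = self.nodes.pop(name)
        self.free["cpu_cores"] += node["cpu_cores"]
        self.free["dram_gb"] += node["dram_gb"]

# An application's API call composes a node, uses it, then destroys it;
# the pools end up exactly where they started.
rack = Composer(cpu_cores=256, dram_gb=4096)
rack.compose("analytics", cpu_cores=64, dram_gb=1024)
rack.destroy("analytics")
```

Note what's absent: no ticket, no approval, no physical server boundary. The only hard limit is total pooled capacity.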

As a result of the greater flexibility and programmability of the infrastructure, you can expect a number of outcomes:

  • Shortened development lifecycles
  • Reduced operational costs
  • Increased output from the same staff

Today's technology isn't quite ready to lead us into a fully composable datacenter just yet. We're getting close, but there are still a few things that stand in the way.

What's Still Missing Before Full Composability Is a Reality?
As great as the technology is today, you'll recall from earlier that datacenter infrastructure is still too inflexible for realizing the total composability dream. One thing that IT is unequivocally incapable of today is aggregating CPU capacity across multiple server units. In a fully composable datacenter, four different CPU sockets could be combined to create a single logical CPU that acts as if it were inside a single one of today's servers.

Another challenge IT has is that datacenter infrastructure is made up of all kinds of different hardware from a variety of different manufacturers. Orchestrating all of it is a tall order, and currently there's no set of standardized ways for directing that infrastructure.

These aren't the only two barriers to full composability, but they're the big ones. Fortunately, the industry giants responsible for most major shifts in datacenter computing are hard at work solving these problems by way of two different industry alliances.

The Gen-Z Consortium's New Approach to Data Access
First, the lack of an I/O fabric that can disaggregate CPU and motherboard-level components from the server unit needs to be addressed. This has been attempted with things like Broadcom's ExpressFabric and PCIe switches. The problem remains that today's protocols and interconnects have entirely too much latency and too little bandwidth to allow the normal operations of CPU and memory across them.

To that end, the Gen-Z Consortium has formed to collaborate on building "an open systems interconnect designed to provide memory semantic access to data and devices via direct-attached, switched, or fabric topologies." According to the Gen-Z Web site, it's a "very efficient, memory-semantic protocol that simplifies hardware and software designs, reducing solution cost and complexity. Gen-Z supports a wide range of signaling rates and link widths that enable solutions to scale from tens to several hundred GB/s of bandwidth with sub-100 ns load-to-use memory latency."

In other words, Gen-Z is going to be a new kind of interconnect that removes the latency associated with existing protocols and allows devices to communicate directly with the CPU in the way that DRAM does today. This will pave the way for the widespread adoption of a new breed of devices like storage class memory. Moreover, this low-latency interconnect will allow the disaggregation and relocation of devices that typically had to be located right next to the CPU.

Almost 50 manufacturers we know and love, including Broadcom, Dell EMC, HPE, IBM, Micron, Seagate, VMware and Western Digital, are working on bringing Gen-Z to life and to the marketplace. This isn't the only potential solution to the disaggregation problem, but it's certainly a promising one.

Redfish API Manages Modern, Scalable Platform Hardware
In addition to the flexibility to disaggregate server-local resources to the rack scale, a better way to manage it all is needed. Hardware footprints continue to grow because very few organizations are running fewer workloads and storing less data; it's almost invariably the opposite. The Distributed Management Task Force (DMTF) creates open manageability standards spanning diverse emerging and traditional IT infrastructures, including cloud, virtualization, network, servers and storage.

As was the case with the Gen-Z consortium, the who's who of datacenter infrastructure players are involved. The board members include the likes of Broadcom, Dell, HPE, Hitachi, Intel, Lenovo, NetApp and VMware. And the DMTF is responsible for driving the development and adoption of many helpful infrastructure standards. Virtualization administrators are probably familiar with the Open Virtualization Format (OVF); that's the work of the DMTF.

In the composability picture, the DMTF has collaborated to conceive Redfish, which "is an open industry standard specification and schema that specifies a RESTful interface and utilizes JSON and OData to help customers integrate solutions within their existing tool chains." Although it will become something much bigger than this, Redfish is taking its first steps into the datacenter with server management in the IPMI space (think iDRAC, iLO).

Redfish allows administrators to programmatically manage servers via a RESTful API by providing JSON data; this way of interacting with infrastructure will be very familiar to modern developers and DevOps teams.
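As a flavor of what that interaction looks like, here's a small sketch that builds a standard Redfish request. The URI layout (`/redfish/v1/Systems/...`) and the `ComputerSystem.Reset` action follow the published Redfish schema; the BMC address and system ID are placeholders:

```python
import json

# Sketch of Redfish-style request construction. The URI paths follow the
# published Redfish schema; the controller address below is a placeholder.
BMC = "https://bmc.example.com"  # placeholder management controller address

def system_uri(system_id):
    """URI for one ComputerSystem resource in the standard Redfish tree."""
    return f"{BMC}/redfish/v1/Systems/{system_id}"

def reset_request(system_id, reset_type="GracefulRestart"):
    """URL and JSON body for the standard ComputerSystem.Reset action."""
    url = f"{system_uri(system_id)}/Actions/ComputerSystem.Reset"
    body = json.dumps({"ResetType": reset_type})
    return url, body

# In practice you would POST `body` to `url` over an authenticated HTTPS
# session (for example with the requests library), exactly as you would
# call any other REST API.
```

Because everything is plain HTTPS and JSON, the same tooling that drives cloud APIs (scripts, CI pipelines, configuration management) can drive the hardware.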

Although v1.x of Redfish is limited in scope to IPMI-type things like power management, discovery, console access and BMC configurations, you can easily imagine how the scope can expand to include all manner of datacenter infrastructure components. Once they're all manageable via a standardized, unified API, the composable datacenter starts to become a reality.

Food for Thought
The technology that will make the fully composable datacenter a reality is coming along nicely; it's most likely only a handful of years from being ready. That makes it important for IT leaders and practitioners alike to start considering the ramifications this new paradigm will have on their current operations, and how they might prepare to adopt it when the time is right.

The primary reason that composability will be huge is that it has a distinct cloud-ness to it and it lends itself very well to the software-defined datacenter (SDDC) model. In driving toward the SDDC utopia, the more parts of the datacenter that can be defined in software, the closer you get. Being able to logically provision infrastructure at will via an API is certainly another step in the right direction.

As this technology evolves, one of the first places adoption will make lots of sense is in the service provider space. If you're operating some "as-a-Service" platform—particularly Infrastructure-as-a-Service (IaaS) or Platform-as-a-Service (PaaS) offerings—the flexibility composability offers will be incredibly valuable. The entry point for enterprise organizations will be the point at which the composable datacenter technology is mature enough to drastically change what they're doing with private cloud initiatives.

For some organizations, that could even be today. Some early versions of the composable model are already shipping, most notably HPE Synergy. However, for some organizations, it won't be a significant-enough change to warrant the investment until the technology exists to fully pool and aggregate CPU and memory at the rack scale.

As you keep an eye on composable systems technology over the next couple of years, here are some things to be watching for and evaluating:

  • How comprehensively does the solution disaggregate infrastructure resources so that they can be recombined logically? Specifically, can it disaggregate CPU? This will be the most difficult.
  • How unified and universal is the API with which the infrastructure is controlled? It may or may not involve Redfish (and its extensions like Swordfish), but regardless of the exact implementation, how broad is the scope of the datacenter API you'll be adopting?
  • Assuming it's sufficiently broad, how rich is it? It won't do much good to be able to collect read-only system statistics and serial numbers from the widest array of gear. How thoroughly does the API allow for ongoing configuration and fluidity of infrastructure elements?

What's Old Is New Again
It's almost comical how circular datacenter evolution is. About five years ago, I was building VDI infrastructure (which looked oddly reminiscent of dumb terminals) to replace desktop computers (which, at one point, were so much better than terminals).

The server side is changing similarly to the user side, albeit at a slower pace. The scale-up world of mainframes was abandoned in favor of scale-out x86 silos, and it was all well and good until … IT started slowly creeping back toward composable systems. In the future, it's probable that many will choose to run one giant, amorphous pool of infrastructure resources that can scale up to massive configurations and is carved up logically on a per-application basis … sound familiar?

