The Cranky Admin

The New Computing Bottlenecks

Storage breakthroughs are leading to new ways of thinking about infrastructure.

As the 2010s wind down, the tech industry finds itself navigating uncharted waters. After decades of continual innovation and discussion about Moore's law, we are about to enter an era where hardware and software are running up not against manufacturing limits, but against the laws of physics as we know them. This is producing some extreme shifts in performance and capability bottlenecks, even in small systems.

After more than a decade of a "Tick-Tock" product release cycle, Intel has had to abandon its regular cadence of CPU process shrinks. A large part of the reason is that getting the process down to 10nm is just plain hard. It's unclear how many more process shrinks are left: 7nm looks to be on the table, but anything smaller (such as the proposed 5nm process) leaves so few atoms per transistor that quantum effects come into play.

The lithography limits that constrain CPUs constrain NAND as well, and that puts a hard limit on how much RAM and flash you can pack into a given space. 3D technologies will buy us some time, but not much. Meanwhile, everyone's beavering away on next-generation materials. Carbon transistors, various new flavors of storage and silicon photonics will probably dominate the 2020s.

Between now and then, however, is a whole lot of "doing better with the stuff we already have." Intelligent combinations of hardware and software have resulted in solutions like NVMe: flash was too fast for SATA and SAS controllers, so we got rid of the slow controller. RDMA does something similar for networking, bypassing a pile of old software layers and letting systems talk directly to remote storage.

More than a decade of slightly faster components has come to an end. Today's industry-wide focus on efficiency is creating a level of uncertainty about what systems can actually do that hasn't existed since multi-core CPUs and virtualization landed at the same time.

Getting the Most Out of Your Host
Even limiting ourselves to storage, it's easy to find examples of traditional bottlenecks being redefined. How many NVMe drives does it take to fill a dual-CPU Xeon's bus? I don't know. Do you? Which RAID controllers or SAS HBAs will top out at eight drives? 12? 24?

If I can load a system up with 16 NVMe drives and somehow get them to operate at full capacity, what can I do with them? If I wanted to build a hyperconverged server, how much networking do I need to handle three of those nodes? What about 16? Can I even push that much data and networking across a single bus?
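Nobody hands you those numbers, but the back-of-envelope arithmetic is worth doing. Here's a rough sketch in Python; the per-drive and per-socket figures (PCIe 3.0 x4 per NVMe drive, 48 lanes per Xeon socket) are my own assumptions for illustration, not a statement about any particular platform.

```python
# Back-of-envelope: can a dual-socket Xeon even feed 16 NVMe drives?
# All figures below are rough assumptions for illustration, not measurements.

PCIE3_LANE_GBPS = 0.985          # ~GB/s of usable bandwidth per PCIe 3.0 lane
LANES_PER_DRIVE = 4              # typical NVMe drive: PCIe 3.0 x4
DRIVES = 16
LANES_PER_SOCKET = 48            # assumption: 48 PCIe 3.0 lanes per Xeon socket
SOCKETS = 2

drive_bw = DRIVES * LANES_PER_DRIVE * PCIE3_LANE_GBPS   # aggregate drive bandwidth
platform_lanes = SOCKETS * LANES_PER_SOCKET
lanes_left = platform_lanes - DRIVES * LANES_PER_DRIVE  # lanes left for NICs, etc.

# A 100GbE port moves roughly 12.5 GB/s at line rate.
nics_needed = drive_bw / 12.5

print(f"Aggregate NVMe bandwidth: ~{drive_bw:.0f} GB/s")
print(f"PCIe lanes left for NICs and everything else: {lanes_left}")
print(f"100GbE ports needed just to match the drives: ~{nics_needed:.1f}")
```

Even with generous assumptions, the drives alone eat two-thirds of the platform's lanes, and the networking needed to move that data somewhere else starts fighting for what's left.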

What's the CPU load when trying to do that? What if I layer enterprise data services on top? Data efficiency and encryption both take their toll. It really does matter whether you encrypt before or after the data efficiency step, and what you use to do it.
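The ordering point is easy to demonstrate. Below is a small illustration of my own in Python, using zlib for the data efficiency side and a toy keystream as a stand-in for a real cipher: once data has been encrypted it looks random, and random data doesn't compress.

```python
import hashlib
import zlib

def toy_encrypt(data: bytes, key: bytes) -> bytes:
    """XOR with a keystream derived from the key. NOT a real cipher --
    it's here only to make the output look random, which is exactly what
    proper encryption does to data."""
    keystream = bytearray()
    counter = 0
    while len(keystream) < len(data):
        keystream.extend(hashlib.blake2b(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(b ^ k for b, k in zip(data, keystream))

# Highly compressible sample data (think logs or zero-filled VM images).
plaintext = b"the same block of data, over and over again. " * 4096
key = b"illustrative-key-only"

compress_then_encrypt = toy_encrypt(zlib.compress(plaintext), key)
encrypt_then_compress = zlib.compress(toy_encrypt(plaintext, key))

print(f"original:              {len(plaintext):>8} bytes")
print(f"compress then encrypt: {len(compress_then_encrypt):>8} bytes")
print(f"encrypt then compress: {len(encrypt_then_compress):>8} bytes (barely shrinks)")
```

Do the data efficiency first and the savings survive; encrypt first and your dedupe and compression engines are burning CPU cycles for nothing.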

Intel's Intelligent Storage Acceleration Library (ISA-L) is a popular choice. It accelerates RAID, erasure coding, cyclic redundancy checks, hashing, encryption and compression by using a CPU's enhanced instruction sets, namely the AES-NI, SSE and AVX families.

Code using ISA-L vs. the same code not using ISA-L (turn the instruction sets off) is hilarious. Think a Saturn V versus a Lada. Another great bit of hilarity is showing virtualization unbelievers what KVM can do. With the right flags set on KVM, a virtualized software-defined storage (SDS) solution using ISA-L will see at worst a 5 percent penalty on specific tests. In most cases, the virtualized SDS will run at 99 percent of the speed it would on bare metal.
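This isn't an ISA-L benchmark, but the shape of the gap is easy to reproduce by analogy: compare a checksum backed by optimized native code with the same logic written as a plain loop. The sketch below is purely illustrative and says nothing about how ISA-L works internally.

```python
import time
import zlib

def crc32_naive(data: bytes) -> int:
    """Bit-by-bit CRC-32 (IEEE, reflected) in plain Python."""
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (0xEDB88320 * (crc & 1))
    return crc ^ 0xFFFFFFFF

payload = bytes(range(256)) * 1024   # 256 KB of test data

start = time.perf_counter()
fast = zlib.crc32(payload)           # optimized C implementation
fast_time = time.perf_counter() - start

start = time.perf_counter()
slow = crc32_naive(payload)          # same math, plain loop
slow_time = time.perf_counter() - start

assert fast == slow                  # identical results, wildly different speeds
print(f"optimized path: {fast_time * 1000:8.3f} ms")
print(f"naive path:     {slow_time * 1000:8.3f} ms (~{slow_time / fast_time:,.0f}x slower)")
```

The point is the magnitude: a tuned code path and a plain one produce the same answer, separated by orders of magnitude of runtime. Saturn V, Lada.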

Now what's really interesting is that ISA-L, at least in its current iteration, seems to have a top speed. Admittedly, that top speed is retina-detachingly fast, but we now live in a world of single NVMe drives that do more than 1 million IOPS, and 100GbE network cards with multiple ports. Heck, Mellanox has even demonstrated 0.7-microsecond RDMA latency on its NICs, so "retina-detachingly fast" is simply a thing people do these days.
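Putting those numbers side by side shows how quickly a handful of drives eats even top-end networking. The figures here are my own rough arithmetic, assuming 4 KB I/Os and 100GbE line rate.

```python
# Rough arithmetic only: what does one of those drives ask of the network?
IOPS = 1_000_000
IO_SIZE = 4 * 1024                        # assume 4 KB I/Os
drive_bytes_per_sec = IOPS * IO_SIZE      # ~4.1 GB/s per drive

port_100gbe = 100e9 / 8                   # ~12.5 GB/s line rate per 100GbE port

print(f"1M IOPS at 4 KB: ~{drive_bytes_per_sec / 1e9:.1f} GB/s")
print(f"one 100GbE port: ~{port_100gbe / 1e9:.1f} GB/s")
print(f"drives to saturate one port: ~{port_100gbe / drive_bytes_per_sec:.1f}")
```

Roughly three of those drives fill a 100GbE port before you've spent a single cycle on data services.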

There are thus SDS solutions that don't use ISA-L. Their developers instead built custom code that works out when the enhanced instruction sets aren't fast enough and simply flattens the CPUs. Or they use GPUs. The more I learn about the bleeding edge of storage, the more I wonder who actually uses this stuff, and I'm more than a little terrified of their workloads.

All of the above is just one tiny corner of storage, and it's before we even start talking about NVMe over Fabrics, fun times with software-defined networking (SDN), or what happens when you actually cram enough VMs/containers onto a system to break virtual switches.

Endings and Beginnings
In the closing years of so many venerable technologies that have served us so well, it is nice to see their end complemented by an explosion of efficiency-focused software and standards evolution. Our infrastructure capabilities have taken a massive leap forward. Now we get to spend a few years figuring out just what we can do with it all.

About the Author

Trevor Pott is a full-time nerd from Edmonton, Alberta, Canada. He splits his time between systems administration, technology writing, and consulting. As a consultant he helps Silicon Valley startups better understand systems administrators and how to sell to them.
