The Cranky Admin

ARMing for the Future

Finding a datacenter role for the humble processor.

ARM server CPUs are now a thing that can be bought, but is there a place for them outside the hyperscale datacenter? Most answers to this question focus on the price of the CPUs; ARM chips tend to be less powerful but significantly less expensive than x86 CPUs. The true answer to whether ARM CPUs have a place in the datacenter, however, comes down to more than mere price.

In a roundabout way, this question has some personal significance to me. I am not, at present, looking to put ARM CPUs in my server room. I have plenty of x86 servers that sit unused, so while ARM servers would be great to play with, justifying the expense without a specific purpose for them in mind is a little difficult.

The thing to focus on in the above paragraph is the term "without a specific purpose." Thanks to the trend of Software-Defined Everything, the pile of x86 servers sitting in my server room can be reconfigured for almost any task. No longer must they simply be compute nodes that run applications; they can be storage, networking, hardware-accelerated VDI units, security appliances, or anything else my heart desires.

If this discussion sounds a little familiar, it should. Most technologists will have been introduced to the "general purpose processor vs. task-specific processor" discussion at some point. x86 CPUs are the canonical general-purpose processor. Task-specific devices would be things like GPUs.

The debate rages about whether ARM, MIPS and other less-complicated architectures should be considered general purpose or task-specific. ARM CPUs are less flexible than x86 CPUs simply because x86 CPUs have more stuff packed into them. A modern x86 CPU can have hardware acceleration for encryption, vector instructions, video decoding and jibbers only knows what else. ARM chips tend to be leaner, but that also depends on what the vendor decides to bake into their particular version.

My own revelation about the utility of ARM CPUs, however, has nothing to do with what's baked into the CPU itself. Instead, it has everything to do with the specific purpose of the device.

Motherboards and Firmware
In theory, x86 servers can be configured to do all sorts of things. How efficient they are at it depends a great deal on which options I select in my BIOS configuration. Toggle the wrong feature and suddenly Linux starts kernel panicking. Flip this bit and ESXi works fine, but GPU pass-through for VDI hardware acceleration won't work.

Firmware options are many and complicated. They vary not only between motherboard manufacturers, but even between motherboards from the same vendor.

None of this complexity is really the fault of the CPU itself. Yes, x86 CPUs are designed to be capable of doing just about anything one could want, computing-wise, but the complexity stems from server vendors trying to create servers that are equally flexible. They don't really want to rewrite firmware for each new server purpose, or even each new motherboard. If they can reuse code, they will.

This is where ARM chips (mostly) differ. ARM chips power switches and routers, NASes and sensors. They're everywhere, all over our datacenters, doing something or other innocuously in the background. Traditionally, that's just what ARM chips do: they're selected as the low-power, low-cost option when someone needs a chip that will do one thing and do it well.

ARM chips are clearly capable of being used as general-purpose CPUs. Our smartphones run on them, and they run any number of applications. This has led to server variants being created, and to this discussion about them being "used in the datacenter," meaning used to run human-assigned workloads rather than to serve as an invisible part of an Internet of Things (IoT) device.

What's as yet unknown is whether this will result in ARM motherboards, BIOSes and the rest of the ancillary ecosystem that surrounds the CPUs becoming as complex and unwieldy as that of their x86 counterparts. If this happens, ARM loses a lot of its charm.

Task-Specific General Purpose Computing
Imagine a server that could run specific tasks, but not others. One that was general-purpose enough to run a significant percentage of workload types one could want, but not complicated enough to serve some of the more niche requirements. This has more than a little appeal.

If such a server had a decent hypervisor and/or microvisor, then it could be used to run the sort of workloads that many datacenters run in bulk: Web servers, Network Functions Virtualization (NFV), databases, storage, encryption or basic communications. The complicated BIOS options for passing GPUs through to guests, doing six different flavors of PCI bus inspection to ensure compatibility with 20-year-old RAID cards, or the 30 different ways to configure RAM handling could all be left out.

These task-specific general-purpose servers would handle the workloads that fall within one standard deviation of the center of the bell curve: they would be designed to run only those workloads that the majority of people need to run today, and leave the rest to be someone else's problem.

This is where ARM CPUs could be exceptionally useful. None of this has to do with price. It has nothing to do with any of the functionality inherent to the CPUs themselves, or even arguments about power consumption. ARM CPUs can find a place in the datacenter simply because they don't have to carry the burden of supporting a massive amount of legacy hardware and configurations.

Being the new kid on the block, ARM motherboard vendors can unapologetically choose not to even try to cater to edge cases: "Here's a list of what works; for everything else, go see x86." For hyperscale datacenters running 5 million instances of nginx, this has some obvious appeal. This is perhaps a less useful approach to the small business trying to cram 200 different workloads into a two-node cluster.
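
To make the "list of what works" idea concrete, here is a minimal Python sketch of how workload placement might respect such a whitelist. The workload classes, the SUPPORTED_ON_ARM set and the place_workload() function are all hypothetical, invented for illustration rather than taken from any vendor's actual tooling.

import platform

# Hypothetical whitelist of workload classes an ARM box would advertise.
# Anything outside the list is refused up front rather than debugged later.
SUPPORTED_ON_ARM = {"web", "nfv", "database", "storage", "encryption"}

def place_workload(workload_class: str) -> str:
    """Decide where a workload lands, given this host's CPU architecture."""
    arch = platform.machine()  # 'aarch64' on a 64-bit ARM server, 'x86_64' on Intel/AMD
    if arch == "aarch64" and workload_class not in SUPPORTED_ON_ARM:
        return "go see x86"    # GPU pass-through, legacy RAID cards and the like
    return "run here"

if __name__ == "__main__":
    print(place_workload("web"))  # "run here" on an aarch64 host
    print(place_workload("vdi"))  # "go see x86" on an aarch64 host

The point of the sketch is the shape of the decision, not the code: anything on the short supported list runs on the ARM box, and everything else is turned away at the door instead of being debugged in firmware later.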

Support
x86 server vendors and systems administrators can argue that none of the above matters. Enterprise customers purchase their x86 servers from the vendor as validated designs and pay for support. If there's some bizarre BIOS setting preventing something from working properly, that's the vendor's problem, not the customer's, so why bother artificially limiting what one can do by buying ARM? The ARM approach isn't going to help small businesses anyway, so how likely is it to matter in the real world?

While there is some validity to this argument, I think economics will put it to rest. Again, I'm not talking about the cost of the hardware itself, but about the cost of the ancillary support systems built around it.

Solving problems takes time, and time is money. This financial impact is felt both by the vendor and by the customer. A support model in which the vendor states that the server supports only a limited set of specific workloads, and the customer accepts that as true, can save significant time (and thus money) for both parties.

You want to run a whole bunch of Apache instances on your ARM server? That's great, you go right ahead. You want to run a bunch of VDI instances? Sorry, you bought the wrong thing. Go get an x86 server and stuff a GPU in there.

From this standpoint, I can absolutely see a role for ARM in the datacenter. Being perfectly honest, I can see ARM taking over most of my datacenter, and that of every single one of my clients. Give me something simple, with few options and nerd knobs, and let me run my mission-critical stuff on it.

Redefining Utopia
Let's put only the bits that need complicated tweakery or non-standard configurations on their own hardware. Limiting the scope of possible complexity makes the life of a systems administrator easier, even if it does mean steering away from the utopian ideal that any system anywhere can do any task.

The benefits I see ARM servers bringing to the datacenter thus have nothing to do with ARM itself, and everything to do with the excuse for jettisoning legacy support that they bring. x86 CPUs could just as easily fill this void, but they're burdened with the expectation that they'll support every esoteric widget and operating system on the planet.

If changing architectures is what it takes to finally make supporting the bulk of workloads simpler, I suspect that alone will be sufficient motivation to see ARM server adoption.

About the Author

Trevor Pott is a full-time nerd from Edmonton, Alberta, Canada. He splits his time between systems administration, technology writing, and consulting. As a consultant he helps Silicon Valley startups better understand systems administrators and how to sell to them.
