The Cranky Admin
The Evolution of Hardware Compatibility Lists
There's room for improvement, and VMware is heading in that direction.
Hardware Compatibility Lists (HCLs) aren't all they're cracked up to be. In many cases, they're compromised by politics, abuse, laziness and neglect. Some of the problems with HCLs are purposeful and some aren't, but there may be a better way.
The first sin of HCLs is that they frequently tell you only whether something will work without breaking. They don't tell you anything about efficiency. If everything works on your shiny new omni-XPoint NVMe Brainslodinator 8000, but it only goes as fast as Billy Bubba Bob's SATA Surprise, you might as well have set fire to all the money you spent on those extra blue crystals.
HCLs also rely a little too much on hardware vendors to self-qualify. The quality and extensiveness of this self-qualification varies greatly. Vendors also tend to want to qualify only their newest gear on the latest software releases, even when their older kit is perfectly serviceable. This defeats a lot of the proposed cost savings of software-defined widgets: the ability to run your hardware right into the ground.
HCL programs vary greatly between software vendors, too. Some software vendors push back on the hardware vendors, others can't afford to. This means a lot of research into the HCL programs themselves is required just to use them properly.
Glimpsing the Future
The role of the systems integrator, involving testing, re-testing, and testing all over again, can be somewhat automated. With the proper application of the frequently misused "telemetry," a software vendor can learn a great deal about how different hardware combinations behave under different circumstances. This lets the software vendor catch issues even when the hardware vendor won't acknowledge them.
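To make that a little more concrete, here is a minimal sketch, in Python, of how fleet telemetry might be aggregated to flag troublesome hardware and firmware combinations. The field names, thresholds and flagging logic are all invented for illustration; this is not any vendor's actual telemetry schema.

```python
# Hypothetical sketch: flagging problem hardware/firmware combinations from
# fleet telemetry. Fields and thresholds are invented for illustration.
from collections import defaultdict
from dataclasses import dataclass


@dataclass
class TelemetryReport:
    """One health report phoned home from a single host."""
    controller_model: str   # e.g. "Brainslodinator 8000"
    firmware_version: str
    driver_version: str
    io_errors: int          # I/O errors observed since the last report
    samples: int            # total I/O operations in the same window


def flag_bad_combos(reports, error_rate_threshold=0.001, min_samples=100_000):
    """Group reports by (controller, firmware, driver) and flag combinations
    whose aggregate error rate exceeds the threshold across enough samples."""
    errors = defaultdict(int)
    samples = defaultdict(int)
    for r in reports:
        key = (r.controller_model, r.firmware_version, r.driver_version)
        errors[key] += r.io_errors
        samples[key] += r.samples

    flagged = []
    for key, total in samples.items():
        if total >= min_samples and errors[key] / total > error_rate_threshold:
            flagged.append(key)
    return flagged


if __name__ == "__main__":
    fleet = [
        TelemetryReport("Brainslodinator 8000", "1.02", "4.4.1", io_errors=900, samples=500_000),
        TelemetryReport("Brainslodinator 8000", "1.03", "4.4.1", io_errors=2, samples=500_000),
    ]
    for combo in flag_bad_combos(fleet):
        print("needs attention:", combo)
```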
VMware is doing this as part of the HCL reforms they're bringing out with VSAN 6.6, and they expect to be doing more of it. While I applaud the basic idea, we're still a long way from doing away with systems integrators.
Someone has to be first through the minefield, though. Indeed, for things to really get flagged in a VMware-style HCL.next approach, a whole lot of someones have to make it go boom. Great for those who learn from the early adopters; less so for those doing the YOLOing.
VMware's current HCL.next approach also doesn't really solve any of the issues around supporting older gear past the point where hardware vendors would rather you bought something newer and shinier. Nor is the raw data currently available to customers before they make expensive choices about new designs.
With luck, more vendors will follow VMware's lead, and the HCL.next approach will evolve as competitors clue in. I would like to see it grow from a risk-averse, conservative solution still heavily geared toward helping hardware vendors sell the latest and greatest into a true cloud-backed distributed testing system.
HCL.Next
Imagine if systems integrators, testing departments and early adopters around the world could load up a test suite from a software-defined widget vendor, throw it at every standard config they currently run and every config they'd like to run, and beam that data back to the mothership. We would know, definitively, which older hardware configurations, firmware, software and whatnot worked, which worked at full speed, and which of the bleeding-edge gizmos were going to cause problems.
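As a rough illustration of what the client side of that might look like, here's a minimal sketch that runs a stand-in test suite against a couple of local configs and packages the results for upload. The suite, the result fields and the idea of a collection endpoint are all assumptions for the sake of the example, not any shipping product.

```python
# Hypothetical sketch of the client side of a distributed testing system:
# run a vendor-supplied test suite against each local config and package the
# results for upload to the software vendor. Everything here is invented.
import json
import platform


def run_suite(config_name):
    """Stand-in for a real vendor test suite; returns pass/fail plus a crude
    throughput figure so 'works' and 'works at full speed' can be told apart."""
    return {"config": config_name, "passed": True, "throughput_mbps": 2400}


def package_result(result):
    """Wrap one suite result in the record that would be sent back to the
    software vendor's (hypothetical) collection endpoint."""
    return json.dumps({"host_os": platform.platform(), **result})


if __name__ == "__main__":
    for config in ["current-prod-cluster", "proposed-2018-refresh"]:
        print(package_result(run_suite(config)))
```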
Once, I knew a startup in Silicon Valley that dreamed of doing exactly that. One day, I hope someone fulfills that dream.
About the Author
Trevor Pott is a full-time nerd from Edmonton, Alberta, Canada. He splits his time between systems administration, technology writing, and consulting. As a consultant he helps Silicon Valley startups better understand systems administrators and how to sell to them.