Containers: Still More Hype Than Substance
Until they're boring, they won't go mainstream.
Containers are all the rage today. Microvisors are the new hypervisors, and storage startups are scrambling to declare themselves "container-aware." Entire ecosystems are popping up to plug the gaps in container capability. The question hanging over the entire market segment is: Are containers ready for the enterprise?
Depending on which camp you belong to, you may consider containers a form of virtualization, or not. A lot of people don't like lumping containers in with hypervisor-based virtualization, because virtualization administrators had to fight a long, hard battle for more than a decade to be accepted as something other than a science experiment.
There are still people today who view hypervisors as too new and untested. So what does that say for containers?
That New Technology Smell
In one sense, it wouldn't matter if containers were "too new," or had growing up to do. Silicon Valley doesn't really care about such things. Billions upon billions of dollars will flow like water toward solving each and every possible problem, until some usable stack of software emerges from the rest as the de facto standard.
At this point, the leading microvisor company -- which looks to be Docker -- will gobble up or re-invent the parts of the de facto stack that it doesn't own until it has a minimum viable product that meets the majority of enterprise needs.
Docker can then IPO, the principals can put in the minimum time required by contract and then vacate the premises to spend their filthy lucre. This is how it works in Silicon Valley. And it gives us a point of reference.
That nebulous point, post-IPO, when the majority of the Docker principals have left for pastimes more exciting, is when Docker will probably be "ready for the enterprise." That's when the breakneck innovation has stopped and the boring stuff begins.
Containers today fill a niche, but there are numerous problems that remain unaddressed before the majority of workloads can be trusted to them. Addressing those problems means identifying and solving other niche problems, one at a time, and that's boring.
Early products are fun because the founders are solving problems that matter to them; problems they and theirs encountered and with which they're intimately familiar. Mature products are boring because it's all about other people's problems, and thus less about coding than about handholding.
The biggest problem with containers is that they're not for operations people. They're for developers. In the bad old days, operations staffs expected developers to package their applications up using InstallShield or MSI on Windows, or with the likes of apt and yum on Linux. This put the onus on developers to make sure the software actually worked.
In a container world, that packaging step doesn't exist. Everything is source code with maybe a few attached binaries. Operations teams are supposed to write a script that creates a container, pulls the code down from GitHub, injects it into the container, compiles it and runs it. If a bug develops, they destroy the container and recreate it using the latest source code.
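That "pull, inject, compile, run" pattern can be sketched as a container build file. This is a minimal illustration, not a prescription: the repository URL, the build tool (make) and the binary name are all hypothetical placeholders.

```dockerfile
# Sketch of the build-from-source container workflow described above.
# Repository, build tool and binary name are hypothetical placeholders.
FROM debian:stable-slim

# Tooling needed to fetch and compile the source inside the container.
RUN apt-get update && apt-get install -y --no-install-recommends \
    git build-essential ca-certificates

# Pull the latest source straight from GitHub at build time.
RUN git clone https://github.com/example/app.git /src
WORKDIR /src

# Compile inside the container; no pre-built, tested package is shipped.
RUN make

CMD ["./app"]
```

When a bug turns up, the remedy is exactly what the article describes: rebuild from the latest source (for example, `docker build --no-cache -t app .`), then destroy and recreate the running container.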
Developers love containers, in large part because they're at the center of a series of industry movements that minimize, trivialize, eliminate or outsource quality assurance to the operations team.
But containers don't have to be used this way. They do see extensive use in internal development situations where shoddy code can be resolved by marching down the hallway towards the dev pen with a big stick and an attitude.
Unfortunately, cherry-picking the positive examples of container usage doesn't reflect the actual experience of organizations trying to use them in circumstances other than highly cohesive DevOps teams. That gap is precisely why the massive ecosystem of vendors growing up around containers exists.
Most of IT has nothing to do with technology. Huge chunks of our budgets are spent on software that -- if we're being honest about it -- exists to compensate for the fact that people are short-sighted, lazy, indecisive or all three.
If Microsoft weren't constantly changing its mind about how applications are written for Windows, we probably wouldn't need hypervisor-based virtualization; the entire industry would have gotten comfortable with where different kinds of files belonged, and anyone who didn't code to that pattern wouldn't be able to sell software.
Virtualization would look different. It would be closer to App-V than anything else: applications and data would be packaged up separately from the operating systems and moved about easily.
But Microsoft is fickle, Linux is fragmented, developers are lazy and everyone is short-sighted. It was far less hassle for us to collectively package up entire operating systems in order to package a single application and its settings; that's how virtualization became a multi-billion-dollar industry.
Containers are no different. People are lazy, short-sighted and indecisive about their use. If you use containers with fantastic discipline and rigorous procedure they're great. You can do amazing things.
Unfortunately, most of us won't be seeing the results of that. Not internal to our organizations or from vendors who license us their software. Currently, containers are a solution for developer superheroes, not for the likes of mortal men.
The Waiting Game
Perhaps in a few years, once they're no longer fun, they'll be ready to run our hospitals and street lights. Until then, we wait.
Trevor Pott is a full-time nerd from Edmonton, Alberta, Canada. He splits his time between systems administration, technology writing, and consulting. As a consultant he helps Silicon Valley startups better understand systems administrators and how to sell to them.