Dan's Take

Red Hat and Containers

The company is combining containers and microservices in interesting ways.

Red Hat Summit 2016 has come and gone, leaving behind several announcements that should be of interest to readers of Virtualization Review. I'm not going to try to analyze all of them here; instead, I'm going to look at just one area: Red Hat and containers.

Red Hat talks about the use of containers in several ways, including: how they help in a DevOps environment; how they make it easier to decompose applications into microservices and then deploy them quickly; and how a containerized approach makes it possible for enterprises to modernize their environments by moving workloads off legacy platforms, one microservice at a time.

The DevOps Angle
Red Hat tells developers and operations staff that working with containers will make collaboration much easier. Functions can be built in containers and then easily tested, documented and put to work. This would serve both the needs of enterprise operations staff to offer a stable, reliable, manageable computing environment, and the needs of developers to move quickly for prototyping and refining functions.

Red Hat points out that, increasingly, complex enterprise applications are being designed as a series of microservices that can be quickly implemented, tested and put into service. Breaking big applications down into smaller and smaller, but still testable, functions means that application development and deployment can both be faster and still result in reliable, robust applications.

Putting the resultant microservices in containers means that they can be developed quickly, thoroughly tested, documented and then rapidly moved into production. If problems arise, it's simpler to determine which microservice caused the problem and fix only that piece of a complex computing environment.
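To make the idea concrete, here is a minimal sketch of the kind of single-purpose microservice the article describes: one small, independently testable function exposed over HTTP, which could then be packaged into a container image on its own. The service name, endpoint and payload are illustrative assumptions, not part of any Red Hat product.

```python
# A tiny single-responsibility "microservice" sketch using only the
# Python standard library. The narrow scope is the point: if this
# service misbehaves, only this one piece needs to be fixed and
# redeployed. Names here ("greeting", the JSON payload) are invented
# for illustration.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


class GreetingHandler(BaseHTTPRequestHandler):
    """Handles exactly one narrow responsibility: returning a greeting."""

    def do_GET(self):
        body = json.dumps({"service": "greeting", "message": "hello"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        # Suppress per-request logging to keep output quiet.
        pass


def serve(port=0):
    """Return a server bound to localhost; port=0 picks a free port."""
    return HTTPServer(("127.0.0.1", port), GreetingHandler)
```

Because the service is this small, it can be tested in isolation before it is ever containerized, which is exactly the develop-test-deploy loop the article attributes to the container approach.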

The Modernization Angle
Red Hat, like nearly every supplier, has been looking for a way to decompose established, very complex, but rock-solid mainframe computing environments and move the pieces onto its own platform. The problem has always been that these environments are highly complex, highly interdependent and very hard to move.

The move to containers and the concept of microservices, taken together, mean that the mainframe computing environment can now be picked apart and very small pieces moved off the mainframe. Red Hat, of course, believes it's uniquely qualified to be the supplier of the platform that receives these tiny pieces.

Red Hat's pitch is that tackling the mainframe no longer has to be one big, seemingly impossible step. Now it's possible to first pick off the low-hanging fruit, like the user interface, some portion of application rules processing or even portions of the database; these pieces can be redeveloped, containerized and thus freed from the tangled, interdependent mainframe computing environment.

Dan's Take: Putting It Together, One Baby Step at a Time
What's so enticing about Red Hat's view is that it's based upon taking well-planned, eminently reasonable baby steps rather than trying to boil the application ocean all at once. Red Hat, however, isn't the only supplier thinking about containers as the new platform. If we consider other suppliers of operating systems, application development and deployment tools, and even databases, they're all saying just about the same things.

Red Hat, however, appears to have done a better job of putting all the pieces together, packaging them and then delivering them as useful tools.

About the Author

Daniel Kusnetzky, a reformed software engineer and product manager, founded Kusnetzky Group LLC in 2006. He's literally written the book on virtualization and often comments on cloud computing, mobility and systems software. He has been a business unit manager at a hardware company and head of corporate marketing and strategy at a software company.
