Peeling Back the Cloud-Native Architecture Layers
Cloud-native architecture can be complex and confusing. Here's a roadmap to help you get started.
One of the initiatives in the Cloud Native Computing Foundation (CNCF) Charter is to provide "Well-defined APIs at borders of standardized subsystems" and to establish a standard systems architecture describing the relationship between the parts. The charter includes a block diagram of the subsystems of cloud-native architecture (Figure 1).
Although the block diagram is a good start to understanding the confusing myriad of cloud-native subsystems, CNCF released a cloud native landscape poster (Figure 2) at KubeCon 2016 that included some of the products associated with the various subsystems. As this landscape is constantly in flux, the products listed in each layer are not meant to be exhaustive, but only representative of the most common products in each layer. The poster is also dynamic and will be updated frequently.
Today, I'll take you through a description of what some of these layers provide; but first I'll discuss one layer missing from the poster. If you're new to cloud native computing, this article will help you understand the many layers, components and products currently in use.
The one layer that is not on the cloud native landscape poster but is in the block diagram is probably the first one to come to mind for most people: the compute node operating system (OS), also referred to as the container OS. This includes OSes like CoreOS, RancherOS, Snappy Ubuntu Core, Red Hat Project Atomic, and VMware Photon OS. This layer is the operating system that allows containers to run. The container can be thought of as a jail or sandbox in the OS that gives the perception of a fully isolated and independent OS to the application running in the container.
The application running in the container thinks that it has a virgin copy of the OS's files and memory. Whereas a full virtual machine (VM) has total isolation of its files, a container actually uses some of the base OS's files and functions, and only creates its own copy of files and memory space when it writes to them.
Containers, for the most part, are ephemeral, short lived, and use separate mechanisms to store persistent data. This sharing of resources makes containers very space- and memory-efficient, and allows new containers to be created in seconds. The container OS may either run on bare metal or in a hosted VM. Running a container OS on a hypervisor gives the benefits of any other VM. Figure 3 shows the relationship of the underlying hardware or hypervisor, the container OS and the containers.
Containers are efficient and can be deployed very quickly. But in order for them to truly be useful, there needs to be a way to package and deploy the applications efficiently, too. Even though containers, in one form or another, have been around for a long time, they didn't take off in popularity until Docker was released. Docker fits in the container runtime layer of the CNCF diagram.
Docker allows deployment of applications inside software containers and wraps up the application in a complete filesystem that contains everything it needs to run: code, runtime, system tools and system libraries. It does so in such a way that the container can run in an agnostic fashion to the container OS on which it's located. In theory, this allows these images to be created and deployed in a matter of seconds.
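To make this concrete, here is a minimal sketch of a Dockerfile that packages a small Python application together with its runtime and dependencies. The application name, base image tag, and files are hypothetical, not taken from the article:

```dockerfile
# Hypothetical example: package a small Python app and everything it needs to run.
# The base image supplies the OS files, runtime, and system libraries.
FROM python:3.12-slim

WORKDIR /app

# Install the application's dependencies into the image.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy in the application code.
COPY app.py .

# The command the container runs when it starts.
CMD ["python", "app.py"]
```

Building this with `docker build` produces a self-contained image; because everything the application needs is inside the image, it can be deployed on any host with a compatible container runtime, regardless of that host's container OS.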
Orchestration and Management
The next layer up from container runtime is orchestration and management. Once we have the container OS and the container image, a technology is necessary to place the images where needed, when needed, in an organized and structured fashion. Fortunately, Google has an answer to this in Kubernetes.
Kubernetes, as well as the other products in this layer, allows for automating deployment, scaling, and operations of application containers across clusters of hosts. It also takes care of common issues like host failures.
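As an illustrative sketch of how this works in practice, the following Kubernetes Deployment manifest declares a desired number of container replicas; Kubernetes then schedules them across the cluster's hosts and replaces any containers lost to host failures. The names and image reference are hypothetical:

```yaml
# Hypothetical example: run three replicas of an application container.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3                # Kubernetes keeps three copies running across the cluster
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: registry.example.com/myapp:1.0   # hypothetical image location
        ports:
        - containerPort: 8080
```

The key point is that the operator declares the desired state (three replicas) rather than placing containers by hand; the orchestrator continuously reconciles the cluster toward that state.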
At the heart of it all, containers run on a container OS, which needs a physical infrastructure on which to reside. Because of the various layers of abstraction, containers should be able to run on any public cloud or on any private datacenter's infrastructure. The reality is that we aren't there yet, and each of the infrastructure providers has its own quirks that need to be worked around.
That said, each of the companies in the Infrastructure layer is more than capable of running containers, but the reality of bursting or moving from one provider to another is a ways off for any sophisticated deployment of containers.
Application Definition and Development
The final layers of the cloud-native landscape are source code management and registry services. I'll address these two layers together, although they are distinct.
Since containers are core OS-agnostic, once a container is built it can be deployed on numerous infrastructures. With a public repository to hold the container and a registry to describe what's in the container and its location, these images can be shared. Of course, not all containers are meant to be shared; these are stored in private repositories with private registries. These registries can be located on premises or in the cloud.
Where VMware Fits In
The landscape poster by design doesn't include products from all vendors. VMware has (or will shortly have) products that are in the layers, so I'll list its products and in which layers they reside:
- Harbor: Registry Services
- Code Stream: CI/CD
- vRealize Operations Manager: Monitoring
- vRealize Log Insight: Logging
- vSphere Virtual SAN: Storage
- NSX: Network
- vRealize Automation: Orchestration & Management, Provisioning
To give each of these cloud-native layers its due would require an article of its own. Hopefully CNCF will do just that, but this article is merely an introduction. Note that the scope of each layer is not clearly defined, and most of the products overlap and provide services covered in other layers.
Tom Fenton has a wealth of hands-on IT experience gained over the past 25 years in a variety of technologies, with the past 15 years focusing on virtualization and storage. He previously worked at VMware as a Senior Course Developer, Solutions Engineer, and in the Competitive Marketing group. He has also worked as a Senior Validation Engineer with The Taneja Group, where he headed the Validation Service Lab and was instrumental in starting up its vSphere Virtual Volumes practice. He's on Twitter @vDoppler.