The Cranky Admin
Microsegmentation Is the Future
What it is, why it's becoming increasingly important.
Virtualization, arguably describable as "software-defined workloads," has become inextricably intertwined with both Software-Defined Storage (SDS) and Software-Defined Networking (SDN). The past decade saw storage wars that redefined the IT landscape, and the upcoming one looks set to see networking do the same. One term that will soon become commonplace for all virtualization administrators is microsegmentation.
To understand microsegmentation, we first need to understand Virtual LANs (VLANs). VLANs are a way to simulate separate physical networks without actually having to physically wire up separate networks. All devices on VLAN 10, for example, can communicate with one another. Those VLAN 10 devices cannot, however, communicate with devices on VLAN 20.
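The isolation VLANs provide boils down to one forwarding rule: a switch only delivers traffic between ports that share a VLAN. A toy sketch of that rule (all device names hypothetical, and this models the concept rather than any vendor's switch):

```python
# Toy model of VLAN isolation: each switch port has an access VLAN,
# and two devices can talk directly only if their VLANs match.
port_vlans = {
    "server-a": 10,
    "server-b": 10,
    "printer-c": 20,
}

def can_communicate(src: str, dst: str) -> bool:
    """Devices communicate directly only when they share a VLAN."""
    return port_vlans[src] == port_vlans[dst]

print(can_communicate("server-a", "server-b"))  # both on VLAN 10
print(can_communicate("server-a", "printer-c"))  # VLAN 10 vs. 20: isolated
```

Getting from VLAN 10 to VLAN 20 requires a router, which is exactly the chokepoint discussed below.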
As previously stated, VLANs were designed to simulate multiple physical networks atop a single physical infrastructure. Physical networks in this conceptualization usually consist of hundreds, thousands or even tens of thousands of devices in a single layer 2 switch fabric.
Routers are used to bridge layer 2 networks, whether those networks are physical or virtual. The advantage to a big, flat layer 2 network is that all devices on that network can talk amongst themselves relatively unhindered. There is no router in between them to serve as a chokepoint. This is important, because high bandwidth routers have traditionally been alarmingly expensive.
Microsegmentation takes a different approach.
Making Microsegmentation Possible
There are two main problems with the classic VLAN approach. The first is that a single bad network card sending faulty frames can disrupt the entire network. If you're lucky, this will only affect a single VLAN. If, however, the NIC and switch port are configured to allow trunk access, one bad NIC can crater all VLANs. (Don't do that.) The second is security: on a big, flat layer 2 network, every device can reach every other device, so a single compromised workload can probe them all.
The microsegmentation approach is to make use of modern network virtualization protocols such as Shortest Path Bridging (SPB) and VXLAN, which support roughly 16 million virtual networks rather than the classic 4,095 VLANs. Instead of great big, flat layer 2 networks, the idea is to break everything up such that each virtual network only contains those devices which absolutely must talk to one another, and rely on routers to bridge the gaps.
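The jump in scale comes straight from the width of the network identifier field: a classic 802.1Q VLAN tag carries a 12-bit ID, while VXLAN's network identifier (VNI) is 24 bits wide. The arithmetic:

```python
# Classic 802.1Q: 12-bit VLAN ID. IDs 0 and 4095 are reserved,
# leaving 4,094 usable VLANs (often quoted loosely as "4095").
classic_ids = 2 ** 12           # 4096 possible values
usable_vlans = classic_ids - 2  # 4094 usable VLANs

# VXLAN: 24-bit VXLAN Network Identifier (VNI).
vxlan_vnis = 2 ** 24            # 16,777,216 -- the "16M" figure

print(usable_vlans)
print(vxlan_vnis)
```

At 16 million-plus identifiers, there is no practical limit on giving every service its own network, which is what makes the "micro" in microsegmentation feasible.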
Microsegmentation is only practical because network administrators have finally started to accept that virtualized routers are useful and usable. For over a decade their use was anathema, considered by many to be evidence of professional malpractice. As that attitude has changed, so too have both network and software design.
Instead of needing expensive, powerful centralized routers to bridge virtual and/or physical networks, virtual machines (VMs) configured as routers can serve the same purpose. Virtualization- and networking-aware management software (VMware's NSX being the canonical example) can dramatically increase security by reducing the number of systems that need to be part of the same virtual network.
The 'Micro' In Microsegmentation
Let's say that I have a service that consists of 15 VMs. There is a load balancer, a database, a virtual file server and 12 Web servers. It's reasonable that these VMs be able to communicate with one another, but there is no good reason for them to communicate with anything else.
In a pre-microsegmentation network environment, the VMs that make up this service would likely be part of a virtual or physical network, with hundreds or even thousands of other workloads that were all in the same "zone." Being Web-facing, they would probably be part of the "DMZ" zone, and multiple services would be separated from each other through subnetting.
With microsegmentation, the VMs that make up this service would simply be isolated in their own virtual network. If they needed to talk to any other VM, even one on the same host, they would go through that host's virtual router. This has numerous security advantages.
Using subnets to isolate workloads is very weak security. It's trivial for an attacker to modify a compromised workload to attempt to access different subnets. It's much harder to get past a properly configured router implementing virtual networks.
When VLANs are properly implemented, switch ports -- whether virtual or physical -- don't allow workloads the opportunity to access arbitrary VLANs. Even in cases (such as virtual routers) where it might make sense to have guest-initiated VLANs, switches are generally configured to only pass packets from VLANs that the guest actually needs to access.
This means microsegmentation can be configured such that a given service cannot possibly access other virtual networks except by going through the virtual router; nor can workloads on those other networks access the service you're securing, except through the virtual router. The virtual router, in turn, would only be granted access to the virtual networks that its VMs need to communicate with.
This is the principle of least privilege in action.
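In practice, that least-privilege posture amounts to a default-deny policy with a short explicit allow list for the 15-VM service above. A hypothetical sketch (the flow names, ports and structure are invented for illustration; real products such as NSX express this through their own policy models):

```python
# Default-deny policy for the example service: only the flows the
# service genuinely needs are permitted; everything else is dropped.
ALLOWED_FLOWS = {
    ("internet", "load-balancer", 443),   # users reach the service
    ("load-balancer", "web", 443),        # LB fans out to web servers
    ("web", "database", 5432),            # web tier queries the DB
    ("web", "fileserver", 445),           # web tier reads shared files
}

def permit(src: str, dst: str, port: int) -> bool:
    """Default deny: allow only flows explicitly on the list."""
    return (src, dst, port) in ALLOWED_FLOWS

print(permit("web", "database", 5432))  # needed by the service
print(permit("web", "internet", 22))    # not on the list: denied
```

Everything not on the list, including a compromised web server trying to SSH out, simply never leaves the segment.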
Whereas routers were traditionally an expensive bottleneck, the number of workloads that can be placed on a single physical host has changed this. Virtualization-aware SDN management software works to keep VMs that are part of a single VLAN together on the same host, ensuring communication between them doesn't have to traverse the physical network.
Similarly, if a group of services regularly interact, optimization routines can be employed to keep them physically proximate, so long as this doesn't override other location imperatives for that workload.
We might want to, for example, have a virtualized Hadoop cluster, an analytics service and a user-facing Web service that regularly interact to all live on hosts that share the same top of rack switch, so as to minimize the impact on physical network bottlenecks. Conversely, a database on physical site A might well talk frequently to its replication counterpart on physical site B, and it wouldn't make sense for the software to relocate one to live next to the other.
Microsegmentation is only possible because management software exists which can relieve the configuration burden. Humans aren't good at keeping more than a few hundred interconnections straight in their mind, and when we start isolating each individual service in a large enterprise, we're potentially creating millions of segments.
Microsegmentation isn't just about limiting which workloads talk to one another. It's about the automation of network configuration and network service provisioning. The possibilities for integration with other aspects of IT security have experts and analysts excited, and for good reason.
Increasingly, datacenters have no edge. IPv6 adoption is making all workloads publicly addressable, and workload compromises are now so regular it's foolish to believe that any of us can stop threats at the perimeter.
Microsegmentation not only limits the impact of any given compromise; it can also be combined with tools that profile services to learn what they communicate with, and then both dynamically configure least-privilege access for that VM and freak out if what it's trying to communicate with changes.
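That profile-then-enforce idea can be sketched in a few lines, with all names hypothetical: record the peers a VM talks to during a learning window, then flag anything that falls outside the learned baseline.

```python
# Hypothetical flow-learning sketch: build a baseline of observed
# peers per VM, then alert when an unlearned peer shows up.
from collections import defaultdict

baseline: defaultdict = defaultdict(set)

def learn(vm: str, peer: str) -> None:
    """Record an observed flow during the learning window."""
    baseline[vm].add(peer)

def check(vm: str, peer: str) -> bool:
    """Return True if the flow matches the learned baseline."""
    return peer in baseline[vm]

# Learning window: web-01 is seen talking to its usual peers.
learn("web-01", "database")
learn("web-01", "fileserver")

print(check("web-01", "database"))      # learned: fine
print(check("web-01", "crypto-miner"))  # never seen: freak out
```

A real product would feed the "freak out" branch into alerting or automated quarantine rather than a boolean, but the shape of the logic is the same.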
Other network services beyond just routers are being virtualized. Intrusion detection systems, honeypots and various flavors of automated incident response are all part of the enterprise IT security toolkit today. While these are usable in a classical environment, they really shine in the kinds of highly automated and orchestrated environments that make use of microsegmentation.
There is no escaping microsegmentation. It is the future of networking. All that remains to be determined is which vendor(s) we will embrace to deliver this critical functionality, and how long we'll wait before taking the plunge.
Trevor Pott is a full-time nerd from Edmonton, Alberta, Canada. He splits his time between systems administration, technology writing, and consulting. As a consultant he helps Silicon Valley startups better understand systems administrators and how to sell to them.