The Cranky Admin
Overcoming the RAM Bottleneck
There are new ways to deal with memory issues and make your datacenter more efficient.
Virtualization is the quest to increase the efficiency of our compute nodes, driving up CPU utilization by cramming more and more workloads into a single server. As disk gives way to Flash -- and Non-Volatile Memory Express (NVMe) truly unlocks Flash's potential -- it seems the amount of RAM we can economically stuff into a server is now the new bottleneck. But there are some technologies in today's hypervisors that offer limited relief. Today we'll go over them, and point out some strengths and weaknesses of each.
Transparent Page Sharing
Transparent Page Sharing (TPS), also known as Kernel Samepage Merging to KVM users and Memory CoW to Xen users, is essentially RAM deduplication. RAM is accessed in pages -- contiguous segments of memory that are read and written to as a whole -- and if any two pages happen to be the same, then hypervisors with TPS can keep only one copy around.
The idea is simple: for each subsequent copy of a given memory page, keep a pointer back to the first version. In the case of virtualization this can be very handy; virtualization is often used to stand up several nearly identical workloads, and much of the RAM used by those workloads will be identical.
If one workload decides to make a change to a deduplicated memory page, that change is written to a new memory page. This way the virtual machine (VM) making the change can have its altered memory page without affecting the original.
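On a Linux/KVM host, applications (QEMU among them) opt their memory into this behavior with madvise(MADV_MERGEABLE). Below is a minimal sketch of that call from Python; it assumes Python 3.8 or newer, a kernel built with KSM, and that ksmd has been switched on via /sys/kernel/mm/ksm/run. The region sizes and fill pattern are purely illustrative.

    import mmap
    import time

    PAGE = mmap.PAGESIZE
    SIZE = 256 * PAGE

    # Two private anonymous mappings filled with identical page contents.
    a = mmap.mmap(-1, SIZE, flags=mmap.MAP_PRIVATE | mmap.MAP_ANONYMOUS)
    b = mmap.mmap(-1, SIZE, flags=mmap.MAP_PRIVATE | mmap.MAP_ANONYMOUS)
    a.write(b"\xAB" * SIZE)
    b.write(b"\xAB" * SIZE)

    # Tell the kernel these regions are candidates for same-page merging.
    a.madvise(mmap.MADV_MERGEABLE)
    b.madvise(mmap.MADV_MERGEABLE)

    # Give ksmd a moment to scan, then see how many pages it is sharing
    # system-wide (this counter only moves if KSM is actually running).
    time.sleep(5)
    with open("/sys/kernel/mm/ksm/pages_sharing") as f:
        print("pages_sharing:", f.read().strip())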
Memory Compression
Memory compression is exactly what it sounds like: memory pages are compressed so that they take up less space. Many memory pages don't change for long periods of time, or even at all once an operating system has loaded. These memory pages are excellent candidates for compression to free up memory.
Computationally, this is less efficient than TPS, and is something of an older technology; but it has proven to be useful in a number of memory-constrained environments. Originally introduced in VMware's ESX 4.1, memory compression has made its way to other hypervisors as well.
The compcache project, now almost a decade old, became zram and was merged into the mainline Linux kernel with version 3.14 in 2014. This makes it available to both KVM and Xen, though whether it is exposed for use is up to individual distributions and vendors. Its use is still rather controversial.
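For the curious, zram on a Linux host (KVM, or a Xen dom0) is wired up through sysfs. The sketch below shows the usual sequence -- pick a compressor, size the device, then use it as high-priority swap. It assumes root and that the zram module is already loaded; the 4 GB size and the lzo algorithm are just examples.

    import subprocess

    ZRAM = "/sys/block/zram0"

    # The compressor has to be chosen before the device is sized.
    with open(ZRAM + "/comp_algorithm", "w") as f:
        f.write("lzo")
    with open(ZRAM + "/disksize", "w") as f:
        f.write(str(4 * 1024 ** 3))  # 4 GB of uncompressed capacity

    # Format the device as swap and prefer it over any disk-backed swap.
    subprocess.run(["mkswap", "/dev/zram0"], check=True)
    subprocess.run(["swapon", "-p", "100", "/dev/zram0"], check=True)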
Memory Ballooning
Memory ballooning is a more interesting approach to memory management than either TPS or compression. In a traditional hypervisor/VM relationship, the operating system inside the VM isn't "aware" that it is operating in a virtual environment. It acts against the virtual hardware just as it would if it owned the metal.
In more modern virtualization solutions, the hypervisor's guest tools allow operating system guests to coordinate directly with the hypervisor. One way in which this occurs is memory ballooning.
Memory ballooning allows a guest operating system, via the guest tools, to inform the hypervisor that it does not currently require some of the memory which has been allocated to it. The hypervisor can then reclaim this memory and spend it elsewhere, usually on other VMs.
VMs that give up their memory are still aware of their allocated amount. When these guests need more memory, they inform the hypervisor and reclaim RAM up to that allocation. Memory ballooning exists in VMware, KVM and Xen.
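From the host side, the balloon is driven through the hypervisor's management API. Here's a minimal sketch using the libvirt Python bindings against a KVM guest; the guest name "web01" and the sizes are illustrative, and it assumes the guest has its balloon driver and guest tools installed.

    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("web01")

    # maxMemory() is the guest's allocated ceiling, in KiB.
    print("allocated ceiling (KiB):", dom.maxMemory())

    # Inflate the balloon: the guest's usable RAM shrinks to 2 GiB and the
    # host gets the difference back to spend elsewhere.
    dom.setMemory(2 * 1024 * 1024)

    # Deflate it again later; the guest reclaims RAM up to its ceiling.
    dom.setMemory(dom.maxMemory())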
Combined, TPS, memory compression and memory ballooning allow for what is typically termed "memory overcommit". More VMs can be assigned to a host than the static RAM commitments of those VMs might indicate.
Hyper-V's Dynamic Memory
Hyper-V has its own approach to memory management, called Dynamic Memory. Like memory ballooning, Dynamic Memory relies on guest tools being installed. Dynamic Memory in Hyper-V 2016 works with modern Windows guests and many Linux distributions, but it has its problems.
The idea behind dynamic memory is that administrators using it don't assign static amounts of RAM to guests; guests simply boot up and claim however much RAM is needed from the hypervisor. They then release memory as needed.
One issue is that guest OSes don't fully understand the idea of giving up memory. They cope well enough with claiming more memory from the hypervisor as needed, but they can't give it back until they're shut down. As with memory ballooning, the guest OS can signal to the hypervisor that a memory page is unused and ready to be reclaimed; but when that reclamation occurs, the guest's memory manager will still believe it has access to however much RAM it consumed at its peak.
This can make dynamic memory confusing for newcomers. With traditional memory ballooning you set a static amount of RAM for a given VM, and it can never consume more than that; but it can (and usually does) occupy less than that maximum footprint.
With dynamic memory, RAM is not statically set, and unless specially constrained, a guest VM can consume as much RAM as is available on the host. A guest, however, can also occupy less RAM than it thinks it has available, if that RAM has been reclaimed.
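To make the comparison concrete, this is roughly how Dynamic Memory gets switched on for a Hyper-V guest -- shown here as Python shelling out to the Hyper-V PowerShell module so the examples stay in one language. The VM name and the startup/minimum/maximum figures are illustrative, and the setting is normally changed while the VM is powered off.

    import subprocess

    cmd = (
        "Set-VMMemory -VMName 'web01' -DynamicMemoryEnabled $true "
        "-StartupBytes 1GB -MinimumBytes 512MB -MaximumBytes 8GB"
    )
    subprocess.run(["powershell.exe", "-NoProfile", "-Command", cmd], check=True)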
Swapping
The final memory management option to discuss is swapping. The short version is that when a hypervisor is out of physical RAM it can write memory pages to disk, or to Flash. This is terrible if you use disk. It's significantly less awful if you have NVMe Flash.
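If you do end up leaning on swap, at least keep an eye on it. Here's a quick-and-dirty sketch for a Linux/KVM host that polls /proc/meminfo; the ten-second interval is arbitrary.

    import time

    def meminfo():
        """Parse /proc/meminfo into a dict of values in kB."""
        info = {}
        with open("/proc/meminfo") as f:
            for line in f:
                key, rest = line.split(":", 1)
                info[key] = int(rest.strip().split()[0])
        return info

    while True:
        m = meminfo()
        swap_used = (m["SwapTotal"] - m["SwapFree"]) // 1024
        avail = m["MemAvailable"] // 1024
        print("swap in use: %d MB, RAM available: %d MB" % (swap_used, avail))
        time.sleep(10)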
Even NVMe Flash is no substitute for RAM; however, in many cases it is an acceptable emergency overflow capacity for memory-constrained systems. With available RAM now the bottleneck in many modern virtualization systems, the various memory technologies discussed here can help get more out of our existing servers. Even if, every now and again, we have to swap a few pages out.
About the Author
Trevor Pott is a full-time nerd from Edmonton, Alberta, Canada. He splits his time between systems administration, technology writing, and consulting. As a consultant he helps Silicon Valley startups better understand systems administrators and how to sell to them.