In-Depth

Hyper-V: Under the Hood

Microsoft's first effort at enterprise-level virtualization is a good start, but lots of rough edges remain.

Microsoft acknowledges the growth -- and the potential growth -- of the virtualization market with Hyper-V. It's Redmond's first enterprise-worthy virtualization product and is bundled with Windows Server (WS) 2008 (it's also available as a stand-alone server). It's a "Type-1," or bare-metal, hypervisor, meaning it sits directly on the hardware, between the physical layer and the operating system.

The first public beta version, reviewed here, was made available last December. Microsoft has consistently maintained that the final version of Hyper-V will be out six months after the release of WS 2008. So keep in mind that this is beta software, and more changes are sure to be coming. With that caveat, let's peek under the hood and see what all the fuss is about.

Hyper-V is entering a server virtualization market dominated by the ESX hypervisor from VMware Inc., with the open source Xen hypervisor now receiving considerable attention as well. On the outside, Hyper-V looks pretty good, with support for:

  • Up to 16 virtual CPUs per virtual machine (VM)
  • Up to 64GB of RAM per VM and up to 2TB of physical RAM per physical host
  • Up to 8 virtual network cards per VM
  • Up to 4 virtual SCSI controllers per VM
  • High-availability clustering up to 16 nodes
  • Quick migration
  • Volume Shadow Copy Service (VSS) allowing live VM backups

As you can see, Hyper-V allows you to run some pretty powerful VMs. Now let's dig a little deeper.

Not Your Father's Virtual Server
Hyper-V was basically a code rewrite, so it's tough to even compare it to Microsoft's previous virtualization incarnation, Virtual Server. Hyper-V is a 64-bit service and hence is only available on x64 editions of WS 2008. Also, Hyper-V requires hardware-assisted virtualization (such as Intel VT or AMD-V), so it can't be loaded on 64-bit platforms that lack hardware-assisted virtualization support. The Hyper-V role can run on either the full-blown WS 2008 installation or on a WS 2008 Server Core installation (Server Core is a stripped-down version of Windows Server dedicated to specific roles, such as print, Web or DNS servers).

Hyper-V offers a host of new features compared to Virtual Server, including:

  • Virtual network unicast traffic isolation
  • Quick Migration
  • Cluster support up to 16 nodes
  • Paravirtualization support

Virtual network unicast traffic isolation was one of my pet peeves with Virtual Server 2005. Virtual Server 2005 virtual switches behaved more like virtual hubs, because a VM connected to a virtual switch could potentially capture the unicast traffic of all other VMs connected to the same virtual switch. Isolation in the Hyper-V virtual switch is similar to what you've come to expect with physical Layer-2 switches -- a VM will only see the broadcast traffic of the other VMs connected to its virtual switch, so unicast traffic is fully isolated. Unicast traffic isolation is significant in that it prevents a compromised VM from easily capturing the network traffic of all other VMs on the same virtual switch.

Quick migration is a feature that lets you move a VM from one physical host in a Hyper-V cluster to another, with minimal client disruption. Quick migration requires a VM restart on the new physical host and, as a result, client connection state will be lost, thus requiring users to reconnect to the server or restart their applications after the quick migration completes. For less-critical internal server roles, downtime of a few seconds may be acceptable; however, a lack of full live migration support (described later) may prevent Hyper-V from being deployed for more-critical workloads.
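
In a Hyper-V failover cluster, a quick migration amounts to moving the VM's cluster resource group to another node. That can also be triggered from the command line with the cluster.exe tool; the group and node names below are hypothetical placeholders, and the actual group name depends on how the VM was made highly available, so treat this as a sketch rather than a recipe:

    REM Move the hypothetical "Web VM 01" cluster group (and its VM) to node HYPERV-NODE2
    cluster group "Web VM 01" /moveto:HYPERV-NODE2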

The 16-node cluster support provides a good deal of flexibility for scaling a Hyper-V deployment as an organization becomes more comfortable with Hyper-V and its virtualization needs grow. I would expect that larger enterprises with an eye on data center workload automation will be asking Microsoft to support an even higher number of cluster nodes down the road, such as 32- or 64-node clusters.

Hyper-V supports paravirtualization in a number of ways. Paravirtualization is complementary to server virtualization and is used to make an OS or device virtualization-aware. OSes benefit from paravirtualization through reduced performance overhead and additional system management options. Paravirtualized device drivers are used to reduce the emulation overhead of fully virtualized devices, such as when a virtualization platform emulates a virtual network interface card (NIC).

Hyper-V supports paravirtualized Linux guest OSes such as Novell's SuSE Linux Enterprise Server 10. Hyper-V also supports paravirtualization via virtualization-aware virtual devices. Note that Microsoft typically refers to paravirtualized device drivers as "enlightened" drivers, which are installed by mounting the Integration Services setup disk. In Virtual Server 2005, the integration services were known as "VM additions." An example of a paravirtualized device is shown in Figure 1. The Microsoft VMBus Network Adapter is a paravirtualized, virtualization-aware network interface that offers a significant performance gain over the kind of emulated legacy network interface Virtual Server 2005 relied on.

Figure 1. The Microsoft paravirtualized network interface.
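
If you want to confirm that a Windows guest is actually using the enlightened VMBus network path rather than the emulated legacy adapter, a quick check from inside the guest will do it. This is a minimal sketch that assumes the guest runs Windows with PowerShell installed; the filter simply matches the "Microsoft VMBus Network Adapter" device name described above:

    # Run inside the guest OS: list adapters whose device name indicates the
    # paravirtualized VMBus path rather than the emulated legacy adapter.
    Get-WmiObject -Class Win32_NetworkAdapter |
        Where-Object { $_.Name -like "*VMBus*" } |
        Select-Object Name, MACAddress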

Getting Started
Getting started with Hyper-V is a straightforward process. For dedicated Hyper-V servers, I recommend using WS 2008 Core in order to limit the Hyper-V server's attack profile. Note that graphical management of the Hyper-V server can be performed on another host. After installing the OS, you'll need to configure the Hyper-V server role, which is well-documented in the Microsoft TechNet online Windows Server 2008 Technical Library.
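
On a Server Core installation there's no Server Manager GUI, so the role is added from the command prompt. As a sketch based on the beta documentation (note that the package name is case sensitive, and it's worth confirming the exact name against the Technical Library referenced above):

    REM On Server Core, add the Hyper-V role from the command prompt
    start /w ocsetup Microsoft-Hyper-V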

The initial version of Microsoft System Center Virtual Machine Manager (SCVMM) doesn't support Hyper-V, so the easiest way to deploy new VMs is to use the Hyper-V Manager Microsoft Management Console (MMC). To import an existing Virtual Server 2005 VM, do the following:

  1. Copy the VM's virtual hard disk files to a storage location accessible to the Hyper-V server.
  2. Create a new VM using the Hyper-V Manager MMC and configure the VM to use the existing virtual hard disk.
  3. Boot the new VM.
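
If you'd rather script that final step, the VM can be started through Hyper-V's WMI provider instead of the MMC. This is a rough sketch; the VM name "Converted-VS2005-VM" is a hypothetical placeholder:

    # Start a VM through the Hyper-V WMI provider (hypothetical VM name).
    $vm = Get-WmiObject -Namespace "root\virtualization" -Class Msvm_ComputerSystem `
          -Filter "ElementName = 'Converted-VS2005-VM'"
    # RequestStateChange(2) requests the 'Enabled' (running) state;
    # a return value of 0 means success, 4096 means an asynchronous job was started.
    $vm.RequestStateChange(2)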

Legacy Virtual Server 2005 VMs see a different set of emulated system hardware and hence rely on a hardware abstraction layer (HAL) not fully compatible with Hyper-V. Due to the differences in virtual hardware, several reboots are required to fully convert a Virtual Server VM to a Hyper-V VM. In my test, the following steps were required:

  1. In the VM guest OS, uninstall the Virtual Server VM additions and reboot.
  2. Mount the Integration Services Setup Disk on the VM to start the Integration Services setup. The setup program will first replace the HAL, which will require another reboot.
  3. When the reboot completes, log back on to the VM and the Integration Services Setup will continue. When setup completes, you will need to reboot one final time.

I'm confident that the next version of SCVMM will include tools for migrating a Virtual Server 2005 VM to Hyper-V, eliminating the need to perform the three manual steps just listed. SCVMM already includes tools to migrate VMware virtual machines to Virtual Server 2005, so migration from Virtual Server 2005 to Hyper-V would be a logical addition.

Management
Once SCVMM is updated to support Hyper-V, I expect management to be very good, as administrators will have a single pane of glass for managing both physical and virtual resources across their infrastructure. The Hyper-V beta includes management capabilities via the Hyper-V Manager MMC (see Figure 2), Windows PowerShell and scripts that leverage Hyper-V's Windows Management Instrumentation (WMI) provider.

Figure 2. The Hyper-V Manager MMC.
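
As a small taste of the scripting side, the sketch below uses Windows PowerShell against the provider's root\virtualization namespace to list the VMs registered on a Hyper-V server; the Caption filter excludes the entry that represents the physical host itself:

    # Enumerate virtual machines through the Hyper-V WMI provider.
    Get-WmiObject -Namespace "root\virtualization" -Class Msvm_ComputerSystem |
        Where-Object { $_.Caption -eq "Virtual Machine" } |
        Select-Object ElementName, EnabledState   # EnabledState: 2 = running, 3 = off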

Like Virtual Server 2005, Hyper-V supports the Volume Shadow Copy Service (VSS), which lets you back up live VMs -- provided that the VM's guest OS supports VSS. VSS backups can be executed using Microsoft Data Protection Manager, third-party backup products that support the Hyper-V VSS writer, and scripts that call the VSS writer. You can also use Hyper-V Manager to create new snapshots and revert VMs to earlier ones.
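
Snapshot creation can also be scripted through the WMI provider. The following sketch assumes a VM with the hypothetical name "Test VM"; the call is asynchronous, so a return value of 4096 simply means a snapshot job was started:

    # Create a snapshot of a VM (hypothetical name) via the WMI management service.
    $ns   = "root\virtualization"
    $vm   = Get-WmiObject -Namespace $ns -Class Msvm_ComputerSystem -Filter "ElementName = 'Test VM'"
    $vsms = Get-WmiObject -Namespace $ns -Class Msvm_VirtualSystemManagementService
    # 0 = completed synchronously, 4096 = snapshot job started.
    $vsms.CreateVirtualSystemSnapshot($vm.__PATH)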

Overall, I like the Hyper-V Manager MMC. For small Hyper-V deployments, the tool should be all you need. However, if you're looking to automate elements of your virtual infrastructure, control VMs and Hyper-V servers via management policies, and provide users with self-service VM provisioning, you're better off buying SCVMM than developing similar solutions in-house.

Good Start, but Work Remains
Hyper-V is a big step in the right direction, but to compete with enterprise-class hypervisors such as VMware's ESX Server, more work is needed. The following features remain at the top of my Hyper-V wish list:

  • Live migration
  • Memory overcommitment
  • Security appliance integration
  • Serverless backups
  • Open standards support

Live migration is something we used to get by fine without, but now that we have it, we often wonder, "How did we ever get by without it?" Live migration lets you move a VM from one physical host to another with no interruption in session state; the network switchover typically occurs in a matter of milliseconds. Moving a VM to another physical host without any service interruption does wonderful things for IT professionals' quality of life. For example, suppose a server requires hardware maintenance, a task normally reserved only for weekends due to availability requirements. With live migration, you can migrate a VM to a new physical host in the middle of a workday, with its apps remaining available throughout the migration process.

Memory overcommitment lets you allocate more memory to VMs on a physical host than the host has physical RAM. For example, a physical host with 16GB of RAM could run 10 VMs with 2GB of RAM allocated to each. Memory overcommitment is managed by the hypervisor, which allocates enough physical memory for a VM to perform its needed tasks, with the remainder of the VM's memory paged to disk. When a VM's workload increases, additional physical memory is automatically allocated to the VM. From a planning perspective, memory overcommitment allows administrators to allocate enough memory to each VM to handle workload peaks. With staggered performance peaks among VMs on the same physical host, consolidation can be optimized without sacrificing performance. Without memory overcommitment, Hyper-V won't be able to run as many VMs per host as hypervisors that support it.

VMware's VMsafe is arguably the gold standard in hypervisor security appliance integration. At this point, I'm not asking for a similar architecture from Hyper-V, although long term it would be nice. Right now, I would like to have the ability to connect an intrusion detection system (IDS) or intrusion prevention system (IPS) VM appliance to a virtual switch. Currently, there's no way to configure a monitor port on a Hyper-V virtual switch, so there's no way to effectively monitor VM-to-VM traffic.

Serverless backup support is also a key feature that I view as a requirement in production virtualization deployments, as it allows you to back up VMs from a backup "proxy" server connected to the storage network. As a result, a Hyper-V server being backed up will see no drop in performance, as the backup CPU overhead and I/O requirements are offloaded to the backup proxy. I've spoken with a number of storage vendors who are waiting on VSS transportable snapshots (which allow an alternate server to create and manage VSS snapshots) to support Virtual Server's and Hyper-V's VSS writer. Once transportable snapshots are available, Hyper-V will have a backup solution on par with VMware Consolidated Backup (VCB).

Standards support is another area where Hyper-V falls short. Nearly all vendors in the virtualization ecosystem have committed to supporting the Distributed Management Task Force (DMTF) Common Information Model (CIM) virtualization management profiles and Open Virtual Machine Format (OVF). When Hyper-V supports these standards, third-party vendors will have more opportunities to leverage technologies like OVF to build management solutions that encompass all major hypervisors, including Hyper-V.

Final Word
If you look under the hood of Hyper-V, you're not going to see all of the bells and whistles offered by Microsoft's major virtualization competitors. However, what you will find is a good engine and plenty of room to add features. To keep things in perspective, Hyper-V is a 1.0 release of a new virtualization platform. I expect that the features I'd like to see will start to become available in 2009. In the interim, it's up to you to decide Hyper-V's readiness for your environment.

Remember, virtualization is rarely a one-size-fits-all solution. While Microsoft's Hyper-V may not be able to haul your largest production loads today, it may still have its place on your lot.
