In-Depth

Planning Primer: Architecture

You can't just slap your virtual infrastructure together -- you need to know where each piece lives in the production stack to get the most out of it.

There's a lot of hype in hypervisors, especially with everyone and their dog getting into the virtualization game, but one thing is certain: virtualization and hypervisors are part of your future if you work in a data center. If you're not using virtualization yet, then you'll want to start with the design of your virtualization architecture. If you're already using virtualization, you've probably come to realize that a virtualization architecture is an absolute must as you start building your virtualization layer cake. After all, you'll be using virtualization for many years to come, and like a cake, you'll want your virtualization architecture to stand on its own and not flop the first time it comes under pressure.

Learn Your ABCs
Building a virtualization architecture and moving to a dynamic data center -- a data center where you completely control the provision of services on an as-needed basis in response to changing business requirements -- is as simple as A-B-C.

  • First, architect your virtualization solution.
  • Second, build your virtualization infrastructure.
  • Third, convert physical machines to virtual workloads.

Throughout this process, you'll need to keep key factors in mind and monitor the impact virtualization will have on various aspects of your infrastructure, including your network operations, business processes and the bottom line. Also keep in mind that you should manage the move to virtualization like a project and aim to get it right the first time.

The 7 Layers
Start by learning what can be virtualized and how it affects a complete virtualization architecture. A complete virtualization architecture includes seven different layers, each affecting a different aspect of the data-center infrastructure.

1. Server virtualization is focused on converting a physical instance of an operating system into a virtual instance or virtual machine (VM). True server virtualization products can virtualize any x86 or x64 OS such as Windows, Linux and Unix. There are two aspects of server virtualization:

  • Software virtualization runs the virtualized OS on top of a software virtualization platform, which is itself running on an existing OS.
  • Hardware virtualization runs the virtualized OS on top of a software platform running directly on top of the hardware -- there's no existing OS. This is the typical hypervisor, like VMware's ESX, open source Xen or Microsoft's Hyper-V.

When working with server virtualization, the physical server is the host of all of the virtual OSes, which become workloads running on the host.

2. Storage virtualization is used to merge physical storage from multiple networked devices so that they appear as one single storage pool. One of the key strengths of storage virtualization is the ability to rely on thin provisioning, which is the assignment of a logical unit number (LUN) of storage of a given size, but provisioning it only on an as-needed basis. For example, if you create a LUN of 100GB and are only using 12GB, only 12GB of actual storage is provisioned. This significantly reduces the cost of storage, as you pay as you go (see Figure 1).
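The arithmetic above can be sketched in a few lines. This is an illustrative model only -- the class and method names are hypothetical, not a real storage API:

```python
# Minimal sketch of thin provisioning: a LUN advertises a logical size,
# but physical storage is consumed only as data is actually written.

class ThinLUN:
    def __init__(self, logical_gb):
        self.logical_gb = logical_gb   # size the host sees
        self.used_gb = 0               # physical storage actually provisioned

    def write(self, gb):
        if self.used_gb + gb > self.logical_gb:
            raise ValueError("write exceeds logical LUN size")
        self.used_gb += gb             # physical pool grows only on demand

lun = ThinLUN(100)   # host sees a 100GB LUN
lun.write(12)        # only 12GB of real storage is consumed
print(lun.logical_gb, lun.used_gb)   # prints: 100 12
```

The host believes it owns 100GB from day one, but the storage pool pays out only what's written.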

Figure 1. Using thin provisioning through storage virtualization can significantly reduce storage costs.

3. Network virtualization lets you control available bandwidth by splitting it into independent channels that can be assigned to specific resources. In addition, server virtualization products support the creation of virtual network layers within the product itself. For example, using this virtual network layer would let you place a perimeter network on the same host as other production virtual workloads.
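The channel-splitting idea can be illustrated with a simple allocator. This is a conceptual sketch under assumed names -- real network virtualization is enforced by switches and hypervisors, not application code:

```python
# Illustrative only: divide a physical link's bandwidth into independent,
# named channels assigned to specific resources.

def split_bandwidth(total_mbps, allocations):
    """allocations: dict mapping channel name -> requested Mbps."""
    if sum(allocations.values()) > total_mbps:
        raise ValueError("requested channels exceed physical bandwidth")
    return dict(allocations)

# Carve a 1Gbps link into production, perimeter and management channels.
channels = split_bandwidth(1000, {"production": 600,
                                  "perimeter": 200,
                                  "management": 100})
print(channels["perimeter"])   # prints: 200
```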

4. Desktop virtualization allows you to rely on VMs to provision desktop systems. Desktop virtualization has several advantages, not least of which is the ability to centralize desktop deployments and reduce distributed management costs. This is because users access centralized desktops through a variety of thin or unmanaged devices.

Planning Checklist

Don't try to build your entire virtualization architecture at once. Use a step-by-step approach.

Step 1: Lay out the different aspects of virtualization and how you might use them in your organization. Then see how they'll fit together in your architecture.

Step 2: Once you have an idea of what you want to do in terms of virtualization, lay down the foundation for the resource pool.

Step 3: Convert physical resources to virtual service offerings when you feel ready. Start with simple workloads such as Web servers and work your way up to more complex workloads.

The best place to start will always be in the lab. Test everything fully before you move to production systems.

-- D.R. and N.R.

5. Application virtualization uses the same principles as software-based server virtualization, but instead of providing an engine to run an entire OS, it decouples productivity apps from the OS. It also transforms the distributed-application management model because you only need to virtualize an application once; from then on, the application-virtualization engine will make the virtualized application run on any version of Windows. What's even better is that products such as Acresso Software's InstallShield will take all of the applications you've already packaged to run with the Windows Installer Service in .MSI format and convert them to virtualized formats overnight in a batch process. InstallShield supports both the Citrix and the VMware formats. Virtualize your applications and you'll never have to touch them again! Work is also being done by major vendors such as Microsoft, Citrix Systems Inc., InstallFree Inc., Symantec Corp. and VMware Inc. to apply application virtualization to server applications.

6. Presentation virtualization, until recently called Terminal Services, provides only the presentation layer from a central location to users. While the need for this is diminishing because of the introduction of technologies such as application virtualization, the protocols used for presentation virtualization are at the forefront of both desktop virtualization and server virtualization technologies, as they're the protocols used to access and manage the virtual workloads.

7. Management virtualization is focused on the technologies that manage the entire data center -- both physical and virtual -- to present a single, unified infrastructure for the provisioning of services. This isn't necessarily performed through a single interface. For example, in large data centers, you'll want to divide different service deliveries into layers and separate operations between them. In smaller data centers, you may not have the staff to divide the responsibilities, but you should at least ensure that administrators wear different "hats" when they work with the various layers of your architecture. In fact, you should make sure that two key layers are segregated at all times:

  • Resource pools, which include the collection of hardware resources -- host servers, racks, enclosures, storage and network hardware -- that makes up the data-center infrastructure.
  • Virtual services offerings (VSOs), which are workloads made up of the VMs -- servers and desktops -- that are client-facing and offer services to end users.

Products to Help You Get There

There are three main commercial solutions for server virtualization:

  • XenServer from Citrix Systems Inc. comes in four flavors. Express Edition is a free starter version of the product; Standard Edition is the basic version, which supports two virtual services offerings (VSOs) at once; Enterprise adds the ability to pool hardware resources and run unlimited VSOs; and Platinum Edition adds dynamic provisioning of both hosts and VSOs.
  • Microsoft's flagship hypervisor, Hyper-V, was released in June. Other offerings include Virtual Server 2005 R2 Service Pack 1 and Virtual PC 2007. Microsoft's Application Virtualization 4.5, nicknamed "App-V," should be available by the time you read this. It was at the release candidate stage as of mid-June.
  • VMware Inc. offers the most comprehensive virtualization platform to date. VMware Virtual Infrastructure is a complete platform based on ESX. VMware also offers the Virtual Desktop Infrastructure for desktop virtualization and ThinApp -- formerly called Thinstall -- for application virtualization.

Oracle Corp., Novell, Red Hat Inc., IBM Corp., Sun Microsystems Inc., Virtual Iron Software Inc. and others also offer their own hypervisors. Keep this in mind when you choose your virtualization platform and management products. Management products, especially, should be hypervisor-agnostic to ensure that you can continue to manage your infrastructure if you end up running more than one hypervisor.

-- D.R. and N.R.

Security Considerations
One key factor in this segregation is the creation of different security contexts between resource pools and VSOs. Because your administrative teams are likely not the same and don't have the same responsibilities -- resource pool administrators must ensure that proper resources are made available for VSOs; VSO administrators must ensure that proper services are delivered to end users -- you limit the possibility of contamination from the virtual to the physical world by using completely different security contexts between the two.

For example, your physical layer should use very strong passwords and ensure that all communications between management consoles and physical hosts are encrypted at all times, as passwords are communicated over these links. Your virtual layer should also use these principles, but the security context for users won't need such stringent policies.

In some instances, the segregation of physical and virtual layers is performed automatically. This occurs when you run a Windows infrastructure in the VSO, but use a non-Windows hypervisor in the resource pool. If you use the same OS at both layers, make sure you consciously create separate security contexts between the two.

Proper Structure Is Paramount
Working with multiple virtualization layers can become quite confusing without proper structure. Consider the following: To provide high availability for VSOs, you must ensure that your resource pools are designed in clusters and that all storage is shared among these clusters. After all, you don't want to have a host server running 20 VMs fail without backup; you'll get a lot of irate phone calls when the users of the 20 virtual workloads are left hanging.

For most organizations, this means rethinking how you purchase and structure hardware. Rely on a simple rule: For each host server you acquire, you must also acquire its twin. This means buying all host servers in pairs. This way, you'll always have a backup system in case of crashes.

But you can also cluster in the virtual layer. For example, when building an Exchange Server 2007 Mailbox server to store user mailboxes, you'll most likely want to take advantage of the failover clustering and replication capabilities to ensure that the Mailbox service provided by the VSO is highly available. The same might apply to a SQL Server database server. If you want your Web servers to be highly available, you'll probably build them into a Network Load Balancing cluster.

Clusters at the virtual layer are not only good practice, but are often a must. That's because clustered machines are easier to maintain -- offload the workload to another machine in the cluster so you can patch the current one, and so on. This means systems management has little impact, if any, on end-user productivity. And clustering in the virtual layer is practically free because you don't need custom hardware to support it.
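The drain-and-patch routine described above can be sketched as a few lines of logic. The data structures and names here are illustrative -- in practice you'd use the hypervisor's own migration tooling:

```python
# Sketch of why clustering eases maintenance: move a node's workloads to
# its cluster peers, patch it with no user impact, then return it to service.

def patch_node(cluster, node):
    """cluster: dict mapping host name -> list of running workloads."""
    peers = [n for n in cluster if n != node]
    # Move each workload to the least-loaded peer before patching.
    for vm in list(cluster[node]):
        target = min(peers, key=lambda n: len(cluster[n]))
        cluster[node].remove(vm)
        cluster[target].append(vm)
    # The node is now empty and can be patched or rebooted safely.

cluster = {"host-a": ["web1", "web2"], "host-b": ["web3"]}
patch_node(cluster, "host-a")
print(cluster["host-a"])   # prints: []
```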

But building clusters within a cluster can become confusing. Take another example: Creating virtual servers to provide services to virtual desktops running virtual applications inside virtual networks while relying on virtual storage. Quick: Which layer isn't virtualized in this scenario? You can see how confusion can develop.

Build from the Ground Up
This is where a visual architecture can help. In this architecture, you build seven different layers of virtualization and address each with a particular construct. The first layer is the physical layer and will include each component within the resource pool.

The second is the storage layer, which will rely on storage virtualization technologies to provision only as much storage in each LUN as you actually need for both physical and virtual resources.

The third layer is the allocation layer. Because VSOs are nothing more than a set of files in a folder somewhere, they can and should be moved dynamically from one physical host to another as additional resources are required, or when management operations are required on a specific host. It's this allocation layer -- or the tracking of VSO positioning -- that transforms the data center from static to dynamic. It's also within this layer that you create and assign virtual networks based on your hypervisor's capabilities.
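The heart of the allocation layer is simply a record of which host each VSO currently runs on. A minimal sketch, with hypothetical names:

```python
# Illustrative sketch of the allocation layer: track VSO positioning so
# workloads can be moved between hosts for load or maintenance reasons.

class AllocationTable:
    def __init__(self):
        self.placement = {}            # VSO name -> current host name

    def place(self, vso, host):
        self.placement[vso] = host

    def migrate(self, vso, new_host):
        old = self.placement[vso]
        self.placement[vso] = new_host
        return old, new_host           # record the move for auditing

table = AllocationTable()
table.place("mail-01", "host-a")
print(table.migrate("mail-01", "host-b"))   # prints: ('host-a', 'host-b')
```

Real hypervisor management suites do exactly this bookkeeping, plus the live migration itself; the point is that tracking placement is what makes the data center dynamic rather than static.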

The fourth layer is the virtual layer; this is where you determine what will be virtualized, and will eventually come to include server and workstation workloads.

The fifth layer addresses both physical- and virtual-resource management. Keep in mind that both resource levels rely on a different security context. In addition, you may use different tools to manage the two infrastructure levels.

The sixth layer is the PC layer. While many will move to virtual desktop infrastructures to reduce the cost of managing distributed systems, others will find that by moving to virtualized apps, the entire business model of distributed desktop management is positively transformed; so much, in fact, that they'll never go back to application installations.

The seventh and final layer is the business-continuity layer. Again, because VSOs are nothing more than files in a folder somewhere, business-continuity practices become much simpler: Just replicate the makeup of every VM to an offsite location and you'll have a duplicate of your entire data center available at all times. In addition, the physical resources in the remote data center need not be the same as those in the production data center. After all, in a disaster, you only need to start and run critical services. This should take considerably fewer resources than running the entire production system.
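Because a VM really is just a folder of files, a naive replication pass reduces to a recursive copy. The sketch below assumes illustrative paths and ignores what real replication tools add (incremental, block-level transfer and consistency guarantees):

```python
# Naive business-continuity sketch: copy each VM's folder to an offsite path.
# Paths and layout are assumptions for illustration only.

import shutil
from pathlib import Path

def replicate_vms(vm_root, offsite_root):
    for vm_dir in Path(vm_root).iterdir():
        if vm_dir.is_dir():
            dest = Path(offsite_root) / vm_dir.name
            # dirs_exist_ok requires Python 3.8+; repeats overwrite in place
            shutil.copytree(vm_dir, dest, dirs_exist_ok=True)
```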

There you have it. A seven-layer virtualization cake, with each layer building upon the others to support the new, dynamic data center. Now that you understand each layer and what goes into it, you can start building the infrastructure that will run it. And when you feel ready, you can convert your physical workloads into VSOs and completely transform your data center once and for all.
