Au Contraire -- I Beg Not to Differ

If you want to pick a fight at any gathering of IT folks today, just question the value proposition of server virtualization products. Rarely has there been so much politicization of a technology tool, so much hype around a set of software products. Also curious is the patience among users with wares whose marketectural claims so often exceed actual architectural capabilities, as we have seen with hypervisor products. The situation is aptly described by a claim advanced recently by one VMware evangelist, who declared, "VMware isn't a product. We're a movement!"

That virtualization of servers, storage and networks is inevitable should be taken as a given. It's only logical that, as hardware becomes commoditized, software should scale independently of the kit. That, in the final analysis, is how you can capture the cost savings that should accompany the ever-improving economies inherent in the design and manufacture of commodity components such as processors, mainboards, memory and disks.

A corollary to this inevitability thesis is also true. Perpetuating a tight coupling of software to hardware eventually hobbles engineers and IT administrators alike with respect to important things like design improvement, performance and capacity scaling, and management. Tight coupling leads to unwieldy, proprietary stovepipe products that complicate cross-vendor management and drive up cost of ownership in heterogeneous environments. In the end, IT costs too much.

Data storage technology provides a case in point. In that space, the three-letter vendors have gone out of their way to bloat array controllers with "value-add" software features designed as much to perpetuate margins as to deliver value to consumers. While the cost per GB of disk media has fallen 50 percent per year since the mid-1980s, the cost of a "finished array" -- one with value-add software embedded on the array controller -- has risen by as much as 160 percent per year over the same period. This is despite the fact that everyone is just selling a box of Seagate drives.

To some extent, this tight coupling of software to storage-controller hardware reflects a sincere effort to smarten storage arrays and reduce the administrative burden on consumers. But the value to the vendor of stovepipe designs in their array products -- raising barriers to cross-vendor product management and thereby locking in consumers and locking out competitors -- has been cited repeatedly as a distinct appeal of the strategy.

Virtualization in storage means separating much of the value-add feature set from the hardware controllers on arrays and instantiating it instead on an independent software layer. That layer enables the value-add goodness to be shared universally or selectively across lots of spindles. It provides a means for software improvements to be made without requiring a rip-and-replace of hardware, and for hardware improvements to be rendered without the horrendous interoperability-testing matrix associated with a complex controller design.
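
To make the idea concrete, here is a minimal sketch of that independent software layer; the class and method names are purely illustrative assumptions, not any shipping product's API. The point is simply that the value-add services are assigned per volume in software, over a pool that any vendor's hardware can join:

    class BackendArray:
        """A plain box of disks exposing raw capacity (hypothetical interface)."""
        def __init__(self, name, capacity_gb):
            self.name = name
            self.capacity_gb = capacity_gb

    class VirtualizationLayer:
        """Pools heterogeneous arrays and layers value-add services over the pool."""
        def __init__(self):
            self.arrays = []    # any vendor's hardware can join the pool
            self.services = {}  # volume name -> services applied in software

        def add_array(self, array):
            self.arrays.append(array)

        def provision(self, volume, size_gb, services=()):
            # Snapshots, replication, thin provisioning and the like are assigned
            # per volume in the software layer; no controller rip-and-replace needed.
            self.services[volume] = set(services)
            return f"{volume}: {size_gb} GB with {sorted(self.services[volume])}"

    pool = VirtualizationLayer()
    pool.add_array(BackendArray("vendor-a-shelf", 24000))
    pool.add_array(BackendArray("vendor-b-shelf", 48000))
    print(pool.provision("finance-vol", 500, services=("snapshot", "replication")))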

From an architectural standpoint, storage virtualization also enables value-add functions to be contextualized as application-facing services that can be allocated on some sort of intelligent basis to data itself. This creates the opportunity to provision data with exactly the set of services it requires, and is part of a strategy of purpose-building infrastructure with capacity allocation and capacity utilization efficiency in mind.

Regarding Servers
Server virtualization enthusiasts are seeking to achieve the same general advantages by separating applications from underlying machine resources. They note that server resources, including CPU and memory, are used inefficiently by one-server/one-OS designs. They want to decouple distributed computing software from underlying hardware in a way that enables flexible, multi-tenant hosting of applications, with or without their native OSes, on "any" hardware platform. "Any" is heavily caveated.

On its face, server virtualization is a noble quest, but by no means a new one. In fact, the concept is old news, perfected over a three-decade span in the mainframe environment, where logical partitioning has long supported multi-tenancy.

Issues with the distributed server virtualization story begin to arise when closer examinations are made of the capabilities and limitations of the enabling technology: hypervisor products themselves. Hypervisors are software products still very much under construction. Given this fact, caution should be the watchword for any IT shop pursuing what vendor pitchmen call a "VMware strategy" or a "Hyper-V strategy" or a "Citrix Xen strategy."

In fact, the use of the word "strategy" in connection with any shrink-wrapped software product should give the IT architect pause. When you need a full-fledged strategy just to deploy a product, it's more often than not a reflection of the fact that the product itself is not yet fully baked, and that its deployment may have consequences that aren't readily intuited or predicted. The more mature a product is, the more likely it is to deploy in a transparent and non-disruptive manner. Once ubiquitous, the product is usually invisible -- a given.

When vendors use the word strategy, it means something different from what the term means in the practical world of business. Strategy derives from the Greek strategia, "the office of the general." The "generals" in most business organizations live in the front office of the firm and have "chief" as the first word of their job title. They want a cost-containment strategy from IT, which often translates to an infrastructure-consolidation and -optimization strategy. Given an edict from the corporate generals to contain costs through consolidation, IT planners must open their toolbox and survey the range of technologies that might be used to achieve the goal. A hypervisor is only one tool to meet that need -- and not always the best one.

This is the long way to make a simple point. Telling an IT planner that he needs a "VMware strategy" is like telling a mechanic that he needs a "wrench strategy." It makes no sense.

Get to Know File Systems
At the end of the day, the bulk of the servers that are being virtualized under the current crop of hypervisors don't need hypervisors at all. If analysts are correct, the preponderance of servers that are being stacked up in hypervisor hosting environments are file servers and low-traffic Web servers. Consolidating file servers can be accomplished using another virtualization product that gets little mention in the trade press -- something called the file system.

File systems, one of nine layers of virtualization commonly seen in contemporary distributed-computing platforms, provide the means to consolidate access to multiple physical data repositories using the metaphor of a file folder or library. If a file server is getting long in the tooth, simply move its contents into a file folder bearing that server's name in a larger server system, as sketched below.
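
Here is a minimal sketch of that folder-per-retired-server move in Python; the paths and server names are hypothetical stand-ins for whatever a real shop would use:

    import shutil
    from pathlib import Path

    def absorb_file_server(old_mount: Path, new_root: Path, server_name: str) -> Path:
        """Copy an aging file server's contents into a folder bearing its name."""
        target = new_root / server_name
        target.mkdir(parents=True, exist_ok=True)
        for item in old_mount.iterdir():
            if item.is_dir():
                shutil.copytree(item, target / item.name, dirs_exist_ok=True)
            else:
                shutil.copy2(item, target / item.name)
        return target

    # Hypothetical example: the old share mounted locally, absorbed into the big server's tree.
    # absorb_file_server(Path("/mnt/oldfileserver01"), Path("/srv/shares"), "oldfileserver01")
    # Users then browse to \\bigserver\shares\oldfileserver01\... instead of the retired box.

Permissions, open files and share definitions still need attention, of course, but no hypervisor is involved in any of it.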

For low-traffic Web servers, deploying a management front end like Plesk enables consolidation of the management functions in one place. Ask large Web-hosting facilities how they've been operating hundreds of small Web sites on a handful of servers for the past decade without the aid of a hypervisor. They'll usually extol the virtues of a common management front end.

Hypervisor vendors counter that more and more mission-critical business apps will shortly find their way onto virtualized server platforms, which may well be true. But as that scenario advances, the problems with hypervisors become more pronounced.

Just as there are no "massively parallel" accounting applications to take advantage of grid computing designs that were hyped as the next big thing a decade ago, today there are few business software products designed explicitly to run in a hypervisor environment. Standing up applications in a virtual server by spoofing each one into believing that it owns the entire machine is problematic, especially given the fact that different apps request underlying hardware resources in different ways.

You need a robust hypervisor to broker many different resource requests in an efficient manner. The current problems with inefficient I/O processing by hypervisors under relatively simple file and Web server workloads are the tip of an iceberg that will loom larger as more demanding apps are added to the stack.

Robustness is also partly defined by how well a system responds to a failure state. Current hypervisors are essentially Jenga! rigs: When a guest machine fails, the entire stack of guest machines fails. The insulation of guests in an x86 virtual server today is a far cry from what it is in a zSeries mainframe.

To cope with the problem, the hypervisor vendors are currently proffering a failover option involving the dynamic re-hosting of guests and their workloads on other virtual servers. Assuming that failover works -- which is by no means guaranteed -- this strategy introduces another twist into the virtualization story.

Every server that might conceivably become a host for a given collection of guest machines must be configured to support the most demanding guest machine in the collection. This can introduce substantial cost into server hardware configurations. As resource- or access-intensive applications -- those that require many connections to LANs and storage resources -- are virtualized, every possible target platform for re-hosting those applications must offer the same connectivity.

Today, with file servers and low-traffic Web servers, this is not a big issue. Many small servers requiring only one or two LAN connections can be consolidated fairly readily into a more robust server platform with one or two LAN connections. The virtualization of a resource- or access-intensive application, on the other hand, is a game changer. These apps, which were originally hosted on a dedicated server with 20 blue wires and 10 orange wires -- and the associated I/O cards, cables and switch ports -- must be accommodated not only on the primary virtual server host, but also on every potential host for the application and its workload. So every potential host, whether for failover or for VMotion or other dynamic application re-hosting models, must be configured for 20 LAN connections and 10 Fibre Channel fabric connections. Thus, the hosting hardware costs of virtualization may well accelerate as the dream of virtualizing all applications on present-day hypervisors is realized. This can be expected, in turn, to offset some -- or most -- of the cost advantages touted by server-virtualization advocates.
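
A quick back-of-the-envelope sketch shows how that connectivity requirement compounds; the guest mix and host count below are illustrative assumptions, not measurements from any real environment:

    # Assumed guest mix; per-guest port counts are made up for illustration.
    guests = {
        "file-server": {"lan": 2, "fc": 0},
        "web-server": {"lan": 2, "fc": 0},
        "erp-app": {"lan": 20, "fc": 10},  # the 20-blue-wire/10-orange-wire application
    }

    potential_hosts = 6  # every box that might inherit these guests via failover or VMotion

    # Each potential host must be wired for the most demanding guest it could inherit.
    lan_per_host = max(g["lan"] for g in guests.values())
    fc_per_host = max(g["fc"] for g in guests.values())

    print(f"Per host: {lan_per_host} LAN + {fc_per_host} Fibre Channel connections")
    print(f"Across {potential_hosts} potential hosts: "
          f"{lan_per_host * potential_hosts} LAN ports, "
          f"{fc_per_host * potential_hosts} Fibre Channel ports")

One demanding guest in the mix drives every potential host to 20 LAN and 10 Fibre Channel connections, or 120 LAN ports and 60 fabric ports across six hosts, before a single workload has actually moved.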

Unpopular Opinion
Other issues, such as management of virtual server sprawl and compatibility with underlying storage-infrastructure designs, should also factor into sound planning of server virtualization adoption. For all of these issues, there's nothing inherently wrong with hypervisor software, generally speaking. However, the selection and deployment of hypervisor products need to be approached judiciously.

Just saying this aloud at a conference or seminar is to invite the wrath of both the vendor involved and, unfortunately, many IT pros in attendance. A vendor representative recently took great umbrage at a slide I presented suggesting that server virtualization should be used "judiciously."

I've also encountered the tech equivalent of town hall meeting shout-downs when advancing this view from IT operatives who simply don't want to hear it. Whether their opposition derives from cognitive dissonance and blind faith in hypervisor marketecture, deep-seated disgust with a dominant OS vendor, fatigue from the forced march of software patching and hardware refreshes, or discomfort with views that contrast with what they've read in analyst reports or the trade press, they've reached the conclusion that they really do need a hypervisor strategy and that server virtualization is not a product, but a movement. Anyone who says otherwise is either wrong-headed or a paid shill for the one-server/one-OS industry.

As IT professionals, we're tasked with managing corporate information assets in a manner and on a platform that returns business value from IT investments. In my experience, this translates to selecting technology based on what the business and its applications and data require -- and not on the basis of what a vendor wants to sell us. Server virtualization may -- or in some cases may not -- be part of the solution.
