Virtual Observer

The Rise of the Virtual Infrastructure Performance Manager

The emerging class of performance management solutions for high-performance virtualization.

If you brought virtualization into your company, or have managed it for the last few years, you've probably enjoyed a nice reputation bump. Consolidation savings, workload mobility, higher availability... all those virtualization benefits that have translated into some killer investment returns, right? So, virtualization guru, what are you doing for Act II?

Most likely, you're virtualizing more important, business-critical apps (while keeping an eye on the cloud for even greater efficiencies). High-performance databases, office productivity apps, giant Web farms, and the like: these represent the new virtualization frontier, according to the customers I speak to most often. Virtualization has certainly matured, and I'd like to spend some time in this and subsequent columns exploring how this maturity is driving innovative and exciting new management solutions.

A few years ago, Taneja Group coined the acronym VIO, for virtual infrastructure optimization, to gather and compare the emerging class of management solutions designed to help customers virtualize more and virtualize faster, while retaining infrastructure command and control. VIO solutions then primarily focused on provisioning, allocation, and capacity control for lower-impact and easily consolidated workloads.

Shifting Focus to Virtual Infrastructure Performance
We've come a long way, baby. Companies now depend on virtualization for their most business-critical and performance-sensitive applications. But virtualization breaks most of our tried-and-true IT performance management strategies. And performance is the key word here: Managing top-tier applications is less about consolidation or accounting and much more about performance, security, and availability.

The challenge has expanded, so we've expanded the category. Now, we're tracking Virtual Infrastructure Performance Management (or VIPM) solutions. Performance management is not a new discipline, but virtualization does create a new infrastructure layer from which to monitor, predict, and tune performance.

VIPM demands a new vantage point from which to analyze performance. Instead of bottom-up (traditional stove-piped resource/element management) or top-down (traditional APM, or application performance management), VIPM requires a "middle-out," more holistic, infrastructure-level IT performance approach.

Why? Because virtualization really has changed how we manage performance, in several ways that are hardly controversial:

  • Scale overwhelms processes: There are now too many moving parts to size, adjust, relocate and monitor without significant automation.
  • Abstraction obscures visibility: Virtual workloads connect to resources and to each other transiently, and differently than they did when they were statically deployed.
  • Boundaries break down: Virtual machines aren't just applications, servers, network devices, or storage, but a new amalgam of all of these, straining the traditional IT separation of duties.
  • Mobility creates complexity: Before we can hope to diagnose a performance problem, we have to find the virtualized resources involved and map them to one another.

In short, change is now the only constant. It's not very useful, now, to know how many IOPS a particular array can deliver, or the CPU utilization on a certain host, or the bandwidth utilized on one adapter, as independent data points. They each change too often, and none provides enough information to understand how the infrastructure is responding to load.
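To make that point concrete, here's a minimal sketch (all names, thresholds, and numbers are illustrative assumptions, not vendor guidance) of why point metrics need to be correlated along a workload's path rather than read in isolation. Each reading alone can look healthy while the path as a whole is tightening:

```python
from dataclasses import dataclass

@dataclass
class Snapshot:
    """One point-in-time reading along a virtual workload's path."""
    vm: str
    host_cpu_pct: float    # CPU utilization on the current host
    array_iops: int        # IOPS observed at the backing array
    adapter_mbps: float    # bandwidth on the adapter in the path

def saturation_score(s: Snapshot) -> float:
    """Naive composite: how close the whole path is to its limits.
    The capacity ceilings (90% CPU, 20K IOPS, 900 Mbps) are assumed."""
    return max(s.host_cpu_pct / 90.0,
               s.array_iops / 20000.0,
               s.adapter_mbps / 900.0)

# The same VM before and after a live migration: no single metric
# crosses an obvious red line, but the composite shows the shift.
before = Snapshot("vm-oltp-01", host_cpu_pct=45.0, array_iops=8000, adapter_mbps=300.0)
after  = Snapshot("vm-oltp-01", host_cpu_pct=60.0, array_iops=17000, adapter_mbps=350.0)

print(saturation_score(before))  # 0.5 -- driven by host CPU
print(saturation_score(after))   # 0.85 -- now driven by the storage path
```

The design point is the `max()`: the path is only as elastic as its most constrained tier, and which tier that is can change every time a workload moves.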

Performance management tools have historically been designed with particular IT jobs in mind: application managers, storage and server admins, NOC staff, and, most recently, VMware (or Xen or Hyper-V) managers, for example. In contrast, VIPM requires us to imagine a new IT role: the team that needs to understand and synthesize data across all legacy infrastructure tiers (most of them virtualized to some degree), analyze and plan, and adjust in real-time to maintain performance across a range of applications with competing requirements.

There's no right way to build a VIPM solution, just like there's no right way to run a web farm, an OLTP system, or a Fibre Channel SAN. No two IT departments will have the same levels of expertise or requirements across each infrastructure dimension. By the same token, every VIPM solution vendor will bring different levels of domain expertise to the table, and partial VIPM solutions that excel in certain areas can still deliver significant benefits.

Today, no one vendor can lay claim to a complete VIPM solution. This makes the category exciting and contentious, so I'll try to help make sense of it by taking a closer look at the technologies and tools of some of the top VIPM innovators in my next several columns. For each, I'll explore what I consider the essential elements of a VIPM solution:

  • Instrumentation: How are capacity and performance metrics gathered and from where?
  • Analytics: How is virtual infrastructure-level performance measured?
  • Visibility: How is performance communicated?
  • Diagnosis: How is troubleshooting simplified and accelerated?
  • Remediation: How are capacity and performance changes automated?
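One way to picture how those five elements fit together is as a loop: gather, analyze, present, diagnose, propose. The sketch below is a deliberately toy version under assumed names, thresholds, and rules; it shows the shape of the loop, not any vendor's actual pipeline:

```python
# Hypothetical VIPM loop: each function maps to one element above.

def instrument():
    """Instrumentation: gather metrics (here, a canned sample)."""
    return {"vm-web-01": {"cpu_pct": 92.0, "latency_ms": 40.0},
            "vm-db-01":  {"cpu_pct": 35.0, "latency_ms": 4.0}}

def analyze(metrics, cpu_limit=85.0):
    """Analytics: flag workloads above an assumed CPU threshold."""
    return [vm for vm, m in metrics.items() if m["cpu_pct"] > cpu_limit]

def visualize(metrics, hot):
    """Visibility: render a one-line status per workload."""
    return [f"{vm}: cpu={m['cpu_pct']}% {'HOT' if vm in hot else 'ok'}"
            for vm, m in metrics.items()]

def diagnose(metrics, vm):
    """Diagnosis: a trivial rule -- high CPU plus high latency
    points at host contention rather than the storage path."""
    return "host contention" if metrics[vm]["latency_ms"] > 20.0 else "storage path"

def remediate(vm, cause):
    """Remediation: propose (not execute) a change."""
    return f"migrate {vm}" if cause == "host contention" else f"rebalance storage for {vm}"

metrics = instrument()
hot = analyze(metrics)
for line in visualize(metrics, hot):
    print(line)
for vm in hot:
    print(remediate(vm, diagnose(metrics, vm)))  # prints "migrate vm-web-01"
```

Real solutions differ in which stages they automate and how far remediation goes (recommend versus act), which is exactly where the vendors I'll cover diverge.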

The virtualized infrastructure must be, more than anything else, elastic. The trick is figuring out how elastic it was under yesterday's workloads, where the flexibility problems are right now, and how much it can stretch for tomorrow's demands.

In the weeks to come, I'll investigate VIPM vendors that bring specialized storage and network expertise to the table, as well as those that specifically target virtualized servers and applications. Some of the names you may be familiar with, like VMware and Virtual Instruments; others are up-and-comers like BlueStripe and Veloxum. Stay tuned!

About the Author

A senior analyst and virtualization practice lead at Taneja Group, Dave Bartoletti advises clients on server, desktop and storage virtualization technologies, cloud computing strategies, and the automation of highly virtualized environments. He has served more than 20 years at several high-profile infrastructure software and financial services companies, and held senior technical positions at TIBCO Software, Fidelity Investments, Capco and IBM. Dave holds a BS in biomedical engineering from Boston University and an MS in electrical engineering and computer science from MIT.
