Vendor View

Streamlining the Virtualization of Tier 1 Apps

The magic that makes tier 1 apps perform under virtualization lies in an adaptive technology known as high-performance storage virtualization.

Back in 2010 when I was at VMware, I would have bet you money that within months, the virtualization movement was going to sweep up enterprise apps with the roar of an unabated forest fire. But it didn't.

What seemed like a fait accompli at the time turned out to be far more elusive than any of us could have predicted. The naïve, invigorated by the thrill of consolidating dozens of test/development systems in a weekend, bumped hard against a tall, massive wall. On the vendor side, we fruitlessly threw more disk drives, sheet metal, and plumbing at it. The price climbed, but the wall would not yield.

Fast forward to late 2012. Many still nurse their wounds from those attempts, unwilling to make another run at the ramparts that keep Tier 1 apps on wasteful, isolated servers until someone they trust gets it done first. To this day, they put up with a good deal of ribbing from the wise systems gurus, who enjoy reminding us why business-critical apps absolutely must have their own dedicated machines.

The seasoned OLTP consultants offer a convincing argument. Stick more than one instance of a heavily loaded Oracle, SQL Server or SAP image on a shared machine and all hell breaks loose. You might as well toss out the secret book on tuning, because it just doesn't help.

To some degree, that's true, even though the operating systems and server hypervisors do a great job of emulating the bare metal. It's an I/O problem, Mr. Watson.

It's an I/O problem indeed, into and out of disks. Terms like I/O blending don't begin to hint at the complexity and chaos that such consolidation introduces. Insane collisions at breakneck speeds may be more descriptive. Twisted patterns of bursty reads queued up behind lengthy writes, overtaken by random skip-sequential tangents. This is simply not something you can manually tune for, no matter how carefully you separate recovery logs from database files.
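To see why no amount of careful layout survives consolidation, consider a toy sketch (purely illustrative, not any vendor's implementation): two guests each issue perfectly sequential reads, but the shared hypervisor queue interleaves them, so the array sees a random-looking stream of logical block addresses.

```python
# Illustrative sketch of the "I/O blender" effect: two well-behaved
# sequential streams, once interleaved, look random to the disk array.

def sequential_stream(start_lba, count, stride=8):
    """One guest's perfectly sequential read pattern (LBAs, fixed stride)."""
    return [start_lba + i * stride for i in range(count)]

def blend(*streams):
    """Round-robin interleave, roughly what a shared hypervisor queue does."""
    blended = []
    for requests in zip(*streams):
        blended.extend(requests)
    return blended

def sequential_fraction(lbas, stride=8):
    """Fraction of requests that directly continue the previous one."""
    hits = sum(1 for a, b in zip(lbas, lbas[1:]) if b - a == stride)
    return hits / (len(lbas) - 1)

vm1 = sequential_stream(0, 100)        # e.g. a database log writer
vm2 = sequential_stream(100_000, 100)  # e.g. a table scan elsewhere on disk
print(sequential_fraction(vm1))              # 1.0 -- fully sequential alone
print(sequential_fraction(blend(vm1, vm2)))  # 0.0 -- random at the array
```

Each stream is 100 percent sequential on its own, yet the blended trace contains no two adjacent requests that the array can service as a sequential run, which is exactly why hand-tuned disk layouts stop helping.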

That's before factoring in the added pandemonium when the shared array used by a DB cluster gets whacked by a careless construction worker, or a leaky pipe drips a little too much water on the redundant power supplies.

Enter the adaptive technology of high-performance storage virtualization. Whether by luck or design, the bedlam introduced when users collapse multiple enterprise apps onto clustered, virtualized servers mirrors the macro behavior of large-scale, discrete servers banging on scattered storage pools. The required juice to pull this off spans several crafts. A chunk of it involves large-scale, distributed caching. Another slice comes from auto-sensing, auto-tuning and auto-tiering techniques capable of making priority decisions autonomically at the micro level. Mixed into the skill set is the mysterious art of fault-tolerant I/O redirection across widely dispersed resources. You won't find many practitioners on LinkedIn proficient in cooking up this jambalaya. More importantly, you won't have to.
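The auto-tiering slice of that recipe can be sketched in a few lines. This is a hypothetical toy, not DataCore's algorithm or API: blocks accumulate "heat" on each access, and a periodic pass promotes the hottest blocks to the fast tier (SSD/flash) within a fixed budget, with heat decaying so stale history doesn't pin yesterday's hot data forever.

```python
# Minimal heat-map auto-tiering sketch -- hypothetical names, assumed design.
from collections import Counter

class TieringEngine:
    def __init__(self, fast_tier_slots):
        self.fast_tier_slots = fast_tier_slots  # how many blocks fit on SSD
        self.heat = Counter()                   # access counts per block
        self.fast_tier = set()

    def record_io(self, block):
        """Called on every read/write; micro-level bookkeeping only."""
        self.heat[block] += 1

    def rebalance(self, decay=0.5):
        """Periodic pass: the hottest blocks win the fast-tier slots,
        and all heat decays so placement tracks the current workload."""
        hottest = [b for b, _ in self.heat.most_common(self.fast_tier_slots)]
        self.fast_tier = set(hottest)
        for block in self.heat:
            self.heat[block] *= decay

engine = TieringEngine(fast_tier_slots=2)
for block in [7, 7, 7, 3, 3, 9]:   # simulated I/O trace
    engine.record_io(block)
engine.rebalance()
print(engine.fast_tier)            # {3, 7}: the two hottest blocks
```

A production engine would add migration costs, hysteresis, and read/write weighting, but the core idea is the same: placement decisions are made continuously by software, not by an administrator.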

In the course of the past decade, this enigmatic mojo and the best practices that surround it have been progressively packaged into a convenient, shrink-wrapped software stack. To play off the similarities with its predecessors, the industry calls it a storage hypervisor.

But I stray. What owners of business-critical apps need to know is that they can now confidently virtualize those enterprise apps without fear of slow, erratic service levels, provided, of course, that they employ a high-performance, fully redundant storage hypervisor to yield fast, predictable response from their newly consolidated environment. Instead of throwing expensive hardware at the problem, or giving up altogether, leave it to the intelligent software to manage the confluence of storage traffic that characterizes virtualized Tier 1 programs. The storage hypervisor cost-effectively resolves the contention for shared disks and the I/O collisions that had previously disappointed users. It takes great advantage of new storage technologies like SSDs and flash memory, balancing those investments against more conventional, lower-cost HDDs to strike the desired price/performance/capacity objectives.

The stuff shines in classic transactional ERP and OLAP settings, and beyond SQL databases does wonders for virtualized Exchange and SharePoint as well.

Sure, the advanced new software won't stop the veterans from showing off their scars while telling picturesque stories about how hard this was in the old days. It will, though, give the current pros in charge of enterprise apps something far more remarkable that they too can brag about -- without getting their bodies or egos injured on the way.

About the Author

Steve Houck is Chief Operating Officer of DataCore Software.


Virtualization Review
