In-Depth

Going Cloud: The Changing Nature of Virtualized Datacenters

The push to the cloud is putting heavy pressure on virtualized datacenters to change with the dynamic times.

At the VMworld conference held last fall in San Francisco, VMware Inc. President and CEO Paul Maritz told attendees that most of them have virtualized their datacenters. He noted this would, in effect, create the basis for an Infrastructure as a Service (IaaS) model in which all servers and storage are combined in a single pool of resources that can be allocated on demand to applications via private clouds in the datacenter, or public clouds offered by service providers.

In VMware's grand plan, this is the first big step toward cloud computing. The next step calls for using those virtualized datacenters to deliver Platform as a Service (PaaS) technology, which will enable users or developers to provide application clouds that VMware runs, scales and manages.

Maritz's clear-cut vision covers a lot of ground in a hurry, which fits in neatly with his high-speed, one-way approach to the cloud. It also assumes that a critical mass of companies have virtualized their datacenters, which seems likely -- at least among a significant portion of VMworld attendees. But what about the wider audience of virtualization users? Conventional wisdom suggests that some 30 percent of servers have been virtualized, but future growth may slow because so much of the low-hanging server virtualization fruit has already been picked.

Appetite for 'Creative Destruction'?
According to a December 2010 survey of information technology managers conducted by Cisco Systems Inc. -- a major infrastructure player that bakes virtualization into its Unified Computing System (UCS) along with storage and networking capabilities -- 67 percent of respondents reported that they've virtualized less than half of their production environment servers, with 28 percent indicating they've virtualized half or more, and 5 percent saying they didn't know.

Asked what factors have been inhibiting datacenter virtualization, respondents cited six factors clustered within a six-percentage-point spread: security (20 percent), stability of the virtualized environment (18 percent), difficulty building operational processes for the virtualized environment (16 percent), management/administration (16 percent), proprietary virtualization solutions tied to applications (15 percent) and conflicts between IT organizations over ownership of the virtualized environment (14 percent), with the remaining 1 percent citing other factors.

Shifting its focus to cloud computing deployments, the survey found that 52 percent of respondents have deployed cloud computing or plan to, while the remaining 48 percent either have no plans, have rejected the idea for the immediate future, or are still debating the value of cloud-based computing without having made any decisions.

When the survey asked respondents planning to use the cloud what percentage of their company's data and apps they expected to store and operate in a private or public cloud in the next three years, 32 percent said one-quarter to one-half, 27 percent said up to one-quarter, 21 percent said one-half to three-quarters, and 12 percent said three-quarters to all.

These numbers suggest that companies virtualizing datacenters face some significant obstacles, which could further deter the 48 percent of respondents that are not currently deploying cloud computing. It's also interesting to note that, at a time when security is the No. 1 factor inhibiting datacenter virtualization, 12 percent of respondents reported that in the next three years they would be storing and operating three-quarters to all of their company's data and apps in the cloud. This assertion is contradicted by a great deal of empirical evidence strongly suggesting that few -- if any -- companies have near-term plans to place such a high percentage of their data in the cloud.

The Gartner Inc. 2011 CIO Survey echoes the Cisco findings, reporting that a little less than 50 percent of all CIOs expect to operate their applications and infrastructures via cloud technologies within the next five years. It also vividly predicts a rough road ahead, stating: "This change will necessitate that CIOs reimagine IT and lead their organization through a process of creative destruction."

A potential slowdown in datacenter virtualization takes on more importance when you consider the opinion of IT industry veteran and writer Tim Negris. While he acknowledges that many VMware customers are on board with their vendor's cloud program, when it comes to Maritz's claim that virtualized datacenters are the first big step toward the cloud, he says: "In my view, it's a tiny first step to the cloud."

Dave Bartoletti, senior analyst at the Taneja Group Inc., says customers who use VMware for development and testing operations within businesses that create and deliver applications are right in line with Maritz's vision because they've gotten good at virtualization. However, that may be where VMware parts ways with some customers who prefer to take their second step to the cloud via the cheapest possible platform, such as the Amazon.com Inc. cloud, which can spin up their new apps for less.

"I think VMware's challenge is getting their platform to the service provider, getting the partnerships with the service providers so clients don't say, 'OK, it was fine when I bought it internally, and I could pay that premium price for VMware, but in a service provider model, nobody's going to want to pay a premium price for VMware infrastructure,'" Bartoletti says.

'Bloated' Datacenters Ripe for Change
Ellen Rubin is VP of products and co-founder of CloudSwitch Inc., developer of a software appliance that enables companies to securely run their applications in the cloud while remaining tightly integrated with their datacenters. Writing on the Ulitzer online media site, she said virtualization is not an automatic upgrade for datacenters. Even after virtualizing, she wrote, companies must still deal with large datacenter infrastructure footprints, virtualization licenses, management challenges and "huge" energy bills that accumulate because the servers that remain work harder than they did before.

Rubin wrote: "IT is in the middle of a fundamental transition from the rigid, siloed world of traditional datacenters toward a more elastic, responsive model where needs are met far faster and more efficiently. Rather than perpetuating a bloated datacenter, the new model will allow companies to get out of the computing-infrastructure business where appropriate, retaining only the portion that's essential to the enterprise."

Despite the mission of her company, Rubin admitted some apps are "simply not suitable for any cloud," while others belong, at least for now, in the private cloud. However, she added, some datacenter applications are currently cloud-ready.

Negris thinks companies like CloudSwitch may be in the right place at the right time -- if they can convince customers that their data will be safely stored. "That's a big if," he says, "because if you're putting all your data in the cloud, what happens if whoever's managing it for you goes out of business, burns down or whatever? This is why a lot of telephone companies probably have a strong role to play, because they're already delivering five nines of service."


VDI Impact on Datacenters
What about Virtual Desktop Infrastructure (VDI) in the cloud? Bartoletti believes VDI deployments are stalling while users weigh two questions: Do they want to spend $400,000 on new software to consolidate desktops for a three-to-four-year TCO reduction that will leave them owning a lot of storage and Windows workspaces? And will they push Windows workspaces out of their company and into the cloud? Even though VDI systems do increase the load on datacenter storage, Negris says that's happening in a controlled way with certain kinds of applications, such as call centers. However, he adds, if you consider the costs of provisioning and managing a particular desktop one way versus the other, having a lot of physical resources on the desktop is considerably more expensive than having a little more bulk on the server side and almost nothing on the desktop.
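That cost argument boils down to simple arithmetic. The back-of-the-envelope sketch below is purely illustrative: it borrows the $400,000 software figure from Bartoletti's scenario, but the seat count and per-seat costs are assumed placeholders, not numbers from either analyst.

```python
# Back-of-the-envelope VDI break-even sketch. The $400,000 software outlay
# comes from the scenario above; every other figure is an assumption.
VDI_SOFTWARE_COST = 400_000    # up-front VDI software investment
SEATS = 500                    # assumed number of desktops to consolidate
PHYSICAL_COST_PER_SEAT = 900   # assumed annual cost of a full physical desktop
VDI_COST_PER_SEAT = 350        # assumed annual server-side cost per seat

annual_savings = SEATS * (PHYSICAL_COST_PER_SEAT - VDI_COST_PER_SEAT)
breakeven_years = VDI_SOFTWARE_COST / annual_savings

print(f"Annual savings: ${annual_savings:,}")      # Annual savings: $275,000
print(f"Break-even: {breakeven_years:.1f} years")  # Break-even: 1.5 years
```

Under these assumptions the investment pays back well inside the three-to-four-year TCO window; with less favorable per-seat numbers it easily wouldn't -- which is precisely the calculation Bartoletti says is stalling deployments.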

"It's important to keep that within the realm of certain applications," Negris comments. "When it comes to general workers, I just don't have any reason to believe that a whole lot of them are getting virtualized back to the datacenter." He goes on to point out that intense apps such as those in call centers are "seat-count heavy," meaning a lot of people are doing the same job, with identical desktops that all have the same componentry and same files. "So creating a single image that can be replicated across a bunch of instances in the datacenter can lead to considerable savings, even though the net reach within the particular enterprise is actually quite small," he explains.

Breaking down the silos Rubin alluded to is complicated by the fact that applications, compute, storage and networks are frequently managed separately, which means they have different degrees of elasticity. For example, if a server is set up to provide maximum compute performance, the storage and networking layers may not flex enough to accommodate that workload. Delivering uniform elasticity up and down the stack is the province of a converged infrastructure such as the Cisco UCS, and according to Negris, achieving that flexibility is the hardest challenge for companies building internal clouds.
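The point lends itself to a toy model: a workload can only scale as far as its least elastic layer allows. The Python sketch below is illustrative only; the layer names and capacity figures are assumptions chosen to mirror the example above, not measurements from any real stack.

```python
# Toy model of stack elasticity: a workload scales only as far as the
# least elastic layer allows. Layer names and capacities are illustrative.
def effective_capacity(layers: dict) -> tuple:
    """Return the bottleneck layer and the capacity it caps the stack at."""
    bottleneck = min(layers, key=layers.get)
    return bottleneck, layers[bottleneck]

# A server tuned for maximum compute, per the example above, with storage
# and network layers that flex less (capacities in arbitrary workload units).
stack = {"compute": 100, "storage": 40, "network": 60}

layer, cap = effective_capacity(stack)
print(f"Bottleneck: {layer}, effective capacity: {cap} units")
# Bottleneck: storage, effective capacity: 40 units
```

However much compute is provisioned, the workload in this model tops out at whatever the storage layer can deliver -- the uneven elasticity Negris describes.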

Know Your Infrastructure
One way or the other, Bartoletti says, before users start moving their apps to the cloud they must first assess their internal IT requirements. If their companies are strongly oriented toward spinning up development environments to build and test software before delivering applications to end users, they may benefit more from an elastic compute and storage environment. On the other hand, businesses in more traditional industries such as financial services, health care and manufacturing have developed custom apps, so they don't need all the development tools and elasticity. For them, it's all about the cost per gigabyte of storage, maintaining data security and providing the fastest possible data access.

Once they understand their internal IT requirements, the next step on the way to the cloud is ensuring that application performance is "rock-solid from end to end." Bartoletti says this calls for a new view of management based on a cross-domain, cross-disciplinary approach that deploys tools available from companies such as Akorri Networks (recently acquired by NetApp) and Virtual Instruments, which offers a SAN monitoring and troubleshooting product.

Saving Big Bucks
Cisco is a well-connected infrastructure player, having signed a variety of deals with companies such as EMC Corp., VMware, Citrix Systems Inc., Red Hat Inc., Wyse Technology Inc. and NetApp that are aimed at enhancing enterprise datacenter environments. Despite the value of these many relationships, however, Cisco UCS -- which unites compute, network, storage access and virtualization resources in a single, energy-efficient system designed to reduce IT infrastructure costs and complexity -- is arguably the company's crown jewel.

UCS is not about denuding datacenters; it's about empowering them. In other words, Cisco is pushing the datacenter pedal to the metal. UCS has been working well for Seth Mitchell, infrastructure team manager at Slumberland Inc., a retailer offering mattresses and home furnishings through stores and its Web site. Mitchell is in charge of servers, storage, networking, client devices, security and shrink-wrapped apps such as Exchange and SQL Server. He says that in its Little Canada, Minn., datacenter, Slumberland has two UCS fabrics, each comprising two clustered Model 6120 Fabric Interconnects connected to two chassis. The chassis uplink to the company's Fibre Channel network through a pair of Cisco MDS 9134 Multilayer Fabric Switches, and to its IP network via a pair of Cisco Catalyst 4900M top-of-rack datacenter access-layer switches.


Within the four chassis, Slumberland has 10 B200 half-width blade servers and six full-width B250s. Mitchell says the B200s serve as commodity desktop virtualization platforms, providing connectivity via the Terminal Services capability of Windows Server 2008 R2. That's where the majority of users enter the system to access their desktop sessions and applications, which are virtualized on the B250s. Each of the B250s runs Microsoft Hyper-V Server 2008 R2. Slumberland operates a production cluster, a QA cluster and a development cluster, each with two wide blades as cluster members. Each cluster hosts anywhere from 30 to 70 virtual machines, and the system supports live migration between blades. Mitchell says that, as of December 2010, the company had 135 virtual servers and 15 physical servers.
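Mitchell's topology is dense enough that it may help to restate it as a simple data structure. The following Python sketch is descriptive only -- it encodes the figures he cites above, and the key names are invented for readability; it is not actual UCS configuration code.

```python
# Descriptive model of the Slumberland UCS layout as Mitchell describes it.
# This restates the article's figures as data; key names are invented.
slumberland_ucs = {
    "fabrics": 2,                       # two UCS fabrics
    "interconnects_per_fabric": "2x clustered Model 6120 Fabric Interconnects",
    "chassis_per_fabric": 2,            # four chassis in total
    "fc_uplink": "2x Cisco MDS 9134 Multilayer Fabric Switches",
    "ip_uplink": "2x Cisco Catalyst 4900M access-layer switches",
    "blades": {
        "B200_half_width": 10,          # Terminal Services entry points
        "B250_full_width": 6,           # Hyper-V virtualization hosts
    },
    "clusters": {                       # wide blades per cluster
        "production": 2,
        "qa": 2,
        "development": 2,
    },
    "vms_per_cluster": (30, 70),        # range cited, varies by cluster
    "servers_dec_2010": {"virtual": 135, "physical": 15},
}

total_blades = sum(slumberland_ucs["blades"].values())
print(f"Total blades across four chassis: {total_blades}")  # 16
```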

Asked what the primary benefits of the UCS technology have been, Mitchell says that because it has provided the company's first virtualization platform, consolidation and efficiency have been the biggest benefits. He also cites network connectivity, saying the company previously ran blade servers in a core-edge configuration in which switches were placed in the blade chassis. This made it difficult to determine if network analysts or systems administrators were responsible for handling problems.

"What we found is that by using some of the tools that are in UCS -- like the templates and static configuration of trunks -- it really just makes it a lot simpler and less confusing," Mitchell notes. "So because it's all based on templates, and there are essentially no changes after initial configuration in our environment, there are fewer questions about who's responsible for what."

Mitchell says the company has saved $368,000 so far, and will continue to save $1,678 on each logical server he deploys.
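Those two figures make the ongoing economics easy to project. A minimal sketch, using Mitchell's numbers and an assumed, purely hypothetical count of future deployments:

```python
# Projecting Slumberland's savings from Mitchell's figures. The $368,000
# baseline and $1,678 per logical server come from the article; the number
# of future deployments is a hypothetical assumption for illustration.
SAVED_TO_DATE = 368_000
SAVINGS_PER_LOGICAL_SERVER = 1_678

future_deployments = 50  # assumed, for illustration only

projected_total = SAVED_TO_DATE + future_deployments * SAVINGS_PER_LOGICAL_SERVER
print(f"Projected savings after {future_deployments} more logical servers: "
      f"${projected_total:,}")
# Projected savings after 50 more logical servers: $451,900
```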

What Do CIOs Want?
Clearly, it's a time of change for enterprise datacenters. Given the dynamic nature of that change, what plans should CIOs be formulating for their presence in the clouds? According to Bartoletti, there are two questions to ask. No. 1: Do CIOs want customers to be able to reach them through the cloud? And No. 2: Do they want a major cloud presence, meaning they'll start cutting datacenter costs and moving data assets to the cloud? If the choice is No. 2, the objective is likely some form of private cloud, and the best way to prepare is by evaluating tier 1 apps and making sure they're redesigned and reworked with cloud protocols in mind. In this environment, any hard-wired database application that requires direct connectivity and access is going to be an obstacle in the path to cloud migration.

After reviewing core applications and understanding how they operate, the next step is reviewing how much money and time are being spent managing backup and replication. The key here is not falling into a state of dependence on recovering data from tape.

"If you're still managing a huge data environment yourself, and haven't explored pushing at least your non-critical archive data to a cloud service, that's going to be another millstone around your neck," Bartoletti declares.

Reviewing tier 1 apps can lead to significant cost savings. For example, if you have a 15-year-old Oracle database application, Bartoletti advises asking Oracle Corp. whether it offers a solution for running it off-site while maintaining the same service level and security. This is exactly what happened with a large user in a university environment that bases its business on Oracle Financials and wanted to improve its operational efficiency and service level agreement (SLA) compliance. The user met with Oracle and discovered that, via Oracle Online, it could run its business on the Web. The user closely evaluated the change for three months, liked the huge cost savings it would realize, and committed to the move. Then it all went south when the user discovered the prohibitively high cost of training the Oracle support team that would work with it. The problem: the university had so many undocumented, ad hoc internal features, and so many end-of-month routines based on manual scripts, that it couldn't be assured of the same service levels it was used to. The user canceled the deal because it no longer made sense.

Utilizing Open Source
Nathan Krick is a senior analyst at Rogers Corp., a midsize manufacturing company with 1,500 employees based in Chandler, Ariz. Although the company isn't a UCS user, it is a Cisco customer using a range of the company's switch and router products throughout its IT infrastructure, which includes a mixture of two large core datacenters and small divisional shops.

Krick says Rogers -- which is a 100 percent VMware shop -- got an early start on virtualization five years ago to contain the "explosion" of servers and storage it was facing. As a result, he explains: "We are now more than 99 percent virtualized. Anything that doesn't have specialized hardware requirements is virtual and sits in a pool of resources."

What he describes sounds similar to Maritz's vision of virtualized IaaS-based datacenters. However, despite Rogers' advanced state of virtualization, it's not yet ready to take the PaaS step to application development in the cloud because of certain limitations with SpringSource, which VMware bought in 2009 for its Web application development and management expertise. As Krick says, the SpringSource approach is based on open source technology, and Rogers has done a lot of work with Microsoft technologies such as ASP and ASP.NET.

Noting that his company is moving more toward Java, he says: "If we start making some of those transitions, I can see we'd definitely want to start leveraging the SpringSource-type technologies that VMware offers."

Overall, there's no doubt that cloud technology is driving the further virtualization of datacenters in companies large and small. That puts the burden on users to configure those datacenters to deliver the most bang for their virtualization bucks as they head for the clouds.
