In-Depth

Transforming Private Clouds

Maintaining control and security is essential when building out virtualized datacenters.

No one can deny that the cloud has gone mainstream. What used to be a niche way to bootstrap a new project or company is now supporting mainstream corporations and high-end applications. As with many trending technologies, the market has become fractured and cloud now means more than off-site virtual server instances or Internet-delivered applications. It can also be a technology that transforms your internal datacenter and its resources.

One of the big selling points of public cloud offerings is the ease of acquiring computing power without the up-front capital expenditure. When looking at private clouds, however, you have to switch your thinking from public commodity resources to considering what you already have, and transforming it into a single, more flexible, on-demand resource.

This is where the argument over what counts as cloud begins. Some argue that only a public cloud truly defines cloud computing, because your company no longer manages the infrastructure layer of its IT yet gains access to practically limitless resources. Public or private, though, the real power of cloud is its efficient use of scalable resources. While a private cloud has finite storage and computing power, it offers full control over your compute resources and, more importantly, your data.

What Is the Value of Private Clouds?
Consider a cloud datacenter. Is it really much different from the many datacenters across the world hosting large numbers of servers on big networks? Not really. They all run servers with mostly Intel or AMD processors, have storage area networks (SANs), and manage TCP/IP networks to access those files and applications. Your datacenter may not be as big as those hosting the Amazon.com Inc. public cloud, for example, but many companies have built up significant computing resources, put them online 24x7 and become quite good at keeping their infrastructures highly reliable.

What public cloud vendors have that many corporate datacenters don't is the layer that allows flexible utilization of previously purchased resources. Driving full utilization, reducing waste and providing quick provisioning of services on legacy infrastructure are the keys to low-cost, profit-making public cloud operations, and this model can be integrated into your datacenter via a private cloud.

The first question is always: How much does it cost? When you take a look at the typical total cost of ownership (TCO), you must include servers, storage, hardware, software, labor and other operational needs over the life of the project or service. The problem with TCO is that it often assumes specific pieces of your IT infrastructure go with certain projects and software deliveries. When you're evaluating a new architecture, it's necessary to consider the totality of your cloud delivery and how that will pay dividends not only for current projects, but for future ones as well. Think of private cloud as the next evolutionary step in efficiencies that go beyond server consolidation and extend to storage, networks and the general compute power of your datacenter.
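To see why per-project TCO understates the value of a shared pool, consider a rough sketch. All of the costs and server counts below are hypothetical, and the point is only the shape of the comparison: three projects buying dedicated gear versus the same workloads sharing one right-sized pool.

```python
def tco(servers, cost_per_server, storage_tb, cost_per_tb, annual_ops, years):
    """Total cost of ownership: up-front hardware plus operations over the life."""
    return servers * cost_per_server + storage_tb * cost_per_tb + annual_ops * years

# Per-project purchasing: three projects each buy their own servers and storage.
dedicated = 3 * tco(servers=4, cost_per_server=6000, storage_tb=10,
                    cost_per_tb=400, annual_ops=20000, years=3)

# Pooled private cloud: the same workloads share a common, better-utilized pool.
pooled = tco(servers=8, cost_per_server=6000, storage_tb=25,
             cost_per_tb=400, annual_ops=45000, years=3)
```

With these illustrative numbers, the dedicated approach totals $264,000 against $193,000 for the pool, because idle per-project capacity is the hidden cost TCO spreadsheets tend to bury.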

What a Private Cloud Entails
Many organizations now rely on virtualization to the point where they've put themselves in a position to transform their existing infrastructures using a cloud layer. If you've already deployed a virtualization platform such as VMware, Citrix Xen or Microsoft Hyper-V, you now have a basis upon which to build a private cloud. Server virtualization is a cornerstone of cloud computing and is a common requirement for private clouds.

Private cloud users deploy a centralized management layer that controls their compute, storage and, in some cases, networking resources. A good private cloud solution will also include some sort of self-service portal for automatic provisioning of required resources. Today, many users manually deploy and test components such as servers for each new software project. That process should be streamlined so those resources become available simply by deploying the code. This kind of integration could include specific APIs that allow code to be deployed in an automated fashion according to defined processes.
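No two portals expose the same interface, but the shape of the idea can be sketched in a few lines. The class and method names below are entirely hypothetical, standing in for whatever provisioning API your cloud layer exposes; the point is that requesting a sized instance becomes a single call checked against pool capacity, rather than a hardware ticket.

```python
class SelfServicePortal:
    """Hypothetical self-service layer: provision instances against a capped pool."""

    def __init__(self, capacity_vms):
        self.capacity_vms = capacity_vms
        self.instances = {}

    def provision(self, name, cpus=2, memory_gb=4):
        # Requests succeed only while the resource pool has room left.
        if len(self.instances) >= self.capacity_vms:
            raise RuntimeError("resource pool exhausted")
        self.instances[name] = {"cpus": cpus, "memory_gb": memory_gb}
        return self.instances[name]

    def decommission(self, name):
        # Returning capacity to the pool is as important as taking it.
        self.instances.pop(name, None)

portal = SelfServicePortal(capacity_vms=2)
portal.provision("build-server", cpus=4, memory_gb=8)
```

A deployment script could call `provision()` as part of pushing new code, which is the kind of API-driven automation described above.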

Cloud Stacks
The private cloud market is hot, and many vendors are offering a wide range of stacks. To understand these offerings clearly, let's see how vendors are putting them together. Microsoft cloud initiatives include the Fast Track program, a set of software designed to get users up and running. Windows Server 2008 R2 with Hyper-V is the solution core, working closely with the trinity of Microsoft System Center Operations Manager, System Center Virtual Machine Manager (VMM) and System Center Service Manager. Microsoft relies on Opalis to provide workflow automation, and depends on the VMM R2 integrated Self-Service Portal for easier provisioning. The Microsoft solution stack doesn't call for specific servers or hardware, but instead leaves the way open for business partners such as Dell Inc., Hewlett-Packard Co. and Hitachi Ltd. to integrate their hardware.

VMware Inc. has bet its future on cloud, and it's using ESX to integrate cloud services into vSphere. The latest Cloud Infrastructure Suite release is strongly pushing vSphere implementations toward a cloud structure. This suite includes vSphere 5, vCenter Site Recovery Manager and vShield for security. It also includes the vSphere Storage Appliance, which adapts all types of server storage to add into the cloud storage pool. Last, vCloud Director handles management. The controversy over vSphere 5 pricing continues, as competitors such as Microsoft attempt to clearly demonstrate that they offer comparable products for far less money. How well this argument resonates with dedicated VMware shops remains to be seen.

These integrated offerings from Microsoft and VMware are far from the only games in town. Most notably, OpenStack has the support of more than 115 companies in its effort to provide interoperability standards that avoid vendor lock-in. This is no small open source project. Big players such as Citrix Systems Inc., Rackspace Inc., Microsoft, AT&T and NASA, just to name a few, are supporting the standards-based cloud layer because OpenStack offers its followers the freedom to plug in products such as hypervisors through APIs.

Achieving Private Cloud Scalability
When cloud is discussed, the scalability of any solution should be a primary question. Of course, the ultimate in scalability is to be found with public cloud solutions. Their seemingly unlimited ability to add computing power, storage and data, along with the multiple datacenter locations offered by Tier 1 providers, is a big plus.

Before users convert their virtualized datacenters into private clouds, they must decide how to deploy their compute power. The first step is isolating some portion of the datacenter and dedicating it to the new cloud solution. It will probably also be necessary to deploy some new hardware, while relying on the software layer to bring existing systems along for the ride; wringing more work out of hardware you already own is where the additional efficiencies come from.

Now you need to think like a cloud customer. Amazon.com has experienced cloud outages within specific segments of its datacenter locations that prevented customers from conducting business, but others who dispersed their cloud instances were able to weather the storm. If you have multiple datacenters, be sure you tie other locations into your cloud solution, or at least have traditional solutions like straight virtual machines (VMs) or an entire server and network infrastructure available in case of a large -- scale outage.

One of cloud's primary tenets is the ability to expand and contract compute resources as demand dictates. This means understanding your typical loads and providing enough headroom for the addition of more resources. You may want to create multiple resource pools and apply your business rules to those pools to ensure proper sizing. For example, high-performance processors may be made available for database tasks, while standard 1U servers are pooled for Web services. This approach to resource pools includes storage and networking in addition to CPU and memory. With the right kind of management and workflow, this planning will enable automated scale-out on demand.
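The pool-and-rule idea above can be sketched as a small placement routine. The pool names, sizes and the 20 percent headroom target are all illustrative assumptions, not any vendor's defaults: workloads are routed to the pool their business rule dictates, and a placement that would eat into the headroom returns nothing, signaling that it's time to scale out instead.

```python
# Hypothetical resource pools with illustrative capacities (in VM slots).
POOLS = {
    "database": {"profile": "high-performance", "capacity": 16, "used": 10},
    "web":      {"profile": "standard-1u",      "capacity": 40, "used": 22},
}

def place(workload, pools, headroom=0.20):
    """Route a workload to its pool by business rule, preserving headroom."""
    pool = pools["database"] if workload == "database" else pools["web"]
    if pool["used"] + 1 > pool["capacity"] * (1 - headroom):
        return None  # refuse rather than overcommit; trigger scale-out instead
    pool["used"] += 1
    return pool["profile"]
```

A real management layer would weigh CPU, memory, storage and network together, but the principle is the same: business rules plus headroom targets make automated placement safe.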

When scaling out, there will be times when available capacity is insufficient. Traditionally, this would mean deploying additional capacity in the form of more servers and storage. In a cloud environment, however, keeping up with expanding needs is easier because it isn't necessary to buy for individual projects or workloads. One option is to take advantage of the hybrid cloud features being built into many of these solutions. Hybrid cloud lets users provision some public cloud Infrastructure as a Service, adding scalability in cases where spiking demand wouldn't normally justify more permanent equipment purchases.
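The hybrid decision reduces to a simple rule, sketched here with a hypothetical function (no vendor's hybrid API looks exactly like this): fill private capacity first, and send only the overflow to public IaaS when bursting is allowed.

```python
def placement(demand_vms, private_capacity, burst_enabled=True):
    """Decide how many instances run privately vs. burst to public IaaS."""
    private = min(demand_vms, private_capacity)
    overflow = demand_vms - private
    if overflow and not burst_enabled:
        raise RuntimeError("capacity exceeded and hybrid burst disabled")
    return {"private": private, "public": overflow}
```

A demand spike of 120 instances against a 100-instance private pool would thus burst 20 instances outward, and the overflow comes home when the spike passes.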

Private Cloud Does Not Always Equal Security
The public cloud still hasn't answered a major question for many enterprises: How will providers guarantee security for all of this mission-critical, sensitive data? This quintessential question never seems to go away. Until it's answered, keeping important data on-site and behind the firewall works just fine for many IT shops, even though it doesn't guarantee security for remote users.

A physical server is a clear base you can build a moat around using specific network connections, firewalls and physical access controls. The private cloud diminishes that base as a result of resource pooling. Ease of provisioning can diminish it further by allowing unchecked expansion of instances that aren't properly tested or secured. In addition, because instances, storage and data are constantly moving between servers and networks, they can open avenues for data leakage or loss. With this in mind, be sure to always:

  1. Insist on understanding any encryption mechanisms that will be used.
  2. Manage your resource pools according to security parameters.
  3. Maintain security in your VM environment that will self-defend and provide robust reporting. In addition, set any self-service tools to limit unnecessary access to functions.
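Rules 1 and 2 can be expressed as a small admission check in the self-service layer. The pool names, tier labels and policy table below are invented for illustration; the idea is that a provisioning request is refused unless encryption is confirmed and the target pool is cleared for the workload's security tier.

```python
# Hypothetical policy table: which data tiers each resource pool may host.
POOL_TIERS = {
    "dmz-pool":      {"public"},
    "internal-pool": {"public", "restricted"},
}

def authorize(pool, data_tier, encrypted):
    """Admit a workload only if its data is encrypted and the pool allows its tier."""
    if not encrypted:
        return False  # rule 1: insist on encryption for data in the pool
    return data_tier in POOL_TIERS.get(pool, set())  # rule 2: pool security parameters
```

Wiring a check like this into the self-service portal is one way to satisfy rule 3 as well, since every refused request is a reportable event.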

Other Cloud Options
Private clouds have some additional core functionality that should be considered. Chargeback is an important feature that enables organizations to bill according to usage, much as Amazon.com does when customers use an EC2 instance or transfer data. Implementing chargeback early is important because it discourages wasteful consumption of resources from the start.
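The mechanics are simple metering arithmetic. The rate card below is made up for illustration (it is not EC2's actual pricing): multiply each department's consumption by the internal rate and the bill writes itself.

```python
# Hypothetical internal rate card, in the spirit of per-hour instance billing.
RATES = {"vm_hour": 0.12, "gb_transferred": 0.09}

def chargeback(vm_hours, gb_transferred, rates=RATES):
    """Bill a department for the resources it actually consumed."""
    return round(vm_hours * rates["vm_hour"]
                 + gb_transferred * rates["gb_transferred"], 2)
```

Even a nominal internal rate changes behavior: a department that sees a monthly number attached to its instances has a reason to switch off what it isn't using.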

Instant provisioning is a blessing and a curse, so track instance inventory properly and rely on good reporting to keep control of the cloud. You don't want to end up without resources when you need them most because someone forgot to decommission some servers and storage after they were done with their project.
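That kind of inventory hygiene is easy to automate. As a minimal sketch, assuming the management layer can report a last-used date per instance (the field and instance names here are hypothetical), a periodic job can flag anything idle past a threshold as a decommissioning candidate.

```python
from datetime import date, timedelta

def stale_instances(inventory, today, max_idle_days=30):
    """Report instances nobody has touched recently, so capacity can be reclaimed."""
    cutoff = today - timedelta(days=max_idle_days)
    return [name for name, last_used in inventory.items() if last_used < cutoff]
```

Feeding the resulting list into a review (or straight into the portal's decommission call) keeps forgotten project servers from quietly consuming the pool.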

Have no doubt that cloud computing is a transformative addition to the IT landscape, and keep in mind that embracing private clouds allows users to closely maintain control and security, while still taking advantage of many of the benefits cloud has to offer. Just remember that once you pick a solution, you'll be living with it for a long time. If that's off-putting, you may want to consider a Tier 1 player or go open source to avoid lock-in. Either way, transformations are often a difficult sell up-front, but they provide many benefits down the line. The cloud is that kind of transformation.

Virtualization Review
