In-Depth

Virtualization Enables Cloud Computing: 7 Top Transition Questions Answered

How will your systems make the transition? We offer answers to seven questions about cloud migration.

Administrators focused on virtualization have come to the realization that IT performance isn't locked away in any one device, whether that's a server, a storage array or a networking switch. Performance comes from the combination of those devices in the datacenter, which together form resources that can be flexibly used for your computing needs. The same concept applies to the cloud: virtualization enables cloud computing, so its principles hold true there. That being the case, what's the best way to approach the cloud? How will your systems make the transition? And can you merge the virtual administrator's skill set with the cloud provider's needs? If it's all virtual, you can make the transition to support cloud technology.

Cloud technologies come in multiple flavors. There's Software as a Service (SaaS), which essentially provides hosted applications via a Web browser; Platform as a Service (PaaS), which provides a platform to deploy applications without regard to the hardware or software layers; and Infrastructure as a Service (IaaS), which is closest to virtualization inside of a corporate datacenter. IaaS runs virtual machines (VMs) with an exposed operating system and a clear delineation between virtual instances. The most popular implementation of IaaS is the Amazon Elastic Compute Cloud (EC2), which enables users to spin up their own machines with a unique OS implementation.

Virtualization vs. the Cloud
The basic infrastructure of the cloud is enabled by virtualization. The platform used by the cloud vendor is not necessarily important, but many use similar technologies from Citrix Systems Inc., Microsoft and VMware Inc. The point is, many organizations are using the cloud to run VMs without the costs and effort of maintaining hardware, bandwidth and datacenters. With the cloud, you don't worry about VM balance, networks or where a storage partition comes from. You do still have to worry about performance, scalability of the applications and the added challenge of dealing with an infrastructure you can no longer touch.

Although the promise of the cloud for decision makers may come down to cost (as it usually does for virtualization), a virtualization administrator's approach to the cloud is usually different from the business approach. As usual, administrators need to push past the marketing and into the technical details. End users don't care where an application is served from, as long as it continues to work as advertised.

What appears as a SaaS application to end users may really be IaaS from an IT perspective. In the end, line-of-business applications and custom code will have to be pushed into the cloud to make it truly effective. In other words, that code still needs to run on a machine unless it's being completely written or rewritten for PaaS cloud platforms that run without the confines of a server.

In this article, we'll look at seven questions about moving smoothly to the cloud using virtualization, and attempt to provide answers that will make your transition problem-free.

1. How do you deploy/provision and manage a cloud instance as opposed to one on your current VMs?
The ease with which you deploy in the cloud is really a question of how you've implemented your applications. Every IT department is in a different place with toolsets, management and automation, and some of those make it easier than others to integrate the cloud into the existing infrastructure. For example, the VMware vCloud program certifies specific cloud providers as compatible with the vCloud infrastructure provided by vSphere. This can make it easy to take VMs from your local datacenter into the cloud by treating internal and external cloud services as a single pool of resources. The main benefit is provisioning and moving VMs between your hosts and those cloud instances with ease.

That's not to say providers without this kind of integration are hard to use. It's not necessary to buy into these cloud-infrastructure tools in order to realize cloud integration, especially on a smaller scale. In fact, depending on your point of view, you may want to keep your cloud resources distinctly separate from your normal virtual environment. Take, for example, the Amazon EC2 cloud platform. Just as you would create images with your base software and OSes to deploy new virtualized servers, Amazon allows users to create instances using the Amazon Machine Image (AMI) standard. You can even use its set of shared AMIs, which include base installations of Windows and various flavors of Linux servers, as well as certain applications from companies like IBM Corp. and Oracle Corp. You simply find an existing AMI from the list, customize it with your own settings and software, and bundle the result into a new image.
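For a sense of what that looks like outside the AWS console, here's a minimal sketch using the boto3 Python SDK (one option among many, not something prescribed here); the AMI ID, key-pair name and instance type are hypothetical placeholders:

```python
# Sketch: launching an EC2 instance from an existing AMI with the boto3 SDK.
# The AMI ID, key pair and instance type below are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # a shared or custom AMI
    InstanceType="t3.medium",
    KeyName="my-key-pair",             # key pair created beforehand
    MinCount=1,
    MaxCount=1,
)

instance_id = response["Instances"][0]["InstanceId"]
print(f"Launched instance {instance_id}")
```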

The extra steps required when you first implement your cloud instance on Amazon EC2 are related mostly to initial security setup. Every EC2 instance needs a key pair and authorization for a connection. For example, with a Windows EC2 instance, it's necessary to create a key pair and authorize Remote Desktop Protocol (RDP) access -- typically restricted to your public IP address -- using a series of commands. Once that's completed, you can connect with an RDP session and manage the machine remotely.
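A minimal sketch of that initial security setup, again using the boto3 Python SDK; the key-pair name, security group and IP address are hypothetical, and the security group is assumed to already exist:

```python
# Sketch: initial security setup for a Windows EC2 instance with boto3 --
# create a key pair and open RDP (TCP 3389) only to a single public IP.
# The key-pair name, group name and IP address are placeholders, and the
# security group "windows-rdp" is assumed to exist already.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create the key pair and save the private key (used later to retrieve
# the Windows administrator password).
key = ec2.create_key_pair(KeyName="win-rdp-key")
with open("win-rdp-key.pem", "w") as f:
    f.write(key["KeyMaterial"])

# Authorize RDP in the security group, restricted to one public IP.
ec2.authorize_security_group_ingress(
    GroupName="windows-rdp",
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3389,
        "ToPort": 3389,
        "IpRanges": [{"CidrIp": "203.0.113.25/32"}],  # your public IP
    }],
)
```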

The Microsoft Windows Azure platform isn't as straightforward, simply because Microsoft designed Windows Azure as a cloud platform rather than as VMs in the cloud, the model Amazon and Terremark Worldwide Inc. offer. However, Microsoft is moving toward a cloud infrastructure model as the company adds RDP accessibility and VM support. For now, most Windows attributes on Windows Azure are accessed programmatically. Simply put, Windows Azure requires a bigger change in your approach and your server management, depending on how much of that management you already handle through scripting.

2. How integrated can these virtual instances be in your normal infrastructure?
Many organizations initially approached the cloud as a singular solution, meaning they used it as a single point for their software. While this is a great fit for startups, established IT organizations rarely have the ability to start with a green field. Integrating the cloud into existing infrastructures requires a firm understanding of your service catalog. The hard part is not running your machine in the cloud, but understanding which software that instance needs to interact with. Running applications that require little to no interaction with your back-end will initially be the most successful. This includes Web sites and other self-contained applications.

3. What software and applications are appropriate to move into the cloud?
Virtualization takes more consideration than just how many machines will fit on a physical server. It also requires a firm understanding of storage, networks and operations. The cloud is no different: it requires a closer look at aspects of your overall systems you may not normally consider. Start with the architecture of the software you want to move to the cloud. Old applications built around stovepipe or client-server architectures are classically inflexible, so moving them exposes dependencies that can be hard to track down -- dependencies on names, servers, client software and databases are all potential issues. The problem? Our organizations are all filled with these older applications that still run our businesses.

The best kind of application to move to the cloud is one built on a service-oriented architecture (SOA). SOA defines the various functions of a software system as services, usually exposing that functionality through Web services. These kinds of modern systems enable a clear delineation of functionality, with proper documentation on how services work together to form contracts. If these systems are built with a registry at the center of their documentation, it's easier to understand the required dependencies. That, in turn, lets you take advantage of techniques like DNS load balancing to deploy hybrid resources, keeping some virtual servers internal while using the cloud for additional capacity. It may also enable dispersing applications across the regions where cloud providers maintain their datacenters.
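As a sketch of the DNS side of that hybrid approach, weighted records can split traffic between an internal server and a cloud instance. The example below uses Amazon Route 53 through the boto3 Python SDK as one possible implementation (any DNS service with weighted or round-robin records would do); the zone ID, hostnames and weights are hypothetical:

```python
# Sketch: weighted DNS records (Amazon Route 53 via boto3) splitting traffic
# between an internal server and a cloud instance used for extra capacity.
# The hosted zone ID, hostnames and weights are placeholders.
import boto3

route53 = boto3.client("route53")

def weighted_record(identifier, target, weight):
    """Build one weighted CNAME entry for the change batch."""
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "app.example.com.",
            "Type": "CNAME",
            "SetIdentifier": identifier,
            "Weight": weight,
            "TTL": 60,
            "ResourceRecords": [{"Value": target}],
        },
    }

route53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",
    ChangeBatch={"Changes": [
        weighted_record("internal", "app1.corp.example.com", 70),
        weighted_record("cloud", "ec2-198-51-100-7.compute-1.amazonaws.com", 30),
    ]},
)
```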

Understanding what to deploy in the cloud also has to do with understanding application requirements, additional infrastructure and service-level agreements (SLAs). The same principles apply to a cloud deployment, except that with the cloud you have to come to terms with the fact that your cloud instance lives outside your network. If the software you want to host in the cloud is not based on SOA, you can expect nothing but headaches.

4. Are there dependencies holding you back?
The one thing you want to avoid is causing an unintentional outage when providing services in the cloud because there are dependencies on an internal system. Such dependencies include databases, batch processes, server names, persistent connections and specific file locations. This gets back to understanding your services and how they have to change if you're running them in the cloud. Frequently, a quick configuration-file change is all that's needed to target the proper resource.
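As a trivial illustration of that kind of configuration-file change, a hypothetical app.ini might only need its database host repointed from an internal server name to the cloud endpoint. The sketch below uses Python's standard configparser; the file name, section, keys and hostnames are all made up for the example:

```python
# Sketch: the kind of configuration-file change that repoints a dependency
# when a service moves to the cloud. File name, section, keys and hosts
# are hypothetical.
#
# app.ini before the move:          app.ini after the move:
#   [database]                        [database]
#   host = sqlsrv01.corp.local        host = db.example.cloudprovider.com
#   port = 1433                       port = 1433
import configparser

config = configparser.ConfigParser()
config.read("app.ini")

db_host = config.get("database", "host")
db_port = config.getint("database", "port")
print(f"Connecting to {db_host}:{db_port}")
```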

5. If there are internal dependencies, will you always be limited in what you can move into the cloud?
There are new approaches that may be able to solve some of these infrastructure-related issues. For example, Microsoft Active Directory (AD) now supports federation, allowing users to extend the internal AD security system to the Internet and potentially run the vast majority of internal systems that are dependent on AD users and groups.

There's another option: making cloud instances appear as a part of your internal infrastructure. A service such as Amazon Virtual Private Cloud enables users to set up a dedicated VPN to the cloud provider. This further enables them to address their cloud instances as internal resources and even extend their internal subnet addressing to those cloud resources. This kind of solution, while it adds overhead and expense, makes addressing cloud resources much easier.
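For a rough idea of what that setup involves, here's a minimal sketch using the boto3 Python SDK; the CIDR block, ASN and gateway address are placeholders, and a real deployment also needs route propagation and configuration on the on-premises VPN device:

```python
# Sketch: extending internal addressing to the cloud with a VPC and a VPN
# connection (boto3). The CIDR block, ASN and gateway IP are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# A VPC that reuses part of the internal address space.
vpc_id = ec2.create_vpc(CidrBlock="10.20.0.0/16")["Vpc"]["VpcId"]

# Customer gateway = your on-premises VPN device; VPN gateway = the AWS side.
cgw_id = ec2.create_customer_gateway(
    Type="ipsec.1", PublicIp="198.51.100.10", BgpAsn=65000
)["CustomerGateway"]["CustomerGatewayId"]

vgw_id = ec2.create_vpn_gateway(Type="ipsec.1")["VpnGateway"]["VpnGatewayId"]
ec2.attach_vpn_gateway(VpnGatewayId=vgw_id, VpcId=vpc_id)

# The IPsec tunnel that lets cloud instances be addressed as internal resources.
vpn = ec2.create_vpn_connection(
    Type="ipsec.1", CustomerGatewayId=cgw_id, VpnGatewayId=vgw_id
)
print("VPN connection:", vpn["VpnConnection"]["VpnConnectionId"])
```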

The tower model on which many IT organizations are based can also cause heartburn when you add cloud resources into the mix. The typical datacenter includes many manual processes that must be completed by specific groups in order to make today's VM software functional. It's an extension of what we developed 10 years ago, when general-purpose administrators gave way to groups specialized in specific roles, such as storage, or in specific applications, such as e-mail.

In this tower structure, you may have to request storage on the SAN and request IP addresses and VLAN settings from the network team. Now add the complexities of external resources and scalable applications. For example, how long does it take to request new firewall rules? What about requesting secure service accounts to use with your applications, or modifying which machines are configured for a specific load balancer? How long do these requests take? Your virtualization effort may have automated many of these previously labor- and time-intensive processes. If so, you're in a good position to extend your virtual infrastructure to the cloud. If not, you need to consider getting your organization up to date with better automation.

6. How do you ensure efficiency, continue to keep a check on costs and ensure performance?
Two words: Control scalability. When you push to the cloud, you're no longer relying on the concept of "big iron." You need to understand how resources are scaled out as they're required. Sure, some things just need a big server, like a SQL back-end, and cloud providers will give you one -- but for the most part you'll be relying on distributed instances that scale out to add capacity. Each cloud provider has a different way to add new instances, decide when enough is enough and shut down that extra capacity. The cost is driven by the number of hours each cloud instance runs and how many gigabytes you push to and from the cloud service, so maintaining control is important.
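To make that cost model concrete, here's a back-of-the-envelope estimate in Python; the hourly and per-gigabyte rates are hypothetical placeholders, not any provider's actual pricing:

```python
# Sketch: a rough monthly cost estimate driven by instance-hours and data
# transfer. The rates below are hypothetical, not real provider pricing.
INSTANCE_HOURLY_RATE = 0.12   # $/instance-hour (placeholder)
TRANSFER_RATE = 0.10          # $/GB in or out (placeholder)

instances = 4
hours_per_month = 730
gb_transferred = 500

compute_cost = instances * hours_per_month * INSTANCE_HOURLY_RATE   # 350.40
transfer_cost = gb_transferred * TRANSFER_RATE                       # 50.00
print(f"Estimated monthly cost: ${compute_cost + transfer_cost:,.2f}")  # $400.40
```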

Initiating new resources from the application level is the quickest path to success. Take, for example, a Web application that's normally steady but may see occasional spikes in usage that require extra resources. If proper stress-testing has been done on the application, you'll know how much load will cause it to slow. With automated counters, the system can programmatically launch a new instance that scales out your application without intervention from system administrators. The same goes for scaling back down: you can automatically spin down those instances when you're done with them, saving money on cloud resources.
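One way to wire up those automated counters is with a monitoring alarm driving a scaling policy. The sketch below uses AWS Auto Scaling and CloudWatch through the boto3 Python SDK as one possible implementation; the group name and CPU threshold are hypothetical, and a mirror-image alarm and policy (not shown) would scale back down:

```python
# Sketch: scaling out on an automated counter -- a CloudWatch alarm on CPU
# triggers an Auto Scaling policy that adds one instance. Names and the
# threshold are placeholders; a matching scale-in policy is omitted.
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

scale_out = autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-tier",
    PolicyName="add-one-instance",
    AdjustmentType="ChangeInCapacity",
    ScalingAdjustment=1,
)

cloudwatch.put_metric_alarm(
    AlarmName="web-tier-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": "web-tier"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=70.0,                       # the load level found in stress testing
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[scale_out["PolicyARN"]],
)
```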

Implementing this kind of automation isn't a challenge -- Amazon and Microsoft make it easy -- but your manual processes may get in the way. In fact, having to make manual requests for infrastructure-related resources like a new machine instance for additional capacity means you're only using half of the cloud's ability. You have to sell the automation of the cloud platform you choose as well as the cost savings in order to take full advantage of that agility. The ability to program and script for that agility exists, and cloud vendors are banking on further automation to make them even more efficient.

7. How can you ensure system performance when machines are outside of your controlled infrastructure?
The traditional model of datacenters dictates that you put everything in one spot and watch it all closely. In that environment, understanding how to set and meet SLAs has become the art of operations. Cloud services now give operations a resource that requires less governance, but must still be monitored.

You'll want to have a reliable baseline for those applications that may touch cloud resources. This means proper stress-testing and understanding of the typical load. The traditional metrics of CPU load, uptime and memory mean less than they used to. Understanding how the network performance of the Internet connection will affect performance of important systems -- especially those that will have a hybrid model using some internal systems and some cloud systems -- becomes critical. If you take advantage of Microsoft SQL Azure as your database platform of choice, you want to ensure you're not bottlenecking that database via an unreliable Internet connection. The only good way to measure all the tiers of the application is to follow each transaction from front to back. If this kind of measurement can't be built into the application itself, or the application isn't properly load tested, then the ability to respond quickly with the resources at hand will become a manual effort.
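If measurement can be built into the application, it can be as simple as timing each tier of a transaction and logging the result. The sketch below is a minimal Python illustration; the tier names and functions are hypothetical stand-ins for your own front-end and database calls:

```python
# Sketch: following a transaction front to back by timing each tier inside
# the application itself. Tier names and functions are hypothetical.
import functools
import logging
import time

logging.basicConfig(level=logging.INFO)

def timed_tier(tier_name):
    """Log elapsed time for one tier of a transaction."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return func(*args, **kwargs)
            finally:
                elapsed_ms = (time.perf_counter() - start) * 1000
                logging.info("%s took %.1f ms", tier_name, elapsed_ms)
        return wrapper
    return decorator

@timed_tier("web front-end")
def render_page(order_id):
    return lookup_order(order_id)

@timed_tier("cloud database tier")
def lookup_order(order_id):
    time.sleep(0.05)  # stand-in for a query over the Internet link
    return {"order": order_id}

render_page(42)
```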

You don't want a finger-pointing method of troubleshooting; it's hard enough when applications live in a single datacenter. Understanding how each service tier should be performing and when the applications need more or less of a resource is the hard part that requires proper stress-testing and SLA definition from QA, developers and management, as well as your operational support.

Moving Methodically into the Cloud
Moving to the cloud is a given for almost every IT organization. You want to take all those reasons to do it and apply some perspective regarding your systems. Sure, you can now implement tools to transfer your existing VMs to the cloud. VMware and Microsoft certainly want to make that easy, but that's a small technical hurdle. The big leap is the structure of your applications and how they interact with your overall infrastructure. Once you have a grasp on that, the automated control via programmatic triggers will allow you to truly step back and enjoy the elasticity that the cloud can bring to an already elastic virtual environment.
