Vendor View

The Cloud Is Not a Science Project

Here are the key considerations for designing a cloud strategy built for long-term success.

As businesses seek the right way to harness the cloud for the agility and efficiency it can offer, a flexible hybrid cloud approach is emerging as a key IT strategy for enabling IT as a Service (ITaaS). By blending internal and external cloud elements in a single architecture, companies can seamlessly aggregate and deliver self-service access to a variety of cloud services, from virtualized enterprise applications to third-party cloud services, and run each workload on whichever cloud best fits the service. This allows IT to manage governance, control and service levels, while lines of business get the agility and rapid response they need.

In a recent study conducted by NorthBridge Venture Partners, more than 76 percent of respondents said they expect hybrid cloud models to be at the core of their cloud strategies within five years.

To realize this vision of ITaaS across hybrid clouds, companies need a way to seamlessly orchestrate all of their applications in a cloud environment. The challenge is that different workloads have different architectural needs, so companies must proceed carefully as they build their cloud strategy to avoid problems later on. We'll explore some of these challenges, and how companies have addressed them, starting with how to structure early pilot deployments for long-term success. This time: the workloads that matter most in the cloud, and why.

2 Types of Cloud Workloads, 2 Architectural Models = 1 Hybrid Cloud?
Cloud pilot projects typically fall into one of two categories: greenfield initiatives revolving around cloud-native applications, and initiatives to make the delivery of enterprise applications from virtualized servers more cloud-like.

In the first type of initiative, companies deploy applications that are written entirely for the cloud and designed to fit its defining characteristics: pools of good-enough commodity compute and storage designed for low cost and massive scale. Because this hardware is not expected to be resilient, cloud-era apps are built with the intelligence to handle the failure of any given node simply and efficiently. Applications that fit this execution model include Big Data and analytics, web-scale apps, and test and development apps.
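To make that "designed for failure" idea concrete, here is a minimal, purely illustrative Python sketch of the client-side pattern cloud-era applications commonly use: rather than assuming any single node is reliable, the application simply tries another node in the pool when one fails. The node addresses and the read_from_node helper are hypothetical placeholders, not any particular platform's API.

import random

# Hypothetical pool of commodity nodes; any one of them may be down at any moment.
NODES = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]


def read_from_node(node, key):
    # Placeholder for the real data call (HTTP request, driver call, etc.).
    # Here it simply simulates an unreliable commodity node.
    if random.random() < 0.3:
        raise ConnectionError("simulated failure on %s" % node)
    return "value-for-%s-from-%s" % (key, node)


def fetch_with_failover(key, nodes=NODES):
    """Try nodes in random order; a single node failure is routine, not fatal."""
    candidates = list(nodes)
    random.shuffle(candidates)  # spread load across the pool
    for node in candidates:
        try:
            return read_from_node(node, key)
        except ConnectionError:
            continue  # this node is down; move on to the next one
    raise RuntimeError("all nodes unavailable for key %r" % key)


print(fetch_with_failover("user:42"))

Because the application absorbs node failures itself, the underlying infrastructure can be cheap and expendable rather than fully redundant.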

The second type of initiative focuses on the delivery of enterprise applications. When enterprise datacenters are virtualized, their runtime operations usually remain largely traditional, with the same service tickets, approval workflows and long lead times to deliver a server—even though it's a virtual server, not a physical one. Rework remains high, customer satisfaction lags, and users flock to unmanaged relationships with third-party public cloud providers. This so-called “shadow IT” scenario exposes enterprises to significant risk of data loss or leaks and a compromised regulatory compliance posture. In response, organizations are working to enable cloud efficiency and self-service through extensions to server virtualization that provide runtime orchestration and automation of these workloads.
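The contrast with ticket-driven delivery is easiest to see in a small, hypothetical sketch of self-service provisioning: a user picks a standard catalog item and policy is enforced by the orchestration layer rather than a manual approval queue. The catalog entries and the request_vm function below are invented for illustration and do not represent any specific product's interface.

# Hypothetical self-service model: requests against a standard catalog are
# validated by policy automatically instead of waiting in a ticket queue.
CATALOG = {
    "small-linux-vm": {"cpus": 2, "memory_gb": 4, "needs_approval": False},
    "large-db-vm":    {"cpus": 16, "memory_gb": 128, "needs_approval": True},
}


def request_vm(user, item):
    """Validate a catalog request and return a provisioning decision."""
    spec = CATALOG[item]
    if spec["needs_approval"]:
        # Only exceptional requests go to a human approver.
        return {"status": "pending-approval", "user": user, "item": item}
    # In a real environment this would call the virtualization platform's API.
    return {"status": "provisioning", "user": user, "item": item, "spec": spec}


print(request_vm("lob-analyst", "small-linux-vm"))

The point of the sketch is simply that governance moves into the automation itself, so routine requests are fulfilled in minutes rather than weeks.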

Either approach is straightforward enough. The challenge comes when you start with one approach without considering the other. If you build only for cloud-era workloads, how will you extend to your traditional enterprise apps? The typical enterprise datacenter houses mission-critical applications that power day-to-day business operations, so it is designed to avoid failure, relying on largely static racks of high-priced, fully redundant, specialized hardware designed for 99.999 percent uptime. Moving enterprise apps out of this resilient execution architecture into one designed for cloud-era apps simply isn't an option. Conversely, if you design primarily for server virtualization workloads, how do you achieve optimal cost efficiency for your cloud-era workloads? Are you really moving toward a true cloud strategy at all? Supporting separate private clouds for each type of workload would be grossly inefficient, requiring duplication of software, infrastructure, labor and operational overhead.

This is why the types of workloads and applications you plan to deliver in the cloud should have a significant impact on your selection of a cloud orchestration platform. Most cloud orchestration solutions closely follow the commodity cloud architectural model and do little to accommodate traditional enterprise workloads. Solutions designed to create a private cloud for traditional applications support cloud-era workloads only by force-fitting them into traditional enterprise architectures -- again, a wasteful and inefficient approach. Instead, you need a way to orchestrate both types of workloads within a unified architecture -- one designed to support both sets of needs.
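As a rough sketch of what a unified architecture could look like in practice, the hypothetical placement policy below routes each workload to the infrastructure pool built for its execution model: cloud-era workloads to commodity scale-out capacity, traditional workloads to resilient virtualized infrastructure. All workload names and zone labels are illustrative assumptions, not features of any specific orchestration product.

# Hypothetical policy-driven placement in a unified orchestrator: one control
# plane, two execution models, each workload landing where it fits best.
WORKLOAD_PROFILES = {
    "analytics-batch": {"type": "cloud-era"},
    "erp-database":    {"type": "traditional", "uptime_target": 0.99999},
}

ZONES = {
    "cloud-era":   "commodity-scaleout-zone",
    "traditional": "resilient-virtualized-zone",
}


def place_workload(name):
    """Return the zone a workload should run in, based on its declared profile."""
    profile = WORKLOAD_PROFILES[name]
    return ZONES[profile["type"]]


for workload in WORKLOAD_PROFILES:
    print(workload, "->", place_workload(workload))

The design choice this illustrates is that the workload declares what it needs, and a single orchestration layer decides where it runs, rather than maintaining two separate clouds with duplicated tooling and staff.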

Next time, we will delve into the details of this unified approach for supporting both traditional and cloud-era workloads, with examples of customers that are successfully using this foundation and transforming to ITaaS.

About the Author

Krishna Subramanian is vice president of product marketing for the Citrix Cloud Platforms group. She joined Citrix through the acquisition of Kaviza, where she served as chief operating officer, responsible for marketing, sales and alliances. Prior to Kaviza, Subramanian led mergers and acquisitions for Sun Microsystems' cloud computing business and was the CEO and co-founder of Kovair.
