Imagine taking your car to the shop for major repairs and finding that the mechanic uses just one tool for everything, from changing a flat tire to replacing your transmission. He explains that he doesn't need all those expensive, special-purpose tools anymore now that he's found this one All-Powerful Tool.
It sounds ridiculous, but similar images come to mind when I hear the suggestion that network appliance vendors need to abandon their purpose-built platforms and deliver their solutions on general-purpose hardware. The argument is that raw compute power has increased to the point that specialized hardware solutions are no longer necessary. But before we write off purpose-built network solutions, let's revisit the reasons why they exist in the first place and consider how significantly they improve an enterprise's ability to deliver mission-critical applications.
Today's advanced application delivery controllers (ADCs) evolved from software-based load balancing solutions designed a decade or more ago to distribute comparatively unsophisticated applications and light user loads across low-speed physical networks. As hardware, operating systems, applications, and networks grew infinitely more sophisticated and mobile devices became pervasive, software-based load balancers couldn't keep pace and were soon moved to hardware platforms. To handle today's massive amounts of traffic and deliver mission-critical applications and data to users from virtually any location and on any device, enterprises need advanced ADC solutions they can trust to be fast, secure, and available, not to mention scalable and fault tolerant to accommodate rapidly changing business needs.
It's tough for a network vendor to deliver on these requirements and achieve the highest levels of performance with a solution that's based on a general-purpose platform. By definition, a general-purpose machine is designed to support many types of applications and workloads and specialize in nothing. And while it's true that technology advances have made today's general-purpose machines orders of magnitude more powerful than those of a decade or two ago, raw processing power alone doesn't equate to performance. Even if it did, it would be tough to cite that as a reason to do away with purpose-built solutions because they, too, benefit from those same technology advances.
Architecture, not raw processing power, is where performance strides can be made, and that's what gives purpose-built network solutions a distinct advantage. Because vendors of such solutions can not only choose the hardware components (such as CPUs, RAM, and networking devices) but also leverage customized hardware (such as ASICs and FPGAs) to add value, offload processing, and relieve architectural bottlenecks, they are able to provide fully integrated, high-performance, predictable, and highly reliable solutions.
Hardware accelerators integrated into such solutions are specifically designed to greatly speed certain computational processes, such as cryptographic operations. When performed on a general-purpose server, these operations can consume 30 percent or more of the server's CPU and memory cycles--and that's just one example. Acting as a proxy between client and server, a purpose-built ADC can aggregate millions of client requests into hundreds of server connections and cache them for reuse; it can intelligently manage and prioritize SSL sessions; it can apply intelligent compression to data--a task so compute-intensive when performed on a server that it can actually degrade rather than boost performance. It is these specialized hardware components, carefully integrated into the architecture, that enable purpose-built network solutions to scale to carrier-class levels of performance.
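To make the aggregation idea concrete, here's a minimal sketch (in Python, with illustrative names; a real proxy would manage actual sockets) of how a proxy can multiplex many client requests onto a small pool of reusable server connections:

```python
from collections import deque

class BackendPool:
    """Multiplexes many client requests onto a few reusable server connections."""
    def __init__(self, max_idle=8):
        self.max_idle = max_idle
        self.idle = deque()        # warm connections ready for reuse
        self.total_opened = 0      # how many real connections were ever created

    def acquire(self):
        # Reuse an idle connection when one exists; open a new one only if not.
        if self.idle:
            return self.idle.popleft()
        self.total_opened += 1
        return f"conn-{self.total_opened}"   # stand-in for a real socket

    def release(self, conn):
        if len(self.idle) < self.max_idle:
            self.idle.append(conn)           # keep it warm for the next request

pool = BackendPool()
for _ in range(1_000):                       # 1,000 bursts of 5 concurrent requests
    in_flight = [pool.acquire() for _ in range(5)]
    for conn in in_flight:
        pool.release(conn)

print(pool.total_opened)                     # 5: five connections served 5,000 requests
```

The same reuse principle, implemented in hardware-assisted form, is what lets an ADC collapse millions of client-side requests into a few hundred server-side connections.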
In the end, if it were feasible for network vendors to optimize these compute-intensive tasks on general-purpose platforms, don't you think they would? After all, it would certainly make these solutions easier to architect and more affordable to manufacture. When performance counts, however--and increasingly that seems to be the critical need--enterprises gain nothing by abandoning purpose-built network solutions. So when you hear such suggestions, remember why these solutions were built in the first place, not on a whim to benefit vendors, but rather to help enterprises improve efficiency, control their costs, and meet their most difficult and challenging needs: managing massive amounts of traffic in the delivery of mission-critical applications and data.
Posted by Karl Triebes on 01/18/2011 at 12:47 PM | 3 comments
Playing off the title of an earlier blog entry, "Out of Resources in the Twilight Zone," this entry explores the performance implications of moving from 1024- to 2048-bit key lengths--a recommendation by the National Institute of Standards and Technology (NIST) to increase the security of data encryption. The issue involves Secure Sockets Layer (SSL), a security protocol that uses RSA's public and private key encryption system to establish an encrypted link between a web server and browser. The protocol uses SSL certificates that currently employ 1024-bit RSA key lengths, but NIST has set January 1, 2011 as a target date for organizations to begin using 2048-bit key lengths.
The "recommendation" is more like an imperative, because the move is well underway already. Many certification authorities (CAs)--Entrust, GeoTrust, and VeriSign, to name just a few--are now only issuing 2048-bit certificates, and some major web browser and software vendors have thrown in their support, as well. It seems inevitable that everyone will need to make the shift, so what does that mean for your organization?
If your application and web servers are still doing all your SSL processing using 1024-bit key lengths, that's consuming about 30 percent of your CPU and memory resources just to establish the connection (handshake) and then decrypt and encrypt the transferred data. Problem is, when those same servers start trying to process 2048-bit keys, your users are going to see a reduction in performance (measured in transactions per second) of about 5 times, regardless of the vendor platform. Stated another way, it's going to take 5 times more CPU cycles to process those 2048-bit SSL transactions.
Why is the decrease in performance (increase in CPU cycles) superlinear rather than linear? The longer the key, the harder it is to decode, and that requires disproportionately more processing power. In fact, if you were to leap directly from 1024- to 4096-bit key lengths, you would probably see a whopping 30x reduction in server performance. With performance reduced 5-fold (let alone 30-fold), it's safe to assume you're not only going to have a lot of unhappy users, you're also going to need additional server capacity to make up for that added load.
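The superlinear cost is easy to demonstrate yourself. This sketch times the modular exponentiation at the heart of an RSA private-key operation (m^d mod n) at 1024 and 2048 bits. The values are random stand-ins rather than real RSA keys, so treat the ratio as illustrative, not as a handshake benchmark:

```python
import secrets
import timeit

def private_op_cost(bits, trials=20):
    # Random full-length odd modulus and exponent; stand-ins, not real RSA keys.
    n = secrets.randbits(bits) | (1 << (bits - 1)) | 1
    d = secrets.randbits(bits) | 1
    m = secrets.randbits(bits - 1)
    return timeit.timeit(lambda: pow(m, d, n), number=trials) / trials

t1024 = private_op_cost(1024)
t2048 = private_op_cost(2048)
print(f"2048-bit private-key op is ~{t2048 / t1024:.1f}x slower than 1024-bit")
```

On typical hardware the ratio lands well above 2x because doubling the key length both lengthens the exponent and makes every multiplication wider.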
If you're not already offloading SSL functions from your virtualized application or web servers, now is a good time to give it some serious consideration. By offloading SSL processing to an advanced application delivery controller (ADC) device, you can reclaim those CPU cycles. And, if the ADC includes specialized hardware designed for RSA acceleration, it will handle that SSL processing far more efficiently than your application and web servers ever could.
If you're wondering whether a virtual network appliance would fit the bill, consider this: early testing of 2048-bit SSL processing on a virtual network appliance running on 64-bit commodity hardware revealed that it could only handle hundreds of transactions per second. Compare that to tens of thousands of transactions per second on a physical ADC, and it doesn't take long to figure out that a virtual appliance isn't a viable alternative. (Virtual ADCs are an excellent alternative to physical devices for some use cases, but this isn't one of them.)
For some organizations, the challenge is that they're still running legacy applications, many of which don't (and possibly never will) support 2048-bit keys directly. If you're in that boat and yet still must comply with certain regulatory requirements (for example, FIPS 140-2), you have no choice but to find a workaround to support 2048-bit key lengths. Again, here's where an advanced ADC device placed at the edge of your network can help. Install your 2048-bit certificate on your ADC device to process incoming client requests, and then use your existing 1024-bit keys to re-encrypt the data and forward it on to your application and web servers. With this solution, you can continue providing secure SSL connections, keep pace with new NIST guidelines, and avoid purchasing additional web and application servers to make up for the huge performance hit they would otherwise take.
Posted by Karl Triebes on 12/22/2010 at 12:47 PM | 0 comments
Although server virtualization is now a common deployment model in data centers, desktop virtualization hasn't caught fire as quickly. But with major vendors like VMware and Microsoft providing virtual desktop infrastructure (VDI) solutions, the trend is quickly moving in that direction. VDI solutions replace traditional desktop and laptop PCs with thin clients, giving users access to applications and services from centrally managed servers in the data center.
For IT organizations, VDI solutions can be easier and more cost-effective to manage and secure than traditional PCs, so why the sluggish adoption rate? One reason is that thin-client solutions of the past--and there have been many--haven't delivered the features and performance that users expect from traditional PCs. Another is resistance to change. PC users don't want to give up the perceived control they have over their applications and data, and IT is understandably reluctant to implement any solution that will degrade performance, frustrate users, and ultimately reduce productivity.
But today's VDI solutions have come a long way from their predecessors, combining the best of thin-client technology with server-side virtualization. For each user, IT creates a customized virtual desktop that resides on a virtual machine (VM) in the data center. Instead of connecting via a dumb terminal to an application on a shared server running a specific operating system (thin-client model), users connect to their own customized, virtual desktops from a range of devices and locations.
The ability to customize virtual desktop images and store them on a VM is a significant distinction and advantage of VDI over previous thin-client solutions. Yet, for all the benefits they can provide--centralized management and control, easier desktop management and recovery, enhanced ability to meet security and regulatory requirements, improved user productivity, reduced OpEx--VDI solutions are, like their predecessors, inherently dependent on the network. That makes them potentially subject to LAN and WAN issues such as network latency, lost connections, and poor performance. These factors alone can be significant enough to kill a corporate VDI initiative because they have a direct impact on user experience and productivity.
For VDI solutions to succeed, then, IT's challenge is to ensure that these solutions perform well and are scalable, reliable, and secure. Not coincidentally, these challenges are common to all network-based applications, and many IT organizations already realize that advanced application delivery controllers (ADCs), which introduce a layer of control into the network, can address these challenges. That's why many network vendors work closely with application providers to develop joint solutions that optimize application delivery over the network.
By intelligently managing traffic, offloading compute-intensive processes from servers, and providing session persistence, ADCs can provide the scalability that's needed for a VDI. Scalable solutions are less susceptible to availability issues, and availability is key to maintaining acceptable performance. Monitoring VDI resources so user requests can be intelligently distributed across all available resources (especially among multiple data centers) improves cross-site resiliency, which in turn improves availability and performance.
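The distribution logic is straightforward to sketch. Here's a minimal, health-aware least-connections picker in Python; the resource names are illustrative, and in a real ADC the health flags and session counts would be fed by active monitors:

```python
class Resource:
    def __init__(self, name):
        self.name = name
        self.healthy = True      # updated by a health monitor in a real deployment
        self.active = 0          # current session count

def pick(resources):
    """Send the next request to the healthy resource with the fewest sessions."""
    candidates = [r for r in resources if r.healthy]
    if not candidates:
        raise RuntimeError("no healthy VDI resources available")
    return min(candidates, key=lambda r: r.active)

# Two VDI farms in different data centers; the monitor has marked dc1 down.
pool = [Resource("dc1-vdi"), Resource("dc2-vdi")]
pool[0].active = 40
pool[1].active = 12
pool[0].healthy = False

target = pick(pool)
print(target.name)               # dc2-vdi
```

Because unhealthy resources are filtered out before selection, a site failure redirects new sessions to the surviving data center automatically, which is the cross-site resiliency described above.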
VDI solutions offer inherent security benefits because all applications and user data are stored in the data center under IT control. An advanced ADN solution that uses SSL to protect all data exchanged between client and server can enhance a VDI solution, adding stronger security. With the use of new 2048-bit key lengths, however, SSL can consume a lot of server resources, so an ADC solution that intelligently directs traffic and offloads processes from the servers is even more critical to ensure high performance.
While the pros and cons of VDI will likely continue to be debated, organizations that want to move in this direction need to understand that the delivery mechanism--the network--is key to making these solutions successful. Many of the challenges presented by VDI solutions can be overcome by placing an intelligent mediator in the form of an advanced ADC between the user and server.
Posted by Karl Triebes on 11/16/2010 at 12:47 PM | 1 comment
Talk to the CIOs in virtually any organization today and they'll tell you they're drowning in data--creating and consuming more data, and digitizing more existing content, than ever before. They're also retaining data longer because of business and regulatory requirements. Many are still dealing with flat (or near-flat) budgets, so this explosive growth in data is a daunting challenge for them. They need a strategy for creating an agile storage infrastructure that enables them to proactively manage growth and change without increasing capital and operational costs.
The Constraints of Traditional Storage Environments
Traditional storage environments have several disadvantages, the most obtrusive one being inflexibility. Users and applications are discretely mapped to physical file storage such as file servers and NAS devices, which creates an intricate web of connections. That web gets more and more complex as user demand increases, more application instances are launched, and more storage devices are provisioned. When that happens, IT can't manage data effectively and perform tasks such as moving a directory, migrating data, decommissioning a file server, or provisioning new capacity without disrupting users. For every change, they have to manually reconfigure the environment, so backup windows increase, downtime keeps growing, and on and on it goes.
Advantages of Storage Virtualization
Storage virtualization, on the other hand, takes away all that complexity by creating a layer of abstraction that breaks the bonds between users or applications and physical file storage. With a virtualized storage layer, storage resources can be pooled so that many storage devices (including heterogeneous ones) appear as one, and data can move freely among devices. That means IT can do away with discrete mappings of users and applications to physical file systems and instead, map clients to the virtualization layer itself, which represents all of the physical file systems in the pool. With storage virtualization, users and applications are shielded from all the data management tasks that IT takes care of at the file system layer. Now users and applications aren't disrupted, and downtime starts to disappear.
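The abstraction layer boils down to a level of indirection. This sketch (illustrative names throughout) shows the core idea: clients address a single virtual namespace, and a mapping layer resolves each path to whichever physical device currently holds the data, so data can move without the client-visible path ever changing:

```python
class VirtualNamespace:
    """Maps client-visible virtual paths to physical storage devices."""
    def __init__(self):
        self._placement = {}                 # virtual path -> physical device

    def place(self, vpath, device):
        self._placement[vpath] = device

    def resolve(self, vpath):
        # Every client access goes through this lookup, not to a fixed device.
        return self._placement[vpath]

    def migrate(self, vpath, new_device):
        # Data moves between devices; users and applications never notice.
        self._placement[vpath] = new_device

ns = VirtualNamespace()
ns.place("/projects/report.doc", "nas-01")
ns.migrate("/projects/report.doc", "nas-02")   # e.g., decommissioning nas-01
print(ns.resolve("/projects/report.doc"))      # nas-02
```

Every management task that used to break discrete client-to-device mappings (migration, decommissioning, provisioning) becomes an update to this table rather than a disruptive reconfiguration.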
However, a storage virtualization solution that only provides a virtualized storage layer isn't very effective. To truly help IT manage rising storage costs and get control of its data, the solution must:
- Work across multiple operating systems, storage platforms, and file systems, because nearly every IT organization has a variety.
- Provide an intelligent virtualized storage layer, meaning it can monitor client capacity, resource capacity, and network conditions and respond to changes in real time.
- Facilitate and automate tasks like storage tiering, which identifies the business value of data and matches it to the appropriate class of storage.
The value of storage tiering can't be overemphasized. With it, IT can retain high value data on high performance, highly available storage devices and automatically move lower value, less frequently accessed data to lower cost storage devices. Without it, identifying and moving data is a painful process--and one reason that many organizations just avoid it. Another benefit of storage tiering: it gives IT visibility into the composition of its data, for instance, what type of data is consuming the most storage capacity, how frequently data is being accessed, and who's consuming the most storage.
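A tiering pass can be sketched in a few lines. This is a deliberately simplified example, assuming an age-based policy with hypothetical directory names and a 90-day cutoff; a real solution would classify data by business value, not just access time:

```python
import os
import shutil
import time

def tier_down(hot_dir, cold_dir, max_idle_days=90):
    """Move files untouched for max_idle_days from hot to cold storage."""
    cutoff = time.time() - max_idle_days * 86400
    moved = []
    for name in os.listdir(hot_dir):
        src = os.path.join(hot_dir, name)
        # Demote only regular files whose last access predates the cutoff.
        if os.path.isfile(src) and os.stat(src).st_atime < cutoff:
            shutil.move(src, os.path.join(cold_dir, name))
            moved.append(name)
    return moved   # the report also gives IT visibility into what was demoted
```

Run against a virtualized namespace like the one storage virtualization provides, the move is invisible to users; run without one, every demotion would break a client mapping.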
Storage Virtualization and the Cloud
As organizations start taking a closer look at deploying applications in the cloud, they're realizing that the cloud might also provide a viable storage alternative for them. However, because of the high overhead involved in identifying which data should be moved to the cloud, and the disruption that moving data might cause, many organizations are still not completely sold on the idea. Automated storage tiering will play a pivotal role in getting enterprises to move in that direction. The ultimate goal is for the cloud to become just one more class of storage to which IT can easily move data. To help enterprises achieve this goal, storage virtualization vendors must provide interoperability with third party, value added solutions that support powerful data management policies, giving enterprises more control of their data.
Posted by Karl Triebes on 11/02/2010 at 12:47 PM | 2 comments
Server virtualization is no longer a trend but rather a common deployment model adopted by the majority of IT organizations. Even so, virtualization is not without its deployment and management challenges. Many organizations still struggle to balance server workloads and maintain reliable application performance.
Because network appliances can affect both the performance of applications and traffic management, it's no surprise that network vendors have felt the pressure to develop virtualized versions of their physical appliances, from simple load balancers to advanced application delivery controllers. Virtual ADCs (vADCs) have typically been thought of as replacements for their physical counterparts (pADCs), and in some cases, replacement might be the appropriate solution. In other cases, however, replacement is clearly not the right strategy. To create the dynamic data centers IT organizations want, and to support the flexibility and scalability they need, architectural considerations must be taken into account. In the end, a hybrid architecture often turns out to be the best solution.
Who Stands to Benefit Most?
Let's start by looking at who can benefit from a vADC. In the enterprise data center, deploying a vADC in testing and QA environments can be of great benefit to network administrators and architects. For little cost, a vADC enables organizations to test and optimize new solutions before deploying them in production.
Ideally, it would be advantageous to make vADCs available to all teams involved in the application development lifecycle. Due to cost constraints, ADC technology has been largely unavailable to developers and architects--even when an application will ultimately be used in production with an ADC. With access to a vADC throughout the entire development process, architects and developers can include advanced ADC features such as acceleration, security, and optimization in the application delivery platform. Doing so can help foster collaboration among teams that, in the past, have often not worked together. And it can speed time to market and produce a better product that takes advantage of all the advanced features of application delivery controllers.
For similar reasons, independent software vendors can also benefit by deploying a vADC in their development environments. In particular, a vADC gives ISVs the opportunity to create new and innovative tools for managing and orchestrating both physical and virtual applications and network components--tools that are now in great demand, especially with the move toward cloud computing. Until effective management and orchestration tools exist, it's understandable that many organizations are reluctant to fully embrace virtualization and cloud computing solutions.
Many cloud providers have come to rely on physical ADCs to provide the flexibility and scalability that they require in these highly dynamic, virtualized environments. But, for cloud providers with many thousands of customers, it's critical that they be able to isolate components and application delivery policies for each customer, and that's where pADCs aren't so efficient. In such cases, a vADC can be an excellent fit because the cloud provider, rather than sharing its pADC among many customers, can instead enable customers to deploy their own vADCs, giving them control over their own application delivery policies.
To create and maintain truly dynamic data centers, enterprise IT (and cloud providers as well) must not only look at who can benefit from vADCs but also at the potential effect of vADCs on characteristics such as scalability and mobility. vADCs are often thought to be the best choice for scalability because they can be deployed more quickly than pADCs and they are far less costly. But to assume that quick deployment makes vADCs less disruptive in the infrastructure is a mistake. Each is disruptive in its own way: pADCs for the obvious reasons of having to procure, install, configure, and integrate a new device into the existing infrastructure. vADCs, on the other hand, can be disruptive because of their potential to degrade performance and application availability. One reason is that vADCs run on commodity hardware, so they can't take advantage of the specialized, optimized hardware typical of pADCs. For this reason, a vADC will never be able to achieve the same performance levels as a pADC.
Mobility is a particularly important consideration for organizations that plan to expand their operations into the cloud. Specifically, they need to consider the ease with which applications can be moved from data center to data center, and from the data center to the cloud and back again.
Organizations that rely on pADCs in their data centers will want to choose a cloud provider that does the same. But this, again, is where a vADC can be extremely applicable, if the application can be bundled with the vADC. In that case, it's far easier and more reassuring to move an application into the cloud knowing that all the configurations and policies associated with the application will move with it.
Virtual network appliances have the potential to offer significant cost savings and new kinds of flexibility for IT organizations, but they also present more architectural challenges than do virtualized servers. If you're still uncertain about how best to deploy a hybrid application delivery network, deploy pADCs at key aggregation points in the infrastructure to get the benefit of server and application offloading functions (and the benefits of specialized, optimized hardware) that pADCs provide. Use pADCs to support application workloads that require high throughput, and for complex deployments that require advanced ADC functions like application security, acceleration, and access control.
vADCs are preferred for lab and QA environments, for example, when workloads require compute intensive processing. In these cases, deploy vADCs in a tier behind the pADCs that handle the server offloading functions. In the application development lifecycle, a vADC can help to enable quicker development, yield a better integrated product, reduce time to market, and encourage teamwork among network, security, and application development groups.
Posted by Karl Triebes on 10/05/2010 at 12:47 PM | 1 comment
There is a wide variety of use cases for cloud computing, many of which depend heavily on the particular cloud model in question. SaaS (Software as a Service), for example, is used differently than IaaS (Infrastructure as a Service), particularly when leveraged in the context of a hybrid cloud data center model.
One of the first use cases for IaaS was centered on the concept of extending the data center on-demand. This was loosely coined as "cloud-bursting" and continues today to be the primary use case upon which many cloud-based solutions are built. This is primarily due to the fact that cloud-bursting -- the ability to extend capacity of existing applications by leveraging cloud-based compute resources dynamically -- requires addressing several technological challenges to become reality, many of which are applicable across the cloud computing spectrum.
The Challenge of Real-Time Migration
When offered an overview of a cloud-bursting-capable architecture, the first question that was raised--and is still often raised--is, "How do you get the application into the cloud in the first place?" Answering this question became a primary concern for solution providers because unless an application could be migrated on demand to a cloud computing provider, the value proposition for cloud-bursting was suspect. If the application had to exist prior to being "bursted," then the organization was paying for resources in the cloud all the time, which negated the proposed cost savings associated with the solution. Thus it became an imperative to address the challenge of migrating an application in real time as a means to enable cloud-bursting without negatively impacting the value proposition of cloud computing.
This process turned out to be a lot harder than anticipated. The assumptions and requirements continue to be restrictive:
- Both the data center and the cloud computing environment must utilize the same virtualization platform, or have the means by which an application can be packaged in a cloud computing environment specific container in real time.
- The networking layer in both the cloud provider's environment and the data center must be bridged so as to allow communication with existing infrastructure, ensuring availability and continued operation of the delivery infrastructure.
- The application package must be transferred across often high-latency WAN links to the cloud computing provider as quickly as possible because limitations integral to migration capabilities of virtualization platforms will cause such a transfer to fail if not completed in a fairly narrow window.
- The existence of the new application resource in the cloud computing environment must be made known and subsequently accessible to the infrastructure responsible for directing end users to that instance of the application.
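The requirements above imply a control loop: watch local utilization, launch (or retire) cloud capacity, and keep the traffic-direction layer informed. Here's a hedged sketch of that loop; `provider` and its methods are hypothetical stand-ins for a real cloud API, and the thresholds are purely illustrative:

```python
BURST_UP = 0.80      # burst when local utilization exceeds 80%
BURST_DOWN = 0.40    # retire cloud capacity when it falls below 40%

def reconcile(local_utilization, cloud_instances, provider):
    """Return the updated list of cloud instances after one control pass."""
    if local_utilization > BURST_UP:
        # Package, transfer, and launch a new application instance, then
        # register it with the infrastructure that directs end users to it.
        instance = provider.launch("app-image")
        provider.register_with_lb(instance)
        cloud_instances = cloud_instances + [instance]
    elif local_utilization < BURST_DOWN and cloud_instances:
        # Keep cloud-deployed instances only as long as they are needed.
        idle = cloud_instances[-1]
        provider.deregister_from_lb(idle)
        provider.terminate(idle)
        cloud_instances = cloud_instances[:-1]
    return cloud_instances
```

The hard part, as the rest of this entry argues, is not this loop but the `launch` step hiding inside it: moving the application package across the WAN fast enough for the migration to succeed.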
While addressing these issues it has become clear that the transfer of the application from one location to another posed the biggest obstacle to enabling a truly dynamic cloud-bursting-capable infrastructure. That's because a number of underlying technological issues--distance, the size of application packaging in virtual containers, and architectural limitations of existing solutions--made it difficult to use well-understood, traditional methods to counter those issues.
Using Traditional Solutions in Untraditional Ways
The network--specifically, the external WAN--is at the heart of the challenge when enabling elastic applications that are capable of cloud-bursting on-demand. The very organizations that would benefit most from the cost savings associated with leveraging on-demand cloud computing resources are the same organizations whose Internet-facing network connectivity is likely to be less than optimal in terms of quality and proximity to the Internet backbone. These same network links are utilized by other applications and users as well, which makes it difficult for the organization to ensure the quality and speed necessary to successfully transfer a virtualized application from its data center to an off-premise cloud environment.
The obvious solution is to leverage accepted architectural solutions: WAN optimization technologies. The hurdle with a traditional WAN optimization solution, however, is in the network configuration and deployment, which requires specific placement in both the data center and the off-premise location. This is required because WAN optimization solutions use data deduplication to address the challenge of transferring big data across small pipes. This process requires a symmetric deployment model and further requires specific placement of the solution in the network. While the former has been addressed by the virtualization of WAN optimization solutions, the latter has not. Cloud computing providers allow little to no control over topological decisions regarding the deployment of solutions in their environment, making a traditional WAN optimization solution unfeasible.
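Data deduplication itself is simple to illustrate. In this sketch (fixed-size chunks for brevity; real products typically use variable, content-defined chunking), only chunks the far side hasn't already seen cross the WAN, while repeats are replaced by a short digest:

```python
import hashlib

def dedup_transfer(data, remote_index, chunk_size=4096):
    """Split data into chunks; 'send' only chunks the far side hasn't seen."""
    sent = 0
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in remote_index:
            remote_index[digest] = chunk   # transfer the new chunk
            sent += len(chunk)
        # otherwise only the short digest crosses the WAN
    return sent

index = {}                                        # what the far side already holds
image = b"A" * 8192 + b"B" * 4096 + b"A" * 4096   # a tiny stand-in "VM image"
print(dedup_transfer(image, index))               # 8192: only 2 unique chunks sent
```

Because both ends must share the chunk index, the technique inherently demands a symmetric deployment, which is exactly why topology control at the provider side matters.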
What is feasible is the use of data deduplication and other WAN-related optimization technologies to improve the conditions of the network connection and simultaneously ensure the successful migration of a virtualized application. Managing data deduplication at a strategic point of control in the architecture becomes necessary to enable the functionality of a WAN optimization solution.
An Architectural Strategy Can Achieve Where Individual Products Fail
By applying WAN delivery services within the context of holistic application delivery, a solution is capable of addressing not only the successful on-demand transfer of virtualized applications from the data center to the cloud environment, but also the networking and application routing challenges associated with an on-demand cloud-bursting architecture.
On-demand cloud-bursting architectures have been demonstrated as feasible and capable of maintaining cost-reducing benefits with a strategic architecture that optimizes not only layers of the network but the entire process. This spans from triggering the event that initiates the process to monitoring the entire application set across environments, ensuring that extraneous cloud-deployed applications remain in use only as long as they need be, and no longer.
By architecting a solution rather than deploying individual products to address each challenge, on-demand cloud-bursting has become not only possible but feasible. The challenges associated with leveraging cloud computing in ways that preserve its value proposition will continue to be best answered with a holistic architectural strategy rather than individual tactical product deployments. Such tactical solutions are incapable of providing the visibility, control, and flexibility needed to enable the dynamic infrastructure required to fully take advantage of a dynamic compute resource model such as the one underpinning an on-demand cloud-bursting solution.
Posted by Karl Triebes on 09/16/2010 at 12:47 PM | 0 comments
Every time we hop into our cars and head onto city streets or highways, we immediately become part of a complex system that, for the most part, we take for granted. Most days, we get to our destinations safely (and in a reasonable amount of time), thanks to stop signs, traffic signals, yield signs, roundabouts, merge lanes, on ramps and off ramps. These mechanisms are integrated into roads and highways at strategic locations to help control and guide the flow of traffic and keep us safe. The key words here are strategic and control. Imagine what traffic would be like if we had none of these controls on roads and highways or, worse yet, if every intersection were controlled by unsynchronized traffic signals. Chaos and gridlock would ensue. The challenge in controlling the flow of traffic and minimizing traffic jams is to implement the right mechanisms at the right locations, typically where traffic converges -- in other words, at strategic points of control.
Successfully directing and controlling the flow of traffic in computer networks is not much different; it requires similar strategic points of control. Without them, IT is forced to respond to new business demands on a case-by-case basis with solutions that aren't integrated into the existing architecture. That's sort of like building a new road every time you want to travel to a new destination instead of planning a logical route using existing roads and highways.
In the context of the data center, strategic points of control are the locations at which decisions are made about how best to deliver applications and data. These often occur at aggregation points -- the points through which all traffic flows. One of the most important points of aggregation is at the network perimeter. In the same way that a drawbridge across a moat provides the only access to a castle, a network router and firewall provide the only outside access to the network. Because the router and firewall sit on the network's perimeter, it's a logical place to implement and enforce access policies. (Even kings had "access policies" of sorts -- although failing to meet them might land you in the dungeon.)
Once inside the firewall, there are other strategic control points within the data center architecture. Virtualized storage, which controls access to the resources it manages and gives IT visibility into all storage resources, is a point at which IT can apply security policies. An application delivery solution is also a strategic point of control. Because all application requests and responses are funneled through it, it is a logical place where application security, acceleration and optimization can be applied.
Another strategic control point that's becoming more common in the data center is the virtual network -- it provides more efficient connectivity between virtual machines than a traditional network does. In a traditional network, communication between applications deployed on a single server with virtual machines and virtual switches might require exiting and re-entering the server's network card. In a virtual network, however, that physical path along which data travels (and the latency associated with it) no longer exists, so communication between virtual machines is more efficient. The network layer is a prime location to enforce access policies, especially in public cloud environments where multi-tenancy is likely.
These three strategic control points -- virtualized storage, application delivery solutions and the virtualized network -- share one thing in common: They are all points at which virtualization, and by extension, aggregation, occurs. Aggregation is an example of the traditional "many-to-one" type of virtualization (typically associated with load balancers and other proxy-based solutions) that makes multiple resources appear to be a single resource. A many-to-one type of virtualization solution also provides a strategic point of control because all traffic must flow through that solution. That makes it a perfect point at which to apply access and security policies in a single, centrally managed location.
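The many-to-one pattern described above can be sketched in a few lines of Python. This is a minimal illustration, not a production load balancer: the backend addresses, the round-robin policy, and the `authorized` flag are all invented to show why a single funnel point is a natural place to enforce policy.

```python
import itertools

class VirtualEndpoint:
    """Many-to-one virtualization: one address fronting many backends.

    Because all traffic flows through this single point, it is a natural
    place to apply access and security policies centrally.
    """

    def __init__(self, backends):
        self._cycle = itertools.cycle(list(backends))

    def route(self, request):
        # Policy enforcement happens here, before any backend is chosen.
        if not request.get("authorized", False):
            return {"status": 403, "backend": None}
        backend = next(self._cycle)  # simple round-robin distribution
        return {"status": 200, "backend": backend}

# Three real servers appear to clients as a single virtual endpoint.
vip = VirtualEndpoint(["10.0.0.11", "10.0.0.12", "10.0.0.13"])
print(vip.route({"authorized": True})["backend"])
```

Clients see one address; the policy check runs once, in one centrally managed location, no matter how many backends sit behind it.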
Although the many-to-one type of virtualization is not new -- it has been around since the mid 1990s -- it has evolved over the years to give IT more precise control over the data that traverses strategic points in the network. Again, "strategic" and "control" are the key terms. Any point in the network is considered strategic if it offers the opportunity to consistently and efficiently apply policies (i.e., control) to data at a single point in the data path.
Today, many IT organizations are still struggling with the static one-to-one connections between technologies that they have had for years. This forces IT to respond to new business demands with manual, one-off technology fixes (remember the idea of building a new road every time you want to go someplace new?). In contrast, having strategic points of control throughout the infrastructure gives IT the ability to add, move or redefine services on demand. In turn, IT can create, modify and scale the infrastructure in line with changing business demands -- and without compromising the organization's long-term objectives.
Posted by Karl Triebes on 08/24/2010 at 12:47 PM
It's no secret that security is on the minds of most IT professionals who are considering cloud computing. In fact, some surveys show that as many as 80 percent of businesses believe that the security, availability and performance risks of cloud computing outweigh the potential benefits, such as flexibility, scalability, and lower cost--so much so that they're holding back from fully embracing cloud computing, at least, for now.
It would be a mistake, however, to assume the reason for their concern is that cloud providers are taking a cavalier attitude toward security. That assumption is an oversimplification and, more importantly, obscures the legitimate security concerns of IT organizations.
The reasons for concern have more to do with organizations losing their ability to quantify risk in the cloud. Without that ability, it's tough for them to justify taking the risk. Assuming they could quantify risk, how much control would they have in the cloud for mitigating that risk through the use of processes and technology? Today, little, if any. Organizations' hesitation to jump headlong into the cloud has more to do with these factors than a lack of confidence in cloud providers' security implementations.
In the data center, organizations typically determine their threshold for risk by considering the impact of the risk and the probability of its occurrence. As an example, take the potential impact of an outage on application availability. When an outage occurs, the monetary impact--measured by lost revenue and customers--is quantifiable. Similarly, organizations can reasonably assess the probability of a data center outage--and its impact on applications--but what about in the cloud? Today, a cloud provider's track record for uptime is more readily available than it was, say, even a year ago, making it easier to determine the chances of an outage, but uncertainty still exists.
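The impact-times-probability calculus described above can be made concrete with a small expected-loss calculation. The figures below are invented for illustration only:

```python
def expected_annual_loss(outage_probability, revenue_loss_per_outage,
                         expected_outages_per_year=1):
    """Quantify risk as probability x impact -- the data-center calculus
    that the cloud's unknowns make hard to perform."""
    return outage_probability * revenue_loss_per_outage * expected_outages_per_year

# In the data center, both inputs are measurable; in the cloud, the
# probability term is the unknown that stalls the justification.
datacenter = expected_annual_loss(0.02, 500_000)  # 2% chance, $500k impact
print(f"Expected annual loss: ${datacenter:,.0f}")  # → $10,000
```

The formula is trivial; the point is that in the cloud, the probability input is the missing variable, so the whole justification stalls.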
In addition to these, there are other reasons that keep "security concerns" at or near the top of the list of barriers to cloud adoption. A significant one is that cloud computing environments don't give organizations the ability to deploy a holistic security strategy. Organizations that are happy with their security practices in the data center have reason to be concerned about their ability to implement those same practices in the cloud. They won't have control over web application firewalls or application-specific firewall rules; they won't have data leak prevention solutions or intrusion detection/prevention systems in the cloud. They won't have any of that for the simple reason that today, the cloud is designed to deliver compute on demand. In other words, it's meant to run applications that can take advantage of that compute power. Other than load balancing, the cloud offers very few "infrastructure" services. That severely limits an organization's ability to apply internal security policies to the applications it moves to the cloud.
Many unknown variables still exist in cloud computing environments, introducing security risks that haven't yet been quantified. Two of those variables are virtualization and cloud computing management frameworks. While virtualization gets the most attention of the two, the importance of management frameworks is growing. Exploits around virtualization have been theorized, but to date, few, if any, hypervisor "breaches" in a public cloud environment have occurred. Still, even the possibility of a breach and its potentially damaging effects may pose too much risk for some organizations. And with few known vulnerabilities in hypervisor technology, it's almost a foregone conclusion that vulnerabilities will be discovered and ultimately exploited. So far, cloud management APIs have not been compromised either, but the possibility of an attacker gaining complete control over an organization's cloud computing deployment is frightening.
The fact that CIOs will likely continue for some time to cite security risks as a reason not to adopt cloud deployments doesn't mean they believe cloud providers are taking security concerns lightly. It just means they still have legitimate concerns about the security risks of new technologies in the cloud--concerns that haven't yet been answered to their satisfaction. They are highly sensitive to these risks, not just for their own sake but for the sake of their customers. Until they are alleviated, those risks will likely still outweigh the potential benefits of cloud computing.
Posted by Karl Triebes on 07/27/2010 at 12:47 PM
When organizations are choosing between hardware and virtual servers, some argue that it doesn't make sense to purchase a hardware solution when all you really need is the software -- in which case, the argument goes, you should just acquire and deploy a virtual network appliance.
One point this argument fails to address is that advances in compute power benefit purpose-built hardware just as much as general-purpose hardware -- including the specialized hardware cards that accelerate specific functions like compression and RSA (SSL) operations. But for the purposes of this argument, we'll assume that performance, in terms of RSA operations per second, is about equal between the two options.
That still leaves two situations in which a virtualized solution is a poor choice.
Compliance with FIPS 140
For many industries -- federal government, banking, and financial services among the most common -- SSL is a requirement, even internal to the organization. These industries also tend to fall under the requirement that the solution providing SSL be FIPS 140-2 compliant at Security Level 2 or higher. If you aren't familiar with FIPS 140-2 or the different security "levels" it specifies, let me sum up: Level 2 adds physical security requirements that are not part of Level 1, which demands only that hardware components be "production grade" -- a bar we can assume the general-purpose hardware deployed by cloud providers clears.
FIPS 140-2 Level 2 requires specific physical security mechanisms to ensure the security of the cryptographic keys used in all SSL (RSA) operations. The private and public keys used in SSL, and their related certificates, are essentially the "keys to the kingdom." The loss of such keys is considered a disaster because they can be used to (a) decrypt sensitive conversations/transactions in flight and (b) masquerade as the provider by using the keys and certificates to build more authentic-looking phishing sites. More recently, keys and certificates -- the foundation of PKI (Public Key Infrastructure) -- have become an integral component of DNSSEC (DNS Security), a means of preventing the DNS cache poisoning and hijacking that have bitten several well-known organizations in the past two years.
Obviously, you have no way of ensuring or even knowing whether the general-purpose compute on which you deploy a virtual network appliance has the physical security mechanisms necessary for FIPS 140-2 Level 2 compliance. If Level 2 or higher compliance is a requirement for your organization or application, then going virtual isn't really an option, because such solutions cannot meet the physical requirements.
Resource Utilization
A second consideration, assuming performance and sustainable SSL (RSA) operations are equivalent, is the resource utilization required to sustain that level of performance. One of the advantages of purpose-built hardware that incorporates cryptographic acceleration cards is that CPU and memory resources are effectively dedicated to cryptographic functions. You're essentially getting an extra CPU -- one that is automatically dedicated to and used for cryptography. That means the general-purpose compute available for TCP connection management and for applying other security and performance-related policies is not consumed by cryptographic functions. The general-purpose CPU and memory utilization necessary to sustain a given rate of encryption and decryption will be lower on purpose-built hardware than on its virtualized counterpart.
That means while a virtual network appliance can certainly sustain the same number of cryptographic transactions, it may not (and likely won't) be able to do much else. And the higher the utilization, the bigger the impact on performance in terms of latency added to the application's overall response time.
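The cost being offloaded can be illustrated by timing the modular exponentiation at the core of RSA on a general-purpose CPU. This is a rough sketch, not a real TLS stack: the key size, operand values, and operation count are arbitrary, and the point is simply that every operation burns general-purpose cycles that an acceleration card would otherwise absorb.

```python
import time

def simulate_rsa_ops(n_ops, bits=2048):
    """Approximate the cost of RSA private-key operations with
    big-integer modular exponentiation, the arithmetic at RSA's core.
    On a platform without a crypto card, this work competes with
    connection management for the same CPU."""
    base = (1 << (bits - 1)) | 1
    exponent = (1 << (bits - 1)) | 1
    modulus = (1 << bits) - 159          # arbitrary odd modulus
    start = time.perf_counter()
    for _ in range(n_ops):
        pow(base, exponent, modulus)     # one "decryption"-sized operation
    return time.perf_counter() - start

elapsed = simulate_rsa_ops(20)
print(f"20 simulated RSA ops took {elapsed:.3f}s "
      f"(~{20 / elapsed:.0f} ops/sec on general-purpose CPU)")
```

Scale the operation count to a busy site's SSL transaction rate and the CPU time consumed -- and therefore unavailable for other tasks -- grows linearly.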
You can generally think of cryptographic acceleration as dedicated compute resources for cryptography. That's oversimplifying a bit, but when you distill the internal architecture and how tasks are actually assigned at the operating system level, it's an accurate, if abstracted, description.
Because the virtual network appliance must use general-purpose compute for these computationally expensive and intense operations, less compute remains for other tasks, lowering the overall capacity of the virtualized solution. In the end, the costs to deploy and run the application will be higher in OPEX than CAPEX, while the purpose-built solution will be higher in CAPEX than OPEX -- assuming equivalent general-purpose compute between the virtual network appliance and the purpose-built hardware.
Posted by Karl Triebes on 07/13/2010 at 12:47 PM
Infrastructure 2.0, also known as dynamic infrastructure or dynamic data center, is the next-generation data center model in which IT resources (these days, likely virtualized) are pooled and leveraged to provide flexible and scalable IT capacity on demand--whether in the data center or in the cloud. In theory, everybody wants it, but how easy will it be for enterprises to implement? And what does it take to achieve it?
Any IT organization that has struggled with enterprise application integration (EAI) knows how difficult it is to integrate disparate systems, applications, and data sources. Implementing Infrastructure 2.0 -- a fully integrated, collaborative infrastructure -- isn't any easier. At least, not yet. It will require whole new levels of integration, automation, and orchestration beyond what's required for EAI.
Many organizations have already taken steps toward automation--converting the steps required to complete management tasks (such as provisioning or deprovisioning a virtual machine) from manual to automated processes. Automation is about managing infrastructure components to provide consistency, eliminate redundant tasks, cut down on errors, and improve response times. In contrast, orchestration is about using the results of those automated processes to make intelligent decisions based on business goals. Where automation helps improve IT performance, orchestration uses that improvement to achieve specific business goals and initiatives. Because orchestration builds on automation, automation must be mastered first. (It's counterproductive to make business decisions based on processes that are not proven and reliable.)
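The automation/orchestration distinction might be sketched like this: automated tasks are the reliable building blocks, and orchestration chooses among them to pursue a business goal. The function names, pool structure, and cost ceiling below are hypothetical, invented purely to make the distinction concrete.

```python
def provision_vm(pool):
    """Automation: a repeatable, consistent version of a manual task."""
    pool["capacity"] += 1
    return pool

def deprovision_vm(pool):
    """Automation: the reverse task, equally repeatable."""
    pool["capacity"] = max(0, pool["capacity"] - 1)
    return pool

def orchestrate(pool, demand, cost_ceiling):
    """Orchestration: use the automated steps to pursue a business
    goal -- meet demand without exceeding a spending ceiling."""
    while (pool["capacity"] < demand
           and pool["cost_per_vm"] * (pool["capacity"] + 1) <= cost_ceiling):
        provision_vm(pool)
    while pool["capacity"] > demand:
        deprovision_vm(pool)
    return pool

pool = {"capacity": 2, "cost_per_vm": 50}
orchestrate(pool, demand=5, cost_ceiling=300)
print(pool["capacity"])  # → 5 (5 VMs at $50 stays under the $300 ceiling)
```

If `provision_vm` were flaky, the orchestration loop would make business decisions on unreliable foundations -- which is exactly why automation must be mastered first.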
Enterprises are not alone in their struggle to implement Infrastructure 2.0; network vendors and infrastructure providers share the same challenges as they develop solutions to help customers achieve these goals.
Ideally, the best way to achieve advanced levels of integration, automation, and orchestration is through service-enabled APIs. The problem is, most vendors have their own APIs, and no two are alike--you can't use one vendor's API to manage another vendor's network devices.
The good news is that many of today's infrastructure management solutions are API-enabled--SOAP, HTTP, REST, XML, JSON, etc. That's why many network vendors are working closely with partner/vendors that provide these solutions. Through collaborative partnerships, network vendors can tightly integrate their networking (load balancing and application delivery network) solutions with popular infrastructure management solutions such as HP Operations Orchestrator, Microsoft Virtual Machine Manager, and VMware vCenter Orchestrator. (These solutions are somewhat analogous to EAI solutions that include vendor-specific software adapters.) Through automation, these solutions greatly simplify network deployment, management, and maintenance tasks--and bring organizations one step closer to implementing the orchestration that's ultimately required to achieve a fully integrated, collaborative infrastructure.
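A service-enabled API interaction of the kind these partnerships rely on might look like the sketch below, which builds (but deliberately does not send) a REST call that an orchestration tool could use to add a server to a load-balancing pool. The endpoint path and payload schema here are invented for illustration -- real vendor APIs differ, which is exactly the interoperability problem described above.

```python
import json
import urllib.request

def build_add_member_request(api_base, pool_name, member_ip, member_port):
    """Construct a hypothetical REST call to add a member to a
    load-balancing pool. The URL structure and JSON fields are
    assumptions, not any specific vendor's API."""
    payload = json.dumps({"address": member_ip, "port": member_port}).encode()
    return urllib.request.Request(
        url=f"{api_base}/pools/{pool_name}/members",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_add_member_request("https://adc.example.com/api", "web-pool",
                               "10.0.0.21", 8080)
print(req.get_method(), req.full_url)
# → POST https://adc.example.com/api/pools/web-pool/members
```

Because each vendor's path and schema differ, an orchestration tool needs a separate adapter per vendor -- the same role vendor-specific software adapters play in EAI suites.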
Posted by Karl Triebes on 07/01/2010 at 12:47 PM
While many organizations are still trying to get their arms around cloud computing concepts and terminology, "cloud balancing" might seem like just one more piece of jargon to add to the confusion. In fact, cloud balancing is an important concept that will play a significant role in cloud deployments, opening up new possibilities for organizations with limited resources to benefit from cloud computing.
So, what is cloud balancing? Think of it as the next generation of global server load balancing; it extends into the cloud the architectural deployment model currently used in global server load balancing.
With cloud balancing, application requests are distributed across application deployments in multiple data centers and in the cloud, thereby increasing the number of available locations from which an application can be delivered.
With traditional global server load balancing, the technical goals are to deliver high availability at maximum performance. Application routing decisions are made based on variables such as application response time, application availability at a given location, current and total capacity of the data center and cloud computing environment, user location, and time of day.
Increasingly, however, customers want to make application routing decisions based on other variables, such as the cost to execute a request by location, total cost to deliver the request, regulatory requirements, and the services required to fulfill a request per SLAs. High availability and performance are still important, but they also want to deliver applications using the least amount of resources at the lowest possible cost. To do that, they need a way to supply those additional variables to the global application delivery solution. Cloud balancing itself doesn't provide the technical framework for collecting these variables; what's required is a global application delivery solution that is context-aware--one that integrates the network, application, and business variables into the decision-making process. When these additional business-driven variables are available, cloud balancing becomes a viable solution for organizations to optimally deliver applications at the lowest possible cost with the fewest amount of resources.
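A context-aware routing decision of the kind described might combine those variables in a simple scoring function. The weights, site data, and the treatment of regulatory fit as a hard constraint below are all invented for illustration:

```python
def score_site(site, weights):
    """Lower is better: fold performance and cost variables into one
    routing score, as a context-aware global application delivery
    solution might. Regulatory fit is treated as a hard constraint."""
    if not site["meets_regulations"]:
        return float("inf")
    return (weights["latency"] * site["response_ms"]
            + weights["cost"] * site["cost_per_request"])

sites = [
    {"name": "dc-east", "response_ms": 40, "cost_per_request": 0.0020,
     "meets_regulations": True},
    {"name": "cloud-a", "response_ms": 55, "cost_per_request": 0.0008,
     "meets_regulations": True},
    {"name": "cloud-b", "response_ms": 30, "cost_per_request": 0.0015,
     "meets_regulations": False},
]
weights = {"latency": 1.0, "cost": 20_000.0}
best = min(sites, key=lambda s: score_site(s, weights))
print(best["name"])  # → cloud-a: cheaper delivery outweighs higher latency
```

With business-driven weights in the mix, the cheaper cloud location wins despite higher latency -- the fastest site is no longer automatically the best one.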
Challenges persist with cloud balancing. Some stem from the immaturity of current cloud-based offerings and will correct themselves as the market matures; others will inevitably call for new standards to be written:
- Finding the right cloud provider remains one of the most difficult and time-consuming challenges. With cloud computing still so new, it's difficult for organizations to sort through the myriad services and compare one provider's offerings to another's. Compounding the problem, offerings change as market demands change.
- Application portability ranges from difficult to impossible because there are no standards among cloud providers for migrating applications. Lack of interoperability at the application layer compounds this problem. Proprietary platforms make it difficult to incorporate local data center deployments into a cloud balancing solution. In contrast, commercial platforms potentially make it easier to implement a cloud balancing solution, but only if the virtualization platforms are the same. If not, this hurdle may prove just as challenging as that of proprietary platforms.
- Lack of integration between the global and the local application delivery solutions makes cloud balancing solutions impossible. Cloud balancing requires variables supplied by the local environment, so the global and local environments must be able to share information. One way to get this integration is to use a single vendor, but many organizations don't want to be locked in. What's needed is a dynamic, cross-environment, vendor-neutral solution, which will most likely emerge from standards-based APIs and Infrastructure 2.0 efforts. Until then, however, organizations will need to leverage existing component APIs to integrate variables.
- Architectural and operational differences between global and local application delivery solutions present similar kinds of issues. Virtual appliances are helpful, but only to a point, because many cloud computing models are based on proprietary virtualization technologies. That makes it difficult for organizations to replicate architectures across cloud deployments.
Deploying a virtual application delivery controller (vADC) along with the application in the cloud can help solve some of the architectural inconsistencies. A vADC provides the local load balancing component that's needed to implement a cloud architecture. With a vADC, an organization has the means by which to monitor and manage the health of the cloud-based deployment. A vADC also solves the integration issue, providing a way for the global application delivery controller to include the variables used in cloud balancing and choose the best location from which to serve applications.
For organizations that want to improve application performance, ensure application availability, and implement a strategic disaster recovery plan, cloud computing is a cost-effective alternative to building additional data centers. Cloud balancing extends those advantages by enabling organizations to leverage cloud deployments along with their local application delivery deployments. As a result, smaller organizations with limited resources will have the same opportunities as large organizations to optimize application delivery.
Posted by Karl Triebes on 06/18/2010 at 12:47 PM
You can hardly open a technology or business journal, Web site, or newspaper today without hearing some commentary on cloud computing -- what it is and how it will change IT and business. Very few, if any, would argue that its impact will not continue to be felt for many years, regardless of how it all comes together in the end. At the same time, you might also notice that no single definition exists for cloud computing as it is talked about, planned, and even implemented in today's enterprise networks. Furthermore, any attempt by vendors to define cloud computing is seen as simply a marketing ploy to make their products seem more necessary.
While observers may still disagree on whether SaaS is or isn't cloud computing, many agree that SaaS could be -- and often is -- part of or delivered via cloud architectures, and that cloud computing can be much more. Enterprise customers are interested in more than simple server virtualization and consolidation, but they recognize the security and access-control implications as applications and data are dispersed across these technologies.
One of the key components is the idea that cloud computing is a “style” of computing -- an architecture. It is simply a way of combining mostly existing tools together, automating and orchestrating those processes in order to achieve specific results. What are those results? Ultimately, it is to provide a computing infrastructure that users -- and not necessarily technical users at that -- can simply plug into.
The question, then, is what this architecture looks like and what is required, above and beyond the standard tools we have today, to qualify as a "cloud."
An important strategic consideration is the integration of all the pieces of the infrastructure to create the style known as cloud. This includes everything from the bare metal to the users themselves and all the elements in between. In addition, it suggests different ways to view the interaction of various operations within the architecture depending on your role as a technical builder or a business manager.
In general, we see cloud computing architecture built upon several functional component blocks (e.g., compute resources or deployment environments), organized into specific layers of a pyramid whose width represents the amount of technical skill or depth of expertise required to build and/or deploy that layer. These layers are roughly synonymous with the notions of Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). At the very apex of the pyramid are the users accessing the applications built upon the cloud architecture; in the center is a block that traverses all the others, providing real-time connectivity, information coordination, and flow control between the blocks/layers.
We are convinced that to maximize the value of a cloud architecture, each component must exist in some form. For example, dynamic control plane elements are required at every layer of the cloud architecture because cloud environments must be operationally efficient and on-demand. This calls for a level of automation and orchestration that can only be achieved by integrating components across the architecture. Without this collaborative capability, a cloud architecture cannot deliver the benefits associated with the model; if any core component is missing, collaboration will fail, and a true cloud architecture cannot be achieved.
Built from core components that include compute resources and management resources, the base layer of the cloud architecture requires the most technical competence to build and/or deploy. This is the very foundation upon which a cloud is built and, as suggested, comprises the components most often supplied by vendors who provide IaaS solutions to their customers.
Many applications are built upon software platforms that run on top of infrastructure services. These platforms may be environments like Oracle, BEA, or ASP.NET, and they provide a convenient way for businesses to build custom applications without needing to concern themselves with the details that lie beneath the platforms. While many platforms are based on standards (e.g., Java EE), others are proprietary, including Google App Engine and architecture frameworks developed and deployed by enterprise architects.
At the top of the pyramid, we find general business computing. This is where many organizations find themselves -- especially business organizations -- with the ability to identify a business need, but without the ability to build an application or the infrastructure upon which it runs. Instead of relying on an internal IT organization to build and/or deploy infrastructure and platforms, business stakeholders simply select an application and run it. This occurs for many different reasons, but the most common theme is that the capital, operating expense, and man-hours required to implement applications that have become standardized across industries are not financially feasible, are not an efficient use of IT resources, or are simply beyond the organization's capabilities.
Putting It Together
As we move from the building blocks of infrastructure to the pinnacle of the application, the skill and knowledge necessary to build the components decreases. This is simply because each layer can be built on top of the previous one without a full understanding of the layer beneath. An organization with limited infrastructure skills can readily purchase IaaS from a vendor and build its own platform (or several) upon that infrastructure without the expertise to build the infrastructure from scratch. This has been happening for years in the managed hosting business. The organization does not need hardware or networking experts on staff and therefore requires less technical expertise across the entire picture.
One of the most exciting, and possibly frightening, aspects of cloud computing architecture is that it finally brings the dynamism of business-aligned IT to a workable model. IT organizations build IT systems; business units deploy solutions to their business problems, which often involve the use of IT systems.
Posted by Karl Triebes on 06/01/2010 at 12:47 PM