App Delivery On-Demand

Sometimes, Virtualization Makes No Sense

When organizations are choosing between hardware and virtual servers, some would argue that it doesn't make sense to purchase a hardware solution when all you really need is the software; by that reasoning, you should just acquire and deploy a virtual network appliance.

One point this argument fails to address is the increase in compute power you get from purpose-built hardware that combines general purpose processors with specialized hardware cards accelerating specific functions like compression and RSA (SSL) operations. But for the purposes of this argument we'll assume that performance, in terms of RSA operations per second, is about equal between the two options.
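To put a number on "RSA operations per second," here is a minimal sketch, assuming the third-party Python cryptography package is installed, that times private-key RSA signing, the expensive server-side step in an SSL handshake. The key size and iteration count are arbitrary choices for illustration.

```python
# Rough benchmark of RSA private-key operations per second.
# Assumes the third-party "cryptography" package is installed (pip install cryptography).
import time

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

ITERATIONS = 500   # arbitrary sample size, for illustration only
KEY_SIZE = 2048    # a common SSL key size

private_key = rsa.generate_private_key(public_exponent=65537, key_size=KEY_SIZE)
message = b"handshake payload"  # placeholder data

start = time.perf_counter()
for _ in range(ITERATIONS):
    # Each signature is one private-key RSA operation, the costly part of the handshake.
    private_key.sign(message, padding.PKCS1v15(), hashes.SHA256())
elapsed = time.perf_counter() - start

print(f"~{ITERATIONS / elapsed:.0f} RSA {KEY_SIZE}-bit private-key operations/sec on this CPU")
```

The conventional command-line equivalent is openssl speed rsa2048; on purpose-built hardware the same operations would largely be handed off to an acceleration card rather than timed against the general purpose CPU.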

That still leaves two situations in which a virtualized solution is simply not a good choice.

Compliance with FIPS 140
For many industries--federal government, banking, and financial services among the most common--SSL is a requirement, even internal to the organization. These industries also tend to fall under the requirement that the solution providing SSL be compliant with FIPS 140-2 at Level 2 or higher. If you aren't familiar with FIPS 140-2 or the different "levels" of security it specifies, then let me sum up: Level 2 adds physical security requirements that are not part of Level 1, which asks only that hardware components be "production grade," a bar we can assume the general purpose hardware deployed by cloud providers clears.

FIPS 140-2 Level 2 requires specific physical security mechanisms to protect the cryptographic keys used in all SSL (RSA) operations. The private and public keys used in SSL, and their related certificates, are essentially the "keys to the kingdom." The loss of such keys is considered a disaster because they can be used to (a) decrypt sensitive conversations/transactions in flight and (b) masquerade as the provider, using the stolen keys and certificates to build more authentic-looking phishing sites. More recently, keys and certificates--PKI (Public Key Infrastructure)--have become an integral component of DNSSEC (DNS Security Extensions), a means to prevent the DNS cache poisoning and hijacking that has bitten several well-known organizations in the past two years.
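To illustrate why the private key really is the "keys to the kingdom," here is a minimal sketch using Python's standard ssl module; the file paths are hypothetical, and the point is simply that any process able to read those two files can terminate SSL as if it were the legitimate site.

```python
# Minimal SSL/TLS server context setup; whoever holds these files can impersonate the site.
import ssl

context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)

# Hypothetical paths. In a FIPS 140-2 Level 2 deployment the private key does not
# sit on general purpose disk like this; it stays inside the protected hardware module.
context.load_cert_chain(certfile="server.crt", keyfile="server.key")

# From here the context can wrap any listening socket and decrypt client traffic,
# which is exactly what an attacker holding a copy of server.key could also do.
```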

Obviously you have no way of ensuring, or even knowing, whether the general purpose compute upon which you are deploying a virtual network appliance has the physical security mechanisms necessary to meet FIPS 140-2 Level 2. Therefore, if Level 2 or higher compliance is a requirement for your organization or application, you really don't have the option to go virtual, because such solutions cannot meet the physical requirements.

Resource Utilization
A second consideration, assuming performance and sustainable SSL (RSA) operations are equivalent, is the resource utilization required to sustain that level of performance. One of the advantages of purpose-built hardware that incorporates cryptographic acceleration cards is that it effectively dedicates CPU and memory resources to cryptographic functions. You're essentially getting an extra CPU; it's just that the extra CPU is automatically dedicated to and used for cryptographic functions. That means the general purpose compute available for TCP connection management and for applying other security and performance-related policies is not consumed by cryptographic work. The utilization of general purpose CPU and memory necessary to sustain a given rate of encryption and decryption will therefore be lower on purpose-built hardware than on its virtualized counterpart.
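As a rough way to see how much general purpose compute the cryptography itself consumes, here is a sketch, again assuming the third-party cryptography package and using arbitrary iteration counts, that compares process CPU time against wall-clock time for a burst of RSA private-key operations. On a virtual appliance that CPU time comes out of the same budget used for connection management and policy enforcement; an acceleration card would absorb it instead.

```python
# Measure how much general purpose CPU a burst of RSA operations consumes.
# Assumes the third-party "cryptography" package is installed.
import time

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

ITERATIONS = 500  # arbitrary, for illustration only
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

cpu_start, wall_start = time.process_time(), time.perf_counter()
for _ in range(ITERATIONS):
    key.sign(b"payload", padding.PKCS1v15(), hashes.SHA256())
cpu_used = time.process_time() - cpu_start
wall_used = time.perf_counter() - wall_start

# Without offload, nearly all of the elapsed time is general purpose CPU time,
# i.e. CPU that is unavailable for connection handling and policy work.
print(f"CPU time: {cpu_used:.2f}s of {wall_used:.2f}s elapsed "
      f"({cpu_used / wall_used:.0%} of one core busy with crypto)")
```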

That means that while a virtual network appliance can certainly sustain the same number of cryptographic transactions, it may not (and likely won't) be able to do much else. And the higher the utilization, the bigger the impact on performance in terms of latency added to the overall response time of the application.

You can generally think of cryptographic acceleration as dedicated compute resources for cryptography. That's oversimplifying a bit, but when you distill the internal architecture and how tasks are actually assigned at the operating system level, it's an accurate, if abstracted, description.

Because the virtual network appliance must leverage general purpose compute for these computationally expensive operations, there is less general purpose compute left for other tasks, which lowers the overall capacity of the virtualized solution. In the end, the cost to deploy and run the application will be weighted toward OPEX rather than CAPEX, while the purpose-built solution will be weighted toward CAPEX rather than OPEX, assuming equivalent general purpose compute between the virtual network appliance and the purpose-built hardware.
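As a back-of-the-envelope way to frame that trade-off, here is a small sketch in which every number is a hypothetical placeholder rather than vendor data: the virtual option needs extra general purpose instances to make up for the compute spent on cryptography, and that shows up as recurring cost instead of an up-front purchase.

```python
# Hypothetical CAPEX vs. OPEX comparison; every figure below is a placeholder.
YEARS = 3

# Purpose-built hardware: larger up-front purchase, crypto offloaded to dedicated cards.
hw_capex = 40_000                  # placeholder purchase price
hw_opex_per_year = 5_000           # placeholder power/support cost

# Virtual appliance: small up-front cost, but crypto consumes general purpose compute,
# so extra instances are needed to deliver the same overall capacity.
vm_capex = 2_000                   # placeholder licensing cost
vm_instances = 4                   # placeholder instance count to match capacity
vm_opex_per_instance_year = 6_000  # placeholder compute/hosting cost

hw_total = hw_capex + hw_opex_per_year * YEARS
vm_total = vm_capex + vm_instances * vm_opex_per_instance_year * YEARS

print(f"Purpose-built over {YEARS} years: {hw_total:,} (mostly CAPEX)")
print(f"Virtual appliance over {YEARS} years: {vm_total:,} (mostly OPEX)")
```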

Posted by Karl Triebes on 07/13/2010 at 12:47 PM

