In-Depth

Mainframe Clouds Roll In

More and more x86 workloads are returning to the big box.

Prototypical cloud capabilities in the x86 world include service level agreement (SLA) guarantees; pooled resources; dynamic, self-service allocation and reallocation of resources; strong security; horizontal scaling; and multi-tenant hosting with isolation. Today, there are very few clouds—private, hybrid or public—that can boast of all these capabilities, but there's one platform that more consistently offers them than any other: the mainframe computer.

The mainframe is anything but a latecomer to the cloud game. IBM first seeded the cloud back in the mid-1960s when it began virtualizing its legendary System/360 series, and in the course of evolving it over the past 40-plus years, IBM has brought mainframes to the point where they now constitute a direct, if still largely unknown, threat to x86 clouds.

How unknown? Enough so that some prominent vendors have never heard of the mainframe cloud. But that is changing as user organizations increasingly migrate their x86 workloads into mainframe environments, which they consider to be technologically superior to the VMwares of the world.

Jon Toigo is the outspoken CEO of Toigo Partners International LLC, a provider of consumer-focused IT research and analysis. He began his career working with mainframes, strongly believes in their cloud prowess, and is quick to point out the comparative weaknesses of x86 platforms. For example, he asserts that most current hypervisor-based technologies are not resilient: if one application is pulled out or fails, the entire stack goes down with it.

Toigo maintains that it took mainframe technologists 10 years of multitenant computing in logically partitioned environments to figure out ways to circumvent that glaring hypervisor shortcoming. And they've been successfully doing so for almost 30 years.

"Mainframes have a leg up on multi-tenancy, application insulation, and the ability to corral apps from one operating system into logical partitions, so the mainframers have been doing virtualization forever, and everything old is new again in the current generation of x86 hypervisors," Toigo declares.

According to Toigo, mainframe resources have always been pooled and managed at the system layer, where the system oversees all infrastructure resources and requires them to comply with both the physical (hardware) interconnect standards and the management services used in that layer. The result is a highly managed infrastructure with the ability to dynamically allocate and reallocate resources, including storage, processors and bandwidth.

"This is the baseline for building multitenant, resilient infrastructures with predictable service levels, and that's exactly how clouds are described," Toigo says.

The IBM Perspective
Bill Reeder, Linux and cloud computing leader for System z at IBM, says mainframe clouds are superior to their x86 counterparts because they offer centralized infrastructures, are more energy efficient, and run at 90 percent to 95 percent CPU utilization (compared to an upper limit of about 55 percent for x86) while maintaining transactional throughput. Reeder describes the CPU utilization advantage of mainframes in terms of a reduced tax on unused capacity: if a company is running at 50 percent CPU utilization, it's paying a 50 percent tax for capacity it isn't using, a penalty that applies to energy and software costs as well.
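
For readers who want to see the arithmetic behind Reeder's point, here is a minimal sketch. The utilization figures are the ones he cites in the article; the dollar cost per provisioned CPU hour is a purely hypothetical placeholder, not IBM pricing.

```python
# Rough sketch of the "utilization tax" arithmetic Reeder describes.
# Utilization figures come from the article; the cost figure is a
# hypothetical placeholder, not vendor pricing.

def effective_cost_per_used_hour(cost_per_provisioned_hour, utilization):
    """Cost of one productive CPU hour when only part of capacity is used."""
    return cost_per_provisioned_hour / utilization

HYPOTHETICAL_COST = 1.00  # $ per provisioned CPU hour (placeholder)

for label, util in [("x86 at 50% utilization", 0.50),
                    ("x86 upper limit cited (55%)", 0.55),
                    ("mainframe (midpoint of 90-95%)", 0.925)]:
    cost = effective_cost_per_used_hour(HYPOTHETICAL_COST, util)
    idle = (1 - util) * 100
    print(f"{label}: ~${cost:.2f} per useful CPU hour "
          f"(~{idle:.0f}% of paid-for capacity sits idle)")
```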

Reeder says IBM is definitely marketing the mainframe cloud, but primarily to existing customers in certain verticals, such as finance, where banking customers looking for a better banking front-end are using cloud on Linux on System z. IBM is also targeting small companies building Software as a Service (SaaS) offerings. One such company is Transzap Inc., which enables energy companies to electronically process and track their critical business information in real time, so they can quickly make better financial and operational decisions. Reeder says Transzap migrated from its x86 architecture to System z because the company wants to prevent its SaaS customers from changing providers, which they can do with a single click.

Big Blue raised its cloud profile in July 2010 by offering zEnterprise, a new mainframe that enables customers to integrate mainframe and distributed computing solutions in a unified management environment. Although zEnterprise has been cited by some observers as a major step forward for mainframe clouds, Toigo calls it a "kluge" that cables IBM PowerPC chip-based blade servers to mainframes, adding that the blade servers are largely managed separately. He did allow that zEnterprise provides some value by hosting "recalcitrant" Microsoft Windows workloads.

IBM must have been getting similar feedback, because Reeder says that zEnterprise has been simplified because "what we were selling seemed too complicated." The most recent version features the mainframe running the z/VM OS and Linux servers, with a Tivoli provisioning manager enabling users to create images from the zEnterprise Cloud Starter Edition.

CA Technologies: The Mainframe Is Here to Stay
CA Technologies is one of the few major mainframe software companies, and Scott Fagen is its distinguished engineer and mainframe chief architect. He says that mainframes are very cost-effective and highly controllable, traits that aren't otherwise currently available in the cloud.

Fagen says that when companies have SLAs, they need to be very careful about outsourcing to cloud providers because if those providers go out of business, or their service becomes interrupted in any way, user data becomes unavailable. "There's a world of difference between someone printing your bills, and if you're Visa, someone else managing your credit-card transactions," he notes.

CA Technologies sponsored a 2010 survey in which slightly more than half of the companies reported revenue of more than $1 billion per year, and 64 percent of respondents were either vice presidents or CIOs. According to the report, "80 percent of the executives surveyed confirmed that the mainframe remains an important part of their current business strategy, and almost all respondents, 97 percent, rated their IT organization as moderately or highly prepared to ensure the continuity of mainframe database administration, but needed the skilled workforce to support this strategy."

Another 80 percent of respondents said they were either actively investigating or had already implemented multiple cloud solutions, and 68 percent reported that mainframes are currently part of that strategy, or will become part of it.

Fagen says that when he talks to CIO mainframe users, he asks them if they're getting what they paid for with their IT dollars, and if they've done a true cost accounting of what they own. He says he's talked to customers who are working on or have completed TCO studies that show some "very interesting errors in judgment, or errors of commission" in how users cost their assets. For example, one customer told him that his company was charging the corporate jet off the mainframe cost center—at a cost of $10,000 per hour for fuel alone.

Apprised of that $10,000 outlay, Fagen asked, "Who pays for the air conditioning in the room?" The datacenter manager said, "That's datacenter, so it goes to the mainframe cost center." Fagen asked why and the datacenter manager said, "Because when the datacenter was built, it was filled with mainframes." At that point Fagen and the datacenter manager walked over to a corner where there were a couple of z10 mainframes occupying a small area, while the other several thousand square feet of space in the room was filled with rack after rack of distributed servers, storage and communications controllers. Fagen said that it would be a good idea to divide the floor space up, do a fair accounting, and charge the distributed team for the air conditioning they were using. He also suggested determining how much heat all the distributed hardware was generating.

The Big Sticking Point: Dynamic Self-Service Provisioning
Many experts believe that for all their strengths, mainframe clouds have one major chink in their armor: dynamic self-service provisioning on user demand. Critics say the self-service capability is missing because tasks must first be approved, and because admins—not users—are responsible for provisioning them.

When Toigo is asked about this issue, he responds by saying, "Actually, that's a good question," and follows up with a discussion of what's being done to make the dynamic self-service provisioning model a mainframe reality.

Reeder says dynamic self-service provisioning is provided via products from Tivoli, as well as from IBM distribution partners such as Red Hat Inc. and SuSE, which have open source, self-provisioning apps.

For his part, Fagen says of the situation, "I think that's folklore rather than truth." He describes how a virtual server can be created, used and decommissioned, saying that what people really want after they've done their development, testing and quality assurance for new applications or infrastructures, is that those changes be implemented quickly, "so it doesn't take me a year to roll the OS in the middleware stack, and then adjust the application stack on top of it."

Uncooking the Books
In order to assuage the concerns of cost-conscious IT execs, Fagen engages them in "the right apples-to-apples comparison" between distributed systems and clouds. In one case, the executive he was talking to worked for a large multinational bank—the same one charging $10,000-per-hour jet expenses to the mainframe cost center—and had recently been approached by one of the major cloud providers, which said it would handle the bank's cloud services for 10 cents per CPU hour. At the time, the bank was an Amazon Web Services user.

After it did a true cost accounting, the bank discovered that its distributed system was costing five cents per CPU hour. "They threw Amazon out immediately," Fagen declares. "Then the mainframe guys came back and said, ‘You know, we did a real analysis of the cost of running z/OS and Linux on System z, which factored in a more realistic accounting of power and air conditioning costs. We also dropped the cost of the corporate jet. The true cost is two cents per CPU hour.' It was a staggering revelation for them."
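
To put those per-CPU-hour figures in perspective, here is a small sketch of how they compound over a year. The three rates come from Fagen's anecdote; the annual CPU-hour volume is an assumed figure chosen purely for illustration.

```python
# How the per-CPU-hour rates in Fagen's anecdote scale over a year.
# The three rates are from the article; the annual volume is assumed.

ANNUAL_CPU_HOURS = 10_000_000  # assumption, not from the article

rates = {
    "Cloud provider's offer":                        0.10,
    "Bank's distributed systems (true accounting)":  0.05,
    "z/OS and Linux on System z (true accounting)":  0.02,
}

for platform, rate in rates.items():
    print(f"{platform}: ${rate * ANNUAL_CPU_HOURS:,.0f} per year "
          f"at {ANNUAL_CPU_HOURS:,} CPU hours")
```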

Buoyed by these findings, the bank IT team got together and found that there was a long list of applications that were very compatible with the z/OS back-end transactions and databases, so it began migrating those apps out of x86-based WebSphere environments and onto WebSphere on z/Linux, which eliminated a great deal of latency because IT could use HiperSockets to go from z/Linux to z/OS. The change also eliminated the need for an extensive amount of Cisco routing gear.

"In the end," explains Fagen, "when all the accounting inconsistencies that had made the mainframe look bad were rectified, it was easily demonstrated that the mainframe systems did not even come close to dominating the power and A/C usage, and nobody could find any trips that the mainframe systems or support teams ever took on the corporate jet. That changed the view from ‘How can we get rid of this expensive sinkhole?' to ‘How can we use this very economical resource more?'"

Implementing the Social Contract
In order to enhance relationships with customers, mainframe clouds need to implement what Fagen calls a "social contract" based on the pay-as-you-go price model endemic to many private, hybrid and public clouds. As he puts it, many customers are approaching CA Technologies and saying they want to pay fair value, but basing it on capacity isn't fair because most of the time they're not running their box to its full utilization. As an alternative, many of them are asking CA Technologies to join them in finding a way to measure their business success and then charge for the use of their software based on that business metric.

CA Technologies has responded by striking deals with several insurance companies in which, rather than charging them for millions of instructions per second (MIPS) on the floor, it charges them for the number of new subscribers to their insurance policies. If the number of subscribers goes up, CA is paid more; if that number goes down, there's a renegotiation that reduces what CA is paid.
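
A minimal sketch of that business-metric billing model follows; the per-subscriber rate and subscriber counts are hypothetical, since the article doesn't disclose the actual contract terms.

```python
# Minimal sketch of the business-metric pricing Fagen describes:
# the software charge tracks subscriber count rather than MIPS consumed.
# The rate and counts below are hypothetical, not actual contract terms.

RATE_PER_SUBSCRIBER = 0.50  # hypothetical $ per active policy per month

def monthly_charge(subscribers: int) -> float:
    """Software charge tied to the customer's business metric."""
    return subscribers * RATE_PER_SUBSCRIBER

print(monthly_charge(1_200_000))  # subscriber growth -> CA is paid more
print(monthly_charge(900_000))    # decline -> charge falls; per the article,
                                  # the rate itself is also renegotiated down
```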

Fagen says this cloud-oriented, pay-for-what-you-use approach has been successful for his company so far, and he believes it represents a pivot point that will help customers make their best price-related choices about what they should and shouldn't put in the cloud.

"Inevitably, if you're delivering service across all these things, your SLA, the people who you're selling to don't care how the service is delivered," Fagen says. "They don't care if you have an army of accountants, or a really good piece of software that generates reports. What they have is an agreement with the IT department to deliver those reports at a particular time."
