Are Enterprises Running Away From The Mainframe? BMC Survey Says 'No'
Rumors of mainframe death have been greatly exaggerated.
- By Dan Kusnetzky
If I paid attention only to the press releases and briefings I receive from vendors of cloud computing services, virtualization technology, application frameworks and even industry-standard x86 applications, it would be easy to come to the erroneous conclusion that enterprises were running away from their mainframe workloads and mainframe investments as quickly as they could.
Having conducted many surveys myself, I know that this is simply not the case. A quick read through the results of the recent BMC 11th Annual Mainframe Research study, along with others, reinforces my view that mainframes are useful tools that continue to be widely used in major enterprises.
Let's look at a few of the survey results BMC just reported.
Dan's Take: Why Mainframes Continue To Make Sense
The survey – the industry's largest mainframe survey, reflecting the perspectives of more than 1,200 executives and technical professionals – also indicated that mainframes are a critical core IT platform supporting the volume and velocity of data and transactions being created by digital business.
With nearly 60 percent of companies seeing increased data and transaction volumes, and a growing number of databases, companies continue to select the mainframe as a key platform. The mainframe is a highly secure, superior data and transaction server, particularly as digital business adds unpredictability and volatility to workloads.
Respondents surveyed fall into three groups based on their mainframe investment strategies:
- 58 percent of companies surveyed are in the increasing group and looking to grow their investment and use of the mainframe.
- 23 percent indicate they will keep a steady amount of work on the mainframe.
- Only 19 percent plan to reduce the usage of the platform.
Executives planning to grow their investment see value in the mainframe for its availability, performance, and security strengths. These respondents often have growing revenues and are focused on modernization and taking advantage of technologies such as Java, advanced automation, and lower-cost specialty mainframes. Respondents who plan to remain steady view mainframes as a secure and highly available engine for running their businesses, but are not looking to add new workloads.
When enterprises have to throw things overboard to lighten the load in stormy weather, they don't throw the oars and sail overboard. For some reason, many in the industry believe that the mainframe is no longer among those essentials and deserves to be quickly jettisoned.
Part of the reason mainframes won't die is that often they simply cost less to operate when all the costs of ownership and workload operations are considered. Furthermore, it's often the case that a centralized workload is easier to manage and secure than one that's highly distributed.
Part of my career was spent being an industry analyst at IDC. We conducted many return on investment and cost of ownership studies for clients. These studies were designed to shine a light on how much it would cost to deploy a specific workload or service on a number of different platforms.
These studies were based on surveying organizations that were actually deploying these workloads or services on different platforms so we could learn from their example. These studies were not "put the costs of some hardware and software into a spreadsheet and crank through some numbers" modeling exercises that some vendors present as their proof that their own solution is the best.
We'd hunt down executives at many organizations worldwide to learn about their actual costs. These studies were quite comprehensive: a typical survey asked about cost factors in anywhere from 75 to 300 different categories.
Once the survey responses were obtained (often tens of thousands), the analysts would tabulate the results with the goal of modeling the real costs for hardware, software licenses, staff equivalent costs and, in some studies, the costs of datacenter floor space, power and cooling.
Although these studies are now ancient history, and the underlying technology has improved dramatically, I believe what was learned still has value.
While wading through the results, a factor that often turned out to have a major impact on the total costs was whether the solution was centralized or distributed. Typically, the more distributed a computing solution was -- that is, the more devices involved, including systems, storage, networking and power equipment -- the higher the total cost to the company being surveyed.
We learned that a distributed computing approach almost always required a larger number and wider range of skills to be available. Thus, the organization in question was likely to need a bigger staff.
If we step back a bit, we can see that this one single cut of the data shows a major reason why mainframes have stayed in the enterprise datacenter. It also explains why the x86 systems and software industry is moving to create a more centralized view of distributed systems. Regardless of whether they're calling them "converged," "hyper-converged" or even "ultra hyper-converged," they're really trying to recapture some of the benefits mainframes have always had.
Perception vs. Reality
Another thing these studies almost always showed was that over a five-year period, costs often differed from preconceived notions. Costs for hardware and software, when taken together, were typically less than 20 percent of the total five-year cost of ownership. Staffing, networking and power costs typically were significantly higher than hardware and software.
(As an aside, whenever a vendor claims that its software product can reduce an enterprise's cost of ownership, I want to know whether that claim was based on research that included those factors or merely looked at software-related savings. Even saving 50 percent in a category that makes up only 10 percent to 12 percent of the total might not result in huge savings for the company.)
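The arithmetic behind that aside is simple but worth making concrete. The sketch below uses an entirely hypothetical five-year cost breakdown (the category names and percentages are invented for illustration, not taken from any IDC study) to show how little a 50 percent cut in a small category moves the total:

```python
# Hypothetical five-year TCO breakdown as fractions of the total.
# All figures are invented for illustration only.
tco_shares = {
    "hardware": 0.10,
    "software": 0.10,
    "staffing": 0.45,
    "networking": 0.20,
    "power_cooling": 0.15,
}

total_cost = 1_000_000  # hypothetical five-year total, in dollars

# A vendor claims its product cuts software costs by 50 percent.
software_savings = 0.50 * tco_shares["software"] * total_cost

print(f"Software savings: ${software_savings:,.0f}")                   # $50,000
print(f"Reduction in total TCO: {software_savings / total_cost:.0%}")  # 5%
```

A dramatic-sounding 50 percent saving in one line item shaves just 5 percent off the five-year total, which is why the surrounding cost categories matter so much when evaluating such claims.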
Suppliers such as IBM use this type of insight to argue that companies relying on Linux, Java, and databases to support their cloud computing, analytical, and transactional workloads would see higher performance at a lower overall cost by deploying a small number of IBM mainframes rather than a larger number of any vendor's industry-standard x86 systems to do the same work.
Not Going Anywhere
Most mainframe workloads are complex beasts that are built to be tightly integrated into the mainframe's transaction processing system, its database engine, its storage architecture and even its style of network I/O. So, it isn't at all easy to uproot one of these beasts to move it anywhere else. What has been successful, on the other hand, is to peel away specific aspects of the workload, like enterprise resource planning or customer relationship management, and substitute something else, like a cloud service.
All this data shows that mainframes are likely to remain at the heart of enterprise datacenters for quite some time to come.
Daniel Kusnetzky, a reformed software engineer and product manager, founded Kusnetzky Group LLC in 2006. He's literally written the book on virtualization and often comments on cloud computing, mobility and systems software. He has been a business unit manager at a hardware company and head of corporate marketing and strategy at a software company.