The Infrastruggle
Back to the Future in Virtualization and Storage
What's old is new again. Marty McFly would get it.
If you're on social media this week, you've probably had your fill of references to Back to the Future, the 1980s sci-fi comedy much beloved by those of us who are now in our 50s, and by the many generations of video watchers who have rented, downloaded or streamed the film since. The nerds point out that the future depicted in the movie, as signified by the date on the time machine clock in the dashboard of a DeLorean, is Oct. 21, 2015. That's today, as I write this piece.
Interestingly, the date sets up a column I've been thinking about for a while. In part, it tracks the keynote address I'll be delivering at an industry confab in NYC next month on how to stay relevant as an IT pro in the current era of virtualization, software-defined and cloudified everything.
Does Old = Obsolete?
I did a lot of research for the talk, which will be delivered in Brooklyn in November, and discovered a dark undercurrent in the IT business: ageism. I read that men over 35 in Silicon Valley were finding it difficult to find work in tech companies or to find investors for tech start-ups. Numerous idiots in the venture capital world and in cloudy-land were making inane statements in public that 20-somethings have their best work ahead of them, while 30- and 40-somethings had already done what they could do and were more at risk of having work distractions -- like family, for example -- that limited the ROI on hiring or funding them. There was even a piece on the growing number of men getting plastic surgery, Botox injections and hair transplants in an effort to look younger, to try to game an ageist system.
I began to feel a bit like Arnold in the Terminator movie that hit theaters this summer. His character says, "I am old … and obsolete."
Fortunately, I fired my employer back in the 1990s and have been a business owner since then; I am, therefore, answerable to no one but my customers. I do not have the pressures or concerns about being viewed as irrelevant or obsolete or about being fired for having old ideas. Unless, that is, I fire myself at some point.
But many of the IT folk I deal with today are employed by companies, either tech sector or mainstream, and many do appear to be concerned. One IT operator at a kitchen appliance manufacturer sat across from me at a recent Las Vegas vendor show lunch break and bemoaned that her infrastructure was falling apart and she was getting a lot of heat.
Tales From the Trenches
Truth be told, some idiot in management read a Forbes article (actually an advertorial, though you couldn't tell) stating that clouds were a ticket to the corner office. The fellow then called down to the IT department and told them absolutely no more equipment was to be purchased, and that the strategy going forward was to find a cloud, then unplug all local servers and ship the workloads there. That was eight years ago, she said. Since then, no acceptable cloud service could be found, no new equipment had been purchased, and the company's IT department was in shambles.
At another event, an unemployed fellow told me that the cloud sales folks had contributed to his layoff by telling senior management that they were perfectly prepared to take over the company's IT, but that he was blocking the strategy from happening in order to preserve his own job. Old folks (he was 45) are like that, the sales guy said. Ultimately, the deal was done, the IT fellow was shown the door, and the company has been in turmoil ever since. The man said that he was only doing the due diligence that history had taught him needed to be done whenever you outsource anything.
Short-Term Thinking, Long-Term Failure
The Reagan Recession back in the 1980s produced the service bureau computing model (a kind of IT outsourcing service); the dot-com debacle of the late 1990s introduced application service providers and storage service providers (ASPs/SSPs, or Internet-based service bureaus); and the Bush Recession brought us clouds. With age and experience, the fellow noted, you can see a pattern: businesses are lulled by promises of reduced CAPEX and OPEX through IT outsourcing when times are tough, but tend to pull all of their resources back in-house when the economy is on the mend. He wanted to make sure that the service provider was on the up and up, that appropriate and reasonable SLAs were negotiated, and that an exit strategy existed to return IT to the company if (and when) things didn't work out.
In both cases, the wisdom that came with age and experience seemed to matter less to management than the promise of short-term CAPEX/OPEX savings and the prospect of looking fresh and fashionable. It was all very depressing.
Return of the Mainframe
But then something happened. I looked around at some other data points. For one, IBM is enjoying an uptick in revenues from the sale of … wait for it … mainframes. The z13, introduced at the beginning of the year, has become a darling of many companies that are switching off hundreds of servers and moving their virtual machines (VMs), running under KVM, onto the mainframe, where the cost model is better.
IBM claims you can stand up thousands of VMs on a z Systems mainframe for less than $100 each; that's significantly less than what licenses for VMware hypervisors cost. Moreover, mainframes generally require a much smaller complement of staff to manage and operate, they tend to have far better uptime than x86 tinkertoy servers, and software costs for the platform have come back down to earth -- runaway software pricing being a key reason folks migrated off the mainframe in the '80s and '90s. We've come back to the future, and it looks a lot like the past. Oldsters, with their knowledge of technology and best practices in the mainframe world, are suddenly in high demand.
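For a rough feel of the math, here's a back-of-the-envelope sketch in Python. Only IBM's sub-$100-per-VM claim comes from the vendor; the x86 licensing and consolidation figures are illustrative assumptions on my part, not price quotes:

    # Back-of-the-envelope VM cost comparison (illustrative numbers only).
    VM_COUNT = 2000

    # IBM's claim: thousands of VMs on one z Systems box at under $100 apiece.
    mainframe_cost_per_vm = 100  # USD, claimed upper bound

    # Hypothetical x86 farm: assume a $4,000 per-socket hypervisor license,
    # 2 sockets per server and 30 VMs per server (all assumed figures).
    license_per_socket = 4000
    sockets_per_server = 2
    vms_per_server = 30

    servers = -(-VM_COUNT // vms_per_server)  # ceiling division
    x86_license_cost = servers * sockets_per_server * license_per_socket

    print(f"Mainframe (claimed): ${VM_COUNT * mainframe_cost_per_vm:,}")
    print(f"x86 hypervisor licenses (assumed): ${x86_license_cost:,}")

Run it and the licensing bill alone for the assumed x86 farm comes out well north of the claimed mainframe figure, before you count servers, switches or staff.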
Legacy Storage Is Not the Problem
If you stick with x86 and virtualization, you may be concerned about the challenges of achieving decent throughput and application performance, which your hypervisor vendor has lately been blaming on legacy storage. That is usually a groundless accusation. The problem is typically located above the storage infrastructure in the I/O path, at the hypervisor and application software layer.
To put it simply, hypervisor-based computing is the last expression of sequentially executing workloads optimized for the unicore processors introduced by Intel and others in the late '70s and early '80s. Unicore processors, with transistor counts doubling every 24 months (Moore's Law) and clock speeds doubling every 18 months (House's Hypothesis), created the PC revolution and defined the architecture of the servers we use today. All applications were written to execute sequentially, with some clever time slicing added to give the appearance of concurrency and multithreading.
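Those two doubling periods compound dramatically, which is easy to underestimate. A quick sketch of what they imply, taking the 24- and 18-month figures at face value:

    # Growth under Moore's Law (transistors, 24-month doubling) and
    # House's Hypothesis (clock speed, 18-month doubling).
    def doublings(years, period_months):
        return 2 ** (years * 12 / period_months)

    for years in (5, 10, 20):
        print(f"{years:2d} years: transistors x{doublings(years, 24):,.0f}, "
              f"clock x{doublings(years, 18):,.0f}")

Over 20 years, that's roughly a thousandfold increase in transistor count and a ten-thousandfold increase in clock speed, which is why nobody bothered rewriting software for parallelism: waiting was cheaper.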
This model is now reaching end of life. We ran out of clock speed improvements in the early 2000s, and unicore chips became multicore chips with no real clock speed gains. Basically, we're back to the situation that confronted us in the '70s and '80s, when everyone was working on parallel computing architectures to gang together many low-performance CPUs for faster execution.
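The catch with ganging CPUs together is that sequentially written code only speeds up as far as its parallelizable fraction allows; Amdahl's Law makes the point. A small illustration (the parallel fractions here are made-up examples, not measurements):

    # Amdahl's Law: speedup = 1 / ((1 - p) + p / n), where p is the
    # parallelizable fraction of the work and n is the number of CPUs.
    def amdahl(p, n):
        return 1 / ((1 - p) + p / n)

    for p in (0.50, 0.90, 0.99):  # assumed parallel fractions
        print(f"p={p:.2f}: 16 CPUs -> {amdahl(p, 16):.1f}x, "
              f"256 CPUs -> {amdahl(p, 256):.1f}x")

A workload that's only half parallelizable never gets even a 2x speedup no matter how many cores you throw at it, which is exactly the wall sequential-era software hits on multicore hardware.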
A Parallel Comeback
Those efforts ground to a halt with unicore's success, but now, with innovations from oldsters who remember parallel, they're making a comeback. As soon as the Storage Performance Council audits some results, I'll have a story to tell you about parallel I/O and the dramatic improvements in performance and cost it brings to storage in virtual server environments. It's a real breakthrough, enabled by folks at DataCore who remember what we were working on in tech a couple of decades back.
Also on the cutting edge is new/old network technology. Here, I'm referring to the tech from a company called RockPort Networks, which has a cadre of former Lucent folks working on a next-generation network technology derived from work done at Stanford University and IBM several decades ago. Look it up: torus networking.
Cray developed the technology for its supercomputers; it involves peer-to-peer networks with three-dimensional node interconnects and no central switch. If RockPort gains mindshare (and finds additional customers and investors), its technology (currently implemented as software on any NIC) promises to revolutionize network bandwidth and throughput while slicing network costs to the bone. Cisco won't like it one bit, I suspect, since switch-based leaf-and-spine networks could become a thing of the past; but again, this is an example of a new/old technology being brought back to the future to solve new problems and challenges.
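To see why a torus can dispense with big central switches, consider a toy hop-count calculation. This illustrates a generic property of 3-D torus topologies, not RockPort's actual implementation:

    # Worst-case hop count in a k x k x k 3-D torus: with wraparound links,
    # no dimension costs more than floor(k/2) hops, so the diameter is
    # 3 * floor(k/2) even though every node connects only to its neighbors.
    def torus_max_hops(k):
        return 3 * (k // 2)

    for k in (4, 8, 16):
        nodes = k ** 3
        print(f"{nodes:5d} nodes (k={k}): worst case {torus_max_hops(k)} hops, "
              f"no central switch required")

Even at 4,096 nodes, the worst-case path is 24 peer-to-peer hops; the node count grows with the cube of k while the diameter grows only linearly, and there isn't a switch port in sight.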
Are 8-Tracks Next?
Finally, I'd just point out that tape is enjoying a renaissance. With the specter of 20 to 60 zettabytes of data (a zettabyte is a 1 followed by 21 zeros, or one billion terabytes) looming in the 2020 timeframe, tape is the only technology that will be able to meet the capacity requirement (I'll break it down in a future column). Again, the tape mavens -- branded oldsters by their inherently un-hip storage technology -- have been vindicated. Everything old is new again.
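I'll save the full breakdown for that column, but a crude sanity check shows the scale involved, assuming LTO-7 media at its specified 6 TB native capacity (and pretending, unrealistically, that all 20 ZB lands on tape):

    # How much tape would the low-end 20 ZB estimate take?
    ZB = 10 ** 21  # bytes
    TB = 10 ** 12  # bytes
    data = 20 * ZB
    cartridge = 6 * TB  # LTO-7 native capacity
    print(f"Cartridges needed: {data // cartridge:,}")  # ~3.3 billion

Roughly 3.3 billion cartridges is absurd on its face, but divide the same data across disk or flash and the drive counts, power draw and dollars get far more absurd; that's the tape mavens' point.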
Word up to IT pros: quit being concerned about being perceived as irrelevant or out of step. The kids (and your employers) are going to need your knowledge and skills going forward. You are IT.
About the Author
Jon Toigo is a 30-year veteran of IT, and the Managing Partner of Toigo Partners International, an IT industry watchdog and consumer advocacy firm. He is also the chairman of the Data Management Institute, which focuses on the development of data management as a professional discipline. Toigo has written 15 books on business and IT and published more than 3,000 articles in the technology trade press. He is currently working on several book projects, including The Infrastruggle (for which this blog is named), which he is developing as a blook.