The Infrastruggle

4 Possibly Correct Predictions for 2016

What does Jon Toigo see when he looks into his crystal ball?

Every year, publication editors send out notes to their writers requesting an obligatory prediction column (OPC) for the coming year.

I use my OPC to consider the pain points that consumers will confront in the coming year. It isn't rocket science to anticipate the logical outcomes of some of the current fashions and trends, except perhaps for the collateral effects that no one can predict. So here goes.

1. The Zettabyte Apocalypse Will Not Come in 2016
IDC has been promoting its "exploding digital universe" premise ever since EMC began paying for the annual end times report a few years ago. But, with its sponsor shortly to be absorbed by Dell (maybe, depending on some tax matters), it remains to be seen whether the digital universe will continue to explode -- moving steadily toward the 20ZB to 60ZB range we're seeing in many analyst reports and vendor slide shows -- or implode into a kind of dwarf star or black hole.

The implosion wouldn't mean that less data is being produced, only that we've decided collectively either to discard some of it or to dramatically reduce the number of copies we generate through processes like replication-based sharing. Some software vendors, such as Catalogic and Actifio, have been trying to establish a new niche -- copy data management -- that might play into a data volume reduction trend.

Something else we might see is the wholesale avoidance of the multi-node-with-replication storage topologies advanced under the monikers software-defined storage and hyper-converged infrastructure. Multi-node storage with replication might be the latest thing in storage high availability and the darling of the VMwares and Hyper-Vs and others out to own the entire compute/network/storage stack, but such topologies are capacity-demand accelerators that companies can ill afford. Ultimately, the cost of storage will become the gating factor on how much storage is deployed and how much data gets stored.
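To put rough numbers on the capacity-demand problem, here's a back-of-envelope sketch in Python. The three-way replica count is typical of these designs; the capacity and dollar figures are purely hypothetical:

# Back-of-envelope math on replication-based high availability.
# The capacity and dollar figures below are illustrative, not vendor specs.
usable_tb = 100                    # data you actually need to store
replica_count = 3                  # copies kept across nodes
cost_per_raw_tb = 500              # hypothetical $/TB for server-side storage

raw_tb = usable_tb * replica_count
print(f"Raw capacity required: {raw_tb} TB")               # 300 TB
print(f"Capacity overhead: {(replica_count - 1) * 100}%")  # 200%
print(f"Hardware cost: ${raw_tb * cost_per_raw_tb:,}")     # $150,000

Every terabyte you store usable costs you two more in replicas, before you buy a single byte for growth.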

2. Better Use of Multi-Core Chips
With the release of DataCore Software's Parallel I/O technology, I expect to see a flood of parallel I/O woo enter the market. Parallel I/O involves the use of spare logical CPU cores ganged together into a very fast I/O processing engine to deliver phenomenal throughput improvements at little added cost (you already own the multi-core processor). DataCore has paved the way to an extremely low-cost, high-performance storage tier by combining its parallel I/O algorithm with its storage virtualization capabilities, which include adaptive caching and interconnect load balancing. I suspect that many vendors will pursue a comparable strategy, though most lack the multiprocessor architecture experience that DataCore still has on staff.
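DataCore's engine lives deep in the storage stack and its algorithm isn't public, but the general fan-out idea -- spread I/O work across spare cores instead of funneling it through a single path -- can be illustrated with a toy Python sketch. Every name and parameter below is illustrative, not DataCore's implementation:

# A toy sketch of the parallel I/O fan-out idea only -- not DataCore's
# implementation. All names and parameters here are illustrative.
import os
from concurrent.futures import ThreadPoolExecutor

def read_chunk(path, offset, size):
    # Each call can be serviced on a different logical core.
    with open(path, "rb") as f:
        f.seek(offset)
        return f.read(size)

def parallel_read(path, chunk_size=1 << 20, workers=os.cpu_count()):
    # Split the file into 1MB chunks and read them concurrently,
    # rather than pushing every request through one I/O path.
    offsets = range(0, os.path.getsize(path), chunk_size)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        chunks = pool.map(lambda off: read_chunk(path, off, chunk_size), offsets)
        return b"".join(chunks)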

3. Tape Will Continue Its Comeback
LTO-7 tape will be widely available in early 2016, providing 15TB of compressed data capacity per cartridge (6TB native), and enabling companies with brain cells to begin archiving older data in a cost-efficient way. As David Cerf of Crossroads Systems put it plainly in a recent chat, tape will combine with parallel I/O disk and flash to deliver many petabytes of tiered storage for a fraction of the price of a single high-end "converged" storage platform: speeds and feeds for production data, capacity and resiliency for the long-term archive.
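The cartridge math is easy to sanity-check against the published LTO-7 specs (6TB native, 15TB assuming the 2.5:1 compression the spec cites); the 1PB archive size here is hypothetical:

# Cartridges needed for a hypothetical 1PB archive on LTO-7.
# Spec: 6TB native per cartridge, 15TB at the assumed 2.5:1 compression.
import math

archive_tb = 1000                     # 1 PB, hypothetical
print(math.ceil(archive_tb / 6))      # 167 cartridges if data won't compress
print(math.ceil(archive_tb / 15))     # 67 cartridges at 2.5:1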

4. Mainframes Are Cool Again
The latest surveys from CA Technologies, Syncsort and others show the mainframe enjoying a renaissance. As a transaction platform, there's nothing finer than a z13 from IBM for handling the starburst of transactions generated by every idiot with a smartphone ordering widgets from an app. And there is no more affordable way to host hundreds or thousands of virtual machines (VMs) than to use the platform that is, by definition, a cloud: one that can allocate and deallocate resources to applications and VMs with agility, resiliency and dependability. The only stumbling block in IBM's story is where the smart folks are going to come from to operate the big iron. But that might be a problem confronting x86 server environments even before it becomes an issue in the mainframe world.

There you go. Safe predictions, all. And, like a broken watch, they're likely to be correct at least a couple of times a year. Have a safe and happy 2016.

About the Author

Jon Toigo is a 30-year veteran of IT, and the Managing Partner of Toigo Partners International, an IT industry watchdog and consumer advocacy firm. He is also the chairman of the Data Management Institute, which focuses on the development of data management as a professional discipline. Toigo has written 15 books on business and IT and published more than 3,000 articles in the technology trade press. He is currently working on several book projects, including The Infrastruggle (for which this blog is named), which he is developing as a blook.
