The Question of Data Protection Efficiency
Developing data protection software compatible with any hardware and any workload.
For quite some time now, we've been hearing from advocates of software-defined storage (SDS) -- and especially hyper-converged infrastructure appliances -- that their technology is a boon for data protection. Such claims should be followed in most cases by an asterisk -- and in some cases by a quick Miranda warning.
Let's start by acknowledging that the fundamental premise of software-defined is correct. Many functions that have been deployed as expensive "value add" services on array controllers -- jacking up the price of an otherwise commodity stand of disk, flash or both, and often contributing to the proprietary and siloed nature of legacy storage arrays -- could be better delivered in a server-based software stack. From a server-side instantiation, the functionality could be shared across many stands of storage and, theoretically, in a manner agnostic to the vendor nameplate on the array.
That was, and is, among the problems with legacy storage. When you implemented a function like de-duplication on a specific array, the reach of that functionality was limited by the vendor, usually to the total complement of storage trays that could be connected to a single controller. When you needed more capacity, you had to buy a whole new rig with a separate instantiation of the value-add software, all of which had to be managed separately from the first rig and its on-array software.
Similarly, in the case of data protection, mirroring software would only work with an identical array from the same vendor, with the same firmware, hardware and geometry as the primary array. This is known as an "identicality" requirement, and it contributed significantly to the cost of a protected storage infrastructure and to the hassle of creating and maintaining workable data protection strategies.
Truth is, we were just moving beyond such silliness when truly shared storage, in the form of virtualized Fibre Channel fabric SANs, began to reach the market. Storage virtualization advocates stressed that replicating data between logical volumes from virtualized storage infrastructure meant never again having to buy multiples of the same brand of storage to satisfy identicality requirements. Logical volumes weren't subject to petty infighting between vendors; data could migrate and replicate between them at will.
Unfortunately, identicality requirements quickly resurfaced in SDS "virtual SANs." Users couldn't mirror data from a VMware SDS-controlled storage infrastructure to a Microsoft Hyper-V SDS-controlled storage infrastructure. And in some cases, data couldn't be replicated or mirrored between the different nodes of virtual SAN storage from the same hypervisor vendor if the kit of each node wasn't absolutely identical. While some improvements are appearing to reduce intra-nodal identicality requirements, cross-SDS stack replication is still problematic. This throws some significant wrenches into the gears of enterprise-wide data protection strategy: You now need to run different data protection processes depending on the hypervisors and SDS stacks you're using, because each hypervisor has siloed its turf.
Truth be told, smart firms had already begun to abandon siloed on-array data protection technologies before the hypervisor and SDS phenoms appeared. Companies like Arcserve, Acronis and several others were developing next-generation data protection software compatible with any hardware and any workload, capable of "federated" deployment but manageable centrally by fewer administrators.
Recently, Acronis VP of Product Marketing Frank Jablonski demonstrated the latest version of Acronis Backup 12, the company's flagship data protection software designed for integration with the company's Acronis Backup Cloud Solution, Cloud Storage and Monitoring Service. According to Jablonski, the goal is to deliver data protection "with a hybrid cloud architecture" so that do-it-yourselfers have everything they need to protect data in physical, virtual, cloud and mobile environments, while service providers can roll out a backup, storage and monitoring suite for customers who prefer to outsource the data protection task.
The Acronis solution provides everything you should be receiving from "open" SDS stacks, but aren't: backup of any device from servers to Samsung phablets, restoration to dissimilar hardware or hypervisors, the capability to run your backup image as a virtual machine, and even a nice-to-hear-though-difficult-to-prove 15-second recovery time.
By the end of the demo, it looked to me like Acronis was delivering an essential data protection capability that could quickly gain market share against a lot of the SDS stuff. That raises the question of whether it makes sense for SDS to provide data protection services at all. Acronis Backup 12 is worth a look.
Jon Toigo is a 30-year veteran of IT, and the Managing Partner of Toigo Partners International, an IT industry watchdog and consumer advocacy firm. He is also the chairman of the Data Management Institute, which focuses on the development of data management as a professional discipline. Toigo has written 15 books on business and IT and published more than 3,000 articles in the technology trade press. He is currently working on several book projects, including The Infrastruggle (for which this blog is named), which he is developing as a blook.