The Infrastruggle

Storage Between Two Worlds

The current fascination with software-defined storage (SDS) isn't really new. The idea of making storage more agile -- that is, capable of easier deployment without a lot of steps to provision capacity and services to workloads -- might seem to be an extension of contemporary cloud rhetoric, but the notion of separating storage services (software) from storage kit (hardware) dates back at least to 1993, with system-managed storage (SMS) in the mainframe space.

That said, it should also be noted that SDS is a mess. From an architectural perspective, there's little agreement on what the components of an SDS stack should be. While nearly all products in the market have a very similar set of software services that were previously delivered as value-add functions on array controllers -- such as storage snapshots, de-duplication, compression and other basic data-protection functions -- these commonalities shouldn't be mistaken for general agreement on the definition of a common SDS stack. Most solutions, for example, ignore the file system or object system for data storage. Most SDS vendors claim that these things are outside the SDS domain, despite the fact that the file/object system often defines how storage scales, how it can be accessed and which workloads can use it.

The benefits of including object or file storage systems as part of the SDS stack become clear when you spend time with Caringo, an "agile storage" vendor out of Austin, Texas. Caringo's VP of Product, Tony Barbagallo, took me on a tour of his flagship product recently and showed me some functionality that should be getting some attention about the time this article publishes. And it's worth the time of any IT planner to become familiar with what Caringo is doing, because it's likely to be emulated by others over the next few months.

First, forget what you know about Caringo. The company hardly talks object storage at all these days, though it was among the initial leaders in that space. Instead, like everyone else, it has jumped on the SDS and agile bandwagon.

Object storage is still the core value of the product, but Caringo leads less with the value I associate with object: its inherent ability to describe data much more exactly with metadata, making the data easier to manage via data management policies. That should be why we all flock to object storage: so we can sort out the storage junk drawer once and for all.
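To make the metadata point concrete, here is a minimal sketch of writing a self-describing object over HTTP. It assumes a generic HTTP object API in the spirit of SWARM's SCSP; the endpoint and the metadata header names are hypothetical, chosen for illustration, and aren't Caringo's actual interface.

import requests  # third-party HTTP client

# Hypothetical endpoint and metadata header names -- illustrative only,
# not Caringo's actual SCSP interface.
SWARM_ENDPOINT = "http://swarm.example.com"

def store_with_metadata(data: bytes) -> str:
    """Write an object whose descriptive metadata travels with it.
    A policy engine can later find and act on the object without an
    external catalog -- the 'storage junk drawer' fix described above."""
    headers = {
        "Content-Type": "application/pdf",
        "x-meta-retention": "7-years",   # assumed custom metadata keys
        "x-meta-owner": "finance",
        "x-meta-project": "q3-audit",
    }
    resp = requests.post(SWARM_ENDPOINT, data=data, headers=headers)
    resp.raise_for_status()
    return resp.text  # identifier of the stored object

object_id = store_with_metadata(b"...document bytes...")
print("stored as", object_id)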

One thing that resonates with an increasing number of firms is the ease of scaling and of provisioning in storage. We want to shorten the time required to provision storage to a workload from an average of 15 days following receipt of a request to as little as 15 minutes. Caringo's object storage, called SWARM, can get you there, as I saw demonstrated while I was on-site. Plus, SWARM can expediently "heal" around failed storage components, and media filled with self-describing data objects can be "transplanted" into different trays behind different servers in a Caringo SWARM cluster. The resiliency and elasticity of the SWARM cluster solution is just what the cloudies have been seeking from their SDS infrastructures.
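The "healing" behavior is easier to picture with a toy model. What follows is not Caringo's algorithm -- it's a generic sketch, assuming simple replica-count protection, of how a cluster might find objects left under-protected by a node failure and schedule new copies from survivors.

from collections import defaultdict

REPLICA_TARGET = 2  # desired number of copies of every object

def heal(cluster: dict[str, set[str]], failed_node: str) -> list[tuple[str, str]]:
    """Schedule new copies for objects left under-protected by a failure.
    cluster maps node name -> set of object IDs held on that node."""
    survivors = {n: objs for n, objs in cluster.items() if n != failed_node}
    replicas = defaultdict(int)  # surviving copy count per object
    for objs in survivors.values():
        for obj in objs:
            replicas[obj] += 1
    repairs = []
    for obj, count in replicas.items():
        if count < REPLICA_TARGET:
            # Copy to any surviving node that lacks the object.
            dest = next(n for n, objs in survivors.items() if obj not in objs)
            survivors[dest].add(obj)
            repairs.append((obj, dest))
    return repairs

cluster = {
    "node-a": {"obj1", "obj2"},
    "node-b": {"obj2", "obj3"},
    "node-c": {"obj1", "obj3"},
}
print(heal(cluster, "node-b"))  # obj2 and obj3 each gain a new copy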

The latest innovation by Caringo is the ability to access SWARM object storage with the most popular file access method in use today, the Network File System (NFS). Before rolling your eyes, understand that Caringo's NFS integration doesn't take the form its competitors favor: an NFS gateway that transmutes file system requests into object calls. Everybody does that today, Barbagallo says, except for a few that have been recreating the file system in order to map file hierarchies to objects -- "a huge performance bottleneck," he observes. "Gateways, themselves, are typically a single point of failure in object storage."
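For contrast, here is roughly what the conventional gateway pattern Barbagallo criticizes looks like: a stateful middleman that maps every file operation onto object calls and keeps the path-to-object mapping in its own memory. This is a generic sketch of the pattern with a hypothetical endpoint, not any vendor's actual code.

import requests

OBJECT_STORE = "http://objects.example.com"  # hypothetical endpoint

class NFSGateway:
    """Toy NFS-to-object gateway: the pattern criticized above.
    All traffic funnels through this one process, and the path-to-object
    map lives only in its memory -- lose the gateway, lose the mapping."""
    def __init__(self):
        self.path_map = {}  # file path -> object ID (gateway-held state)

    def write_file(self, path: str, data: bytes) -> None:
        # Every NFS WRITE becomes an object store POST via the gateway.
        resp = requests.post(OBJECT_STORE, data=data)
        resp.raise_for_status()
        self.path_map[path] = resp.text

    def read_file(self, path: str) -> bytes:
        # Every NFS READ must consult the gateway's private map first.
        resp = requests.get(f"{OBJECT_STORE}/{self.path_map[path]}")
        resp.raise_for_status()
        return resp.content

A stateless translator, by contrast, derives what it needs from each request itself, so any node can service any call and no single box holds irreplaceable state -- which is the design point Barbagallo describes next.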

"Caringo SWARM's NFS access consists of a completely stateless protocol delivering elegant access to the back-end of the SWARM cluster," Barbagallo says. This innovation comes at the request, he says, of users "who have legacy applications that speak only NFS. These firms need the flexibility and scalability -- the agility -- of object storage, but they need to preserve NFS as a way to ingest data into and to retrieve data from SWARM object storage to preserve legacy apps. NFS isn't going away anytime soon, so we needed to accommodate it in a manner that was more efficient than a bolt-on NFS gateway."

The Caringo SWARM NFS protocol is in beta testing now, and will be available to customers in September, says Barbagallo. SWARM NFS, and protocols like it, may well find themselves appearing in the stacks of an increasing number of SDS offerings, especially as firms decide they want the agility of clustered object storage but need to preserve their investment in legacy file system data.

About the Author

Jon Toigo is a 30-year veteran of IT, and the Managing Partner of Toigo Partners International, an IT industry watchdog and consumer advocacy organization. He is also the chairman of the Data Management Institute, which focuses on the development of data management as a professional discipline. Toigo has written 15 books on business and IT and published more than 3,000 articles in the technology trade press. He is currently working on several book projects, including The Infrastruggle (for which this blog is named), which he is developing as a blook.
