The Infrastruggle

Decoupling Price and Performance in Storage

DataCore's "Adaptive Parallel IO" represents a breakthrough.

OK, I admit to being a bit of a science geek, especially when it comes to technology. I like my cup of truth with a side of empirical data, especially when supplemented by repeated experiments that yield identical results.

Fact-Based Opinion
In short, I like it when the foundation for a knowledgeable opinion is established using the scientific method; that is, when an opinion is based on a set of demonstrably causal theories with lots of data to validate them. This kind of knowledge is so much more persuasive than the combined ruminations of analysts and pundits whose "experience and judgment," proffered in lieu of scientific data, may or may not qualify them to actually know anything.

Unfortunately, the "woo peddlers" continue to proliferate in contemporary technology discussions, making my life much more challenging. You know what I'm talking about: server virtualization was supposed to fix the problems created by distributed computing, just as distributed computing was supposed to improve upon the perceived limitations of centralized mainframe computing models.

These were never scientifically validated assertions (certainly not at the time they were advanced), yet they quickly took on the appearance of wisdom among so many experienced IT practitioners and vendors that they came to be treated as though they carried the authority of a Papal Bull. Argumentum ad populum typically ensues, and all of the lemmings head for the nearest cliff.

Which brings me to some actual empirical news. Chris Mellor over at The Register pounced on the story in early August, noting that EMC had suddenly submitted its VNX8000 system to an SPC-1 test (which measures a standardized transactional workload typical of Tier 1 OLTP database and messaging applications, with results reported as SPC-1 IOs per second), and its VMAX 400K rig to an SPC-2 (which tests streaming storage performance in MB/sec).

Sticking with the SPC-1 test, the EMC gear yielded 435K SPC-1 IOPS with an overall response time at peak load of 0.99 milliseconds. While respectable, the numbers placed EMC pretty far down the list of tested systems. In fact, it was outperformed by an HP 3PAR P10000 result dated October 2011.

SPC-1 also provides a metric to describe how much coin you must spend for the IOPS you are getting. Based on a list price of about $177,000, the cost per IO of the EMC VNX8000 was about $0.41 per SPC-1 IOPS. Some competitors charged more, some less, for comparable performance. For example, an entry from Kaminario blew the doors off of the EMC SPC-1 IOPS number, but being an all-flash unit, it was also more expensive in terms of cost per IOPS.
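
For those who like to check the math, the metric is nothing fancy: take the tested configuration's total price and divide it by its peak SPC-1 IOPS. A quick sketch in Python, using the approximate VNX8000 figures cited above:

```python
# The SPC-1 price/performance metric, roughly: total tested system price
# divided by peak SPC-1 IOPS. Figures below are the approximations cited above.

def cost_per_iops(total_price_usd: float, spc1_iops: float) -> float:
    """Dollars spent per SPC-1 IOPS delivered."""
    return total_price_usd / spc1_iops

print(f"EMC VNX8000: ${cost_per_iops(177_000, 435_000):.2f} per SPC-1 IOPS")
# -> EMC VNX8000: $0.41 per SPC-1 IOPS
```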

Other recent tests show the Dell Compellent SC4020 delivering performance at $0.37 per IO, the X-IO ISE-820 at $0.32 per IO, and the Infortrend EonStor DS-3024B at $0.24 per IO. Together with the VNX, these were the top price/performance leaders on the SPC-1 test card late last year.

Shadowy Data
What does the SPC-1 test really mean? EMC used to diss SPC tests as so much bogus data, created using a workload that didn't really simulate anything in the real world. Its internal spokespersons and paid analyst echo chamber insisted that SPC-1 was just a NASCAR-style demonstration of pointless performance (that is, seeing how fast a car can go around a track turning left).

The Storage Performance Council made earnest efforts to build a consistent workload and to require that it be run only on rigs configured as they would be in the real world. But without EMC's willing participation, a shadow was always cast over the voluntary industry metric.

DataCore Software may have had something to do with that. Back in 2003, DataCore used its SANsymphony storage virtualization software on commodity kit totaling about $306,000 and demonstrated that it could achieve more than 50K SPC-1 IOPS with a response time of 1.68 milliseconds, at a cost of $6.11 per IO. In 2003, that result was remarkable. It was achieved using a low-end Dell server, some QLogic switches, and disk drives in 10-drive enclosures, and it produced performance that put many proprietary brand-name storage platforms to shame. At the time, it was half the cost per IO of the lowest-priced midrange storage system and one-third to one-fifth the cost of high-end arrays.

DataCore repeated SPC-1 testing when it introduced its SANmelody product a couple of years later, but after that it pretty much steered clear of benchmarking, for the simple reason that it had made its core point: you didn't need high-dollar, branded storage rigs to get the price/performance mix you were seeking from your storage. After that, it didn't make a lot of sense to rub the other storage vendors' noses in the data.

The Bloodletting
However, the Ft. Lauderdale vendor has decided to open new wounds by demonstrating its latest software-defined storage platform innovation, Adaptive Parallel I/O, using the SPC-1 process. This summer, the company submitted the results of a test conducted on a low-cost "commodity" Lenovo server combined with an assemblage of bare-bones SAS disk drives and generic SSDs, all totaling less than $50,000 in hardware and software. In other words, a hyperconverged infrastructure appliance. The resulting measure of SPC-1 price/performance was amazing.

You can read the results for yourself on the Storage Performance Council Web site. The DataCore Software SPC-1 benchmark of its new Adaptive Parallel I/O technology delivered 459K SPC-1 IOPS on a platform costing about $38,000, including three years of support. That works out to a cost per SPC-1 IOPS of 8 cents, roughly three times better than the next-best result. It also demonstrated the fastest response time ever measured on SPC-1 at 100 percent load: 0.320 milliseconds.
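
Run the same arithmetic on these figures and the claims check out, give or take rounding; a quick sanity check, using the approximate numbers cited above:

```python
# Same back-of-the-envelope metric applied to the DataCore submission,
# using the approximate figures cited above.
datacore_cost_per_iops = 38_000 / 459_000   # ~$0.083 per SPC-1 IOPS
prior_best = 0.24                           # Infortrend figure cited earlier
print(f"DataCore: ${datacore_cost_per_iops:.3f} per SPC-1 IOPS, "
      f"~{prior_best / datacore_cost_per_iops:.1f}x better than $0.24")
# -> DataCore: $0.083 per SPC-1 IOPS, ~2.9x better than $0.24
```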

4 Lessons
As I see it, there are at least four takeaways from these developments. First and foremost, DataCore shows us that software-defined storage (SDS) implemented as hyperconverged infrastructure can be just as performant as, if not more performant than, what IDC calls "converged" and "discrete" systems (that is, hardware-centric approaches with exotic controllers and complex designs).

DataCore's SDS stack, which features not only adaptive parallel I/O technology but also virtualized storage infrastructure, can deliver both the bucks (savings) and the Buck Rogers (speed) that companies are seeking to drive down latency and improve application performance, even in the most demanding virtual server settings.

A second key takeaway is that DataCore has decoupled price from performance. It used to be that the two were joined at the hip: if you wanted the fastest play, you needed to have the deepest pockets. With adaptive parallel I/O and hyperconverged infrastructure, that is no longer the case.

A third observation is that DataCore's stack may be just the beginning of a new tick-tock in storage performance. DataCore is not permitted to advance its own views of coming improvements, but Generation One of its technology delivered nearly a half million SPC-1 IOPS off a commodity server rig with direct-attached internal drives and SSDs, something you can buy off the shelf.

The secret sauce was a new algorithm for using logical CPU cores sitting idle in multicore server chips. DataCore grabs the idle cores and creates a parallel I/O processing engine that reduces latency at the source and delivers accelerated application performance. Doubtless, speeds and feeds will increase as the number of cores increases and as other enhancements are made to the storage I/O path beyond what DataCore already does. So it is not beyond the pale to expect storage performance to begin accelerating along a curve -- or tick-tock, to use the terminology of the CPU makers -- that approximates some of Intel's old laws.
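
To be clear, DataCore has not published its algorithm, and the sketch below is not its implementation; it simply illustrates the general pattern described above, with a hypothetical service_io placeholder standing in for real storage work: spread requests across one worker per logical core rather than serializing them through a single I/O path.

```python
# Hypothetical sketch of the parallel I/O dispatch pattern described above --
# not DataCore's actual engine. One worker per logical core services storage
# requests instead of funneling them all through a single serialized I/O path.
import os
from concurrent.futures import ThreadPoolExecutor

def service_io(request: bytes) -> bytes:
    """Placeholder for servicing one storage request against disk or SSD."""
    return request  # a real engine would perform the read/write here

requests = [f"io-{i}".encode() for i in range(1_000)]

# max_workers=os.cpu_count() dedicates one worker per logical core -- the
# cores that would otherwise sit idle in a multicore server chip.
with ThreadPoolExecutor(max_workers=os.cpu_count()) as pool:
    results = list(pool.map(service_io, requests))

print(f"Serviced {len(results)} requests on {os.cpu_count()} logical cores")
```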

Ill Tidings for Some
Finally, these numbers bode ill for DataCore's storage competitors, especially those with very high-priced rigs. As hyperconverged infrastructure appliances featuring DataCore's parallel I/O technology begin to demonstrate better performance at a significantly lower price than an EMC, HDS, HP, or XYZ storage hardware platform, a lot of consumers are going to start rethinking their storage strategies. For DataCore Software, the SPC-1 benchmark should provide a lot of "drop the mic" moments going forward.

New Competition
For the record, I am already starting to hear rumblings from other vendors that they have some tricks in the offing that will drive their performance numbers upward and the price per IOPS downward in the near future. I am delighted with this renewed competition, mainly because the consumer will be the beneficiary.

About the Author

Jon Toigo is a 30-year veteran of IT and the Managing Partner of Toigo Partners International, an IT industry watchdog and consumer advocacy firm. He is also the chairman of the Data Management Institute, which focuses on the development of data management as a professional discipline. Toigo has written 15 books on business and IT and published more than 3,000 articles in the technology trade press. He is currently working on several book projects, including The Infrastruggle (for which this blog is named), which he is developing as a blook.
