Data Storage 101: How To Determine Business-Critical App Performance Requirements

Measuring IOPS, latency and throughput is important for determining the characteristics of physical storage that's added to a modern data fabric.

Storage is not one-size-fits-all, and determining what the right storage is for your needs is difficult. There are many storage vendors out there, each selling a combination of complex engineering, bitter experience and marketing pixie dust wrapped up together into a saleable product. So how do you determine which bit of storage is the best for the job, and what measurements can help you accomplish this task?

As with most things, when you attempt to determine the relative value of storage there are both quantitative and qualitative measures to consider. Quantitative measures of storage are what can be readily determined through benchmarking. Input/output operations per second (IOPS), latency and throughput are the big benchmark measurements.

Qualitative measurements are more subjective. These include things like ease of use, or the importance and utility of various storage features to the organization considering the storage.

Quantitative Measurements
IOPS, latency and throughput are highly interdependent measurements. As a general rule, the higher the IOPS, the lower the throughput; and the more stress put on a system (whether in IOPS or throughput), the higher the latency.

With the right benchmarking, it's possible to make some straightforward graphs of basic storage performance. On one side of the scale there's extreme IOPS with minimal throughput, on the other side there's extreme throughput with minimal IOPS. This effect is more dramatic with magnetic storage media, but the basic principle applies to solid-state storage, as well.
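
The arithmetic behind that trade-off is simple: throughput is roughly IOPS multiplied by block size. The sketch below is a back-of-the-envelope illustration only, with invented numbers; it ignores queue depth, caching and controller overhead.

```python
# Rough relationship between IOPS, block size and throughput.
# This ignores queue depth, caching and controller overhead; it only
# shows why an IOPS-heavy profile and a throughput-heavy profile differ.

def throughput_mib_s(iops: float, block_size_kib: float) -> float:
    """Approximate throughput in MiB/s for a given IOPS rate and block size."""
    return iops * block_size_kib / 1024

# Small-block, IOPS-heavy workload: lots of operations, modest throughput.
print(throughput_mib_s(100_000, 4))     # ~391 MiB/s at 4 KiB blocks

# Large-block, throughput-heavy workload: few operations, lots of throughput.
print(throughput_mib_s(2_000, 1024))    # 2,000 MiB/s at 1 MiB blocks
```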

Storage obeys certain basic commands: read, write, modify and delete. These commands represent inputs and outputs (I/Os), and IOPS is a measure of how many of these commands can be performed by a storage solution in a given second.

The commands issued affect the numbers obtained. Flooding a storage solution with 100 percent read requests will result in a different IOPS reading than 100 percent write requests. Magnetic storage media will respond differently to modify requests than solid-state media. The block size of the I/O requests (measured in kibibytes), the number of simultaneous requests, as well as the randomness of the requests, also impact the results obtained.

At first glance this may make IOPS seem a highly subjective measurement, but it's not. While the results from a configuration of 70 percent write/30 percent read/64K block size/100 percent random will differ greatly from those of a 50 percent write/50 percent read/16K block size/50 percent random configuration, a given storage solution should deliver consistent results every time the same configuration is used.
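
At its core, a test profile is just a handful of parameters driving an I/O loop. The sketch below is a deliberately naive, single-threaded illustration of the two example profiles above; the file name, sizes and durations are placeholders, it needs a POSIX system for os.pread/os.pwrite, and because it uses buffered I/O it mostly exercises the OS cache rather than the device. A real benchmarking tool such as fio uses direct I/O, deep queues and multiple workers.

```python
import os
import random
import time

def run_profile(path, write_pct, block_size_kib, random_pct,
                file_size_mib=256, duration_s=10):
    """Naive single-threaded I/O loop: returns observed IOPS for a profile."""
    bs = block_size_kib * 1024
    size = file_size_mib * 1024 * 1024
    buf = os.urandom(bs)

    # Pre-create a (sparse) test file so reads have something to hit.
    with open(path, "wb") as f:
        f.truncate(size)

    fd = os.open(path, os.O_RDWR)
    ops, offset, deadline = 0, 0, time.monotonic() + duration_s
    try:
        while time.monotonic() < deadline:
            if random.random() * 100 < random_pct:
                offset = random.randrange(size // bs) * bs   # random, aligned
            else:
                offset = (offset + bs) % size                # sequential
            if random.random() * 100 < write_pct:
                os.pwrite(fd, buf, offset)                   # write I/O
            else:
                os.pread(fd, bs, offset)                     # read I/O
            ops += 1
    finally:
        os.close(fd)
    return ops / duration_s

# The two example profiles from the text:
print(run_profile("test.dat", write_pct=70, block_size_kib=64, random_pct=100))
print(run_profile("test.dat", write_pct=50, block_size_kib=16, random_pct=50))
```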

This makes it important to pay attention to the testing profiles used by vendors or reviewers, and to compare storage solutions using the same testing profiles. A headline benchmark result of 1 million IOPS, for example, may have been achieved using a benchmark profile designed to give a maximum IOPS instead of one designed to mimic "real world" usage scenarios. In addition, each organization's "real world" usage will differ from the next.

With all of that said, if you take a given storage profile and throw it at multiple storage solutions, the result is a good understanding of the performance characteristics of the underlying storage. Identical storage profiles for benchmarking form the basis for a rational comparison between storage solutions; however, the raw numbers can only tell you so much about the overall value of that storage solution to an organization.

Qualitative Measurements
In a perfect world, storage solutions are benchmarked twice: once with all of their advanced storage features turned off, and once with them turned on. Features such as data efficiency, tiering, caching and so forth all affect how storage will perform under various circumstances, and they're all highly variable based on the data being used for testing.

Consider two storage arrays: one has hardware-assisted deduplication and compression, the other performs these data efficiency tasks entirely in software. Let us assume that with data efficiency turned off, these arrays perform identically.

Running a benchmark against these systems with data efficiency turned on, you might discover that both solutions perform identically up to a specific threshold. Once data volumes cross that threshold, one of the arrays reaches the maximum amount of data it can perform data efficiency operations on per second, and a demonstrable difference between the two arrays appears.
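
A toy model of that threshold effect might look like the sketch below, where each array's inline data efficiency engine can only process a fixed volume of data per second; all of the capacities are invented for illustration.

```python
# Toy model: effective write throughput when inline data efficiency
# can only process a fixed volume of data per second.
# All capacities below are invented for illustration.

def effective_write_mib_s(offered_mib_s, efficiency_capacity_mib_s):
    """Writes are throttled once the data efficiency engine is the bottleneck."""
    return min(offered_mib_s, efficiency_capacity_mib_s)

hardware_assisted = 2_000   # MiB/s the offload hardware can deduplicate/compress
software_only = 800         # MiB/s the CPU-bound engine can deduplicate/compress

# Below 800 MiB/s the two arrays look identical; above it, they diverge.
for offered in (200, 600, 1_000, 1_500):
    print(offered,
          effective_write_mib_s(offered, hardware_assisted),
          effective_write_mib_s(offered, software_only))
```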

Similarly, the arrays could use completely different approaches to data efficiency. Different approaches to data efficiency have different performance costs, and produce different results in terms of data reduction. The effectiveness of data efficiency at the scale of a single storage array is also often quite different when compared to solutions that span multiple devices and perform data efficiency tasks across the entire solution, instead of just at the level of an individual array.

Different types of data have different levels of reducibility. RAW images, for example, compress more readily than JPEGs. Virtual desktop infrastructure (VDI) VMs generally provide a much higher level of deduplication than a collection of VMs all running different OSes.
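
That reducibility gap is easy to see with nothing more than a general-purpose compressor. The sketch below compares a highly repetitive buffer with a random one (a rough stand-in for already-compressed formats such as JPEG); the buffer contents are obviously contrived.

```python
import os
import zlib

# Highly repetitive data: a rough stand-in for readily compressible content.
repetitive = b"ABCD" * 256_000            # ~1 MiB
# Random data: a rough stand-in for already-compressed formats like JPEG.
already_compressed = os.urandom(1_024_000)

for label, data in (("repetitive", repetitive),
                    ("already compressed", already_compressed)):
    ratio = len(data) / len(zlib.compress(data))
    print(f"{label}: {ratio:.1f}x reduction")
```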

In addition to this, storage features such as data efficiency typically compete with one another for array resources. Features like data tiering take processing power; if a storage solution is busy working on data efficiency, it might have to delay tiering tasks or vice versa. Which advanced storage features are enabled -- and how they're used -- can make a noticeable difference in the performance delivered, even when comparing two arrays that would perform identically with those features off.

The Advanced in Advanced Storage Features
At first glance, advanced storage features would seem to be quantitatively measurable. Features like data efficiency are affected by multiple variables, but if you can control enough variables they should hypothetically perform the same every time. There is some truth to this, but it's also more complicated than that.

Fifteen years ago, how a storage solution performed data efficiency probably would've been quantitative. The only data efficiency most storage used was compression, and an individual storage array would apply the same compression algorithm to all data, all of the time. That was then, this is now.

Today, storage tries to be smart. Some solutions will test a small piece of data to see how compressible it is before deciding whether or not to compress the whole data stream, or which algorithms to apply. Deduplication comes in several flavors, and may be applied only to certain tiers of data, only when the array is idle, or only in response to other parameters.
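
A highly simplified sketch of that "test a small piece first" behavior is shown below; the sample size and ratio threshold are invented, and real arrays use their own proprietary heuristics and algorithms.

```python
import zlib

SAMPLE_SIZE = 64 * 1024   # inspect the first 64 KiB (illustrative value)
MIN_RATIO = 1.2           # only compress if the sample shrinks by roughly 20 percent

def store(data: bytes) -> bytes:
    """Compress the whole stream only if a small sample looks compressible."""
    sample = data[:SAMPLE_SIZE]
    ratio = len(sample) / len(zlib.compress(sample))
    return zlib.compress(data) if ratio >= MIN_RATIO else data
```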

Tiering of data between different storage media -- or even between different arrays, sites or clouds -- can occur based on any number of criteria, and the criteria themselves can change as the array learns storage patterns and adapts which features it implements under which circumstances. The smarter storage becomes, the harder it is to predict.
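
Real tiering logic is proprietary and, as noted, adaptive, but a static, stripped-down version of the idea might look like the sketch below; the tier names and age thresholds are made up for illustration.

```python
from datetime import datetime, timedelta
from typing import Optional

# Static, made-up placement rules; a real fabric adapts these over time.
TIER_RULES = [
    ("nvme-flash", timedelta(days=1)),      # touched within the last day
    ("sata-flash", timedelta(days=30)),     # touched within the last month
    ("cloud-archive", timedelta.max),       # everything colder than that
]

def choose_tier(last_access: datetime, now: Optional[datetime] = None) -> str:
    """Pick a tier for a piece of data based purely on how recently it was accessed."""
    age = (now or datetime.utcnow()) - last_access
    for tier, max_age in TIER_RULES:
        if age <= max_age:
            return tier
    return TIER_RULES[-1][0]

print(choose_tier(datetime.utcnow() - timedelta(hours=2)))   # nvme-flash
print(choose_tier(datetime.utcnow() - timedelta(days=90)))   # cloud-archive
```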

Profiles, Policies and SLAs
Storage solutions are no longer discrete, self-contained items. An individual SAN or NAS is often just one part of a larger whole. When multiple arrays are joined together with server-local storage, cloud storage and who knows what else, an organization's storage becomes a data fabric.

Data fabrics store data on multiple devices. These devices can use multiple physical storage media, be located across multiple sites, and even across multiple infrastructures, where the different infrastructures are owned and operated by different organizations. A single data fabric today could easily join multiple on-premises sites' worth of storage, services provider storage and public cloud storage.

Data fabrics typically have the ability to add or remove storage in a non-disruptive fashion, meaning that the overall physical composition of the data fabric is itself a variable. This changes how you must measure data fabrics, both qualitatively and quantitatively.

Hyper-converged infrastructure (HCI) and scale-out storage are examples of simplistic data fabrics. HCI and scale-out storage both take storage located inside individual servers, lash all that storage together and present it using a single interface. In the case of HCI, workloads are run on the same nodes that supply storage to the fabric, while scale-out storage dedicates nodes to storage alone. Data fabrics can get much more complicated, however, and consist of any or all storage that an organization uses.

Because data fabrics are a logical construct instead of a fixed physical asset, you would rarely attempt to measure the performance of the whole fabric. Instead, profiles, policies and service-level agreements (SLAs) are set in the data fabric, and tests are performed to see if the fabric can deliver.

What is the maximum ingestion rate of data into the fabric? Can the fabric deliver a LUN with x IOPS, y throughput and z latency? How many of these can it deliver simultaneously? Does the fabric warn when it has been asked to deliver performance beyond its capabilities, and how does it deliver that warning?
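
Questions like these lend themselves to policy-driven pass/fail checks rather than a single headline number. A minimal sketch of such a check is below; the SLA values, field names and measurements are placeholders, and a real fabric would expose this kind of information through its own management tooling.

```python
from dataclasses import dataclass

@dataclass
class StorageSLA:
    min_iops: int            # required I/O operations per second
    min_throughput_mib: int  # required MiB/s
    max_latency_ms: float    # acceptable latency in milliseconds

@dataclass
class Measurement:
    iops: int
    throughput_mib: int
    latency_ms: float

def meets_sla(measured: Measurement, sla: StorageSLA) -> bool:
    """True if a LUN's measured performance satisfies the policy assigned to it."""
    return (measured.iops >= sla.min_iops
            and measured.throughput_mib >= sla.min_throughput_mib
            and measured.latency_ms <= sla.max_latency_ms)

# Placeholder numbers: a LUN promised 20,000 IOPS, 500 MiB/s and 2 ms latency.
print(meets_sla(Measurement(24_000, 550, 1.6), StorageSLA(20_000, 500, 2.0)))
```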

The response of fabrics to change is a critical measure. If you manage to ask more of a fabric than it's currently capable of delivering, how smoothly will it absorb additional storage into the fabric, and how quickly will that help meet the storage demands, especially if the fabric is overburdened? Does the fabric perform parallel I/O, or is everything funneled through a single orchestration node?

Fabric Softener
The existence of data fabrics doesn't make classic quantitative measurements of storage irrelevant. Data fabrics are an orchestration layer that smooshes together multiple storage devices, layers a universal set of advanced storage features on top, and then presents a single storage interface for storage consumers. This makes storage easier to use, but it does not magically solve performance problems.

In order for data fabrics to do their thing, they must be supplied with adequate amounts of task-appropriate storage. If the fabric is supplying storage with unacceptably high latencies, you might consider adding NVMe solid-state drives to the mix. If capacity is a problem, but performance is fine, then perhaps a big box of magnetic hard drives is the ticket.

Measuring IOPS, latency and throughput is important for determining the characteristics of physical storage that's added to a modern data fabric. That said, extended proof-of-concept testing with real-world workloads and copies of real data is equally important for teasing out the qualitative performance characteristics of a storage solution, especially when the data fabrics used start doing spooky things like tiering cold data to the public cloud, or dynamically changing data efficiency approaches based on load.

About the Author

Trevor Pott is a full-time nerd from Edmonton, Alberta, Canada. He splits his time between systems administration, technology writing, and consulting. As a consultant he helps Silicon Valley startups better understand systems administrators and how to sell to them.
