Virtual Observer
What Virtualization Vendors Have in Store for Storage Tiering
What VMware, Dell, NetApp and DataCore are doing to add support for tiered storage in their offerings.
Last time, we talked about the need for VDI densities to dramatically improve in order for VDI to deliver sustainable TCO advantages over traditional PC desktop deployments. Now let's take a look at how a few storage innovators are capitalizing on View 4.5 to do just that.
Last year, VMware introduced support for tiered storage in View Composer as part of its View 4.5 release. With storage tiering, the virtual desktop can be further decomposed by placing replicas and linked clones on different datastores with different performance characteristics. So, replicas can be placed on SSDs, which have lower storage capacity but extremely high read performance, typically supporting tens of thousands of IOPS. Linked clones can then be placed on traditional spinning disks, which provide lower performance but are less expensive and deliver higher capacity, making them well-suited for storing the many linked clones in a large pool.
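To make that replica/clone split concrete, here's a minimal sketch in Python of how such a tiered pool layout might be modeled. The datastore names, capacities and IOPS figures are all hypothetical, and this is not VMware's API; it simply captures the placement rule View Composer enables.

```python
# Illustrative sketch (not VMware code): a View 4.5-style tiered pool puts
# the one read-heavy replica on SSD and the many linked clones on SAS.
# All names and numbers below are hypothetical.
from dataclasses import dataclass

@dataclass
class Datastore:
    name: str
    tier: str          # "SSD" or "SAS"
    capacity_gb: int
    read_iops: int     # rough sustained read IOPS the tier can serve

def place_pool(replica_gb: float, clone_gb: float, clones: int,
               ssd: Datastore, sas: Datastore) -> dict:
    """Place the replica on SSD and the linked clones on SAS,
    checking that each tier has the capacity for its role."""
    assert replica_gb <= ssd.capacity_gb, "replica won't fit on SSD tier"
    assert clones * clone_gb <= sas.capacity_gb, "clones won't fit on SAS tier"
    return {
        "replica_datastore": ssd.name,   # one shared, read-mostly image
        "clone_datastore": sas.name,     # many small, cheap delta disks
    }

ssd = Datastore("ssd-ds01", "SSD", capacity_gb=400, read_iops=30_000)
sas = Datastore("sas-ds01", "SAS", capacity_gb=6_000, read_iops=2_800)

# A 20GB replica plus 500 clones at ~3GB of delta each fits this layout.
print(place_pool(20, 3, 500, ssd, sas))
```

The economics follow directly from the asymmetry: the SSD tier only has to hold one small image per pool, while the SAS tier absorbs the bulk capacity.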
All of the major storage vendors, and a few smaller ones, are rushing to capitalize on the efficiencies to be gained with storage tiering. In Dell's EqualLogic PS6000XVS (and PS6010XVS) SAN arrays, eight high-performance 100GB SSDs and eight high-capacity 450GB, 15K RPM SAS drives are combined with intelligent, automatic workload tiering in a storage platform designed to support just the types of variable, multi-tiered workloads common in high-density VDI environments.
The PS6000XVS offers the low latency of SSD with the high capacity of SAS in a single array "member." These two storage tiers provide the foundation for multi-tiered workloads within an array. Dell has then added new Automatic Tiering software that places the linked-clone parent replica image on the low-latency, high-performance SSD tier to ensure maximum throughput, while temporary data, as well as users' unique application data, is placed on lower-cost, capacity-optimized SAS drives.
To do this, the PS6000XVS categorizes workloads as "hot" (high I/O), "warm" (medium I/O) or "cool" (low I/O), and then automatically places them on the appropriate SSD or SAS disk tier, with no operator intervention required.
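A toy sketch of that hot/warm/cool logic looks like the following; the thresholds are invented for illustration (Dell doesn't publish its exact heuristics), but the shape of the decision is the same.

```python
# Toy sketch of hot/warm/cool tiering in the spirit of the PS6000XVS's
# automatic placement. Thresholds and function names are hypothetical.

def classify(iops: float) -> str:
    """Bucket a volume's recent I/O rate into hot, warm or cool."""
    if iops >= 1000:
        return "hot"
    if iops >= 100:
        return "warm"
    return "cool"

def place(volume: str, iops: float) -> str:
    """Hot data goes to SSD; warm and cool data stay on SAS."""
    tier = "SSD" if classify(iops) == "hot" else "SAS"
    print(f"{volume}: {iops:.0f} IOPS -> {classify(iops)} -> {tier}")
    return tier

place("replica-image", 12_000)   # a boot storm hammering the parent replica
place("linked-clone-042", 350)   # one user's delta disk
place("user-temp-042", 40)       # temp data, rarely touched
```

The replica image, hammered by every clone at boot, lands on SSD automatically; the long tail of per-user data stays on cheap capacity.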
NetApp approaches the problem from a different angle, using its Flash Cache technology. NetApp Flash Cache and Transparent Storage Cache Sharing (TSCS) work together to mitigate boot and login storms. Because Flash Cache is deduplication-aware, it's not only an intelligent caching solution for VDI; it works just as well with any data. And because it deduplicates at the 4K-block level, Flash Cache can greatly reduce the number of disk spindles needed to satisfy demanding VDI performance requirements, without requiring manual data migration or management.
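Here's a minimal sketch of what deduplication-aware caching buys you. The class and fingerprinting scheme are invented for illustration, not NetApp's implementation; in a real array the block fingerprint would come from dedup metadata rather than from re-reading the data. The key idea: cache by content, not by address, so the identical OS blocks in hundreds of clones occupy one cache slot.

```python
# Illustrative sketch of a deduplication-aware 4K block cache,
# keyed by content fingerprint rather than (vm, address).
import hashlib

BLOCK = 4096  # 4K blocks

def fingerprint(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class DedupeAwareCache:
    def __init__(self, disk):
        self.disk = disk           # {(vm, lba): bytes}, the backing store
        self.cache = {}            # fingerprint -> block data (the "flash")
        self.hits = self.misses = 0

    def read(self, vm: str, lba: int) -> bytes:
        data = self.disk[(vm, lba)]   # simplification: a real system gets the
        fp = fingerprint(data)        # fingerprint from dedup metadata first
        if fp in self.cache:
            self.hits += 1            # served from flash, no spindle I/O
            return self.cache[fp]
        self.misses += 1              # one disk read, then cached for everyone
        self.cache[fp] = data
        return data

os_block = b"\x00" * BLOCK                              # same OS block everywhere
disk = {(f"vm{i}", 0): os_block for i in range(500)}
cache = DedupeAwareCache(disk)
for i in range(500):                                    # 500 desktops booting
    cache.read(f"vm{i}", 0)
print(cache.hits, cache.misses, len(cache.cache))       # 499 hits, 1 miss, 1 slot
```

Five hundred boot-time reads cost one spindle I/O and one cache slot, which is exactly how a boot storm collapses into a cache-friendly workload.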
Last fall, NetApp put this to the test at VMworld, with a 50,000-seat reference VDI architecture based on NetApp arrays plus components from Cisco, Fujitsu, and Wyse on top of View 4.5. This super-high-end deployment can scale down, of course, and the benefits remain.
DataCore has also grabbed some headlines recently, coming from the other direction: making the case for affordable density in smaller VDI deployments. DataCore's recent benchmarks were based on a 220-desktop deployment, and the company claims a nearly tenfold reduction in storage cost compared with traditional desktop virtualization deployments.
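Back-of-the-envelope math shows where a gap of that order can come from: linked clones share one master image, and software-based tiering lets the bulk of the data sit on cheaper media. Every number in this sketch is a hypothetical input chosen for illustration, not a DataCore benchmark figure.

```python
# Hypothetical per-desktop storage cost: full clones on premium disk
# versus linked clones sharing one image on cheaper, tiered media.

def per_desktop_cost(image_gb: float, delta_gb: float, desktops: int,
                     full_clones: bool, cost_per_gb: float) -> float:
    """GB consumed per desktop times media cost. Full clones copy the
    whole image; linked clones amortize one replica across the pool."""
    if full_clones:
        gb = image_gb + delta_gb
    else:
        gb = image_gb / desktops + delta_gb
    return gb * cost_per_gb

traditional = per_desktop_cost(25, 8, 220, full_clones=True,  cost_per_gb=4.0)
tiered      = per_desktop_cost(25, 8, 220, full_clones=False, cost_per_gb=1.6)
print(f"${traditional:.0f} vs ${tiered:.0f} per desktop "
      f"(~{traditional / tiered:.0f}x)")   # roughly a tenfold gap
```

The point isn't the specific dollar figures; it's that image sharing and cheaper media multiply, so an order-of-magnitude saving is plausible even at 220 desktops.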
What does this all mean? If you're building a VDI environment on shared storage, both VMware and the leading storage vendors are rushing to take the storage cost and performance problem out of the picture. VMware is doing it by letting the arrays optimize the type of disk or cache used to serve the master replica in a cloned deployment, and the array vendors are bringing the best of intelligent caching and intelligent data movement (tiering to SSD, in Dell's case) to bear on the boot/login storm problem.
These are impressive densities and impressive technologies. If you've tested and rejected VDI for your desktop virtualization projects in the past, maybe it's time to revisit your ROI calculations.
About the Author
A senior analyst and virtualization practice lead at Taneja Group, Dave Bartoletti advises clients on server, desktop and storage virtualization technologies, cloud computing strategies, and the automation of highly virtualized environments. He has served more than 20 years at several high-profile infrastructure software and financial services companies, and held senior technical positions at TIBCO Software, Fidelity Investments, Capco and IBM. Dave holds a BS in biomedical engineering from Boston University and an MS in electrical engineering and computer science from MIT.