The Infrastruggle
        
        Whatever Happened to the VDI Revolution?
        Maybe, muses Jon Toigo in the return of his column, it  failed through lack of interest, rather than  poor technology.
        
        
        
A storage revolution is once again in the air -- at least in the branding of a lot of annual IT conferences this fall. Both VMware and Microsoft have been crowing about improvements in their virtualization products that will enable the revolution in "virtual desktops" that has been promised, but never delivered, over the past decade or so. Storage vendors are lining up, it seems, to show off their latest flash arrays and hyperconverged infrastructure appliances that, to hear their pitches, will make short work of the obstacles to VDI deployment and make the technology so easy-peasy that we will all be trading in our PCs for netbooks, or however they decide to rebrand them around Christmas.
  According to analysts at Gartner Inc. and elsewhere, two issues have  been holding back VDI adoption: 
- Storage congestion due to evil shared legacy storage, which impairs I/O throughput in the virtualized server stack, which causes virtual machines (VMs) -- virtual desktops are conceived as a desktop and OS running in a VM wrapper -- to deliver poor performance, which annoys users and engenders hatred and disdain for IT. 
- "Mass events" such as software patch  processes, virus scans, backups, and so on, which tend to consume a lot of  processor and memory and network resources as they act on VDI VMs, which slow  down application performance at the user terminal, which bend users out of  shape and encourage even more anti-IT sentiment. 
These, the analysts say, are the key hurdles to VDI. 
I have two issues with these analyses. First, I'm not sure those two hurdles make a comprehensive list of the technical challenges to VDI adoption, or even that they're the right ones on which to blame the slow rate of VDI uptake. 
 The I/O Blender Problem
 
VDI, as currently conceived, is a bunch of VMs that, when operated on a single physical host using the not-very-robust hypervisors in the market today, tend to make a mess of the I/O process. They do this with or without the involvement of flash memory or other spoofing/caching/buffering techniques, because all the I/O from all the VMs gets shoved into a single I/O path and written to disk. Streams that were perfectly sequential inside each VM come out the other end interleaved, creating a randomized mess of stored data that must be sifted, slowly and carefully, every time a file is requested for reuse or update.
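To make that concrete, here's a toy sketch in Python -- purely illustrative, not any vendor's code, and with all the numbers invented -- that interleaves the perfectly sequential write streams of several VMs through one shared path and then measures how little sequentiality survives on the back end:

    import random

    # Toy model of the "I/O Blender": each VM writes its own data
    # sequentially, but the hypervisor funnels every VM's requests
    # through one shared I/O path, so the backing disk sees an
    # interleaved -- effectively randomized -- sequence of blocks.

    NUM_VMS = 8
    WRITES_PER_VM = 100

    # Each VM's workload: sequential logical block addresses in its own region.
    per_vm_streams = {
        vm: [(vm, vm * 10_000 + offset) for offset in range(WRITES_PER_VM)]
        for vm in range(NUM_VMS)
    }

    # The hypervisor services whichever VM is ready next (modeled here as
    # a random pick) and pushes that request into the single shared queue.
    pending = {vm: list(stream) for vm, stream in per_vm_streams.items()}
    shared_queue = []
    while pending:
        vm = random.choice(list(pending))
        shared_queue.append(pending[vm].pop(0))
        if not pending[vm]:
            del pending[vm]

    def sequential_fraction(requests):
        """Fraction of requests that land right after the previous one."""
        hits = sum(
            1 for (vm_a, addr_a), (vm_b, addr_b) in zip(requests, requests[1:])
            if vm_a == vm_b and addr_b == addr_a + 1
        )
        return hits / (len(requests) - 1)

    print("per-VM streams: 100% sequential by construction")
    print(f"blended stream: {sequential_fraction(shared_queue):.0%} sequential")

With eight VMs sharing the path, you'd expect only about one request in eight to follow its predecessor on disk, and that's roughly what the toy model reports: the blend, not the individual workloads, is what the array has to digest.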
  This "I/O Blender" is one of the banes of server  virtualization, and there are multiple ways to deal with it:
  - You can attack it, as PernixData tries to do,  with a large caching resource that simulates faster read/write performance. But  this doesn't really fix the problem. 
- You can go with a log-structuring approach, as StarWind Software and a few other software-defined storage (SDS) vendors do. It works, but entails an overhead expense in extra storage capacity. 
- You can maybe experiment with VM workload-specific queuing, as Tintri does. But that requires changing the file system to a VM-based one rather than the traditional directory/subdirectory/volume/LUN structure with which we're all familiar. 
- You can coalesce writes in memory, reorganize them, optimize them and write them to a target after buffering, which seems to be the DataCore model (a rough sketch of that general idea follows this list). 
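For what it's worth, that last approach is simple enough to sketch. What follows is a rough illustration of write coalescing in general -- buffer, reorganize, then flush in sorted runs -- and emphatically not DataCore's (or anyone else's) actual implementation; the class and the threshold are made up:

    from collections import defaultdict

    # Rough sketch of write coalescing: hold incoming writes in a memory
    # buffer, then group and sort them by VM and block address so the
    # backing store receives large, mostly sequential flushes instead of
    # the blended random stream.

    class CoalescingWriteBuffer:
        def __init__(self, flush_threshold=256):
            self.flush_threshold = flush_threshold
            self.pending = []          # (vm_id, block_address, data) tuples

        def write(self, vm_id, block_address, data):
            self.pending.append((vm_id, block_address, data))
            if len(self.pending) >= self.flush_threshold:
                self.flush()

        def flush(self):
            # Group by VM, then order each group's blocks so the target
            # sees one sorted run per VM rather than interleaved chaos.
            by_vm = defaultdict(list)
            for vm_id, block_address, data in self.pending:
                by_vm[vm_id].append((block_address, data))
            for vm_id, writes in sorted(by_vm.items()):
                for block_address, data in sorted(writes, key=lambda w: w[0]):
                    backing_store_write(vm_id, block_address, data)
            self.pending.clear()

    def backing_store_write(vm_id, block_address, data):
        # Stand-in for the real device write.
        print(f"VM {vm_id}: block {block_address} -> {len(data)} bytes")

The trade-off is the obvious one: the bigger the buffer, the more sequential the flushes, and the more data you stand to lose if the buffer isn't protected against power failure.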
Bottom line: there have been several ways to skin the supposed storage congestion problem for years, but none of them has helped VDI adoption one whit.
  And that's just in the cases where poor app performance can  be honestly blamed on storage I/O. More often than not, in the shops I visit,  application performance issues occur ahead of storage I/O. There are no extraordinary  queue depths to be found, only really hot processors. Even without a  certification from VMware school, most of us can see that those issues are  rooted in the applications or OSes inside VM wrappers themselves,  or maybe in problems with the hypervisor's own microkernel code.  And that's before the apps even start  generating reads and writes to back-end storage.
 Hyperconvergence to the Rescue?
 
Could hyperconverged infrastructure, which is a slick way of saying going back to direct-attached storage but in an appliance footprint (and which is, ironically, the exact same proprietary storage island model we were once trying desperately to put into the rear-view mirror), be an answer? To what question, exactly? Does hyperconvergence fix the complexity of shared storage? Sure. Does it perform better than shared storage? In some ways. Will it make VDI VMs run faster? Maybe. But then, aren't you just segregating storage and other resources by workload? Wasn't that what the server virtualization folks were telling us was driving our IT to perdition just a decade ago: one server, one app?
The other issue, of course, is mass events. The simple solution there has been available from companies like Unidesk for quite a while: application layering. Actually, exotic technologies for staging app events are less necessary than good end-to-end management of the environment from an application-centric view. That's what SolarWinds seems to be striving to deliver, and it would enable admins to see when and which mass events are disrupting application performance, so that schedules can be tweaked to avoid the problems. We've been doing this, albeit often poorly, in networks for years.
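The scheduling half of that is not rocket science, either. Here's a trivial Python sketch of staggering a mass event, such as a virus scan, across a pool of desktop VMs; the VM names, maintenance window and times are invented for illustration:

    from datetime import datetime, timedelta

    # Instead of letting every desktop VM kick off its virus scan at the
    # same minute, spread the start times evenly across a maintenance
    # window so the resource spike never piles up all at once.

    def stagger_schedule(vm_names, window_start, window_minutes):
        """Return a start time per VM, spaced evenly across the window."""
        step = timedelta(minutes=window_minutes / max(len(vm_names), 1))
        return {vm: window_start + i * step for i, vm in enumerate(vm_names)}

    vms = [f"vdi-desktop-{n:03d}" for n in range(1, 7)]
    schedule = stagger_schedule(vms, datetime(2015, 10, 12, 1, 0), window_minutes=120)
    for vm, start in schedule.items():
        print(f"{vm}: scan starts at {start:%H:%M}")

The hard part isn't the arithmetic; it's having the application-centric visibility to know which events are colliding in the first place.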
 You Say You Don't Want a Revolution
 
Nothing about this is revolutionary, at least not from where I'm standing. And that brings me to my second issue with the analysts' take: at the end of the day, my problem with VDI isn't the technology, and it isn't the purported (and mostly theoretical) business-value case -- you know, cost savings, risk reduction and increased productivity. What's holding back VDI is that we're asking folks to spend money on something they don't really see much need for, even if it might return its investment, and then some, somewhere down the road. Until you can show me how replacing my desktop will make my workload easier to bear, or help get me out the door at 5 p.m. on Friday, I feel no great need to change.
  Revolutions don't happen for their own sake. They need a  tangible provocation and a compelling vision of the future. At this point, I'm  not sensing either one when it comes to VDI.
                    About the Author
                    
                
                    
Jon Toigo is a 30-year veteran of IT, and the Managing Partner of Toigo Partners International, an IT industry watchdog and consumer advocacy firm. He is also the chairman of the Data Management Institute, which focuses on the development of data management as a professional discipline. Toigo has written 15 books on business and IT and published more than 3,000 articles in the technology trade press. He is currently working on several book projects, including The Infrastruggle (for which this blog is named), which he is developing as a blook.