Hypervisor Test Fallout Continues
Well, the hypervisor test
story is stirring up a hornet's nest, as I expected. The latest entry
is from Alessandro Perilli at virtualization.info. Unfortunately, some of Alessandro's facts are wrong. He states that "last week a group of brave reporters at Virtualization Review challenged VMware and published an independent analysis without asking any permission."
While I appreciate his calling us "brave," his assumptions are incorrect. We talked extensively with VMware during the process, and an engineer in the benchmarking department approved our methodology before we went to press. Here's what I said on the virtualization.info blog posting in response to questions:
"Let me set the record completely straight on our communications with VMware: they were aware, beforehand, of what we were doing. One of their engineers in the benchmark section communicated with us via e-mail and a phone conversation. He asked a number of different questions about our methodology. We answered all of them. Thus, VMware had two different opportunities to grill us. We confirmed during the conference call that they were satisfied with our methodology.
We did not tell them the results of our testing during those contacts; we mainly wanted to honor the EULA by informing them of our intent to publish the results, and making sure they felt the testing was fair. They agreed that the testing was fair.
We worked through VMware's public relations department throughout the process, which coordinated the communications. That was important for us, so that VMware didn't feel like we were doing an "end run" around the company by finding some obscure engineer to validate what we were doing, while the rest of the company was kept in the dark. This was above-board the whole way.
Finally, we stand by the results of our testing. As was made abundantly clear both in the article and in my blog posting about it, this was not a test of which hypervisor was "best": it merely determined raw speed under a very specific set of circumstances. To read more into the results than that would be a mistake. As I mentioned, the author, Rick Vanover, is a fan of VMware software; it's what he uses in his datacenter. There was no axe to grind here."
We take end user license agreements (EULAs) seriously, which is why we stayed in communication with VMware throughout the process. This seems to be a point of serious misunderstanding -- that we went on some rogue quest to make ESX look bad. That is simply not the case, and anyone who implies otherwise is either mistaken or lying. (Note that I am not accusing Alessandro of this -- he is misinformed, relying on blogs from VMware when he should have come to me directly.)
He's not the first person I've talked to who's under the impression that we did all this on our own, however. It makes me wonder what FUD VMware is spreading in the community about this. And I don't know how I can state it any more clearly: this test measured only one aspect of these hypervisors. To make a buying decision based on this article alone would be foolish in the extreme. As Burton Group analyst (and Virtualization Review magazine columnist) Chris Wolf recently pointed out, ESX has more enterprise features than any other hypervisor on the market. Our tests don't change any of that.
All that to say: we were not out to "get" any vendor. We thought it would be valuable to have some independent testing metrics on hypervisor speed. We did our due diligence every step of the way, including bringing in an independent testing expert. We believe we provided a service. Of course, anyone can disagree; no one enjoys free and open debate more than I do, I can assure you.
But I won't allow misinformation about our methods to go unchallenged, no matter who's bringing the challenge. Our success depends on our reputation, which depends on our credibility. I will fight tooth and nail to maintain that credibility.
Posted by Keith Ward on 03/16/2009 at 12:48 PM