Mental Ward


A Laughable 'Benchmark'

Hi folks,

First, a few announcements: everyone (in the U.S., that is), have a great 4th of July! Eat hamburgers and watermelon until you get sick, then work it off playing hours of volleyball.

Second, I'll be out for the balance of next week. I'll be in Houston at the Microsoft Worldwide Partner Conference, doing some video work for my buddies over at sister publication Redmond Channel Partner magazine. While I'm there, however, I'm going to try my best to grab some Microsoft execs who work in the virtualization arena and talk to them about Hyper-V, System Center Virtual Machine Manager 2008, Kidaro, App-V and more. I should get some good blog material out of that. The downside is that I won't be blogging here for most of next week, although I'll try to get some stuff in toward the end of the week.

OK, announcements out of the way. I found an interesting tidbit from our "Virtual Advisor" columnist Chris Wolf. Chris saw an announcement from QLogic and Microsoft about an amazing performance benchmark. The press release claimed that a QLogic SAN and Hyper-V together "achieved industry leading performance results that demonstrate near-native transaction performance for a virtualized computing environment."

Hmm. Sounds impressive. But the intrepid Mr. Wolf's skepticism antennae went up, and he started digging into the data itself. What he found, he said, made him feel duped:

"If I was watching an Olympic event, this would be the moment where after thinking I witnessed an incredible athletic event, I learned that the athlete tested positive for steroids."

It turns out QLogic and Microsoft had stacked the deck to a nearly hilarious degree to get the results. First, they used a Texas Memory RamSan 325 FC storage array, which is a solid state device. They also used a block size of 512 bytes, which you don't see in the real world. Chris nails this fantasy benchmark setup to the wall:

"No other virtualization vendors have published benchmarks using solid state storage, so the QLogic/Hyper-V benchmark, to me, really hasn't proven anything. Furthermore, the published benchmark fails to reveal latency numbers, which has been the most useful value of storage performance in virtualized environments. Applications can be very sensitive to I/O latency, and it's import to disclose latency numbers in any storage benchmark ... To me, these exercises in smoke and mirrors trickery (i.e. solid state storage in a hypervisor storage performance "benchmark") yield more questions than answers."

Go get 'em, Chris. Microsoft and QLogic, you need to bring it stronger to the hole than that (to use current basketball terminology) or risk more "in your face" rejections in the future. You can start by being more transparent about your methodology. A cynic might claim that you purposely left out information that would show your "benchmark" to be the farce that it is.

The lesson: beware benchmarks that aren't up front about methodology -- especially when they come from a company that's just released a brand-new product (like, uh, Hyper-V) and needs to push it.

Posted by Keith Ward on 07/03/2008 at 12:48 PM
