Dan's Take

Legit Survey or Sales Pitch?

Not all IT surveys are disguised sales tools for a vendor's product, but enough are that you need to be on your guard.

From time to time, I'm approached by a vendor's PR representative with the "news" that a new survey has been released -- one that, naturally, supports the vendor's position. Having executed quite a few surveys during my time at IDC and at 451 Research, I'm rather sensitive to the use of this research tool as a sales gimmick rather than as a way to learn more about customers or market dynamics.

Useful Tools
When designed and executed properly, a survey can be an extremely useful tool that allows analysts to gain a better understanding of customer needs, issues with currently available products, features desired in future products, acceptable pricing for products and services, and so on. Although having this information in hand allows vendors to build better, more useful products, it's increasingly difficult to get people to respond. Why, you might ask? Although I haven't conducted a survey on surveys, I would attribute this reluctance to many factors. Here's a small sample of issues I've seen with vendor-sponsored and -executed surveys:

  • Too busy. Decision makers are just too busy to bother with a survey, no matter how useful participation would be to their organization. This can lead to an unacceptably low response rate; often only a tiny fraction of the people on the list -- sometimes well under 1 percent -- respond.
  • Too many questions. Some surveys ask so many questions, and take so long to complete, that potential respondents abandon them partway through.
  • Biased questions. The questions are designed to support the sponsor's viewpoints or its products. Potential respondents often abandon the survey once they notice the bias.
  • Sample not representative of the market. Some vendors survey only their own customers, conduct the survey at their own events, or survey decision makers in only a limited geographic area. The end result is that the findings aren't universally applicable.
  • Wrong person asked to respond. Just because a person is in a decision-making role for a company doesn't mean that person participates in every decision the organization makes.
  • Survey is really a sales tool rather than a way to learn about market dynamics. Some suppliers use surveys as a way to support their product, their service or their approach to the market, rather than seeking information in a neutral, unbiased way.

Good Surveys Are Hard
Lists of potential respondents can be very costly, so many vendors will purchase the least-costly list they can find that includes potential respondents who are vaguely, sorta, kinda like the target group. Another common approach is for a supplier to use its own customer list. This shotgun approach often means that many recipients are irritated and only a few respond. It also, by the way, lowers the overall response rate for everyone else's surveys.

I've learned that authoring a comprehensive, concise, neutral and unbiased survey instrument (the list of questions) is quite an art. It's rather rare that a product manager or product marketing manager trying to learn about the needs of a product's target audience will do much more than write something promoting his or her own views. I've also been subjected to surveys containing misspelled words and grammatical errors.

A comprehensive, worldwide survey must be offered in several languages and through several different techniques. While some people will respond to an e-mail invitation and visit a Web site to answer questions, others will only respond if someone they know and trust invites them to take part. Still others will only respond if they're interviewed personally rather than being asked to visit a Web site. This, of course, means that conducting a survey is a long process, so industry opinion could've changed by the time results are compiled, analyzed and a report is written.

Silly Surveys
Here are some examples of silly surveys that have been presented to me as if they offered insight into the worldwide market. What's clear from these examples is that the PR folks are hoping that I'll comment on these surveys and help the sponsor make its point. Unfortunately, most of these surveys were not constructed or executed very well. In the end, the sponsor looks silly and takes a hit on its reputation.

  • A supplier touted that a recent survey supported the adoption of a feature of one of its products. When I asked about the survey sample, I was informed that well over 90 attendees -- all of them that company's own customers -- took the time to complete the survey at its own customer conference. The results of the study were presented as if they had worldwide significance, rather than merely showing the preferences of a few.
  • Another survey made strong predictions about market purchasing decisions. When I probed about the base of respondents, I was informed that comments from more than 400 "IT specialists" were considered. When I asked how many of them were actually decision makers within their own companies, I learned that only a very small percentage of the respondents fit that category. Will their organizations really pay attention to those opinions? It's not at all clear.

Do Four Respondents a Survey Make?
A well-known software supplier recently promoted a "Total Economic Impact" study executed by Forrester Consulting LLC. The research purported to support the idea that using industry-standard x86 systems, rather than single-vendor Unix systems, was responsible for the following "economic impacts":

  • Reduced CAPEX by 80 percent: $20,520,000.
  • Reduced maintenance fees: $9,234,000.
  • Increased responsiveness to changing business needs -- "Our response time was reduced from months to minutes."

That was interesting, but when I examined the research a bit more closely, I discovered that a grand total of four people were interviewed, representing the following organizations (descriptions taken directly from the survey report):

  • National government agency in Europe. With more than 100,000 employees and 2,000 branch locations, this agency provides services to citizens across the country. The agency has more than 5,000 servers, 900TB of storage, and more than 100 applications with dedicated databases.
  • International bank headquartered in Asia. The bank has more than 10,000 branches located across the globe. The technical division maintains 2,000 to 3,000 servers, 300TB of data, and more than 200 applications.
  • Global energy company based in Europe. This energy company has operations in nearly 80 countries and employs more than 75,000 people around the globe. The company currently has 5,000 servers, 8PB of storage, and 400 enterprise applications.
  • Global automobile parts distributor headquartered in North America. The organization has more than 200 servers running its suite of SAP applications and another 1,000 servers with more than 2PB of storage.

Opposites Don't Attract
Even though these four decision makers represented a large number of systems, storage and branch offices, they really can't be seen as representing the market as a whole. The results, however, were touted as if the analysis would be useful to a worldwide audience.
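To put that sample size in perspective, here's a rough, back-of-the-envelope sketch (my own illustration, not anything from the Forrester report) using the standard margin-of-error formula for a sampled proportion. The helper function and the sample sizes are assumptions chosen purely to make the point:

    import math

    def margin_of_error(n, p=0.5, z=1.96):
        # Approximate 95 percent margin of error for a sampled proportion.
        # Worst-case p = 0.5; z = 1.96 is the 95 percent confidence multiplier.
        # Illustrative only -- not taken from the cited study.
        return z * math.sqrt(p * (1 - p) / n)

    for n in (4, 96, 400, 1000):
        print(f"n = {n:5d}: +/- {margin_of_error(n) * 100:4.1f} percentage points")

With n = 4, the uncertainty works out to roughly plus or minus 49 percentage points; with 1,000 respondents, it shrinks to about 3. Four interviews, however detailed, are anecdotes, not a sample.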

I've seen similar reports from other suppliers, such as IBM and HP, that came to exactly the opposite conclusion; that is, a smaller number of larger systems would reduce the organization's overall costs. IBM, for example, has released some pretty impressive results based on its POWER8-based systems running its version of Unix (AIX) or the Red Hat or SUSE enterprise Linux distributions.

The point of those studies was that single-vendor Unix systems -- with faster processors, more memory and more storage -- could support larger workloads and execute a larger number of virtual machines, and could, as a result, do the same work on a smaller number of systems. The net result, these studies point out, is that the total cost of systems, software and overall support, taken as a whole, would be lower.

Which view is right? That decision, as one of my college professors used to say, is left up to the student.

Dan's Take: Ask the Right Questions
Rather than rushing to believe the results of these studies, it would be very wise to better understand the who/what/when/where/why/how of the study:

  • Who was included in the study (and who wasn't)?
  • What questions were asked in the survey? Are they neutral and unbiased, or do they lead the respondent toward the sponsor's conclusion?
  • When/Where was the research done? Survey data spoils faster than milk. A survey done at a vendor's own event clearly doesn't represent a neutral sample.
  • Why was the study done? Was it designed to uncover facts, or merely gain support for the sponsor's products or services?
  • How were the questions asked? Did self-selected respondents respond to questions online? Were the questions asked during a face-to-face interview?

When reviewing the data, pay close attention to the table titles, the number of respondents who answered the questions, and when the survey was executed. A survey done two years ago, or one that included only four respondents, may not be relevant or useful to you or your organization.

Don't fall into the trap of taking an exciting headline to heart when the underlying research isn't dependable, reliable or representative of the market or your place in it.

About the Author

Daniel Kusnetzky, a reformed software engineer and product manager, founded Kusnetzky Group LLC in 2006. He's literally written the book on virtualization and often comments on cloud computing, mobility and systems software. He has been a business unit manager at a hardware company and head of corporate marketing and strategy at a software company.
