Prediction Markets
Tap Collective Intelligence
Small biases, big errors
Survey research: It's just claims, stupid.
The impact of other, innocent-looking biases is even more striking. In one experiment we asked respondents to predict the outcome of a simple coin toss: heads or tails?
Asked with the traditional questionnaire method - randomized answer options to tick, mixed in amongst several other questions - this seemingly simple future question produced a baffling result. Would other researchers also be surprised by these results? We got an opportunity to find out. At the ESOMAR Annual Congress in Nice in 2014, I had the honour to co-present an interactive session with Jon Puleston, a friend and rising star in the field of research on research.
We asked the audience - all seasoned research professionals - to tick their prediction for the coin toss on a questionnaire. Next, they had to fill in the percentage of “heads” they expected the other researchers to tick. Whenever I ask laymen this question, most expect the questionnaire to produce a 50/50 answer. Experienced market researchers already sense that traditional market research may be in trouble: they venture cautiously that “heads” may get a 54% response. Imagine: they are fully aware that their gold-standard method may be wrong by 17% - a 54/46 split makes “heads” look 17% more likely than “tails”, since 54/46 ≈ 1.17 - on one of the simplest future questions possible, one involving no emotional, mathematical, or logical effort whatsoever by respondents.
Imagine their dismay when we reported the even worse empirical results: in some cases, it was as bad as 68% heads and 32% tails. If we were to trust the questionnaire method, we would have to consider “heads” more than twice as likely as “tails”, which is obviously grotesque. Even worse, if a future question is more complex - say, a purchase-intent question - the problem cannot be caught by logic alone, and few will suspect the result. Just consider what these results mean for political referendums, such as Brexit.
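To make the arithmetic explicit, here is a minimal sketch - the helper function is ours, purely for illustration - that converts a reported “heads” share into the odds the questionnaire implies:

```python
# Minimal sketch, for illustration only: the odds a questionnaire result implies.
def implied_odds(heads_share: float) -> float:
    """How many times as likely 'heads' appears compared with 'tails',
    given the share of respondents who ticked 'heads'."""
    return heads_share / (1.0 - heads_share)

print(implied_odds(0.54))  # ~1.17: the researchers' cautious guess already implies a 17% bias
print(implied_odds(0.68))  # ~2.13: the worst observed result makes heads look twice as likely
```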
Of course, traditional market researchers try to apply skill to compensate for the many known biases by deflating, adjusting, and benchmarking declared purchase intent. But it is self-evident that any deflator must vary greatly with many factors: across countries or regions, over time, by product or category, and between buyer segments and even respondent groups. This makes the deflation procedure more akin to a guessing game than to solid science, as the sketch below illustrates.
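A hypothetical illustration of that guessing game, with entirely made-up deflator values: the same 40% stated intent yields anything from an 8% to an 18% purchase forecast, depending on which benchmark one happens to trust.

```python
# Hypothetical numbers only: deflators differ by category and country,
# and none of these values comes from a real benchmark.
stated_intent = 0.40  # 40% of respondents declare they will buy

deflators = {
    ("fmcg", "germany"): 0.45,
    ("fmcg", "brazil"): 0.30,
    ("durables", "germany"): 0.20,
}

for (category, country), deflator in deflators.items():
    forecast = stated_intent * deflator
    print(f"{category}/{country}: forecast purchase rate {forecast:.0%}")
```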
To make matters worse, respondents sometimes do switch on their brains in traditional questionnaires - only to work actively against a reliable result. In pricing studies, for example, respondents intentionally low-ball their answers because they perceive the questionnaire as a negotiation with the company. A local government in Austria sent out a questionnaire asking citizens about their requirements for faster broadband service and, at the end, how much they were willing to pay for it. The resulting figure was significantly lower than what comparable broadband services fetched elsewhere. In election polls, respondents know that their responses will be read by others, so they may use the survey to signal an emotion or sentiment to its assumed recipient, even if they intend to vote differently on election day.
Despite these significant errors, it stands to reason that the current “questionnaire generation” will keep putting future and intent questions to respondents for some time to come. The idea that asking a direct question yields a right answer is as intuitive as it is wrong.