Tuesday, April 17, 2012

The Risks of Projecting Survey Results To A Larger Population

In my experience, most quantitative research is analyzed on the basis of the survey results themselves – such as the percentage distributions on rating scales – without any need to project those results onto the larger population that the sample represents. It is generally understood that, with reasonably rigorous sampling procedures, these distributions reflect the attitudes held by the population at large.

In some instances, though, it is important to project to the larger group, such as when creating estimates of product use based on concept results. In these cases, we face a special challenge – do we take consumers at their word and simply extrapolate their answers to the larger population or do we use some combination of common sense and experience to adjust the data?

Although there are many sophisticated models for translating interest in a new product or service into projections of first-year use, most include “adjustments” to the survey data to account for typical consumer behavior, such as the following (a simple worked example appears after the list):

1. The typical 5-point purchase intent scale is weighted in order to more accurately predict what proportion of the population will actually try the product. For example, the proportion of those who would “definitely buy” might be given a weight of 80% to reflect a high, but not absolute, likelihood of buying, whereas those who would “probably buy” might be given a weight of just 20%.

2. These weighted results assume 100% awareness of the new product or service, so further adjustments are required to account for the anticipated build in awareness, usually as a result of advertising.

3. Some estimate of repeat purchase is required, often derived from consumer experience with the new product or service or from established market results.
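
To make the mechanics concrete, here is a rough sketch in Python of how these adjustments might be combined. The 80%/20% intent weights, the awareness level, the repeat rate and the population size are all illustrative assumptions, not values from any particular forecasting model.

def project_first_year_buyers(intent_counts, population, awareness, repeat_rate):
    # intent_counts: dict of scale point -> number of respondents
    # population:    size of the target population being projected to
    # awareness:     expected year-one awareness of the product (0 to 1)
    # repeat_rate:   expected share of triers who buy again (0 to 1)

    # Hypothetical weights: only the top two boxes are assumed to
    # convert to trial, and only partially.
    weights = {
        "definitely buy": 0.80,
        "probably buy": 0.20,
        "might or might not buy": 0.00,
        "probably not buy": 0.00,
        "definitely not buy": 0.00,
    }
    total = sum(intent_counts.values())
    # Weighted trial rate among consumers who are aware of the product
    trial_rate = sum(weights[k] * n for k, n in intent_counts.items()) / total
    # Adjust for the fact that not everyone will hear about the product
    triers = population * awareness * trial_rate
    # Repeat purchasers are a subset of triers
    repeaters = triers * repeat_rate
    return triers, repeaters

# Example: 500 respondents projected to 10 million adults, with 60%
# expected awareness and a 35% repeat rate (all assumed figures).
counts = {
    "definitely buy": 75,
    "probably buy": 150,
    "might or might not buy": 125,
    "probably not buy": 100,
    "definitely not buy": 50,
}
triers, repeaters = project_first_year_buyers(counts, 10_000_000, 0.60, 0.35)
print(f"Projected first-year triers:  {triers:,.0f}")
print(f"Projected repeat purchasers:  {repeaters:,.0f}")

With these assumed inputs, roughly 1.1 million consumers would be projected to try the product in year one and about 380,000 to repeat – far fewer than a naïve reading of the raw intent scores would suggest.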

We take these steps because simply applying raw survey results to the total population could wildly inflate the estimated use of a new product or service.

This issue came to my mind this weekend when reading a New York Times article called “The Cybercrime Wave That Wasn’t” (http://www.nytimes.com/2012/04/15/opinion/sunday/the-cybercrime-wave-that-wasnt.html) in which Dinei Florêncio and Cormac Herley of Microsoft Research conclude that, although some cybercriminals may do well, “cybercrime is a relentless, low-profit struggle for the majority.”

Part of their analysis questions the highly touted estimates of the value of cybercrime, including a recent claim that consumers worldwide lose $114 billion annually. This estimate makes the value of such crime comparable to estimates of the global drug trade. As it turns out, however, Florêncio and Herley conclude that “such widely circulated cybercrime estimates are generated using absurdly bad statistical methods, making them wholly unreliable.” This is a very practical example of how results from what appear to be reasonably large research samples can run into critical problems of statistical reliability, whether through poor sampling, naïve extrapolation or other sorts of statistical errors. In the case of the cybercrime estimate, it appears that losses reported by just one or two people in the research sample are being extrapolated to the entire population, which means that a handful of answers, accurate or not, can single-handedly drive the headline figure.
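
To see how fragile that kind of extrapolation is, consider the hypothetical calculation below, in which a single large, unverified loss reported by one respondent dominates a national projection. Every figure is invented purely to illustrate the mechanics.

# Hypothetical illustration: a couple of outliers dominate an extrapolated
# loss estimate. Every figure below is invented.
population = 200_000_000   # adults being projected to (assumed)
sample_size = 1_000        # survey respondents (assumed)

# 997 respondents report no loss, two report modest losses, and one
# reports a very large (possibly exaggerated or misreported) loss.
losses = [0] * 997 + [200, 500, 50_000]

mean_loss = sum(losses) / sample_size
projected_total = mean_loss * population
print(f"Mean reported loss per respondent: ${mean_loss:,.2f}")
print(f"Projected national loss:           ${projected_total:,.0f}")

# Dropping just the single largest answer changes the projection
# dramatically: a sign the estimate rests on unverifiable outliers.
trimmed = sorted(losses)[:-1]
trimmed_total = sum(trimmed) / len(trimmed) * population
print(f"Projection without the top answer: ${trimmed_total:,.0f}")

In this sketch, the single largest answer accounts for nearly all of the roughly $10 billion projection; drop it and the estimate collapses to around $140 million.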

In this particular example, a more accurate approach would be to separate the “screening” sample – i.e., identifying those consumers who have been victims of cybercrime by screening an extremely large database of consumers – from the “outcome” sample. In other words, if the goal is to estimate the impact of cybercrime, the objective should be to find a reliable sample of victims and interview them about their experience, including the extent of their losses. This approach would provide a much more rigorous basis for estimating the total value of cybercrime, although caution should still be exercised when projecting to the total population.
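
A sketch of that two-stage logic might look like the following; the screening sample size, the victimization rate and the median loss are assumed numbers used only to show the structure of the calculation.

# Hypothetical two-stage estimate: a large screening sample establishes how
# common victimization is, and a separate sample of confirmed victims
# establishes typical losses. All numbers below are assumed.
population = 200_000_000      # adults in the target population (assumed)

# Stage 1: screening sample -> prevalence of victimization
screened = 50_000             # consumers screened (assumed)
victims_found = 1_250         # screened consumers reporting victimization
prevalence = victims_found / screened            # 2.5%

# Stage 2: outcome sample -> typical loss among confirmed victims,
# summarized with the median so a few extreme answers cannot dominate
median_loss_per_victim = 350  # dollars (assumed)

estimated_victims = population * prevalence
estimated_total_loss = estimated_victims * median_loss_per_victim
print(f"Estimated victims:    {estimated_victims:,.0f}")
print(f"Estimated total loss: ${estimated_total_loss:,.0f}")

Summarizing victims’ losses with a median, or another robust measure, keeps one or two extreme answers from driving the national total, which is exactly the failure mode of the naïve extrapolation described above.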

The key learning is that anytime we have data we want to extrapolate, we need to think about how much we trust that data to be accurate. There are some things consumers can report with superb accuracy – where they ate lunch today, the size of their mortgage payment, how many pets are in their homes. Assuming a decent survey sample, data of this sort can be easily extrapolated to a larger population. But other kinds of data are less accurate, whether due to the limits of human recall or various other forms of bias. Studies have shown, for example, that survey respondents cannot accurately recall where they ate lunch a week or two ago (recall error), tend to under-report their alcohol consumption (social desirability bias) and over-estimate their future purchases of products we show them in concept tests.

So, if we wish to extrapolate from our survey data to a larger population, we have to be honest about how accurate the results are, what sorts of bias might inflate or deflate the numbers, and what sorts of adjustments, if any, we should make. And when we see stories in the media with giant estimates of the prevalence of some sort of crime, social problem or behavioral trend, we need to take a moment to ask how those numbers were produced. Often, with a little digging, we see problems in how these estimates were created, leading to the same need for logic and common sense that arises when dealing with our own market projections.
