Wednesday, August 22, 2012

The Perception Dilemma, Or, What Can We Do About Self-Report Bias?

A recent article in the Sunday New York Times called “Why Waiting Is Torture” (http://www.nytimes.com/2012/08/19/opinion/sunday/why-waiting-in-line-is-torture.html?pagewanted=all) brought to mind one of the key dilemmas in survey design – the simple fact that people often “misremember” their experiences (which is what we call “self-report bias”). How reliable can survey results be if respondents cannot accurately recall what happened?

The article itself is about the psychology of waiting in lines and some of the points are very interesting (although perhaps not surprising to researchers!):

1. According to Richard Larson at M.I.T., occupied time (such as walking to a specific location) feels shorter than unoccupied time (such as standing around waiting),

2. There is a tendency to overestimate the amount of time spent waiting in line (the article quotes an average of 36%),

3. A sense of uncertainty, such as not knowing how long you will be in line, increases the stress of waiting, while information and feedback on wait times or reasons for delays improve perceptions,

4. When there are multiple lines, customers focus on the lines they are “losing to” and not on the lines they are beating, and

5. The frustrations of waiting can be mitigated in the final moments by beating expectations, such as having the line suddenly speed up.

What implications do these findings have for survey design and analysis? In my experience, if we are trying to get an accurate record of an event – such as the amount of time waiting in line – a straightforward recall question is not always the best choice. There are actions we can take during research design, in developing our data collection tools and in analysis to deal with the problems of poor or inaccurate self-reports of behavior.

At the research design stage, we should ask whether a self-report on a survey question is the best way to collect the data. In some cases, we are better off using direct measures, such as observations of the behavior, instead of asking about it. At the questionnaire development stage, we can explore which ways of asking a question are more likely to limit bias. For example, asking people which hours they watched TV last night will produce a larger per-night (and more accurate) answer than asking them to estimate their total viewing hours per week. In the analysis stage, we often know which direction the self-report bias will tend to lean – for example, people generally under-report their consumption of alcohol and over-report their church attendance. When we know these tendencies we can deal with them either by adjusting the answers up or down – if we know the appropriate adjustment to make – or by mentioning them when we report the findings or make recommendations.
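
To make the analysis-stage adjustment concrete, here is a minimal sketch (in Python) of what adjusting a self-reported measure might look like. The data and the 1.4 correction factor are purely hypothetical; in practice any adjustment would come from validation work comparing self-reports against observed or recorded behavior.

```python
# Minimal sketch: applying a known adjustment to self-reported data.
# The reported values and the 1.4 factor are hypothetical placeholders,
# not figures from any real validation study.

reported_drinks_per_week = [2, 5, 0, 7, 3]   # raw self-reported values
UNDER_REPORT_FACTOR = 1.4                    # hypothetical correction for under-reporting

adjusted = [value * UNDER_REPORT_FACTOR for value in reported_drinks_per_week]

print("Reported mean:", sum(reported_drinks_per_week) / len(reported_drinks_per_week))
print("Adjusted mean:", sum(adjusted) / len(adjusted))
```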

The key here is to take the possibility of self-report bias into consideration and to have a plan for dealing with it. The existence of self-report bias does not invalidate research efforts, it is merely one of the many factors that research vendors and clients must take into consideration as they approach their projects.

How Should You Choose A Focus Group Moderator?

An article by Naomi Henderson in the Summer 2012 edition of AMA’s Marketing Research magazine gives a worthwhile list of guidelines for choosing a moderator (You can read this article at http://www.marketingpower.com/ResourceLibrary/MarketingResearch/Pages/2012/Summer%202012/Qualitative-Reflections.aspx).

She points out that such a choice is not necessarily straightforward because “qualitative inquiry is a delicate balance of personality, experience and awareness of the nuances of group dynamics.” In other words, in choosing a moderator, you are not only selecting someone with a particular set of skills, you are also choosing a personality and all the risks that come along with such a choice.

Naomi’s article goes on to give some very practical advice on:

· What types of questions you should ask a prospective moderator

· What types of questions you should ask the references provided by that moderator,

· What types of work samples to request, and

· What to look for in a sample DVD from your prospective moderator.

This advice is worthwhile and useful, but one important point she misses is that there are very different styles of moderating, which can have a huge impact on the perceived “fit” between clients and moderators.

In my experience, the two biggest styles are what I call the “laid back” style vs. the “in your face” style of moderating. Both are effective forms of moderating but each can impact the “back room” in different ways.

Over the years, I’ve worked with a number of moderators of the “laid back” variety. They tend to be very calm, which helps the group relax, and are very deliberate in their approach, which means that the topics get thoroughly explored. One moderator in particular made very good use of silences in the group – instead of filling each moment with questions, he let respondents essentially talk through the issues and build on each other without doing a lot of active probing. I think this approach works but, at times, the silences can make certain back room clients uncomfortable because they are not “getting what they want.”

Personally, I’m more of the “in your face” type of moderator. These moderators take a very active role in the group, tend to run very high energy sessions and work very hard to avoid silences. Because there is almost always something happening in these groups, clients tend to get a sense that it is a “good” group. However, clients can also miss some of the nuance in these groups or feel that certain topics were not fully addressed.

My main point is that, in addition to Naomi’s practical suggestions for choosing a moderator, it is also worth asking a prospective moderator, “How would you characterize your style of moderating?” In doing so, think about the team/internal clients you will have working on your project and what style of moderating might fit best with them.

Sunday, May 20, 2012

What Questions Help Improve the Effectiveness of Qualitative Research?

All effective qualitative market research projects must start with a clear understanding of the background and objectives for each project. To help define a project and determine the appropriate methodology, we generally ask our clients the following questions:

  • What are the research objectives? What are you hoping to learn? What background information can you share which led to the need for this research?
  • Are there any other ways you might describe what you’re trying to explore in this research? (This question can help provide more richness to the definition of study objectives.)
  • What team/internal clients is this research being conducted for? Does this team or these clients have specific preferences about how research results are summarized and/or presented?
  • Are all team members in agreement about what this research should explore – and, if not, what are the differing perspectives?
  • What have you/your team already done to explore these issues? (This can include previous qualitative research, quantitative research, internal data, secondary research, etc.)
  • Have you ever done similar research in the past and, if so, are there any issues that were not addressed then, that you now wish you had explored? Do you have any frustrations concerning the last time you did similar research?
  • What other initiatives/internal issues might affect this research? AND what other initiatives/internal issues might be affected by this research?
  • What decisions will be impacted by the learning from this research? How might you act differently based on what you learn? Also, what are areas that can’t be changed, regardless of what the research might learn?
  • Are there any hypotheses about the answers, among your team or your internal clients? If you were to imagine that the project is complete, what’s your ideal outcome?
  • What do you expect to be the biggest challenges we encounter as we conduct this research?
  • What specific constraints do we need to keep in mind?
  • What stimulus material – if any – do you want people to react to, and what format will it be in?
  • What issues related to target audience might be relevant to know as we design the research? And are there any customer segments we should keep separate or perhaps combine for any reason?

Clear and thoughtful answers to these questions are essential to meeting both the stated and unstated objectives of any qualitative research project. These questions help to decide (1) if qualitative research is the right methodology for your objectives and, if so, (2) which approach would best meet your needs (such as deciding between focus groups and individual depth interviews), (3) what the necessary recruiting specifications are so that an accurate and effective screening questionnaire can be written, (4) what issues need to be covered in the moderator guide and, finally, (5) how your research analyst should prepare the project deliverables.

It’s impossible to overstate the importance of setting clear objectives BEFORE you undertake any research project if you want to have a successful outcome. In fact, this initial discussion can avoid those disastrous “Oh, by the ways” that have destroyed many research efforts!

Tuesday, May 15, 2012

How Can I Get the Most Out of Ideation or Brainstorming Research Sessions?

There are a number of established techniques for ideation and/or brainstorming (which are similar, but not exactly the same) that can be effectively used as part of a systematic search for targeted opportunities in the form of new features, new products, new markets and/or new services within various categories of interest.

The fundamental premise of these techniques is to start with an issue or challenge and then generate a broad range of different possible ideas to address that challenge. Often, there are two important components to a brainstorming project: “Divergence” is the process of generating ideas followed by “Convergence,” which consists of selecting and developing the top ideas.

Although brainstorming sessions share some similar characteristics with focus groups, brainstorming research sessions are quite different from traditional focus groups ‒ within the field of new product development, brainstorming sessions are about exploring possibilities, generating new concepts and discovering new opportunities, whereas traditional focus groups are best used to validate ideas, weed out bad concepts and improve existing concepts.

The distinctions between brainstorming sessions and regular focus groups carry through to some critical differences in how the groups are conducted:

  1. Brainstorming sessions last longer than most focus groups to ensure there is sufficient time for both training and ideation. Typically, each brainstorming session is scheduled to last between 2-1/2 and 3 hours whereas focus groups generally do not go beyond 2 hours.
  2. Participants are recruited specifically to be natural “lateral thinkers” or “intuitors” because this thinking style has been shown to correlate positively with the ability to generate new ideas. However, this isn’t a common talent – most consumers are very good at reacting to ideas they are presented with but they’re not as good at coming up with new ideas on their own. In addition, the “creativity” recruiting specifications are over and above the need to invite participants who have experience with the topic under discussion.
  3. Participants in brainstorming sessions are given a homework assignment to complete in advance of the session and are required to start generating ideas before attending. This helps to get them “primed” for the discussion and ensures that each session can start off with idea sharing from the start.
  4. During the recruiting phase of the project, a member of the project team will contact each qualified participant by phone to encourage their idea generation and answer any questions about the process or expectations for the sessions.
  5. Brainstorming sessions are not as much of a “discussion” as a focus group is – rather, the goal is to keep things moving and use the ideas shared to spark additional ideas.
  6. Ideally, the client team (often consisting of 4 to 6 people) is encouraged to be fully engaged in the process and to use the ideas from the consumers to help spark their own thinking. In the end, it is often the client team members who end up generating the best, most workable ideas.

Getting the best value from brainstorming sessions also requires following a number of important steps to ensure that good quality ideas are generated. In our experience, the most effective brainstorming sessions consist of:

  1. Introductions and training in the rules of brainstorming.
  2. IDEA GENERATION. Each participant shares one idea at a time, the facilitator probes for clarification if necessary, and other participants share any “builds” they have on the idea. A “build” is a new idea that is sparked by the original idea shared. The participants continue to generate and share ideas throughout the session, while the client team listens in the backroom and builds their own ideas.
  3. Negative comments quickly shut down the idea-generating process; therefore, participants are taught to approach ideas with a specific mindset. If they hear a new idea they dislike, rather than share this negative reaction, they instead focus on generating a new idea that fixes what they don’t like or simply move on to sharing another idea they have generated.
  4. The client team is brought in with the participants mid-way through the session and the client team members work in small teams with the consumers. Typically, each small team is asked to consider the ideas they heard throughout the session and then develop their own “ideal” solution to the project’s challenge. This co-creation process yields a range of different “ideal” solutions for the client team to consider after the session, as they choose and develop their final ideas.

Tuesday, April 24, 2012

Getting the Best Value from Open-Ended Questions

An article by Carolyn Lindquist called “For Better Insights From Text Analytics, Elicit Better Comments” in the most recent edition of Quirks Marketing Research (April 2012) gives three recommendations for improving the quality of consumer responses to open-ended questions. These three recommendations are:

1. Target your questions

2. Ask why

3. Be sensitive to placement

Based on my own experience, these are worthwhile considerations when designing surveys. I think most quantitative researchers – including me! – can fall into the twin traps of asking too many open-ends in a single survey and not defining those open-ends as clearly as possible.

I’m a strong believer in what I consider “directed open-ends,” which means that the wording is specific to the situation rather than a catch-all “please list comments below.” For example, in concept tests, I strongly believe in asking for strengths and weaknesses separately, which makes the survey both easier to answer and easier to analyze. This is consistent with Carolyn’s recommendation to “target your questions” – the example she gives is to vary the open-ended question text according to the stated level of overall satisfaction.

I’m intrigued by Carolyn’s suggestion to ask “why” rather than “what” questions, as she has found that asking “why” (such as “please tell us why you were less than satisfied with your experience”) yields longer and more useful answers than asking “what” (as in “please tell us what we can do to improve your next experience”). She has found that the responses to “what” questions contain less detail and emotion than the answers to “why” questions. I think this suggestion is worth testing out. However, this does not mean we should ask “why” after every rating question, as we’ve had some clients request a few times over the years!

I also agree with her third recommendation on being sensitive to the placement of open-ended questions, although I don’t agree with her suggestion that open-ends should only be asked at the end of a survey. In my experience, open-ended questions should appear where they make the most sense in a survey, and a nice balance of quantitative rating questions and open-ends makes for a more pleasant and natural survey-taking experience. One caveat though – I avoid having too many open-ended questions listed sequentially, as too many open-ends in a row can make the survey feel longer than it actually is and lead to respondent fatigue.

Thursday, April 19, 2012

Some Practical Advice on Statistical Testing

One thing that I am willing to admit is that I am a very “practical” researcher, meaning that when constructing a story I prefer to rely on the craft of analysis more than on statistical analyses. This is not to say that advanced statistical tools do not have their place within a researcher’s tool box, but they should not substitute for the attention required to carefully review the results and to dig deep through cross tabs to uncover the patterns in the data so as to create a relevant and meaningful story. Remember the adage – “the numbers don’t speak for themselves, the researcher has to speak for the numbers.”
A great example of this is the use – and misuse – of statistical testing. I would never claim to be a statistician but, over the years, I’ve found that the type of statistical testing that often accompanies data analysis has very limited uses. In a nutshell, statistical testing is great for warning analysts when apparent differences in percentages are not significantly different. This is extremely important when deciding what action to take based on the results. However, such testing is of no use on its own when determining whether statistically significant differences are meaningful. In my experience, statistical significance works as a good test of difference, but such differences alone are insufficient when analyzing research data.
I love this comment from an article by Connie Schmitz on the use of statistics in surgical education and research that “Statistical analysis has a narrative role to play in our work. But to tell a good story, it has to make sense.” (http://www.facs.org/education/rap/schmitz0207.html) She points out that, with a large enough sample size, every comparison between findings can be labeled “significant,” as well as concluding that “it is particularly difficult to determine the importance of findings if one cannot translate statistical results back into the instrument’s original units of measure, into English, and then into practical terms.”
The idea of translating survey results into practical terms represents the very foundation of what I believe market research should be doing. This same idea is highlighted in an article by Terry Grapentine in the April 2011 edition of Quirks Marketing Research called “Statistical Significance Revisited.” Building on an even earlier Quirk’s article from 1994 called “The Use, Misuse and Abuse of Significance” (http://www.quirks.com/articles/a1994/19941101.aspx?searchID=29151901), he stresses that statistical testing does not render a verdict on the validity of the data being analyzed. He highlights examples of both sampling error and measurement error that can have major impacts on the validity of survey results that would not at all affect the decision that a particular difference is “statistically significant.” I agree wholeheartedly with his conclusion that “unfortunately, when one includes the results of statistical tests in a report, doing so confers a kind of specious statement on a study’s ‘scientific’ precision and validity” while going on to point out that “precision and validity are not the same thing.”
Personally, I find it especially frustrating when research analysis is limited to pointing out each and every one of the statistically significant differences, with the reader expected to draw their own conclusions from this laundry list of differences. How can that possibly be helpful in deciding what action to take? In this case, the researcher has simply failed to fulfill one of their key functions – describing the results in a succinct, coherent and relevant manner. In contrast, I believe that I follow the recommendation of Terry Grapentine (and of Patrick Baldasare and Vikas Mittel before him) that researchers should be seeking and reporting on “managerial significance,” by focusing on the differences in survey results “whose magnitude have relevance to decision making.” This is quite a different approach than simply reciting back the results that are statistically different.
Going back to Connie Schmitz’s article, she closes with a great observation conveyed by Geoffrey Norman and David Streiner in their book PDQ Statistics:
“Always keep in mind the advice of Winifred Castle, a British statistician, who wrote that, ‘We researchers use statistics the way a drunkard uses a lamp post, more for support than illumination’.”

Tuesday, April 17, 2012

The Risks of Projecting Survey Results To A Larger Population

In my experience, most quantitative research results are analyzed on the basis of the survey results themselves – such as the percentage distributions on rating scales – without the need to project results onto the larger population that the sample represents. It is generally understood that, with reasonably rigorous sampling procedures, these distributions are reflective of the attitudes held by the population at large.

In some instances, though, it is important to project to the larger group, such as when creating estimates of product use based on concept results. In these cases, we face a special challenge – do we take consumers at their word and simply extrapolate their answers to the larger population or do we use some combination of common sense and experience to adjust the data?

Although there are many sophisticated models for translating interest in a new product or service into projections of first year use, most include “adjustments” to the survey data to account for typical consumer behavior, such as:

1. The typical 5-point purchase intent scale is weighted in order to more accurately predict what proportion of the population will actually try the product. For example, the proportion of those who would “definitely buy” might be given a weight of 80% to reflect a high, but not absolute, likelihood of buying whereas those who would “probably buy” might be given a weight of just 20%.

2. These results assume 100% awareness of the new product or service, so further adjustments are required to account for the anticipated build in awareness, usually as a result of advertising, and

3. Some estimate of repeat purchase is required, often derived from consumer experience with the new product or service or from established market results.

We take these steps to mitigate the risk of simply applying the survey results to the total population, as this could wildly inflate potential use of a new product or service.
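
To make the arithmetic behind these adjustments concrete, here is a rough sketch that strings the three steps together. The 80%/20% weights are the illustrative figures mentioned above; the stated interest levels, the awareness figure and the repeat rate are hypothetical placeholders, not inputs from any real forecasting model.

```python
# Rough sketch of a first-year adjustment chain: weighted purchase intent,
# then awareness, then repeat purchase. All inputs are illustrative only.

definitely_buy = 0.25      # share of respondents choosing "definitely buy" (hypothetical)
probably_buy = 0.40        # share choosing "probably buy" (hypothetical)

weighted_trial = definitely_buy * 0.80 + probably_buy * 0.20   # illustrative 80%/20% weights

year_one_awareness = 0.35  # hypothetical awareness built through advertising
repeat_rate = 0.50         # hypothetical share of triers who purchase again

projected_trial = weighted_trial * year_one_awareness
projected_repeat = projected_trial * repeat_rate

print(f"Stated 'top two box' interest: {definitely_buy + probably_buy:.0%}")
print(f"Adjusted trial estimate:       {projected_trial:.1%}")
print(f"Estimated repeat purchasers:   {projected_repeat:.1%}")
```

The point of the sketch is simply that the adjusted figures end up far below the raw “top two box” number, which is exactly why applying survey results directly to the total population can wildly inflate potential use.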

This issue came to my mind this weekend when reading a New York Times article called “The Cybercrime Wave That Wasn’t” (http://www.nytimes.com/2012/04/15/opinion/sunday/the-cybercrime-wave-that-wasnt.html) in which Dinei Florêncio and Cormac Herley of Microsoft Research conclude that, although some cybercriminals may do well, “cybercrime is a relentless, low-profit struggle for the majority.”

Part of their analysis questions the highly-touted estimates of the value of cybercrime, including a recent claim of annual losses among consumers at $114 billion worldwide. This estimate makes the value of such crime comparable to estimates of the global drug trade. As it turns out, however, Florêncio and Herley conclude that “such widely circulated cybercrime estimates are generated using absurdly bad statistical methods, making them wholly unreliable.” This is a very practical example of how results from what appear to be reasonably large research samples can run into critical problems of statistical reliability, whether through poor sampling, naïve extrapolation or other sorts of statistical errors. In the case of the cybercrime estimate, it appears that the estimates of losses that come from just 1 or 2 people in the research sample are being extrapolated to the entire population, which means that the answers of a handful of respondents, accurate or not, end up dominating the entire estimate.

In this particular example, a more accurate approach would be to separate the “screening” sample – i.e., identifying those consumers who have been victims of cybercrime using an extremely large database – from the “outcome” sample. In other words, if the goal is to estimate the impact of cybercrime, the objective should be to find a reliable sample of victims and interview them on their experience, including the extent of their losses. This approach would provide a much more rigorous basis for estimating the total value of cybercrime. However, caution should still be exercised when projecting to the total population.

The key learning is that anytime we have data we want to extrapolate, we need to think about how much we trust that data to be accurate. There are some things consumers can report with superb accuracy - where they ate lunch today, the size of their mortgage payment, how many pets are in their homes. Assuming a decent survey sample, data of this sort can be easily extrapolated to a larger population. But other kinds of data are less accurate, whether due to the limits of human recall or various other forms of bias. Studies have shown, for example, that survey respondents cannot accurately recall where they ate lunch a week or two ago (recall error), tend to under-report their alcohol consumption (social desirability bias) and over-estimate their future purchases of products we show them in concept tests.

So, if we wish to extrapolate from our survey data to a larger population, we have to be honest about how accurate the results are, what sorts of bias might inflate or deflate the numbers, and what sorts of adjustments, if any, we should make. And when we see stories in the media with giant estimates of the prevalence of some sort of crime, social problem or behavioral trend, we need to take a moment to ask how they came up with those numbers. Often, with a little digging, we see problems in how these estimates were created, leading to the same need for logic and common sense that we find when dealing with our own market projections.

Monday, April 9, 2012

How and When Should I Use Statistical Testing?

Statistical testing is a common deliverable provided by market research vendors. But in some cases the users of the research findings may be uncertain about what the statistical testing really means and whether or not it should influence the way they use the data. Below are six key questions to keep in mind when using statistical testing.
1. What kind of data am I dealing with? Statistical testing can only be applied to quantitative data, such as survey data. There are no statistical tests for qualitative data, such as focus groups and in-depth interviews.
2. What am I trying to learn? Most statistical testing is used primarily to help decide which of the differences we see in our data are real in terms of the population we are interested in. For example, if your findings show that 45% of men like a new product concept and 55% of women like the concept, you need to decide if that difference is real – that is, whether the difference seen in your survey accurately reflects a difference between men and women that exists in the larger population of target consumers. (A short sketch of a test for exactly this kind of difference appears after this list.)
3. How certain do I need to be? Confidence intervals are the most common way of deciding whether percentage differences of this sort are meaningful. The size of a confidence interval is determined by the level of certainty we demand – usually 90% or 95% in market research, 95% or 99% in medical research – and the size of our sample relative to the population it is drawn from. The higher the level of certainty we demand, the wider the confidence interval will be – with a very high standard of certainty, we need a wide interval to be sure we have captured the true population percentage. Conversely, the bigger the sample, the narrower the confidence interval - as the sample gets bigger it becomes more and more like the target population and we become more certain that the differences we see are valid.
4. How good is my sample? Most statistical tests rely on key assumptions about how you selected the sample of people from whom you collected your data. For tests like the confidence intervals described above, this key assumption is having some element of random selection built into your sample that makes it mathematically representative of the population you are studying. The further your sampling procedure strays from this assumption, the less valid your statistical testing will be. If you can make the case that your sample is not biased in any important ways relevant to your research questions, you can rely on your stats tests to identify meaningful differences. If you have doubts about your sample, use the tests with caution.
5. Does my data meet other key assumptions about the test? Some stats tests assume particular data distributions, such as the bell-shaped curve which is an underlying assumption for confidence intervals. If your data are distributed in some other way – lop-sided toward the high or low end of the scale or polarized – the stats test is worse than worthless, it will actually be misleading!
6. Does the stats testing seem to align with other things I know about the research topic? Stats tests should supplement your overall understanding of the data. They are not a substitute for common sense. Keep in mind that most data analysis software will produce stats tests automatically, whether or not the tests are appropriate for the particular data set you are using. Almost every experienced researcher has watched someone (or been someone) trying to explain a “finding” that was nothing more than a meaningless software output.
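As a quick illustration of question 2, here is a small sketch of a two-proportion z-test applied to the 45% vs. 55% example; the sample sizes (200 men and 200 women) are assumptions chosen purely for illustration.

```python
# Back-of-the-envelope check of the 45% (men) vs. 55% (women) example,
# using a two-proportion z-test with a normal approximation for the p-value.
import math

def two_proportion_z(p1, n1, p2, n2):
    """Return the z statistic and two-sided p-value for a difference in proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

z, p = two_proportion_z(0.45, 200, 0.55, 200)   # assumed sample sizes
print(f"z = {z:.2f}, p = {p:.3f}")              # p below 0.05 counts as "real" at the 95% level
```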
If you can provide honest, satisfactory answers to these six questions, stats testing can hugely improve your understanding of your data and help you identify its key themes. And likewise, these key questions can keep you from wasting your time analyzing differences that aren’t really there.

Tuesday, January 24, 2012

How Many People Do I Need To Survey To Get Meaningful Answers?

A central decision for anyone considering a survey – or any other quantitative research – is figuring out how big the survey sample needs to be in order to produce meaningful answers to the research questions. Researchers focus on sample size because it ties together three core aspects of any research effort:

  • Cost – the bigger the sample, the more it will cost to collect, process and analyze the data
  • Speed – the bigger the sample, the longer it will take to collect it (big samples can sometimes be collected quickly, but usually only by further raising costs!)
  • Accuracy – the bigger the sample, the more certain we can be that we have correctly captured the perceptions/opinions/behavior/beliefs/feelings of the population we are interested in (the technical term for this is statistical reliability)

As we see from these three bullets, the decision about sample size essentially boils down to a trade-off between cost, speed and accuracy. So when we pick a sample size we are making a decision about how much accuracy we are going to purchase, within the framework of our budget and timing.

Fortunately for researchers, quantitative samples do not have to be enormous to provide findings that are accurate enough to answer most market research questions. Any unbiased sample (we’ll talk about sample bias in another blog entry) of 50 or more stands a halfway decent chance of giving you a reasonable view of the population it is drawn from and, as we increase the sample size, our confidence that we have the correct answer increases. We can show this effect by looking at the margin of error – the plus or minus number – for some common sample sizes. To keep it simple, all of these are calculated using the assumption that the sample is drawn from a large population (20,000 or more) and that we are using the 95% confidence level of statistical reliability (the most typical standard for statistical reliability used in market research). If we are looking at percentages:

  • A sample of 100 has a margin of error of ± 9.8%
  • A sample of 250 has a margin of error of ± 6.2%
  • A sample of 500 has a margin of error of ± 4.4%
  • A sample of 1,000 has a margin of error of ± 3.0%
  • A sample of 2,000 has a margin of error of ± 2.1%
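
For readers who want to check the arithmetic, here is a minimal sketch of the standard margin-of-error formula behind these figures, assuming the 95% confidence level (z = 1.96) and the worst-case percentage of 50%; the slightly lower figures quoted above for the larger samples reflect rounding and the large-population assumption (roughly 20,000) noted earlier.

```python
# Sketch of the margin-of-error calculation at 95% confidence, worst-case p = 0.5.
# The figures in the post for the larger samples are a touch lower because they
# also reflect a finite population correction for a population of about 20,000.
import math

def margin_of_error(n, z=1.96, p=0.5):
    return z * math.sqrt(p * (1 - p) / n)

for n in (100, 250, 500, 1_000, 2_000):
    print(f"n = {n:>5}: +/- {margin_of_error(n):.1%}")
```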

Looking at these numbers you can see why national surveys, such as the big public opinion polls shown on TV or in newspapers, often have samples around 1,000 or so. Samples in that size range have small margins of error, and doubling the sample size wouldn’t make the margin of error much smaller – there’s no reason to spend money making the sample bigger for such a small gain in accuracy.

These numbers also show why we often urge clients to spend a bit to make a small sample bigger, but not too big! The gains in accuracy are all at the beginning – moving from a sample of 100 to something larger is almost always a good idea, while adding anything over 1,000 usually is not. So the rule of thumb is: less than 100 is probably too small and more than 1,000 is probably more than you need.

Of course, in real life it can be more complicated. We may need to examine sub-groups (age or income groups, political parties, geographic regions, etc.) within the population we are looking at. If a sub-group is small, we may need a bigger overall sample to capture enough of each of the sub-groups in order to provide an accurate picture of their views. So we have a rule of thumb about sub-groups, as well – don’t make decisions about any sub-group smaller than 30. For example, if we do a survey of households in a large urban area and we want to compare households by income level, we need to make our sample big enough to have at least 30 households in each of the income categories we want to compare. Assuming this is a normal city, there will be fewer households at the high end of the income distribution than at the low end, so we need to think about how to get enough of the high-end households to be able to make that comparison. So, if we want to be able to look at households with income over $100K, and 15% of the population has an income of $100K or more, we need to have a sample of at least 200 households to ensure that 30 of the households would be in that category.
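
The same sub-group arithmetic can be written as a tiny sketch; the 15% share and the minimum of 30 are simply the figures from the example above.

```python
# Tiny sketch: how big must the overall sample be so that a sub-group making up
# 15% of the population yields at least 30 respondents?
import math

MIN_SUBGROUP = 30        # rule-of-thumb minimum from the post
subgroup_share = 0.15    # share of households with income over $100K (from the example)

# round() guards against floating-point noise before rounding up
required_total = math.ceil(round(MIN_SUBGROUP / subgroup_share, 6))
print(f"Minimum overall sample: {required_total} households")   # prints 200
```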

Using these rules of thumb, you can form an idea about how big your sample needs to be to answer your research questions - without spending more than you can afford.

Friday, January 13, 2012

The “Dirty Dozen” – The Most Common Things That Cause Market Research To Go Wrong

Clients who are new to market research often worry that they will spend time and money on a market research project only to later realize that errors in the design or execution of the study have rendered their investment much less valuable than they had hoped. And they are right to worry – there are plenty of cases of market research blunders and even examples where market research findings have led to product design or marketing decisions that were worse than if there had been no research at all!
There are lots of individual events or mistakes that can knock market research projects off track and a short blog entry could never list them all. But there are a few types of errors that account for most of the problems. We think of these as the “dirty dozen” – the common mistakes that we see over and over again and that rob market research studies of their potential value to decision makers. The tables below list these problems by the stage of the research project where they typically occur, along with the consequences the problem brings and – most importantly – ways to avoid them.
At the research design stage:
Problem: Poorly formulated objectives or research questions
Consequences: The data collected will not address the real issues, resulting in findings that are vague, inconclusive or even misleading.
Solution: Write out your objectives and research questions and think about what kind of data would count as an answer to each one. Don’t skip this step or assume that everybody on the project has a shared understanding of what the research is supposed to accomplish.

Problem: Poor choice of data collection method(s)
Consequences: Lack of insight when the needed depth or breadth (or sometimes both!) is missing from the collected data.
Solution: Make the method(s) suit the objectives and research questions. Don’t get locked into “standard” approaches or doing what is easy rather than what is best for the study.

Problem: Sample design issues
Consequences: Asking questions of the wrong people can produce misleading answers in qualitative studies or statistically invalidate a quantitative project.
Solution: Be explicit about the sample parameters. Know who you are going to talk to and exactly what larger population the sample is supposed to represent.

Problem: Poorly designed/untested research instrument
Consequences: Garbage in – garbage out. Failing to ask well-thought-out questions, whether in a survey, a focus group or an in-depth interview, will produce poor quality results.
Solution: Every question you ask should have a purpose that relates back to the research objectives. Don’t neglect review and testing of the questionnaire or interview guide.
At the data collection stage:
Problem: Inadequately trained or prepared data collectors
Consequences: Data that is inconsistently gathered can produce gaps, validity problems and lack of depth.
Solution: Use professionals who know their jobs and have proven track records. Even experienced survey data collectors, interviewers and focus group moderators need to practice with the research instrument.

Problem: Failure to meet the sample specifications
Consequences: If the sample you get is not the sample you intended, you may have data that is not pertinent or that misrepresents the views of the target population.
Solution: Have good quality control on the sample. If adjustments have to be made, be very sure you are not giving up the validity of your sample in order to fill your groups or meet numerical quotas.

Problem: Quality control issues
Consequences: Poor quality control can result in errors in the data files, or data that is missing or miscategorized.
Solution: Have a plan to check incoming data as it is collected. Don’t wait until data collection is over to begin the process of checking for errors or problems.

Problem: Loss of data
Consequences: You can’t analyze data that has disappeared.
Solution: Have back-ups (and more back-ups). Never let data reside longer than necessary in a single location or file. Have security and back-up procedures for all data storage media.
At the data analysis/interpretation stage:
Problem: Lazy/incomplete review of the raw data
Consequences: Important insights can be missed.
Solution: Have a data analysis plan that sets out how the raw data will be handled. Allow enough time for data review and processing. Don’t rely on human memory or quick skimming to capture all the meaning that the data holds.

Problem: Inappropriate data reduction techniques
Consequences: Each time data is reduced, whether through coding of qualitative data or numerical consolidation of quantitative data, there is some potential loss of information or important details.
Solution: Make sure your chosen data reduction techniques capture the themes, ideas and categories that will answer the research questions. Don’t be afraid to recode or re-analyze if new issues emerge while data reduction is in progress. Remember, recoding means you have learned something from your data – it’s a step forward, not a step back!

Problem: Over-reliance on statistics
Consequences: Interpretation that is guided only by statistical testing runs the risk of missing insights that didn’t quite pass the test criteria or finding “insights” that are really just an artifact of the statistical method.
Solution: Know the strengths and weaknesses of the statistics you use. Use them as guidelines and tools, not as the word of the research gods. Statistics are not a substitute for common sense or familiarity with your data and your research topic.

Problem: Canned answers
Consequences: Having a bias toward a particular answer or type of interpretation can blind you to new themes and ideas that emerge from your data.
Solution: Keep an open mind. Let the data speak to you. Think about what would count as disproof of your preferred interpretation and make sure that there’s a way for that evidence to emerge.
As you can see, there are many potential pitfalls and room for error in market research projects of any type. Careful planning, design, oversight and analysis are absolutely key to getting the best value from the money you spend on market research!

Friday, January 6, 2012

How Do Qualitative and Quantitative Research Differ?

For market researchers, the distinctions between qualitative and quantitative research are very clear, even to the point where many researchers specialize in one or the other of these two types. If the difference is so clear in our minds, then why is it so difficult to write a clear, straightforward, practical and meaningful explanation of the difference?
Most explanations of the qualitative/quantitative distinction tend to focus on listing the typical methods used for each. But this puts the emphasis on the means – the way the data is collected. The more important issue in thinking about the qualitative/quantitative distinction is on the ends. It is the type of research questions you are trying to answer, and more importantly, what type of information would count as a meaningful answer, that determines whether your approach should be qualitative, quantitative or a mix of both.
For example, if your research question is about how some behavior or opinion is distributed across different groups of people, geographic areas or across time, you need numbers – usually percentages – to make these comparisons. This sort of numerical comparison implies a quantitative approach and all the methodological trappings that go with it, including:
  • Closed-end survey-type questions
  • Relatively large numbers of research participants or direct observations
  • A sampling approach that ensures that you have the statistical validity to project the findings from your sample onto the population you are interested in
In contrast, if your research questions are less about comparisons and more about understanding chains of logic, decision processes, group norms, reactions to ideas or sensory stimuli, or cultural domains, you probably need a qualitative approach. Qualitative research focuses on understanding the human experience in context by either:
  • Directly embedding the research in the scene or cultural setting (ethnography)
  • Using group dynamics to explore norms, attitudes, reactions and beliefs (focus groups)
  • Employing inductive questioning to probe reasoning, causation and psychology (in-depth interviews)
In short, it is knowing both the research question and what counts as an answer to the question that determines which “toolkit” the researcher will turn to for a specific project.
That said, the stage of our knowledge about a topic is often an important factor. When we first begin learning about a new topic we are usually in an exploratory mode. We may still be developing our vocabulary about the topic and couldn’t write a good survey about it if we tried – we don’t know enough to write good closed-end questions! Qualitative research is often used at this stage. The very “hands-on” qualitative techniques enable us to learn the lay of the land – how consumers think and speak about the topic. When we see research objectives using verbs like explore or identify – we are usually at this stage of knowledge and lean toward qualitative techniques. We may also use qualitative research at a later stage of knowledge to help interpret quantitative data. In these cases we use interviews or focus groups to understand the correlations that appear in survey data.
Quantitative studies are usually done only in areas where we already have the appropriate knowledge and vocabulary to design good survey questions. We have to know enough about what to ask about, how to ask it, and who to ask it of, to meaningfully quantify the responses. Research objectives that include words such as test, measure or evaluate are more typical of this stage of knowledge.
In short, thinking about qualitative and quantitative research is not about choosing methods or techniques – it is about understanding the nature of your research questions and having a good sense of what type of data – percentages, narratives, images, word counts, observations, maps of logic chains or decision processes – would provide useful answers. If you are clear about what you are asking and what you need to know, the choice of methods will largely take care of itself.