KS&R Blogs

jkraus@ksrinc.com

Blogger’s Disclosure: The following blog entry may be considered "old school" or even "outdated" to some – I beg to differ!

When I first started out in market research 20+ years ago, we regularly pre-tested quantitative surveys with a smaller set of respondents before starting data collection to ensure the questionnaire was well understood, engaging, and promoted thoughtful responses from participants. We subscribed to the adage "if you're going to do something, you might as well do it right". In our experience, pre-tests are a rarity in today's fast-paced, "I've got to have the answer yesterday" mentality, where budgets are tight and patience is thin.

Let's face it, pre-testing is not an enjoyable experience for most since you're really looking for what's wrong with your survey (although what's right about it is just as important). I mean, would Picasso step away from what he thought was a completed painting and embrace the opportunity for other people to find flaws in it? Perhaps the analogy is a bit overdone, but researchers tend to find perfection rather than imperfections in their surveys, even though our job is to get at the "truth" through a crisp, well thought-out, and highly engaging survey. Dare we think our survey doesn't pass muster on any of these accounts?

There is also a certain out-of-sight, out-of-mind mentality to survey research nowadays, particularly as we continue to move more and more to online data collection. We can't be in the minds of all the respondents as they complete a survey, but since the survey is presumably so well written (see "pre-testing isn't fun" above), we take a leap of faith that everything is "fine" and that respondents have complete clarity about what they're being asked and are highly engaged throughout the entire survey.

Finally, and perhaps more to the point, pre-testing is no longer in vogue (dare I say it's "uncool"). Who wants to take a stand and say we need to slow down (just a little) to make sure we've got this one right, because too much is at stake if it isn't? I've silenced an entire room taking that stand on various occasions over the last few years (for the record, also "not fun").

These are all fair points, or at least honest ones. And I've certainly missed an opportunity to pre-test a survey at different times in my career when my brain and my gut told me to insist on doing so – we all have! However, and it's a BIG however, we as researchers need to bring pre-testing back into the mainstream as a standard practice, at least in the following situations:

  • For more complex survey instruments that include new and/or detailed information that is critical for respondents to understand well in order to provide thoughtful input (for example, new product testing to ensure all features and benefit attributes are well understood).
  • For longer questionnaires, where there is a question about whether or not respondents will stay engaged throughout the entire survey and/or a need to identify the point that fatigue starts to set in.
  • For studies involving long lists of attributes where there is a potential opportunity to pare them down to reduce respondent fatigue (e.g., attributes that respondents consider very similar, those that have little or no importance, etc.).
  • For choice-based studies where the main objective is to provide as close to a real-world buyer's experience as possible in a research environment. If the choices we're presenting to respondents are not well understood or are overly complex, we're defeating the very purpose for which this design was chosen in the first place. Since these studies usually involve opportunity forecasting or product optimization, they also demand a level of precision that depends on well-understood choice sets.
  • Any study that is deemed particularly strategic or mission-critical, and/or is tied to significant potential gain or loss depending on the decisions informed by the research. In other words, studies where the stakes are too high to risk faulty research results due to a poorly constructed survey.
  • Any study where the results will be deeply scrutinized and it's imperative to provide evidence that the survey itself was fully tested and (as appropriate) adjustments were made to ensure it was well understood and engaging to respondents.

Although there are honest and practical reasons that pre-testing has fallen out of favor over the past several years, the truth is that the downside risk of not executing a well-thought-out pre-test in certain situations is too great to dismiss. If there isn't a little extra time and budget to pre-test, an even more honest question might be whether the research is worth doing in the first place.

Note: in a future blog, I'll provide suggestions for executing effective pre-tests when time and money are tight (as they usually are!).


Jim Kraus
Vice President, Principal


Jim has over two decades of market research experience on the client and supplier side, with particular expertise in the Technology, Financial Services, and Consumer industries. Jim's passion is helping his clients bring products and services to market that are competitively differentiated, meet critical customer needs, are optimally positioned, and result in a great customer experience before and after the sale. Known as a creative, results-oriented researcher, Jim leads KS&R's efforts to continuously bring innovative tools and methodologies to our clients that enhance learning and produce a more positive impact on their business performance. Jim holds an MBA from Rutgers University in New Jersey. Outside of work, Jim enjoys reading, golf, softball, and spending time with his wife, daughters, and two dogs (also female!).