When you start to design a customer feedback survey program, you can easily fall prey to “death by bias”: survey results tainted by one bias or another.  Sometimes a particular bias matters and sometimes it doesn’t, but you should understand the biases that could send you down the wrong improvement path because you are reacting to a flawed survey result.  In this post, I will describe some of the biases that can play the most havoc with the accuracy and significance of your survey results.

What is Survey Bias?

The basic premise of surveying is that the sample of customers completing your survey is representative of your total customer base. In survey sampling, bias refers to the tendency of a sample statistic to systematically over- or under-estimate a population parameter.  That is fancy talk for results that are not representative of the total population because of a systematic slanting of collected responses.
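
To make that definition concrete, here is a minimal Python sketch (every number in it is invented for illustration) contrasting a random sample with a sample in which happier customers are more likely to respond. The biased sample’s mean systematically overshoots the true population mean, which is exactly the “systematic slanting” described above:

```python
import random

random.seed(42)

# Hypothetical population: 10,000 customers with "true" satisfaction
# scores centered at 7 on a 0-10 scale. All numbers are illustrative.
population = [min(max(random.gauss(7.0, 2.0), 0.0), 10.0) for _ in range(10_000)]
true_mean = sum(population) / len(population)

# Unbiased: every customer is equally likely to be invited and respond.
random_sample = random.sample(population, 500)

# Biased: happier customers are more likely to end up in the sample,
# mimicking a hand-picked invite list of customers who "love" you.
biased_sample = random.choices(
    population, weights=[(s + 1) ** 3 for s in population], k=500
)

print(f"True population mean: {true_mean:.2f}")
print(f"Random sample mean:   {sum(random_sample) / 500:.2f}")  # close to truth
print(f"Biased sample mean:   {sum(biased_sample) / 500:.2f}")  # systematically high
```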

Types of Biases

Non-random list of potential survey respondents – This typically occurs early in your journey of collecting and using feedback.  The most likely cause is customer-facing employees (e.g., sales, support, service) offering to provide a list of survey invitees who will “definitely” complete the survey.  However, these selected customers are the ones who “love” the company, products, and people, so you do not get a true picture of how your entire customer base actually feels.

Survey mode – There is a significant difference in response results between telephone and web surveys.  (For the academically minded, please download “Survey Mode Impact Upon Responses and Net Promoter Scores” here.)  In our paper, we showed that the difference between survey modes on the “NPS question” shows up in the mean score (using a 0 to 10 scale) for each mode, listed here:

Telephone Mean Score: 8.79

Web Mean Score: 7.44

This bias deserves special attention because many companies are migrating from the more expensive phone surveys to web surveys as they collect email addresses, and they then combine all the results into one data set.  The problem is that, over time, they will have more responses from the lower-scoring web surveys, and they may believe their customers are becoming less satisfied when this is not necessarily the case.  Ignore at your own peril!  The quick sketch below makes the drift concrete.
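
To see how damaging the mix shift can be, here is a short Python sketch using the two mode means from our paper; the quarterly web-share migration schedule is purely hypothetical:

```python
# Illustrative arithmetic only: the two mode means come from the paper above;
# the quarterly web-share schedule is hypothetical.
PHONE_MEAN = 8.79
WEB_MEAN = 7.44

web_share_by_quarter = [0.10, 0.30, 0.50, 0.70, 0.90]

for quarter, web_share in enumerate(web_share_by_quarter, start=1):
    blended = web_share * WEB_MEAN + (1 - web_share) * PHONE_MEAN
    print(f"Q{quarter}: web share {web_share:.0%} -> blended mean {blended:.2f}")

# The blended mean drifts from 8.66 down to 7.58 even though neither
# mode's score changed: the apparent "decline" is pure mix shift.
```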

Demographics – A respondent’s age, gender, education level, and where they live all have an effect on results.  However, as long as your respondent mix stays basically the same from survey to survey, the variations average out consistently and you are OK to use these results.  Here are some examples of how these variables affect NPS® results (all derived from the same survey by Satmetrix®); a quick sketch of the NPS calculation follows the list:

  • Age – NPS varies between 14 (ages 18-29) and 43 (70+)
  • Gender – NPS for men = 25, women = 30
  • Location (in the U.S.) – NPS = 27 in the West and 35 in the South
  • Education – NPS ranges from 16 (Ph.D.) to 37 (high school graduate)
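
For readers comparing those segment numbers, here is a minimal sketch of the standard NPS calculation (percent promoters, scores 9-10, minus percent detractors, scores 0-6); the two response lists are hypothetical, not Satmetrix data:

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)
    on the standard 0-10 likelihood-to-recommend scale."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# Hypothetical responses from two demographic segments.
segment_a = [10, 9, 9, 8, 7, 10, 9, 6, 9, 10]  # skews enthusiastic
segment_b = [8, 7, 6, 9, 5, 7, 8, 6, 7, 8]     # skews passive

print(nps(segment_a))  # 60: 7 promoters, 1 detractor out of 10
print(nps(segment_b))  # -20: 1 promoter, 3 detractors out of 10
```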

Culture – The country where the respondents live or do business really affects results.  A European consultant friend shared some data with me that illustrate this bias: using a 0 (low satisfaction) to 5 (high satisfaction) scale, the U.S. rates a 4.54 (reasonably high), while France is 4.19 (very low) and Switzerland a 4.69 (very high).

Subject of the survey – Using data from the same source as above, there is a significant difference in satisfaction levels between products and services.  One example is in the U.S., where satisfaction with products was about 8% higher than satisfaction with services.

In addition, numerous other factors can impact survey results.  For example:

  • Question wording
  • Scale design
  • Positioning of questions (start of survey or end)
  • Geography (impact of different cultures)
  • Survey length
  • Use of incentives and reminders

What Do All These Biases Mean?

If your company is randomly inviting customers to participate in your survey, and is less concerned with the absolute “number” than with the trend, then all this is just interesting.  However, once you start to compare your company to other companies, you are asking for trouble: you have no idea how their results were generated, and hence which biases are in play and how much they distort the comparison.  This same effect occurs when different divisions in a company, or different geographic regions within a single division, are compared.

If this topic piques your interest, you have plenty of sources of information; Google returned 69,100,000 results when I typed “survey bias” into the search box.  For most of us, the best approach is to randomly select invitees from your customer list, get enough completed surveys, and only compare yourself against yourself (trending).  Do that and you will be fine.
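
If you want to put the “randomly select invitees” advice into practice, a plain random sample is all it takes; this sketch assumes a hypothetical customer list exported from your CRM:

```python
import random

# Hypothetical customer list; in practice this would be a CRM export.
customers = [f"customer{i}@example.com" for i in range(25_000)]

# Simple random sample: every customer has an equal chance of being
# invited, avoiding the hand-picked-list bias described earlier.
invitees = random.sample(customers, k=1_000)

# Use the same selection method every survey wave so period-over-period
# trends compare like with like.
```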

Good surveying!