When you design a customer feedback survey program, you can easily fall prey to “death by bias.” Biases will taint your survey results. Sometimes a particular bias matters, and sometimes it doesn’t. You should be aware of and understand the biases that could send you down the wrong improvement path because you are reacting to a flawed survey result. In this post, I will describe the biases that can wreak the most havoc with the accuracy and significance of your survey results.
What is Survey Bias?
The basic premise of surveying is that the sample of customers completing your survey represents your total customer base. In survey sampling, bias refers to the tendency of a sample statistic to systematically over- or under-estimate a population parameter. That is fancy talk for results that do not represent the total population because the collected responses are systematically slanted.
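To make that definition concrete, here is a minimal Python sketch (all numbers are invented for illustration) contrasting a random sample with a sample skewed toward happy customers:

```python
import random

random.seed(42)

# Hypothetical population: 10,000 customers with satisfaction scores
# on a 0-10 scale (numbers invented for illustration).
population = [min(10, max(0, random.gauss(7.0, 2.0))) for _ in range(10_000)]

def mean(xs):
    return sum(xs) / len(xs)

# Unbiased: every customer is equally likely to be invited.
random_sample = random.sample(population, 500)

# Biased: happier customers are more likely to respond, as with a
# hand-picked list of "fans" from the sales team.
biased_sample = [s for s in population if random.random() < (s / 10) ** 2][:500]

print(f"Population mean:    {mean(population):.2f}")
print(f"Random sample mean: {mean(random_sample):.2f}")  # close to the population mean
print(f"Biased sample mean: {mean(biased_sample):.2f}")  # systematically too high
```

Run it with a few different seeds: the random sample bounces around the true mean, while the biased sample is reliably too high. That systematic over-estimate is exactly what the formal definition describes.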
Types of Biases
Non-random list of potential survey respondents – This typically occurs early in your journey of collecting and using feedback. The most likely cause is customer-facing employees (i.e., sales, support, service) offering to provide a list of survey invitees who will “definitely” complete the survey. The catch is that these hand-picked customers are the ones who “love” the company, its products, and its people, so you do not get an accurate picture of how your entire customer base feels.
Survey mode – There is a significant difference in results between telephone and web surveys. (For the academically minded, please download “Survey Mode Impact Upon Responses and Net Promoter Scores” here.) In our paper, we showed that the difference between survey modes on the “NPS question” shows up clearly in the mean score (on a 0-to-10 scale) for each mode:
Telephone Mean Score: 8.79
Web Mean Score: 7.44
This bias deserves special attention because many companies are migrating from the more expensive phone surveys to web surveys as they collect email addresses, then combining all the results into one data set. The problem is that, over time, they will have more responses from the lower-scoring web surveys, so they may believe their customers are becoming less satisfied when that is not necessarily the case. Ignore it at your peril!
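To see the arithmetic behind that drift, here is a hypothetical Python example; the response counts per quarter are invented, and only the two mode means come from our paper:

```python
# Mode means from the paper; assume true customer satisfaction never changes.
PHONE_MEAN, WEB_MEAN = 8.79, 7.44

# Invented response counts: phone surveys phased out, web surveys phased in.
quarters = [
    ("Q1", 900, 100),  # (label, phone responses, web responses)
    ("Q2", 600, 400),
    ("Q3", 300, 700),
    ("Q4", 100, 900),
]

for label, phone_n, web_n in quarters:
    blended = (phone_n * PHONE_MEAN + web_n * WEB_MEAN) / (phone_n + web_n)
    print(f"{label}: blended mean = {blended:.2f}")

# The blended mean slides from roughly 8.7 to roughly 7.6 even though no
# customer's opinion changed -- the apparent decline is pure mode bias.
```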
Demographics – Respondents’ age, gender, education level, and where they live all affect results. However, as long as your respondent mix stays consistent from survey to survey, these variations average out the same way each time, and you can still use the results. Here are some examples of how these variables affect NPS® results (all derived from the same survey by Satmetrix®; a short sketch of how NPS is calculated follows the list):
Age – NPS varies from 14 (ages 18-29) to 43 (ages 70+)
Gender – NPS for men = 25, women = 30
Location (in the U.S.) – NPS = 27 in the West and 35 in the South
Education – NPS varies from 16 (Ph.D.) to 37 (high school graduate)
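For reference, NPS is the percentage of promoters (scores of 9-10 on the standard 0-to-10 “likelihood to recommend” question) minus the percentage of detractors (scores of 0-6). A minimal Python sketch, with made-up responses:

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# Made-up responses for illustration only.
responses = [10, 9, 9, 8, 8, 7, 7, 6, 5, 3]
print(nps(responses))  # 3 promoters, 3 detractors, 10 responses -> NPS = 0
```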
Culture – The country where the respondents live or do business affects results. A European consultant friend shared some data with me that illustrates this bias. Using a 0 (low satisfaction) to 5 (high satisfaction) scale, the U.S. rates a 4.54 (reasonably high), while France rates a 4.19 (very low) and Switzerland a 4.69 (very high).
The subject of the survey – Using data from the same source as above, there is a significant difference in satisfaction levels between products and services. For example, in the U.S., product satisfaction was about 8% higher than overall service satisfaction.
In addition, numerous other factors can impact survey results. For example:
- Question wording
- Scale design
- Positioning of questions (start of survey or end)
- Geography (impact of different cultures)
- Survey length
- Use of incentives and reminders
What Do All These Biases Mean?
If your company randomly invites customers to participate in your survey and is less concerned with the absolute “number” than with the trend, all of this is merely interesting. However, once a company compares itself to other companies, it is asking for trouble: you have no idea how the other companies’ results were generated, what biases are in play, or how much they impact the comparisons. The same effect occurs when different divisions of a company, or geographic regions within a single division, are compared.
If this topic piques your interest, there is no shortage of information: Google returned 69,100,000 results when I typed “survey bias” into the search box. For most of us, though, the best approach is to randomly select invitees from your customer list (as in the sketch below), get enough completed surveys, and compare yourself only against yourself (trending); do that, and you will be fine.
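The random-selection step itself is trivial. Here is a hypothetical Python sketch; the list size and sample size are invented, and in practice the list would come from your CRM:

```python
import random

# Hypothetical customer list; in practice, export this from your CRM.
customer_emails = [f"customer{i}@example.com" for i in range(1, 5001)]

# Give every customer an equal chance of being invited --
# no hand-picked lists from sales, support, or service.
invitees = random.sample(customer_emails, k=400)

print(len(invitees), invitees[:3])
```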
Good surveying!
About Middlesex Consulting
Middlesex Consulting is an experienced team of professionals with the primary goal of helping capital equipment companies create more value for their clients and stakeholders. Middlesex Consulting continues to provide superior solutions that meet our clients’ needs by focusing on our strengths in Services, Manufacturing, Customer Experience, and Engineering. If you want to learn more about how we can help your organization field better surveys, please contact us or check out some of our free articles and white papers here.