Summary:  In the survey research world, so-called non-response bias is an ever-growing challenge to solid, credible research, even if many people simply ignore the threat.  Election seasons present a great opportunity to explain the concept.

Why? Because

  • Non-response bias is a threat to credible survey research, but
  • It’s the goal of every politician!

Let’s explain this bizarre contrast.

What is Non-Response Bias?

Properly executed survey research collects data from a relatively small number of people — the sample — and projects the results to the population at large.  That projection is the key advantage of survey research.

Since we’re getting data only from a small group, some random error is inevitable.  The margin of error (or statistical accuracy) tells us how closely the results from the sample are likely to reflect the views of the larger group.  Put simply, if we got responses from everyone, we’d expect the results to fall within the margin of error.
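To make the idea concrete, here is a minimal sketch of the standard margin-of-error calculation for a sample proportion at roughly 95% confidence.  The function name and the numbers are purely illustrative, not from any particular survey:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate margin of error for a sample proportion.

    p: observed proportion (e.g., 0.5 for a 50/50 split)
    n: number of respondents
    z: critical value (1.96 corresponds to ~95% confidence)
    """
    return z * math.sqrt(p * (1 - p) / n)

# Example: 400 respondents with a 50% "yes" rate (the worst case for variance)
moe = margin_of_error(0.5, 400)
print(f"+/- {moe * 100:.1f} percentage points")  # about +/- 4.9 points
```

Note how the margin shrinks only with the square root of the sample size: quadrupling the respondents merely halves the margin of error.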

Sounds great, but another critical assumption is in play: that the sample respondents are representative of the larger group.  But some people are more – or less – motivated to take surveys.  People who feel strongly one way or the other are more likely to respond, and some people just never take surveys.  No doubt, you can relate.

Those who choose not to respond create the potential for a non-response bias.  If the non-respondents are structurally different from those who do respond, then our findings suffer from a non-response bias, which skews the results and may lead to bad decisions.  (Participation bias is another term sometimes used.)

This bias is particularly perverse since it’s very difficult to estimate and correct.  Think about it: how do you figure out what people who chose not to respond would have told us?  It’s a bit of a Catch-22.

The one safe statement is this: the lower the response rate, the greater the likelihood of a non-response bias.

And How Does Non-Response Bias Relate to Elections?

An election is a survey — one in which we attempt a census.  Every registered voter is invited to cast a ballot, which is essentially a simple questionnaire with checklist questions, but not everyone will vote — not even in dictatorships.

Now, what’s every politician’s dream?

“My supporters are so motivated that they’ll all get out and vote.  Meanwhile, my opponents’ supporters aren’t motivated at all to vote.”

In other words, politicians want a non-response bias, which will swing the election in their favor!  In fact, it’s an explicit goal for political campaigns: create a non-response bias by using negative advertising to suppress the turnout of the opponent’s supporters.

Conversely, voter turnout drives on election day — or during early voting — aim to get a candidate’s supporters to vote in greater proportion than their share of the electorate.  Those drives attempt to create a sample bias.

The goal of driving both of those biases is to have the ultimate vote totals not be representative of the population at large.

How to Reduce Non-Response Bias?

In contrast to politicians, those of us running organizational surveys want to avoid biased data — unless we purposely want to manipulate the results.  (No, that would never happen!)  However, in our organizational and business surveys we may inadvertently do what politicians overtly attempt: suppress the turnout of some people we’ve invited and invite the groups that we know will give us “good scores.”

So, a first principle is to design your survey program to motivate people to move completely through the survey process.  This increases the likelihood that anyone invited to take a survey, regardless of their views, will complete the survey.  Sounds easy, but it takes thoughtful work.

  • Engage the respondent at every stage from invitation to closure.
  • Avoid convoluted, complex questions.
  • Use language natural to your audience.
  • Have a logical flow to the survey.
  • Keep the survey short and focus on the critical information you need.
  • Use mixed-mode surveying – though that does require comparing the scores between modes.
  • Use mobile-friendly survey design – though that requires a very simple questionnaire design.
  • Send follow-up reminders to non-respondents.
  • Show the value of the survey program – if you’re surveying stakeholders whom you may survey again – by highlighting actions taken as a result of past survey research in ongoing communications.

Reminders are particularly useful since comparing scores from the initial invitation to later reminders gives us an indication of any non-response bias.
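As a rough sketch of that wave comparison — with entirely made-up scores — the idea is that late responders, who needed a reminder, often resemble non-respondents more than early responders do, so a gap between the waves is a warning sign:

```python
# Hypothetical illustration: average satisfaction scores by response wave.
# A sizable gap between early and late responders hints at non-response bias,
# since reminded (late) responders tend to be closer to non-respondents.

def mean(scores: list[float]) -> float:
    """Arithmetic mean of a list of scores."""
    return sum(scores) / len(scores)

wave_1 = [9, 8, 9, 7, 8]   # responded to the initial invitation
wave_2 = [6, 7, 5, 6]      # responded only after a reminder

gap = mean(wave_1) - mean(wave_2)
print(f"Wave 1 avg: {mean(wave_1):.1f}, "
      f"Wave 2 avg: {mean(wave_2):.1f}, gap: {gap:.1f}")
```

If the gap were near zero, we would have some (limited) reassurance that non-respondents hold similar views; a large gap suggests the overall results lean toward the early, more motivated responders.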

However, remember: We never eliminate non-response bias; we can only reduce its impact.  That doesn’t mean we can just brush it aside.  The politicians sure don’t!

Administration, instrumentation biases, and all elements in survey project design and execution are key parts of the training delivered in our Survey Workshop Series.