Properly executed survey research collects data from a relatively small number of people — the sample — and projects the results to the population at large. That is the key advantage of survey research.
Since we’re getting data only from a small group, some random error is inevitable. The margin of error (or statistical accuracy) tells us how closely we should expect the sample results to reflect the views of the larger group. Put simply, if we got responses from everyone, we’d expect the results to fall within the margin of error of what the sample showed.
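As an illustration, the standard textbook formula for the margin of error of a proportion (not specific to any survey discussed here) is z × √(p(1−p)/n). A quick sketch:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Standard margin-of-error formula for a proportion.
    n: sample size; p: observed proportion (0.5 is the most
    conservative choice); z: z-score for the confidence level
    (1.96 corresponds to roughly 95% confidence)."""
    return z * math.sqrt(p * (1 - p) / n)

# A sample of 400 respondents gives roughly a +/- 4.9-point margin.
print(round(margin_of_error(400) * 100, 1))  # 4.9
```

Note that this formula assumes a random, representative sample; as the article goes on to explain, non-response bias breaks exactly that assumption.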
Sounds great, but another critical assumption is in play: that the sample respondents are representative of the larger group. But some people are more – or less – motivated to take surveys. People who feel strongly one way or the other are more likely to respond, and some people just never take surveys. No doubt, you can relate.
Those who choose not to respond create the potential for non-response bias: if the non-respondents are structurally different from those who do respond, our findings are skewed and may lead to bad decisions. (Participation bias is another term sometimes used.)
This bias is particularly perverse since it’s very difficult to estimate and correct. Think about it: how do you figure out what people who chose not to respond would have told you? It’s a bit of a Catch-22.
The one safe statement is this: the smaller the response rate, the higher the likelihood of a non-response bias.
And How Does Non-Response Bias Relate to Elections?
An election is a survey, one in which we attempt a census. Every registered voter is invited to cast their vote in a ballot, which is a simple questionnaire with checklist questions, but not everyone will vote – not even in dictatorships.
Now, what’s every politician’s dream?
“My supporters are so motivated that they’ll all get out and vote. Meanwhile, my opponents’ supporters aren’t motivated at all to vote.”
In other words, the politicians want a non-response bias that will swing the election in their favor! In fact, it’s an explicit goal for political campaigns: use negative advertising to create a non-response bias that suppresses the turnout of the opponent’s supporters.
Conversely, voter turnout drives on election day – or in early voting – are meant to get a campaign’s supporters to vote in greater proportion than they represent in the electorate. Those drives attempt to create a sample bias.
The goal of driving both of those biases is to have the ultimate vote totals not be representative of the population at large.
In contrast to politicians, those of us running organizational surveys want to avoid biased data – unless we purposely want to manipulate the results. (No, that would never happen!) However, in our organizational and business surveys we may inadvertently do what politicians overtly attempt: suppress the turnout of some people we’ve invited and over-invite the groups we know will give us “good scores.”
So, a first principle is to design your survey program to motivate people to move completely through the survey process. This increases the likelihood that anyone invited to take a survey, regardless of their views, will complete the survey. Sounds easy, but it takes thoughtful work.
Engage the respondent at every stage from invitation to closure.
Avoid convoluted, complex questions.
Use language natural to your audience.
Have a logical flow to the survey.
Keep the survey short and focus on the critical information you need.