How do response bias and non-response bias apply to our organizational surveys?
If we have low response rates, we likely have a non-response bias in our survey results. How did those who chose not to participate actually feel? Well, we don’t know since they chose not to participate. It’s a bit of a Catch-22.
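To make the problem concrete, here's a minimal sketch in Python, with made-up numbers (not from any real survey), of just how wide the uncertainty gets when non-respondents' views are unknown:

```python
# Worst-case bounds on overall satisfaction when non-respondents are unknown.
# All numbers are hypothetical, for illustration only.

response_rate = 0.40                # 40% of employees completed the survey
satisfied_among_respondents = 0.80  # 80% of respondents said "satisfied"

# Lower bound: assume every non-respondent is dissatisfied.
lower = response_rate * satisfied_among_respondents

# Upper bound: assume every non-respondent is satisfied.
upper = response_rate * satisfied_among_respondents + (1 - response_rate)

print(f"True satisfaction could be anywhere from {lower:.0%} to {upper:.0%}")
# -> True satisfaction could be anywhere from 32% to 92%
```

A 60-point spread from the same survey data. That's why a low response rate makes any single headline number suspect.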
I’ve heard people claim that those who didn’t respond must be satisfied. I even saw a LinkedIn discussion propose that non-respondents should be included as “Passives” in the calculation of Net Promoter Scores! Such logic flies in the face of the very concept of surveying, and no professional surveyor would ever recommend it. (If they did, they’d better carry a lot of professional incompetency insurance.)
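For anyone unfamiliar with the arithmetic: NPS is the percentage of Promoters (ratings of 9–10) minus the percentage of Detractors (0–6), with Passives (7–8) counted in the denominator but not the numerator. Here's a quick sketch, using invented counts, of what that LinkedIn suggestion would actually do to the score:

```python
# How treating non-respondents as "Passives" distorts NPS.
# All counts are hypothetical, for illustration only.

promoters, passives, detractors = 50, 30, 20  # actual respondents
non_respondents = 100                          # people who never answered

def nps(prom, pas, det):
    """Net Promoter Score: % promoters minus % detractors."""
    total = prom + pas + det
    return 100 * (prom - det) / total

print(f"NPS from respondents only:            {nps(promoters, passives, detractors):+.0f}")
# -> +30

# The LinkedIn proposal: dump every non-respondent into the Passive bucket.
print(f"NPS with non-respondents as Passives: {nps(promoters, passives + non_respondents, detractors):+.0f}")
# -> +15
```

The score is cut in half without a single new data point about how anyone actually feels; the denominator simply doubled.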
We may also have some types of response bias affecting our survey results. Ever taken a company survey that asked about an employee’s performance when you knew the person wasn’t responsible for your experience; rather, the problem was systemic and out of the person’s control? Did that change how you answered the question? Arguably, that would be a form of response bias.
I had that experience recently with a United Airlines survey. It asked about my satisfaction with how long the agent took to perform the service. It did take a while, but the delays were due to slow responses from the code-share partner’s systems. I didn’t want the agent to take a hit for something beyond her control, so I altered my response. That’s a type of response bias.
So how did response bias and participation bias affect the pollsters in so many races?
Either the electorate shifted its views significantly in the last week, or the polls overestimated turnout among Democrats. The latter seems most likely. In other words, a response bias in their polling questions affected their prediction of the participation bias on election day.
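To see how a turnout assumption propagates into a prediction, here's a minimal sketch with invented numbers (not drawn from any actual poll): the same raw responses yield different predicted margins depending on the likely-voter weights applied to each group.

```python
# How a turnout (likely-voter) assumption changes a poll's prediction.
# All shares and weights are hypothetical, for illustration only.

# Raw poll sample: share of respondents by group, and each group's
# support for candidate A.
sample_share = {"Dem": 0.40, "Rep": 0.40, "Ind": 0.20}
votes_for_a  = {"Dem": 0.90, "Rep": 0.05, "Ind": 0.50}

def predicted_support(turnout_weight):
    """Weight each group by assumed turnout, then compute A's vote share."""
    weighted = {g: sample_share[g] * turnout_weight[g] for g in sample_share}
    total = sum(weighted.values())
    return sum(weighted[g] * votes_for_a[g] for g in weighted) / total

# The pollster's turnout model: Democrats slightly likelier to show up.
modeled = {"Dem": 0.65, "Rep": 0.55, "Ind": 0.45}
# What election day actually delivered: the reverse.
actual  = {"Dem": 0.55, "Rep": 0.65, "Ind": 0.45}

print(f"Predicted support for A: {predicted_support(modeled):.1%}")  # ~50.9%
print(f"Actual support for A:    {predicted_support(actual):.1%}")   # ~44.9%
```

A six-point swing from identical raw answers. That's how a wrong participation assumption can sink an otherwise competent poll.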
These people are seasoned pros. Imagine how many biases lurk in the surveys done by novices using SurveyMonkey and the like.