Survey accuracy. Along with response rates, survey statistical accuracy is the topic that we surveyors get asked about most. Why? Probably because it has to do with statistics, and most people don’t have fond memories of their college statistics class.
Survey accuracy numbers really aren’t that strange a concept. We hear accuracy numbers in political polls, usually stated as “plus or minus some percent.” The percent we hear is usually somewhere between 4% and 7%. You’ll hear the political talking heads say that the candidates “are in a statistical tie” or that “the candidates’ polling numbers are within the margin of error.”
Those concepts directly apply to your organizational surveys. In this brief article I’ll explain accuracy, confidence levels, and margin of error.
What Does Survey Accuracy Mean?
First of all, the reason we need to talk about survey accuracy is that in most survey situations we do not get data from everyone in our group of interest – or population, as researchers call it. Either not everyone responds to our survey invitation, or the population is so large that we send invitations only to a sample drawn from it.
Our data will be from a subset of our population, and we are going to use the results from the sample (sample statistics) to gauge how our group as a whole feels. Any difference between our sample data and the population data – if we got data from everyone – is called sampling error.
So, our survey accuracy numbers tell us how much we should believe the survey results as an indicator of how our group of interest feels.
Let me explain this with an example. Say we’re analyzing our data using percentages, and the survey found that 40% of respondents agreed with some statement in a survey question. Let’s also assume that the number of responses gave us a +/-5% survey accuracy.
Then in this example, we can be pretty certain that if we got data from our whole group of interest, the “Percent Agree” would lie somewhere between 35% and 45% (40% +/-5%).
Why Do We Need to Know Our Survey Accuracy?
Imagine if your accuracy was instead +/-10%. Now the interval widens to 30% to 50%. Would you now feel as confident about your decisions based on the survey results? Obviously not.
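Where do figures like +/-5% and +/-10% come from? For a percentage result, they follow from the standard margin-of-error formula for a sample proportion. Here is a minimal sketch; the sample sizes shown (369 and 92) are illustrative values chosen to produce roughly those two accuracies at a 40% result:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Margin of error for a sample proportion p based on n responses.
    z = 1.96 corresponds to the conventional 95% confidence level."""
    return z * math.sqrt(p * (1 - p) / n)

# How many responses does it take to get +/-5% vs +/-10% when 40% agree?
print(round(margin_of_error(0.40, 369), 3))  # ~0.05, i.e. +/-5%
print(round(margin_of_error(0.40, 92), 3))   # ~0.10, i.e. +/-10%
```

Notice that halving the margin of error required roughly four times as many responses – accuracy improves with the square root of the sample size.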
Here’s another important reason to understand this. You’re likely trending the survey data over time. If next year the “Percent Agree” is 43%, have the feelings of our target group really changed? I can guarantee you that most managers will conclude they have – especially if it means a bonus. However, with a +/-5% accuracy, the true trend might not be 40% to 43%; it could have been 45% to 38% — if the two scores had actually fallen at the extremes of the two ranges!
We do have statistical procedures to assess whether the scores have really changed – a so-called statistically significant difference – but we’ll keep the discussion simple here.
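One common such procedure (not the only one) is a two-proportion z-test. A minimal sketch, using the article’s 40%-to-43% trend and a hypothetical 369 respondents in each survey year:

```python
import math

def two_proportion_z(p1: float, n1: int, p2: float, n2: int) -> float:
    """z statistic for the difference between two independent sample
    proportions, using the pooled-proportion standard error."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# 40% agreed last year, 43% this year; sample sizes are assumed values
z = two_proportion_z(0.43, 369, 0.40, 369)
print(round(z, 2))  # well below 1.96, so not significant at the 95% level
```

With these assumed sample sizes, a 3-point change is comfortably inside what sampling error alone could produce – exactly the caution the paragraph above urges.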
The point I want to stress here is that many readers of your survey statistics will think they have Truth – with a capital T. They don’t. They have an indication of the truth with some amount of sampling error. Decision makers shouldn’t go off half-cocked over small changes in survey scores without doing additional statistical analysis.
Where Does “Statistical Confidence” Enter the Picture?
I’ll close by adding one more piece to the puzzle that I’ve ignored to keep the discussion simpler. You may have noticed my dodgy statement earlier: “we can be pretty certain that if we got data from our whole group…”
In reality, we have some level of confidence that the sample data provide a certain accuracy. So to be proper, the example I’ve been using is that we’re 95% confident of having an accuracy of +/-5%. Maybe you’ve seen a phrase like this: “95% +/-5%.” Now you know what that means.
Here’s the good part. By convention, we almost always use a 95% confidence level, which is why we seldom hear the confidence level presented. In some circumstances we might use a higher confidence level (say, 99%) or a lower one (say, 90%).
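Computationally, changing the confidence level just swaps the z multiplier in the margin-of-error formula. A sketch, again using an assumed 40% result from 369 respondents:

```python
import math

# Standard two-sided z multipliers for common confidence levels
Z = {0.90: 1.645, 0.95: 1.960, 0.99: 2.576}

def margin_of_error(p: float, n: int, confidence: float = 0.95) -> float:
    """Margin of error for a sample proportion at a given confidence level."""
    return Z[confidence] * math.sqrt(p * (1 - p) / n)

p, n = 0.40, 369  # hypothetical survey result and sample size
for level in (0.90, 0.95, 0.99):
    print(f"{level:.0%} confident: +/-{margin_of_error(p, n, level):.1%}")
```

The trade-off is visible immediately: demanding more confidence widens the interval, so the same data set looks “less accurate” at 99% confidence than at 90%.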
So, focus on the survey statistical accuracy.
To satisfy the statistical purists, here’s the correct interpretation of the example I’ve been using: if we conducted the identical survey using 20 different samples drawn from the same population, we’d expect the results from 19 of the 20 (95%) to fall within the accuracy interval (+/-5%).
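The 19-out-of-20 interpretation can be demonstrated with a small simulation. This sketch assumes a population where exactly 40% truly agree, repeatedly draws surveys of 369 respondents (an illustrative sample size), and counts how often the +/- interval around each sample result captures the true 40%:

```python
import math
import random

random.seed(0)  # fixed seed so the run is reproducible
TRUE_P, N, TRIALS = 0.40, 369, 2000

covered = 0
for _ in range(TRIALS):
    # Simulate one survey: each respondent agrees with probability 40%
    agrees = sum(random.random() < TRUE_P for _ in range(N))
    p_hat = agrees / N
    moe = 1.96 * math.sqrt(p_hat * (1 - p_hat) / N)
    # Did this survey's interval capture the true population value?
    covered += (p_hat - moe) <= TRUE_P <= (p_hat + moe)

print(covered / TRIALS)  # close to 0.95, i.e. roughly 19 surveys in 20
```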
So when we see the pollsters get an election prediction wrong, the reason could be that theirs was that one sample out of the 20.
Finally, I promised to define margin of error, sometimes abbreviated as MOE. It’s the same as statistical accuracy.
For a more detailed discussion about this topic and all things related to a survey project…