Running your own survey programs? Learn how to do it right from us — the premier worldwide trainers for survey design & analysis.
Posted: February 20, 2018
Surveys can help us learn how a group of people feels about something, be it customer or employee satisfaction, future needs, attitudes toward an organization, views of certain policies, or a whole host of other things. To truly learn, the data collected must be valid; that is, they must measure what you intend to measure.
How could it not be? Well, there are numerous ways we can corrupt our survey process, but the single biggest mistake is ambiguous wording in our survey questions. That happens when the meaning of key words or phrases is unclear, so people may hold different interpretations of the question.
How then can we interpret the data if different people were, in essence, asked different questions?
An ambiguous survey question results when various people interpret some word or phrase in different ways. The above election-meddling question contains four words that frame how you interpret what the question is asking. (Researchers would say these words “operationalize the construct.”)
Let’s take them in order.
At this writing, significant evidence shows that “Russia” fed misinformation and fomented disruption during the campaign through Facebook and other social media platforms. Some evidence exists that local election systems were probed for possible hacks, but apparently none were successful.
So, a respondent with a tight definition of “election” as the voting and vote-tallying process could answer No to the question, while someone else with a looser definition of “election” that includes the campaign could answer Yes.
Both would be right. And both could believe that Russia engaged in nefarious “electioneering.”
You might think that an interval rating scale would solve the problem, but it wouldn’t. With a Likert-type Agreement scale, you would ask the level of agreement with the statement, “Russia interfered in the 2016 election.” If you interpreted “election” as the voting process, your Agreement rating would be low. Someone with a broader definition of “election” would give a high Agreement rating. Ambiguity is still an issue.
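To see why the rating scale doesn’t rescue an ambiguous question, consider a quick simulation. This is a hypothetical sketch, not real polling data: it assumes two groups of respondents — “narrow” readers who take “election” to mean the voting process, and “broad” readers who include the campaign — and shows how a pooled average can look moderate even though almost no one holds a moderate view.

```python
import random

random.seed(42)

def simulate_ratings(n_narrow=500, n_broad=500):
    """Return 1-5 Likert Agreement ratings for two hypothetical
    interpretation groups of the same ambiguous statement."""
    # Narrow readers ("election" = voting/tallying) cluster at low
    # agreement; broad readers ("election" includes the campaign)
    # cluster at high agreement. The weights are illustrative only.
    narrow = [random.choice([1, 1, 2, 2, 3]) for _ in range(n_narrow)]
    broad = [random.choice([3, 4, 4, 5, 5]) for _ in range(n_broad)]
    return narrow, broad

narrow, broad = simulate_ratings()
combined = narrow + broad

# The pooled mean sits in the middle of the scale, masking the fact
# that the sample is really two groups answering what are, in effect,
# different questions.
print(f"pooled mean:       {sum(combined) / len(combined):.2f}")
print(f"narrow-group mean: {sum(narrow) / len(narrow):.2f}")
print(f"broad-group mean:  {sum(broad) / len(broad):.2f}")
```

Inspecting the full distribution (not just the mean) would reveal the two clusters — one practical reason to look at response histograms before reporting a single summary statistic.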
Of course, this all assumes there’s no response bias in play. Many people would answer this question not based upon their view of Russian interference in the election, but only based upon their like or dislike of President Trump — or the sponsors of the polling. In this case, their response is driven by an ulterior motive and not an honest statement of how they feel about the issue at hand.
For the unscrupulous researcher, ambiguous questions are wonderful. They can use a broad phrasing that leads many respondents to agree with a statement even though their true feelings are narrower. The researcher can then claim a mandate for action where none in reality exists.
If we’re going to take action, that action should be based upon real knowledge, not interpretations of questionable research.
In my survey workshop training, I preach the need for survey instrument testing, both internal and external, before going live. Here are three recommendations.
The above suggestions will reduce ambiguity and should eliminate the really egregious errors. But your survey questionnaire will never be 100% clean. Yes, that’s the goal, but you’re unlikely to achieve it. Some respondents, owing to their particular life circumstances, will have a bizarre interpretation of some question. That’s reality.
That doesn’t mean we should simply accept ambiguity in our questions; we should strive to eliminate it. Just know that perfection is unattainable.