Posted: February 20, 2018
We conduct surveys so that we can learn how a group of people feels about something, be it customer or employee satisfaction, future needs, attitudes towards an organization, views of some policies, or a whole host of other things. To truly learn, the data collected must be valid; that is, they must measure what we intend to measure.
How could they not be? Well, there are numerous ways we can corrupt our survey process, but the single biggest mistake is ambiguous wording in our survey questions.
Ambiguous Questions – The Biggest Threat to Survey Findings
Examples of ambiguous questions abound, but let’s study a survey question asked in the US since the 2016 presidential election: Russian meddling in that election.
Numerous public opinion surveys have sought to measure the public’s view on this, and they typically use phrasing similar to: “Do you think Russia interfered in the 2016 election?” The question might be posed as a binary Yes/No question or on an Agreement interval rating scale.
Seems straightforward, right?
Yet, two people with identical underlying viewpoints could give diametrically opposite responses. How? The ambiguity in the question phrasing.
What Makes a Survey Question Ambiguous?
A survey question is ambiguous when different people interpret some word or phrase in it in different ways. The above election-meddling question contains four words that shape how you interpret what the question is asking. Let’s take them in order.
- Think. Different respondents might require different levels of proof to assert that they “think” or “believe” something actually happened.
- Russia. What does “Russia” mean in this context? The Russian people? Russian oligarchs? The Russian government? My guess is that most everyone would interpret this as Vladimir Putin or his operatives.
- Interfered. This word is open to interpretation and needs to be viewed in the context of the following prepositional phrase “in the 2016 election”. How you interpret “election” affects how you interpret “interfered” as we’ll see.
- Election. Here we have real ambiguity. For some, “election” means the actual voting process where we cast ballots and report the results. Others may view “election” as including the entire campaign leading up to the vote. Those are quite different. So “interfered in the 2016 election” could mean 1) to influence people’s views or 2) to change vote tallies.
At this writing, significant evidence shows “Russia” fed misinformation and fomented disruption during the campaign through Facebook and other social media platforms. Some evidence exists that local election systems were probed for possible hacks, but apparently none were successful.
So, a respondent with a tight definition of “election” as the voting and vote-tallying process could answer No to the question, while someone else with a looser definition of “election” could answer Yes. Yet both could believe that Russia engaged in nefarious “electioneering.”
You might think that an interval rating scale would solve the problem, but it wouldn’t. If using an Agreement scale, you would ask the level of agreement with the statement, “Russia interfered in the 2016 election.” If you interpreted “election” as the voting process, then your Agreement rating would be low. Someone with a broader definition of “election” would give a high Agreement rating. Ambiguity is still an issue.
Of course, this all assumes there’s no response bias in play. Many people would answer this question not based upon their view of Russian interference in the election, but only based upon their like or dislike of President Trump. In this case, their response is driven by an ulterior motive and not an honest statement of how they feel about the issue at hand.
How to Test for Ambiguous Questions
In my survey workshop training, I preach the need for survey instrument testing, both internal and external, before going live. Here are three recommendations.
First, as the question designer, read every question out loud, word for word. (Do this in a private, secluded spot. Otherwise co-workers will start to wonder about you.) You will “hear” problems in your questions that you wouldn’t otherwise notice.
Second, have an internal project team review the questionnaire. Other people will see mistakes to which you are blind. Check your ego when you hold these sessions. You will be surprised at the issues others find.
If you think some phrase could be problematic, here’s how to find out. Ask each member of your team to write down their interpretation of the phrase, e.g., “interfere in the election.” See if they all write the same or nearly the same thing. If not, rephrasing is necessary.
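That team exercise can even be tallied mechanically. Here’s a minimal sketch, assuming interpretations are collected as free text and that two readings count as “the same” only after light normalization (lowercasing and whitespace collapsing); both the function names and the sample readings are hypothetical, not from the original post.

```python
# Hypothetical sketch: flag a survey phrase as ambiguous when team members'
# written interpretations do not converge on a single reading.
from collections import Counter

def interpretation_spread(interpretations):
    """Group free-text interpretations after light normalization
    and count how often each distinct reading appears."""
    normalized = [" ".join(text.lower().split()) for text in interpretations]
    return Counter(normalized)

def needs_rephrasing(interpretations):
    """True when the team recorded more than one distinct reading."""
    return len(interpretation_spread(interpretations)) > 1

# Example: three reviewers interpret "interfere in the election".
readings = [
    "influence voters' views during the campaign",
    "change vote tallies",
    "influence voters' views during the campaign",
]
print(needs_rephrasing(readings))  # two distinct readings -> True
```

In practice you would eyeball the grouped readings rather than trust exact string matching, since near-identical phrasings can still reflect the same interpretation.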
Third, and most often bypassed, perform an external test of the questionnaire, a so-called pilot test or beta test. Administer the survey, preferably in person, to a handful of people from the respondent audience. Ask them to explain why they responded to questions as they did. In that way, you’ll get their interpretation.
Can Ambiguity in Survey Questions be Eliminated?
The above suggestions will reduce ambiguity and should eliminate the really egregious errors. But your survey questionnaire will never be 100% clean. Yes, that’s the goal, but you’re unlikely to achieve it. Some respondents, owing to their life circumstances, will have a bizarre interpretation of a question or two. That’s reality. That doesn’t mean we should just accept ambiguity in our questions. Just know that perfection is unattainable.