Posted: February 20, 2018

We conduct surveys so that we can learn how a group of people feels about something, be it customer or employee satisfaction, future needs, attitudes toward an organization, views of some policies, or a whole host of other things.  To truly learn anything, the data collected must be valid; that is, they must measure what you intend to measure.

How could the data not be valid?  Well, there are numerous ways we can corrupt our survey process, but the single biggest mistake is ambiguous wording in our survey questions.

Ambiguous Questions – The Biggest Threat to Survey Findings

Examples of ambiguous questions abound, but let’s study a topic surveyed repeatedly in the US since the 2016 presidential election: Russian meddling in the election.

Numerous public opinion surveys have sought to measure the public’s view on this, and they typically use phrasing similar to: “Do you think Russia interfered in the 2016 election?”  The question might be posed as a binary Yes/No question or on an Agreement interval rating scale.

Seems straightforward, right?

Yet, two people with identical underlying viewpoints could give diametrically opposite responses.  How?  Because of the ambiguity in the question’s phrasing.

What Makes a Survey Question Ambiguous?

A survey question is ambiguous when different people interpret some word or phrase in it in different ways.  The election-meddling question above contains four words that shape how you interpret what the question is asking.  Let’s take them in order.

  • Think. Different respondents may require different levels of proof before they will assert that they “think” or “believe” something actually happened.
  • Russia. What does “Russia” mean in this context?  The Russian people?  Russian oligarchs? The Russian government?  My guess is that most everyone would interpret this as Vladimir Putin or his operatives.
  • Interfered. This word is open to interpretation and needs to be viewed in the context of the following prepositional phrase “in the 2016 election”.  How you interpret “election” affects how you interpret “interfered” as we’ll see.
  • Election. Here we have real ambiguity. For some, “election” means the actual voting process where we cast ballots and report the results.  Others may view “election” as including the entire campaign leading up to the vote.  Those are quite different.  So “interfered in the 2016 election” could mean 1) to influence people’s views or 2) to change vote tallies.

At this writing, significant evidence shows that “Russia” fed misinformation and fomented disruption during the campaign through Facebook and other social media platforms.  Some evidence exists that local election systems were probed for possible hacks, but apparently none of those attempts succeeded.

So, a respondent with a tight definition of “election” as the voting and vote-tallying process could answer No to the question, while someone else with a looser definition of “election” could answer Yes.  Yet both could believe that Russia engaged in nefarious “electioneering.”

You might think that an interval rating scale would solve the problem, but it wouldn’t.  Using an Agreement scale, you would ask for the level of agreement with the statement, “Russia interfered in the 2016 election.”  If you interpreted “election” as the voting process, your Agreement rating would be low.  Someone with a broader definition of “election” would give a high Agreement rating.  Ambiguity is still an issue.

Of course, this all assumes there’s no response bias in play.  Many people would answer this question not based upon their view of Russian interference in the election, but upon their like or dislike of President Trump.  In this case, their response is driven by an ulterior motive rather than being an honest statement of how they feel about the issue at hand.

Why are Ambiguous Questions a Problem?

I’ve had people challenge me on my concern about ambiguous questions.  Data are data, they would argue.  My pushback to them is simple: “How can you interpret the results?  More importantly, what action would you prescribe?”  Hmmm…

Let’s assume some percentage of the respondents interpreted “interfered in the 2016 election” as meaning the voting process and some interpreted it as feeding misinformation into the campaign.  The implications for action differ depending upon which interpretation was predominant.  And you wouldn’t know which meaning was most commonly assumed!

In reality, many (most?) survey designers will be blind to their ambiguous questions and will simply assume that all respondents interpreted the question the way they, the designer, intended for them to interpret it.  This assumption can lead to erroneous findings and recommendations for action.

And that’s why ambiguous questions are the biggest problem in survey research.  The researcher is likely blind to the validity problem.

How to Test for Ambiguous Questions

In my survey workshop training, I preach the need for survey instrument testing, both internal and external, before going live.  Here are three recommendations.

First, as the question designer, read every question out loud, word for word.  (Do this in a private, secluded spot.  Otherwise co-workers will start to wonder about you.)  You will “hear” problems in your questions that you wouldn’t otherwise notice.

Second, have an internal project team review the questionnaire.  Other people will see mistakes to which you are blind.  Check your ego when you hold these sessions.  You will be surprised at the issues others find.

If you think some phrase could be problematic, here’s how to find out.  Ask each member of your team to write down their interpretation of the phrase, e.g., “interfere in the election.”  See if they all write the same or nearly the same thing.  If not, rephrasing is necessary.

Third, and most often bypassed, perform an external test of the questionnaire, a so-called pilot test or beta test.  Administer the survey, preferably in person, to a handful of people from the respondent audience.  Ask them to explain why they responded to questions as they did.  That way, you’ll get their interpretation.

Can Ambiguity in Survey Questions be Eliminated?

The above suggestions will reduce ambiguity and should eliminate the really egregious errors.  But your survey questionnaire will never be 100% clean.  Yes, that’s the goal, but you’re unlikely to achieve it.  Some respondents, because of their own life circumstances, will have a bizarre interpretation of some question.  That’s reality.  It doesn’t mean we should simply accept ambiguity in our questions; just know that perfection is unattainable.

Tips for a Successful Survey

Request Chapter 1 of the new edition of our Survey Guidebook for key points on running a more effective survey program.

Survey Sample Size Calculator

Get our Excel-based calculator. It can also be used to gauge statistical accuracy after the survey has been completed.
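For a rough sense of the arithmetic behind such a calculator, here is a minimal Python sketch of the standard margin-of-error formula for an estimated proportion, with an optional finite population correction.  It illustrates the general calculation only; it is not the contents of the Excel workbook, and the function name, defaults, and example figures are assumptions chosen for this sketch.

```python
import math

# Common two-sided z-scores; a small lookup table avoids a scipy dependency.
Z_SCORES = {0.90: 1.645, 0.95: 1.96, 0.99: 2.576}

def margin_of_error(sample_size, population_size=None,
                    confidence=0.95, proportion=0.5):
    """Margin of error for an estimated proportion (worst case p = 0.5)."""
    z = Z_SCORES[confidence]
    moe = z * math.sqrt(proportion * (1 - proportion) / sample_size)
    if population_size:
        # Finite population correction for surveys of small populations
        moe *= math.sqrt((population_size - sample_size) / (population_size - 1))
    return moe

# Example: 400 completed surveys drawn from 5,000 customers at 95% confidence
print(f"Margin of error: {margin_of_error(400, 5000):.1%}")  # roughly +/- 4.7%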
