Most surveys of customers, employees, and the like use what are called “checklist questions.” The name fits: we present a list of options and ask respondents which ones apply to them. They’re also known as multiple-choice questions.
For example, we might ask:
What types of cars have you driven in the past?
What types of cars are you considering for your next purchase?
What features are important to you for your next car selection?
We could pose these as open-ended questions, but then the poor analyst has to try to make sense of all the textual responses. Yuk! So, instead we present a list of options from which the respondent can select. Easier for the respondent and MUCH easier for the analyst.
Checklist questions are heavily used in data collection forms where many of the questions will be demographic in nature.
Advantages of Survey Checklist Questions
The appeal of this question type is fourfold. For the respondent, checklist questions are:
Easy to understand
Easy to complete
For the surveyor, checklist questions:
Are easy to analyze (as mentioned above)
Provide clear and readily interpretable answers that can be translated into action plans.
Say we had a post-sale product fulfillment survey. We can ask a binary checklist question:
Was anything wrong with the shipment? Yes or No.
If the person answers yes, we can branch to a checklist question asking
What type(s) of problems did you encounter?
followed by a list of possible problems. We can calculate the frequency distribution of certain types of problems – solid data for an improvement project.
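To make that tally concrete, here is a minimal Python sketch. The problem categories, responses, and layout are made up for illustration; they are not from any particular survey tool.

```python
from collections import Counter

# Hypothetical responses to the branched checklist question
# "What type(s) of problems did you encounter?" (multi-select,
# so each respondent may have checked more than one option).
responses = [
    ["Item damaged", "Arrived late"],
    ["Wrong item shipped"],
    ["Arrived late"],
    ["Item damaged"],
]

# Tally how often each problem type was selected.
counts = Counter(problem for selections in responses for problem in selections)

# Report each problem as a share of respondents who reported a problem.
total_respondents = len(responses)
for problem, n in counts.most_common():
    print(f"{problem}: {n} ({n / total_respondents:.0%} of respondents)")
```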
Every survey question type has some design considerations. The goal is to get valid, meaningful data, and these design standards help achieve that. Well-designed questions also lower the likelihood of respondents skipping a question (an “item nonresponse”) or abandoning the survey.
The wording of checklist questions is generally pretty straightforward. The real challenge lies in the set of checklist options.
Comprehensive list of options. The checklist should have a “home” for every respondent. That is, respondents should be able to find an option that fits their situation.
Non-overlapping list of options. Comprehensiveness means we present many options, and there should be clear distinctions among the options. The respondent shouldn’t have to debate which of two options to select because they’re both saying nearly the same thing.
Avoid fatigue. Comprehensiveness is good, but only up to a point. Too detailed a list means a long list, and a long list invites “satisficing” – the respondent picks the first answer that seems good enough rather than the best one.
“Other: Please specify.” Always include this option at the end. It captures the unexpected and gives us feedback on the quality of the checklist.
So, you want a short, but comprehensive list. Seems contradictory, doesn’t it? Yes, you’re right. This is the art of survey question design. We have to deal with such trade-offs frequently in survey design.
Another design consideration relates to the order of the checklist items. Primacy and recency effects – choosing the first or last option – are known to occur. If you put the most likely answer first, you’re creating a bias toward that answer. The best solution is to randomize the order, negating the order effects. However, Other always goes last.
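Most survey platforms will randomize option order for you, but the logic is simple enough to sketch. The Python snippet below is an illustration only, with made-up delivery-issue options: it shuffles everything except “Other: Please specify,” which stays pinned to the end.

```python
import random

# Illustrative checklist options (not from any real survey).
options = [
    "Paper was wet",
    "Paper was not delivered",
    "Paper was damaged",
    "Other: Please specify",
]

def randomized_checklist(options, pinned_last="Other: Please specify"):
    """Shuffle checklist options per respondent, keeping 'Other' at the end."""
    shufflable = [opt for opt in options if opt != pinned_last]
    random.shuffle(shufflable)          # a fresh order for each respondent
    return shufflable + [pinned_last]   # 'Other' always appears last

# Each call simulates the order one respondent would see.
print(randomized_checklist(options))
```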
WSJ Example of Flawed Survey Checklist Questions
Look at the nearby checklist question from the Wall Street Journal (WSJ) customer service website. As a subscriber, I can go to my account and “Report Delivery Issue.” Look closely at the list. Notice what’s missing? There’s no “Other.”
A survey checklist question with missing options
Further, the checklist options are very limited. I believe the person who designed the list has never gotten home newspaper delivery! He or she certainly never did any field research to develop the list. Here’s what’s missing from my own experiences over the decades:
Paper delivered to wrong or different location on property. My paper is typically thrown onto the driveway. Makes sense. But occasionally, it’s thrown onto my lawn – or into bushes. I might not find it right away. Lawn delivery is fine in the summer, but what about winter when I could have 3 feet of snow on the ground? Do I want to trudge through the snow to get it?
Paper delivered while a suspension was in place. I can suspend delivery while I’m traveling. That’s for the safety of my property. A pile of newspapers on the driveway is a good signal to a crook that the home is ripe for a break-in. Five years ago I had a delivery person who always delivered while delivery was supposed to be suspended, and then didn’t restart delivery for days after the suspension had ended. (I tricked her by starting and ending delivery 3 days ahead of my actual trip dates. No joke.)
All of this is feedback that should go back to the driver, the delivery company, and WSJ. Today, it can’t. If WSJ had included an Other option, they might have learned what to add to the checklist.
Do they not want to know? It’s not like they haven’t been told. I have made these comments known to WSJ customer service multiple times over the years, but to no avail. Nothing ever changed.
However, I recently had such a bad experience with a poorly designed subscription renewal form that I reached out via LinkedIn to the VP of Customer Experience. I had a long, pleasant phone conversation with a senior manager; I think they are listening. I hope.
The lack of an Other text box in the WSJ data collection form keeps the customer service folks there from learning about new types of issues their customers have. But Other submissions do have a downside: data cleansing.
Before you compile results for a checklist question, you must examine all the Other submissions. The unexpected submissions, such as “Got NY Times, not WSJ,” will likely give you ideas for refining the checklist options going forward.
However, you will also find people writing in answers that would seem to fit one of the categories you offered them. You will need to reclassify those into the appropriate bucket to get meaningful statistics. Now, you might be thinking, “If I hadn’t offered the Other, the respondent would have chosen that option.” You might be correct, but you might also find that the way you worded that option led people not to choose it.
Look critically at the option wording in light of the Other submissions that speak to it, and learn what new options you need in your checklist.
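Part of that cleanup step can be mechanized. Here is a minimal Python sketch with a made-up keyword map and made-up write-ins (apart from the “Got NY Times, not WSJ” example above); it sorts Other write-ins into existing buckets and flags the rest for manual review. In a real project you would still read every write-in.

```python
# Hypothetical keyword map from words seen in "Other" write-ins to
# existing checklist options. Anything unmatched is flagged as a new
# theme to review by hand (and perhaps a candidate checklist option).
KEYWORD_TO_BUCKET = {
    "wet": "Paper was wet",
    "soaked": "Paper was wet",
    "missing": "Paper was not delivered",
    "never arrived": "Paper was not delivered",
}

def reclassify(write_in: str) -> str:
    text = write_in.lower()
    for keyword, bucket in KEYWORD_TO_BUCKET.items():
        if keyword in text:
            return bucket
    return "New theme - review manually"

others = ["Paper was soaked through", "Got NY Times, not WSJ"]
for entry in others:
    print(f"{entry!r} -> {reclassify(entry)}")
```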
Let’s look at another example of poorly designed checklist questions. This one is from an employee survey at Boston University (BU). They had found that employees were leaving BU because of the commute to the school and the lack of good housing nearby. (By way of full disclosure, I have my graduate degrees from BU and I taught there during my doctoral program.)
Poorly worded checklist question and checklist options
This first question is a wonderful example of overlapping response options. What’s the difference between:
Commute to BU
Method of commute to BU
Most times I can figure out what the survey designer meant to write. Here I can’t.
Let’s look at the next question. Earlier I said that the wording of survey checklist questions is usually straightforward. Here it was not. It’s convoluted. What “would bring [me] closer to BU”? Moving closer would bring me closer. D’oh! How about: “What factors might entice you to move closer to BU?”
A flawed Importance question
Elsewhere I’ve written about ways to measure importance. Our third example above is a good example of how not to do it. The question states: “What is most important…?” But then the instructions say “Check all that apply.” Those are contradictory. Either use radio buttons and tell respondents to check the most important, or use checkboxes and ask them to check the two most important. If everything is important, nothing is important.
Is Neutral neutral here?
Finally, look at the response options for this last pair of questions. What does Neutral mean here? Neutral means “in the middle” or “no strong feelings in either direction.” How does that differ from “Long, but not too bothersome” and “Maybe”?
As the reader of the report, how would you interpret Neutral responses in the context of these questions? My guess is that many respondents chose Neutral when they “preferred not to answer.” Regardless, this set of checklist options garbles any attempt to chart a course of action.
Yes, you’d think that they’d tap the research professors to design the survey, but they certainly didn’t. At least, I hope they didn’t.
Designing good survey checklist questions isn’t rocket science, but a survey designer does need to think about:
Correct wording of the question
What items to put in the checklist
How to word those checklist options.
And we need to treat the textual responses from “Other: Please specify” as an opportunity to learn from our respondents.
Finally, a properly done pilot test before launch might have caught these mistakes. Why survey designers and form creators don’t think they make mistakes is beyond me. I wish I were that perfect.
I’ll bet I have a typo or two in this copy despite much proofreading. Tell me! I belief in the value of feedback. (Okay, I just planted that mistake.)