Summary:  Many people writing a survey question seem to think there’s only one way to word it to measure something.  Not true.  In fact, we usually have several question types to choose from, and the choices involve trade-offs.  Some survey question types provide more useful data for survey analysis, but those same question types may create so much respondent burden that the respondent skips the question – or abandons the survey entirely.

Let’s illustrate with a very common type of survey inquiry: a question that asks the respondent to prioritize among a list of response options.  Our example asks what would be important in someone’s next laptop purchase, but you can readily substitute any list you want prioritized.

Constructing the List of Response Options

We start with the group of items that we want the respondent to consider and then identify those that float to the top as important:

  • Price
  • Hard drive storage size
  • Hard drive storage type (disk or solid state)
  • Main memory size
  • Main processor (chip) speed
  • Screen size
  • Screen resolution
  • Number of USB ports
  • Battery “size” – the number of hours the laptop can run on battery

When constructing the list, we immediately face one of those nasty trade-offs.

  1. We want the list to contain all the things a respondent is likely to consider but…
  2. We also don’t want the list to be so long that it’s intimidating.

If the list has more than 10 items, it’s almost certainly too long.  The above list of nine items is right on the edge, and for some of the question types below, it may be too much.

Providing a long list with the goal of capturing greater detail may have a boomerang effect: it may encourage what’s called satisficing behavior.  That’s when the respondent scans a long list and picks the first good-enough answer rather than the best one.

Note here that I’m assuming the survey is being delivered by webform or paper.  For a telephone survey, we have to keep the list quite short to minimize the cognitive burden on the respondent.  On a telephone survey, we’re asking someone to listen to the list, memorize it, and select the most important one (or ones) from the list.  A list longer than five items will likely lead to a primacy or recency effect, that is, selecting the first or last item in the list.  That could be considered a form of satisficing.

How do we keep a list short but still capture the detail we might miss?  Include an “Other (please specify)” option.  If respondents don’t see their top choice in the list, they can now enter it.  Realistically, we as survey designers may overlook items that respondents would consider important.

The Other option also provides feedback on the quality of our list.  If lots of people list something in the Other option, we should consider adding that one to the list next time, perhaps swapping out one that was seldom chosen.  Frequently, we also see Other entries that are close to one of the options provided but with some nuanced wording difference.  That might lead us to reword the choices in the future.

Finally, regardless of the question type you select, it’s best to present the list in a randomized order to each respondent.  This mitigates some of the satisficing and order effects in the collected data.
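Most survey platforms will randomize option order for you, but if you’re building a custom webform, the logic is simple.  Here’s a minimal sketch in Python (purely illustrative); note that an “Other (please specify)” option should stay anchored at the bottom of the list rather than being shuffled:

```python
import random

# A few of the response options from the laptop example
options = [
    "Price",
    "Hard drive storage size",
    "Screen size",
    "Battery life",
]

def randomized_options(items, anchored=("Other (please specify)",)):
    """Return a fresh per-respondent ordering, keeping anchored
    items (e.g., Other) fixed at the end of the list."""
    shuffled = list(items)   # copy so the master list isn't mutated
    random.shuffle(shuffled)
    return shuffled + list(anchored)

print(randomized_options(options))
```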

Survey Question Type Choice: Options

Once we have our list, how can we ask for prioritization?  Here are our survey question choices.

Multiple-Choice Checklist, Single Response Question Type

Seemingly, the most straightforward question type is seen in the nearby screen shot.  Ask someone to choose the single most important item.  Our analysis will be the percent of respondents who selected each item.
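That analysis is nearly a one-liner.  A sketch in Python with pandas (the data here are hypothetical):

```python
import pandas as pd

# Hypothetical responses: one row per respondent, single choice each
responses = pd.Series(
    ["Price", "Screen size", "Price", "Battery life", "Price"],
    name="most_important",
)

# Percent of respondents selecting each item; sums to 100%
pct = responses.value_counts(normalize=True) * 100
print(pct.round(1))
```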

Sounds clean and simple, but…

  • We don’t learn what might be second in importance (or third).  To give an extreme example, what if everyone viewed the same two items as first and second in importance?  You’d never learn that.  In our laptop example, price is likely to be the first choice for lots of people, so knowing #2 would help our decision making.
  • Choosing one answer may not be so simple for some people.  What if someone views two items as equally important?  Now they’ll be agonizing over which to check.  The question is not simple any longer.

Multiple-Choice Checklist, Multiple Response Question Type

A quick change to the above question is to let the respondent choose more than one answer.  (Notice that the radio buttons, which signify “check only one,” are now checkboxes, which signify “check all that apply.”)

But we probably don’t want someone to check ALL that apply.  If people check all of them, then we haven’t learned what’s truly important.  Instead, instruct the respondent to check the two (or three, depending on the length of the list) most important.  Now we’re getting the top two in importance.  That likely provides better data for decision making.  (Do note – and tell your readers – that the percentages will now sum to 200%.)  And the respondent burden is quite low.
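The analysis stays simple: compute the percent of respondents who checked each item.  A sketch with pandas, assuming the answers are stored one-hot – one 0/1 column per item (data hypothetical):

```python
import pandas as pd

# Hypothetical one-hot data: 1 = respondent checked that item
df = pd.DataFrame({
    "Price":        [1, 1, 0, 1],
    "Screen size":  [1, 0, 1, 0],
    "Battery life": [0, 1, 1, 1],
})

# Percent of respondents checking each item; with "check exactly two,"
# the column percentages sum to 200%, not 100%
pct = df.mean() * 100
print(pct.round(1))
```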

Many survey tools provide validation (edit) capabilities for multiple response checklist questions, though perhaps only in a more expensive subscription tier.  If so, you could instruct people to check up to two items or exactly two items.  If someone checked three items or, in the second case, fewer than two, they’d get an error message.
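If your tool lacks that validation and you’re building your own form handler, the check is trivial.  A minimal sketch (the function name and messages are hypothetical):

```python
def validate_selection(checked, min_items=2, max_items=2):
    """Return an error message, or None if the answer is acceptable."""
    if len(checked) < min_items:
        return f"Please select at least {min_items} items."
    if len(checked) > max_items:
        return f"Please select no more than {max_items} items."
    return None

# "Exactly two" is the default; "up to two" would be min_items=1
print(validate_selection(["Price"]))                 # error: too few
print(validate_selection(["Price", "Screen size"]))  # None: accepted
```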

Many software tools also support a follow-up question for the multiple-choice checklist that would ask, “Of the two you selected, which is most important?”  You would then have the #1 and #2 items in importance, which leads us to the forced ranking question type.

Forced Ranking Question Type

We’ve all seen these questions as in the nearby example.  They’re a favorite approach since they appear to provide prioritized information, but I caution you about using them.  Why?

  1. Analytical Burden.  In the example here with 9 items, we’d get a 9×9 matrix of data – the percent of respondents who selected each item for each ranking point.  Are you going to give your manager a chart with 81 data points – probably a stacked bar chart?  Of course not!  You’ll summarize the data, probably showing the percent of respondents who chose each item as #1 in importance and as #1 or #2 (see the sketch after this list).  That’s the useful information – and it’s almost exactly what the multiple-choice checklist option above provides.  (Note: while you might have the data coded into your spreadsheet as numbers, you cannot legitimately take means of data from a forced ranking question.  It’s ordinal data, not interval data.  A companion article discusses how survey question type determines the data type generated, which then determines analytical options.)
  2. Respondent Burden.  If our analysis will focus on the top rankings, why ask people to rank order anything beyond the top ranks?  You’re asking people to do a lot of work for no benefit.  And it is considerable work – burden – that may lead to an item non-response or abandonment of the survey entirely.  Worse, they may just click buttons to get through the question, giving you garbage data.  How much thought would you put into properly ranking 9 items?  Exactly!
  3. Respondent Annoyance.  Software designers have devised several ways to present the forced ranking question to the respondent.  They all stink to some degree, some horribly so.  In the nearby example, you have to select ranks using drop-down boxes, which are low in usability.  Further, if you select the same rank more than once, you get an error message.  The annoyance factor, especially if it’s a required answer, may lead to survey abandonment.
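If you’re stuck analyzing data from an existing forced ranking question, summarizing it down to those top-rank percentages takes only a few lines.  A sketch with pandas (ranks hypothetical; 1 = most important):

```python
import pandas as pd

# Hypothetical ranks: rows are respondents, columns are items
ranks = pd.DataFrame({
    "Price":        [1, 1, 2],
    "Screen size":  [2, 3, 1],
    "Battery life": [3, 2, 3],
})

top1 = (ranks == 1).mean() * 100   # % ranking each item #1
top2 = (ranks <= 2).mean() * 100   # % ranking each item #1 or #2
print(pd.DataFrame({"Top 1 (%)": top1, "Top 1-2 (%)": top2}).round(1))
```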

Friends don’t let friends use forced ranking questions.

Interval Rating Scale Question Type

Presenting each item to the respondent and asking for a response on an interval rating scale is perhaps the most common way to get data on importance.  It’s a nice clean way to get data with fairly low respondent burden, and we can calculate mean scores since the data are – hopefully – interval in nature.

However, the approach has one serious drawback, similar to the “check all that apply” issue in the checklist approach.  Nothing stops the respondent from giving the same rating to every item.  What have you learned if every item is rated the same?  Nothing!  We really want the respondent to engage in trade-off analysis, and interval rating questions don’t support that goal.
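You can at least detect that pattern – often called straight-lining – in the collected data.  A sketch with pandas, using hypothetical 1-to-5 importance ratings:

```python
import pandas as pd

# Hypothetical 1-5 importance ratings; one row per respondent
ratings = pd.DataFrame({
    "Price":        [5, 4, 5],
    "Screen size":  [5, 2, 4],
    "Battery life": [5, 3, 3],
})

# Flag respondents who gave every item the identical rating --
# those answers tell us nothing about relative importance
straight_liners = ratings.nunique(axis=1) == 1
print(straight_liners)   # respondent 0 is flagged
```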

While it is perhaps the most common way to measure importance, it is arguably the least useful.

Fixed Sum Question Type

The checklist version asked people to select the top item(s) in importance.  In the forced ranking version, we ask people to put the items in rank order.  What if instead we asked the respondents to assign points to each item based on importance?  Now we have a fixed-sum question.  (This goes by different names such as constant sum or fixed allocation.)

We give people, say, 100 points and ask them to assign the points to the various items.  The scores must sum to 100.  More points mean more importance.  Our analysis focuses on the mean score for each item (since this is interval data) to see relative importance.  That’s a real plus for this approach.
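The analysis again takes only a few lines: check each respondent’s total, then take means.  A sketch with pandas (the allocations are hypothetical):

```python
import pandas as pd

# Hypothetical fixed-sum allocations; each row should total 100
points = pd.DataFrame({
    "Price":        [50, 40, 60],
    "Screen size":  [30, 20, 30],
    "Battery life": [20, 40, 10],
})

# Drop respondents whose totals are off (possible if the survey
# tool didn't enforce the fixed sum)
valid = points[points.sum(axis=1) == 100]

# Mean points per item = relative importance (interval data)
print(valid.mean().round(1))
```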

Obviously, the question type has downsides and design concerns:

  • It is considerable work for the respondent and some people will want to skip it – similar to the forced ranking.
  • Choose a survey tool that presents a running total to the respondent as they complete the question.  Not all tools do that!  The one shown here uses sliders to select the points, which are shown in the box to the right.  Once you hit 100, you can’t slide for more points, but a running point total is not displayed.
  • It’s best to have the number of items divide evenly into the total points to simplify the math for the respondents. With 9 items – 9 items! – equal importance would be 11.11 points.  So, if you absolutely must have 9 items, make the point total 90.
  • Don’t allow entering negative numbers! Yes, I’ve seen respondents do that.
  • Don’t use a lot of these questions in a survey. That will drive people away.
  • It’s best to put these question(s) toward the end of the survey once you’ve developed rapport with the respondent and they are fully engaged.

Conclusion

Measuring criticality or importance is a critically important task for many survey designers.

  • Fixed sum questions will give you the most useful data for analysis, but the respondent burden may be a turnoff.
  • Forced ranking questions are annoying at best and provide limited analytical options.
  • The “select one” checklist seems simple and enticing, and may be fine in some circumstances.
  • The “select top two” checklist type is one to consider.  The question carries limited respondent burden, the data will be readily understandable, and you get useful information beyond just the top choice.