Survey Question Choice: How Question Format Affects Survey Analysis

When you create a survey questionnaire, a critical design issue is how you will pose the survey questions to the respondent. Why is this critical? Because the survey question type determines the type of data analysis you can perform on the data it generates. This article discusses that relationship. We start from the premise that you have identified your research objectives and now need to write each survey question, and we end by discussing when you should use each survey question type in your questionnaire. In other articles we discuss how to identify what survey questions you should be asking and critical issues in phrasing a survey question. Most importantly, we show how incorrect analysis of one data type can lead to incorrect conclusions.

Once you’ve identified the issues you want to research, you need to formulate them into survey questions, and you have many survey question types to consider:

  • Open-ended text questions
  • Multiple choice questions
  • Ordinal scale questions
  • Interval scale questions
  • Ratio scale questions

Each of these broad categories of survey question types contains a number of different formats, but what differentiates the survey question types is the type of data generated by the question. You might be thinking that “data are data.” (Or datum is datum.) But in fact, five data types exist, and the type of data generated by the survey question constrains the type of analysis you can perform. (Some may argue that text is not a type of data. Fine…)


  • Text data – categorization and content coding only
  • Nominal data – the above, plus counts, frequencies, and the mode
  • Ordinal data – the above, plus medians and percentiles
  • Interval data – the above, plus means and standard deviations
  • Ratio data – the above, plus ratios and proportional comparisons

Notice that the analysis potential is cumulative. That is, as you move to a higher type of data, you can perform more mathematical operations. This is important! If you want to trend average scores on some survey question(s), then you have to choose question types that generate interval or ratio data. Ordinal and nominal data are not amenable to taking averages.
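As a quick illustration of this constraint, here is a minimal Python sketch using made-up satisfaction responses coded 1 to 5. Frequencies and the median are legitimate for ordinal data; the mean is only meaningful if the scale is truly interval.

```python
from statistics import median
from collections import Counter

# Hypothetical ordinal responses coded 1-5 (1 = very dissatisfied, 5 = very satisfied).
# The codes are ordered, but nothing guarantees equal "distance" between them.
responses = [5, 4, 4, 3, 5, 2, 4, 5, 1, 4]

# Legitimate for ordinal data: frequency counts and the median.
print(Counter(responses))   # how many respondents chose each option
print(median(responses))    # the middle response on the ordered scale

# Only legitimate if the scale is truly interval: the mean assumes equal spacing.
print(sum(responses) / len(responses))
```

The point is not that Python stops you from averaging ordinal codes; it happily won’t. The discipline has to come from the analyst.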

Data Analysis Error

Let me give you an example of an analysis error I personally witnessed. I was a member of a committee to select a new town administrative official. After we had reviewed resumes and interviewed five finalists, the consultants running the process had each committee member rank order all five candidates, that is, give a 5 to the candidate we liked best, a 4 to the next best person, and so on. While we were doing this, the committee member next to me asked, “Can I give two 5s? I thought candidates B and C were equally strong.” The consultants said no. (I also thought the same two candidates were neck and neck.)

When we had done our scoring, the consultants added up the scores, presented the results, and wanted us to view the spread between the summed scores as indicative of the relative distance in how we assessed the candidates. That last point is critical. Please reread it. Imagine if all of us on the committee had viewed the five candidates as shown on the following spatial map where the distance between candidate letters represents the difference we felt existed between them. Candidates to the left are more favored. Candidates to the right are less favored.


Applying the rank scores (that is, Committee Member 1 gave B = 5, C = 4, A = 3, E = 2, D =1 and so on) and adding, the final scores would be:

Candidate A = 10
Candidate B = 25
Candidate C = 17
Candidate D = 13
Candidate E = 10
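The arithmetic the consultants performed can be reproduced in a short Python sketch. Member 1’s ranking comes from the text; the rankings for members 2 through 5 are hypothetical fill-ins, chosen only to be consistent with the totals above.

```python
# Each member assigns ranks 5 (best) through 1 (worst) to candidates A-E.
# Member 1's ranking is from the article; members 2-5 are hypothetical,
# constructed to reproduce the summed totals (A=10, B=25, C=17, D=13, E=10).
rankings = [
    {"A": 3, "B": 5, "C": 4, "D": 1, "E": 2},  # Member 1 (as stated)
    {"A": 2, "B": 5, "C": 4, "D": 3, "E": 1},  # Member 2 (hypothetical)
    {"A": 2, "B": 5, "C": 4, "D": 3, "E": 1},  # Member 3 (hypothetical)
    {"A": 1, "B": 5, "C": 4, "D": 2, "E": 3},  # Member 4 (hypothetical)
    {"A": 2, "B": 5, "C": 1, "D": 4, "E": 3},  # Member 5 (hypothetical: disliked C)
]

# The consultants' step: sum the rank scores per candidate.
totals = {c: sum(r[c] for r in rankings) for c in "ABCDE"}
print(totals)  # {'A': 10, 'B': 25, 'C': 17, 'D': 13, 'E': 10}
```

The summing runs without complaint, which is exactly the trap: the arithmetic is easy, but the data do not support it.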

The mathematical answer is clear: Candidate B is the winner with 25 points, and the next nearest candidate, C, is a whopping 8 points behind, only 4 points ahead of Candidate D. But look at the spatial map. Is B that much superior? Four of the five committee members thought B and C were really in a dead heat. Member 5 didn’t like C for some reason, and liked D more than anyone else did.

Are C and D closer in preference than B and C? That’s what the (bogus) math shows, yet the spatial map paints a different picture. B may be preferred to C, but it’s close enough to spark a reasoned debate, unless you rely on the math to guide the decision. That debate, in fact, is what happened in our deliberations.

What happened? The consultants had us rank order the candidates, generating ordinal data. They then treated the data as interval data and added the scores. Ordinal data means the answers are in some order but says nothing about the distance between the ordered items. Interval data means there is an equal distance, cognitively, between the scores. So, imagine instead that we had been asked to rate the candidates on a 1-to-10 scale. That would have generated interval data, if we all viewed the difference between a 10 and a 9 as the same as the difference between a 9 and an 8, and so on.

In another article, I make the point that interval scales are not perfect; in fact, they are lousy for measuring importance among a set of factors, or, in this case, the preference for one candidate over another. If every member rated every candidate a 10, then no differentiation would result. That’s why the consultants wanted rank orders; ranking forced us to choose one candidate over another. But adding the rank scores was wrong mathematically: near ties and huge gaps were treated the same in the math.

What analysis could have been done? They could have developed cumulative frequency distributions as shown in the table below for the data displayed in the spatial map:


Look at this table. Candidate B is still the clear leading choice, but compare the analysis of the summed rank scores with what this table shows. Are Candidates C and D closer than Candidates B and C, as the ranked sums indicated? No. Candidate C clearly is second, and D is certainly more distant. This analysis, which is mathematically correct for ordinal data, does show B and C as close contenders. But it still misses the fact that four of the five members felt those two candidates were almost the same.
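A cumulative frequency distribution of this kind can be computed directly from the rankings. This sketch again uses hypothetical member rankings consistent with the article’s summed totals, and counts, for each candidate, how many members placed them in the top 1, top 2, and so on.

```python
# Hypothetical rankings consistent with the article's totals; 5 = best, 1 = worst.
rankings = [
    {"A": 3, "B": 5, "C": 4, "D": 1, "E": 2},
    {"A": 2, "B": 5, "C": 4, "D": 3, "E": 1},
    {"A": 2, "B": 5, "C": 4, "D": 3, "E": 1},
    {"A": 1, "B": 5, "C": 4, "D": 2, "E": 3},
    {"A": 2, "B": 5, "C": 1, "D": 4, "E": 3},
]

# Cumulative counts: how many members placed each candidate at rank n or better.
# A score of 5 means 1st place, so "top n" means a score of at least 6 - n.
for cand in "ABCDE":
    cum = [sum(1 for r in rankings if r[cand] >= 6 - n) for n in range(1, 6)]
    print(cand, cum)
```

With these hypothetical data, B is in every member’s top 1, C is in four members’ top 2, and D only reaches three members’ top 3; counting, unlike summing, is a legitimate operation on ordinal data.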

Another alternative would have been to use a fixed sum (also known as fixed allocation) question format. In this case, we would have been told to allocate 100 points among the five candidates based upon our preferences. If we felt that all five were equal, then we would have given each 20 points. But if we felt one candidate was better we should allocate more than 20 points to that candidate. However, then some other candidate(s) would have to get lower scores. Our allocations must add to the fixed sum of 100 points.

Based on the spatial map above, the scores might have looked something like this:


These scores are shown in the table below with the averages for each candidate.


Notice what this analysis would show: Candidates B and C are neck and neck. (I used round numbers to simplify the display, so maybe not quite so neck and neck.) Since the fixed sum question format captures relative distinctions and has interval properties, the data can legitimately be added and averaged, and the results better reflect the true underlying relationships.
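The fixed sum averaging is straightforward to sketch. The 100-point allocations below are hypothetical round numbers, chosen only so that B and C come out close while member 5 still favors D and dislikes C, as in the article’s story.

```python
# Hypothetical fixed-sum allocations: each member spreads 100 points across A-E.
allocations = [
    {"A": 15, "B": 30, "C": 29, "D": 10, "E": 16},
    {"A": 14, "B": 31, "C": 30, "D": 11, "E": 14},
    {"A": 16, "B": 29, "C": 28, "D": 12, "E": 15},
    {"A": 15, "B": 30, "C": 29, "D": 11, "E": 15},
    {"A": 15, "B": 30, "C": 9,  "D": 31, "E": 15},  # Member 5: favored D, disliked C
]

# A validity check the questionnaire itself should enforce:
# every allocation must add to the fixed total of 100 points.
assert all(sum(a.values()) == 100 for a in allocations)

# Because fixed-sum data have interval properties, averaging is legitimate.
averages = {c: sum(a[c] for a in allocations) / len(allocations) for c in "ABCDE"}
print(averages)
```

Here the gap between B and C reflects only member 5’s low allocation, which is exactly the kind of nuance the summed rank scores destroyed.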

The point of this example is that you should think about the type of analysis you want to present to the decision makers and then choose question types that will properly support that kind of analysis. Performing illegitimate mathematical operations on a data set can lead to incorrect decisions!

When to Use Each Survey Question Type

I started this article mentioning the five types of survey questions. When should each be used?

Open-ended text questions. Use these to generate a more detailed answer or to gather information that respondents feel has not been covered in the closed-ended questions. Use them sparingly, since they are high in respondent burden, administrative burden, and analytical burden.

Multiple choice questions. Use for demographic questions and other issues where the respondent is to select among a set of response choices, either selecting the best option or selecting all that apply.

Ordinal scale questions. The theme in ordinal questions is that the response options lie on some ordered continuum. Forced ranking questions, such as the one outlined above, are prone to respondent error. Another type of ordinal question asks the respondent to choose an answer from an ordered set, such as when Goldilocks was asked about the temperature of the porridge: too cold, just right, or too hot.

Interval scale questions. These are the most common type of survey question. You have probably seen the following scales used: satisfaction, likelihood, strength of agreement, and frequency. But the survey questionnaire designer can create a scale to match the dimension she wishes to measure. As mentioned, the critical differentiator between ordinal and interval scale questions is that equal intervals exist between each adjoining pair of response options. Many researchers believe that the typical 1-to-5 or 1-to-10 scalar questions are not truly interval but are in fact ordinal. That will be addressed in another article.

Ratio scale questions. When respondents are asked to tell us some physical measure, such as income, years of education, or how long their phone call was on hold, these are ratio scale questions. The data they provide have a true zero. (On an interval scale, a zero response option is simply arbitrary; zero income, for example, is real.) Frequently, we solicit ratio data with what appears to be an ordinal scale whose response options are presented in ranges. For example, we might ask for the number of years of education the person has achieved by asking the respondent to check one of the following options: 1) 1 to 12 years, 2) high school degree, 3) associate’s degree, 4) bachelor’s degree, 5) graduate degree. While the question looks ordinal, we could treat the data as ratio in our analysis.

Why present ranges? First, it’s faster for the respondent to answer the question, lowering respondent burden. Second, it’s less invasive to ask someone to check an income range, for example, than to ask them their annual income. Would you tell a stranger your income level? Probably not, but you might be willing to check a box that says your income is $50,000 to $75,000 per year.
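One common way to treat range responses as ratio data is midpoint coding: replace each checked range with its midpoint before averaging. The article does not prescribe this technique, so the sketch below, including the bracket boundaries and the value assigned to the open-ended top bracket, is purely illustrative.

```python
# Hypothetical income brackets mapped to midpoint values. The top bracket is
# open-ended, so its "midpoint" is an assumption the analyst must document.
midpoints = {
    "under $25,000": 12_500,
    "$25,000 to $50,000": 37_500,
    "$50,000 to $75,000": 62_500,
    "$75,000 to $150,000": 112_500,
    "over $150,000": 200_000,  # assumed value for the open-ended bracket
}

# Hypothetical checked responses from five respondents.
responses = ["$50,000 to $75,000", "$25,000 to $50,000", "$50,000 to $75,000",
             "over $150,000", "$75,000 to $150,000"]

# Midpoint coding turns checked ranges into ratio-scale numbers we can average.
incomes = [midpoints[r] for r in responses]
mean_income = sum(incomes) / len(incomes)
print(mean_income)
```

The trade-off is a loss of precision within each bracket, which is the price paid for the lower respondent burden the ranges buy.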

Remember the goal: you are trying to solicit information from the respondent that meets your research objectives, but without creating undue respondent burden. That is the term surveyors use to describe the amount of effort we place on the respondent to complete the questionnaire. Why are we concerned about respondent burden? For two reasons. First, the higher the burden, the lower the survey response rate is likely to be. Second, the higher the burden, the greater the likelihood of confusion and errors by the respondent. Fixed sum (or fixed allocation) questions, while I like how they force respondents to consider trade-offs, are high in respondent burden. They should be used judiciously and only after you’ve involved the respondent in the subject matter of the survey. I would never put one of these as the first question in a survey!

There is far more to cover on the impact of question format upon data analysis, and on other impacts on your survey program. Hopefully, this article opened your eyes to that impact. Your choice of question types and question formats should not be made haphazardly or capriciously. That decision will drive the analysis portion of your project, assuming you want to perform the analysis correctly. (Duh…) We’ll delve into the selection of question formats more fully in other articles.