Generate Actionable Survey Data
When performing almost any customer research, but especially when conducting customer satisfaction surveys, a key goal is to create “actionable data.” Why actionable? The end result of the research should be input to some improvement initiative or change in business practices. If that’s not the goal, then why is the research being performed? (Hopefully, it’s not just to get a check mark on some senior manager’s Action Item list!)
However, mass-administered surveys may not provide the detailed, granular data needed for taking action. Well-designed survey instruments use mostly structured, closed-ended questions. That is, these question formats ask the respondent to provide input on an interval scale, for example, a 1-to-5 scale, or by checking the items that apply to them in some topical area. Closed-ended questions have two main advantages:
- Ease of analysis for the surveyor. Since the responses are a number or a checkmark, there should be no ambiguity about what was answered. (Whether the respondent interpreted the question correctly is another issue.) The surveyor can mathematically manipulate the codified responses very easily. This is in contrast to open-ended questions that provide free-form textual responses. Analyzing all that text is very time-consuming and subject to interpretation by the survey analyst. (It’s also darn boring — but I don’t tell my clients that!)
- Ease of taking the survey for the respondent. A key metric for a survey instrument lies in the degree of “respondent burden.” That is, the amount of effort required of the person completing the survey. Writing out answers is far more time-consuming than checking a box or circling a number. Greater respondent burden → lower survey response rates.
The closed-ended survey questions help paint a broad picture of the group of interest, but they seldom give details on specific issues — unless the survey contains a great many highly detailed questions, which increases respondent burden through sheer survey length. Surveys typically tell us we have a problem in some area of business practice, but not the specifics of the customer experience that are needed for continuous improvement projects.
So, how can the detailed, actionable data be generated — as part of the mass-administered survey or as an adjunct in a more broadly defined research program? Here are some ways to get better information:
- Think through the research program and survey instrument design. I just mentioned that actionable information can be generated through a detailed survey, one that asks very specific questions. But survey length becomes an issue: longer surveys will hurt response rates. Perhaps your research program can be a series of shorter surveys administered quarterly to very targeted — and perhaps different — audiences.
Additionally, examine any instrument critically to see if unnecessary questions can be eliminated or if questions can be restructured to solicit the desired information from respondents more efficiently. For example, say you wanted to know about issues or concerns your customers have. A multiple-choice question would identify whether a customer had concerns about the items listed, but not the strength of those concerns. Instead, consider using a scalar question that asks the level of concern the customer has. It’s a little more work for the respondent, true, but not much. Yet you may get data that is far more useful. (A sketch of the two formats follows this list item.)
Survey instrument design is hard work, but it’s better for the designer to work hard than to make the respondent work hard.
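To make the contrast concrete, here is a minimal sketch of the two question formats expressed as plain Python data structures. The field names ("type", "items", "scale") are purely illustrative; they don't come from any particular survey tool.

```python
# A hypothetical sketch of the two question formats, using plain Python
# dictionaries as a stand-in for a survey definition.

# Check-all-that-apply: tells you *whether* a concern exists, not how strong.
concerns_checklist = {
    "type": "multiple_choice",
    "prompt": "Which of the following concern you? (Check all that apply.)",
    "items": ["Price", "Delivery time", "Product quality", "Support responsiveness"],
}

# Scalar battery: the same items, but each one is rated, so the *strength*
# of each concern is captured with only slightly more respondent effort.
concerns_scalar = {
    "type": "scalar_battery",
    "prompt": "Rate your level of concern with each item (1 = none, 5 = severe).",
    "items": ["Price", "Delivery time", "Product quality", "Support responsiveness"],
    "scale": (1, 5),
}
```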
- Judicious use of open-ended questions. As mentioned, an obvious way to generate detailed data is to ask open-ended questions, such as, “Please describe any positive or negative experiences you have had recently with our company.” While some respondents will take the time to write a tome — especially on a web-form survey — those respondents without strong feelings will see this as too much work and give cryptic comments or none at all. Yet their opinions are crucial to forming a broad — and accurate — profile of the entire group of interest.
Novice survey designers typically turn to open-ended questions because they don’t know how to construct good structured questions. In fact, it’s a dead giveaway of a survey designer’s skill level! If you find you have to fall back upon open-ended questions, then you don’t know enough about the subject matter to construct and conduct a broad-based survey. It’s that simple.
Some time ago I received a survey about a professional group that had 11 (yes, eleven!) open-ended questions in four pages. More recently, I received a survey about a proposed professional certification program whose first two questions were open-ended. And this latter survey was done by a professional research organization! Asking several open-ended questions is a surefire way to get blank responses and hurt the response rate.
That said, open-ended questions can generate good detailed data, but use them judiciously. One way they can be used appropriately leads to our next subject.
- Use branching questions in the survey instrument. Frequently, we have a set of questions that we only want a subset of the target audience to answer, either because of their background or because of some experiences they have or have not had. Branching means that respondents are presented certain questions based upon their responses to a previous question, which determines the “branch” a respondent follows. Branches are easiest to implement in telephone and web-form surveys, where the administrator controls the flow of the survey, and most difficult to implement in paper-based surveys. (Don’t even think about using branching on an ASCII-based email survey.) Some survey programs call these “skip and hit” questions.
Branching can shorten the survey that a respondent actually sees, allowing for targeted, detailed survey questions without unacceptable respondent burden. For example, if a respondent indicates he was very unhappy with a recent product or service experience, a branch can then pose some very specific questions.
As alluded to above, the branch may lead to an open-ended question. But beware! An audience member at a recent speaking event of mine had encountered a survey where a pop-up window appeared with an open-ended question whenever he gave a response below a certain level, say 4 on a 1-to-10 scale. He found these pop-ups annoying — don’t we all! So, he never gave a score below 5. Talk about unintended consequences! The survey designer created a false upward bias in the survey data!
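Here is a minimal sketch of that kind of branching logic, written as a plain Python flow for illustration. Real survey platforms configure skip logic declaratively; the function names, prompts, and thresholds below are hypothetical.

```python
# A hypothetical sketch of branching ("skip") logic for a web-form survey.
# The flow, thresholds, and prompts are illustrative only.

def ask(prompt):
    """Stand-in for however the survey platform actually collects a response."""
    return input(prompt + " ")

def run_satisfaction_branch():
    score = int(ask("How satisfied were you with your recent purchase? (1-10)"))

    if score <= 4:
        # Branch only unhappy respondents into the detailed follow-up,
        # so the survey stays short for everyone else.
        ask("Which part of the experience fell short? (product/delivery/support)")
        # The open-ended question sits at the end of the branch, presented
        # inline rather than as a pop-up; the pop-up version trained at least
        # one respondent to never score below 5, biasing the data upward.
        ask("Briefly, what happened?")
    elif score >= 9:
        ask("What did we do especially well?")
    # Middling scores (5-8) skip the follow-ups entirely.

if __name__ == "__main__":
    run_satisfaction_branch()
```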
- Use filtering questions in the analysis. When we get a set of survey data, we will always analyze it as a whole group, but the real meat may be found by analyzing the data segmented along some variables. These filtering questions may be demographic variables (e.g., size of company, products purchased, years as a customer, age, and title). Those demographic data could come from questions on the survey, or they could come from our database about those whom we just surveyed. (This presumes that the survey is not anonymous. If it is, then we have no choice but to ask the questions. But, again, beware! Demographic questions are an imposition on the respondent. Too many of them will hurt the response rate.)
The filtering questions may also be response results from key questions on the survey. Just as the response on a question, such as satisfaction with problem resolution quality, could prompt an open-ended branching question, the results of that question may also be used to segment the database for analysis. Basically, you’re looking for correlations or associations across the responses to multiple questions to see if some cause-and-effect relationship can be identified, as sketched below. (Multivariate statistical procedures could also be used.)
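As an illustration, here is a small sketch of filter-based analysis using pandas. The data frame below is made-up sample data standing in for a survey export, and the column names are hypothetical; substitute your own.

```python
# A sketch of filter-based segmentation analysis with pandas.
import pandas as pd

# Made-up sample responses; in practice this would come from your survey
# export (e.g., pd.read_csv on the response file).
df = pd.DataFrame({
    "satisfaction":       [9, 3, 7, 2, 8, 4, 10, 6],  # 1-10 overall score
    "problem_resolution": [5, 1, 4, 1, 5, 2, 5, 3],   # 1-5 key question
    "years_as_customer":  [1, 5, 1, 5, 1, 5, 1, 5],   # demographic filter
})

# Whole-group view first.
print("Overall satisfaction:", round(df["satisfaction"].mean(), 2))

# Then segment along a demographic variable.
print(df.groupby("years_as_customer")["satisfaction"].agg(["mean", "count"]))

# Use a key question as the filter itself: respondents unhappy with
# problem resolution vs. everyone else.
unhappy = df[df["problem_resolution"] <= 2]
print("Unhappy with resolution:", round(unhappy["satisfaction"].mean(), 2))

# A quick association check across questions (Pearson correlation);
# multivariate procedures would go beyond this.
print(df[["satisfaction", "problem_resolution"]].corr())
```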
- Conduct pre- or post-survey interviews. Perhaps the best method of getting more actionable data is to expand the definition of the research program to include more than just a mass-administered survey. Every research technique has its strengths and its weaknesses. Surveys are good at painting profiles of some group. Interviews and focus groups (also known as small group interviews) are very good at generating detailed, context-rich information. These data can help you understand cause-and-effect relationships by getting the full story of what’s behind the respondent’s feelings. I’ll talk more about these in a future article.
Such research techniques are frequently used at the start of a research program to understand the field of concern. This understanding then allows for a better-designed survey instrument, but the context-rich research also provides valuable information about the subject area in its own right. There’s a double benefit to this research. But there’s nothing that says these interviews can’t be used at the back end of the research program as a follow-up to the mass-administered survey. Surveys frequently pose as many new questions as they answer, and follow-up interviews are a method for answering them. In fact, you might be able to generate the interview list from your survey itself. When you pose an open-ended question, offer to contact the person to talk about their issue in lieu of having them write in their comments. In essence, that creates an opt-in list of highly motivated people.
Unfortunately, no silver bullet exists for getting actionable customer feedback data. Research programs have inherent trade-offs, and this article outlined some of the critical ones.