What Survey Mode is Best?

Summary: What survey mode is best to use? A recent Cint survey seemed to answer that question. 91% (sic) of respondents said they prefer web-based, smartphone, or SMS surveys. However, this survey suffers from a severe sample bias, given that the survey was likely conducted electronically. Just as response rates cannot be generalized across surveys done in different industries in different countries, so too the best survey mode or method cannot be generalized. Know your group of interest and determine the best survey mode for that group.

~ ~ ~

Ever seen survey results that are too good to believe? Well, as the old adage goes, if it’s too good to believe, then you shouldn’t believe it. I call these — with all due deference to Homer Simpson — “D’oh! Results.” Even Homer in hindsight knew he should have seen that coming. Yet, it seems that many people accept survey results without critically thinking about them. I have some upcoming articles on the blind acceptance of the Net Promoter Score® and the Customer Effort Score, but I recently came across an example that displays a survey administration bias that should be obvious to even a casual reader.

Cint recently published the results of a market intelligence study of theirs. (Note: Cint removed the press release, otherwise I would link directly to it.) The study had several interesting findings, among them a discussion of survey methodology preferences by the public.

In terms of methodology, new technology was by far the most popular means of undertaking market research, with over 91% stating their preference as “smartphone”, “web” and “SMS”. Only 4% of those surveyed would choose to undertake market research by mail as their first choice, and just 1% would like to be surveyed over the telephone.

Wow, 91%!! Nowhere in the article is the survey’s methodology presented, but the first sentence of their online article gives a pretty good hint.

17 January 2012: A survey out today from global provider of smart solutions for gaining market insight, Cint…

I sent an inquiry to Cint asking about their methodology, but to date I have received no response. (I actually find that fact alone very interesting for a firm engaged in customer engagement.) Anyone want to make a bet on the methodology that Cint used in their study? I am pretty confident that they surveyed using a “smart technology,” that is, a web survey.

If so, isn’t it obvious that people responding to a web-based survey would say they preferred to be surveyed via the web? D’oh! Isn’t 91% an awfully high percentage? That number jumped out at me as so large that it didn’t pass a face validity test. It immediately made me skeptical of the results and made me want to learn more.

Sample bias such as this is not unusual. Years ago, I recall seeing a survey conducted at a conference on CRM technologies asking attendees about their senior management team’s involvement in CRM initiatives. Isn’t it senior management that approved sending someone to the conference? The administration bias is obvious. D’oh!

In fact, I am guilty of a similar bias. My Survey Workshop series spans three days, and I wonder whether I should deliver the classes Monday to Wednesday, Tuesday to Thursday, or Wednesday to Friday. I have a personal preference, but I really want to do what’s best for my market. So, on the feedback survey at the end of my class, I decided to ask which set of days is preferable. Guess what I find from this convenience sample, a common source of sample bias? If the class is a Monday-to-Wednesday class, that’s the preference. If it’s a Tuesday-to-Thursday class, that’s the preference. Etc. I have an obvious sample bias. D’oh! But at least I know that and can interpret my results in that light.

The Cint folks make no mention of the shortcomings of this aspect of their study’s findings. And it seems that others miss this critical point.

An American Express Open Forum article discussed the Cint study and included this synopsis of the impact of the survey methodology findings:

Get online. Consumers overwhelmingly prefer tech when it comes to surveys; 91 percent cite “smartphone”, “Web”, or “SMS” as their preferred methods. Just 4 percent like mail in surveys, and a mere 1 percent want to be surveyed by phone.

A Business Wings review of the article included:

If, after reading this, your business is now more likely to consider undertaking market research, you may wish to understand the preferred methodology. The clearest lesson is not to call people, as only 1% of those surveyed said they liked being contacted by telephone, compared to 91% who prefer to offer opinions via smartphone apps, SMS or online surveys.

Neither writer looked critically at how Cint’s survey methodology would have shaped these findings, nor questioned whether the astonishing 91% was realistic. D’oh!

As a university professor of 25 years, I can tell you that academics for years have stressed the need to build “critical thinking skills” among our students. We still have much work to do. D’oh!

Just as response rates cannot be generalized across surveys done in different industries in different countries, so too the best survey mode or method cannot be generalized. Know your group of interest, or “population” in surveyor lingo. What’s the best survey mode for your group?

So how could you design a study to properly measure survey methodology preferences? Here’s an example.

Snap Surveys and Silver Dialogue conducted a study on multi-mode surveying methods. [Unfortunately the study is not available online anymore.] They queried visitors at National Trust properties in the United Kingdom. (These are historic properties, many of which I have visited on trips to Britain. I highly recommend them!) Visitors were asked by an interviewer upon entering the property whether they would be willing to take a survey after their visit, and how they would prefer to be surveyed.

Here are their findings, quoted verbatim from their study:

  • The most preferred questionnaire modes were paper/post 30% and web 26%.
  • Point of visit methods, e.g. paper and kiosk, had far higher response rates (91%) than post visit, e.g. web and smartphone.
  • Using a multi-mode approach increased the overall visitor survey response rate to 61% from around 45% achieved in previous paper based visitor surveys.
  • A multi-mode approach reached a wider audience than previous surveys.
  • Asking people to “pledge” to complete a survey after their visit helped to boost response rates.

Notice how these results conflict with Cint’s. Web garnered only a 26% vote. The approach used here identified the best survey mode for this transactional survey setting, but it may not work for all settings. I certainly would not apply their findings to other survey programs, and they didn’t claim you could. Now how confident do you feel in following Cint’s prescription?

Further, note that Cint queried about what survey mode people said they “preferred”. They didn’t conduct empirical research to measure the impact upon response rates of meeting someone’s preferred survey mode, as Snap did. The impact of survey mode upon response rates is what we, as surveyors, really want to know, along with the impact of survey mode upon the scores respondents provide. (I am currently co-authoring a paper on this latter topic. Stay tuned. It’s pretty dramatic.) Self-reported preference for survey mode isn’t the outcome of interest; the survey mode that leads to the highest response rate is the outcome of interest — along with the most valid data.

Here’s another example of the distinction between survey mode preference and response rates. If students at universities were queried about how they would like to complete the end-of-semester course evaluations, I suspect they would want a web survey as opposed to the traditional scannable paper surveys administered in class.

Two places where I’ve taught, the Harvard Certificate in Management Program and Babson College, moved from paper surveys administered in class to a web-based survey method. Hey, ya gotta use these new smart technologies, right? Response rates fell by half. And they were surprised. And these are academics who are schooled in research design! D’oh!

Automated Phone Surveys

Surveys can be conducted using many different technologies. Phone surveys, mail surveys, and web surveys are the best-known means. A more recent addition to this list is the IVR survey, also known as the automated phone survey. IVR stands for “Interactive Voice Response.” When you call a company for service, your phone call — and all those annoying phone menus — is managed by an IVR, and IVR technology provides the capability to conduct surveys. In this article I will examine the strengths and weaknesses of the IVR survey approach, and I will use a recent experience with a Sears IVR survey measuring customer satisfaction to illustrate some points.

What is an IVR survey?

An IVR survey shares many characteristics with a telephone survey since the survey is conducted over the telephone.  But rather than have a live interviewer conduct the survey, the survey is delivered using a prerecorded script. The respondent is asked to enter his responses using the keypad of the telephone and by verbalizing free-form comments. (IVR survey programs can even analyze the tone of the responses.) Since there’s no interviewer to clarify questions, the script must be clean and anticipate all the possible responses someone being surveyed may have.
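To make that point concrete, here is a minimal sketch of the logic behind a single IVR rating question. It is purely illustrative; real IVR platforms have their own call-flow tools, and the prompt-playing and keypress-reading functions below are hypothetical placeholders.

    # Hypothetical sketch of one IVR rating question, written in Python for
    # illustration only. play_prompt and get_keypress stand in for whatever
    # the IVR platform actually provides.
    VALID_KEYS = {"1", "2", "3", "4", "5"}   # a simple 1-to-5 rating scale

    def ask_rating(play_prompt, get_keypress, max_attempts=3):
        """Play a prerecorded prompt and return a validated 1-to-5 rating, or None."""
        for _ in range(max_attempts):
            play_prompt("Please rate your overall satisfaction from 1 to 5.")
            key = get_keypress()             # a single digit from the phone keypad
            if key in VALID_KEYS:
                return int(key)
            play_prompt("Sorry, that was not a valid choice.")  # anticipate bad input
        return None                          # give up gracefully after repeated errors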

IVR surveys are most commonly used to measure a customer’s reactions to service just delivered through a call center environment, known as a transactional survey. You may have had this experience. When you call some company before being connected to an agent, you may be asked if you would be willing to take a short survey at the end of the call. When the service is complete and the agent disconnects from the caller, the caller is automatically transferred into the survey program. It’s important that the agent not know who is going to be surveyed to keep the measurements clean.

IVR surveys need not be limited to this situation. You could invite a respondent group to take an IVR survey by calling them or by emailing them a toll-free number to call to take the survey.

My recent experience with a Sears IVR survey shows the IVR method’s strengths and weaknesses.

What are the Strengths of the IVR Survey Method?

Speed of Survey Data Collection. The most striking advantage of the IVR survey method is how quickly a company can collect feedback from an individual. Within 5 minutes after a service interaction is finished, the company can get feedback from the customer about her experiences. No faster means of collecting feedback exists.

Service Recovery Flag. That completed survey from the respondent gets analyzed immediately by the IVR survey program. Management can establish criteria to flag a specific survey response for review. For example, if someone provides a very low rating for overall service quality, information about that service transaction and the subsequent survey can be forwarded to a designated manager. The customer could get a call immediately to address the issue. The immediacy helps ensure that the customer is by the phone to take the call.
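The flagging rule itself can be very simple. Here is a minimal sketch, with a made-up threshold and made-up field names, of how such a criterion might be expressed:

    # Hypothetical service-recovery rule (Python, for illustration only):
    # flag any completed survey whose overall-quality rating falls at or
    # below a threshold chosen by management.
    LOW_SCORE_THRESHOLD = 2   # on a 1-to-5 scale; the threshold is management's call

    def flag_for_recovery(response, notify_manager):
        """Route very low ratings to a designated manager for immediate follow-up."""
        rating = response.get("overall_quality")
        if rating is not None and rating <= LOW_SCORE_THRESHOLD:
            notify_manager(
                transaction_id=response.get("transaction_id"),
                customer_phone=response.get("caller_id"),
                rating=rating,
            )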

Lower Survey Response Bias. Response bias results from a respondent’s predispositions when they enter into a surveying routine. Common ones are acquiescence, where the respondent gives the answer he feels the surveyor wants, and auspices, where the respondent gives an answer to please the interviewer. Since the respondent is interacting with a machine, the likelihood of any response bias is much reduced. We may not understand the script, but few of us will be intimidated by the process. However, the flip side may prove true should a low score result in an immediate follow-up phone call by a designated manager. That call could prove intimidating to some — and make them leery of taking future IVR surveys.

Lower Cost for Larger Survey Programs. IVR surveys have a cost model that has a high fixed cost, but lower variable cost. The script must be developed and recorded, which can be costly. But the cost to deliver each individual survey is relatively small.
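To see how that cost model plays out, here is a rough break-even calculation. The dollar figures are invented purely for illustration; plug in real quotes to compare methods for your own program.

    # Illustrative break-even arithmetic for the IVR cost model (all figures invented).
    ivr_fixed = 5000.0        # script development, recording, and setup
    ivr_per_survey = 0.50     # cost to deliver each completed IVR survey

    phone_fixed = 500.0       # minimal setup for live-interviewer phone surveys
    phone_per_survey = 10.00  # interviewer time per completed survey

    for n in (100, 500, 1000, 5000):
        ivr_total = ivr_fixed + ivr_per_survey * n
        phone_total = phone_fixed + phone_per_survey * n
        print(f"{n:>5} surveys: IVR ${ivr_total:,.0f} vs. live phone ${phone_total:,.0f}")

    # With these made-up numbers, live interviewers are cheaper for small samples,
    # but the IVR's low per-survey cost wins once volume grows into the hundreds.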

What are the Weaknesses of the IVR Survey Method?

Lack of Anonymity. Like a telephone survey, no anonymity exists. We may feel confident that our responses will be treated confidentially, but heck, they have our phone number. (If you call them on a toll-free number, they get your phone number, even if you block caller ID.) They know who we are. Thus, any survey research purpose that relies heavily on anonymity, such as an employee survey, is best done through some other method.

Keep it Very Simple. Again, like telephone surveys, the questionnaire or script must be kept simple. Complex survey questions that need to be administered with the visual aid of a printed page or web page cannot use an IVR survey approach. For the most part, IVR survey designers are limited to interval-rating scale questions, that is, questions that ask for a response on a 1-to-5 scale or something similar. (The much-touted Net Promoter Score® survey method cannot be done by IVR since it uses an 11-point, 0-to-10 scale.) Checklist (or categorical) questions are difficult to administer since all the options must be read to the respondent.

Narrow Scope. When an IVR survey is conducted as a transactional survey at the end of an interaction with a contact center, e.g., a phone call, the only thing that can really be evaluated in the survey is what happened during the immediate interaction. Yet, that interaction is likely one part of a broader transaction, which is composed of a series of interactions.

In the case of the Sears customer survey, they used the IVR survey not at the end of my call with their 800-4MyHome call center; rather, they used it to measure my feelings about the technician’s visit to my home. This shows that the IVR survey method can be used to measure the broader transaction, although Sears, through the stupidity of its functional silos, chose to measure just the transaction involving the technician.

Telephone Turn-off. Some people just dislike telephone interaction, including telephone surveys. We can thank telemarketers, especially those who use surveys as a pretense for a sales call. Some people also don’t like interacting with a machine. And those of us who have grown to hate IVR phone trees — and I count myself fully in that group — may also hate anything that involves an IVR.

In summary, IVR surveys can be a very useful tool to get very quick feedback for transactional surveys, and they are flexible enough to be used in other situations. However, care must be taken in the survey questionnaire design, and the Sears IVR survey definitely did not take that care.

An Old Dog’s New Trick: Postal Mail Surveys

Cutting edge is where it’s at. We’re all trained to believe that. Yet many times the new tools may not best address the core needs or requirements of the business process. New and high tech does not necessarily mean better. We’ve all seen solutions looking for problems, and many times when a legitimate new solution is found, we dismiss the old solution because it’s not a sexy, cutting edge approach. In the world of customer research — in particular customer satisfaction surveys where I spend much of my professional life — one old dog, administering surveys by postal mail, might just perform the best tricks.

The web has automated the process of conducting surveys, lowering the cost through its scalability and increasing the quality of the surveying effort by the quickness of data collection. It has also led to a higher response rate — or so we think — or so maybe it once did. Most people I talk to about surveying want to use a web form administration method. There are strong merits for the method, which seems like a win-win-win solution.

In my workshops on surveying practices, I present the pros and cons of each survey administration method, and one of my students, Pamela Kellerstrass, Director of Marketing & Client Services for Motor Cargo, decided to conduct her first customer survey using the old dog of postal mail. Motor Cargo is a freight company located outside Salt Lake City, Utah, that specializes in less-than-truckload (LTL) shipments. They had never asked their customers for feedback on the quality of service. Pamela didn’t have a huge budget, but she did have some company resources at her disposal. For her customer base, email was a relatively new medium, so Motor Cargo did not have email addresses for all of its customers. This effectively eliminated electronic survey administration, as it would have introduced a serious bias into the results.

Pamela turned to the postal mail approach and applied her ingenuity to maximizing the response rate. The survey was mailed in a 10″x7″ bubble envelope.  Inside was a cover letter printed on stationery with a coffee motif. The letter started, “Help us wake up and smell the coffee!” Also inside the envelope were the 50-question survey instrument, a self-addressed, stamped envelope — and a small package of ground coffee, whose aroma hits the olfactory senses as soon as the package is opened.

The introductory letter explains Motor Cargo’s commitment to customer feedback, crystallized in their motto, “Determined to Listen. Designed to Perform.” It points out the role of their Customer Loyalty Manager and how the results of the survey will be used to improve service. Pamela also promised to “compile the results and report back to you changes we have made as a result of your good feedback.” Finally, she promised to send respondents a company coffee mug.

There are five steps to getting responses to a survey. Get the survey:

  • Noticed,
  • Opened,
  • Read,
  • Completed,
  • Returned. 

Consider how Pamela addressed each of these points. The unusual envelope certainly gets it noticed in the postal mail In-Box — and opened. The envelope’s contents will likely spur the recipient to read the material, and the phrasing of the letter, along with the coffee mug, will help motivate people to complete the survey. The clean layout and design of the instrument also promotes survey completion. Motor Cargo also made a follow-up phone call to the recipient list to gently push people to complete the survey. Finally, the SASE will help ensure it gets returned.

What were the results? Pamela got 182 responses out of 300 invitations, better than a 60% response rate! While we’d like 100% participation, 60% is a very admirable response rate. More importantly, they learned that customers weren’t happy with the process for getting a billing error corrected, a fact the president communicated in a follow-up letter. According to Pamela, “the survey also showed us what we were doing right. We always knew our good drivers were our best asset, but this just confirmed it. What a great opportunity to reinforce with our drivers the importance of the customer’s positive perception and how they are appreciated for their good work.”

Could Motor Cargo have gotten the same response using electronic surveying? Perhaps, but consider how many emails we all get. Do you notice the survey invitations you get buried among the scores of emails in your inbox at the start of the day? Many people tell me they just automatically delete emails from people they don’t recognize. If you don’t get your survey invitation noticed, the rest of your effort is moot. Perhaps the best way to get visibility and 10 minutes of your customer’s mind share is to return to the old dog of postal mail surveys.