Bribes, Incentives, and Video Tape (er, Response Bias)

Summary: Incentives are part of many email requests to take a survey, but recently I saw the practice taken to a shocking extreme: I was blatantly offered a bribe to give high scores. This article covers the pros and cons of incentives. Incentives are known to be a two-edged sword, but this example highlights how they can corrupt survey data. Are the data real or fabricated?

~ ~ ~

Ever had someone try to influence your willingness to answer some survey? Of course you have. Many, if not most, surveys come with some offer to get you to take the survey, but I recently had a truly blatant example that made my jaw drop. As I describe this “incentive,” I’ll place it within a framework that will allow you to think more deeply about the interaction between the drive for higher response rates to reduce non-participation (or non-response) bias and the potential to corrupt the data you intend to analyze.  Response rates and incentives are always a vibrant discussion topic in my survey workshops.

Everybody wants higher response rates, perhaps because response rate is one of the easiest survey factors to understand — even in the C suite. More responses mean higher statistical accuracy. So, surveyors put considerable emphasis upon improving response rates. Perhaps so much that new problems are created.

In addition to better accuracy, getting a higher response rate reduces the non-participation (or non-response) bias. With a low response rate you are less sure that the data collected properly represent the feelings of the overall group of interest, i.e., your population. This bias or misrepresentation in the data set is created by those who chose not to respond. That’s why it’s called a non-response or non-participation bias. Consider elections where certain groups are less inclined to vote. That’s a participation bias.

Many factors influence response rates and thus the non-participation bias:

  • Administration mode chosen: each mode affects demographic groups differently
  • Quality of the survey administration process, especially the solicitation process: effectiveness of the “sales pitch,” guarantee of confidentiality, frequency of survey administrations, reminder notes, sharing of survey findings with respondents
  • Quality of the survey instrument design: length, layout, ease of execution, engaging nature of the questions
  • Relationship of the respondent to the survey sponsor
  • Incentives

While the relationship of the respondent audience to the surveying organization is without doubt the most important driver of response rates, my focus here will be on the impact of incentives. Why? Because in driving for more responses through incentives, we may actually corrupt the survey data by introducing erroneous, invalid data: a response bias.

I recently bought a Windows 7 license from SideMicro.com to run under Boot Camp on my new Mac laptop. The transaction was perfectly fine. Good price. Got the link to download the software. The product key worked as expected. I then got a follow-up survey request by email. Here’s what it said: (See screenshot.)

Please Give us A 5 star Good Review ,And We Will Give You $25.00 Visa Gift Card Or Store Credit Immediately. We Appreciate Your Time .

Wow! Before I dissect this, let me point out a deception. Would you rather have a $25 gift card or a “store credit” — whatever that is? Cash is king. What did I get — and probably everyone else who fell for the hook? A coupon code good for 10% off additional purchases. I actually wrote to them to say that I didn’t appreciate the deception, but I got no response.

They got what they wanted — a high score for the ratings folks. But am I now more or less loyal to them? If I bought from them again, would I comply with their request for high scores? That answer is obvious.

Lesson: The survey process itself can affect loyalty.

Now let me turn to incentives. The goal of an incentive is to motivate people to take the survey who don’t feel strongly and wouldn’t otherwise take the survey. That reduces the non-participation bias.

Incentives fall into two categories depending upon when the incentive is provided to the respondent:

  • Inducements. The incentive is provided with the survey invitation. Example: a survey by postal mail with some money inside. We have no guarantee that the person will take the survey.
  • Rewards. The incentive is provided after the respondent completes the survey.

For the same budget, we can offer a larger reward than inducement, since a reward is paid only to those who actually complete the survey.
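To make that concrete, here is a quick back-of-the-envelope comparison. The budget, invitation count, and response rate below are purely illustrative assumptions, not figures from any actual program.

```python
# Hypothetical numbers: a $1,000 incentive budget, 1,000 invitations, 20% response rate.
budget = 1_000
invitations = 1_000
expected_responses = int(invitations * 0.20)

inducement_per_person = budget / invitations          # paid to everyone invited
reward_per_respondent = budget / expected_responses   # paid only to those who respond

print(f"Inducement: ${inducement_per_person:.2f} per invitation")        # $1.00
print(f"Reward:     ${reward_per_respondent:.2f} per completed survey")  # $5.00
```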

Here’s a key question. Is the incentive motivating people to:

  1. Take the survey and provide legitimate, valid data? or
  2. Just click on response boxes to get the prize, that is, provide bogus, invalid data? (I call this monkey typing.)

An incentive should be a token of appreciation. If the incentive increases response rates, reducing non-participation bias, that’s good. But if it leads to garbage data being entered in the data set — a response bias — that’s bad. More, but bad, data is not the goal! If we offer incentives in our survey programs, we all know in our hearts that some people are just monkey typing, but we want to keep that minimal. I have actually counseled clients to reduce the size of their incentives for fear that larger incentives would promote unacceptable levels of monkey typing.

Consider the incentives you’ve been offered to take a survey. We’ve all had service people try to influence our responses on a survey. Some are subtle and unobtrusive, for example, actually doing their job to a high degree of excellence, which is the behavior a feedback program should create, or politely telling us that we might get a survey about the service received and that they hope we’ve had a good experience. Then there are the blatant pitches, such as the car salesperson who hands you a copy of the JD Power survey you’ll be receiving — with the “correct” answers filled in.

On a scale of 1 to 10 where 10 is outrageously blatant, this pitch from SideMicro gets a 12. I have never seen such an overt bribe to get better survey scores. In an odd way, though, I respect their honesty. They weren’t coy or obsequious. They were direct. “We’ll pay you for a high score.”

Regardless of how we feel about this manipulation, the effect it creates is called a response bias. A response bias occurs when the respondents’ scoring on a survey (or other research effort) is affected by the surveying process, either the questionnaire design or the administration process. In other words, something triggers a reaction in the respondent that changes how they react to the questions and the scores they will give.

Think about the surveys you’ve taken. Did something in the solicitation process or the questionnaire itself affect how you answered a question? The result is that the data collected do not properly reflect the respondents’ true views. The validity of the data set is therefore compromised, and we might draw erroneous conclusions and take incorrect action based on the data.

Response biases exist in many flavors and colors. Conformity, auspices, acquiescence, irrelevance, and concern for privacy are common types. Incentives typically create an irrelevancy response bias, but they could also create an auspices bias.

  • Irrelevancy means the respondent provides meaningless data just to get the prize. You might see this in the pattern of clicks where the radio buttons sit in a table on the right of the screen. You might see straight-line answers — all 1s or 10s — or the famous “Christmas tree lights” diagonal pattern. (A simple screen for these patterns is sketched after this list.)
  • Auspices is where the respondent gives the answer the surveyor wants, which was the bias created by the payoff from SideMicro.
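If you collect ratings electronically, you can screen for the irrelevancy patterns described above. The sketch below is a minimal illustration, assuming each response is simply a list of 1-to-10 ratings; the patterns it flags (straight lines and constant-step diagonals) and the five-item minimum are my own illustrative choices, not a standard.

```python
# Minimal sketch: flag likely "monkey typing" in a batch of 1-10 ratings.
# Assumes each response is a list of numeric answers; thresholds are illustrative.

def flag_suspect_responses(responses, min_items=5):
    """Return indices of responses showing straight-line or constant-step diagonal patterns."""
    suspects = []
    for idx, answers in enumerate(responses):
        if len(answers) < min_items:
            continue
        straight_line = len(set(answers)) == 1                 # all 1s, all 10s, etc.
        diffs = [b - a for a, b in zip(answers, answers[1:])]
        diagonal = len(set(diffs)) == 1 and diffs[0] != 0      # "Christmas tree lights" run
        if straight_line or diagonal:
            suspects.append(idx)
    return suspects

if __name__ == "__main__":
    batch = [
        [10, 10, 10, 10, 10, 10],   # straight-lining
        [1, 2, 3, 4, 5, 6],         # diagonal pattern
        [7, 9, 6, 8, 7, 10],        # plausible, varied answers
    ]
    print(flag_suspect_responses(batch))   # -> [0, 1]
```

A screen like this only raises a flag for review; a varied-but-thoughtless respondent will still slip through, which is another reason to keep incentives modest.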

Lesson: A goal of the survey administration process is to generate a higher volume of valid data that reflect the views of the entire audience. Incentives are a two-edged sword. They may positively reduce the non-participation (non-response) bias, but that could be at the cost of introducing measurement error into the survey data set from bogus submissions.

That is, unless the goal of the survey program is purely for marketing hype, as is the case with SideMicro. They clearly don’t care about getting information for diagnostic purposes.

Lesson: Many survey “findings” we see from organizations are simply garbage due to flawed research processes.

Sampling Error — And Other Reasons Polls Differ

Summary: The wide discrepancies across polling data raise the question of the sources of survey error. This article will discuss the different types of survey errors within the context of political polls. Even those conducting feedback surveys for their organizations can learn some lessons here.

~ ~ ~

During this political season friends and colleagues have asked me about the numerous polls, why they are so different, and what’s the nature of the error in “margin of error”? While I’m not a political pollster, the issues pollsters face are the same that we face in the feedback surveys for our organizations. We have many opportunities to make errors in our survey execution, and I’ll give a brief explanation of them here.

Let’s start out with margin of error, sometimes abbreviated as MOE. You will also hear the terms statistical accuracy and sampling error used to describe the survey’s margin of error. This “error” isn’t an error in the normal sense of the word. Our error here is that we didn’t get data from enough people. While we may think of that error as a mistake of survey design or survey execution, many times because of the size of our population — our group of interest — we cannot get more data. The size of our respondent group “is what it is.” With political polling, the population is huge, but the size of the respondent group is more a factor of the cost of conducting the survey. It’s a trade-off. More responses mean higher accuracy but also higher cost.

We typically hear only the MOE reported, which will be around +/- (plus or minus) 3% or 5%. However, every MOE has a second component: our confidence level. That is, we have a certain confidence level that our results are within some margin of error. By convention, the confidence level is 95%. If a researcher used a different confidence level and didn’t report it, that would be questionable research ethics. Why? Because the two trade off against each other: for the same sample size, a lower confidence level produces a smaller, better-looking MOE.

I like to describe this interplay of confidence level and MOE using a dart throwing example.

If I stand five feet from a dartboard, I have some level of confidence that I could throw a dart within an inch of the bull’s eye. Think of that inch as the MOE. If I back up to ten feet from the dartboard, I would have a lower level of confidence of hitting that one-inch area. (Lower confidence for the same MOE.) However, at ten feet I could have the same level of confidence I had at five feet, but only of hitting a larger area around the bull’s eye. (Same confidence but a larger MOE.)

But what does a statistical accuracy of 95% +/- 3% mean? Technically, if we repeated the same survey 20 times, each with a different sample drawn from the population, we would expect the results from 19 of the 20 — or 95% — to land within plus or minus 3% of the true value for the population. So, if a poll has two candidates within 2% and the MOE is +/- 3%, you’ll hear it called a “statistical dead heat.”
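For those who want the arithmetic behind the reported MOE, here is the standard textbook formula for a proportion from a simple random sample at the conventional 95% confidence level (z = 1.96), using the worst case p = 0.5. Real polls layer design effects and weighting on top of this, so treat it as a rough sketch rather than how any particular pollster computes its number.

```python
# Margin of error for a proportion: MOE = z * sqrt(p * (1 - p) / n).
# z = 1.96 corresponds to the conventional 95% confidence level.
import math

def margin_of_error(n, z=1.96, p=0.5):
    """Worst-case (p = 0.5) margin of error for a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

if __name__ == "__main__":
    for n in (400, 1067, 2400):
        print(f"n = {n:5d}  MOE = +/- {margin_of_error(n):.1%}")
    # Roughly +/- 4.9%, +/- 3.0%, and +/- 2.0% respectively: about 1,067 responses
    # gets you the familiar "+/- 3%, 19 times out of 20."
```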

So how come multiple polls with similar accuracies have results that lie beyond the MOEs, for example one poll showing candidate X at 42% and another poll showing candidate X at 49%, each with an MOE of +/- 3%? The differences probably lie in other errors or biases that are confronted in surveying. These errors fall into four broad areas:

  • Errors from Survey Instrumentation Bias
  • Errors from Survey Administration Bias
  • Errors from the Determination of the Research Objectives
  • Errors in the Survey Data Analysis

Instrumentation bias results from the design of the survey instrument or survey questionnaire. The most common form of this bias comes from the wording of the questions. Questions might have ambiguous wording, be double-barreled, lack a common benchmark, lead with an example, use loaded wording, or rely on poorly designed scales. Any of those could cause the captured data to misrepresent respondents’ true views.

For example, let’s look at a Washington Post-ABC News poll from late August 2012. One survey question used an “anxious scale” that could easily create ambiguity. I have never seen an “anxious scale” used — and for good reason. “Anxious” can have multiple meanings. In high school, I was anxious for the school year to end, but I was also anxious about final exams.

I could be anxious for Mitt Romney to become president, but I could instead be anxious about Mitt Romney becoming president. Ditto for Barack Obama. “Anxious” here could be interpreted as, “I can wait!” or “I have anxiety about it.” Those are very different interpretations of the word “anxious.”
Remember, this was a telephone survey, so nuanced meaning in the wording can’t be discerned by rereading. Additionally, note the syntax in the question: “…how do feel about how…” This horribly contorted construction increases the likelihood of someone misunderstanding the question.

Comparisons between surveys can also be dubious because different surveyors use different scale lengths, i.e., the number of points or options in the response set. How can you then compare the results across surveys? You really can’t. If the question is a binary choice — are you voting for Obama or Romney? — the cross-survey comparisons are more legitimate, but what if one survey includes the Libertarian and Green candidates? That muddies the comparisons.

Question sequencing also matters. If a survey asks a bunch of questions about a candidate (or a product), and then asks an overall assessment question, that assessment is colored by the preceding questions. Sometimes these lead-in questions are just straw-men, leading questions with loaded wording that are meant to prompt negative or positive thinking.

All of these sources of instrumentation error can and will happen even in a professionally designed, politically neutral survey, but they can also be used by surveyors to doctor the results to get the findings they want.

Administration bias results from how the survey is actually conducted. Here, we are not necessarily dealing with errors of execution. Survey administration decisions confront a series of trade-offs where there is no one right approach. All survey administration methods have inherent biases that we attempt to control.

Telephone surveys tend to get higher scores than other forms of surveying, especially when posing questions asking level of agreement with a series of statements. But telephone surveys also tend to get people to respond who don’t have strong feelings, which reduces non-response bias.

Non-response bias is caused by people not participating and thus we don’t have data from them. If the non-participants differ structurally from the participants, then the data collected aren’t representative of the overall group of interest. How to measure non-response is a quandary for surveyors since it’s caused by people who don’t want their views measured.  That’s quite a Catch-22!

If we look at the political polls in detail, which are typically done by telephone, we see respondent groups that include people who are not registered to vote and people who indicate they aren’t likely to vote. (The New York Times’ polls typically include 10-15% unregistered voters in their respondent group.) If a survey used other survey methods, such as web form surveys, these people would be far less likely to take the survey, increasing the non-response bias. At issue is whether the purpose of the poll is to understand the views of the general populace or to predict the election. If we’re unclear about this, then we have introduced an error from having a poorly defined set of research objectives. This error I find to be very common in organizational feedback surveys.

The only certainty on which all surveyors would agree is that the lower the response rate (or compliance rate for telephone surveys), the greater the likelihood of non-response bias.

Telephone surveys do run the risk of introducing interviewer bias. If every interviewer doesn’t deliver the survey script identically, then we run the risk of the data not reflecting the actual views of the respondent group. One of the polling companies, Rasmussen, uses interactive voice response (IVR) surveys, which are recorded scripts to which respondents enter their feelings via the phone keypad.

While interviewer bias is eliminated, IVR surveys can introduce a sample bias. The person who takes the IVR survey is probably the person who answers the phone. Is that really the person whose views we want? It could be a 13-year-old. Surveying with live interviewers allows for better screening of the respondents to get a more proper sample. In fact, they may not talk with the person who answers the phone, but instead ask to speak to the adult in the household who has the next birthday. This practice helps ensure a random selection of people to take the survey.

Most polls present the results for “likely voters” as well as for all respondents. How “likelihood to vote” was determined (or “operationalized”) by each poll can make a large difference. Did the survey simply ask, “How likely are you to vote in this year’s election?” or did they ask for whom the respondent voted in the last one or two elections and classify the respondent based upon whether they answered “did not get the chance to vote”?  (The “did not vote” option is always phrased in a neutral, non-judgmental manner.)

If you look at the poll results from, say, the NY Times polls, you will see a self-reported, very high likelihood to vote, typically over 80%. However, the actual percentage of registered voters who will vote will be somewhere around 60% to 65%.

This dichotomy in how the data are analyzed highlights another type of error in play that is caused in part by the survey administration and in part by the survey instrument: response bias. Response bias is the bias or predisposition that the respondent brings to the process that the survey may activate. We are taught that voting is a civic duty, so people are likely to say they intend to vote when they don’t. That’s why the pollsters who simply ask, “How likely are you to vote” go to the next level of trying to assess the enthusiasm of the respondent since that’s a better indicator of actual voting likelihood.  But is asking about past voting activity a better indicator of voting likelihood? There’s disagreement on this.

Both response bias and non-response bias will be present to some degree in every survey.  We can only attempt to minimize those biases.

But the survey data analysis can also introduce error into how readers perceive the results. What if the demographics (such as income, age, race, and gender) of our respondent group do not match the known demographics of the population? Here, the pollsters will perform statistical adjustments, which are a complex form of weighted averages, to make the respondent group reflect the actual group.
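The sketch below shows the simplest version of that kind of adjustment: post-stratification weighting on a single variable so the sample’s group shares match known population shares. Pollsters typically weight on several variables at once (often by raking), and the age groups, shares, and candidate choices here are hypothetical, so this only illustrates the mechanics.

```python
# Minimal sketch of post-stratification weighting on one variable (age group).
# The category shares and sample composition are hypothetical.

def poststratify(responses, population_shares):
    """Attach a weight to each response so sample group shares match population shares."""
    counts = {}
    for r in responses:
        counts[r["age_group"]] = counts.get(r["age_group"], 0) + 1
    n = len(responses)
    weights = {g: population_shares[g] / (counts[g] / n) for g in counts}
    return [dict(r, weight=weights[r["age_group"]]) for r in responses]

def weighted_share(responses, candidate):
    total = sum(r["weight"] for r in responses)
    votes = sum(r["weight"] for r in responses if r["choice"] == candidate)
    return votes / total

if __name__ == "__main__":
    sample = (
        [{"age_group": "18-29", "choice": "X"}] * 10 +
        [{"age_group": "30+", "choice": "X"}] * 30 +
        [{"age_group": "30+", "choice": "Y"}] * 60
    )
    # Suppose the population is 30% aged 18-29, but only 10% of the sample is.
    weighted = poststratify(sample, {"18-29": 0.30, "30+": 0.70})
    raw_share = sum(1 for r in sample if r["choice"] == "X") / len(sample)
    print(f"Raw share for X:      {raw_share:.0%}")                      # 40%
    print(f"Weighted share for X: {weighted_share(weighted, 'X'):.0%}")  # about 53%
```

Note how one weighting decision moves candidate X from 40% to roughly 53% in this contrived sample; that sensitivity is exactly why the adjustments are controversial.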

That practice has been controversial in this 2012 election. The demographic profile of voters in the 2008 presidential election was markedly different from past elections, as was the profile in the 2010 mid-term elections. If pollsters adjust the polling results to reflect the 2008 demographic profile of voters and the 2012 actual voting profile returns to historical norms, then the adjustments will be introducing another source of survey error from incorrect data analysis.

An additional controversy about statistical adjustments we are hearing this year is whether adjustments should be made based upon self-reported political party affiliation. Democrats have been overrepresented in much of the 2012 polling data when compared to party registrations. The pollsters say they adjust for attributes of respondents (that is, demographic variables) and not for attitudes. They argue that party affiliation — an attitude — is too fluid to use as a basis for statistical adjustments. We shall see on November 6.

~ ~ ~

Phew! Note how many different factors can skew a poll’s (or survey’s) results. So, the next time you scratch your head at why the polls say different things, you’ll know there’s a lot in play. For this reason, Real Clear Politics takes an average of all the polls, arguing that this index is more accurate since it balances out, on net, the skews that may be built into any one polling approach.

Frankly, I’d rather we had no political polls. Journalists would then be forced to do their jobs to enlighten us on the issues rather than have talking heads discussing the horse race. IMHO…

Battling Survey Fatigue

Summary: Survey fatigue is a genuine concern for surveyors. As more companies survey, the willingness of people to take all these surveys plummets. Applying the golden rule of surveying can help you stand out, reduce the survey fatigue for your respondents, and increase survey response rates.

~ ~ ~

Tragedy of the Commons. Funny how some concepts learned in college have stickiness — and relevance years later in ways not envisioned. The fundamental idea of the Tragedy of the Commons is that a resource that is owned in common will be overexploited. Each individual, acting rationally from his or her point of view, will use the common resource to the point where collectively it will be seriously degraded.

The idea was described by Garrett Hardin in a Science article in 1968 using a town’s common grazing area for the town folks’ animals. One of today’s “commons” is customers’ mindshare. We all fight to get a piece of our customers’ – or prospects’ — attention to the point where it becomes a cacophony and no one gets these people’s full attention.

Consider customer feedback surveys. The New York Times printed an article in March 2012 about the “onslaught” of customer surveys. I couldn’t have used a more apt term — and perhaps because I train people on how to conduct customer survey programs, I am part of the problem! Survey fatigue — or survey burnout — is very real.

Remember when getting a survey invitation to take a web-form survey was novel and fun? (Heck, I can remember when email was novel and fun back in the ’80s.) I do survey design work for a living, so I have a natural incentive to take most any survey. Hey, I might learn something — or find grist for a future article. But even I have reached the burnout stage with the constant requests to give feedback on some website I’m using.

Why has this happened? As the cost of surveying has plummeted due to internet-based survey tools, more and more companies are conducting surveys. The “common” of people’s time, attention, and willingness to take surveys is being overgrazed. If the surveys were well-done, brief and to the point, the overgrazing might be tolerable. But so many surveys are so poorly done — for example, Home Depot’s way too long survey and Yahoo’s insulting survey — that the overgrazing has reached the point of eating the roots, leaving little for the rest of the organizations who might want to collect customer feedback.

Even worse, some surveys aren’t really surveys. They’re a lead generation marketing pitch — or a phishing attack — disguised as a survey.

You might say there’s lots of pastureland from which to graze for respondents, but the problem with survey fatigue is twofold, as I explained in a recent article on survey bias versus survey accuracy.

  • Fatigue leads to lower response rates, reducing statistical accuracy.
  • Fatigue means that those with extreme views are more likely to respond, leading to a serious survey bias, known as non-response bias. That is, those who don’t respond likely have different views from those who do. The survey data that wind up in the survey database don’t properly reflect all customers’ views. The data are biased.

So what to do? Well, you cannot control other organizations’ surveying propensities, but by doing a survey program right, perhaps you can shine among the rest.

First, don’t over survey the same people. This is a particular concern for transactional survey programs. You should have some time window in which you would not survey the same person again. My default window for this is three months, which seems reasonable. And don’t survey people who say, “Stop sending me surveys!” That just creates bad word of mouth and could lose a customer.
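Here is a minimal sketch of how such a window can be enforced when pulling the contact list for the next survey wave. The field names, the 90-day window, and the opt-out handling are illustrative assumptions, not a prescription.

```python
# Minimal sketch of a "do not resurvey" window for a transactional survey program.
from datetime import datetime, timedelta

RESURVEY_WINDOW = timedelta(days=90)   # roughly the three-month default discussed above

def eligible_contacts(candidates, last_surveyed, opt_outs, today=None):
    """Return contacts who are not opted out and were not surveyed within the window."""
    today = today or datetime.now()
    eligible = []
    for email in candidates:
        if email in opt_outs:
            continue                                  # "Stop sending me surveys!"
        last = last_surveyed.get(email)
        if last is not None and today - last < RESURVEY_WINDOW:
            continue                                  # surveyed too recently
        eligible.append(email)
    return eligible

if __name__ == "__main__":
    candidates = ["a@example.com", "b@example.com", "c@example.com"]
    last_surveyed = {"a@example.com": datetime.now() - timedelta(days=30)}
    opt_outs = {"c@example.com"}
    print(eligible_contacts(candidates, last_surveyed, opt_outs))  # -> ['b@example.com']
```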

Second, coordinate your organization’s survey programs. Are other groups in your organization doing surveys that you don’t know about? Two concurrent survey invitations to the same person from different survey programs in the same company present a really bad image to the respondent. This is why many large companies invest in large (and expensive) customer feedback management systems and a central feedback program office that controls customer contact lists.

Third, keep the surveys short, to the point, and most importantly, engaging. Years ago I took a hotel survey where on the 8th or 9th screen — with many questions per screen — the progress indicator registered 35%. Resist the temptation to turn your transactional survey into a relationship survey. Take a Home Depot “brief” store visit survey and you’ll see what not to do. Market researchers and product managers — no offense if you are one — seem to have very unrealistic assumptions of respondents’ commitment to complete a survey. Know your audience and how bonded they are to you. That will guide their survey tolerance level.

Fourth, make the survey easy to take. Lots of extraneous verbiage increases the respondent burden, which is the work we ask the respondent to do. Checklist questions with lots of options and detailed explanations may generate precise data, but they won’t generate any valid data if no one completes the survey. If you need that precision, consider conducting interviews. “Think outside the survey box” when building your customer feedback program.

Fifth, consider offering an incentive. I’m not a big fan of incentives since they can promote what I call “monkey typing.” No, I don’t mean “Survey Monkey typing,” but rather, people clicking on any response choice just to get the prize. How valid are the data in that circumstance? It’s garbage. In fact, I have counseled clients to reduce the size of the incentive they were planning to offer. It’s supposed to be a token of appreciation, not payment.

Lastly, incent them to respond by sharing a summary of your findings. Show your respondents that you really mean it when you say their opinion is important. Close the loop and tell them what you’ve learned and what action you’re taking. That of course means that you actually take action. It’s such a simple, powerful idea. Yet, so very few organizations close the feedback loop with their customers.

While we can’t control all the survey noise and overgrazing, we can control our piece of the survey common. Basically, practice the Golden Rule of Surveying. Survey unto others as you would have them survey unto you.

What’s the Point? An Unactionable Transactional Survey

Summary: Transactional surveys are a vital cog in operational improvement and customer retention. The HomeAway customer experience transactional survey doesn’t generate actionable data, not even enough to identify customers in need of a service recovery event. This article reviews the shortcomings of this customer support transactional survey.

One of the great things about writing articles on customer experience survey practices is that examples of good and bad survey practice fall into one’s lap. As a customer of HomeAway and its sibling VRBO, both sites for advertising vacation homes, I recently had an as-yet unresolved issue that required me to contact their support organization. Some time after my initial email exchange with a support representative, I got an invitation to take a customer experience survey.

[Screenshot: HomeAway survey invitation email]

I was pleasantly surprised to get this survey invitation since every other feedback survey invitation I had received from this company was a misrepresentation. While the invitations had said they were feedback surveys, they were really market research surveys. I don’t like to be lied to.

My initial customer experiences were positive in this support incident, so I gave a positive review, and — foolishly in hindsight — put some thought into giving some specific comments. A day or two later, I sent a follow-up email to support about my issue, and got no response. I sent yet another email and got no response. That made me want to amend my previous survey submission. So, I dug out the invitation email and clicked on the link. There was the survey web form, but I wasn’t amending the original survey submission. I was just back at the original blank survey web form.

Had I looked more closely at the initial invitation and the survey web form, I would have realized that this was just a web page to which anyone could navigate. I did not see any key structure in the URL or anything else that would control access to the page. Heck, they didn’t even plant a cookie on my computer to prevent a resubmission. So, I was able to submit another survey response.

[Screenshot: HomeAway customer experience survey form]
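For contrast, here is a minimal sketch of the kind of access control the page apparently lacked: a per-invitation token carried in the survey URL and accepted only once. This is only an illustration of the idea; the token scheme, storage, and parameter names are my assumptions, not HomeAway’s design.

```python
# Per-invitation survey links: a signed token in the URL, accepted only once.
# Illustrative sketch; the secret, in-memory storage, and parameter names are assumptions.
import hashlib
import hmac
import secrets
from urllib.parse import urlparse, parse_qs

SECRET_KEY = b"replace-with-a-real-secret"
issued_tokens = {}   # token -> {"case_id": ..., "used": bool}; a real system would use a database

def issue_survey_link(case_id, base_url="https://example.com/survey"):
    token = secrets.token_urlsafe(16)
    signature = hmac.new(SECRET_KEY, token.encode(), hashlib.sha256).hexdigest()[:16]
    issued_tokens[token] = {"case_id": case_id, "used": False}
    return f"{base_url}?t={token}&s={signature}"

def accept_submission(token, signature):
    expected = hmac.new(SECRET_KEY, token.encode(), hashlib.sha256).hexdigest()[:16]
    record = issued_tokens.get(token)
    if record is None or not hmac.compare_digest(signature, expected):
        return "rejected: unknown or tampered link"
    if record["used"]:
        return "rejected: this invitation was already used"
    record["used"] = True
    return f"accepted: response tied to support case {record['case_id']}"

if __name__ == "__main__":
    link = issue_survey_link("case-1234")
    params = parse_qs(urlparse(link).query)
    token, sig = params["t"][0], params["s"][0]
    print(accept_submission(token, sig))   # accepted on first use
    print(accept_submission(token, sig))   # rejected on the second attempt
```

Tying the token to the support case also answers the service recovery problem discussed below: the company would know which customer, and which incident, each submission belongs to.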

Why is this bad transactional survey design practice?

First, I do not believe the folks who run this support operation have thought through their support model. What triggered the invitation to take the survey? Is it based on each email I send to support? (I think that’s the case since I got a second invitation based — I believe — on the same incident.) In almost all support environments, one incident may be composed of several interactions, possibly through multiple communication channels. This survey would seem to presume that all incidents need only one interaction. Look at the penultimate question: Was your issue completely resolved with the response you received? I will bet management views “no” responses as negative. Yet anyone with a support service background knows that several interactions may be necessary to understand the nature of the issue and to address all the issues. Note they did not ask whether my issue had ultimately been resolved, only whether that one response resolved it.

Second, they have limited control over the survey submissions. It appears I could submit countless survey responses. Perhaps my last one overwrites my first one. If I can submit multiple responses, that, of course, would corrupt the data set. Beyond that, non-customers of HomeAway could submit surveys. Without proper survey administration controls, data validity is definitely called into question. I certainly wouldn’t feel confident taking action based on these data.

That leads to my third point. This transactional survey does not generate structured, actionable data. As the manager of HomeAway’s customer support operation, what would you do with the results of this survey each day, week, or month? The survey has three very high level summary questions and one open-ended question to solicit diagnostic information. The survey has no closed-ended questions to diagnose shortcomings in the service delivery. Comments can be very useful, but to rely on that sole question for actionable data seems dubious.

Fourth, perhaps the most critical action that one can take from a transactional survey is to engage in service recovery, that is, to address and resolve the issues of a specific customer to save them as a customer. How do you do that with this survey? They don’t know who the submitter is, and they don’t ask if the respondent wants a follow-up. Earlier I said that I had foolishly put thought into giving specific feedback. This is why. I actually thought that someone with some authority would review my comments and take action on them or would follow up with me. How silly of me.

This survey can only be used for summary evaluations of the support organization. They can’t even evaluate specific support representatives. Maybe the senior customer service manager needed a metric, just a metric, to argue for his bonus.

Here’s some free consulting to HomeAway. If you really want to tick off customers, ask them for specific feedback and then ignore them. Here, HomeAway has no ability to engage in recovery — unless they’re tracking the IP address of my computer.

I actually told the customer support representative that unless HomeAway changes a critical policy, I will not be renewing my subscriptions to both HomeAway and VRBO. They have no customer guarantee, so canceling the subscriptions gets me no refund, but I will be looking for other advertising vehicles for my Harpswell, Maine rental.

The survey practices of this transactional survey reinforce my view that HomeAway has a very poor customer ethic. I guess I understand why HomeAway purchased the website HomeAwaySucks.com — but you’ll find many forums for rental owners that detail the shortcomings of HomeAway.

Communicate the Survey Findings — and Your Actions

Summary: Many survey project managers think that a survey project ends with the analysis, presentation of findings, and suggested courses of action. Companies that follow such practice will likely find their survey response rates falling over time as customers — or other groups of interest — feel less engaged. Two vital last steps in a survey project are: 1) take action on the findings and 2) communicate the findings and intended actions back to the respondent group. These two steps show the feedback program is for real, and they will help drive respondent engagement with future feedback requests.

You’ve written a beautiful questionnaire that meets your research objectives. You’ve sent it to your respondent group, getting a great response rate. The data have been analyzed, report written, and presentation made to management. You check “completed” on your job plan. You’re done, right? Wrong. Two more steps are vital to receiving full value from your survey project today and from your feedback program tomorrow.

  • Take action on your findings
  • Communicate your findings and your actions taken back to your respondent group.

To the first item, you may be — hopefully — thinking, “of course!” But to the second item, you may well be thinking, “Ohhhhh… Really???” Both are vital to getting full value from an ongoing survey program. Let’s look at each.

Take Action. This may seem so obvious you can’t believe I would bother mentioning it, but you would be surprised at the number of companies that do surveys “just because” or because the results are “nice to know.” I have actually done survey projects for clients where I know my spiral bound report is going on someone’s bookshelf (or “filed” elsewhere) and nothing will be done with the findings. I may get paid, but it’s not professionally fulfilling.

While I don’t think you would ever find this to be a corporate policy, some people want to gather data to support their position on an issue, and if they don’t get the “right” results, they just forget about the research. Others do surveys without any clear intent of taking action on the findings. Still others intend to take action but don’t design the questionnaire to generate actionable data.

This lack of action is less likely to occur where the research objectives have been well thought through, leading to a properly designed questionnaire. It is also less likely to occur where a company has a survey program office — or some such title — with control over customer contact data. One of my survey workshop attendees ran such an office. If someone came to her wanting to conduct a survey, her first question was, “What are you going to do with the data?” Without a good answer, you didn’t get access to customer contact data. She wasn’t going to let anyone take up customers’ time because something was “nice to know.” You needed to specify actions that would be taken from the findings. I suspect she’s not the most popular person in her company, but that gatekeeping action is needed to force people to think through their customer research programs.

Communicate Your Findings. I suspect this suggestion gave you cause to pause. Certainly you intend to communicate your survey’s findings to your management team and others to whom the findings would be relevant.

But why would you communicate the findings back to the respondent group? The answer can be found in your survey invitation. You likely told the person their opinion was of vital importance and you intended to use the data to improve the products and services you provide them. Great. Prove it! I know I have received such survey invitations from companies, and then I wonder what they’ve done with my feedback. All too often I am left having no idea if any responses were even tabulated.

If you are engaged in an ongoing feedback program from customers, suppliers, employees or some other group with whom you have an ongoing relationship, you are likely to go back to these same people repeatedly to get feedback. Why would they complete your next survey if they don’t see any tangible benefit from the last survey they completed? It’s your responsibility to show them how they will benefit from completing the survey beyond the vague sales pitch in the survey invitation. You promised them benefits; now show the benefits that have accrued from their feedback.

The point of the communication is to show respondents that you really did do something with the input from the survey. An ongoing survey program needs to be marketed, and communicating the findings is a critical marketing element.

Do be clear on one point. I am not suggesting that you should try to influence the responses provided by customers. You want honest, candid, forthright responses. Rather, I am suggesting you should try to influence their willingness to provide a response. Most companies consider offering an incentive to juice responses, but that can be a two-edged sword as I’ll explore in a future article. Communicating findings creates an incentive that costs little, and it supports the broader goals of the ongoing feedback program. It shows that as a company you’re serious about building long-term relationships rather than just engaging in sloganeering.

What would you communicate? Clearly, you are not going to reveal any proprietary information or personnel decisions derived from the survey. But you can certainly reveal the findings of the survey at a high level and say that you’re implementing steps to correct the problems identified and that you’ll be maintaining the practices that respondents liked.

How can you communicate the findings? My assumption in this article is that you are capturing feedback from a group with whom you have ongoing contact, be it customers, suppliers, members, or employees, and not an arms-length consumer-research project. Given that, how do you normally communicate with this group? Possible vehicles are newsletters, account manager visits, an article on the intranet, and user group meetings. Be pragmatic. Use these vehicles to communicate your feedback findings and actions. Newsletter editors are always looking for articles.

Hopefully, I have convinced you to communicate summary findings and action plans to your respondent group. It’s all part of closing the feedback loop. You still have one more small task. Include the fact that you will communicate findings in your survey invitation!

One category of communication I consider absolutely not optional: responding to specific problems customers reveal in a survey.

Responding to Survey Cries for Help

Summary: Surveys, especially transactional surveys, identify customers in need of a service recovery act. The worst thing you can do is ignore screams for help. Better to just forget doing the survey. This article discusses this most important communication element in a survey program.

In another article I discussed the need to communicate the findings from a survey project back to our group of interest. The goal of this communication is to close the loop. You asked the respondent to give you feedback. Now show the customer that you really do read the survey submissions and you really do take action on them. Such actions will likely lead to greater long-term participation in the survey program with more meaningful information.

While some may not see the need for this — really, some see surveys as one-way communication — I also mentioned there’s another type of communication with our respondents that I do not consider optional: responding to cries for help.

Whenever we survey a group with whom we have an ongoing relationship, be it customers, employees, suppliers, or members, some survey submissions are going to voice complaints, perhaps very loud complaints. This is actually good! If we don’t know that people are upset, how can we fix the issues? In fact, one of the major reasons we conduct surveys is to flush out those who have serious issues that need to be addressed.

Customers who are not happy with your product or service can do one of two things. They can complain to you or they can just go away silently. (Technically, there’s a third choice. They could keep buying from you because the switching cost is too steep, but you can be sure these are customer relationships that will be vexing.) If their complaint gets voiced, then you can address the issue and hopefully retain the customer.

In fact, fixing customer issues can be the best path to stronger customer loyalty. Research has shown that customers who have had a complaint resolved quickly and fairly are more loyal customers — far more loyal — than customers who have had no complaints. Isn’t that just common sense? Think about your own consumer interactions. If you voice a complaint and the company ignores you, puts up a wall denying there is a problem, or grudgingly responds, how likely are you to buy from them again? On the other hand, if the company takes ownership of the problem and fixes it, wouldn’t you feel more confident about buying from them again in the future? By fixing the problem, the company is also “fixing” the customer relationship.

Notice I wrote, “If their complaint gets voiced.” Most people won’t bother to complain. Some cultures are more inclined to voice complaints than others, but regardless of the culture, most complaints will go unvoiced. Here’s where surveying — and other customer feedback channels, including social media — come into play. Part of the goal of a survey is to give voice to complaints. This is particularly true of surveying at the close of a transaction, but it also holds true for periodic surveys, such as annual surveys. The survey invitation is in essence an invitation to tell what’s wrong (if anything).

Okay. So we get a survey returned and someone gave us very low scores and may have written some critical comments in a text box field. What do we do? We invoke our service recovery procedures, of course! Don’t tell me you don’t have service recovery procedures. (“Service recovery” is a glass-half-full alternative phrasing to “complaint handling.” Would you rather work in a complaint department or a service recovery department?)

I am jesting here a little, but I am always surprised at the number of companies that have no procedures in place to handle complaints. Here is the most critical take-away from this article. Before you hit the “send” button on your survey invitations, you must have a procedure in place to respond to complaints. Those procedures must include who responds and the type of response, including any compensation, that should be offered depending upon the nature of the problem — and the importance of the customer.

Companies that are caught flat-footed by complaints in survey responses are multiplying the problem. If a customer is upset, that’s one thing. But if a customer complains and you’re slow to respond — or worse, you ignore them — then they’re even madder! It’s throwing gasoline on a fire. I will humbly suggest that the goal of the survey program is not to increase customer angst! Yet I have personally experienced being ignored after voicing strong complaints in a survey from Sears, a major US retailer.

You might be thinking, “But we do our survey anonymously in order to get more truthful responses. What now?” You’re right. While anonymity may have benefits in the quality of the information garnered from the survey, you now face the conundrum of having an unknown, upset customer whose problem you can’t fix. For this reason many companies choose not to have anonymous surveys. This is especially true for business-to-business transactional surveying programs.

In an anonymous survey, you should include this question, “If you have some unresolved issue, please contact us at XXX or enter your contact information below. We will respond within 24 hours.” Many people won’t enter their contact information. C’est la vie. But at least you gave them the chance to have their issue resolved.

Notice I said “respond within 24 hours.” Speed of response is vital, even if it’s just an acknowledgement of the receipt of the issue promising a more detailed response within another day. Operationally, this presents a requirement for your survey program. The survey responses must be reviewed at least daily to identify those customers with complaints that need to be addressed. This is sometimes referred to as creating a “hot sheet.” If you are contracting out your surveying effort, “hot sheeting” must be a key requirement of the vendor.
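Here is a minimal sketch of what a daily hot-sheet pass might look like, assuming survey submissions are stored with an overall score, a contact-request flag, and a timestamp. The field names and the score threshold are illustrative assumptions.

```python
# Minimal sketch of a daily "hot sheet": pull yesterday's submissions with low scores
# or a contact request so someone can respond within 24 hours.
from datetime import datetime, timedelta

def build_hot_sheet(submissions, score_threshold=3, now=None):
    """Return submissions from the last day that need a service recovery follow-up."""
    now = now or datetime.now()
    cutoff = now - timedelta(days=1)
    hot = []
    for s in submissions:
        if s["submitted_at"] < cutoff:
            continue                                  # handled on an earlier day's sheet
        if s["overall_score"] <= score_threshold or s.get("contact_me"):
            hot.append(s)
    # Worst scores first, so the angriest customers get called first.
    return sorted(hot, key=lambda s: s["overall_score"])

if __name__ == "__main__":
    now = datetime.now()
    submissions = [
        {"customer": "A", "overall_score": 2, "contact_me": True,
         "submitted_at": now - timedelta(hours=5)},
        {"customer": "B", "overall_score": 9, "contact_me": False,
         "submitted_at": now - timedelta(hours=8)},
        {"customer": "C", "overall_score": 1, "contact_me": False,
         "submitted_at": now - timedelta(days=3)},   # too old for today's sheet
    ]
    for s in build_hot_sheet(submissions):
        print(s["customer"], s["overall_score"])     # -> A 2
```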

One company I know launched an initial survey project and a customer included a handwritten, two-page letter detailing issues. (This was a postal mail survey.) The company did not have procedures to review the incoming surveys, and it was many days before a manager saw this. I’m glad I wasn’t the one placing the call to the customer!

On non-anonymous surveys it is also common practice to include that question asking if the customer wants a follow-up contact on some issue. Here’s a real dilemma. The customer voices issues, but then checks “No” indicating they don’t want to be contacted. You want to fix their issue, but if you do call them, you’re violating their specific request.

Here’s a cute way of handling this. Pose the question, “Do you want someone to contact you?” but only give a “yes” response option. That gives the customer the opportunity to emphatically say “contact me” but doesn’t give them an opt-out ability.

Great. You have procedures in place to flag upset customers and respond to their issues. Done. Right? Wrong. You have remedied the problem and fixed the customer, but has the underlying process that caused the problem been resolved? The surveying program provides information to initiate root cause identification and resolution. If your company has some quality initiative in place, the survey program can be a key component of it. In fact, that is how you may get funding for the survey program.

I have been talking about communication to our external customer, but I’ll end here with a 180-degree turn to communication with our internal customers. If you are running a survey program, you should also be communicating your findings to the “process owners” of the processes that have caused the problems. These communication channels can be highly problematic, especially if the survey is run by the service organization but surfaces issues about the product sold to the customer. In fact, that strategic role for after-sales service was the topic of my doctoral dissertation.

Survey Resource Requirements

Summary: Survey projects are frequently undertaken without the due diligence required to ensure success. They seem deceptively simple. Why the need to plan? That, as any project manager knows, is a recipe for failure. Some level of resource commitment is needed from an organization to provide a firm foundation for a successful survey project. This article examines those requirements from both monetary and personnel resource perspectives.

In the decade or more that I’ve been doing survey work I have one observation that I can’t quite explain. Company budgets for customer feedback programs are widely bifurcated. Many companies will write good-size, six-figure (in US dollars) checks to run a comprehensive, customer feedback office. These may be totally outsourced, or done entirely with internal resources, or a combination.

Then there are the companies that will barely spend a dime. I wish that were as big an exaggeration as it sounds, but it’s not. I find there’s little middle ground.  Companies either:

  • “Get it” and are willing to invest in listening to the customer
  • “Get it” but view it as an expense with a dubious return so they spend little, or
  • “Don’t get it” but may go through the motions of a poorly funded program.

Far be it from me to say it’s a waste of money to hire one of the large, well-known professional firms to run a feedback program — though you may also be paying a lot for their “name.” They do a very good job, employ sound methodologies — maybe with a few exceptions — and deliver a lot of analysis and information with considerable hand holding. (The exceptions I have seen are survey programs designed by market research firms, not the dedicated survey firms, that don’t have a solid background in survey design and apparently think there’s nothing to it. I have written some articles about these types of surveys I have encountered.)

On the other extreme, many years ago I got a call from a gentleman in Florida who was put in charge of doing a customer satisfaction survey for his company. He asked me a bunch of questions, and it was readily apparent that he had no background in conducting surveys. In fact, he was very open about that. I gave him a lot of advice, and, yes, I did suggest he attend one of my workshops. Sorry, no budget for that. So, I suggested my book. Sorry, no budget for even a book! About a year later I was speaking at a conference in Florida, so I wrote to this person just on the chance he might be able to attend the conference. I was relieved to learn that the survey project had been axed. Given the level of commitment that management was showing, the project would have failed.

So, what budget do you need to run a decent survey project if you’re doing this internally?  Let me break the budget into personnel and dollars.

From a personnel standpoint survey programs have two key players. First, a project leader is needed. This person will likely be the person who drafts the survey instruments and thus needs a solid background in survey questionnaire design as well as knowledge of the range of tasks in a survey project. Running the survey program may be a full-time job for the project leader, but that depends on the size of the company and the scope of the program. What is absolutely necessary is that the survey project management role be part of the person’s job plan. I’ve encountered service managers who are told to do a survey “in their spare time.” Yeah, right. Who has spare time in their work life, and why would you spend time doing something on which you’re not measured or rewarded?

Perhaps the more important player is the sponsor or champion. In my tale above about the Florida gentleman, it was clear that no one was championing the project. The sponsor needs to argue for the monetary budget, fight the political battles, and make sure that any people assigned to the project team don’t get pulled off for some other project.

The aforementioned project team is the third category of personnel needed on any survey project, especially if the survey program is to be executed primarily or exclusively with internal resources. A survey project has a number of tasks to be executed, more than one person can handle handily. Most important, the project team needs to be the review and refinement body for the survey instrument that the project leader is likely writing. The worst survey instruments are those written by one person with no reviewers.

For the monetary side of the budgeting process, the most important and perhaps largest piece is the headcount dollars for the project leader and project team. This could be one half of a full-time equivalent (FTE) to several FTEs for a highly involved, customer feedback program done internally. But beyond headcount there are going to be out-of-pocket expenses. Let me address those through the time line of a survey project.

During the questionnaire design phase of a survey project, most expenses will be minor. If you’re running focus groups as part of an exploratory research project to build a better instrument, you will likely need to rent meeting room space and provide refreshments and some token gift to participants. You may also want to provide some token gift to participants in the pilot testing phase, and you may pay someone to transcribe recorded notes. All of these are minor, especially compared to the personnel costs. (Of course, we’d recommend you attend our survey workshop series to get a solid grounding in survey methodology.)

The survey administration stage will incur most of the out-of-pocket cost of a survey project since that’s where the logistical tasks lie. If you’re using web-form administration, you will need to buy software or rent one of the hosted survey tools. This cost could run from hundreds of dollars to tens of thousands of dollars. A broad range of capabilities exists in these tools, depending upon your requirements. Many companies start out with one of the inexpensive hosted tools, such as SurveyMonkey, QuestionPro, or Zoomerang. (The list of survey tools is very extensive.) These tools will do the job, but as your requirements or sophistication increase, you may well outgrow them. But as with most any product, you don’t know what you don’t know until you use them. Using one of these tools will make you a more informed purchaser of the next generation survey tool. Again, most of the cost here will be personnel-oriented. As with most software applications, the cost of learning to use the survey tool will be far greater than the cost of buying or renting the tool.

Phone surveys done internally on a small scale probably encounter little to no out-of-pocket expenses, but when you get to a certain scale, you need to invest in a software application to manage the surveying process.  Postal mail surveys entail considerable expense due to the mailing logistics.  Even if you use internal resources to stuff envelopes and key in the data from the returned forms, you still have costs for envelopes, paper, copying, and postage both outbound and inbound – you must provide return postage.

The data analysis and reporting stage will again have very low out-of-pocket expenses. The survey tool you use will likely have an analytical package, and your company likely has a statistical package that you can use. Excel will work fine for most survey analysis purposes. Personnel time is again the major cost here. And don’t forget that included in this stage is any service recovery program where you respond to customers who have voiced serious issues with your products or services along with implementing the findings of the survey program.

If you are budget constrained, you might be asking why you would consider outsourcing a survey project. As with any project, the two main reasons to outsource are:

  • Capacity. Do you really have the bandwidth in your staff to do the job? Survey programs look simple on their face, but they are more involved than most people realize — until they are knee-deep into the administration process.
  • Competency. The devil’s in the details.  Do you have the knowledge of surveying methodology to know all the details that need to be considered to correctly design and execute a survey project?  If you don’t, then you might compromise the whole value sought from the program.  Bad, invalid data is worse than no data at all.

Survey projects add a third reason for outsourcing:

  • Credibility and Confidentiality. Your customers may not tell you things that you really need to know, but they may be willing to tell a third party “the bad news.” Because respondents feel more assured about the confidentiality of their responses, the findings of the program may be more credible. The most obvious example of these factors is personnel surveys, which almost all companies outsource.

It’s your call about whether to outsource your survey project or do it internally, but make sure the decision is made knowledgeably. A company can run its own survey program and do an admirable job, but don’t be penny-wise and pound-foolish. You don’t need to spend hundreds of thousands of dollars, but you do need to establish a budget that properly recognizes the resources needed to do a credible job of generating valid feedback from your customers or your employees.

Reverse Engineer a Statement of Survey Research Objectives

Summary: A successful survey program starts with a firm foundation. Without a clearly articulated Statement of Research Objectives, the survey project is likely to meander, with disagreements among the players during the project, especially about the questions to create for the survey instrument. Failure is more likely. This article outlines the critical elements that should be part of this first step in a survey project and how to reverse engineer the research statement from a currently used survey questionnaire.

~ ~ ~

Perhaps the most common and important mistake a surveyor (or researcher) can make lies at the very outset of the project: skipping good project planning. A customer feedback program is composed of a series of projects, and good project management skills should be applied. I won’t get into budgeting and scheduling here, but instead focus on what should be the first section of any project plan: a statement of project objectives.

What’s in a good statement of research objectives? I like to use the Who, What, When, Where, How, and Why framework, sometimes called the 5 Ws (and an H) in the journalism profession.

Who:

Is our group — and subgroups — of interest?

What:

Are we trying to understand about those groups’ views?
Will we do with the results?
That is, Why are we doing the survey?

When:

Are we going to send out the survey invitations and reminders?
Are we closing the administration period and starting the data analysis?

How (& Where):

Will we develop the questionnaire?
Will we administer the survey? (What method, sampling procedure, incentive…)

The when, how, and where are mostly logistical planning issues. The who (aside from being a bunch of aging rock and rollers) is critical. It makes us think about exactly who is the target audience for our research. That sounds simple on its face, but it can actually be an area of discussion and disagreement on the project team. Is our target population all customers, customers who have purchased from us in the past year, customers who have purchased a certain set of products, customers who have contacted us for service, etc., and who in the customer organization should be completing the survey?

The what and why strike at the heart of the matter. If you can’t articulate what you’re going to do with the findings, then how can you justify wasting customers’ time filling out a poorly conceived survey? One of my workshop alumni runs the customer survey program office for a major technology company, and if someone who wants to survey customers can’t answer the what and why, then she won’t execute the survey. Period. A good discussion with the sponsors of the survey research may bring out any hidden agendas behind the goals of the research. As an outsider doing a project for a client, I always need to know those agendas so that I can properly position the actual findings, but even as an insider doing a survey for your own company, those agendas need to be known.

Recognize that I’m not saying this statement of research objectives is locked in and cannot be changed. It can be, and very likely will be, as you move through the survey project. But with a well-thought-out statement of research objectives, you’ll make those changes having thought through the implications.

What happens if you skip this step and just start writing survey questions? The survey instrument may wander and meander across multiple topics that likely need to be answered by different respondents. I see this all too frequently in surveys brought to my workshops by attendees. Frequently, it’s because multiple players have been pushing their agenda for the surveys, wordsmithing the survey questions to achieve the results they want. There’s an old adage that a camel is a horse built by committee. I’ve seen many “camel surveys”.

It is also likely that when you get the data from the survey and then — finally — start thinking about what research questions you want to answer, you will find you do not have the data you need or the data aren’t structured as you need them for the analysis.

When someone asks me to review a survey instrument, I like to “reverse engineer” the statement of research objectives. That is, I look at the survey and determine what I see as the implicit research objectives. It’s a good exercise that you can practice with any survey you encounter.

Let’s look at an example. The nearby survey was one I got from the company that installed a new heating and cooling (HVAC) system in my 200-year-old house. What are the firm’s research objectives? What does the sponsor of this survey seem to think drives customer satisfaction with the purchase, design, and installation of a heating and cooling system? Now, put yourself in the position of the person who has hired a company to do the work. (It doesn’t have to be an HVAC system; it could be any major work done in your home or business.) What is it you’re buying? What are the products and services you have bought? Let me add that this survey was left in my house about halfway through the installation process, which took about a week. That’s the where and how.

(See screenshot of the customer questionnaire.)

Even a cursory review of the questionnaire reveals that the company thinks the behavior of its technicians in my home was all that really mattered. The techs could be great, but what if the system doesn’t work? Where are the questions about the design of the system and whether it works as promised? Guess who would be measured by those questions? Exactly: the head of the company, who performed the design work. Will the owner of the firm learn about all the things that truly drive customer satisfaction? I don’t think so. (There are several specific shortcomings with the wording of the questions as well.)

You might argue that the survey was meant to capture my feedback about what was happening right then — my interaction with this swarm of techs in my house. This isn’t the proper time to ask about the quality of the system design; I needed to experience it first. You’re right; I would agree with you. However, I did not receive any follow-up research after the system had been functioning for a week or a month to find out how I liked its performance.

Notice I keep using the term “research objectives.” You might have thought, “Hey, I’m doing a customer survey. I’m not doing research.” Yes, you are. Legitimate research needs to be done with a certain level of rigor to truly answer the business questions. The above discussion brings out this point. To fully understand the customer’s satisfaction with the HVAC system, you would need a multi-pronged research approach. One survey about the techs in my house cannot answer all the relevant research questions. A program of research, which may include surveys and/or interviews of different people at different times, may be needed.

If I may make a bad pun on a common business phrase, you need to think outside the survey research box.

Sears IVR Customer Satisfaction Survey

You would probably think that mistakes in survey design would be made by small companies with limited resources and knowledge. Yet some of the best examples of bad survey design practice come from big companies. In this article, I’ll illustrate the mistakes made in the IVR survey used to measure Sears customer satisfaction. (Home Depot’s store visit survey is another glaring example of bad survey design.) Read my related article to learn the strengths and weaknesses of the IVR survey administration method.

Note: If you have landed on this page because you ran a search on “Sears Survey,” please note that you are NOT on the Sears website. We get people contacting us thinking we’re Sears looking for all kinds of help. We’re not unsympathetic, but please contact Sears. By the way, it appears that entering the Sears survey gets you on all kinds of mailing lists. Beware.

In a nutshell, the Sears IVR customer satisfaction survey:

  • Didn’t allow me, the customer, to fully express my feelings about my entire transaction with Sears
  • Forced me to answer irrelevant (to me) survey questions
  • Force-fit a survey question to a scale to the point where the data generated are not interpretable (that is, they generate garbage data)
  • Asked me for comments — and then ignored me.
  • Most importantly, led to greater customer dissatisfaction and destroyed any customer loyalty I had to Sears

The last bullet should startle you. Customer surveys are typically meant to provide customer measurements to increase customer satisfaction. So, how could a survey program design actually lead to greater customer dissatisfaction? The answer lies in a poorly conceived survey project and program design that leads the customer to come away with a more negative attitude toward the company in addition to generating survey data of questionable value.

Before I explain all of these survey design problems in detail and how to avoid them — hint: the answer lies in having a good customer survey program design — some background on my interaction with Sears customer service is needed.

My Sears Service Interaction

I bought a hot water heater in 1997 from Sears. It had a lifetime warranty. In September of 2007 the heater failed; it cracked. My transaction with Sears evolved into the following series of interactions.

  1. Claim Initialization. I called the Sears call center at 800-4MyHome to inquire how I would invoke my warranty protection for the failed hot water heater. This interaction was great. They actually had my purchase on record, so I didn’t even need my receipt, which I did have. They arranged for a repair tech to come to my house two days later to verify whether the tank failure was covered under warranty. I wasn’t thrilled with the 2-day wait, but I had backup hot water capabilities. The agent told me that should the technician verify that the failure was covered by my warranty, the $69 service charge for the technician visit would be waived. The explanation was poorly worded, so I restated what I thought the agent said to get verification.
  2. Technician Verification of My Claim. The technician came to my house, verified my claim, gave me the paperwork to get a new hot water heater at my local Sears store — and charged me the $69 service charge. Foolishly, I paid it (with my credit card). I figured getting my refund from a company of Sears’ stature would be no problem. Boy was I wrong.
  3. Fulfillment of my claim. The field tech gave me a form to take to my local store to get my replacement heater. My day was shot so I went to the Marlboro Sears store to get my new heater, but they told me the paperwork would take 24 hours to get into the IT system. Why didn’t the technician tell me this? I was told I would get a call the next day. I didn’t. I had to call them. I did eventually get my new heater, got it installed, etc.
  4. Resolving the Improper Charge. When I questioned the charge with the field tech at my house, he told me to call the local office in Danvers, Mass. rather than call the service center. He said I could talk to a “customer relationship specialist” there. The person with whom I spoke was the most useless, aggravating person I have ever engaged. He cut me off in mid-sentence, refused to let me explain the situation, and basically told me whatever the tech said was right. He was completely unconcerned about the discrepancy between the service center and the tech. He was so exasperating, I hung up on him. If he is a “customer relationship specialist,” then Sears is in big trouble. I called 1-800-4MyHome again and verified that I should not have been charged. They told me to get my refund by working through my local Sears store. I tried unsuccessfully for months. I wrote to the Sears headquarters about this likely illegal practice in the hopes I could avoid going through my state’s consumer affairs division. A month after my letter was delivered, I got a call from a troubleshooter at Sears headquarters. She told me the charge was legitimate. The service agents at 800-4MyHome gave me wrong information, and Sears is not responsible if its agents give a customer wrong information that is the basis for establishing a contractual arrangement between Sears and the customer. Tough luck. End of story. I was flabbergasted. My jaw dropped.

Soon after the technician’s visit, I received an automated phone message asking me to take an IVR survey. I should have transcribed the survey, but I can still do it justice. The 6- or 7-question survey — it was one question longer than stated in the introduction — posed all its questions on a satisfaction scale, where 1 represented Very Dissatisfied and 5 represented Very Satisfied. All of the questions pertained to the technician’s visit to my home, which, I’ll argue, was a glaring mistake. But the biggest mistake was what Sears did with my survey data.

What Lessons Can We Learn from the Sears IVR Survey?

Keep the Survey Design Focused. I teach in my Survey Design Workshop to keep a tight focus on the survey contents and not let other departments meddle their way into the survey design process. This, I believe, is what happened at Sears. The first 6 questions were about the technician’s performance, but the last question asked about my “satisfaction with the technician making me aware of Sears’ products and services.” That’s not something about which I give two hoots, and I suspect the operational folks in Sears service don’t either. But the marketing folks do. Clearly, the technician is tasked to promote Sears products, which is one of those strange — and inappropriate — roles for service technicians. Survey focus is achieved by having a good Statement of Purpose or Research Objectives — and sticking to it.

Word Questions to the Scale. The question mentioned immediately above also displays a classic problem in survey question design: force-fitting a question to the scale being used. Look at the question. It was posed on a 1-to-5 satisfaction scale. I had no idea how to respond. The technician didn’t make me aware of any Sears products. I didn’t know he was supposed to, and I was glad he didn’t. So, does that mean I should have scored it a 5, indicating Very Satisfied with the technician, not because he nicely made me aware of Sears products but because he didn’t waste my time with a sales pitch that I would at best find annoying? Or should I score it a 1, indicating Very Dissatisfied, because… well, I don’t know why I would. But I am sure the marketing folks who pushed for this question would want me to score it a 1. I didn’t. I did the cop-out, middle-of-the-road 3. Regardless, the marketing folks at Sears are interpreting the results from a survey question that is simply generating garbage data.

Develop Comprehensive Response Options. That last question really needed a Not Applicable option. I tried just not answering, but the system would not let me do that. So, as said, I entered a 3. I had a sense I would soon get the opportunity to answer an open-ended question, so I had to get beyond this question.

Develop Comprehensive Question Set. While we want to keep any survey, but especially an IVR transactional survey, short and sweet, you still need to pose all the needed questions. Usually, my criticism of surveys is that they go too far afield. The Sears survey was too narrow. The technician gave me factually incomplete information about how to get my replacement heater, and I was charged improperly. Those critical flaws in the transaction went unaddressed in the survey questions. Trust me; those factors made the deepest impressions on me about Sears’ on-site service.

Think Beyond the Interaction to the Transaction. A service transaction is a chain of interactions, and the weak link needs to be identified. The Sears survey only asked me about my experience with the technician. My transaction with Sears went well beyond the technician’s visit, including my interactions with the service center and the store. No one ever asked me about those interactions, so I never got to express my full feelings about my experience. (You’re probably thinking, “but wasn’t there an open-ended comment question?” Hold that thought!) Yet, I’m sure those who designed this awful survey think it’s great. I, as the customer, came away very frustrated since I had had a poor experience and did not get to say my piece.

Notice how this issue relates to the need for a good Statement of Purpose. While the typical customer (survey respondent) won’t think in these terms, they will know if a survey is comprehensive and well designed. The Sears IVR survey wasn’t. Why? My guess is that Sears has very stovepipe functional groups that do not interact.

Need for a Service Recovery Program. The above mistakes are bad but not egregious. The most disastrous error in the Sears survey is what they did NOT do with my data. The last question presented the opportunity to make a free-form comment about my experience. I was waiting for that opportunity.  I even developed notes to be sure I covered everything. I listed the litany of mistakes that Sears made. You would think that the person who analyzes that textual data would flag me for a follow-up call. A well-run company with a well-designed survey program would have done that. Sears did not. They ignored my cry for help, making me angrier than I already was. Yes, the survey — designed to measure customer satisfaction and identify needed operational improvements — actually made the situation worse!
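To make that concrete, here is a minimal sketch of the kind of service recovery trigger a well-run program might apply to each incoming response: flag it for a follow-up call when the score is low or the open-ended comment contains warning phrases. The threshold and phrase list are hypothetical illustrations, not anything Sears actually uses.

    # Flag a survey response for a service recovery follow-up call when the
    # rating is low or the comment contains warning phrases.
    # The threshold and phrase list below are hypothetical examples.
    ALERT_PHRASES = {"charged", "refund", "wrong information", "never called", "cancel"}

    def needs_follow_up(rating, comment, low_score_threshold=2):
        if rating <= low_score_threshold:
            return True
        text = comment.lower()
        return any(phrase in text for phrase in ALERT_PHRASES)

    # A middling score with an angry comment still trips the phrase rule.
    print(needs_follow_up(3, "I was improperly charged $69 and can't get a refund."))  # True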

Need for an Integrated Feedback Management Strategy. Even if someone had read and responded to my comments, it might not have mattered. I recounted in the background on my Sears transaction that Sears headquarters basically told me to get lost. Sears sees no ethical or legal obligation to honor the verbal contract established by its service agents. This is a pretty startling indictment of the strategic role of customer service at Sears. But why does Sears go to the bother and expense of having this one feedback survey that sits like an isolated island disconnected from its potential business value?

Will I ever shop at Sears again? You know the answer. That $69 savings is quite the Pyrrhic victory for Sears.

Addendum #1: Missed Value in a Service Recovery Act. After the headquarters’ troubleshooter told me to get lost, I started looking at my legal options, including initiating a grievance through my state’s attorney general’s office. Then, 6 weeks after the phone call with Sears, a check for $69 showed up from Sears. No accompanying letter. No accompanying phone call.

Part of the purpose of a service recovery act is to win back the hearts and minds of the customer. The check partially won back my mind, but my heart? I’m still seething at the process and how I was treated. Compare this to how Subaru handled its service recovery of a flawed service recovery. Sears missed an opportunity here to win me back.

Addendum #2: Some People at Sears Do Take Ownership. Some Sears headquarters folks came across this article, and one bought my Customer Survey Guidebook. And in Fall 2009, this person and a colleague came to my Voice of the Customer conference. We had some nice and interesting chats. I give them credit for not blaming the messenger. I have actually since shopped in a Sears store with a little more confidence that customer service can be a Sears trait.

~ ~ ~

Here are some of the emails we’ve received from people thinking we’re Sears… unedited but anonymous.

  • sears tool #9 43664 rachet wrench reversable angle 13mmx14mm, “NO ONE” can find this tool!!.. please call
  • hello my name is XXX  we have been along time customer of sears but these days i really wonder if you really do think about taking care of your customers and i dont mean just a single store i have been in contact with the store  and every single phone number i can possibly find on or about sears  you people just seem to want to give lip service and nothing else i guess when sears changed owners you really have  a problem helping customers with their problems or you just run a big scam how your customers are really number one well im here to tell you the word of mouth is still a time tested and very effective way of comunicating even in this day of age  and believe me i know a lot of people  common and business men and women alike an old man once told a car dealer rather large that did  a friend of his  wrong  he said to them if you do people right  i will tell everyone i know and if you do people wrong i will tell everyone i know and he did the business went bank rupt after on
  • I was in the sdtore shopping and on my slip is says to go to your web site and enter for a 4,000 sears gift card, i did not liket he site all it did was ask me question about college and buyin other things, this is faluse addvertising. now all these people are calling me. what about the gift cert you said about, never did get any information on it. I am a sears customer  not all this other junk.
  • Today, I called the number for Sears repair and service in my local phone book. The automatic answering machine did NOT give me any option to speak with the people at the service center in Melrose Park, where I left my vacuum cleaner for repairs a week ago. And there is no contact number on the receipt I received when I dropped off the appliance. So can you help me reach this repair center???? The woman to whom I was transferred merely hung up on me when I told her what I need.

Hilton Hotel Customer Survey Program

Ever fill out a survey and wonder if anyone ever reads it? I do — and I’m in the business of teaching people how to conduct a survey. So imagine my surprise when I got an email from Will Maloney, the General Manager of the Hilton hotel in Pearl River, NY, the day after I completed a survey about my stay at his hotel. I was surprised — and thrilled — to see the manager take the survey so seriously. Not surprisingly, I inquired about Hilton’s survey design practices, and he put me in touch with Stephen Hardenburg, who is the Director of Syndicated Customer Research for all the Hilton brands.

I always learn something from other survey practitioners, and Stephen was kind enough to share with me information about the Hilton Hotel survey program. In particular, Stephen spoke about how to improve survey response rates, how Hilton uses its survey program to drive customer satisfaction measurement and improvement, and the challenges of international surveys.

Stephen, you have an interesting title. What exactly does a Director of Syndicated Customer Research do?

We are part of the market research department that covers all the Hilton Family brands. I am responsible for the “syndicated” research, meaning the regular surveying program we do or surveys that we subscribe to, for example the JD Power and Associates syndicated hotel guest satisfaction studies. A colleague in our department handles the custom research including more ad hoc research, conducting focus group research and such.

How did you get into this?

I went to school for hotel & restaurant administration. First, I worked at the O’Hare Hilton, and then moved on to a number of different Hilton hotels in the Los Angeles area handling food and beverage event services. I then went to the Hilton corporate office in Beverly Hills as regional food & beverage director. I was in operations for the first half of my career and then got into brand management.

For the Hilton full service brand I started doing performance management to track how well the Hilton brand was doing. That got me involved in their Balanced Scorecard and all the performance metrics — financial, revenue, customer service, quality assurance. I did this for 6 to 7 years.  I then started to focus on the customer service levels information, which led to the surveying program and my current job. I started here in early 2007.

Tell me about the Hilton surveying program.

We do transactional surveys after a guest’s hotel stay has ended. We conduct surveys for all 10 of our brands at close to 3,000 hotels worldwide. This past year we have over 1.5 million completed surveys administered in 25 languages across 78 countries.

We partner with Medallia, Inc. to conduct the surveys. Most of the surveys are completed on-line using a web form, but we supplement this with paper surveys sent by postal mail for those brands where they don’t have email addresses, for example, smaller hotels like Hampton and Homewood Suites, which tend to have more pop-in stays in smaller markets. If the guest hasn’t made a reservation, then we might not have an email address.

Usually we send out the survey 24 to 48 hours after the stay. We’re getting better and better with the timeliness. It was 72 hours a year ago, and then we got it back to 48 hours. Now we send the records [about guests who have completed their stay] to Medallia each night. We have a goal of cutting it back to 24 hours.

The majority of the responses we receive are within 24 hours after sending the email invitations out. Therefore, in most cases we are getting feedback on guest stays within 48 to 72 hours after they have checked out. So there is a higher recollection of the satisfaction with their stay than there would be with a stay 2 to 4 weeks prior.

One of the great things about having 10 different brands is that it allows for experimentation. Each brand has its own survey, so we get to learn about surveying practices from these comparisons.

So what have you learned?

A key goal has been to get our response rate up. We’ve looked at trends over the years, and the response rates and abandonment rates have everything to do with the length of the survey. The brands that have the longer surveys have the lower response rates and higher abandonment rates. It doesn’t take a rocket scientist to figure out that the shorter the survey, the better.

The response rate is now about 30% for all online surveys, and the abandonment rate is about 6%, down from 7% last year. We’ve done a better job of shortening the length of the survey. They were on average about 12 minutes, and we’ve gotten them down to 10 minutes. We need to do better and get them down to 7 minutes.

How have you shortened the time it takes someone to complete the survey?

One of the things we’ve done this year is to use branching to shorten the survey. For example, we ask, “Did you use the restaurant?” and only if you check off “yes” do we ask you to rate it. That was a change we made in 2007. So that helped us cut down the time of the survey and gave us better data.
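Hilton runs its surveys through Medallia, so the actual implementation isn’t visible to us, but the branching idea itself is simple. Here is a minimal sketch, with hypothetical question wording, of how a gate question controls whether the follow-up rating gets asked at all.

    # Minimal illustration of survey branching: the restaurant rating is asked
    # only when the gate question is answered "yes". Question wording and
    # structure are hypothetical, not Hilton's actual survey.
    def run_survey(ask=input):
        answers = {}
        answers["used_restaurant"] = ask("Did you use the restaurant? (yes/no) ").strip().lower()
        if answers["used_restaurant"] == "yes":
            answers["restaurant_rating"] = int(ask("Please rate the restaurant (1-10): "))
        # ...further sections follow the same gate-then-rate pattern...
        return answers

    if __name__ == "__main__":
        print(run_survey())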

So, how sensitive is the length of the survey questionnaire to the response rate?

The longest survey is for the Hilton full service brand.  The mean time to complete the survey is 11.3 minutes. 16% [of respondents] take more than 15 minutes to complete [the survey]. For this brand the response rate is 27.5%.

The shortest survey is for the Waldorf Astoria Collection. The mean time to complete the survey is 4.8 minutes. 5% take more than 15 minutes, probably due to open-ended comments, while 85% take less than 7 minutes. Our response rate here is 32%.

I think the length of the surveys impacts future response rates. There is a lag.  Initially, customers do respond and take the survey because they don’t know the length, but they will remember if it’s a really long survey and not respond to a future request. The Conrad brand has the highest response rate at 36%, but this is the first year we’ve conducted surveys for that brand. The Hilton brand is the lowest at 27.5%.

How do you decide who gets a survey invitation?  Or do you send an invitation out to every guest?

Our goal is to get a certain number of responses for each hotel each month. For Hilton brands it’s 75 completed surveys, 45 for Hampton, and 35 for Homewood. We do random sampling to get our invitation list, and we target the same mix of Hilton Honors members versus non-members.

One of the controls we have in place is to not over-survey a customer. If they stay within a particular brand, we will not survey them more than once every 30 days. If they stay within the family of brands, we won’t survey them more than once every 90 days. We have several different rules in place. For example, we also maintain an opt-out list.
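The selection rules Stephen describes amount to a random draw from guests who clear a few suppression checks. Here is a hedged sketch of that logic; the data structures and function names are hypothetical, though the 30-day, 90-day, and opt-out rules come straight from his description.

    import random
    from datetime import timedelta

    # Hypothetical guest records: dicts with an "email" key. The last-surveyed
    # lookups and the opt-out set would come from the survey platform's history.
    def eligible(guest, today, opt_out, last_brand_survey, last_family_survey):
        email = guest["email"]
        if email in opt_out:
            return False
        brand_date = last_brand_survey.get(email)    # last survey for this brand
        family_date = last_family_survey.get(email)  # last survey for any brand in the family
        if brand_date and today - brand_date < timedelta(days=30):
            return False
        if family_date and today - family_date < timedelta(days=90):
            return False
        return True

    def draw_invitations(guests, n_invites, today, opt_out, last_brand_survey, last_family_survey):
        pool = [g for g in guests
                if eligible(g, today, opt_out, last_brand_survey, last_family_survey)]
        random.shuffle(pool)  # in practice the draw would also balance Honors members vs. non-members
        return pool[:n_invites]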

Do you offer any incentive to get people to complete the survey?

No, at this time we don’t offer any incentive.

Why do people fill out the survey?

Good question. We do get a pretty good distribution of responses from positive to negative. So I can’t say we only get those who complain or compliment. I think customers just want their voices to be heard.

How about Hilton Honors Members?  Is their response rate higher?

There’s no real difference in response rate. However, Honors members tend to be tougher graders as well as business travelers.

How is the survey instrument structured?

We use a 10-point scale now. We switched to that 4 years ago from a 7-point scale. We were trying to align with the JD Power survey, which uses a 10-point scale. The endpoint anchors are Extremely Satisfied to Extremely Dissatisfied. And of course, we use the branching I mentioned earlier.

Now let’s get to the interesting part since this is what got me connected to you. What do you do with the data?

The great thing about Medallia reporting is that it is live data, updated on a nightly basis. So, a hotel could see how the front desk is being rated on, say, friendliness today versus yesterday. If there’s a change, they can take quick action to address the issue.

Will Maloney, the General Manager at the Pearl River Hilton, wrote to me that “each morning over my coffee I review the surveys from the day before and write a response to every customer survey our hotel receives, whether it’s positive, negative, or somewhere in between.” I guess he wasn’t joking!

No, he wasn’t. Our goal is to get the hotel managers to be responsive and to fix problems quickly.  By getting them this feedback faster, we’re giving them the tools to do this. That’s part of the reason why we’re pushing to get the surveys out faster after the hotel stay ends. It also shows our customers that their feedback is important to us and we really do value their input.

At an aggregate level, how do you report the data?

One of the changes we made this year to our reporting was how we report our “Top Box” scores — on a 10-point scale — for our overall loyalty calculation. We used to report the percentage of responses in the top 3 boxes [that is, 8s, 9s, and 10s]. Now we report only the top 2 boxes [that is, 9s and 10s]. We felt this was a better definition.

The data are used in the Balanced Scorecard process both at the brand level and at the hotel level as part of their bonus plans. What gets measured gets results. These bonuses go deep into the staff. The hotels will have different incentives for each department, for example, front desk staff and housekeeping, and for the hotel as a whole by week, by month, etc. The bonuses then roll up for the region level and the brand level. It’s a very complex bonus structure.
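The top-box arithmetic behind that reporting change is worth seeing in numbers: moving from top-3-box to top-2-box is simply a stricter cutoff applied to the same 10-point data. A small illustration with made-up scores:

    # Percent of responses falling in the top boxes of a 10-point scale.
    # The sample scores are made up for illustration.
    def box_pct(scores, boxes):
        return round(100 * sum(s in boxes for s in scores) / len(scores), 1)

    scores = [10, 9, 8, 8, 7, 10, 6, 9, 8, 5]
    print("Top 3 box (8-10):", box_pct(scores, {8, 9, 10}))  # 70.0
    print("Top 2 box (9-10):", box_pct(scores, {9, 10}))     # 40.0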

Sounds like a Six Sigma program with some influence from the Net Promoter Score® concept. 

We don’t do Six Sigma formally, but we do have hotel performance staff that work with groups in the hotel to improve performance.  We don’t use the Net Promoter Score® directly either. One of the problems I see with the Net Promoter Score® is that it reports just that one calculated number. You could get the same number with different combinations of promoters and detractors, but the combinations would really tell different stories.
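Stephen’s point about the single NPS number hiding different stories is easy to demonstrate. With made-up counts of promoters (9-10), passives (7-8), and detractors (0-6), two very different distributions produce the same score:

    # Net Promoter Score: percent promoters minus percent detractors.
    # The counts are made up to show two different mixes yielding the same score.
    def nps(promoters, passives, detractors):
        total = promoters + passives + detractors
        return round(100 * (promoters - detractors) / total)

    print(nps(promoters=50, passives=30, detractors=20))  # 30
    print(nps(promoters=35, passives=60, detractors=5))   # 30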

You mentioned you survey worldwide. Is that new?

We just launched internationally in January this year [2007], and cultural issues are presenting new challenges. This is what we’re struggling with now. In Asia guests rate much harsher than Americans or Europeans. The Japanese are most definitely the harshest. However, not only do the Japanese not give 10s, they also give few 1s. So their scores tend to cluster more in the middle. Surprisingly, the response rates are still near 30% in Asia.

The Middle East presents other challenges. In the Middle East, guests just don’t want to give out their email addresses, so we’re having trouble getting enough of a sample. The response rate, though, is the same.

We’re still learning.

So, how is it being the “Survey Guy” in your company?

There is some disdain for the person who leads the survey program. Hotel managers [who don’t like their scores] will say, “You’re talking with the wrong people…” When we first launched, the biggest misimpression was that many managers felt that if we got them a bigger sample, their scores would go up. “If you got me more surveys, my scores would go up.” Of course, that’s not likely to be true once we get a certain number of responses.

Do you have any recommendations for others in a similar situation? 

It’s a constant learning process, especially with the international piece now.

Thanks, Stephen, for your time and your insight.