Home Depot Transaction Survey

Summary: Transactional surveys are a good method for tracking the quality of service delivery in near real time. The concept is quite simple. After some transaction is complete, you ask the customer who experienced it to provide feedback. Such surveys should be short and sweet, but the Home Depot transactional survey is anything but short, imposing a high respondent burden. The hook to get you to complete the survey is a raffle entry, but given the many shortcomings of the survey design — including a truly egregious set of instructions for aspects of the transaction not actually experienced — are the data valid?

Note: This article was written in 2007. Since then, the survey has changed somewhat from what is described here. While the survey used in 2012 as I write this is shorter, it is still quite long for a transactional survey, and most surprisingly, the egregious instructions are even more front and center for the respondent.

Note: If you have landed on this page because you ran a search on “Home Depot Survey,” please note that you are NOT on the Home Depot website.

~ ~ ~

If you’ve shopped at a Home Depot — or Staples or even the Post Office — you may have noticed an extra-long receipt. The receipt includes an invitation to take a “brief survey about your store visit” with a raffle drawing as the incentive. Seems simple. Why not do it?

The Home Depot customer satisfaction survey is a classic example of a transactional or event survey. The concept is simple. When customers — or other stakeholders — have completed some transaction with an organization, they get a survey invitation to capture their experiences. These transactional surveys typically ask questions about different aspects of the interaction and may have one or two more general relationship questions. Event surveys will also have a summary question about the experience, either at the start or the end of the survey. Reichheld’s Net Promoter Score® approach is an example of an event survey program.

Transactional Event Surveys as a Quality Control Tool

The most common application for event surveys is as a customer service satisfaction survey. Why? It’s perhaps the most efficient and effective method for measuring the quality of service from the most critical perspective — that of the customer.

A transactional survey is a quality control device. In a factory, product quality can be assessed during its manufacture and also through a final inspection. In a service interaction, in-process inspection is typically not possible or practical, so instead we measure the quality of the service product through its outcome. However, no objective specifications exist for the quality of the service. Instead, we need to understand the customers’ perception of the service quality and how well it filled “critical to quality” requirements — to use six sigma terminology. That’s what an event survey attempts to do.

An event survey has another very important purpose: complaint solicitation. Oddly to some, you want customers to complain so you can resolve the problem. Research has shown that successful service recovery practices will retain customers at risk of switching to a competitor.

But the Home Depot transactional survey is a good-news, bad-news proposition. The goal of listening to customers is admirable, but the execution leaves a lot — a whole lot — to be desired. The reason is simple. The Home Depot survey morphs from an event survey into a relationship survey. And it is loaded with flaws in questionnaire design.

Relationship Surveys — a Complement to Event Surveys

In contrast to an event survey that measures satisfaction soon after the transaction is complete, a relationship (or periodic) survey attempts to measure the overall relationship the customer — or other stakeholder — has with the organization. Relationship surveys are done periodically, say every year. They typically assess broad feelings toward the organization, whether products and services have improved over the previous period, how the organization compares to its competitors, where the respondent feels the organization should be focusing its efforts going forward, etc.  Notice that these items are more general in nature and not tied to a specific interaction.

Relationship surveys tend to be longer and more challenging for the respondent since the survey designers are trying to unearth the gems that describe the relationship. But unless the surveying organization has a tight, bonded relationship with the respondents, a long survey high in respondent burden will lead to survey abandonment.

The Home Depot Customer Satisfaction Survey — Its Shortcomings

If you’ve taken the Home Depot survey, you probably found yourself yelling at the computer. The survey purports to be an event survey. The receipt literally asks you to take a “brief survey about your store visit.” Like the Energizer Bunny, though, this survey keeps going and going and going…

When I took the survey, having bought a single item for 98 cents, it took me between 15 and 20 minutes — far too long for a “brief” transactional survey. And I suspect it could take some people upwards of an hour to complete.

Why so long? First, the design of the transactional aspects, and second, the transition into a relationship survey. When I mention the Home Depot customer satisfaction survey in my Survey Workshops and conference presentations, everyone who has taken it is unanimous in their feelings about it: they were frustrated, and it took forever to get through.

First, the design of the transactional aspects. One of the early questions asks you what departments you visited. Sixteen departments are listed! When I took this survey, I had stopped quickly at Home Depot for one item, but a modest home project could involve electrical, hardware, fasteners, building materials, etc.

This checklist spawns a screen full of questions about each department visited. This is known as looping, since the survey loops through the same questions for each department. Looping is a type of branching where a branch is executed multiple times, piping in the name of each department.

See how the survey can get very long very quickly? (I knew it was a very long and complicated survey when I clicked on the drop down box on my browser’s back button and saw web page names like “Q35a” and “Q199”.)
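
To make the mechanics concrete, here is a minimal sketch in Python of how looping with piped text works. The department names and the per-department questions are my own illustrative assumptions, not Home Depot’s actual question set.

    # Minimal sketch of survey "looping": the same question block repeats for
    # every department the respondent checked, with the department name piped
    # into the question text. Departments and questions are illustrative only.

    DEPARTMENT_QUESTIONS = [
        "How satisfied were you with the helpfulness of the associate in {dept}?",
        "How satisfied were you with product availability in {dept}?",
        "How easy was it to find what you needed in {dept}?",
    ]

    def build_looped_questions(departments_visited):
        """Return every question generated by looping over the checked departments."""
        questions = []
        for dept in departments_visited:            # one pass of the loop per department
            for template in DEPARTMENT_QUESTIONS:
                questions.append(template.format(dept=dept))
        return questions

    visited = ["Plumbing", "Electrical", "Hardware", "Building Materials"]
    print(len(build_looped_questions(visited)))     # 4 departments x 3 questions = 12

Even this modest three-question loop adds a dozen questions for a four-department visit; the actual survey presented a full screen of questions for every department checked.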

[Image: home-depot-survey-plumbing]

Second, the designers also make the survey feel longer through their question design. They chose to use a 10-point scale. (See the example nearby.) Now, when you think about the helpfulness of the associate in the plumbing department, can you really distinguish between a 6, 7, and 8 on the scale? Was it such an intense interaction that you could distinguish your feelings with that level of precision? Of course not. The precision of the instrument exceeds the precision of our cognition. This is like trying to state the temperature to an accuracy of two decimal places using a regular window thermometer! But the request for that precision lengthens the time it takes the respondent to choose an answer — with likely no legitimate information gain. People ask me, “How many points should a survey scale have? Aren’t more points better?” Not if the added respondent burden exceeds the information gain.

Third, the scale design is wrong. The anchor for the upper end of the scale is Completely Satisfied, whereas the lower end anchor is Extremely Dissatisfied. In general, end-point survey anchors should be of equal and opposite intensity. These aren’t. Also, the Completely Satisfied anchor corrupts the interval properties of this rating scale. “Completely” makes the leap from 9 to 10 larger than any other interval. (Mind you, this survey design was done by professionals. You novice survey designers, take heart!)

[Image: home-depot-survey-error]

Fourth, the survey designers explicitly want you to add garbage to their data set! A response is required on every question, grossly increasing respondent burden. Plus, some of the questions simply are not relevant to every shopping visit. Look at the example below for the error message you’ll receive if you leave an answer blank.

If you do not have direct experience with an aspect listed here, please base your responses upon your general perceptions.

So, some of the survey responses will generate data about the store visit and other responses will generate general data based on the Home Depot image! Just what is the managerial interpretation of these data? How would you like to be the store manager chastised over surveys about visits to your store when some of the data are based upon “general perceptions”? (If you know a store manager, please ask them how the survey results affect their job.)

Some survey design practices are based on the personal preference of the survey designer. Other practices are just plain wrong. This practice — in addition to some other mistakes — is beyond plain wrong.

A primary objective in survey design is for every respondent to interpret the questions the same way. Otherwise, you’re asking different questions of different respondent subgroups. Which variation of the question do you use to interpret the results? Here, the survey design poses different interpretations of the questions to different respondent subgroups — and we don’t know who did which! Quite simply, the data are garbage — by design!

[While Home Depot has modified its survey from when we first posted this article, those instructions remain. In fact, the instructions are in the introduction of the survey!]

[Image: home-depot-survey-associates]

Fifth, the survey is clogged with loads of unnecessary words that lead to survey entropy. Look at the screen about the Associate. The survey designers could restructure the screen with a single lead-in statement such as “The Associate was…” that applies to every item on it. How many words could then be eliminated? The remaining words would be the constructs of interest that should be the respondent’s focus. To borrow a concept from engineering, the signal-to-noise ratio can be improved. Even the scale point numbers don’t need to be repeated for every question. That’s just more noise.

The Event Survey Run Amuck

After the truly transactional questions, the survey then morphs into a relationship survey. Where else do you shop?  What percentage of your shopping do you do in each of those other stores? How much do you spend? What are your future purchase intentions? And on and on and on.

Survey length has several impacts upon the survey results. First, it’s certainly going to impact the response rate, defined here as the proportion of people who start the survey and complete it.

Second, the length will create a non-response bias, which results when a subset of the group of interest is less likely to take the survey. I can’t imagine a building contractor taking the time to do this survey.

Third, the survey length activates a response bias of irrelevancy.  The quality or integrity of the responses will deteriorate as respondents move through the survey. People go through the scores of screens to enter a raffle to win a $5000 gift certificate as promised on the survey invitation on the receipt. Of course, the raffle entry is done at the last screen. (Note: one prize is drawn each month, and according to the Wall Street Journal, Home Depot receives about a million surveys each month. If so, then the expected value of your raffle entry is one-half penny!)
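
For the record, here is the arithmetic behind that half-penny figure, assuming one $5,000 prize per month and the roughly one million monthly surveys the Journal reported:

    prize = 5000           # one $5,000 gift certificate drawn each month
    entries = 1_000_000    # roughly a million surveys per month (WSJ figure)
    print(prize / entries) # 0.005 dollars per entry, i.e., about half a penny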

As screen after screen appears and you feel you’ve invested so much time, you’re determined to get into that raffle. But what happens to how you approach the questions? You just put in any answer to get to the next screen. And if you think a particular answer might spawn a branching question, for example, by saying you were unhappy about something, you avoid those answers. I know I was not unique in this reaction. I quizzed people who took the survey without leading them toward this answer. That is the reaction this absurdly long survey creates.

~ ~ ~

Since I first wrote this article, the Wall Street Journal reported in a February 20, 2007 article on Home Depot’s woes, “Each week, a team of Home Depot staffers scour up to 250,000 customer surveys rating dozens of store qualities — from the attentiveness of the sales help to the cleanliness of the aisles.” After reading this article, how sound do you think their business decisions are?

Event Survey Practical Points

If you’re looking to create a survey for capturing customer feedback about completed events or transactions, here are some practical points.

Keep It Short. If you want to get a good response rate, then keep it short and sweet — the ol’ KISS logic. For most transactional processes, 7 to 12 questions should be sufficient. You MUST resist the temptation to turn it into an “all things for all people” survey — or, more appropriately, an “all things for all departments” survey. Every department will want a piece of your action. Say “NO” early and often. It’s either naiveté or sheer arrogance on the part of the survey designers to believe that they can get — or con — a respondent into taking a long survey and still generate legitimate answers.

Use Random Sampling. If you have ongoing transactions with a customer base, you probably don’t want to send a survey invitation to everyone every time they have a closed transaction. This will promote “survey burnout” and lead people to complete the survey only when they have an axe to grind — the so-called self-selection, non-response bias. Instead, randomly select people from the list of closed transactions. You will need some administrative controls over this list management to ensure you don’t over-survey certain people.

In the Home Depot case, random sampling really isn’t in play since this is an event survey with a point-of-contact administration method. They could generate the survey invitation randomly on receipts, but here the survey burnout is caused not by repeated invitations but by the survey length.
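
For those running their own programs, here is a minimal sketch in Python of the kind of administrative control over the sampling list I have in mind. The field names, the weekly sample size, and the 90-day rest period are assumptions for illustration, not a prescription.

    import random
    from datetime import date, timedelta

    # Minimal sketch: randomly sample survey invitations from closed transactions
    # while "resting" anyone surveyed recently. Field names, the sample size,
    # and the 90-day rest period are illustrative assumptions.

    REST_PERIOD = timedelta(days=90)
    SAMPLE_SIZE = 50

    def select_invitees(closed_transactions, last_invited, today=None):
        """Sample customers to invite, skipping anyone invited within the rest period."""
        today = today or date.today()
        eligible = [
            t for t in closed_transactions
            if today - last_invited.get(t["customer_id"], date.min) >= REST_PERIOD
        ]
        sample = random.sample(eligible, min(SAMPLE_SIZE, len(eligible)))
        for t in sample:
            last_invited[t["customer_id"]] = today   # record the invitation
        return sample

    transactions = [{"customer_id": 101}, {"customer_id": 102}, {"customer_id": 103}]
    history = {}                                     # customer_id -> date last invited
    print(select_invitees(transactions, history))

Run something like this weekly against the closed-transaction list, and the rest period keeps your most frequent customers from being invited every time they check out.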

Implement a Service Recovery (Complaint Handling) System Concurrent with the Event Survey Program. Complaint handling and event surveying are tightly linked. They’re complementary elements in a customer retention program. If a customer voices a complaint in a survey and you don’t respond, how’s the customer going to react? Obviously, you’ve just fanned the flames of dissatisfaction. A Yahoo web page has the following comments about the Home Depot survey:

[Image: home-depot-comments]

Towards the end it asks for comments. I gave some comments then asked if anyone actually reads these comments. I gave my email address, and asked for a reply, but no one ever replied. I figured I’d at least get a form reply. Do you think anyone actually reads the comments in surveys like these?

[reply to the post] I did the same thing when I took the survey. I had a lot of bad comments and asked for a reply. No response. I will go to my local hardware store next time. It just seems like HD has gotten too big, almost like Walmart. (sic)

The comment screen on the Home Depot survey does say that if you have issues you would like addressed to call its Customer Care Department, providing the toll-free number. But do customers recognize — or care — that the survey program is not linked to the customer service operation? Of course not. Try explaining to a customer why entering a comment like the ones above is not equivalent to contacting customer service. You’ll get glazed looks back. This practice demonstrates inward-out thinking, not outward-in thinking. (And if we’re on Daylight Savings Time, exactly what are the hours for Customer Care?)

Consider Different Survey Administrative Methods. Transactional surveying can be done by telephone, web form, paper sent through postal mail, or using the IVR if you’re in a call center operation. Since this is a quality control tool, you want to get your data as quickly as possible to act on any business process issues. Postal mail surveys are notoriously slow. Telephone surveys are expensive. Web form surveys are fast and inexpensive once the system is set up, but your target audience must have web access and be web savvy. Could Home Depot be introducing an administrative bias through web surveying?

How Often & How Soon to Survey. In the Home Depot point-of-contact survey approach, the surveying is essentially done at the close of a transaction. In situations where you have a database of customer contact information, you could do the surveying in batch mode, say, every day or every week. Weekly is the typical period. If you let the period be too long, say monthly, the respondents’ recall will be poor, and you increase the probability of a process problem affecting yet more customers until you learn about the problem through your survey.

Outsource Surveys Versus In-House Execution. Many surveying services exist that will conduct the survey program for you. They may give you real-time access to the results through a web portal, and they may give you comparative statistics with other companies in your industry. But you will pay for these features. Transactional surveys can readily be done in house, but don’t shortchange the design and set-up. You need to have some level of dedicated focus in a program office to make it happen. You also must protect the confidentiality of any survey information about employee performance.

Pilot Test Your Surveys. A survey is a product that you as the survey designer should test before launching, just as a company should test any product before making it and selling it to customers. The pilot test or field test is critical to finding out the flaws in the detail of the survey design and in the overall design, like its length. If the Home Depot survey was pilot tested, it was an ineffectual test.

Don’t Abuse The Survey and Your Respondents. Please know the difference between an event survey and a relationship survey, and be humble in your request for your respondents’ time. By attempting to make the survey serve two masters — the event and the relationship — you’ll compromise on both. By shooting for the stars in terms of the information you demand, you may just get nothing. Or worse than nothing — made-up responses just to get to the raffle.

Generate Actionable Survey Data

When performing most any customer research, but especially when conducting customer satisfaction surveys, a key goal is to create “actionable data.” Why actionable? The end result of the research should be input to some improvement program or change in business programs or practices. If that’s not the goal, then why is the research being performed? (Hopefully, it’s not just to get a check mark on some senior manager’s Action Item list!)

However, mass administered surveys may not provide the detailed, granular data needed for taking action. Well-designed survey instruments use mostly structured, closed-ended questions. That is, these question formats ask the respondent to provide input on an interval scale, for example, a 1-to-5 scale, or by checking some set of items that apply to them for some topical area. These closed-ended questions have two main advantages:

  • Ease of analysis for the surveyor. Since the responses are a number or a checkmark, there should be no ambiguity about the response. (Whether the respondent interpreted the question correctly is another issue.) The surveyor can mathematically manipulate the codified responses very easily. This is in contrast to open-ended questions that provide free-form textual responses. Analyzing all that text is very time-consuming and subject to interpretation by the survey analyst. (It’s also darn boring — but I don’t tell my clients that!)
  • Ease of taking the survey for the respondent. A key metric for a survey instrument is the degree of “respondent burden,” that is, the amount of effort required of the person completing the survey. Writing out answers is far more time-consuming than checking a box or circling a number. Greater respondent burden leads to lower survey response rates.

The closed-ended survey questions help paint a broad picture of the group of interest, but they seldom give details on specific issues — unless the survey contains a great many highly detailed questions, which increases the burden on the respondent through the survey length. Surveys typically tell us we have a problem in some area of business practice, but not the specifics of the customer experience that are needed for continuous improvement projects.

So, how can the detailed actionable data be generated — as part of the mass administered survey or as an adjunct in a more broadly defined research program? Here are some ways to consider getting better information:

  • Think through the research program and survey instrument design. I just mentioned above that actionable information can be generated through a detailed survey, one that asks very specific questions. But survey length becomes an issue. Longer surveys will hurt response rates. Perhaps your research program can be a series of shorter surveys administered quarterly to very targeted — and perhaps different — audiences.

    Additionally, examine any instrument critically to see if unnecessary questions can be eliminated or if questions can be structured differently to solicit the desired information from respondents more efficiently. For example, say you wanted to know about issues or concerns your customers have. A multiple-choice question would identify whether a customer had concerns about the items listed, but you wouldn’t know the strength of those concerns. Instead, consider using a scalar question where you ask the level of concern the customer has. It’s a little more work for the respondent, true, but not much. Yet, you may get data that are far more useful.

    Survey instrument design is hard work, but it’s better for the designer to work hard than to make the respondent work hard.

  • Judicious use of open-ended questions. As mentioned, an obvious way to generate detailed data is to ask open-ended questions, such as, “Please describe any positive or negative experiences you have had recently with our company.” While some respondents will take the time to write a tome — especially on a web-form survey — those respondents without strong feelings will see this as too much work and give cryptic comments or none at all. Yet, their opinions are crucial to forming a broad — and accurate — profile of the entire group of interest.

    Novice survey designers typically turn to open-ended questions because they don’t know how to construct good structured questions. In fact, it’s a dead giveaway of a survey designer’s skill level! If you find you have to fall back upon open-ended questions, then you don’t know enough about the subject matter to construct and conduct a broad-based survey. It’s that simple.

    Some time ago I received a survey about a professional group that had 11 (yes, eleven!) open-ended questions in four pages. Recently, I received a survey about a proposed professional certification program. The first two questions were open-ended. And this latter survey was done by a professional research organization! Asking several open-ended questions is a surefire way to get blank responses and hurt the response rate.

    That said, open-ended questions can generate good detailed data, but use them judiciously. One way they can be used appropriately leads to our next subject.

  • Use branching questions in the survey instrument. Frequently, we have a set of questions that we only want a subset of the target audience to answer, either because of their background or because of some experiences they have or have not had. Branching means that respondents are presented certain questions based upon their responses to a previous question; those responses determine the “branch” a respondent follows. These are easiest to implement in telephone and web-form surveys where the administrator controls the flow of the survey, and most difficult to implement in paper-based surveys. (Don’t even think about using branching on an ASCII-based email survey.) Some survey programs call these “skip and hit” questions. (A minimal sketch of branching logic appears after this list.)

    Branching can shorten the survey that a respondent actually sees, allowing for targeted detailed survey questions without unacceptable respondent burden. For example, if a respondent indicates he was very unhappy with a recent product or service experience, a branch can then pose some very specific questions.

    As alluded to above, the branch may lead to an open-ended question. But beware! An audience member at a recent speaking event of mine had encountered a survey where a pop-up window appeared with an open-ended question whenever he gave a response below a certain level, say 4 on a 1 to 10 scale. He found these pop-ups annoying — don’t we all! So, he never gave a score below five. Talk about unintended consequences! The survey designer created a false upward bias to the survey data!

  • Use filtering questions in the analysis. When we get a set of survey data, we will always analyze it as a whole group, but the real meat may be found by analyzing the data segmented along some variables. These filtering questions may be demographic variables (e.g., size of company, products purchased, years as a customer, age, and title). Those demographic data could come from questions on the survey, or they could come from our database about those whom we just surveyed. (This presumes that the survey is not anonymous. If it is, then we have no choice but to ask the questions. But, again, beware! Demographic questions impose on the respondent. Too many of them will hurt the response rate.)

    The filtering variables may also come from the responses to key questions on the survey. Just as the response to a question, such as satisfaction with problem resolution quality, could prompt an open-ended branching question, the results of that question may also be used to segment the data set for analysis. Basically, you’re looking for correlations or associations across the responses to multiple questions to see if some cause-and-effect relationship can be identified. (Multivariate statistical procedures could also be used.) A simple segmentation sketch appears after this list as well.

  • Conduct pre- or post-survey interviews. Perhaps the best method for getting more actionable data is to expand the definition of the research program to include more than just a mass-administered survey. Every research technique has its strengths and its weaknesses. Surveys are good at painting profiles of some group. Interviews and focus groups (also known as small group interviews) are very good at generating detailed, context-rich information. These data can help you understand cause-and-effect relationships by getting the full story of what’s behind the respondent’s feelings. I’ll talk more about these in a future article.

    Such research techniques are frequently used at the start of a research program to understand the field of concern. This understanding then allows for a better-designed survey instrument, but the context-rich research also provides valuable information about the subject area. There’s a double benefit to this research. But there’s nothing that says these interviews can’t be used at the back end of the research program as a follow-up to the mass administered survey. Surveys frequently pose as many new questions as they answer, and this is a method for answering those new questions. In fact, you might be able to generate the interview list from your survey. When you pose an open-ended question, offer to contact the person to talk about their issue in lieu of having them write in their comments. In essence, that creates an opt-in list of highly motivated people.
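
Here is the minimal branching sketch promised above. It is only an illustration: the question wording, the 1-to-10 scale, and the threshold of 4 are my assumptions, not any particular survey tool’s logic.

    # Minimal sketch of a branching (skip-logic) question: an open-ended
    # follow-up appears only when a rating falls below a threshold.
    # Wording, scale, and threshold are illustrative assumptions.

    FOLLOW_UP_THRESHOLD = 4    # ratings of 1-3 trigger the follow-up branch

    def ask(prompt):
        return input(prompt + " ")

    def rate(prompt, low=1, high=10):
        while True:
            answer = ask(f"{prompt} ({low}-{high}):")
            if answer.isdigit() and low <= int(answer) <= high:
                return int(answer)

    def run_survey():
        responses = {}
        responses["resolution_rating"] = rate(
            "How satisfied were you with how your problem was resolved?")
        # Branch: only dissatisfied respondents see the detailed follow-up.
        if responses["resolution_rating"] < FOLLOW_UP_THRESHOLD:
            responses["resolution_detail"] = ask(
                "We're sorry to hear that. What went wrong?")
        return responses

    print(run_survey())

The open-ended follow-up appears only on the dissatisfied branch. Keep the pop-up anecdote above in mind, though: if respondents learn that low scores trigger extra work, some will inflate their ratings to avoid it.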
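
And here is the simple segmentation sketch mentioned in the filtering point above. The column names, the tenure segments, the tiny made-up data set, and the use of the pandas library are all assumptions for illustration.

    import pandas as pd

    # Minimal sketch of filtering/segmenting survey results. Column names,
    # segments, and the tiny made-up data set are illustrative only.

    responses = pd.DataFrame({
        "years_as_customer":    ["<1", "1-5", "5+", "1-5", "5+", "<1"],
        "overall_satisfaction": [8, 6, 4, 7, 5, 9],
        "problem_resolution":   [9, 5, 3, 6, 4, 8],
    })

    # Whole-group view: a single average hides differences between segments.
    print(responses["overall_satisfaction"].mean())

    # Segmented view: average satisfaction by customer tenure.
    print(responses.groupby("years_as_customer")["overall_satisfaction"].mean())

    # Association between two key questions across the whole data set.
    print(responses["overall_satisfaction"].corr(responses["problem_resolution"]))

If long-tenured customers rate problem resolution far lower than new ones do, that cross-tab, not the overall average, is where the actionable finding lives.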

Unfortunately, no silver bullet exists for getting actionable customer feedback data. Research programs have inherent trade-offs, and this article outlined some of the critical ones.