Customer Satisfaction Surveys: The Heart of a Great Loyalty Program

Murphy’s idea of satisfaction is an open can of tuna fish; his loyalty would be ensured by a second can; and his idea of surveying is watching chickadees, titmice, and sparrows at the feeder. Yes, Murphy is a cat, a cat with elevated calcium levels in his blood, which may indicate a tumor. Since his diet could be the cause and Deli-Cat® is his primary diet, Murphy’s owner, who happens to be the author of this article, called the Ralston-Purina customer help line, whose number was on the Deli-Cat® packaging, to learn the precise calcium level in the food.

The help desk was true to its name, but Ralston-Purina’s interaction with the customer is not purely passive. A few days later, I received a phone call whose purpose was to conduct a survey on behalf of Ralston-Purina. The interviewer asked not only whether the information request was fulfilled and the agent was courteous, but also whether my experiences with the customer service desk would make me more or less likely to buy Ralston-Purina products in the future. In other words, this telephone survey was trying to assess the effect of the help desk upon my loyalty to the company and its products.

The consumer goods industry has always been in the vanguard of customer research, so Ralston-Purina’s efforts with regard to its help desk shouldn’t surprise us. However, customer support organizations in companies across many other industries are also engaged in similar customer research efforts.

Loyalty Is a Behavior

Before talking about customer loyalty programs and “customer sat” surveys, we need to answer two questions. First, what is loyalty? Second, why should we, as support service managers, care about loyalty?

To answer these questions, let’s look at the support business process. First, the customer contacts our organization to learn how to get the intended value out of some product. Then, through the service encounter, the question or issue is addressed, leading to two outcomes: a technical outcome and a behavioral outcome. The technical outcome relates to the extent to which the problem was resolved. The behavioral outcome is driven by how the customer feels towards the company as a result of the interaction.

Customers enter the service encounter with certain expectations, and the comparison of those expectations to their perception of our performance equates to some level of satisfaction, which in turn leads to some set of behaviors. Hopefully, the behavioral result is loyalty towards our company rather than mere satisfaction or, even worse, dissatisfaction. As service managers, we all wear two hats, operations and marketing, making both the technical and behavioral outcomes key to understanding the effectiveness of our support organization.

You might ask why the distinction between loyalty and satisfaction is so important. Various research studies have shown that customer retention and loyalty drive profitability, from both the revenue and cost sides of the equation. These studies have shown how much more it costs to attract new customers than to keep current ones, and how much more product loyal customers are likely to buy from us.

There’s also a less obvious impact upon revenue. When customers more highly value a company’s products and services, they see a cost to switching vendors. Not only does this retain the customer, but this enhanced loyalty allows us to practice value-based pricing, which is a technique to maximize profitability by setting prices to “harvest” the value the customer perceives. Customers who are merely satisfied don’t feel the value and see little cost to switching vendors.

How, then, do we know if our customers are loyal and how can we increase their loyalty? That’s the aim of a customer loyalty program. A customer loyalty program is a portfolio of research techniques and action programs designed to assess customers’ attitudes towards our organization or company and to take action to improve their opinion. Both quantitative and qualitative research efforts are needed to capture the full breadth of information needed to assess and improve.

A well-balanced research portfolio allows an organization to:

  • Listen Actively. Customer research should be performed continuously to ensure a consistent quality of service delivery.
  • Listen Broadly to the entire customer base. That’s the role of surveys. Surveys are also very good at identifying specific problem areas for in-depth review and at providing an overview of our relationships with customers. We’ll focus on surveys in this article.
  • Listen Deeply. Focus groups and personal interviews, perhaps as a follow-up to a survey, allow us to understand the “why” behind the attitudes that surface through broad-based surveys. These research techniques generate the granular data that provides solid footing for improvement efforts.
  • Listen to Extremes. Comments from those who are highly pleased or displeased may identify strengths to be duplicated or fail points to be corrected. We should actively, not just passively, encourage people to complain – and to compliment.

Mass-administered surveys, which can gauge average satisfaction levels along multiple service dimensions, are the heart of a system to listen broadly to customer feedback. These surveys give us benchmarks of performance, and they may direct us towards weak points in the process.

Companies use various tools to administer these surveys. The Customer Services group in Aetna Insurance’s Retirement Services division conducts surveys through a VRU-based product immediately after a service transaction is completed. This AutoSurvey system from Teknekron can direct an email or fax to the appropriate person in the company whenever a survey response falls below some threshold, allowing for an immediate call back to the customer. “That instantaneous ability to capture a mistake and fix it is worth its weight in gold,” according to Dick Boyle, Vice President of Customer Services.
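
A minimal sketch of that kind of threshold alerting follows, in Python. The field names, the 1-to-5 scale, the threshold, and the routing table are all invented for illustration; the actual AutoSurvey product’s interface is not shown here.

    # Hypothetical threshold-based survey alerting, in the spirit of the
    # AutoSurvey setup described above. All names and values are invented.
    from dataclasses import dataclass
    from typing import Optional

    ALERT_THRESHOLD = 3  # alert on any rating below 3 on a 1-to-5 scale
    ROUTING = {          # service dimension -> responsible manager (hypothetical)
        "courtesy": "agent.supervisor@example.com",
        "resolution": "support.manager@example.com",
    }

    @dataclass
    class SurveyResponse:
        customer_id: str
        area: str    # which service dimension the question measured
        rating: int  # 1 (poor) to 5 (excellent)

    def route_alert(response: SurveyResponse) -> Optional[str]:
        """Return the address to notify if the rating falls below threshold."""
        if response.rating < ALERT_THRESHOLD:
            return ROUTING.get(response.area, "support.manager@example.com")
        return None  # rating met expectations; no alert

    # A low courtesy score triggers an immediate callback request.
    alert_to = route_alert(SurveyResponse("C-1042", "courtesy", 2))
    if alert_to:
        print(f"Notify {alert_to} for an immediate customer call back")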

America Online uses Decisive Technology’s internet-based survey services to take a constant pulse of customers’ reactions to support quality after support incidents are closed. While the scalability and affordability of the system were important, the ability to provide rapid feedback on key performance measurements has allowed AOL to improve its business performance.

Speedware, the Canadian OLAP software publisher, conducts telephone surveys for key performance measurements shortly after the service transaction is closed, asking how the customer’s perceptions match their expectations on a 1 to 10 scale. Maria Anzini, Worldwide Director of Support Services for Speedware, analyzes her survey database with Visionize’s Customer Service Analyzer. Last year, the data showed a drop in first call resolution and customer satisfaction in a very technical skill area for European customers. This hard evidence proved the need for additional staff in that targeted area.

“It’s our thermometer,” says Anzini, who also uses the survey data to manage her service agents. The results are part of a 360-degree performance review process. Each agent sees a chart showing his performance against the average agent performance. The surveys have also provided valuable data for recruiting and training practices, which is very important given the cultural differences across her international user base, all serviced out of Montreal headquarters. Anzini has identified a profile of the best agent for each product and for each geography. “I have to pay attention to how they [the international customers] want to be serviced.” For example, since British customers are more demanding, she hires agents who “maintain the charm but get to the point.”
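
The per-agent comparison Anzini describes amounts to charting each agent’s mean survey score against the team-wide mean. A small sketch in Python, using invented sample data:

    # Compare each agent's average survey rating to the overall average.
    # The scores below are invented sample data on Speedware's 1-to-10 scale.
    from statistics import mean

    scores = {
        "agent_a": [8, 9, 7, 9],
        "agent_b": [6, 7, 5, 6],
        "agent_c": [9, 8, 9, 10],
    }

    overall = mean(s for ratings in scores.values() for s in ratings)
    for agent, ratings in sorted(scores.items()):
        delta = mean(ratings) - overall
        print(f"{agent}: {mean(ratings):.1f} ({delta:+.1f} vs. team average {overall:.1f})")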

But the personal interaction afforded by the telephone survey method allows Speedware to capture richer data. Interviewers encourage respondents to explain the basis for their answers, and these quotes are vital pieces of information. “That’s where the juice is,” stated Anzini. The ability to probe deeply to get to the “why” behind customer feelings yields the type of actionable data that is essential for a re-engineering effort properly directed at root causes.

Dick Boyle of Aetna also now supplements his 5-question surveys with comments at the end. He found that many respondents were giving high marks on the separate elements of support quality (for example, courtesy, knowledge, and effectiveness), only to give a mediocre overall rating. “That led me to believe that something about the experience was not being captured in the questions, and we were missing the boat.”

This example also shows the importance of properly designed research instruments. Only well-designed instruments will provide valid information for business decisions, and there are many easy mistakes that can be made when designing survey questions.

Survey Design & Rich Data Analysis

Perhaps surprisingly, one key criterion of a good instrument is that it highlights distinctions among respondents. Ralston-Purina’s call center performed a mail survey a few years back, and the results indicated that all aspects of service were uniformly outstanding. While this may seem good, the results (data with no variance) didn’t allow them to identify which areas needed improving. The cause was the questionnaire design, which masked true underlying differences in the perception of performance. That instrument asked respondents to answer on a 5-point scale with agree-disagree anchors, a design that frequently led customers to say early in the survey, “It’s a 5 for all of the questions.”

Tom Krammer and Rik Nemanick, professors at St. Louis University who are conducting the latest research program, decided to move away from this scale. They also stopped asking questions about specific call center agents, since they found that in Ralston-Purina’s market context, customers were concerned about getting agents in trouble. They now ask for responses on a scale whose anchors are performance relative to expectations and whose midpoint is where expectations are just met. That change revealed significant distinctions, since responses were now spread across the full range rather than clumped at one end of the scale.
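
The statistical point is simple: a survey discriminates only if its responses vary. A toy illustration, with both samples invented, shows how the agree-disagree ceiling effect hides differences that the expectations-anchored scale reveals:

    # Variance is what lets a survey discriminate among respondents.
    # Both samples are invented for illustration.
    from statistics import variance

    agree_disagree = [5, 5, 5, 5, 4, 5, 5, 5]   # clumped at the top: ceiling effect
    vs_expectations = [3, 5, 2, 4, 1, 4, 3, 5]  # midpoint 3 = expectations just met

    print(variance(agree_disagree))   # near zero: little to analyze
    print(variance(vs_expectations))  # spread reveals real differences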

Professors Krammer and Nemanick also thought about the analysis they wanted to perform before designing the survey instrument. They wanted to do more than just benchmark performance and point to areas of concern. Their comprehensive survey research design allowed them to perform statistical tests to demonstrate cause-and-effect relationships. This richer analysis allowed them to target improvement efforts precisely. Ralston-Purina is using the statistical analysis to understand the specific impact of different problem resolution tactics on customer behavior, such as future purchase intentions.
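
As a hedged sketch of the kind of test involved, one could regress future purchase intent on satisfaction with the resolution; the data and variable names below are invented, and Ralston-Purina’s actual models are not shown:

    # Does satisfaction with the resolution predict future purchase intent?
    # Invented sample data; requires Python 3.10+ for these functions.
    from statistics import correlation, linear_regression

    resolution_satisfaction = [2, 3, 3, 4, 5, 5, 1, 4]  # 1-to-5, per respondent
    purchase_intent = [3, 4, 5, 6, 8, 9, 2, 7]          # 1-to-10 likelihood to repurchase

    r = correlation(resolution_satisfaction, purchase_intent)
    slope, intercept = linear_regression(resolution_satisfaction, purchase_intent)
    print(f"r = {r:.2f}; each extra point of satisfaction ~ {slope:.1f} points of intent")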

“All complaints are not created equal,” according to Ken Dean, Director of Quality Systems & Resources. A cat that can’t hold down its new food requires a different response than a customer who gets a bag of broken dog biscuits. The impact of different tactics, e.g., a free bag of food versus discounts, is measured in the survey. They even use “non-product-oriented resolution tactics” (for example, sending flowers or a coupon for a restaurant dinner), and they are developing a table to optimize service recovery, comparing each tactic to the resulting satisfaction.
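
The recovery-optimization table they describe boils down to grouping survey outcomes by tactic and comparing the resulting satisfaction. A minimal sketch, with invented tactics and scores:

    # Average post-recovery satisfaction by resolution tactic.
    # Tactic names and scores are invented examples.
    from collections import defaultdict
    from statistics import mean

    outcomes = [  # (tactic, post-recovery satisfaction on a 1-to-5 scale)
        ("free_bag", 4), ("free_bag", 5), ("discount", 3),
        ("discount", 4), ("flowers", 5), ("flowers", 4),
    ]

    by_tactic = defaultdict(list)
    for tactic, score in outcomes:
        by_tactic[tactic].append(score)

    for tactic, scores in sorted(by_tactic.items()):
        print(f"{tactic:10s} mean satisfaction = {mean(scores):.1f}")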

Service Plus [formerly ServiceWare], the Canadian service management system vendor, is also using surveys to bring hard evidence to business decisions. Gary Schultz, Vice President of International Technical Support, recently started using SurveyTracker to conduct surveys by email. “Up until we used the survey tools, our data was soft – opinion based.” For example, sales and marketing were pushing Schultz to make his support organization a 24 x 7 operation since all the benchmark companies, e.g., Corel and Microsoft, were doing it. A quick survey found that 98% of the customer base would not find it beneficial since they did not operate around the clock. “It would have been a value-added service with no added value… The survey data allowed us to make the right decision about that… and we did it [the survey] in 4 to 5 days.”

More Benefits from Active Listening

While attitude assessment and targeting improvement activities are the primary objectives of a customer loyalty program for a support organization, other objectives that affect customer loyalty can be addressed as well.

First, relationship marketing objectives can be met in part by a customer loyalty program. The very process of soliciting customers’ perceptions about the quality of the service encounter alters those perceptions. Empathy is important to a customer’s overall impression of service quality, and a well-designed and well-executed program shows the customer you care about his opinion.

Second, customer education can be advanced through the customer interaction. Nancy McNeal of New England Business Systems (NEBS) conducted a user survey for her help desk. Aside from the “fun [of] interacting with people” from all over the company, “we became acknowledged as a corporate help desk rather than just an IT function.” NEBS employees previously had not realized the extent of the services the help desk performed in supporting their activities.

In a similar vein, Aetna Retirement Services educates some key customers by bringing them into the call center and letting them listen to calls. This “gives them an appreciation for what we do,” according to Dick Boyle. “They actually do sit down and listen to calls. We then debrief with them and ask them what we could do better.”

When Speedware conducts its telephone surveys, they sometimes find that customer expectations are unrealistic since they don’t reflect the terms of the contract. “We reset their expectations… We take advantage of the survey [interaction] to educate the customer,” according to Maria Anzini.

Third, the objectives of your loyalty program don’t have to be restricted to perceptions about service just because you’re in a support service organization. Loyalty is determined by the weakest link in the value-added chain. As the last link, customer support is in the ideal position to assess the loyalty quotient for the company’s entire value-added chain.

At Service Plus, this role is explicitly recognized. “The customer relationship belongs to me,” according to Gary Schultz. “I’m responsible for the customer just after the sales cycle, that is, the long term life cycle of the customer… The issue of referenceability is my ultimate responsibility and mandate.” Schultz’s group uses his email surveys to “qualify and quantify exposure areas” in both the service and software products.

Speedware’s Anzini also uses the survey data to help those outside her organization. If a sales representative is going to a customer site, she prints out the results of the surveys along with other operational data. This properly prepares the sales representative and is “part of the customer-centric loop” practiced at Speedware.

Aetna’s Customer Services group is using its survey tool to “provide high quality competitive market intelligence on our products, our customer, our trends, problems or strengths to the rest of the company.” Through this, Dick Boyle intends to turn his “customer service function into one of the most important pieces of strategic thinking in the firm.”

The Keys to Good Listening

Notice the elements found across these programs.

First, these companies listen broadly to their base of customers, compiling scientifically valid statistics of support organization performance.

Second, they actively listen by touching customers soon after a transaction.

Third, they listen deeply, trying to get to the granular data that supports improvement activities.

While all of this makes eminent sense, there is an underlying fear. As Aetna’s Dick Boyle asks, “How much of an appetite does the customer have to want to do this?” Knowledge workers are increasingly overwhelmed with information demands. If everyone tries to get their customers to talk, will we kill the golden goose?

~ ~ ~

For you animal lovers, I am happy to report that Murphy is fine. Despite exhaustive tests at Tufts Veterinary Hospital, which did not please him in the least, the cause of Murphy’s elevated calcium level has not been isolated, but he shows no adverse effects from the calcium. And, in fact, the calcium levels have returned to normal.

He is still an active lord of his manor.