Summary.  The Federal Agency Customer Experience Act of 2017 promises to make agencies more responsive to their citizen-customers.  But will it?  By itself, will the bill’s requirements achieve its goals?  Is the feedback program required by the bill properly designed?  What elements of the feedback program should be part of a Congressional bill versus assigned to an administrator to determine in conjunction with survey experts?  This article will explore those questions.

Currently, here in the summer of 2017, a bill is before the US Congress to mandate that agencies collect feedback from customers – the Federal Agency Customer Experience (CX) Act of 2017.  As someone who has been in the customer feedback business for several decades, I laud the idea that agencies should listen to their constituent customers.

However, this bill raises several concerns.  First, we’ll look at specific concerns with the bill and the requirements it dictates for agencies’ survey programs.  Should this many details be in a bill?  Lastly, we’ll examine concerns about the general approach to improving the customer experience.

Suggestion Box for Customer Experience Improvement

How Will the Customer Experience Feedback Be Used – for Improvement or Measurement?

Feedback surveys used correctly can provide valuable information for operational improvement efforts.  But they are like the forbidden fruit in the Garden of Eden.  Since the surveys will provide feedback on individuals’ actions, there will be the temptation to use the feedback to measure employee performance.

A truism of organizational behavior is that any operational improvement tool that gets used to measure performance will compromise the value of the tool for operational improvement.  The incentive will always be to improve the performance measure — whatever it takes.

The “improvement” could be through improving the process, but it will also come from “survey manipulation”.  We all have encountered that at car repair shops, restaurants, retail stores, etc. where we get barraged by “pleas for 10s.”  Massaging survey results is really quite simple.

How to Compare Agencies?

Since the bill stipulates 4 or 5 “standardized questions”, it appears that one goal for the bill would be to compare agencies on those survey measurements.  However, the survey industry is not like the accounting industry.  We do not have standards, just conventions.

One cannot legitimately compare the results between two surveys that are seemingly measuring the same thing unless the survey questions and the survey administration are identical.  Differences seen in scores could be an effect of the survey methodology and not operational performance.  Yet, we know it will happen – perhaps with erroneous conclusions.

Not just individuals but entire agencies will be incentivized to manipulate the survey results.

If You’re Going to Compare, Don’t Create Sequencing Effects

Beyond the common required questions, agencies can add their own specific questions.  The bill doesn’t specify that those questions follow the common questions.  If a purpose of the bill is to allow comparative measurements across agencies, then the common questions should go first in the survey.  Otherwise, a sequencing effect will impact how people answer those questions.

How Can Service Recovery Be Performed?

One section of the bill requires that the survey respondents be anonymous.  Great!  The goal of anonymity in a survey program is to help reduce concerns about improper use of personal information and thus increase response rates.

(Here’s an example of how a lack of anonymity and confidentiality can affect a government survey program.)

But with anonymous respondents, how do you engage in service recovery?  Service recovery occurs when a customer’s specific problem is made known and then successfully addressed.

This is a key benefit of transactional surveys.  But, if you don’t know who submitted the survey, then service recovery can’t happen. Should not a goal of the Customer Experience Act be to improve specific customer experiences?

Further, complete assurance of anonymity is impossible to achieve since IP addresses are captured with any webform submission, assuming that’s the administration mode.  Postal mail surveys are the most anonymous, but that mode has so many drawbacks.

I’m From the FBI.  Your Customer Experience Survey is Too Long

Another section of the Customer Experience bill limits the number of questions to 10.  Short surveys, especially for transactional surveys, make sense.  But should the survey length be fixed by Congress?  To collect specific diagnostic information to plan continuous improvement initiatives may require more than 10 questions, especially given that 5 are already fixed by the bill.

I know for a fact that many government agencies today conduct feedback surveys with more than 10 questions.  Would those survey programs be in breach of the law?

Ask Only Once – or Else

Another section of the bill states that a customer can be asked for feedback for only “1 solicitation per interaction”.  Virtually every company doing customer experience feedback surveys sends out multiple reminder notes to respondents.  No doubt you’ve received them after, say, a hotel stay.  Those reminders increase the response rate and reduce the non-response bias.
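To see why reminders matter, here is a minimal simulation sketch.  All the numbers in it are assumptions for illustration – not data from the bill or any agency – built on one common pattern: less satisfied customers are often also less likely to respond, so a single solicitation over-samples the happy ones, and each reminder pulls the respondent pool closer to the true population.

```python
import random

random.seed(1)

# Hypothetical population: all figures below are assumptions for this sketch.
N = 100_000
population = [random.randint(1, 5) for _ in range(N)]  # "true" satisfaction, 1-5
true_mean = sum(population) / N

def p_respond(score):
    # Assumed response propensity: 5% per satisfaction point (5%..25%),
    # i.e., satisfied customers answer more readily.
    return 0.05 * score

def survey(solicitations):
    # After k solicitations, a customer has responded with probability
    # 1 - (1 - p)^k.  That concave curve narrows the gap between eager
    # and reluctant responders, which is what shrinks non-response bias.
    sample = [s for s in population
              if random.random() < 1 - (1 - p_respond(s)) ** solicitations]
    return sum(sample) / len(sample), len(sample) / N

mean1, rate1 = survey(1)  # invitation only
mean3, rate3 = survey(3)  # invitation plus two reminders

print(f"true mean      : {true_mean:.2f}")
print(f"1 solicitation : mean {mean1:.2f}, response rate {rate1:.0%}")
print(f"3 solicitations: mean {mean3:.2f}, response rate {rate3:.0%}")
```

Under these assumed propensities, the reminded sample both responds at a higher rate and skews less toward the satisfied customers than the invitation-only sample – the two benefits the reminders are there to deliver.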

I think the bill’s authors meant to stipulate that a specific person can’t be asked for feedback about multiple transactions within a certain time period, which is standard industry practice.  This is a good indication that the bill was not proofread by someone with survey expertise.

Ask Now While I’m “Here”

Yet another section requires that the survey invitation be made at the “point of service” if practical.  Why?  Does “point of service” refer to temporal or physical proximity — or both?  Survey invitations should be made very soon after a transaction is completed.  That reduces recall bias.  But the bill’s phrasing here could lead to confusion on the survey logistics.

The World Better Fit Our Model – and Not Change

As stated, the bill specifies 4 or 5 questions that must be asked.  But what if service delivery modes change making some of the questions irrelevant?  We now need an act of Congress to update the survey!

In fact, some of the questions are already irrelevant for some service delivery channels, for example, “whether the individual or entity was treated with respect and professionalism.”

This required question assumes personal service delivery.  What if the customer used a self-service delivery channel?  The question won’t make sense.  And one would hope for both efficiency and efficacy reasons that government agencies would increase self-service options!

Report What You Heard and What You Did With What You Heard

The bill requires a report that summarizes the feedback collected and “how the covered agency uses the voluntary feedback received by the covered agency to improve the customer service of the covered agency.”  That’s the whole point of feedback collection: taking action.

However, the survey design as stipulated in the bill will limit the ability to collect true diagnostic information to improve customer service.  If an agency gets a low score on “treating with respect,” what can they then do?

The culture should lead from feedback to improvement action, as the next section discusses, and it’s better that this happen as part of the culture rather than by Congressional dictate.  But do the bill’s stipulations handcuff agencies, keeping them from using the tools necessary?

Should Government Agency Customer Experience Feedback Efforts be Top-Down or Bottom-Up?

The goal of feedback is to learn what’s gone right and what’s gone wrong from the customer (or constituent) perspective.  The feedback data are inputs to a continuous improvement process, in this case to improve customer experience.  Such quality management initiatives are used by a large share of private companies, and I know from having trained many government employees on survey design that they are also part of many (or most) government agencies.

Continuous improvement programs are composed of a holistic set of initiatives and tools.  To be effective they must be implemented within the proper cultural infrastructure.  That is, simply implementing one tool – feedback surveys – by itself without the support of managers and employees will strongly compromise expected outcomes.

Without a continuous improvement culture, managers may view the feedback program at best as a bureaucratic exercise or at worst as an infringement on their authority.  Employees may view the feedback surveys at best as a nuisance and at worst as a threat.  While senior management may set the goals of a program, there must be bottom-up buy-in for the program to truly succeed.

The Quality Circle Example

Consider for example Quality Circles.  In the 1970s and ‘80s, the Japanese auto companies found a competitive advantage in the quality of their cars versus US auto makers.  These companies, and especially Toyota, had a long history of continuous improvement; it was part of their corporate cultures, their corporate DNA.

One element in their toolbox was quality circles.  The idea is for employees to meet at the beginning or end of a work shift to discuss the problems they encountered and how those could be addressed.  Quality circles got lots of management (and media) attention in the US.  They were featured in cover stories in newsmagazines as the new shiny penny, the answer to quality problems!

But they weren’t.  They failed miserably in US auto plants.  Think of the culture in which they were inserted back then: auto plants whose overwhelming primary goal was to meet production quotas, staffed by union employees who did what they were assigned to do as negotiated by their union.  Now these employees were asked to identify problems and brainstorm to find solutions.  They’d been trained to not rock the boat.  The cultural infrastructure simply was not there.

Is the cultural infrastructure in place today in all agencies to support feedback tools for a continuous improvement program?  In some, no doubt; in all, I doubt it.  And imposing it by legislative decree may get cursory compliance but not an embrace of the objectives.

Conclusion

The Federal Agency Customer Experience Act of 2017 has its heart in the right place.  However, many details need to be reworked lest they cause problems for agencies currently gathering feedback or create a bureaucratic requirement that costs taxpayers’ money while yielding limited benefits – except to the companies that get the contracts to run these programs.

PS:  You might be wondering why I’m writing this article rather than telling the bill’s authors.  I did tell them, along with my congressional representatives.  An autoresponse is the most I have received back, and at times not even that.

Survey Workshop Singapore 2016

Dr. Fred will be conducting his Survey Workshop Series at the Embassy Suites in Alexandria, VA on September 25-27.  The “training class,” which is the term to use if you’re a government employee, teaches how to design and execute a survey project, covering many of the topics discussed in this article.  See our event calendar and registration page.

He will also be delivering a special, one-day questionnaire design class as part of the 930Gov.com customer service conference at the Washington DC Convention Center on 4.  The class will be an abridged version of the questionnaire design coverage in the full workshop curriculum and will have specific coverage of the implications of the Federal Customer Experience Act.