Money Grows on Trees — If You Believe the Polls

Summary: Political polls, like organizational surveys, often present conflicting results within a single poll. The reason is that the surveys were not designed to force respondents to make trade-offs among conflicting options. We see this in the New York Times/CBS News poll of swing states released on August 23, 2012, in which respondents say they want to keep Medicare as we know it yet also want to spend less on it. Clearly, something is amiss.

~ ~ ~

The NY Times, CBS News, and Quinnipiac University released a poll of swing states (FL, OH, WI) on August 23, 2012. The key finding, under the headline “Obama Is Given Trust Over Medicare,” was summarized as:

Roughly 6 in 10 likely voters in each state want Medicare to continue providing health insurance to older Americans the way it does today; fewer than a third of those polled said Medicare should be changed in the future to a system in which the government gives the elderly fixed amounts of money to buy health insurance or Medicare insurance, as Mr. Romney has proposed. And Medicare is widely seen as a good value: about three-quarters of the likely voters in each state said the benefits of Medicare are worth the cost to taxpayers.

But here’s the question as posed, taken from the detailed survey results, which thankfully the NY Times does publish:

35. Which of these two descriptions comes closer to your view of what Medicare should look like for people who are now under 55 who would be eligible for Medicare coverage in about ten years? Medicare should continue as it is today, with the government providing seniors with health insurance, OR, Medicare should be changed to a system in which the government would provide seniors with a fixed amount of money toward purchasing private health insurance or Medicare insurance. (Answer choices rotated)

Just over 60% wanted to continue Medicare as is, and about 30% said they supported changing the system.

Now, look at the results for the next question:

36. To reduce the federal budget deficit, would you support major reductions, minor reductions, or no reductions to spending on Medicare?

 Almost 60% of respondents supported major or minor reductions in Medicare (roughly 11% Major, 48% Minor).

The Times inexplicably doesn’t report this latter finding from their survey. In fact, the headline for the article could easily have been, “Strong Majority Favor Reductions in Medicare Spending.”

But how can 60% support keeping Medicare as is yet the same percentage support spending reductions? The survey design did not force respondents to make trade-offs among competing alternatives, and these conflicting results show why forcing respondents to make trade-offs is so important. Forced trade-offs eliminate the money-grows-on-trees responses we see here. When reviewing poll findings, I frequently find such conflicting results — and only selected results are reported in the write-up.
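One common way to force such trade-offs is a constant-sum question: respondents allocate a fixed budget of points across competing priorities, so favoring one option necessarily takes points from another. The Python sketch below shows the validation logic only; the budget size and the option wording are my own illustrations, not items from the Times/CBS poll.

    BUDGET = 100
    OPTIONS = ["Keep Medicare benefits as they are",
               "Reduce the federal budget deficit",
               "Hold down taxes"]

    def validate_allocation(points):
        """points: dict mapping each option to the points a respondent assigns."""
        missing = [o for o in OPTIONS if o not in points]
        if missing:
            return False, "Missing allocations: " + ", ".join(missing)
        total = sum(points[o] for o in OPTIONS)
        if total != BUDGET:
            return False, f"Allocations must total {BUDGET}; got {total}"
        return True, "OK"

    # The budget forces a choice: a respondent cannot give every option top priority.
    print(validate_allocation({OPTIONS[0]: 60, OPTIONS[1]: 30, OPTIONS[2]: 10}))
    print(validate_allocation({OPTIONS[0]: 100, OPTIONS[1]: 100, OPTIONS[2]: 100}))

The second, "everything is a top priority" response is exactly the money-grows-on-trees answer that the budget constraint rules out.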

Perhaps more puzzling is that the question as phrased is not grounded in how normal people think, that is, people who live outside the Washington DC beltway. No one is proposing that Medicare spending be reduced. At issue is the rate of growth in Medicare spending. David Wessel of the Wall Street Journal, summarizing the Congressional Budget Office analysis, says that Ryan is proposing that Medicare be 3.5% of our Gross Domestic Product (GDP), the total output of our economy, in 10 years versus 4% of GDP if the program stays as is. Currently, Medicare consumes 3.25% of GDP. With the expected growth in GDP, Medicare spending increases even under the Ryan plan.
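To see the arithmetic behind that last point, here is a minimal sketch. The 3.25%, 3.5%, and 4% GDP shares come from the figures cited above; the starting GDP level and the nominal growth rate are illustrative assumptions of mine, not numbers from the CBO analysis.

    # Illustrative assumptions: roughly 2012-sized GDP and 3% nominal annual growth.
    gdp_today = 15.5e12      # dollars (assumed)
    growth_rate = 0.03       # nominal GDP growth per year (assumed)
    years = 10

    gdp_future = gdp_today * (1 + growth_rate) ** years

    medicare_today = 0.0325 * gdp_today    # current spending: 3.25% of GDP
    medicare_ryan  = 0.035 * gdp_future    # Ryan plan: 3.5% of future GDP
    medicare_as_is = 0.040 * gdp_future    # current path: 4.0% of future GDP

    print(f"Today:        ${medicare_today / 1e12:.2f} trillion")
    print(f"Ryan plan:    ${medicare_ryan / 1e12:.2f} trillion")
    print(f"Current path: ${medicare_as_is / 1e12:.2f} trillion")
    # Even the "reduced" Ryan path spends more dollars than today; the "cut" is a
    # slower rate of growth, not a reduction in spending.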

Reducing spending on Medicare could be interpreted as:

  • Reducing per capita spending on each Medicare recipient
  • Reducing the overall spending on Medicare, that is, the total spent each year
  • Reducing Medicare spending as a percentage of GDP
  • and maybe some I’m not thinking of!

How did you interpret the phrasing in Question 36? Since the leading phrase in the question was “to reduce the federal budget deficit” my educated guess is that the second option above is what most people were thinking. That’s the only option that would actually “reduce” the deficit — as opposed to slowing the growth of the deficit.

Regardless, with such ambiguous phrasing, it’s near impossible to interpret the results except that 60% support some kind of reduction, a position that is incompatible with keeping Medicare “as it is today.”

My conclusion is that this phrasing shows how rooted the poll designers are in Washingtonian logic. Only in Washington is a slowing of growth rates in spending, even on a per capita basis, considered a “reduction.” Imagine the polling results if they had presented it accurately.

~ ~ ~

Another interesting element in the questionnaire design can be found in the question immediately preceding the Medicare change question:

34. Overall, do you think the benefits from Medicare are worth the cost of the program for taxpayers, or are they not worth the cost?

The poll found roughly consistent results across the three states, with respondents saying by about 75% to 16% that Medicare is worth the cost. That question primes the respondent to view Medicare as we know it as a good thing going into the next question about making changes to the program.

We should also note that Question 35 does not present the proper choices to the respondent. Congressman Ryan’s 2011 plan did call for offering only premium support to those currently under 55 when they reach Medicare eligibility. However, the 2012 Ryan plan offers the choice of premium support or staying in traditional Medicare. In other words, the poll did not test the actual choice on offer between the two campaigns, even though that is how the Times has pitched the results of the poll.

Further, while the headline is that “Obama Is Given Trust Over Medicare,” the poll has mixed results. While Obama is trusted more to handle Medicare by a 51%-42% margin, more people strongly disapprove of ObamaCare than strongly approve.

Perhaps the most startling result in the poll — and not reported by the Times — was the seismic shift in the Florida senatorial race. In the Times’ late July poll, Democrat Bill Nelson led Republican Connie Mack 47%-40%, while in this poll, Mack led 50%-41%.

An Example of the Impact of Question Sequencing

Summary: The New York Times and CBS News released a nationwide poll on July 19, 2012 that conveniently ignores the impact of question sequencing and presents questionable interpretations of the data. The example shows why consumers of survey data should always examine the methodology of the survey, especially the design of the survey instrument.

~ ~ ~

In a related article I looked at some polling done by the New York Times, CBS News, and Quinnipiac University. In this article, I’ll turn to a nationwide poll that the Times and CBS News released on July 19, 2012. It shares many questions with the state-focused polls, and it’s a horribly long survey at about 100 questions. My focus here is on the impact of question sequencing and how the reporters summarized and presented the findings. Again we see why you should always examine the survey instrument and the methodology of the surveyor before believing the survey’s findings — especially as presented.

About two thirds of the way through this long survey, after a series of issue questions, Question 41 asked:

41. Looking back, how much do you think the economic policies of George W. Bush contributed to the nation’s economic downturn — a lot, some, not much, or not at all?

I ask you, the reader, to think about your “mental frame” as you consider that question. In other words, what are you thinking about? To achieve a valid questionnaire, every respondent should have the same interpretation of the survey questions. So, for this question to be valid we should all have similar interpretations — and the person who summarizes the results should also share that interpretation.

I think it’s fair to say that most people would be thinking about how much they blame the recession of 2008-09 on the Bush policies. That’s when the “economic downturn” occurred, and the authors of the survey have asked you, the respondent, to “look back.”

The results of that question were:

a lot — 48%
some — 33%
not much — 12%
not at all — 6%
don’t know — 2%

Here is how those results were presented in the New York Times article, where they served as its closing thought:

Nearly half of voters say the current economic plight stems from the policies of Mr. Obama’s predecessor, George W. Bush, which most voters expect Mr. Romney would return to. (emphasis added)

Question 41 did not ask about the “current economic plight.” When you read “the nation’s economic downturn” in question 41 were you thinking of the “current economic plight?” I doubt it. (Economic growth is miserably anemic as I write this in August 2012, and the economic tea leaves are not pointing up, but currently available data do not have us in a “downturn.”) Granted, the question does not have a specific timeframe, so the authors can get away with this interpretation. I guess.

Question 42 repeated the previous question but asked about President Obama.

42. Looking back, how much do you think the economic policies of Barack Obama contributed to the nation’s economic downturn — a lot, some, not much, or not at all?

The results of that question were:

a lot — 34%
some — 30%
not much — 23%
not at all — 12%
don’t know — 1%

The reporters didn’t see fit to report these results in the article. More interesting to me as a survey designer is that Questions 41 and 42 were rotated. I would love to see the results broken out based upon which question was asked first, but the Times does not provide that detail.

Clearly, there is a sequencing effect in play.

If you were asked first about Obama’s impact on the “economic downturn,” you were almost certainly thinking more near term. It is doubtful that people were blaming the 2008-09 recession on Obama (except maybe the real political wonks who know of Senator Obama’s votes protecting Fannie Mae and Freddie Mac from proposed deeper regulatory oversight, and even then the impact would be minimal).

So hearing the question about Obama’s impact on the “economic downturn” has set a more near-term mental frame. Now you are asked about Bush’s impact on the “economic downturn.” Are you thinking about the 2008-09 recession? Certainly not as much as if the Bush question were asked first. I think it’s fair to say that people blame Bush far less for today’s economy than the economy of 2008-09.

To summarize, I am sure the scores for Questions 41 and 42 varied significantly depending upon which one was asked first. If only we were told the splits…
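Here is the kind of split analysis I would want to see, sketched in Python. The counts are entirely made up, since the Times does not publish results by rotation order; only the analysis pattern matters: compare the Bush question’s results by which question came first.

    from math import sqrt

    # Hypothetical counts (invented for illustration): respondents answering
    # "a lot" to the Bush question (Q41), split by rotation order.
    bush_first  = {"a_lot": 290, "other": 260}   # Q41 asked before Q42
    obama_first = {"a_lot": 240, "other": 310}   # Q42 asked before Q41

    def share_a_lot(group):
        return group["a_lot"] / (group["a_lot"] + group["other"])

    p1, n1 = share_a_lot(bush_first), sum(bush_first.values())
    p2, n2 = share_a_lot(obama_first), sum(obama_first.values())

    # Two-proportion z-test for an order (sequencing) effect
    p_pool = (bush_first["a_lot"] + obama_first["a_lot"]) / (n1 + n2)
    z = (p1 - p2) / sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))

    print(f"'A lot' when the Bush question came first:  {p1:.1%}")
    print(f"'A lot' when the Obama question came first: {p2:.1%}")
    print(f"z = {z:.2f}")   # |z| > 1.96 would suggest a real sequencing effect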

The proper, unbiased phrasing for the question would be,

Thinking about the current state of the economy, to what extent do you consider [Bush/Obama] to blame for the economic problems our country currently faces?

That in fact is how the writers of the article in the Times present the question, but that’s not the question that was asked. Far from it.

~ ~ ~

Now let’s look at the last phrase of the Times summary.

Nearly half of voters say the current economic plight stems from the policies of Mr. Obama’s predecessor, George W. Bush, which most voters expect Mr. Romney would return to.

According to the polling data, do “most voters expect Mr. Romney would return to” President Bush’s policies? This finding is based on question 57:

57. If elected, how closely do you think Mitt Romney would follow the economic policies of George W. Bush — very closely, somewhat closely, or not too closely or not at all closely?

The results were:

very closely — 19%
somewhat closely — 46%
not too closely — 18%
not at all closely — 7%
don’t know — %

We can debate until the cows come home and the keg runs dry about the interpretation of “somewhat closely.” But perhaps more importantly, the survey treats “economic policies” with one broad brush. Some of those policies led to the “economic downturn,” but other policies most assuredly did not.

Further, some of the respondents who believe Mr. Romney “would return to” Bush policies may not have responded in Question 41 that they thought those policies “contributed to the economic downturn.” You cannot legitimately make the statement that the authors did linking the results of Questions 41 and 57 without segmenting the data and analyzing it properly. But they did.
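To see why the linkage fails without segmentation, consider what the two marginal percentages alone can tell us. The sketch below uses the reported toplines (48% blame Bush “a lot”; 65% say Romney would follow Bush very or somewhat closely) and computes the range of possible overlap; the bounds calculation is standard, but the framing is mine.

    # Reported toplines: ~48% blame Bush's policies "a lot" (Q41) and ~65% say
    # Romney would follow Bush "very" or "somewhat" closely (Q57). Marginals alone
    # cannot tell us how many respondents hold BOTH views.
    blame_bush = 0.48
    romney_like_bush = 0.19 + 0.46   # "very closely" + "somewhat closely"

    upper = min(blame_bush, romney_like_bush)               # maximum possible overlap
    lower = max(0.0, blame_bush + romney_like_bush - 1.0)   # minimum forced overlap

    print(f"Share holding both views: somewhere between {lower:.0%} and {upper:.0%}")
    # Anywhere from 13% to 48% of respondents could hold both views. Only a
    # crosstab of the respondent-level data could pin this down, which is why
    # linking the two questions in one sentence is not legitimate.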

~ ~ ~

Bottom line. The closing statement of the New York Times article distorts what the survey data actually said, due to sequencing effects and a convenient reinterpretation of the question. The Times is making it sound as if the polling supports the contention that voters still hold Bush responsible for the current weak economy. That may be true, but these polling data, properly analyzed, do not support that contention.

Caveat Survey Dolor: “Show Me the Questionnaire”

Summary: “Show me the Carfax” is one of those lines from a TV ad that frankly gets annoying after a while. My version of it is “Show me the survey instrument.” I annoy some organizations when I ask to see the survey instrument before I’ll even contemplate the findings derived from it. To most people, examining the instrument would seem an unnecessary annoyance. In this article I will show you why you should always go to the source and verify the validity of the data generated by the survey instrument.

In fact, I had a long string of emails with a local-to-me company that published some survey findings that got national attention. I wanted to see how they had presented certain terminology to respondents that I suspected would bias how people took the survey. They declined to show me the instrument, offering a very lame excuse. I even told them I would help them with future survey projects in exchange for the publicity. But I guess their reasoning was: why let sound research get in the way of a good headline?

~ ~ ~

We’re in the political silly season in this summer of 2012 with polls coming out almost daily. Should you believe the summaries presented by newscasters or newspaper writers are true to the data collected? Should you believe the data collected are accurate? We see major differences across polls, so these are legitimate questions. While we can’t do a full audit of the polling processes, we can look, perhaps, at the survey instruments used.

In this article I examine a poll conducted by the New York Times, CBS News, and Quinnipiac University. Let me state right up front that I am pointing out the shortcomings of a survey done by two liberal news outlets. (Yes, my dear Pollyanna, the New York Times has a liberal bias. Shocking, I know.) I suspect if I dug into a conservative news outlet’s survey, I would find questionable distortions, though in ones I have examined, I have not seen validity issues with questions like the ones below.

On August 1 and 8, 2012 the New York Times published polls of six battleground states for the November election: Florida, Ohio, Pennsylvania, Virginia, Colorado, and Wisconsin. To their credit, the paper does provide access to the actual survey script used for the telephone survey and summary results by question. Most of the major polls make their survey language available. Those that don’t are probably hiding sloppy instrument designs — or worse.

The survey scripts appear identical for the questions posed on national-level topics. However, the pollsters changed their definition of the relevant population, or sampling frame, from the first batch of surveys to the second. For Florida, Ohio, and Pennsylvania they report results only for “likely voters,” whereas the Virginia, Colorado, and Wisconsin surveys reported results for some questions that included registered but not likely voters and for some that included non-registered respondents. See why it can be hard to make comparisons across surveys — and these surveys were done by the same organizations!

Much has been made of the fact that these pollsters oversampled Democrats. (That is, the self-reported affiliation of respondents as Republicans, Democrats, and independents had Democrats in greater proportions than in the registered voter base.) We could also look at the sequencing of questions and ask whether it creates a predisposition to answer subsequent questions a certain way. But here I want to focus on two questions that clearly show how the pollsters’ world views affected the questions they asked.
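Before turning to those two questions, a brief aside on the oversampling point. Weighting is the usual remedy: each respondent is weighted so the sample’s party mix matches a target distribution. The sketch below illustrates only the mechanics; every percentage in it is invented, not taken from these polls, and whether and how these pollsters applied such weights is exactly the kind of methodological detail worth checking.

    # Post-stratification weighting sketch. All shares are invented for illustration.
    sample_share = {"Democrat": 0.38, "Republican": 0.27, "Independent": 0.35}
    target_share = {"Democrat": 0.33, "Republican": 0.30, "Independent": 0.37}

    weights = {party: target_share[party] / sample_share[party]
               for party in sample_share}

    for party, w in weights.items():
        print(f"{party}: weight = {w:.2f}")
    # Overrepresented groups get weights below 1; underrepresented groups get
    # weights above 1. Topline numbers shift accordingly.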

Question 19 reads as follows:

19. From what you have read or heard, does Mitt Romney have the right kind of business experience to get the economy creating jobs again or is Romney’s kind of business experience too focused on making profits?

The pollsters present the false dichotomy of business experience as focusing on either jobs or profits — a favorite theme of some. Businesses do not choose either jobs or profits. Jobs result from profitably run businesses. The question displays an incredible lack of understanding of how businesses function — or perhaps it was purposeful. In a similar vein, we have heard that corporations are sitting on a pile of cash and are “refusing” to hire people.

~ ~ ~

The next question in the survey is:

20. Which comes closest to your view of Barack Obama’s economic policies:

   1. They are improving the economy now, and will probably continue to do so,
   2. They have not improved the economy yet, but will if given more time, OR
   3. They are not improving the economy and probably never will.

Notice what’s missing? About 1% of respondents in Florida and Colorado did. The pollsters didn’t offer a choice 4: “Obama’s economic policies are hurting the economy.” That 1% apparently took the initiative to voice the option, and to the pollsters’ credit they captured it.

Isn’t it legitimate for some people to believe that the president’s economic policies are hurting the economy? Apparently not to these pollsters. They seem to believe that Obama’s economic policies can only help the economy or be benign. Yet rational people can certainly feel that regulations, promised tax policies, and the uncertainty of Obama’s temporary fiscal and economic policies are hurting the economy.

The pollsters only provided neutral to positive response options with no negative options. A basic requirement of a well-designed question is that it provides the respondent a reasonably balanced set of response options. This is not a mistake a seasoned survey designer would make.

Another problem with the question is that “economic policies” covers a very broad area that is open to multiple interpretations by respondents — and manipulation by the writer of the findings. The pollsters would have generated more valuable, interesting, and valid data if they had structured their question as:

Consider each of the following areas of Barack Obama’s economic policies. What impact do you feel each has had upon the economy now and in the future? Greatly helped, Helped somewhat, No impact yet but will, No impact now or in the future, Hurt somewhat, Greatly hurt.

— Policy 1
— Policy 2, etc.

~ ~ ~

Is the purpose of the polls performed by major news organizations

  1. to understand the feelings of the populace,
  2. to drive those opinions, or
  3. to generate data that certain, preferred candidates can use to their advantage in the campaign?

Looking at these two questions — as well as phrasing in a July 19 poll — it’s hard to say the first, which should be the goal of responsible, unbiased researchers.

In summary, these two questions show that these pollsters bring bias to their polling. Always look at the survey instrument to sense if there’s bias in the wording and fairness in interpreting the data before accepting the findings. This caveat applies to political polls as well as organizational surveys.

~ ~ ~

So why does a business hire (or lay off) someone?

A business hires someone if they feel the long-run value delivered to the organization will exceed the fully loaded cost of employing the person. It’s really that fundamental. While it’s unlikely a company can measure the direct value to the bottom line of a single employee or even a group — except perhaps for the sales force — that is what companies decide in the budgeting process. If the cost of employment exceeds the benefit, bottom line profit decreases. Why would a company hire people if the value they bring doesn’t exceed their cost?

The counterargument may be made that companies fire people to increase profits. It is true that laying off people may increase bottom-line profit, at least in the short run. (Google, not a politically conservative company at all, laid off many at its Motorola Mobility acquisition.) If the people being laid off had costs that exceeded their benefit, yes, profit will increase. But keeping people on the payroll just for the sake of “employment” can hurt those who deliver positive value to the company.

I worked for Digital Equipment Corporation in the 1980s. The company was on top of the world in 1987, when it employed more than 120,000 people worldwide. When senior management missed the changes in the competitive market, the company still resisted layoffs until its financial health was threatened. Within a decade Digital no longer existed, and the tens of thousands of job losses greatly affected the Boston technology beltway for years to come.

More recently, look at the US car companies that employed people who literally did nothing in their “job banks.” Did that lack of focus on profit advance the bankruptcies? Most certainly.

No one’s business experience is focused on creating jobs. Entrepreneurs and business people want to build sustainable businesses by creating products and services people choose to buy. Jobs are a by-product, albeit a very important by-product, of a successful, profitable company.

Go to a thousand company websites and read their mission statements, preferably small growing companies that may not yet be profitable but are our job-creation engines. How many companies say their primary mission is to create jobs? I doubt you’ll find one.

Here’s the empirical proof that we all know. Ever heard of an established, unprofitable company that is hiring lots of people?

I recognize that was a bit of a rant, but as a business school professor I feel this idea of “jobs versus profits” needs to be challenged for the misrepresentation that it is, and it is disturbing to find it in a survey done by a professional organization.

Bolton Local Historic District Survey — A Critique

Summary: The survey critiqued here is an example of instrumentation bias, showing how to prop up an argument with survey statistics. Rather than serving as an objective collector of data to understand public opinion, the survey is designed to drive public opinion. The Bolton Local Historic District Study Committee achieves this through intentional instrumentation bias, as described below.

~ ~ ~

Some of the more fun surveys for me to critique are ones from organizations that are trying to generate data to argue some point of view. The purpose-built data are usually generated through some combination of intentional instrumentation bias (especially loaded and leading wording) and administration bias, as I teach in my survey workshop series. I’ll call them advocacy surveys. We see these types of surveys from advocacy groups and public policy organizations — and politicians!

Instrumentation Bias: A survey instrument or questionnaire that does not capture data to properly reflect the views of the respondent group contains an instrumentation bias. Many types of errors — whether intentional or unintentional — can lead to instrumentation bias.

Loaded language and leading wording that drive the respondent to particular responses are two examples of instrumentation bias.

Some of these surveys can be downright hysterical in the loaded and leading language they use. Others are more subtle, to the point where neither the respondents nor the consumers of the findings recognize the manipulation. It’s akin to lying with pre-existing statistics, except that here the data are manufactured to achieve a goal. Call it how to lie with survey statistics.

A true researcher develops a hypothesis or research question and then conducts research that generates valid, objective data to test the hypothesis and draw conclusions. That’s the scientific method. Sometimes the data do not support the hypothesis, and the researcher then looks for a hypothesis that the data suggest, which may prompt additional research to confirm. This, in fact, happened to me in my doctoral research.

In these advocacy cases, the “conclusion” has already been reached before the research is performed. The goal is to generate data to support the conclusion, and probably to avoid generating data that may confound it. I recently received one of these in my home town of Bolton, Massachusetts from the Local Historic District Study Committee.

I must admit upfront that my objectivity in critiquing this survey was challenged since it quite literally hits close to home, which I’ll explain at the end of this article. Briefly, I disagree with the underlying premise of those conducting the “study.” I present my biases here so you can filter out how, if at all, they have colored my critique. That’s being intellectually honest. I am confident in my professional objectivity in critiquing the survey practices described below.

~ ~ ~

I received a one-page, double-sided mailing early in the week of January 23, 2012 from the Local Historic District Study Committee, a group established by our selectmen. The call to action to get you to open the mailing lays out how many old structures lie in Bolton’s National Historic District. “Should these be preserved? We need your input.”

[Image: bolton-historic-district-survey]

We don’t learn how many historical structures have been destroyed or altered in the past 10, 20, or 30 years, but the direct implication is that they are threatened. Even the website to which one is directed later contains zero data about the destruction of historic properties. Only that “without a local historic district, our village center could be lost forever through future demolitions and alterations.” (emphasis added)

So is this a solution looking for a problem?

The introduction below the fold — the sheet was folded in thirds for mailing purposes, avoiding the need for an envelope — says the “Committee would like your input in establishing a local historic district… Please take a few minutes to express your thoughts on whether a local historic district is needed…” I will show you that the survey is not designed to capture input but rather to drive opinion.

To take the survey, you have two choices. First, you can type in a 55-character (sic) web address, which includes an underscore covered by an underline. This would challenge anyone who is not web savvy. But you can also find the link at a shorter web address that contains more information about the committee’s work. Could that information bias the respondent who is about to take the survey?

[Image: bolton-survey-introduction]

The web site also includes the latest draft of the proposed by-law, draft #7. That raises an interesting question.

The Study Committee is on its seventh draft of a by-law to be presented at the town meeting in May, 3½ months away, with the formal review cycle among town committees about to start for any proposed law. Why is the committee collecting town-wide input now? While they have held many open meetings, the survey purports to be an opportunity for the general citizenry to provide input. If true, the survey should have been done months ago, when alternative methods for historic preservation might have, or should have, been considered.

Second, you can complete the paper survey on the back side of the mailing and mail it to the committee chairperson. No envelope or postage is provided.

The question to ask as a survey professional is whether the administration methods will lead to an unbiased response group. The administration method here clearly presents hurdles to completion, meaning those with strong feelings are more likely to complete the survey. If so, the survey administration method is creating a non-response bias. This bias occurs when those who respond to a survey likely differ from the entire group of interest.

The survey method and the introduction also raise the question of the unit of analysis for this survey. Each household (I presume) received one copy. Yet, at our town meeting in May we don’t vote by household; we vote by individual. This is a common problem with no clean solution. I encounter it with business-to-business surveys where the company being surveyed has multiple people who interact with the company conducting the survey. Is the relationship being surveyed at the business level or at the individual level? We can make strong arguments both ways, as is the case with this LHD survey. I would have included in the instructions a comment to make additional copies for each member of the household who is a registered voter. But these issues with the survey administration are minor compared to issues with the survey instrument.

~ ~ ~

So let’s look at this survey instrument.

[Image: bolton-survey-questions-1-3]

The first three questions are awareness questions that use a binary yes/no scale. Here that may be appropriate because you either are aware of these things or not. You probably can’t have some limited degree of awareness. The designer also uses the NHD and LHD acronyms here and throughout the survey. NHD is defined at the top of the survey, but LHD is only defined in the introduction. Use of acronyms in surveys is dangerous. If the respondents confuse the meaning of the acronyms, then how valid are the data generated? I avoid acronyms in my survey design work for this reason.

Further, the second question asks, “Are you aware an NHD is a registration only?” What’s “registration only” mean? It’s very poorly worded shorthand phrasing that creates serious ambiguity for the respondent.

But what’s the purpose behind these awareness questions? My educated guess is that the question is actually meant to educate the respondent, to make known to them that houses in a National Historic District are simply registered. No restrictions on what is done with the property are part of the NHD listing. That “shortcoming” is, in fact, the goal of creating the LHD (Local Historic District).

In other words, the purpose of the question is not to measure awareness but to make people aware of the distinction between NHD and LHD. As we will see, much of this survey’s purpose is to affect the thinking of the respondent, not to allow the respondent to “express [their] thoughts.” In a broad sense that makes this survey an example of push polling.

Push polling is a practice that has recently become part of the political landscape. In a push poll, telephone calls are made before an election purporting to be a survey about the election. But the questions are all designed to impart information, typically highly negative, about one candidate. An example might be, “How aware are you that Mr. Candidate was arrested three times on drunk driving charges?”

If one were designing a survey to counter this survey, a push polling question could be, “Are you aware that the US Constitution provides specific protection for private property rights but makes no mention of collective property rights?”

[Image: bolton-survey-questions]

Now we encounter eight interval-rating questions, asking respondents their strength of agreement with these eight statements. Just in case someone misses the point, the first and last questions measure the same attribute — Importance of old homes to Bolton’s character.

Most of these statements are hard to disagree with. Yes, the town center is historically significant. Yes, architectural features and stone walls should be preserved. Yes, historic preservation affects property values. Who would disagree with those? Not me. They’re motherhood and apple pie. I will be stunned if 80% of respondents don’t Agree or Strongly Agree with those statements. That’s the evidence that the Study Committee wants to show an LHD is needed and wanted by the citizenry.

Perhaps more importantly, these questions get the respondent into a pattern of agreeing with the statements presented. Don’t discount that effect. That’s one of the validity shortcomings of this question type, known as Likert-type questions. Most of us like to be agreeable, and we can get put into that routine by asking a series of questions to which we will agree. Once into that rhythm, we’re asked whether a town committee should be appointed and empowered to impose what we just agreed should happen. The flow makes it easy now to agree with that.

Notice the language used in the questions. The language chosen conditions the respondent to view historic homes as an asset of the town collectively. Yet, you might say, aren’t these homes privately owned? But if the views of the historic homes are a collective right, then conflicts between these rights will inevitably exist. These contested rights between private property rights and collective rights must be adjudicated by government, in this case a committee established by law to control what owners do with their property in order to preserve the collective right. Pretty slick reasoning, isn’t it? And all an outcome of the survey design.

From a questionnaire design perspective I did find the fifth question puzzling — “New owners of structures in the proposed LHD will maintain the historic character of their structures regardless of whether a committee and by-law exists.” Clearly, the desired response by the study’s author is Strongly Disagree. If owners were going to maintain historical structures properly, then the issue of an LHD would be moot.

A question structured like this is normally known as a reverse coded question. Questionnaire designers put one or two reverse-coded questions early in a survey to make sure the respondent doesn’t get into a response routine just giving the same answer without reading the question. I applaud the apparent attempt to establish questionnaire validity. However, this question is positioned too late in the survey to serve that purpose. The respondent’s response pattern has already been established. Further, I suspect many people, to the chagrin of the study’s author, will check Strongly Agree because they’re in the agreeable rhythm.

Routine occurs when the respondent gives the same response to every question. Once into the response rhythm, the respondent likely doesn’t read and consider the questions fully. This effect compromises instrument validity since the respondent isn’t really answering the questions.
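For what it’s worth, here is how an analyst typically handles a reverse-coded item and screens for respondents stuck in that routine (straightliners), sketched in Python. The data and question labels are invented; “q5” stands in for the reverse-coded fifth statement.

    # Responses on a 1-5 agreement scale; "q5" is the reverse-coded item.
    responses = [
        {"q1": 5, "q2": 5, "q3": 4, "q4": 5, "q5": 2},   # engaged respondent
        {"q1": 5, "q2": 5, "q3": 5, "q4": 5, "q5": 5},   # likely straightliner
    ]

    REVERSED = {"q5"}

    def recode(resp):
        # Flip reverse-coded items so a high score always points the same direction.
        return {q: (6 - v if q in REVERSED else v) for q, v in resp.items()}

    def is_straightliner(resp):
        # Identical answers to every item, including the reverse-coded one,
        # suggest the respondent stopped reading the questions.
        return len(set(resp.values())) == 1

    for r in responses:
        print(recode(r), "flag: straightliner" if is_straightliner(r) else "ok")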

Putting aside the question structure and location, notice the intent of the question. The question presumes that it is the responsibility of new home owners to maintain their houses’ features, thus reinforcing the collective asset argument. The implicit message is that newcomers to town won’t understand that the home they just bought provides benefits to non-owners, so we need to control what they do to preserve the collective property right.

Why didn’t they ask about current home owners in the proposed district? That would have personalized the impact of the by-law to current residents, but that may have antagonized people who will cast votes in the May town meeting.

[Image: bolton-ldh-committee-question]

The last closed-ended question asks what specific expertise should be on an LHD committee. It’s well known that respondents gravitate toward the first option provided, which here is Architect. The last option is Historic District Resident. This structure is already baked into the proposed by-law on page 2. What they omit as an option is Study Committee Member, which the proposed by-law includes as a member of the LHD committee. I’ll note here that, if I have properly ascertained the addresses of the five members of the Study Committee, only one owns a home that would be affected by the proposed by-law. I did chuckle a bit at the inclusion of Lawyer as a potential member of the LHD committee. The Study Committee apparently sees the potential impact this by-law could have on the town’s litigation budget.

~ ~ ~

The above points make readily clear that the survey is designed to generate data to support a position — that historic preservation is desirable and the only means to achieve it is through the creation of a town committee with the force of law to impose its decisions.

If this were an objective survey — truly a study — of how best to preserve historical structures and how to address contestable rights, we would see other questions such as:

  • “Maintenance of all homes in the town helps maintain property values throughout the town.” (This would provide analytical contrast with the penultimate interval-rating question.)
  • “Private property rights should be greatly respected when considering the various elements that may comprise an LHD by-law.” (Note that the survey never asks anything about private property rights, only collective property rights. This omission is stunning given that the fundamental purpose of an LHD is to restrict private property rights.)
  • “Property owners whose property rights are restricted by an LHD should be compensated by lower property tax rates, shifting the tax burden onto those who enjoy the benefits of these older homes without paying the cost of ownership or maintenance of them.” (In other words, if you believe in collective property rights, put your money where your mouth is.)
  • “Rather than restrict what property owners can do with their historical property, the town should provide incentives to encourage preservation.” (Would a carrot-and-stick approach instead of a command-and-control approach be preferable, especially since there is no evidence of an imminent threat to historic structures? The proposed by-law is all stick and no carrot. Note that the by-law draft currently has a $300 per day fine and no provisions for expedited emergency repairs.)

Inclusion of these types of questions would be fair and balanced. Why not pose questions about which approach to historic preservation is more preferred? The reason is clear. Such questions might provide data that conflict with the objectives of this Study Committee.

Let’s be honest. The Committee’s objective is to implement a command-and-control system over what people who own homes in the proposed district can do with those homes. The survey’s purpose is to manufacture data to marshal a call for collective action to control private property. Those are words that command-and-controllers don’t like to use. “Preserve historic assets” sounds so much more benign and beneficial.

In most all public policy matters there are pros and cons, benefits and costs that must be weighed. To inform the public decision requires capturing information on both sides of the trade-off. The survey provides no data to elucidate such trade-off decisions. It wasn’t designed to do so.

~ ~ ~

As a history major, I also know that history is not stagnant. The houses built today will be historical structures in 100 years. Shouldn’t we preserve these assets as well, whether in the proposed district or not? Why isn’t that proposed? Because it’s easier to enact laws that initially control the behavior of a few. Fewer people affected means fewer people to react against being controlled by the collective entity. Note that no questions were asked about the preservation predispositions of current historic home owners. It’s a free pass to vote for the law if you’re not affected. No skin off my back — and I benefit. What’s not to love? Of course, with the precedent set, the next law may seek to control your property.

~ ~ ~

[Image: royal-barry-wills-cape-old]

When reading this article, you have no doubt discerned my views on the topic. I promised to explain. You may have guessed I own an historic home. I bought my Federalist-period home in Bolton 30 years ago. The original part of the house dates back to 1804. My home is not in the proposed Local Historic District, but experience shows that when command-and-control legislation is implemented, the tendency is for it to expand, not contract. In my neighborhood we have about a half dozen old homes, all being maintained and improved by their owners, just as the homes in the proposed historic district are, without the help of those who claim to know our best interests better than we do.

I bought an old house because I love old houses and I wanted a house I could renovate. My childhood home was a Royal Barry Wills cape. In my 30 years of home ownership, I have renovated most all of my house, doing much of the work with my own two hands, being very sensitive to its history. While certainly updating it, I have preserved and used materials from the house wherever practical — and some where it wasn’t practical.

[Image: royal-barry-wills-cape-new]

I have tried to make the home more handsome, and I think you would find few disagreements from my neighbors. (See nearby photos.) I didn’t need anyone to tell me what I should or shouldn’t do with my property. In fact, had my home been under the proposed law’s jurisdiction, I would have needed permission to replace the asphalt shingles on the face of the house with clapboards! Would I have bought the house if I had to run the gauntlet of an appointed commission and possibly “any charitable corporation in which one of its purposes is the preservation of historic places, structures, buildings or districts,” which is included in the definition of Aggrieved Person in the proposed by-law? Probably not.

I am concerned about the preservation of historic homes.
So, I did something novel.

I bought one. What a concept!

~ ~ ~

To close, I will admit that I considered completing the survey with answers that I knew the Study Committee would not want to see. However, that would just make me as intellectually dishonest as those who designed a data collection instrument whose data will undoubtedly be presented as an unbiased view of the thoughts of the town’s citizens.

Data Collection Form Design Issues

Summary: So, what can the American Recovery and Reinvestment Act (ARRA) of 2009 — also known as Stimulus 1 — teach us about survey questionnaire design practices? You may think nothing, but you’d be wrong. It shows the need to think through the logic of a data collection form, and its introduction, for a form that is all about demographic data.

~ ~ ~

On a recent visit to my doctor’s office I was told I needed to complete the form shown nearby. We’re all used to getting a new form to sign on seemingly every visit to every doctor’s office to meet some newly dreamed-up regulation, but this one was different.

So here’s a quiz. What is a “Meaningful Use System”?

If you know, then 1) you work in some facet of the health care delivery network, 2) you work in the Department of Health & Human Services, or 3) you’re a pitiful wonk who needs a life. I had to do quite a bit of online digging to figure it out. As I learned on the HHS site, “The HITECH portion of the American Recovery and Reinvestment Act (ARRA) of 2009 specifically mandated that incentives should be given to Medicare and Medicaid providers not for EHR adoption but for ‘meaningful use’ of EHRs.” [For you non-wonks, EHR stands for Electronic Health Record. Unfortunately, the link to that description is now gone.]

Got that?

So the introduction to this data collection form entices me to provide the information by stating:

New federal guidelines effective January 2011 require our electronic health record system to be certified as a “Meaningful Use System”. In order to meet meaningful use guidelines, XXX Medical Associates is required to collect additional demographic information.

In surveys I design, I always soften the requests for demographic information since most people value their privacy and are hesitant to provide personal information without good cause. Does the introduction here inspire me to want to complete the data collection form? Heck no. Then, rather than offering a “Prefer Not to Say” option, the form labels me as “Refused to report” if I want to protect my privacy.

[Image: meaningful-use-system-survey]

Then I am asked to “kindly complete the information below for you and all family members…” The “Required Demographic Information” is Race, Ethnicity, and Language. Some of you will disagree with me here, but I am one of those who resents being asked these questions. I believe in a society based on merit where superficial characteristics are irrelevant. Even worse, I resent being told I must provide the information. Should I choose not to provide it, I have to check “Refused to Report.” I am a marked man. One can imagine a phone call for such a choice!

Also, look at the logical disconnect in the form. I have to report this information for all my family members, who are to be identified at the bottom of the form, but there’s only one set of check boxes. The structure of the form presumes a homogeneous household! This is America. We have many households of mixed race, ethnicity, and language. What to do if you have a heterogeneous household?

Note also there are no instructions such as “check all that apply” or “check only one.” By convention with checkboxes (as opposed to radio buttons), I should check all that apply.

Let’s accept that there’s a good reason for this information beyond some bureaucratic carrot and stick. So, how does this information enhance this medical group’s EHRs to become a “meaningful use system”? We could conjecture on this. Perhaps epidemiological studies will examine statistical associations between medical conditions and demographic profiles.

But if that were true, then explain the degree of specificity requested, especially for ethnicity, which sees Americans as Hispanic/Latino — and everyone else. Wow. That is a truly bizarre way of segmenting America’s ethnicity. We have great specificity for some races that are single-digit percentages of the population, but the majority of Americans will fall in one lump. It’s hard to see how these data would be used in medical research. Occam’s Razor says their use is for diversity assessment.

I was told that the form was a creation of the federal government. I don’t know if that’s true, but I wouldn’t be surprised. I am one of those who is not thrilled at the prospect of the inevitable barrage of unintended consequences from the Patient Protection and Affordable Care Act (PPACA) and ARRA. If this small window into the acts is any indication, I am even less thrilled. But at least we learned something about survey design and data collection form design.

Lost in Translation: A Charming Hotel Stay with a Not-So-Charming Survey

Summary: Charming, posh hotels may not have charming surveys if those surveys have been designed by people lacking significant survey questionnaire design experience. This article reviews an event survey for a stay at a hotel that is part of the Relais & Châteaux Association. We see many shortcomings in the survey design, some due to translation issues. This article is a collaboration with OmniTouch International’s CEO, Daniel Ord, who personally experienced the survey.

~ ~ ~

Most people, including those of us who are proud to be professionals in the Customer Service field, would assume that the more luxurious the brand, the more likely the customer survey processes are to represent the pinnacle of achievement. Unfortunately, paying more money (in this case for a luxurious hotel stay) did not equate to a superior survey program. We’ll open with some background and then look at the survey.

Background

Over the year-end holidays, Daniel and his spouse decided to stay at the same lovely restored castle in Germany where they had spent their honeymoon. The castle-cum-hotel is part of the Relais & Châteaux Association, an exclusive collection of 475 of the finest hotels and gourmet restaurants in 55 countries.

Daniel booked the stay online from their home in Singapore. The communication and service were impeccable. In fact, the Receptionist indicated that if they wanted to enjoy the Gourmet Restaurant, they should change the dates of the stay to ensure that the restaurant would be open. So Daniel moved their arrival date one day later than their original plan — and smiled at the good fortune.

On the morning of checkout, December 23rd, Daniel received an email invitation from the Association to comment on the stay at the hotel that had commenced on the evening of 21st December. However, he had changed the reservation to the night of 22nd December and, in fact, had not even checked out yet.

So given how wonderful the stay had been up to that point, they were a little surprised at the perceived “urgency” to complete a survey before the experience was even complete! Obviously, this was a process or administrative error, but Daniel’s industry-based customer-service mindset kicked in, and he wondered how the reservation system could be “correct” but the survey invitation timing “incorrect”. They decided to complete the survey only upon leaving the property, giving it proper attention at their next destination.

The Online Survey Experience

[Image: relais_invitation]

Before even getting to the survey itself, the survey invitation (see nearby) contains some odd wording and quite frankly is off-putting. A key purpose of the invitation is to motivate the respondent to take the survey. This invitation doesn’t pass that test.

  1. Look at the opening sentence. “As far as we know, you recently stayed in one of our properties” on such-and-such date. Daniel’s initial reaction was, “You are darn right I stayed at this property — and I have the American Express bill to prove it!” Being greeted by name was a positive, but the next wording was very odd. Shouldn’t they know? If they wanted to confirm the information on record, then they should have just asked for a confirmation. Perhaps this was an issue of translation.
  2. The second line makes you really wonder. “If you had to cancel this reservation, we kindly ask you to ignore this message.” Daniel’s gut reaction, “If they don’t even know if I was an actual ‘guest’ then I am not very motivated to tell them how to improve.” It’s pretty clear that their reservation system and survey system are not tightly linked, but it leaves the guest wondering how organized they are.
  3. The third line indicates that they conduct “quality inspections at regular intervals” — but what is unclear to Daniel, the customer, is whether he is part of this quality inspection process or whether this refers to inspections done by Association inspectors. This phrase raised more questions in his mind than it answered.

Only in the last paragraph of the survey invitation does the Association (finally) state that “Your comments are an integral and fundamental part of our quality approach.” Ah, now, after reading through the entire invitation, Daniel finally understood where he fit into the picture.

Now onto the Survey Itself!

First of all, notice some graphic design features (which admittedly are hard to grasp from the individual screen shots here). Sections are blocked off with a gray background, which is a nice design touch to orient the respondent. But the opening title, “Your Opinion,” is followed immediately by a section on “Your Stay,” an odd juxtaposition of the two headings. More importantly, “Your Stay” solicits basic details of the stay, not opinions. Did anyone proof the layout?

[Image: relais-survey-your-opinion]

The “Your Stay” section requires confirmation of place and date details for the stay, both auto-filled but editable. Given these fields, the survey invitation certainly could be reworded. Note that they ask for the guest’s room number. Room Number and Number of Nights should be in the hotel’s transactional data, so why ask for them here? Daniel understood why they wanted the room number — to address any stated issues with the specific room — but he had a gut-level reaction to what felt like an invasion of privacy. The questions got Daniel thinking in ways that run counter to the goal of getting honest feedback. In other words, they activated a response bias. As a rule, demographic questions should go at the end of the survey for exactly this reason.

Another translation and/or design issue can be seen with the “You stayed” question. We like the complete-the-sentence question structure used here, but the structure falls apart with “Others”. Besides, what does the “Others” option mean, and why is there no interest in asking for details? We can infer the demographic groups of interest to the Association, but it seems odd that other groups are not of interest.

The next question appears to be another translation issue. “How did you get to know this property?” The smart aleck in us wants to respond, “By staying at the hotel, of course.” Much better phrasing would be, “How did you learn about this property?” Again, look at the checklist. It is very focused on Relais & Châteaux information sources. Do these options meet their research objectives? We cannot answer that, but we question it.

Next they ask, “Number of stay(s) including this one in a Relais & Châteaux?” Previously, they’ve used the term “properties” without reference to Relais & Châteaux. “Properties” should be included here to avoid confusion. However, this is another data point that should be in their customer records. Daniel said he was tempted to enter a larger number so his responses would carry more weight.

In the next section they’re asking for “Your Rating” on aspects of the stay. First, note the scale. The difference between Very Poor and Fair is huge. If you thought something was poor, how would you score it?

[Image: relais-survey-your-rating]

Next, look at the selection of items on which they want feedback. What’s missing are the various customer touchpoints, e.g., making the reservation, check-in at reception, concierge, check-out. They apparently assume that their service is so consistently good that there’s no need for a feedback check on its quality beyond the very broad “Hotel – Service” item.

The layout here is also puzzling. There appear to be categories (Courtesy, Character, Charm, etc.) and then in most places one or two attributes to be measured. We also again see some apparent translation issues that create ambiguity. “Calm” of the location and of the property is an odd phrasing, as well as “Charm” of the “decoration”, “Leisure”, and even “Cuisine”. We are unclear about the distinction between “Calm of the location” versus “Calm of the property”. What is meant by “Maintenance”, which has an industrial tone, and “inside” and “outside” of what – one’s room or the hotel? “General impression” of what?

We also see double-barreled questions — asking two questions at once. “Character of the architecture” is different from “character of the location”, depending on your interpretation of “location”. And what if there is more than one restaurant, as was the case here, and you ate in more than one?

Overall, many of the questions are very unclear. This section reads like an initial rough draft badly in need of revision cycles and pilot testing.

They end the section asking, “Do you intend to purchase Relais & Châteaux gift certificates in the course of the next 12 months?” with a checkbox as a vehicle for the respondent to indicate something, but we’re not told what. “Yes” needs to be put next to the box. At best, it is an odd placement for the question. Is it meant as an overall indicator of satisfaction with the property? If so, it seems like an odd one, especially given the phrasing. Usually, we ask future intent questions on a scale using the phrasing, “How likely are you to…” Daniel felt strongly that this question would be best in a follow-up survey, not in the feedback survey.

[Image: relais-survey-comments]

Next we have one comment box with no phrasing to push for improvement suggestions or the like. Remember that a reservations agent made a very helpful suggestion. Without a prompt, such as, “Did anyone deliver exceptional service to you?” that aspect of the transaction might be forgotten when providing comments.

Then, we encounter a bizarre and horribly phrased statement preceded by a checkbox. “I do not agree to the passing on of my comments in anonymised form to third-party websites (i.e. your personal information will not be passed on, only your comments about the property)”. Please read that two or three times and see if you can fathom the impact of checking or not checking the box. What if 1) you do not want your comments passed on to third-party sites with attribution and 2) you do not want your personal information passed on either? Never use double-negative sentence construction. See why?

relais-contact-info

Next we encounter a section box with no title. Why? It has a whole series of required fields for all your personal information. You do NOT have the option of submitting this survey anonymously, and after that highly ambiguous preceding question, the likelihood of closing the web browser window without submitting the review is now extremely high.

Why would they ask for all of this? The survey is not anonymous; Daniel’s name was readily visible in the URL for the survey screen. And what is a “5C code”? Daniel knows. Fred has no idea. Never use terminology that some survey respondents may not understand.

At the bottom of all this, as if they are trying to hide something, they finally get around to telling the respondent that all fields marked with an asterisk are mandatory. That should be at the beginning.

The end of the form has disclaimers about the use of personal information. But again, these statements can create more confusion given the earlier question.

In summary, you can see that even a simple, one-screen hotel-stay survey requires a degree of rigor if you’re going to develop meaningful, actionable, accurate data — and not tick off your customer! The designers of the survey instrument have introduced a tremendous amount of instrumentation bias and activated response bias that compromise the validity of the data collected.

An Honest Survey Invitation?

Summary: A survey invitation makes the first impression of a survey program on those in your respondent pool. A good impression is critical to getting a good survey response rate, but the invitation should also present other critical information to the potential respondent. Most importantly, the invitation should be honest. The elements of a good survey invitation are presented in this article in the context of reviewing a poor one.

~ ~ ~

Sometimes surveys just start off wrong, that is, with a misalignment between the survey invitation and the survey instrument itself. Usually this occurs due to sloppiness: the survey designer didn’t work through the details, or perhaps the survey instrument was revised after the introduction had been written. However, the misalignment may also be intentional, meant to persuade the invitee to take the survey. I’ll illustrate the misalignment with a real example.

Why should the invitation align with the survey instrument? Well, because it’s an invitation. (d’oh!) The primary purposes behind the invitation are to:

  • Entice the recipient of the invitation to move along through the process and actually take the survey. In that sense it is a marketing document for the survey program. As the saying goes, you never get a second chance to make a good first impression. Later I’ll list the points that should be included in an invitation.
  • Set the “mental frame” of the respondent. We tell the respondent, “This survey is about…” to get them thinking about the topical area.

What if the invitation doesn’t align? A person may take a survey that wasn’t meant for them, or may waste time before realizing that the survey is irrelevant – and may be turned off to any future invitations.

What brought this topic to mind was a survey invitation I got from HomeAway.com. My wife and I own a waterfront rental property in the state of Maine, and we have advertised it through HomeAway for slightly less than a year. HomeAway is one of the leading sites for rental home listings, and the parent company has recently bought up several other sites to expand its reach beyond the US to a worldwide presence.

homeaway-harpswell

Above is the email invitation I received. Let’s analyze it. In the process, I’ll touch on the survey itself.

Right off, note the date. Who in their right mind launches a survey on December 28? We want to launch a survey when we’re most likely to get some of the respondent’s mindshare. One could argue that the week between Christmas and New Year’s is a slow week, so people are more likely to see the invitation and have time to take the survey. However, a significant percentage of people are on holiday that week and will be doing only minimal email checking. This is a business-to-business survey invitation, so I feel it should be launched when business is active.

In fact, in the US the whole stretch from mid-November (prior to our Thanksgiving holiday) to mid-January is a time when it’s tough to get people’s mindshare. I always recommend that clients avoid this period for launching surveys other than ongoing transactional surveys. Why launch a survey with an immediate handicap of a lower response rate?

We may also be introducing some type of sample bias by launching a survey in this time frame. Sample bias occurs when something about our survey administration makes some members of our target population less likely to respond to the survey invitation. This bias can make our statistical results misleading even though we have enough data points for reasonable accuracy.
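To make the bias concrete, here is a minimal sketch in TypeScript. The segments, response rates, and satisfaction scores below are entirely hypothetical (they are not HomeAway data); the point is only that the group less likely to respond over the holidays gets under-weighted in the estimate, no matter how many responses come back.

```typescript
// Minimal sketch of sample bias: hypothetical segments, not real data.
interface Segment {
  name: string;
  populationShare: number; // fraction of the target population
  responseRate: number;    // fraction of this segment that responds
  avgScore: number;        // this segment's true average rating
}

const segments: Segment[] = [
  { name: "Owners away for the holidays",   populationShare: 0.4, responseRate: 0.05, avgScore: 6.0 },
  { name: "Owners checking email as usual", populationShare: 0.6, responseRate: 0.20, avgScore: 8.0 },
];

// True population average, weighted by population share only.
const trueAvg = segments.reduce((sum, s) => sum + s.populationShare * s.avgScore, 0);

// Survey estimate, weighted by who actually responds.
const respondingShare = segments.reduce((sum, s) => sum + s.populationShare * s.responseRate, 0);
const surveyAvg = segments.reduce(
  (sum, s) => sum + (s.populationShare * s.responseRate / respondingShare) * s.avgScore,
  0
);

console.log(`True average rating:    ${trueAvg.toFixed(2)}`);   // 7.20
console.log(`Biased survey estimate: ${surveyAvg.toFixed(2)}`); // about 7.71
```

More responses would narrow the margin of error around the wrong number; they would not remove the skew.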

Now let’s look at the wording of the invitation.

We would like to get your feedback about HomeAway.com in order to improve the value that we provide to you and other property owners… The survey should take about 20 minutes of your time and you will be entered into a drawing to win one of five $100 Amazon.com gift certificates if you qualify.

They identify the group doing the survey and end with

This survey is for research purposes only and is not a sales or marketing survey. Thank you very much for your feedback.

As a survey designer, I was impressed, though some critical elements are missing. They provided several good “hooks”: I benefit if their site is better, and I might win one of the $100 gift certificates in the raffle. However, my guard did go up when I read the “if you qualify” phrase. After giving the survey some gravitas by indicating that they had contracted a research company to conduct it, they assure the reader that the survey is not being used as a ruse for a sales pitch. This struck me positively.

Then I clicked through to the survey. Each of the opening screens posed demographic questions:

  • How long have I owned the vacation property?
  • How long have I rented the vacation property?
  • Who manages the property — the owner or a property manager?
  • Who’s involved in marketing decisions?
  • How do I market the property? They provided a checklist of marketing methods.
  • Which of these online rental sites am I familiar with? They provided a checklist of rental sites.

As I went through each screen of probing demographic questions I became more and more suspicious — and ticked off. I answered “none” to that last checklist question even though I had heard of a few of them. The next screen said:

Those are all of the questions we have for you. Thank you for your participation!

Wow! Talk about a let-down and being left with a feeling of being unimportant!!

Let’s examine the contradiction between the invitation and the survey instrument. But first, when I take a survey I try to turn off my left-brain analytical side and turn on my right-brain gut-reaction side. I try to “experience” the survey before looking at it analytically. After all, this is how the typical respondent will come to the survey process.

First, as a rule, demographic questions should go at the end of the survey instrument. Why? Demographic questions are not engaging; they are off-putting. After getting me excited about the opportunity to “provide feedback,” I got hit with a bunch of questions that didn’t excite me at all — just the opposite.

Second, they never said why they needed all these demographic questions answered. Some explanation should always be provided with demographic questions to help allay concerns about such personal questions. Sometimes we do need to pose one or two demographic questions at the beginning of a survey to qualify the respondent or to branch the respondent to the appropriate set of questions. Reichheld’s Net Promoter Score® methodology does this, in fact, posing different questions to promoters than to detractors.

However, if it’s not an anonymous survey, which this wasn’t, then they should have most of the demographic data in their files to “pre-qualify” a respondent. Apparently, they don’t. Or… this wasn’t really a feedback survey. (More on that point in a minute.)

This gets to my third issue. Note that the invitation contains no assurance of confidentiality or anonymity with the information I will provide. I knew that the survey was not anonymous because of the URLs, but the typical invitee may not know this.

Clearly, the purpose of this battery of demographic questions was to qualify me. But did I qualify? They never told me! That’s my fourth point among the shortcomings of this survey design. I am a customer of HomeAway. Don’t they owe me the professional courtesy — or common decency — of telling me whether I “qualified”?

Instead, I got, “Those are all of the questions we have for you. Thank you for your participation!” While that may be an honest statement, it’s a blatant half-truth. Do you really want to leave a customer feeling unimportant? That’s what this survey design did. A survey can be — and should be — a bonding opportunity with a customer, not an opportunity to weaken the bond.

Fifth, the barrage of demographic questions activated a response bias on my part. Response bias is the bias a respondent brings to the process, brought out by the questionnaire or the administration procedure, and it leads to untrue answers from the respondent.

Many types of response bias exist. Here it’s what I call concern for privacy: the number of questions about my business practices, combined with no promise of confidentiality, made me leery of the surveyor’s motives.

Remember, they promised me in the invitation that the survey was not for sales or marketing purposes, yet look at the questions they asked. My guess is that had I qualified for the survey, I would have been asked to compare HomeAway to other home rental sites. We can have a long discussion about the nuanced difference between a market research survey and a marketing survey, but one thing I know for certain: this was NOT a feedback survey. A customer should not have to “qualify” to provide feedback.

That’s my sixth — and most important — point. The invitation was not truthful. We want the respondent to be honest, forthright, and candid with us. Shouldn’t we demonstrate those same principles to the respondent? Will I ever waste my time taking another survey from HomeAway? Would you?

This isn’t the only flaw in HomeAway’s survey “program.” A week or two after adding HomeAway to my advertising program, I got a survey invitation. I was impressed. I thought the survey was part of an onboarding process that would ask about my experiences as a new customer. (Constant Contact does a wonderful job of onboarding.)

Alas, it was not. I am sure the survey was sent to everyone else who advertised properties on their site. Since I was new, most of the questions were just plain irrelevant to me at that point. Worse, having just set up my listing, I was loaded with constructive feedback — positive and negative — that could have improved their site. Their loss is my survey-article gain.

What should be addressed in the invitation?

  • Benefit statement to the respondent. Why should the respondent give you their time? This is critical.
  • The purpose of the survey. This helps set the respondent’s mental state.
  • Who should be taking the survey.
  • An estimate of the time to take the survey. It should be a real estimate, not a low-ball lie. I do not state the number of questions, unless it is quite low. Question counts are intimidating.
  • Some statement about the anonymity — or lack thereof — for the person taking the survey along with a promise of the confidential handling of the information provided. This is especially important if using a third party for the survey process. If one is conducting “human factors research” for medical purposes, by law in the US all this must be disclosed. We shouldn’t need laws for this, and it should be part of all survey research.
  • Who is conducting the survey? If you are using a third party, the invitation should come from your organization’s email system. If the invitation is going to come from the research agency or through a survey tool’s mail system, then you need to send an email prior to the invitation to validate the third party — and ask that the invitee set the necessary permissions for the mail from the third party to get through email filters.
  • Offer of an incentive if you choose to do so. If it’s a raffle, you should help show that the raffle is real by providing a link to a web page where the names of people who have won previous raffles are listed. Protect their privacy by listing just their name and their town. I checked the HomeAway site and found no such page. Perhaps I missed it. B&H Photo does a nice job of this as did United Airlines.
  • Contact information for someone who can clarify questions about the survey.
  • An opt-out option for future surveys. The footer of HomeAway’s invitation has an Unsubscribe option, but is that for all of HomeAway’s emails or just for their survey invitations? Since I am a customer, I do want to receive relevant emails.

The real challenge is to cover these points succinctly. Since most of us view email in a preview pane, you want the “hooks” to be visible there. Don’t fill the preview pane with the logo that marketing tells you you must use to help brand the organization.
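To pull the checklist together, here is a minimal sketch of how an invitation could be assembled so that every element is covered and the benefit, purpose, and time estimate land first, where a preview pane will show them. The field names and example values are hypothetical, and the sketch is not tied to any particular email tool.

```typescript
// Minimal sketch: assemble an invitation from the checklist elements.
// All field names and example values are hypothetical.
interface InvitationSpec {
  benefit: string;            // why the respondent should give you their time
  purpose: string;            // what the survey is about
  audience: string;           // who should take it
  minutesToComplete: number;  // an honest time estimate
  confidentiality: string;    // anonymity / confidential-handling statement
  sponsor: string;            // who is conducting the survey
  incentive?: string;         // optional, e.g., raffle with a link to past winners
  contact: string;            // someone who can answer questions about the survey
  optOut: string;             // how to opt out of future survey invitations
}

function renderInvitation(spec: InvitationSpec): string {
  // The "hooks" go first so they are visible in an email preview pane.
  const lines = [
    spec.benefit,
    `${spec.purpose} It should take about ${spec.minutesToComplete} minutes.`,
    `Who should respond: ${spec.audience}`,
    spec.confidentiality,
    `Conducted by: ${spec.sponsor}`,
  ];
  if (spec.incentive) lines.push(spec.incentive);
  lines.push(`Questions? Contact ${spec.contact}.`, spec.optOut);
  return lines.join("\n\n");
}

console.log(renderInvitation({
  benefit: "Your feedback will directly shape how we improve the service you use.",
  purpose: "This survey asks about your experience listing your rental property.",
  audience: "Property owners who manage their own listings.",
  minutesToComplete: 10,
  confidentiality: "Your answers are confidential and reported only in aggregate.",
  sponsor: "Example Research Co., on behalf of YourCompany",
  incentive: "Respondents are entered in a raffle; past winners: https://example.com/raffle-winners",
  contact: "surveys@example.com",
  optOut: "Reply \"no more surveys\" to opt out of future survey invitations.",
}));
```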

Writing a good invitation isn’t rocket science. It’s a combination of common sense and common decency with some marketing flair thrown in for good measure. But don’t let the marketing people use flair to cover the truth.

An Insulting Yahoo Merchant Survey

When training people to design better surveys in my survey workshops, I walk these prospective surveyors through the process of putting together a survey along with the elements of a survey instrument. Then there are good and bad practices in piecing together the different elements, and I try to create a sense of good practice by looking at bad examples. But sometimes helping people design better surveys is simply a matter of applying basic common sense — and The Golden Rule. Let me show you an example.

Last year I bought some sunglasses through a Yahoo storefront, and I received an invitation to take a survey. See the nearby email invitation. The invitation is okay, except that they use the odd phrasing of “placing a vote”, and the very open line spacing could keep critical information from being visible in the preview pane. Later, you’ll see why “The Yahoo! Shopping Team” is a misnomer in my book.

yahoo-survey-invitation

The survey itself was quite short and pretty straightforward. See the screenshot below. It’s not exactly how I would lay out a scale, but I think most people would understand how to read the scale — even without instructions. However, there are some problems with the items one is being asked to “vote” on.

First, what exactly does “Ease of Purchase” mean? Is that the usability of the website in placing an order or could someone interpret it as including payment options, which is not asked? If it is website usability, then we have a respondent recall issue. I got the survey invitation a full two weeks after I had placed the order. I go to dozens of websites every day. How can I recall the experiences with this website that far after the fact? If it was lousy, I’d probably have remembered that, but if it was really lousy, I probably wouldn’t have completed the purchase! The validity of the data from this question is suspect.

Then there’s “Customer Service”. What does that mean? I didn’t interact with anyone personally, so in my mind I didn’t experience any customer service. I left the question blank. Notice the legend in the upper right corner of the screen that denotes a red asterisk as indicating a required entry. Only the “Overall” question is required — so I thought.

yahoo-merchant-survey-questions

I filled in some comments and clicked submit.  Here’s the screen I got next.

yahoo-try-again

Huh? “Please select a rating” for what? I completed all the required questions, didn’t I? Apparently, all the questions were required. How nice of them to indicate that on the survey screen. Didn’t anyone test this incredibly simple survey? Apparently, they couldn’t be bothered. If this is a proxy for how much care and concern Yahoo! Merchant Services puts into its business operation, I wouldn’t want to be their customer.

More importantly, who was the twerp who chose the language “Try Again” — and who was the moronic quality assurance person or editor who approved this language? How totally demeaning to a customer. “Try Again.” Worst of all, the reason they wanted me to Try Again is entirely the fault of Yahoo’s survey designer! Why didn’t they just say, “You stupid idiot. Don’t you know how to fill out a silly survey form? A few cards short of a full deck, eh?”  Really, I have never seen such insulting language in a survey design, and I sample a lot of surveys.

Lesson: Part of the reason we do feedback surveys is to show a concern for our customers’ view. This sloppy survey design does the opposite. It would appear that their intention is to antagonize customers. When we issue instructions to our respondents, we need to be nice — yes, nice! (What a concept.) The respondents don’t work for us, in most cases. They are doing us a favor.  Treat them with courtesy and explain things nicely. Usually, I see bad phrasing in the request for demographic information.

But wait! It gets worse.

When I clicked on “Try Again,” here’s the screen I got.

yahoo-merchant-survey-questions

No, that’s not an error. All of my entries, including my typed comments, had been blanked out! The button really should say, “Start Over, You Idiot.” (Okay it really should say, “Start Over From Scratch, Because We At Yahoo Are Lazy Idiots And Don’t Care If We Inconvenience You. We Were Told To Do A Survey, Not To Do It Well.”)

Want to guess what I wrote in the comments field this time? Yes, feedback on the survey, not on my purchase. Of course, no one contacted me — and I did give them my contact information. Would I ever take another Yahoo Merchant Survey? Of course not, and I doubt anyone with my experience would either. The design of this survey program has thus introduced an administration bias into the data set.
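Neither of the two failures here (unmarked required questions and a form that blanks the respondent’s entries) is hard to avoid. Below is a minimal sketch of a submit handler, using assumed logic and hypothetical question ids rather than Yahoo’s actual code, that names every required rating left blank and re-renders the form with the respondent’s entries, comments included, left intact.

```typescript
// Minimal sketch of required-field validation that does not punish the respondent.
// Question ids are hypothetical; this is not Yahoo's implementation.
interface SurveyResponse {
  ratings: Record<string, number | null>; // question id -> rating, null if left blank
  comments: string;
}

const REQUIRED_QUESTIONS = ["overall", "ease_of_purchase", "customer_service"];

function validate(response: SurveyResponse): string[] {
  // One specific, polite message per required question left blank.
  return REQUIRED_QUESTIONS
    .filter((q) => response.ratings[q] == null)
    .map((q) => `Please select a rating for "${q}" so we can record your feedback.`);
}

function handleSubmit(response: SurveyResponse): void {
  const errors = validate(response);
  if (errors.length > 0) {
    // Re-render the same form: keep the ratings and comments the respondent
    // already entered, mark every required question, and show the specific
    // messages. Do not blank the form and tell the customer to "Try Again".
    console.log(errors.join("\n"));
    return;
  }
  console.log("Saving response:", JSON.stringify(response));
}

// Example: a respondent who, quite reasonably, skipped "customer_service".
handleSubmit({
  ratings: { overall: 9, ease_of_purchase: 8, customer_service: null },
  comments: "Never spoke with anyone, so I can't rate customer service.",
});
```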

Also at issue here are process metrics. This survey “system” appeared homegrown. Do you think Yahoo tracked how many people quit the survey in midstream, that is, how many people clicked “Try Again” and then walked away? Or, if they did complete the survey a second time, how many responses were devoid of comments? If Yahoo did track this, I am certain the percentages would have been very high.

Lesson: Always look at the data about where people drop out of a survey. It tells you where there’s a problem with the instrument.
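Here is a minimal sketch of the kind of drop-off report I have in mind. The response log and field names are made up; the idea is simply to count how far respondents get before they quit. A sharp fall at one question, or at the submit step, points straight at the problem.

```typescript
// Minimal sketch of a survey drop-off report. Data and field names are hypothetical.
interface ResponseRecord {
  lastQuestionAnswered: number; // 0 = opened the survey but answered nothing
  completed: boolean;           // true if the response was actually submitted
}

function dropOffReport(records: ResponseRecord[], totalQuestions: number): void {
  const started = records.length;
  for (let q = 1; q <= totalQuestions; q++) {
    // How many respondents got at least as far as question q?
    const reached = records.filter((r) => r.completed || r.lastQuestionAnswered >= q).length;
    console.log(`Question ${q}: ${reached}/${started} (${((reached / started) * 100).toFixed(0)}%) still answering`);
  }
  const finished = records.filter((r) => r.completed).length;
  console.log(`Submitted: ${finished}/${started} (${((finished / started) * 100).toFixed(0)}%)`);
}

// Example log: most people answer everything but many never manage to submit --
// the pattern a "Try Again" button that blanks the form would produce.
dropOffReport(
  [
    { lastQuestionAnswered: 4, completed: true },
    { lastQuestionAnswered: 4, completed: false },
    { lastQuestionAnswered: 4, completed: false },
    { lastQuestionAnswered: 3, completed: false },
    { lastQuestionAnswered: 4, completed: true },
  ],
  4
);
```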

Bottom Line: Whoever owned the survey program for Yahoo Merchants should have been fired. The purpose of a survey should not be to tick off customers. Yet, that’s what the design and execution indicates was an implicit goal of this survey program.

The Poetry of Surveys: A Respondent’s Survey Design Lessons

I never really thought being a survey designer would be a topic for the cocktail hour with friends and strangers. I’d be exaggerating if I said it were, but I am surprised how often, in first-time encounters, people want to talk about surveys they have taken. The last stage of a survey project is the pilot test with people from the actual respondent group, and these conversations serve as learning moments on a par with pilot tests, particularly in the area of respondent burden.

I was answering telephones for the pledge drive for my local NPR jazz and folk station, WICN, and I wound up in a conversation with another volunteer. When she learned what I did for a living, she immediately — and passionately — talked about her experiences with a survey from Poetry Magazine.  She is a subscriber and, as a poet, is very passionate about the periodical. “I want them to succeed.”  In particular, she described things she likes and dislikes in surveys. Here are her lessons:

  • One to two screens at maximum. “Three screens and I’m gone.” How many of you have “brief” transactional surveys that go on for five or more screens? Do so at your peril.
  • The survey should be easy to answer. She prefers yes/no questions, but 5-point rating scales are okay for her. Forget the elaborate rating scales with confusing anchors, for example, “somewhat this versus somewhat that…”  That’s a real turn-off for her. (I personally am not a fan of yes/no questions, unless they are truly binary in nature and not scalar, but I get her point.)
  • Don’t force her to write comments, and don’t ask for comments with every question. Be nice in asking for follow-up comments, but beware of asking for too many. You’ll likely get none.
  • Demographic questions should not be intrusive and should be few in number. In particular, the income question is a hot button. “I may go on with the survey, but I’m wary.” In my survey workshops I teach that demographic questions put up a wall between the survey designer and the respondent. Here was that lesson vocalized.

She communicated these points to the magazine as suggestions on how to change their survey.

Then she told me a story that demonstrates the value of developing a meaningful rapport with your customers — or whoever your respondent group is. One of her copies of Poetry Magazine literally fell apart at the binding. These are not magazines that you just throw away; they are meant to be saved. She wrote to the editor in cute, poetic verse to complain, and she quickly got a response from the editor with a “care package”. “I love them even more. The rapid response made me feel fussed over.” This experience reinforces the power of Service Recovery and the value of encouraging your customers to complain, hopefully as nicely as she did.

As a survey designer, what’s the main lesson here? Many of those in your respondent group overtly think about your survey design and have strong, cogent feelings about its impact upon them. They may not use the term “respondent burden”, but they know it when they experience it.

Listen to your respondent group about your survey. This should be done during the design stage, at minimum during the pilot testing. You may be shocked at how much you learn, which in turn will impact how much valid information you learn from your survey program. To paraphrase Yogi Berra, “You can learn a lot by listening.”