MassMoves Funding Preference

MassMoves Report – Misleading Research on Transportation Priorities in Massachusetts

The MassMoves report on transportation priorities in Massachusetts does not have a sound methodology and should not be used as the basis for decision making. It’s a wonderful example of how to concoct a research program to deliver the “findings” the sponsor wants. Shame on the Massachusetts State Senate.

Biased Survey Samples

Meaningful survey results require a valid questionnaire and an unbiased administration of the survey. The CTE brain-trauma study of NFL players, for example, used biased survey samples, which clouds the conclusions drawn from the study.

Survey Sample Selection: The Need to Consider Your Whole Research Program

A counterintuitive outcome of a survey project is that a survey’s results are likely to pose as many new questions as they answer. A recent email to me posed a dilemma that can result from a particular approach to a survey program. The company had conducted a survey, and now they wanted to ask some follow-up questions based upon what they learned from the first survey. The question is: whom should they invite to take the follow-up survey?

Do we send the survey to the same people again, or do we contact the people who replied? What is the best way? We didn’t do sampling, as we just sent the survey to every user.

Having attempted a census (sending an invitation to everyone), this company runs the risk of creating “survey burnout” if they invite everyone again. Customers will only give you so much time to help out your research efforts, and if you ask too much, you risk alienating them.

Since surveying is frequently used for quality control purposes, let’s draw a parallel to our manufacturing colleagues. Frequently, to measure the quality of a tangible product coming off an assembly line, we have to destroy it to know the limits of its performance. (Think of those automobile crash-test commercials.) This is known as destructive testing, since the tested product cannot be sold. When conducting quality control surveys of our intangible service product, we don’t want to engage in destructive testing, that is, burn out and annoy our customers!

So what’s the solution?

The power of statistics means we probably do not need to attempt a census for any survey, unless the group of interest is very small. In another article, I talked about response-rate requirements and statistical confidence. Surveying a sample will draw a reasonably accurate profile of how customers, or any other group of interest, feel about certain issues or operational practices. The survey program structure I recommend is to survey a sample, and then, if a follow-up survey is desired, generate another random sample that excludes those who previously responded, without risking survey burnout.
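The sample-then-resample approach can be sketched in a few lines of Python. (The function name, user identifiers, and sample sizes here are illustrative assumptions, not drawn from any actual survey program.)

```python
import random

def fresh_sample(all_users, prior_respondents, sample_size, seed=None):
    """Draw a simple random sample from the user base, excluding anyone
    who responded to the previous survey, to avoid survey burnout."""
    already_asked = set(prior_respondents)
    eligible = [u for u in all_users if u not in already_asked]
    if sample_size > len(eligible):
        raise ValueError("Not enough eligible users for the requested sample size")
    # A seeded Random instance keeps the draw reproducible for auditing.
    return random.Random(seed).sample(eligible, sample_size)

# Hypothetical example: 10,000 users, of whom 1,200 answered the first survey.
all_users = [f"user{i}" for i in range(10_000)]
prior = [f"user{i}" for i in range(1_200)]
follow_up = fresh_sample(all_users, prior, sample_size=400, seed=42)
assert not set(follow_up) & set(prior)  # no one is invited twice
```

Excluding prior respondents keeps each wave of invitations landing on fresh inboxes, so no single customer carries the whole burden of your research program.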

The questioner specifically asked about surveying just the respondents to the first survey. I responded, “Surveying only the respondents [to the first survey] does introduce a bias to the results since your sample is not random.” That bias may be fine if you’re just looking for more detailed information and not trying to establish a profile of customer feelings or practices. “If you’re looking to develop more granular information about things learned from a previous survey, let me suggest not doing a survey, but, instead, conducting some in-depth research, e.g., interviews or focus groups. These [surveys and focus groups] are complementary research techniques.” Don’t feel that you are constrained to one research tool. In a previous article, I discussed ways you can generate more actionable data on a survey, but don’t feel you cannot go beyond conducting surveys.

The key lesson here is that you should think through the entire survey program before you conduct your first survey; otherwise, you could back yourself into a corner. Over-surveying your customers is certainly not recommended. Remember, a survey’s goal is to measure satisfaction, and how the survey program is conducted will affect that very satisfaction. A survey is a CRM transaction.