Surveys are conducted to learn how some group feels. If the survey questions are flawed, then we don’t learn and may be misled. Ambiguous questions — questions whose phrasing leads to multiple interpretations — are the single biggest mistake made by survey designers. And perhaps a fatal one.
Author Archive for: Fred Van Bennekom
About Fred Van Bennekom
This author has yet to write their bio. Meanwhile, let's just say that we are proud Fred Van Bennekom contributed a whopping 112 entries.
Entries by Fred Van Bennekom
Customer loyalty should not be defined with a short-term view but rather by looking at the lifetime of interactions. Otherwise, a company risks driving away long-standing customers, as United Airlines almost did.
Good customer experience management is not a one-time event; it requires reviewing customer journey maps whenever something in a process changes. The implementation of the lovely new open-road tolling in Massachusetts missed a key part of the customer experience: how to tell customers when something is amiss with their transponder. That information could, and should, be proactively pushed to the customer.
The MassMove report on transportation priorities in Massachusetts does not have a sound methodology and should not be used as the basis for decision making. It's a wonderful example of how to concoct a research program to deliver the "findings" the sponsor wants. Shame on the Mass State Senate.
Meaningful survey results require a valid questionnaire and unbiased administration of the survey. The CTE brain trauma study of NFL players used biased survey samples, which clouds the conclusions drawn from the study.
The new TSA service design requires removing large electronic items. The goals are to improve the efficacy of the scans and to speed up passenger processing. However, it may not speed up processing at all, due to the increased workload at the passenger work areas. This is a great example of shifting a bottleneck, and, unfortunately, of inattention to all the "workers" and job tasks in the overall process flow.
The Federal Agency Customer Experience Act of 2017 promises to make agencies more responsive to their citizen-customers. But will it? By itself, will the bill's requirements achieve its goals? Is the feedback program the bill requires properly designed? Which elements of the feedback program should be part of a Congressional bill, and which should be assigned to an administrator to determine in conjunction with survey experts? This article explores those questions. The bill needs to be rethought with input from people who know something about survey programs.
Various survey question types can be used to measure a given attribute. The choices involve trade-offs between the analytical usefulness of the data and the burden placed on respondents.
In the survey research world, so-called non-response bias is an ever-growing challenge to solid, viable research, even if many people simply ignore the threat. Election seasons present a great opportunity to explain the concept.
Why? Because non-response bias is a threat to credible survey research, but it’s the goal of every politician! Politicians want a specific result and creating a non-response bias is part of the process of winning an election, even if it’s a bit unsavory to say so.
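The mechanics of non-response bias can be sketched in a few lines of code. This is a hypothetical simulation (the numbers are illustrative, not from any of the posts above): if the probability of responding correlates with the opinion being measured, the survey's observed result drifts away from the true population value.

```python
import random

random.seed(42)

# Hypothetical population: 10,000 people, exactly half support a proposal.
population = [1] * 5000 + [0] * 5000
true_support = sum(population) / len(population)  # 0.50

# Non-response bias: supporters are far more likely to answer the survey.
# (Illustrative response rates: 60% for supporters, 20% for opponents.)
def responds(opinion):
    return random.random() < (0.6 if opinion == 1 else 0.2)

respondents = [op for op in population if responds(op)]
observed_support = sum(respondents) / len(respondents)

print(f"true support:     {true_support:.2f}")
print(f"observed support: {observed_support:.2f}")  # roughly 0.75 — badly inflated
```

With these assumed response rates, supporters make up about three-quarters of the respondents even though they are only half of the population, so the survey overstates support by roughly 25 percentage points without a single dishonest answer.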
A confluence of survey biases – response, interviewer, and instrumentation – likely overwhelmed whatever the NY Times' surveyors thought they were measuring about people's feelings toward a female presidential candidate.