“Have you ever met…?” A Question of Question Validity
Sometimes survey results don’t really show what the writers think they do, but instead show the errors inherent in poorly conceived or executed surveys.
In late March 2013 the Boston Globe reported a poll about Mayor Menino, who was finishing his fifth (yes, 5th!) four-year term. Overall, he was a good mayor, and his malapropisms are absolutely legendary; he had, in his words, "ionic" moments on the "jumbletron." He decided in 2013 not to run for a sixth term, and he died shortly after leaving office. The press misses him. Many more of his famous quotes are on record, some of which you have to be a Bostonian to fully appreciate.
The Globe poll found his popularity was still sky high at 74% and part of the reason may have been that “almost half said they had met Menino personally, an extraordinary feat for a big city mayor.” This was “down eight points from four years ago” according to the Globe article.
Boston is a city of 625,000, of which 83% are over 18, the sampling frame for the poll. That's 520,000 people, indicating that Menino has "met" 260,000 of them, if we believe the poll. (To keep NRN, nice round numbers, I rounded the population down a bit but rounded the percentage met up from 49% to 50%.) Remember, these would be 260,000 unique people he has met; meeting someone twice doesn't count as 2.
Let's assume that people interpreted the question as having met Menino during his 20 years as mayor. That would mean he would "have met" 13,000 different people per year, or 250 per week, or 36 per day, in a 7-day week. (Mayors don't get days off!) In a 60-hour work week, that would be about 4 people per hour. That is quite an "extraordinary feat." And according to the poll taken four years ago, he had met 356 people per week! (More on that curious fact below…)
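The arithmetic above can be verified with a quick sketch. The inputs are the article's rounded figures, not official statistics, and the 356-per-week reconstruction assumes the earlier poll's share was 57% (49% plus the 8-point drop) over a then-16-year tenure:

```python
# Back-of-envelope check of the poll's implied "met the Mayor" rate.
adults = 520_000                 # ~625,000 residents x 83% over 18, rounded
met = adults // 2                # ~50% say they have personally met him
years = 20                       # five 4-year terms

per_year = met / years           # 13,000 unique people per year
per_week = per_year / 52         # 250 per week
per_day = per_week / 7           # ~36 per day, with no days off
per_hour = per_week / 60         # ~4 per hour in a 60-hour work week

# Assumed reconstruction of the earlier poll: 57% of adults over a
# 16-year tenure, which is where the 356-per-week figure comes from.
per_week_then = (0.57 * adults) / 16 / 52

print(per_year, per_week, per_day, per_hour, per_week_then)
```

Each step only divides the same 260,000 "unique people" by a smaller unit of time, which is what makes the claim so hard to believe.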
Part of the Mayor’s hallmark has been his touch with real, everyday people. He spends considerable time in the various Boston neighborhoods, which is why he is so popular, never having received less than 57% of the vote in mayoral elections.
But… even if we include the 50 years of Menino’s life before he became Mayor, the numbers are still too fantastic. (Over 70 years he would have had to meet 71 people per week to have met 260,000 people.) The numbers just don’t pass face validity. What’s in the mix?
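The same check for the most generous case, granting him his entire 70 years of life to do the meeting:

```python
# Even over a full ~70-year lifetime, meeting 260,000 unique people
# requires an implausibly steady weekly pace.
met = 260_000
weeks = 70 * 52
rate = met / weeks
print(rate)   # ~71 people per week, every week, for 70 years
```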
The Question Wording
"Have you, personally, ever met Mayor Menino?" is the question wording. Note it's not "met with," which implies a real meeting with discussion on some point. To their credit, the survey designers did include the word "personally" to try to exclude people who might think they "met" the Mayor by seeing him on TV or by being in a crowd of 1,000 at some rally where the Mayor spoke. That phrasing implies to me that you've shaken his hand and at least exchanged a few words, mumbled though they might be. Clearly, the attempts at precise wording failed. Undoubtedly, some people have a very loose definition of "met."
Too bad they didn't ask a follow-up question about where the respondent met the Mayor to validate the previous response. If the person said, "I don't recall," that would cast doubt on the response's validity. I think we can all recall when we've met someone famous. So, not remembering would lessen the credibility of a respondent's claim to have met Menino.
(I can recall clearly my encounters with former Senators Brown, Kerry, and Brooke and sports figures Reggie Jackson, Bobby Orr, Jerry Remy, and Carl Yastrzemski. And some of those encounters were many, many decades ago.)
Imagine you were asked the question. Don’t you want to say to the interviewer that you have met the Mayor? It sounds good. It makes you feel a bit important. It’s a good thing.
Response bias is caused by predispositions the respondent brings to the survey interaction. This is likely in play. In fact, the added word “personally” may have accentuated the response bias rather than increased the question’s accuracy.
Within the response bias domain are several specific effects that could be the explanation:
- Acquiescence, where the respondent says what they think the interviewer wants to hear.
- Conformity, where the answer makes the respondent part of the crowd.
- Social desirability, where your answer is the "right" answer following societal norms.
To some extent, people choose whether to be surveyed. Telephone surveys, and this was one, suffer less from this self-selection bias, but I suspect it is still in play. If you were asked to participate in a poll about your mayor, wouldn't you be more inclined to take part if you had met the person?
Here’s how I know there are question validity issues.
The 49% who said they had met the Mayor was down 8 percentage points from four years ago, far more than the survey's 4.7% margin of error. Note that the question asked whether you have "ever met" Menino, not whether you have met the Mayor in the past four years.
Doesn’t “ever met” mean the number should only go up? If you asked me how many students I have “ever taught,” that number can only go up every year.
How could 8% fewer now say they have ever met Menino? That’s 41,600 people who had met Menino as of 4 years ago but who now say they haven’t. Apparently, people who have met Menino move out of the city — or die — in large numbers!
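A quick sketch of that arithmetic, using the article's rounded 520,000-adult sampling frame:

```python
# An 8-point drop in "ever met" applied to the adult sampling frame.
adults = 520_000
vanished = 8 * adults // 100     # 8% of adults = people who "unmet" him
print(vanished)                  # 41,600

drop_points = 8.0                # reported change, in percentage points
margin_of_error = 4.7            # the survey's reported margin of error
print(drop_points > margin_of_error)   # True: too big to be sampling noise
```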
Here’s the point: this question is really measuring a subjective feeling about the likeability of the Mayor, and my educated guess is that the surveyors at the Survey Center at the University of New Hampshire know this. I suspect there’s a strong correlation between this number and other measures of sentiment about the Mayor. But the reporters who wrote up the story about the poll purport it to be an objective measurement of the Mayor’s flesh-pressing.
Lessons from this survey about good surveying practices and about survey question validity…
- Know what information you are getting from your surveys.
- Especially consider what underlying attribute or construct you are truly measuring.
- Could the question be interpreted multiple ways?
- If some results do not look right, be inquisitive. Challenge the results with common sense. Find out why. Examine the survey practices.
- If a question doesn’t work after a few tries at different wording, resign yourself to the reality of the surveying situation and take a different approach to measuring the construct.
- Whoever designs the survey questionnaire should also do the data analysis. That creates a tight feedback loop on survey instrument design issues. The larger the surveying operation, the more likely that the instrument designer is divorced from the data analysis, which is not good.
- When reporting the results, don’t misrepresent what the data are saying.
- When reporting the results, actually think about the data. (Is that too much to ask of a reporter?)
- And recognize there’s a lot of imprecision in surveys.