“Regardless of how I vote, I’m glad a woman is a major party nominee for president.”
That’s the opening question in the New York Times/CBS News poll taken in early September 2016 to measure feelings about having a woman as a presidential candidate. As we review the results with a critical eye, we really have to ask how much weight the “findings” should receive.
Why do I say that? Consider the results from that very first question, one that will set the respondent’s mental frame for the rest of the survey. (“Don’t Knows” excluded):
Strangely, perhaps, the NY Times write-up focuses on the positive in the statement: “A broad majority of voters say… they are happy this milestone has been reached.”
Here’s an alternative conclusion. “Fully one third of women are unhappy that a woman is a presidential candidate. This shows our national culture is still trapped by the millstone of antediluvian viewpoints.” (The NYT would probably use phrasing like that.)
The “findings” support both conclusions, but should we believe the survey results? I have real trouble believing that a third – a third! – of women believe that a woman’s place may not be just in the kitchen but is certainly not in the Oval Office. It simply fails what we call conclusion validity – that is, is the finding believable? (I mean, just look at the adoring people the NYT chose for the article photo. Ah… How could any woman not be glad?)
Admittedly, I bring biases to my interpretation. I spend part of my professional life in academia. I live in a very liberal part of the country. I’m represented in Congress by Senators Warren and Markey and Congresswoman Tsongas. You can’t get more surrounded by liberal viewpoints.
I have the classical liberal view that our society should be a meritocratic marketplace, and the fact that women are advancing to ever higher levels of responsibility is an indication that meritocracy is taking hold.
I know not everyone shares my views, but would so many be unhappy that any woman is a presidential candidate? I doubt it. Other effects — survey biases — likely complicated how people responded.
The opening clause, “Regardless of how I vote,” is meant to remove Hillary Clinton from the formulation of respondents’ answers. Right? But did it? I doubt it. In fact, it may do just the opposite: mentioning voting makes us think about the specific candidates. The question wording likely activated a response bias that drove the Disagrees higher.
We see the dislike of Hillary Clinton in the second question, which was asked only of those who Agreed with the first question:
“Are you generally satisfied with Hillary Clinton as the first female presidential candidate or would you have preferred that the first female presidential candidate be someone else?”
Too bad they didn’t ask this question of the Disagree group from the first question. Given the results for the second question, many of the 30+ percent “not glads” in the first question likely feel that way because of who the nominee is, irrespective of their voting preference.
People also may have reacted to the operative word “glad.” If you believe in a meritocracy where gender is irrelevant, then there’s no reason to be glad when a woman reaches a milestone. That’s the way it should be. “Glad” is a strange word – or construct – to present to a respondent. It asks whether you share a particular emotional response, and many may have felt that didn’t describe their emotions – or may not view it as an emotional issue at all.
Survey Biases — Driving Up the “Glads”
While the question wording likely drove the percentage of “glads” down, other survey biases likely inflated it.
First, the question uses an Agree/Disagree scale. Such questions are well known among survey researchers to suffer from acquiescence bias, also known as yea-saying: people tend to want to agree. The question also presents what can be considered today’s social norm – that gender should not be a barrier – and respondents generally are more likely to express conformity with a question presenting a social norm.
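To see how much yea-saying can distort a headline number, here is a minimal simulation. The rates are invented purely for illustration (the poll's true agreement rate and acquiescence rate are unknown); the point is only the mechanism: if some share of true disagreers answer "agree" anyway, the observed percentage lands well above the true one.

```python
import random

random.seed(42)

def simulate_poll(n, true_agree_rate, acquiescence_rate):
    """Simulate responses to an Agree/Disagree question.

    A respondent who truly disagrees still answers "agree" with
    probability `acquiescence_rate` (the yea-saying tendency).
    All rates here are hypothetical, chosen only for illustration.
    """
    agrees = 0
    for _ in range(n):
        truly_agrees = random.random() < true_agree_rate
        if truly_agrees or random.random() < acquiescence_rate:
            agrees += 1
    return agrees / n

# Suppose 55% truly agree, but 1 in 4 disagreers acquiesce anyway.
observed = simulate_poll(100_000, true_agree_rate=0.55, acquiescence_rate=0.25)
print(f"observed agree rate: {observed:.1%}")  # noticeably above 55%
```

With these made-up inputs the expected observed rate is 0.55 + 0.45 × 0.25 ≈ 66%, an eleven-point inflation from wording alone.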
Now let’s turn to the interviewer. Note: this was a telephone survey. The respondent may view the interviewer as someone with knowledge, leading respondents to acquiesce and agree more readily.
But we also likely have an interviewer bias based solely on gender. Normally, an interviewer creates interviewer bias through how they deliver the survey – for example, through intonation – but in this case the interviewer’s gender itself may create a bias.
Imagine you were male, and an interviewer whose voice (and name) appeared female delivered the above question to you. Wouldn’t you be more likely to acquiesce to the statement? If you were a female respondent, would you want to tell a fellow female that you’re a cultural barbarian?
Unfortunately, the NY Times doesn’t disclose whether they tried to control for that effect. Their explanation of how they conducted the survey makes no mention of interviewer gender, nor of any splits in the results based on it. That’s too bad, since it would be interesting to know the impact.
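Had the pollster recorded interviewer gender alongside each response, checking for this effect would be a simple crosstab. A minimal sketch, using entirely invented records (these are not data from the actual poll):

```python
from collections import defaultdict

# Hypothetical response records: (interviewer_gender, response).
# Invented purely to illustrate the split one would want reported.
responses = [
    ("female", "agree"), ("female", "agree"), ("female", "disagree"),
    ("male", "agree"), ("male", "disagree"), ("male", "disagree"),
    ("female", "agree"), ("male", "agree"), ("female", "disagree"),
    ("male", "disagree"),
]

def agree_rate_by_interviewer(records):
    """Tally the agree rate split by interviewer gender."""
    counts = defaultdict(lambda: [0, 0])  # gender -> [agrees, total]
    for gender, response in records:
        counts[gender][1] += 1
        if response == "agree":
            counts[gender][0] += 1
    return {g: agrees / total for g, (agrees, total) in counts.items()}

print(agree_rate_by_interviewer(responses))
```

A meaningful gap between the two rates (after controlling for respondent demographics) would be evidence of the gender-induced interviewer bias described above.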
However, it’s also scary that they don’t recognize the importance of the gender-induced bias and other survey biases — or maybe they do.
~ ~ ~
In summary, as I tell my survey workshop students, some things are very difficult to measure with a survey. It is a difficult challenge to phrase a question that measures what we want to measure – the definition of validity – without the answers being complicated by a swirl of measurement effects.
That’s likely what happened here. But too many surveyors, particularly in big polling organizations, refuse to recognize that their attempts at measurement have failed, or to take the time to pilot test their wording.
Buy Our Customer Survey Guidebook
Our self-help survey book walks you through a survey project, teaching you the critical elements of a well-done survey in an easy-to-understand style.