This article examines a CNN poll on the impact of Covid-19 upon people’s routines, which showed huge differences based upon political affiliation. Maybe. But are the differences driven by politics, as CNN implies, or by factors CNN didn’t measure, combined with survey design issues that accentuate political effects in the polling?
~ ~ ~
Comfort Level w/Regular Routine by Party
CNN published a poll on June 10, 2020 (see nearby data table & graph) that asked whether people now felt “comfortable” returning to their “regular routine”. The data show an increase in comfort level, but the startling finding can be found in the graph.
Comfort Level Returning to Regular Routine
While both (self-described) Democrats and Republicans show increasing comfort over the past month, the disparity between the two groups is huge – more than 3 to 1.
The question posed isn’t inherently political. Imagine if we asked whether people felt comfortable driving 10 mph over the speed limit. Would we expect a huge split in responses based on political leaning? I doubt it. So why here?
Whenever we encounter research findings – from surveys or other methods – that seem to defy common sense, we should question their conclusion validity rather than simply accept them. The findings may be true, but we should always look for alternative explanations. Then we can properly interpret the poll results – or dismiss them.
Did Covid-19 affect Democrats more than Republicans? Probably yes – but not because of their politics; rather, because of where Democrats and Republicans tend to live.
Covid-19 hit cities far worse than more rural areas for obvious reasons. (Cities as transportation hubs were entry points for the virus and urban congestion fostered spreading.) Could that explain the difference in responses?
Logically, we would expect city dwellers to be less comfortable in returning to their “regular routines” since Covid-19 was far more prevalent there. Further, city dwellers’ “regular routine” certainly involves being in far more crowds than rural folks. If your “regular routine” involves riding elevators or subway cars, discomfort is certainly logical.
Could the true reason for the difference in responses be geographic – urban vs. rural – with political affiliation simply serving as a proxy for “where you live”? To some extent this is likely the case. Maybe to a large extent! However, this blatant shortcoming in the research design means we can’t tease out the impact of political affiliation from that of one’s type of community. How sad.
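A tiny back-of-the-envelope sketch (in Python, with entirely hypothetical numbers) shows how a confound like geography can masquerade as a party effect. Here comfort depends only on community type, yet a sizable party gap appears purely because the parties’ urban/rural mixes differ:

```python
# Hypothetical numbers, for illustration only.
# Assume comfort depends ONLY on community type, not on party:
p_comfort = {"urban": 0.20, "rural": 0.70}

# Assume the two parties differ in where their members live:
residence = {
    "Democrat":   {"urban": 0.70, "rural": 0.30},
    "Republican": {"urban": 0.30, "rural": 0.70},
}

# Aggregate comfort rate per party = weighted average over community types.
for party, mix in residence.items():
    rate = sum(share * p_comfort[place] for place, share in mix.items())
    print(f"{party}: {rate:.0%} comfortable")
# Democrat: 35% comfortable
# Republican: 55% comfortable
```

There is zero direct party effect in these made-up numbers, yet the aggregated rates differ by 20 points – exactly the proxy relationship described above. Without a community-type question in the poll, the two explanations cannot be separated.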
Tip: A critical part of the survey questionnaire design process is to think about what analysis is important to perform and be sure to capture the necessary data.
Notice that this poll question asks about returning to a regular routine based upon the coronavirus outbreak. But Covid-19 isn’t happening in a vacuum. While the virus has dominated – overwhelmed – the news for months, in late May, just before the June polling was conducted, a new event pushed Covid-19 to the side: the protests and riots after the George Floyd killing.
Virtually every major city, along with many midsize ones, had rioting and looting – separate from the peaceful protests. Hundreds of businesses were destroyed, just as they were preparing to reopen as the pandemic impact waned. If you live in a city (I don’t), I would imagine your attitude toward returning to a regular routine would be more negative after this unrest. The “new normal” has been supplanted by the “new new normal.”
As noted, Democrats disproportionately populate cities, so the impact of the urban unrest would be more pronounced for that political group. Again, political party may be a proxy for the unrest’s effect on comfort with returning to regular routines.
While this new discomfort is not due to the coronavirus, it would be very hard to get respondents to separate that impact from the virus impact. And the pollsters didn’t ask a question about the unrest that could be used to identify the relative impact.
Where’s the intellectual, scientific curiosity?
The confounding factor of the urban unrest may explain why the increase in comfort level for democrats was less than for republicans.
Difference of Perception
I’ll assume you’re an open-minded person who’s willing to follow “science”. Turn on Fox News and you’ll hear the economy took a wallop due to Chinese duplicity over Covid-19 but things are improving. Stories will highlight areas of economic and health improvement.
Now turn on MSNBC or CNN (or a number of other media outlets). You’ll hear that the economy took a wallop due to Trump’s incompetence, and while things are somewhat improving, the virus is hitting minorities and women particularly hard and the second wave is around the corner due to irresponsible governors – and not due to protestors.
News reports are not factual; they’re interpretive. We all stew in the juices of our own confirmation biases.
That is, we read the news that supports our views and reinforces our perceptions. In essence, we’re lying to ourselves through our choice of distorted news. (“My news media doesn’t lie!” you’re saying. Yeah, right…)
So, perhaps some of the 73% to 23% comfort disparity between Republicans and Democrats is because we’ve successfully brainwashed ourselves by our media selections. In other words, perceptions differ by party even if not true in reality.
Imagine you’re a liberal and you get a call from someone saying they’re doing a poll for a conservative media group, e.g., Fox News or The Spectator. Are you going to tell them your true feelings or give answers that support your world view?
Now imagine you’re a conservative and you get a call from someone saying they’re doing a poll for the New York Times or CNN. Are you going to tell them your true feelings or give answers that support your world view?
Not giving truthful answers to a research question is known as response bias. (Personally, I think respondent bias is a better term, since this is a bias the respondent inherently brings to the process.) Numerous types of response bias exist – acquiescence, conformity, privacy concern, etc. – and the one at work here I call ulterior motive: respondents may be outright lying to achieve some unspoken objective.
Certainly many respondents gave answers to support political leanings – their ulterior motive – rather than their true feelings.
We’d like to think that in surveys each question is unaffected by previous questions, but that’s nonsense. This sequencing effect changes the respondents’ mental frame. Unfortunately, the detailed polling results do not give us the full set of questions asked.
The results presented jump from question 2 (I think – it’s unclear) to question 8 to question 11. How did all those other questions set up the respondent for the questions reported? And why were they “embargoed,” to use CNN’s term?
Question 2 (or “A2” as it’s called in the report) asks for the respondents’ approval rating for President Trump regarding “the economy” and the “coronavirus outbreak.” I don’t think you could have a more blatant example of a sequencing effect. This approach virtually guarantees a politically slanted response to subsequent questions.
When the respondents are asked about the impact of Covid-19 upon their routines, etc., in the back of their minds, they’ll be thinking, “Okay, I said I hate Trump, so I need to give answers that support my hatred for him and make him look bad.” And vice versa.
In other words, the sequencing impact of the Trump approval question accentuates the likelihood of respondents lying for ulterior motives.
Look at the question again. (See nearby call-out.)
Comfort Level Returning to Regular Routine
The question design creates extreme responses. Why?
Binary Scale. The response scale Comfortable vs. Not Comfortable is set up as a binary decision. No middle ground. Isn’t comfort level a scalar concept? What if you were “moderately comfortable” with all of your regular routine? What response option would you choose?
Key Operative Phrase. The critical phrase here – what academics would call a “construct” – is “regular routine.” That again is an all-or-nothing concept; it doesn’t say “most of your regular routine.” What if you were fully comfortable with 90% of your regular routine but not with the other 10%? Which response option would you choose?
See how these binary concepts — comfortable vs. not comfortable and regular routine vs. most of your regular routine — interact to lead to extreme responses?
Let me give a personal example. As I write this in mid-June 2020, I feel fairly comfortable with “most of” my regular routine – shopping, hiking, even dining out. But there are parts of my routine about which I have second thoughts. Slaters, our pizza joint next door, has Thursday open-mic music nights – or did. We were “regulars” there; it was always packed. Obviously, such events are off right now, but when they come back, I might have second thoughts. (Note: we have dined at Slaters since the draconian lockdowns were partially lifted here in Massachusetts.)
So, I’m not “comfortable” with 100% of my “regular routine.” What option would I choose? Hmmm.
The question designers created a high threshold for choosing the Comfortable option, which may give political factors more room to come into play.
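The threshold effect can be sketched with hypothetical numbers. Suppose the two groups’ underlying “comfort scores” differ only modestly, and a respondent answers Comfortable only if their score clears a cutoff. A demanding cutoff (an all-or-nothing reading of “regular routine”) inflates a modest underlying difference into an extreme ratio:

```python
from statistics import NormalDist

# Hypothetical latent comfort scores (0-100 scale) for two groups whose
# true average attitudes differ only modestly. Numbers are illustrative,
# not estimates from the poll.
group_a = NormalDist(mu=60, sigma=15)   # e.g., rural-leaning respondents
group_b = NormalDist(mu=50, sigma=15)   # e.g., urban-leaning respondents

def pct_comfortable(dist, cutoff):
    """Share of the group whose latent score clears the binary threshold."""
    return 1 - dist.cdf(cutoff)

for cutoff in (55, 75):  # moderate vs. demanding reading of "comfortable"
    a = pct_comfortable(group_a, cutoff)
    b = pct_comfortable(group_b, cutoff)
    print(f"cutoff {cutoff}: A {a:.0%} vs B {b:.0%} (ratio {a/b:.1f}x)")
# cutoff 55: A 63% vs B 37% (ratio 1.7x)
# cutoff 75: A 16% vs B 5% (ratio 3.3x)
```

The same 10-point gap in average attitude yields a 1.7-to-1 split at a moderate cutoff but more than 3 to 1 at a high one – in the neighborhood of the disparity CNN reported. A binary question with a high bar manufactures extremity.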
Good question design doesn’t force respondents into extreme positions that don’t properly reflect their views.
From afar, it’s not possible to say what explains the extreme gap between political parties in this poll, but something seems amiss. It strains credulity that Republicans and Democrats could have such divergent views of an objective phenomenon.
All of the above factors may well be in play. My guess is that political party is a proxy for the type of community in which the respondent lives – urban vs. rural – but we can’t tell, since the pollsters didn’t collect the necessary demographic data.
Remember being told in statistics class that a statistical association does not prove causation? This is perhaps a great example. If so, CNN’s reported results present a false causal impression.