
How to Read the Polls: A Society Today Ultimate Guide

  • Public opinion polls are now highly technical exercises in modeling rather than direct measurements of the population.
  • As a result, what they mean can be difficult to parse, and the horse-race-obsessed media often adds to misunderstandings rather than clarifying them.
  • In short, don’t pay attention to election polls before Labor Day, look at who administered the poll, look at the margin of error, look at poll results in the aggregate, don’t pay attention to claims about subgroups, and pay attention to systemic trends of over- or underperformance.

Society Today doesn’t do horse-race politics.

In other words, we don’t do breathless coverage about who is “winning” or “losing” elections months or years before those elections happen. What matters is who wins or loses on the day of the election, not on any other day. The reason the media breathlessly covers who is winning or losing at any given time is to provide fodder for their 24-hour news cycle, not to deliver any real value.

Still, you will hear a lot of polling results between now and November. And you’re better off knowing how to interpret the results yourself than taking some pundit’s word for it. So, in this post, I will dust off my doctoral-level survey methods and statistics training to tell you, in plain language, how to read the polls and understand what they are (and are not) telling you.


The Basics of Public Opinion Polling

  • Back in 1936, a popular magazine at the time, The Literary Digest, decided to try to poll the upcoming presidential election. They tabulated the results of 2.3 million responses and confidently predicted the Republican candidate, Alf Landon, would crush the Democratic incumbent, some guy named Franklin Delano Roosevelt.
  • Instead, FDR won in a historic landslide, beating Landon in the popular vote by 24 percentage points. The Literary Digest’s polling miss was so embarrassing that it went out of business less than two years later, and it is now taught in graduate survey methods courses as a shining example of how not to do a poll.
  • Meanwhile, a little outfit run by a man named George Gallup used a much smaller sample of 50,000 people and came within 1 percent of predicting the popular vote. Unlike The Literary Digest, Gallup, of course, is still very much alive and polling today.

Polling has come a long way since the 1930s, but the basics are still the same. There are five types of errors (five ways miscalculations can sneak into poll results) that you need to watch out for.

1) Measurement Error

This is when the poll simply doesn’t measure what it claims to measure.

Most often, this happens when the questions (or the possible answers, in a multiple-choice question) are flawed. They’re incomplete. They’re confusing. Or they influence the respondent one way or another. (“Should the President do this totally awesome thing, or this other thing tantamount to lighting the Constitution on fire?”)

When you do a survey, ideally you want to ask totally neutral, crystal clear questions that every single respondent interprets the same way. That’s challenging to do sometimes. There are literally dozens of ways you can screw it up. That’s why it requires an advanced degree to do correctly.

2) Coverage Error

Then there’s the issue of coverage: does everybody in the population you’re surveying have an equal chance of being selected in your sample?

This was a big problem in the Literary Digest poll. They didn’t make any effort to come up with a random sample. Instead, they sampled their own readership, which skewed Republican. Naturally, their results skewed Republican too.

Avoiding coverage error can be quite hard to do if you’re dealing with a big population, like the American electorate.

  • For example, for a long time, the typical method of conducting opinion polls was to call landline telephone numbers.
  • But because so many younger Americans now rely entirely on their cell phones, pollsters have had to change their methods.
  • If you’re able to quantify who’s excluded from your survey, you can sometimes get around this problem by weighting the answers of certain respondents more than others.
  • But weighting also makes the analysis more complicated, so it comes with its own bundle of potential screwups you need to watch out for.
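
To make the weighting idea concrete, here is a minimal sketch, with made-up numbers and group names rather than any pollster’s actual procedure. If young voters are 20 percent of the population but only 10 percent of your respondents, each of their answers gets counted roughly twice as heavily:

```python
# A minimal post-stratification sketch. Group names and shares are invented.
def poststratify(responses, population_shares):
    """Weight each respondent by (population share / sample share) of their group.

    responses: list of (group, answer) pairs, answer = 1 for "yes", 0 for "no".
    population_shares: dict mapping group -> its share of the target population.
    Returns the weighted "yes" percentage.
    """
    n = len(responses)
    sample_shares = {}
    for group, _ in responses:
        sample_shares[group] = sample_shares.get(group, 0) + 1 / n

    weighted_yes = 0.0
    total_weight = 0.0
    for group, answer in responses:
        # Underrepresented groups get weights above 1, overrepresented below 1.
        weight = population_shares[group] / sample_shares[group]
        weighted_yes += weight * answer
        total_weight += weight
    return 100 * weighted_yes / total_weight

# Hypothetical sample of 10: young voters are half as common as they should be.
sample = [("young", 1)] + [("older", 1)] * 4 + [("older", 0)] * 5
print(round(poststratify(sample, {"young": 0.2, "older": 0.8}), 1))  # → 55.6
print(round(100 * sum(a for _, a in sample) / len(sample), 1))       # unweighted → 60.0
```

Real pollsters weight on many dimensions at once (age, race, education, region, and more), which is exactly where the extra complications creep in.
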

3) Nonrespondent Error

The third type of error comes from nonrespondents: the people chosen to participate in your survey who choose not to respond. If certain groups are systematically more or less likely to respond, it can mess up your results.

This was the second big problem with the Literary Digest poll. They attempted to poll 10 million voters, and only got 2.3 million responses.

As with coverage error, you can sometimes get around the problem of nonrespondents through weighting and modeling. In fact, many polls today have less than a 23 percent response rate, but are still able to produce accurate results because pollsters know how to correct for nonrespondents. The Literary Digest, however, didn’t have any of today’s advanced modeling techniques at its fingertips, and so nonrespondents dramatically biased its survey.

Nonrespondent error was also an issue in both 2016 and 2020, when support for Donald Trump in opinion polls was lower than the support he got on Election Day:

  • The current best explanation is that Trump supporters, when contacted by the mainstream media organizations that run many leading polls, were less likely to participate in those polls.
  • Pollsters knew this was going to happen, but they underestimated the magnitude of this effect in their weighting.
  • That difference of a few percentage points was why most polls incorrectly predicted Hillary Clinton would win in 2016, and why Joe Biden’s victory in 2020 was narrower than most polls indicated.

4) Sampling Error

The fourth kind of error is sampling error, which is what the “margin of error” often reported with these polls actually quantifies.

It’s inevitable that there will be some error when you only sample a couple thousand people out of the 200+ million adults in the United States. The good news is: there’s a formula for calculating this margin of error. And the math works out so you really can just measure one or two thousand random Americans, and be quite confident you’re only within a few percentage points of what the American population really thinks.
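
For the statistically curious, the textbook version of that formula is easy to sketch: the 95 percent margin of error for a proportion is about 1.96 standard errors. (This is the simple-random-sample approximation, not any particular pollster’s calculation.)

```python
from math import sqrt

def margin_of_error(n, p=0.5, z=1.96):
    """95 percent margin of error, in percentage points, for a simple random sample.

    n: sample size; p: observed proportion (0.5 is the worst case);
    z: 1.96 standard errors covers 95 percent of a normal distribution.
    """
    return 100 * z * sqrt(p * (1 - p) / n)

print(round(margin_of_error(1000), 1))  # 1,000 respondents → 3.1
print(round(margin_of_error(2000), 1))  # 2,000 respondents → 2.2
```

Note that doubling the sample from 1,000 to 2,000 only shrinks the margin from about 3.1 to 2.2 points, which is why pollsters rarely bother with giant samples.
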

5) Interpretation Errors

The fifth and final type of error isn’t really about the poll results themselves. It’s about the common misinterpretations people make about them. Unfortunately, the media, which should be helping people understand what the polls are saying, tends to encourage these misinterpretations by the way it reports on them.

For example, people often forget that polls are just a snapshot. A poll tells you what a handful of people thought on one particular day, at one particular time. Yet people’s opinions are changing constantly. Polls about the race for president, for instance, really have no predictive value six, twelve, or eighteen months before the election. But the media loves to breathlessly cover them. So they pretend they matter and report on them anyway.

Failing to properly account for the margin of error is another common problem. The way polls are usually reported, you’ll be told Candidate A is at 51 percent, and Candidate B is at 48 percent, and maybe you’ll see in small print or mentioned as an aside that the margin of error is 3 percent.

  • But what that really means is: “We can be 95 percent confident the true support for Candidate A is between 48 and 54 percent, and the true support for Candidate B is between 45 percent and 51 percent.”
  • If the margin of error is 3 percent, it’s only when the difference between the two candidates is greater than 6 percent that you can say with statistical confidence that Candidate A is leading.
  • But the media always reports who’s “winning,” even if their lead is within the margin of error. They should instead report that the results are inconclusive, or that the candidates are statistically tied.
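
The rule in the bullets above can be written out directly. This sketch treats any gap smaller than roughly twice the reported margin of error as a statistical tie, a deliberate simplification of the underlying hypothesis test:

```python
def race_call(a_pct, b_pct, moe):
    """Label a two-candidate poll result.

    The margin of error on the *difference* between two candidates is roughly
    twice the reported margin of error, so a lead must exceed 2 * moe before
    we call it. (A simplified rule of thumb, not a full significance test.)
    """
    if abs(a_pct - b_pct) > 2 * moe:
        return "A leads" if a_pct > b_pct else "B leads"
    return "statistical tie"

print(race_call(51, 48, 3))  # 3-point lead, 3-point MOE → statistical tie
print(race_call(55, 45, 3))  # 10-point lead → A leads
```
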

Finally, the media often reads way too much into the results of a single poll:

  • When we do a survey, we can only say with 95 percent confidence that the true result is within the margin of error. That means even if we do everything right, the true value will still fall outside our reported range 1 out of every 20 times.
  • That means we should be looking at results from multiple surveys, all conducted by good, reputable pollsters, to get the most accurate read on where things truly stand.
  • Instead, the media tends to hyperventilate over every unusual result (“Biden’s up by 20 points!” “Trump’s winning now!”), taking it as gospel and speculating about what may have caused such a drastic change in the race, when that unusual result is most likely just one of those 1-in-20 outliers.

So, How to Read the Polls?

Here are my suggestions:

1) Ignore election polls before Labor Day.

Seriously, just don’t bother.

Whatever is dominating the news today will likely be old news by the fall. Grievances that certain subgroups have against their candidate will fade. The tribal nature of today’s politics will reassert itself during the spring and summer, as voters start to rally around the candidate they were always going to vote for in the first place. Then the polls might start to actually be predictive about who will win in November.

Obviously, it’s different with issue polling: polls that ask questions about abortion, immigration, or other topical matters. Those can be informative at any point in time. But as soon as a pundit starts speculating about the implications of an issue poll for November’s election, you can safely switch your ears off. Until September, it’s just too far away for any of these snapshots to tell us anything useful about what’s going to happen in November.

2) Look at who administered the poll.

This should go without saying. But too often, a poll simply gets lumped into “the polls,” without any regard to either (1) the quality of the organization conducting it, or (2) systematic biases in that organization’s polling results.

  • In reality, pollster quality varies widely, and you should take this into account.
  • The gold standard for assessing pollster quality is FiveThirtyEight’s Pollster Ratings.
  • You can see there are dozens of excellent pollsters, but as of the time of this writing, only three outfits had perfect scores: The New York Times/Siena College, ABC News/Washington Post, and Marquette University Law School.

You should also take into account systematic biases, if applicable. “Systematic biases” may seem to be a sign of poor quality, but not necessarily. Because so much of public opinion polling is now about modeling, it’s possible for a poll to be of good quality, yet systematically produce results that skew either slightly Democratic or slightly Republican. This can happen when an organization makes different assumptions about the electorate, which factor into their models and influence the results.

  • For example, Rasmussen Reports is a decent and prolific pollster. But they are famous for producing results that skew Republican.
  • They are not “bad.” They simply make certain assumptions about what the electorate will look like in their modeling that favor Republicans.
  • You shouldn’t ignore their polls completely. But you should read them knowing that their assumptions about what the electorate will look like consistently favor Republicans.

3) Look at the margin of error.

I mentioned this above. But it’s worth repeating. If the gap between the candidates isn’t greater than the margin of error, there is no “winner.” They are statistically tied.

  • For example, as I write this, several polls have just come out showing Trump with a 2- to 4-point lead over Biden, leading to some Democratic freakouts.
  • Putting aside my prohibition against paying attention to polls before the fall, I investigated and found that almost all of these polls are within the margin of error.
  • Trump is not winning. They are tied.

Seriously, it is so infuriating to survey methods geeks like me that the media refuses to report a lead within the margin of error for what it is: a statistical tie.

4) Look at polls in the aggregate.

Remember, even if a poll does everything right, its results will fall outside the margin of error 1 out of every 20 times.

That’s why it’s best to look at multiple polls by reputable pollsters to get the best possible read on what’s happening with the electorate. That random poll that says one thing while nineteen other polls say another is likely to be an outlier, even if it comes from a reputable pollster.

It’s nobody’s fault when the results of a poll are an outlier. Outliers are just going to happen 5 percent of the time. In fact, it’s good when a pollster publishes an outlier. It’s evidence that they’re being fully transparent and following best practices. You should be suspicious of any pollster that doesn’t publish an outlier every now and then!
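
A real polling average, like FiveThirtyEight’s, weights polls by quality, recency, and sample size, but the basic idea can be sketched in a few lines. The 5-point outlier threshold below is arbitrary, chosen purely for illustration:

```python
def aggregate(polls):
    """Average several polls' results for one candidate and flag likely outliers.

    polls: list of (pollster, pct) pairs with unique pollster names.
    A poll is flagged when it sits more than 5 points from the average of the
    other polls, an arbitrary threshold for illustration, not a real
    aggregator's rule.
    """
    average = sum(pct for _, pct in polls) / len(polls)
    outliers = []
    for name, pct in polls:
        rest = [q for other, q in polls if other != name]
        if abs(pct - sum(rest) / len(rest)) > 5:
            outliers.append(name)
    return round(average, 1), outliers

# Three polls cluster in the mid-to-high 40s; one sits more than 10 points away.
polls = [("A", 47), ("B", 48), ("C", 46), ("D", 58)]
print(aggregate(polls))
```

The average barely moves because of poll D, but flagging it as an outlier (rather than headlining it) is exactly what the aggregate view buys you.
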

5) Don’t pay attention to claims about subgroups.

Every now and then, you may come across someone who has claimed to dig deeper into the poll results and uncovered something very unusual about a subgroup. For example, maybe a poll shows that 50 percent of African Americans favor the Republican candidate, even though for decades 90 percent or more have voted Democratic. Something weird like that.

Don’t take this as evidence to discredit the poll’s overall results:

  • Basically what happens is, when people don’t like what the top-line poll results are telling them, they will sometimes dig into these deeper numbers, looking for anomalies.
  • If they find one, they will take this as evidence that the top-line poll results are wrong: “Aha! See? These numbers about African Americans can’t possibly be right! Therefore, this whole poll is wrong.”
  • But as I mentioned above, response rates in public opinion polls are getting smaller and smaller, and the reliance upon modeling to compensate for this is getting greater and greater. Essentially, the more you break a poll down into its subgroups, the fewer people are in that subgroup. And the fewer people are in that subgroup, the greater the margin of error is.
  • This subgroup margin of error is (1) rarely reported and (2) huge, often in the double digits, which makes the subgroup estimates essentially worthless.

In short, it’s entirely possible to get weird results at the subgroup level while the top-line results of a poll remain perfectly valid.
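
The standard textbook margin-of-error formula shows how fast subgroup estimates degrade (again assuming a simple random sample, which understates the problem once weighting enters the picture):

```python
from math import sqrt

def moe(n, p=0.5, z=1.96):
    """95 percent margin of error, in percentage points, for a simple random sample."""
    return 100 * z * sqrt(p * (1 - p) / n)

print(round(moe(1000), 1))  # full 1,000-person sample → 3.1 points
print(round(moe(100), 1))   # a 100-person subgroup inside it → 9.8 points
```

A subgroup that makes up 10 percent of the sample has a margin of error roughly three times as wide as the top line, before any weighting complications.
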

6) Pay attention to systemic trends of over- or underperformance.

I already wrote about how, in 2016 and 2020, Donald Trump overperformed what the polls were showing. Trump supporters were being systematically underrepresented. As a result, the models used by the polling community were skewed Democratic.

We now seem to be entering a phase where Trump and Trumpian candidates are systematically underperforming their polls. This began in 2022, shortly after the Supreme Court overturned Roe v. Wade. To be fair, there haven’t been a ton of elections since then. But in the 2022 midterms, the various special elections of 2023, and now in the Republican primary, Trumpian candidates have performed worse than the polls suggested.

That doesn’t mean that will necessarily be the case in the 2024 presidential election. But it’s enough of a factor that, if I were participating in horse-race speculation, I would take those recent polls that the media is touting as Trump being up 2 to 4 points over Biden and say: no, they’re statistically tied, and in fact, given the current trend with how well the models are capturing the electorate, Biden might even still be a bit ahead.

The media is shouting about how Trump is ahead, yet Biden would probably win if the election were held today? Now you know why I felt it was necessary to write this guide.

How to Read the Polls: tl;dr

Here’s the takeaway:

  • If all the questions and answers in the poll are well constructed …
  • And if everyone has an equal chance of being selected in the survey (or the pollsters properly use weighting to correct for any gaps in coverage) …
  • And if nonrespondents don’t significantly skew the results (or the pollsters properly use weighting to correct for any nonresponse errors) …
  • Then you’ll be able to say, with 95 percent confidence, that the true opinion of the American people falls within a certain percentage range.

That’s not easy! But the good news is: there’s a huge demand for accurate poll results. A lot of smart people have spent decades refining these methods. They still get it wrong sometimes, but in general, they get it right. And when they do get it wrong, mistakes are usually identified and fixed quickly. Most of the time, it’s misunderstanding or misinterpreting what the polls are really telling us, not systemic problems with the polls themselves, that leads people not to trust them.

In short, if we’re smart about how we interpret poll results, we really can get a good sense of what the American people truly want. And by the fall, we’ll know how afraid we really should be that American democracy is about to come to an end. (But seriously, don’t trust the polls until then!)

By Randy Lynn, Ph.D.

Randy Lynn, Ph.D. is a sociologist and author of The Greatest Movement in Human History and Torch the Two-Party System. He lives in Sterling, Virginia with his spouse and two children.
