How did the polls in the United States fare last week? This may seem like a simple question, but depending on what you’re actually asking, your chosen criteria, and possibly even your fundamental beliefs about the human psyche, there are half a dozen equally legitimate answers.
Let’s start with the basics. At the national level, the final polling averages on the eve of the election had Vice-President Kamala Harris winning the popular vote by about one and a half percentage points. As of this writing, Donald Trump is on track to win it by roughly that same margin, making for a combined error of around three points. That is a smaller error than four years ago, and barely exceeds the long-term average.
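For the arithmetically inclined, here is a minimal sketch of that “combined error” calculation, using the approximate figures cited above rather than official results:

```python
# A minimal sketch of the "combined error" arithmetic above, using the
# approximate figures cited in the text (not official vote counts).
predicted_margin = +1.5   # final polling average: Harris ahead by ~1.5 points
actual_margin = -1.5      # provisional result: Trump ahead by ~1.5 points

# The polls missed the margin by the gap between prediction and outcome.
combined_error = abs(predicted_margin - actual_margin)
print(f"Combined error on the national margin: {combined_error:.1f} points")
```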
At the state level, polls were on average closer to the outcome this year than in 2016 or 2020. However, for the first time in 50 years of public polling, the polls in every state underestimated the same candidate: Trump.
But here’s the thing: these same statistics provoke wildly different reactions in different people. For the average left-leaning American who had spent weeks staring at a blue number sitting marginally above a red number, Tuesday’s results were conclusive proof that polls don’t work. The three-point miss might as well have been 20 points.
But from the perspective of pollsters, political scientists and statisticians, the polls performed relatively well. The errors at both the national and state levels were within the margin of error, and the fact that the polls did no worse at capturing opinion in Trump-leaning states than in deep-blue ones, in stark contrast to 2016 and 2020, suggests that the methodological refinements of recent years have worked.
If you’re tempted to scoff at that last paragraph, let me offer you this: US Google searches for terms like “why the polls were wrong” peaked much higher last week and in 2016 than in 2020, even though the polls underestimated Trump by more in 2020.
The reason is pretty obvious, but it has nothing to do with statistics or survey methodology. The human brain is far more comfortable with binaries than with probabilities, so a narrow miss that upends the viewer’s world stings much more than a wider miss that doesn’t.
But I don’t intend to let the industry completely off the hook, and to that end there are two distinct issues that need to be addressed.
The most obvious is that, although the polls did slightly better this year, this was the third consecutive election in which they underestimated Trump. The methodological adjustments that pollsters have made since 2016 have clearly helped, but the underlying problem remains. Whether due to new sources of bias introduced by those adjustments, or to fresh shifts in which types of people respond to surveys, pollsters appear to be running up a down escalator.
The second is a more fundamental issue with the way the figures are presented. It’s true that pollsters and poll aggregators had been giving loud and clear warnings for weeks that a razor-thin margin in the polls could not just conceivably, but quite probably, end in a decisive win for one side or the other. But such a proliferation of health warnings raises the question of whether polls, poll averages and their media coverage are doing more harm than good.
Let’s say that you, the pollster, and I, the journalist, both know that the true margin of error on a poll is, at best, plus or minus three points per candidate. That means a poll showing Candidate A winning by two points is not inconsistent with that candidate losing by four on election day, even if the poll was conducted perfectly. And suppose we also know that humans instinctively dislike uncertainty and will latch on to any concrete number they are given. So who does it help when we highlight a single number?
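To spell out that arithmetic, here is a minimal sketch, assuming (as above) a plus-or-minus-three-point margin of error on each candidate’s share; the candidates and figures are purely illustrative:

```python
# A minimal sketch of the margin-of-error arithmetic described above.
# Assumes +/-3 points of error on each candidate's share, as in the text;
# candidate names and figures are illustrative, not real poll data.
moe_per_candidate = 3.0                # +/- points on each candidate's share
moe_on_margin = 2 * moe_per_candidate  # errors on both candidates can compound

polled_margin = 2.0                    # poll: Candidate A leads by 2 points
low = polled_margin - moe_on_margin    # A could in truth be losing by 4...
high = polled_margin + moe_on_margin   # ...or winning by 8

print(f"A poll showing A +{polled_margin:.0f} is consistent with anything "
      f"from {low:+.0f} to {high:+.0f} on election day.")
```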
If we want to minimize the risk of unpleasant shocks for large sections of society, and want pollsters to have a fair hearing when the results are known, both sides must accept that polls are based on fuzzy ranges, not hard figures.