
How to understand the surveys you are seeing right now

This article is part of The DC Brief, TIME’s politics newsletter. Sign up here to get stories like this sent to your inbox.

When we finally reach the last weekend before Election Day, many of our friends suddenly become poll experts. Whether it’s a political gathering, a book club, or even the line at the supermarket, everyone has only one topic in mind. Did you see that gender gap in the final New York Times poll last week? What about Thursday’s Gallup poll showing voting intensity among Democrats higher than at any time in the last 24 years? But didn’t I hear on NPR that Trump is polling better than any Republican in the last two decades, even better than when he won in 2016? It can be a lot.

For those so inclined, diving down the polling rabbit hole can be a choose-your-own-adventure story of self-assurance, self-torture, or profound confusion. And, to be frank, each path is entirely valid.

Sure, there are many metrics that can be used to evaluate the health and potential of the two presidential campaigns: campaign finance data, advertising strategy, where the candidates are spending their final days. Oh, and don’t even get me started on the imprecise modeling behind early voting numbers.

But in reality, polls are the easiest way to get a feel for the race. In August we published a primer on how to read surveys like a pro. But in the final days of an election cycle like no other, many are wondering if pollsters are getting the presidential race completely wrong…again. Here’s a summary of why polls in 2024 are different than any other year and why that’s creating more confusion about who’s getting it right.

Don’t all the polls show that this is basically a coin toss?

Yes, but no.

With apologies to readers looking for an easy answer, there is none in sight. As Republican pollster Kristen Soltis Anderson notes, the numbers are remarkably consistent across different surveys, even when pollsters follow different sets of assumptions to get there. The Times poll showing the race tied at 48% and the CNN poll showing it tied at 47% may each be accurate, but there are big differences in how they reach similar conclusions.

Put in legal terms, jurors A and B can find someone guilty of a crime, but reach that verdict by prioritizing a different set of facts. That doesn’t mean the defendant isn’t guilty, but each juror’s reasoning can be as true as it is divergent.

Part of this multi-track path to the same end comes down to different polling outfits applying different theories of the electorate. Is Harris changing the electorate in ways never seen before, with dramatic (and as yet unrealized) success among women and college-educated voters? Is she re-forming the old Obama coalition of 2008? Is Trump reviving his base in 2016 fashion, or is he banking on a different coalition that has become more tolerant of his disregard for norms? And should 2020 voting patterns be ignored, given that we were in the middle of a pandemic? All of those scenarios may be true, but to what extent? Different pollsters weigh some of these questions more heavily than others in deciding who will vote.

So, yes, the polls are close. No one in either camp sleeps comfortably these days, if they sleep at all. The candidates are busy for a reason: this may be decided by fewer than 100,000 people across three (as yet unknown) states. And nobody knows who those voters are.

So don’t all these polls use a common baseline?

No. Not even close, if they’re honest. Each polling operation has to use its best understanding of who will actually show up. Typically, as Election Day approaches, pollsters move from the broader universe of registered voters to likely voters, and that’s where a mix of statistical models, historical trends, and more than a little instinct comes in.

Josh Clinton, co-director of Vanderbilt University’s robust polling operation, published an incredibly useful illustration of this challenge. Using a raw data set from a national poll conducted in early October, he found that Harris is ahead by about 6 percentage points. That finding reflects who pollsters were able to reach, which may not accurately reflect who ultimately votes. That’s where each pollster makes different decisions about how to adjust the raw data. When Clinton adjusts the data to fit the 2022 turnout universe, Harris is actually up 8.8 percentage points. Apply 2020’s turnout, and it’s a 9-point race in Harris’s favor. And using 2016 numbers, Harris still wins by 7.3 percentage points.

But this is where things get interesting. If you overlay a model of how many voters identify as Democrats, Republicans, or neither, you can get very different views of the race. If Pew Research Center data on the nation’s electorate is to be believed, Harris’s lead shrinks to 3.9 percentage points if turnout resembles that of 2020. Using Gallup’s snapshot instead, that lead falls to 0.9 percentage points. So you can see how modeling alone, applied to the same raw numbers, can swing this race by 8 points. And that’s just the most basic example of how a tweak here (on a single input) and a bump there across dozens of other factors can reshape the entire picture.
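Under the hood, this kind of adjustment is essentially post-stratification: reweighting each group in the sample so it matches an assumed share of the electorate. Here is a minimal sketch of the idea; every number, group share, and the `weighted_margin` helper are invented for illustration and are not taken from any actual poll.

```python
# Hypothetical sketch of post-stratification weighting.
# The same raw responses, reweighted under two different assumed
# party-ID mixes, produce different margins. All figures invented.

def weighted_margin(sample, assumed_shares):
    """Margin (Harris minus Trump) after reweighting each party-ID
    group from its share of respondents to its assumed share of
    the actual electorate."""
    margin = 0.0
    for group, (sample_share, harris_pct, trump_pct) in sample.items():
        weight = assumed_shares[group] / sample_share
        margin += sample_share * weight * (harris_pct - trump_pct)
    return margin

# Raw sample: (share of respondents, % Harris, % Trump) by party ID.
raw = {
    "Dem": (0.36, 0.95, 0.03),
    "Rep": (0.30, 0.04, 0.94),
    "Ind": (0.34, 0.46, 0.44),
}

# Two competing assumptions about the electorate's party mix.
pew_like = {"Dem": 0.33, "Rep": 0.32, "Ind": 0.35}
gallup_like = {"Dem": 0.30, "Rep": 0.33, "Ind": 0.37}

print(f"Model A margin: {weighted_margin(raw, pew_like):+.1%}")
print(f"Model B margin: {weighted_margin(raw, gallup_like):+.1%}")
```

With these invented inputs, the same interviews show a small Harris lead under one party-ID assumption and a small Trump lead under the other, which is the whole point: the model, not the respondents, moves the number.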

This is happening across every polling team in the political universe, with each group of data nerds looking at the data sets through different lenses. That’s why the same group of voters can tell pollsters the same thing and see themselves reflected in completely different races. There’s a reason we had to show our work in math class; the process matters as much as the answer.

So we shouldn’t compare, say, CNN polls to New York Times polls?

Correct. The best practice is to compare like with like.

This year includes the added twist of Democrats swapping Joe Biden for Kamala Harris as their nominee in July. Basically, most comparisons between polls taken before Biden’s departure and after it are of limited usefulness. The same goes for comparisons between pollsters, since they all make different assumptions about the electorate.

There is also little value in comparing surveys of registered voters and those of likely voters. They are completely different universes.

Wait. Didn’t anyone fix political polls after 2016?

The 2016 polls became a punch line after their misalignment with reality became apparent on election night. After all, Hillary Clinton was thought to be inching toward a clean defeat of Trump. But with the benefit of hindsight, it was pretty clear that pollsters assumed too many college graduates would show up, to name just one of the most obvious mistakes. The pollsters did their best to fix things four years later, but again the polls thought Biden would do better than he did.

Part of this is the Trump effect, which again makes pollsters question themselves and, in particular, which factors matter most in determining voter behavior. A research team at Tufts University did a survey of, well, surveys, and found that some of the biggest changes to back-end models since 2016 have come from giving much more weight to education, voting history, and where voters actually live. They also document a move away from giving too much influence to respondents’ income and marital status. Most pollsters have also adjusted the weight they give to age, race, and gender.

So, yes, pollsters have taken steps to iron out the wrinkles that were so evident in 2016. But this is a science of public opinion that has to incorporate some assumptions. And that’s all they are: best-educated guesses about the universe at play.

(Just to be contrary: a credible counterargument is that the polls in 2016 were not that far off; it’s just that the national polls didn’t match the state-by-state results that mattered most. Clinton allies would prefer to blame pollsters for inflating her voters’ confidence to the point of complacency, but the reality is much more nuanced.)

So you’re saying we should calm down about the polls?

Absolutely. Polls are informative, not predictive. By the time you read them, they are already outdated. Each one makes educated guesses about who will bother to cast a ballot. Almost every crosstab in a pollster’s latest release involves a judgment call, and no one gets them all right.

But let’s be honest: we won’t calm down. It’s just not what armchair experts do. After two (if not four) full years of waiting for this final push, the pull of these numbers is too strong. Obsessing over them may be a waste of time, but it could ultimately have virtues in the most unlikely of ways.

The closeness of the polls can be an opportunity to get more people to vote if they believe their ballot can truly determine the outcome. In that sense, these adjusted polls could be good for the exercise of democracy even as they are garbage for the debate about it. Either way, they are all we are going to talk about for the next few days, and perhaps beyond if the expectations they create land too far from the result. I’m probably as guilty of this as anyone. And no, I probably won’t regret it.

Making sense of what matters in Washington. Subscribe to the DC Brief newsletter.