One of the most common tropes trotted out to defend the idea that Joe Biden and Democrats had a dominant lead ahead of Election Day is that the polling was just so stable for Dems. It didn’t matter what event hit America; polls showed Democrats with a strong lead, and pundits assumed that lead would hold by the time votes were cast.
But instead of proving polling accuracy, all those unflappable surveys should have been a red flag signaling that the data was flawed.
America is in the midst of a pandemic, riots over race and police misconduct, and general unrest. And that’s not including all the mini-episodes of angst we saw this year, from the murder hornets saga to impeachment to the media accusing President Donald Trump of sending the country into a third world war.
If a country that unsettled doesn’t, at any point, show even a little bit of wobbling in the polling data, you should question that data.
I warned in columns for the Conservative Institute back at the start of October that the polling data could be seriously wrong for all the reasons mentioned in the press, from standard errors to the shy-Trump-supporter effect to my preferred answer: an oversampling of Biden voters who answered their home phones because they were working remotely due to COVID-19. As I wrote on Oct. 5:
To the crowd of experts claiming, “the longer Biden leads, the worse it is for Trump,” you should know that’s a model talking — not a real analysis. No model has all the uncertainty of our time baked into it, and if it does, it’s raw guesswork on the part of the statisticians. Saying that is a lot like saying that the Atlanta Falcons have increased odds of winning a game, the longer their lead lasts into the fourth quarter.
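The point in that quote can be made concrete with a toy simulation (my own sketch, with assumed numbers, not any pollster’s actual model): poll-to-poll stability only shrinks sampling noise, while a shared systematic error, such as who answers the phone, never shows up in poll-to-poll variation at all.

```python
import random

random.seed(0)

POLL_AVG = 0.05        # assumed: polls consistently show the leader at +5
SYSTEMATIC_SD = 0.03   # assumed: plausible size of a shared polling error

# However stable the polls look, the shared error is invisible in
# poll-to-poll variation, so stability never reduces it.
trials = 100_000
upsets = 0
for _ in range(trials):
    shared_error = random.gauss(0, SYSTEMATIC_SD)  # e.g., oversampling one side
    true_margin = POLL_AVG - shared_error          # what Election Day delivers
    if true_margin < 0:
        upsets += 1

print(f"Upset rate despite perfectly 'stable' +5 polls: {upsets / trials:.1%}")
```

With these assumed numbers the upset rate lands near 5%, no matter how many steady polls are averaged; only a smaller systematic error, not a longer streak of stable surveys, would change it.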
The warning signs were there, and analysts ignored them. Pollsters and election modelers forecast the outcome they wanted, a Biden blowout with Democrats winning across the board, rather than the most likely one.
Like Ahab fixated on his white whale, the statisticians worked to confirm the reality they wanted instead of questioning the data, as an actual data scientist would.
That refusal to question results led to the polling misses we’ve witnessed this election season. That should make the entire polling and modeling industry look at itself and ask hard questions. But in the days after the election, that doesn’t appear to be happening at all.
And it’s not just the top of the ballot that pollsters missed; the misses were everywhere.
Republicans didn’t just win down-ballot races; they demolished Democrats up and down the ballot in an utter bloodbath.
Democrats aren’t going to win a single new state legislative chamber this year. According to the Wall Street Journal, “Of the 98 partisan chambers, Republicans will control at least 59 next year.” What’s more, “Republicans will control both legislative chambers in 24 of the 36 states in which legislatures draw district lines for U.S. Congress, the state legislature itself, or both,” the Journal adds.
And it wasn’t just legislative chambers. Of the 11 governorships up for grabs in 2020, Republicans won eight; the only losses came in Delaware, North Carolina, and Washington state. Republicans won blowout victories in New Hampshire and in Vermont, the home state of progressive Sen. Bernie Sanders.
Republicans are also poised to curtail the Democratic majority in the House of Representatives. By the time all the votes are counted, the GOP will either retake the chamber or be poised to do so in 2022. And none of this includes the overperformance of many Republican Senate candidates in states across the country.
The one pollster proven right across the battleground states was the Trafalgar Group. In an interview with National Review before the election, its founder, Robert Cahaly, said the problem with pollsters was that they were sampling the wrong people:
Why does that matter? “You end up disproportionately representing the people who will like to talk about politics, which is going to skew toward the very, very conservative and the very, very liberal and the very, very bored,” Cahaly explains. “And the kind of people that win elections are the people in the middle. So I think they miss people in the middle when they do things that way.”
Much like the punditry at national news publications that missed the 2016 and 2020 results, pollsters keep committing these egregious errors. Being off by this much, this consistently, damages the public’s long-term faith in results. Either people will eventually assume polls are wrong about everything, or they’ll start questioning the election results themselves.
It doesn’t matter which happens; either conclusion erodes trust in the system. Polling is a fixable problem, if the people behind it are willing to change.
Election modelers, from FiveThirtyEight to The Economist, are another issue, however. While I’m optimistic that polling can improve, I’m not convinced the modelers will. That presents other problems.
If you’re told continuously that events given 80% to 90% probabilities aren’t happening, do you blame the modeler or the election system? Among Democrats, that looks like a toss-up once you count the anti-Electoral College crowd.
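One way to frame that question is a simple calibration check (illustrative numbers of my own, not FiveThirtyEight’s or The Economist’s actual record): if forecasts in the 80-to-90-percent bucket come true far less often than stated, the modeler, not the election system, owns the miss.

```python
# Hypothetical forecasts: (stated win probability, whether it happened)
forecasts = [
    (0.85, True), (0.90, False), (0.88, True), (0.82, False),
    (0.87, True), (0.91, True), (0.84, False), (0.89, True),
]

stated = sum(p for p, _ in forecasts)        # wins the model promised
observed = sum(won for _, won in forecasts)  # wins that actually happened

print(f"Promised about {stated:.1f} of {len(forecasts)} wins; delivered {observed}.")
# A well-calibrated model's 80-90% calls come true 80-90% of the time;
# here they came true 5 of 8 times (62.5%), a calibration failure.
```

In this made-up record the model promised roughly seven wins and delivered five; a pattern like that across many cycles points at the model, not the voters.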
2020 is the year that the polling missed widely again. That will have consequences down the road.