Advancing Political Polling FAQ

By The Global Strategy Group Research Team

The 2020 elections were a watershed moment, not just for our country but for us as pollsters, as polling results in many cases overstated the Democratic margin of victory. In the months since the election, much has been written about the challenges facing the polling industry. Most recently, some have said the problems are “impossible” to fix. At GSG, we are not that pessimistic. Instead, our team has been hard at work unraveling and addressing the growing challenges facing the accuracy of political polling. Since the 2020 election, we have rigorously examined our own polling, including an analysis of nearly 30,000 interviews, all conducted in key states in the post-Labor Day period. Those efforts uncovered important findings that answer some of our industry’s most pressing questions. We do not yet have the answer to every question, but what follows are answers to many of the questions we receive regularly, outlining where things stand and where we are headed.

1. What was the problem with polling in 2020?

Our analysis, which aligns with many that we have seen in our industry, has found that the main cause of error in 2020 polls was non-response bias: the people who responded to polls were attitudinally different from those who did not, in ways that our normal partisan and demographic controls did not account for.

Turnout error was another, smaller factor: Pre-election projections slightly underestimated Republican turnout. But minor differences between projected and actual electorates are not unique to 2020 — they are inherent in nearly every election. Our analysis suggests that turnout error contributed less to the problem than non-response bias.

2. What was the extent of the problem?

Our analysis of key-state, post-Labor Day interviews found an average of 2.6 points of pro-Biden bias (the direction of the miss) and 3.1 points of error (the absolute size of the miss). Our Senate race numbers were essentially the same in terms of error and bias.

This level of error, a 3.1-point miss, is not on its own deeply problematic, nor is it far out of line with other election cycles. What was problematic, however, is that nearly all polls, including GSG’s and those of other pollsters, missed in the same direction. In other words, almost all polls overstated Biden’s support in some way, so at the aggregate level there was a clear problem (when misses cluster in the same direction, that is called “bias”). You would prefer to see error randomly distributed with no bias, but that is not at all what happened in 2020.
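To make the distinction concrete, below is a minimal sketch in Python, using made-up poll misses rather than GSG data, of how the two quantities are computed and why they can diverge:

```python
# Hypothetical poll misses: poll margin minus actual margin, in points.
# Positive values mean the poll overstated the Democratic margin.
misses = [3.5, 2.1, 4.0, -1.2, 2.6, 3.8, -0.5, 2.9]

# Bias is the signed average: misses in opposite directions cancel out.
bias = sum(misses) / len(misses)

# Error is the average absolute miss: opposite misses do not cancel.
error = sum(abs(m) for m in misses) / len(misses)

print(f"bias:  {bias:+.2f} points")
print(f"error: {error:.2f} points")
```

If misses were scattered evenly around zero, bias would sit near zero even with sizable error; in 2020, nearly every miss shared the same sign, so bias was nearly as large as error.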

3. What are you doing to fix it?

The short answer is that we are putting in place new procedures to better control for non-response bias.

The longer answer is that we are using the findings from GSG’s months-long, deep internal analysis of nearly 30,000 interviews, and collaborating with the broader polling and analytics community, to develop survey weighting and sampling techniques that address the issue. This will involve a special focus on the way we weight partisanship, but it is also likely to include weighting controls that go beyond partisanship and correlate with non-response bias.

Additionally, we are working on a large-scale experimental project with a handful of other top Democratic polling firms to understand more precisely the people we collectively missed in surveys and to identify which voters we would have reached had we gone deeper.

At GSG, we have already made some significant changes to our weighting procedures, while other changes require additional research and collaboration. Changes we are currently implementing or considering include the following (a simplified weighting sketch appears after this list):

  • Weighting on attitudinal partisan variables, and not just voter file partisan variables; the latter has significant measurement error, especially in some states, which we found to be the source of some polling error
  • Weighting on engagement variables to reduce the percentage of hyper-engaged people we get in our polls, which has been found to be a source of some polling error
  • Weighting on new attitudinal benchmarks outside of partisanship that may be responsible for non-response bias
  • Weighting to attitudinal variables obtained from large, aggregated data sets to reduce the variance inherent in a single poll with smaller sample sizes, something we call compositing
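To illustrate the weighting mechanics behind these changes, here is a minimal raking (iterative proportional fitting) sketch in Python. The respondents, variables, and population targets below are all hypothetical; a production weighting scheme would use many more dimensions, real benchmarks, and weight trimming:

```python
# Minimal raking sketch: adjust respondent weights until the weighted
# sample matches population targets on each variable's margins.
# All respondents and targets below are hypothetical.
respondents = [
    {"party_id": "Dem", "engagement": "high"},
    {"party_id": "Dem", "engagement": "high"},
    {"party_id": "Dem", "engagement": "low"},
    {"party_id": "Rep", "engagement": "high"},
    {"party_id": "Rep", "engagement": "low"},
    {"party_id": "Ind", "engagement": "low"},
]

# Hypothetical targets: attitudinal party ID plus an engagement control
# of the kind described in the list above.
targets = {
    "party_id": {"Dem": 0.35, "Rep": 0.35, "Ind": 0.30},
    "engagement": {"high": 0.40, "low": 0.60},
}

weights = [1.0] * len(respondents)

for _ in range(100):
    max_adjust = 0.0
    for var, target in targets.items():
        # Current weighted total of each category on this variable.
        totals = {cat: 0.0 for cat in target}
        for w, r in zip(weights, respondents):
            totals[r[var]] += w
        grand = sum(totals.values())
        # Scale every respondent so this margin matches its target.
        for i, r in enumerate(respondents):
            factor = target[r[var]] / (totals[r[var]] / grand)
            weights[i] *= factor
            max_adjust = max(max_adjust, abs(factor - 1.0))
    if max_adjust < 1e-6:  # converged: every margin matches its target
        break

for r, w in zip(respondents, weights):
    print(r, round(w, 3))
```

Each pass brings one margin in line with its target while slightly disturbing the others; iterating drives all margins to their targets at once. Compositing, in this framing, simply means the attitudinal targets come from large, aggregated data sets rather than a single poll.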

Finally, note that survey weighting is not the only way to fix this problem. Sampling, research design, and mode of interview are all important considerations. GSG is continuing to work through recommendations in those areas.

4. How much did the unique circumstances of 2020 (Trump, pandemic) contribute to error?

The 2020 election was unique in many ways. Though it is difficult to precisely quantify the effect of either Trump or the pandemic, it is credible that both contributed to polling error. Trump antagonized and reduced trust in the media (which the average voter believes is responsible for most polling) and even attacked polling itself. That likely made his most ardent supporters less likely to answer polls. Meanwhile, there were massive partisan differences in the way Democrats and Republicans responded to the pandemic: Democrats were more likely than Republicans to be at home and to be available and eager to take surveys.

That said, while these are credible causes of error in 2020, we are focused on solutions to the underlying problem, non-response bias, more than on the individual specifics of 2020. In 2022 and in future elections, new problems may emerge. As such, GSG strongly believes that we cannot just focus on solving the last problem. Instead, we are focused on combating non-response bias no matter the source or circumstances.

5. Is it easier to poll Democrats than Republicans? Are Trump voters more difficult to poll?

In 2020, yes: it was easier to get Democrats to take our surveys than Republicans. We were getting more Democrats, especially engaged, hyper-partisan Democrats, in our unweighted samples because they were answering polls at greater rates. This was why partisan non-response led to polls biased in Democrats’ favor. However, as we move forward, we need to be sure that we can account for partisan non-response in any direction should new trends emerge.

6. How are you thinking about the midterms in relation to everything you’ve learned?

Midterm elections are very different from presidential elections, especially in terms of partisan turnout. Recent presidential elections have been high-turnout affairs in which both parties’ voters show up, while recent midterm elections have often had a differential turnout dynamic (favoring Democrats in 2018 and 2006 and Republicans in 2014 and 2010).

As such, while turnout error caused by projected electorates differing from actual electorates might not have been a big problem in 2020, it looms much larger in 2022.

It is important to keep this perspective as we head into the midterms. In addition to solving for non-response bias, pollsters will need to think through different partisan turnout scenarios and how they may impact the results.

At GSG, we still plan to provide our clients with our main projection, based on our best estimate of the likely electorate, because we know that part of our job is to provide our best prediction so that clients can make tough decisions. But we also plan to have more discussions about the inherent uncertainty in turnout projections, and we plan to run scenarios and simulations showing how different partisan turnout patterns might impact the results.
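As a toy illustration of that kind of scenario exercise, the sketch below (with hypothetical support rates and electorate compositions throughout) shows how shifting the partisan mix of the electorate moves a topline margin:

```python
# Toy turnout-scenario sketch. Real simulations model turnout at the
# individual voter level; these three partisan blocs are hypothetical.

# Hypothetical Democratic support within each partisan bloc.
support = {"Dem": 0.92, "Rep": 0.05, "Ind": 0.53}

# Hypothetical electorate compositions under different turnout scenarios.
scenarios = {
    "baseline":       {"Dem": 0.36, "Rep": 0.35, "Ind": 0.29},
    "GOP turnout +3": {"Dem": 0.34, "Rep": 0.38, "Ind": 0.28},
    "Dem turnout +3": {"Dem": 0.39, "Rep": 0.33, "Ind": 0.28},
}

for name, mix in scenarios.items():
    dem_share = sum(mix[bloc] * support[bloc] for bloc in mix)
    margin = 2 * dem_share - 1  # two-way margin: Dem share minus the rest
    print(f"{name:14s} Dem two-way margin: {margin:+.1%}")
```

A few points of movement in partisan composition can swing the projected margin by several points, which is why scenarios and simulations are presented alongside a single main projection.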

7. If horse race polling was off, does that mean that the message guidance provided by polling was also off?

A poll being off by a few points does not mean everything within that poll is invalid. When we re-weight polls to correct for the error, we generally see similar subgroup results and similar relative rankings of items within questions and batteries, all of which help guide our strategic recommendations.
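As a toy illustration of that rank-stability check, with hypothetical message scores rather than real poll results:

```python
# Hypothetical "% convincing" scores for a message battery under the
# original weights and under corrected (reweighted) weights.
original = {"economy": 61, "health care": 57, "education": 52, "taxes": 44}
reweighted = {"economy": 58, "health care": 55, "education": 51, "taxes": 45}

def ranking(scores):
    """Order messages from strongest to weakest."""
    return sorted(scores, key=scores.get, reverse=True)

print("original order:  ", ranking(original))
print("reweighted order:", ranking(reweighted))

# Toplines shift by a few points, but the relative ranking, which is
# what drives message guidance, stays the same.
print("same ranking?", ranking(original) == ranking(reweighted))
```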

On the other hand, message-testing is better when it is built off the most accurate polling possible — and that remains our goal. We understand that sometimes decisions are made to lean in or out of the partisan environment, and to make those decisions we must have an accurate read on that partisan environment. In those ways, message guidance does depend on polls being accurate — a big reason why we are taking this accuracy initiative seriously.

In the end, polling remains the best tool we have to get a clear picture of how voters are thinking and what they might respond to in a campaign. It would be a dangerous reaction to 2020 to stop asking voters questions about what they think. Then you’d simply be left with strategy based only on personal views, instincts, and anecdotes.

8. Were there some modes that performed better than others, and how are you thinking about that going forward?

In our initial analysis, and in comparing notes with other pollsters (public and private) who use a variety of modes and techniques to poll voters, it appears that the non-response bias afflicting polls in 2020 was an issue across all modes of interviewing, including the three most common modes used today: live phone calls, texting, and online panel interviewing. We believe that multi-channel polling (polling using a mix of these modes) is important for the future, as different voters communicate and live in different ways. However, mode of interview was likely not a magic bullet that would have solved the problem in 2020, and GSG is still examining this question.

9. Are some states harder to get accurate results in than others?

Yes, it appears that some states had greater accuracy problems in 2020 than others. There are at least two reasons for that. First, we conduct polling off the voter file (a database of all voters in a given state), and the quality of the data on the voter file can differ dramatically from state to state. In particular, some states have better partisan information on the file, meaning that in those states we have a better individual-level sense of who is a Republican and who is a Democrat. Polling tends to be better in states with more accurate partisan information because it is easier to control for non-response bias.

Second, some states have populations that are harder to reach and harder to poll. That may be because they have more transient or newer populations, more language barriers, or simply more of those non-responders who didn’t want to take our polls in 2020. All of those things can make polling in certain places harder.

10. Was polling error worse below the presidential level?

As noted above, our Senate race numbers were essentially the same as our presidential numbers in terms of error and bias. Our initial, ongoing analysis of House races suggests that error and bias there were slightly larger than in the statewide polls measuring presidential and Senate races. In our polling, undecideds in House races were disproportionately Republican, so it is possible that late deciders (or related issues around polling races with lesser-known candidates) are part of why error and bias were larger in House races. In general, however, we believe that non-response bias was the main source of error in House, Senate, and presidential race polling alike.