2020 Watch: Actually, Polls are Usually Right — 2016 Included.
With political horse race polls back in the news (Please, God! No! Noooooo!), you’ve probably heard someone say, “But weren’t the national polls worthless in 2016 when they predicted a big Trump loss? Why should I listen to them now?”
Answer: No, they weren't worthless, and actually, no, they didn't predict a Trump loss at all.
Here’s why.
Polls don’t predict winners. They predict vote shares.
Some of the post-2016 bloviating about the accuracy of polling stems from a fundamental misunderstanding of what polls are designed to do. Election polls predict vote shares. They do not predict a "winner" when the outcome is decided by something other than vote shares, like… say… the Electoral College. Simply put, if the popular vote does not determine the winner, you can't blame a poll for just predicting the popular vote.
National polls predicted the vote shares in 2016 accurately.
Here’s how the national polls measured up in 2016 versus the actual outcome. On Election Day, the Real Clear Politics (“RCP”) polling average (which gathers numerous national polls and averages their totals) showed the following polling averages:
Trump v. Clinton: Clinton +3.3%
Here’s how the actual vote totals came out:
Trump v. Clinton: Clinton +2.1%
(If you’ve been living under a rock, Donald Trump won the presidency despite losing the popular vote. Wow! Crazy huh!)
RESULT: The 2016 polling average was within 1.2 percentage points of the actual vote shares — well within the margin of error of virtually any poll. (Translation: Good job, polls!)
More to the point, which poll was more "accurate"? The Bloomberg poll projecting Clinton with a 3% advantage (off by 0.9%), or the IBD/TIPP Tracking poll projecting Trump with a 2% advantage (off by 4.1%)? Was the IBD/TIPP Tracking poll more accurate because it "picked the winner," even though it was off by a much greater margin?
That's not a rhetorical question. The answer is no. The answer is an ear-splitting, banshee shriek of "NOOOOO." Because Clinton got 2.1% more votes, the further a poll deviated from predicting Clinton +2.1%, the less accurate it was, regardless of who won the election.
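Put differently, a poll's accuracy is just the absolute distance between its predicted margin and the actual margin. A minimal Python sketch of that scoring (the `poll_error` helper is my own illustration; the numbers are the RCP figures quoted above):

```python
# Score each poll by |predicted margin - actual margin|, with a
# Clinton advantage counted as positive and a Trump advantage as
# negative. Which candidate the poll "picked" plays no role.
ACTUAL_MARGIN = 2.1  # Clinton's actual popular-vote margin, in points

def poll_error(predicted_margin: float) -> float:
    """Absolute error of a poll's predicted margin, in points."""
    return round(abs(predicted_margin - ACTUAL_MARGIN), 1)

print(poll_error(3.0))   # Bloomberg, Clinton +3  -> 0.9
print(poll_error(-2.0))  # IBD/TIPP, Trump +2     -> 4.1
```

By this yardstick Bloomberg's 0.9-point miss beats IBD/TIPP's 4.1-point miss, even though only IBD/TIPP "called" the winner.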
The 2016 polling was more accurate than the 2012 polling.
By comparison, the 2016 final polling average was actually more accurate than in 2012. On Election Day 2012, Real Clear Politics showed the following polling averages:
Obama v. Romney: Obama +0.7%
Here’s how the actual vote totals came out:
Obama v. Romney: Obama +3.9%
RESULT: The polling average was off by 3.2 percentage points from the actual vote shares. While 3.2 points often falls within a single poll's margin of error, a difference that large can easily be outcome-determinative in a presidential election.
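The year-to-year comparison is the same calculation applied to the two RCP averages. A quick sketch (the `average_error` helper is hypothetical; the figures are the ones quoted above):

```python
def average_error(predicted_margin: float, actual_margin: float) -> float:
    """Absolute error of a polling average, in percentage points."""
    return round(abs(predicted_margin - actual_margin), 1)

err_2016 = average_error(3.3, 2.1)  # RCP Clinton +3.3 vs. actual +2.1
err_2012 = average_error(0.7, 3.9)  # RCP Obama +0.7 vs. actual +3.9
print(err_2016, err_2012)  # -> 1.2 3.2: the 2016 average was closer
```

The 2016 average missed by 1.2 points; the 2012 average missed by 3.2, despite 2012 being remembered as the year the polls "got it right."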
I know 2012 was a long time ago… but do you remember anyone questioning the fundamental value of polling after the polls were off by 3.2% but "predicted the winner" (huge quotation marks) correctly? I'm sure you don't.
That’s because people don’t understand what polling does. Hopefully, now you do.