This election, polls have taken center stage and have often come under fire.
Donald Trump has cited online polls, for example, only to have them contested as falsified, irrelevant, unethical, or taken out of context. But even more respected polls have been all over the map, with most showing a Clinton lead but by vastly different margins.
What explains this variation? How are polls conducted, and what makes for a trustworthy survey? Here's a look into polling during the 2016 election season.
But first, an introduction.
How Are Polls Conducted?
In 2016, most polls are done either online or over the phone. Pollsters survey a sample — a group meant to represent the larger population — to project how American citizens will vote in November. They come up with unique definitions of their populations: some survey registered voters, others likely voters, and others the adult population. "Likely voters" is an especially tricky category, as pollsters have to define what that means by measuring the enthusiasm of their respondents.
And low response rates make it difficult for pollsters to get a truly random sample, experts said.
"No poll is perfect," said Andrew Gelman, political science and statistics professor at Columbia University. "Response rates are typically less than 10 percent. So every poll needs to adjust the sample to match the population in some way."
Because the samples aren't truly random, sample-based biases taint the data.
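What does "adjusting the sample to match the population" look like in practice? Here is a minimal sketch of demographic weighting, sometimes called post-stratification. The age groups and percentages below are hypothetical, and real pollsters weight on several variables at once (age, race, gender, education and more).

    # A minimal sketch of demographic weighting (post-stratification).
    # All numbers are hypothetical, invented for illustration.

    # Share of each age group in the population (e.g., from census data)...
    population = {"18-29": 0.22, "30-44": 0.25, "45-64": 0.35, "65+": 0.18}

    # ...versus the share of the poll's respondents in each group.
    sample = {"18-29": 0.10, "30-44": 0.20, "45-64": 0.40, "65+": 0.30}

    # Each respondent is weighted by population share / sample share, so
    # underrepresented groups count more and overrepresented groups less.
    weights = {group: population[group] / sample[group] for group in population}

    for group, weight in weights.items():
        print(f"{group}: weight {weight:.2f}")
    # 18-29: weight 2.20  (young respondents are scarce, so each counts more)
    # 65+: weight 0.60    (older respondents are plentiful, so each counts less)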
Polls often differ because their samples vary.
"Who responds to a poll changes from one day to a next," Gelman said. "Different people are home. Different people are likely to respond."
When one of the parties is especially mobilized, its candidate will often experience a bump in the polls that doesn't necessarily represent a change in public opinion. For example, Trump saw a perceived increase in support after the Republican National Convention, and Clinton's lead jumped immediately after the Democratic National Convention.
Polling can also be a self-reinforcing process: if a candidate is thought to be winning, more of his or her supporters will take the time to answer a survey, which shifts the results.
"Recently, there’s been a big shift towards Hillary Clinton in the polls, and I think that does represent a real shift in public opinion, and I think there are people who have changed their vote intention," Gelman said. "But also, now that the news is looking better for Clinton, I think more Clinton supporters are likely to respond to polls. And now that the news is not looking so good for Trump, I think Trump supporters are less likely to respond."
Gelman said this year's election has proved different from those of the past. With Trump's leaked 2005 video footage about sexual assault and the subsequent Republican fallout, things are becoming increasingly unclear.
"It’s really very hard for me as a political scientist to try to identify how important things like a split of the Republican party would be because historically, when we’ve had these kinds of splits, it’s typically been when the economy was going so strongly that basically everybody wanted to stay with the incumbent," Gelman said. "All sorts of things could happen. Presumably the most likely thing is that Clinton will win by a little bit more than 4 percent, but not a landslide. But it’s just hard to know because this is not something that we’ve really seen before."
And now, a deeper look at 2016 polling data, broken into three types: aggregated predictions, statistically relevant polls and unscientific surveys.
1. Aggregated Predictions
Aggregated predictions are not polls but analyses of available polling data, used to predict who is most likely to win the election.
Example: FiveThirtyEight
How It's Done: Nate Silver aggregates polling data to predict the outcome of the election, based on a model set months in advance. He forecasts the probability that each candidate will win in November and offers three versions of the forecast, each a different way to interpret the race.
"It’s one way of us telling readers, 'Hey, we don’t have all the answers on this. Here’s a couple of different ways you can do it,'" said Micah Cohen, politics editor at FiveThirtyEight.
As of Oct. 14, all three of FiveThirtyEight's models give Hillary Clinton more than an 80 percent chance of winning the election.
The three forecasts are based on all polling data that the FiveThirtyEight team considers legitimate. They've banned a few pollsters because of "really compelling evidence that they’re faking polls or that they’re doing something else really shady," according to Cohen.
But FiveThirtyEight doesn't treat all polls equally. Silver has rated each poll, and those with higher grades are weighted more in the model. Cohen explained that grades are based on "how accurate… the pollster (has) been in the past" and "how methodologically sound" the pollster is. Silver relies more heavily on state polls because historically they've been right more often.
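As a rough illustration of what that grade-weighting means, here is a toy weighted poll average. FiveThirtyEight's actual model is far more elaborate (it also adjusts for trend lines, house effects and more), and the polls, numbers and weights below are invented for the example.

    # A toy illustration of grade-weighted poll averaging. Higher-graded
    # pollsters get larger weights, so they pull the average harder.
    # All figures here are invented.
    polls = [
        {"clinton": 48, "trump": 42, "weight": 1.0},  # "A"-rated pollster
        {"clinton": 45, "trump": 44, "weight": 0.6},  # "B"-rated pollster
        {"clinton": 44, "trump": 46, "weight": 0.3},  # "C"-rated pollster
    ]

    total_weight = sum(p["weight"] for p in polls)
    clinton_avg = sum(p["clinton"] * p["weight"] for p in polls) / total_weight
    trump_avg = sum(p["trump"] * p["weight"] for p in polls) / total_weight

    print(f"Clinton {clinton_avg:.1f}, Trump {trump_avg:.1f}")
    # Clinton 46.4, Trump 43.3 -- the "A"-rated poll dominates the average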
The model makes predictions based on likely voters, a category Silver lets the pollsters define for themselves.
Strengths: According to Cohen, "The most basic strength is it does in a systematic and unbiased way what everyone is doing anyway."
Decades before FiveThirtyEight was conceived in 2008, politically active citizens were already trying to combine and decipher polls to predict who would win elections. Silver's model is impartial, and so it should be more accurate than subjective interpretations.
Silver was one of the most accurate forecasters during the 2012 election, predicting the winner of every state in the union correctly.
Weaknesses: Statistical models improve with more data. Because presidential elections happen only every four years, FiveThirtyEight doesn't have much historical data to calibrate its model.
"We don’t know that much about how presidential elections work, and so we’re kind of limited by the sample size," Cohen said.
And then there's the fact that, like many analysts, Silver was blindsided by Trump's capture of the Republican nomination. As Gelman said, this isn't a typical election, and the polling data might not play by the same rules that led to correct FiveThirtyEight predictions in 2008 and 2012.
Similar resources: The Upshot by The New York Times
2. Statistically Relevant Polls
The most common polls during election season are conducted by polling organizations, often with a media partner, to predict the outcome of a race. The polls have a statistical basis, and pollsters typically release details on methodology and an expected margin of error.
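For reference, the textbook margin of error for a simple random sample of size n comes from the formula z * sqrt(p * (1 - p) / n). The sketch below evaluates it for a hypothetical 1,100-person poll; published margins are usually a bit larger, since weighting adds what statisticians call a design effect.

    import math

    # Textbook margin of error for a simple random sample at 95 percent
    # confidence (z = 1.96), evaluated at p = 0.5, the worst case.
    # Real polls report somewhat larger margins to account for weighting.
    def margin_of_error(n, p=0.5, z=1.96):
        return z * math.sqrt(p * (1 - p) / n)

    print(f"{margin_of_error(1100) * 100:.1f} points")  # about 3.0 points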
Example: Marist Institute for Public Opinion Poll
How It’s Done: Marist conducts both state and national polls, with live callers phoning both mobile phones and land lines. Lee M. Miringoff, the institute’s director, said that his team is in the field nearly every day.
Used by NBC News and the Wall Street Journal, the Marist poll earned an "A" in FiveThirtyEight's pollster ratings, correctly calling the winner in 88 percent of the 146 polls Silver's team analyzed.
A new poll released Oct. 10 had Clinton up by 14 points in a two-way race and leading Trump by 11 points when third- and fourth-party candidates were included.
Each poll starts with a sample of approximately 1,100 adults 18 and older. For national polls, Miringoff determines how many voters to call in each state from the state's population and relative weight in the election. His probability model is based on likely voters, so first he must find out whether the person on the line is registered to vote. Then he asks a series of questions to gauge how likely that person is to cast a ballot. Even if someone is unlikely to vote, they're included in the model — their vote just weighs less.
"In polling, not all opinions are created equally," Miringoff said. "The ones who are going to vote are the ones you are most interested in finding out about."
Miringoff checks that his data fits the demography of the U.S. by comparing his figures with census calculations. He emphasized that the polls represent how the American people feel in the moment: a poll taken before one of the debates might not look the same as one taken after.
"It’s all about timing. When you’re dealing with an election, it’s a moving target," he said. "This campaign has been one of ups and downs at different times, usually after an important event."
Strengths: By using two different methods — landlines and cellphones — Miringoff offsets the biases of each (though not the bias inherent in phone-only polling). Younger people are more likely to pick up their iPhones, whereas older voters might still have a landline, so Marist's polling takes into account different demographics based on the media they use. The team can also track how many people own cellphones versus landlines in each state and distribute its calls to reflect that: one state may be 80 percent cellphones and 20 percent landlines, while another is 60 percent and 40 percent.
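A tiny sketch of that kind of allocation, with invented states and phone-type shares:

    # Hypothetical allocation of calls by phone type; the states and
    # shares are invented to mirror the mixes described above.
    state_phone_mix = {
        "State A": {"cell": 0.80, "landline": 0.20},
        "State B": {"cell": 0.60, "landline": 0.40},
    }
    calls_per_state = 200

    for state, mix in state_phone_mix.items():
        cell_calls = round(calls_per_state * mix["cell"])
        landline_calls = calls_per_state - cell_calls
        print(f"{state}: {cell_calls} cell calls, {landline_calls} landline calls")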
Weaknesses: The model takes time and costs money. A post-debate poll, for example, might last four days. Meanwhile, some pollsters are releasing data the night of the debate. Miringoff said that those polls will be skewed, as most responses will come from those impassioned to weigh in after 10:30 p.m. on the East Coast. But they’re fast.
Also, the nonresponse rate (which includes refusals, people who aren't home and numbers that don't work) is quite high. These days, it's hard to get someone to agree to take a survey over the phone. "Clearly it's become a more difficult process," Miringoff said.
Similar resources: Quinnipiac University, Gallup, CBS News/New York Times
Example: UPI/CVoter Poll
How It's Done: The UPI/CVoter poll is one of two mainstream polls that have often predicted a Trump victory or shown a nearly tied race (the other is the University of Southern California/Los Angeles Times poll). Both use last vote recall: pollsters ask respondents who they voted for in the last presidential election to gauge how many voters are switching parties or plan to sit this election out. According to Yashwant Deshmukh of CVoter, last vote recall accounts for the Trump lead in his past predictions. However, UPI's latest data shows Clinton with a comfortable lead.
CVoter has a "C+" on Silver’s pollster ratings.
After using a phone model in 2012, CVoter has moved online for 2016, experimenting with multiple platforms (SurveyMonkey, Google and others) to garner about 250 responses per day. Internet users are given incentives to answer, and booster samples target specific demographics — for example, one survey is in Spanish, exclusively targeting Latino voters.
CVoter measures likely voters by simply asking, "How likely are you to vote?" Its cut-off model removes unlikely and undecided voters from the equation. Like Marist, CVoter polls nationally based on population per state.
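Here is a minimal sketch of a cut-off model in that spirit. The responses are hypothetical; unlike Marist's down-weighting, unlikely and undecided respondents are simply dropped.

    # A sketch of a "cut-off" likely-voter model in the spirit described
    # above: unlikely and undecided respondents are dropped entirely
    # rather than down-weighted. The responses are hypothetical.
    responses = [
        {"choice": "Clinton", "likely_to_vote": True},
        {"choice": "Trump", "likely_to_vote": True},
        {"choice": "Undecided", "likely_to_vote": True},  # cut: undecided
        {"choice": "Clinton", "likely_to_vote": False},   # cut: unlikely voter
    ]

    kept = [r for r in responses
            if r["likely_to_vote"] and r["choice"] != "Undecided"]

    for candidate in ["Clinton", "Trump"]:
        share = sum(r["choice"] == candidate for r in kept) / len(kept)
        print(f"{candidate}: {share:.0%}")
    # Clinton: 50%, Trump: 50%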
Strengths: It’s fast. UPI can update predictions with the data from 250 responses every day.
Weaknesses: Because the poll is conducted online and respondents are compensated in some way, it's subject to participation bias: the people who opt in differ systematically from those who don't, which skews the data.
"It is not a random probability sample," Deshmukh said. "Nobody claims that."
Deshmukh conceded that he’s "not a big fan of online samples," and if possible, he would have chosen a calling model with both landlines and mobiles. However, using automated dialers to call cells is illegal in the United States, and hand-dialing each number would make the process too expensive, he said.
Also, there’s a reason why most pollsters don’t use last vote recall — it relies on people remembering actions from four years ago, and respondents may misreport.
Deshmukh did not directly address his company's "C+" rating on FiveThirtyEight.
Similar resources: YouGov, Reuters/Ipsos, Google Consumer Surveys
3. Unscientific Surveys
Unscientific surveys are Internet-based polls that ask the user, anyone who comes to the site, to indicate a preference. They can quickly get feedback on a real-time event, such as a debate or a political convention.
Example: The First Debate
The day after the first 2016 presidential debate, Trump tweeted that his "movement" had won the night before, including an image of 10 polls that all showed him as the victor. However, national polls conducted during the week after the debate showed a bump in Clinton's overall popularity.
So why did 10 polls indicate that she had lost the debate?
Websites like Drudge Report and CNBC launched surveys to try to gauge how each candidate performed. They were unscientific, in that they used no controls. Forget categories like "likely" or "registered" voters: anyone from around the world could respond, and a user with proxies could vote in the survey multiple times. Also, as Miringoff noted, the East Coast respondents would be only those who were fired up, and they would not be representative of national opinion.
Strengths: Unscientific polls yield nearly immediate results. As Gelman said, "People want to click every day, so you have to have something new."
Weaknesses: There is no reason to trust the results. Without controls, the numbers reflect whoever chose to click, not public opinion.
What It All Means
According to Cohen, data from the last 15 presidential campaigns indicate that polls don't move much between October and Election Day. So based on current polls, the U.S. is more likely to elect its first female president on Nov. 8.
But the final tally will probably be close, Gelman said. In the end, what matters is which "likely voters" actually turn up at the voting booths.
"There is evidence that there's higher turnout in close elections," Gelman said.
And polls are subject to human error and can be wrong, as Cohen pointed out.
"These are tools built by very fallible people," he said.