The demand for public opinion polling has never been greater. Once the purview of political operatives, survey results are now consumed by the masses. High-information voters receive poll result notifications on their smartphones; polling aggregators such as Huffington Post Pollster update their polling charts and graphs hourly; and news organizations such as FiveThirtyEight and The Upshot use political poll results as inputs to complex probabilistic models attempting to predict election outcomes.
But as the demand for survey results grows, it is simultaneously becoming harder and harder to conduct high-quality polls using the traditional and mathematically beautiful survey technique of random sampling. At a meeting last week in Boston of the New England chapter of the American Association for Public Opinion Research, a group of top-tier pollsters weighed in on the problem.
"Using statistical inference to generalize from a small random sample is sort of magical, with a sample of several hundred to a thousand people projecting to the behavior of millions," said pollster Mike Mokrzycki, who runs NBC Election Polling.
Random sampling relies on the fact that any person in the target population of the survey has a known probability of being part of the sample. Traditionally, telephone polls (including WBUR's polls) have relied on such sampling principles as the basis for their polling methods. But changing technologies and societal norms have been making it harder to reach some members of target populations, and harder to assess the probability of reaching each person.
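The "magic" Mokrzycki describes comes from the sampling error formula: a truly random sample of about 1,000 people yields roughly a plus-or-minus 3 point margin of error, regardless of how large the target population is. A minimal sketch of that calculation (using the standard worst-case assumption of a 50/50 split):

```python
import math

def margin_of_error(sample_size: int, p: float = 0.5, z: float = 1.96) -> float:
    """95% margin of error for a simple random sample.

    p = 0.5 is the worst case (maximum variance); z = 1.96 is the
    critical value for a 95% confidence level.
    """
    return z * math.sqrt(p * (1 - p) / sample_size)

# A poll of 1,000 respondents carries roughly a +/-3 point margin of error.
print(round(100 * margin_of_error(1000), 1))  # 3.1
```

Note that the population size never appears in the formula; that is why a thousand respondents can project to millions of voters, so long as the sample really is random.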
Survey response rates have plummeted as household landlines become a thing of the past and caller ID screening lets people answer only calls from numbers they recognize. Low response rates can result in non-response bias, where polling results suffer because non-responders differ from the general population in meaningful ways.
For example, if young people are less likely to respond to a poll and also hold different opinions than older respondents who have higher response rates, the results of the survey may be substantially incorrect. If, within each demographic group, the people who do respond are substantially similar to those who don't, non-response can often be corrected by weighting. But with survey response rates plummeting to the single digits, it is not guaranteed that weighting will continue to be an effective remedy.
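The weighting correction described above can be sketched in a few lines. The age groups and shares below are illustrative numbers, not figures from any actual poll:

```python
# Post-stratification: weight each respondent so the weighted age mix of the
# sample matches the known age mix of the target population.
# All shares here are illustrative, not from any actual survey.

population_share = {"18-34": 0.30, "35-64": 0.50, "65+": 0.20}
sample_share     = {"18-34": 0.10, "35-64": 0.55, "65+": 0.35}  # young people under-respond

weights = {group: population_share[group] / sample_share[group]
           for group in population_share}

# An under-represented group gets a weight above 1; an over-represented
# group gets a weight below 1.
print(round(weights["18-34"], 2))  # 3.0
print(round(weights["65+"], 2))   # 0.57
```

The catch the panelists raised: weighting only helps if the few young people who *do* respond think like the many who don't, and at single-digit response rates that assumption gets harder to defend.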
NBC's Mokrzycki noted that most political polling results have remained remarkably accurate in the face of these changes, with some exceptions.
Questions about IVR
Some pollsters disdain Interactive Voice Response (IVR), or robo-polling. Andrew Smith, director of the UNH Survey Center, said he hands IVR polling calls to his 14-year-old son, citing the IVR firms' lack of concern for quality results. "If their concern about quality is that good, I can do the same," said Smith.
A key limitation of IVR polling is federal law, which prohibits automated calls to mobile phones. Studies have shown that 40 percent of households are cellphone-only, which puts a large segment of the population beyond the reach of IVR pollsters. IVR pollsters try to ensure accurate results by making sure that the respondents they do reach match the demographics of their target population through careful selection or weighting of the results. Some large IVR pollsters, such as Rasmussen and Public Policy Polling, have been supplementing their calls with panels of Internet users to reach cell-only respondents.
The new wave of Internet polling
The new wave of Internet-based panel polling was represented by Brian Schaffner, director of the UMass poll. Schaffner works with online polling company YouGov to conduct his surveys, and he provided an overview of Internet polling techniques. YouGov polling has been adopted by the Huffington Post and The New York Times, but there is concern in the polling community about the methodological changes required by the new medium.
There is no directory of Internet users that allows for true random sampling. Instead, Internet polling firms recruit very large panels (well over 1 million panelists for YouGov in the United States), identify the demographics of the target population, and then match those demographics to actual people from their panel. Schaffner’s results have been promising.
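The matching step can be pictured as follows. This is a toy sketch of the general idea, not YouGov's actual method; the profiles and the distance function are invented for illustration:

```python
# Sample matching, schematically: for each "target" voter profile drawn from
# the population's demographics, pick the closest available panelist.
# Profiles and the distance function are illustrative only.

panel = [
    {"id": 1, "age": 22, "party": "D"},
    {"id": 2, "age": 45, "party": "R"},
    {"id": 3, "age": 67, "party": "I"},
]

def distance(target, panelist):
    # Crude dissimilarity score: age gap plus a flat penalty for a
    # party mismatch. Real matching uses many more variables.
    party_penalty = 0 if target["party"] == panelist["party"] else 50
    return abs(target["age"] - panelist["age"]) + party_penalty

target = {"age": 25, "party": "D"}  # a profile drawn from population demographics
match = min(panel, key=lambda p: distance(target, p))
print(match["id"])  # 1
```

Repeating this for thousands of target profiles produces a matched sample whose demographics mirror the population, which is the panel-based substitute for random selection.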
"When Nate Silver did an analysis of the 2012 election polls he found an average Internet poll error of 2.1 [points]," said Schaffner. The average error of live interviewer phone polls was 3.5 points, and IVR pollsters had an average error of 5.0 points.
Internet polls and live caller polls of the Massachusetts race for governor have been similar to each other, showing the same average: Democrat Martha Coakley up 3 points. IVR polls, on the other hand, have been much more favorable to the GOP, showing Charlie Baker up 3 points, possibly due to the different populations reached by the different polling methods.
Killing the messenger
While political campaigns have always called into question polls that show their candidate down, Douglas Schwartz, director of the Quinnipiac University poll, has found that campaigns have stepped up their attacks in recent years.
"In the past, if we put out numbers that a campaign didn't like, they would call me," said Schwartz. "Their campaign pollster would say, 'What are you guys doing? What's your likely voter model? What's your screen? What is your weighting?' and we would come to understand why we had different numbers.
"Now it's a totally different game," said Schwartz of the testy governor's race in Connecticut. "They are going public. Even the candidate himself attacked the poll. I thought, 'You couldn't even get a flak to do that?' "
However, Schwartz maintained he does not have a problem taking heat from campaigns. “When you are being criticized by both Democrats and Republicans you have to be doing something right,” he said.
Another topic on the agenda was the politics of the 2014 midterm elections.
Professor Smith of the UNH Survey Center said that political parties are like sports teams and people don't often change allegiance, so it is more a matter of who is motivated to vote.
"Turnout, for midterm elections in particular, is key," said Smith. "Who is going to show up?"
Smith is spending much of his time trying to figure out the makeup of the midterm electorate, rather than focusing on voters' or candidates' stances on issues. Given that many voters don't know the candidates, he is a proponent of using generic ballot questions on polls, which ask whether a respondent supports a generic Republican or a generic Democrat.
Smith is also watching what is going on at the national level to motivate or discourage voters. "'Midterm elections come in three varieties for the White House party,'" said Smith, quoting political analyst Charlie Cook, "'bad, really bad and horrific,' and we're probably someplace between really bad and horrific."
Do polls influence primary voters?
During the question and answer portion of the program, panelists were asked about a recent dust-up between Massachusetts politicos in which political science professor Jerold Duquette claimed that the Massachusetts gubernatorial primary election was influenced by highly publicized weekly tracking polls in media outlets such as The Boston Globe. (WBUR released its own tracking poll less than a week before the primary election.)
"There is some very good political science research that shows how viable you think a candidate is affects your vote in a primary," Umass' Schaffner said, "although it is probably not the most important factor."
In evaluating polls, transparency is key
Michael Link, president of the American Association for Public Opinion Research, congratulated the New England chapter on the first meeting of the reconstituted group and provided his take on the major shifts going on in the survey community.
"We are in a very trying time, some might say," said Link. "But we have never lived in a time when people have more opportunities to express themselves, and there are more ways for us to measure public opinion than ever before."
Link said the AAPOR will continue its mission of providing guidance on methods and transparency, while also expanding the organization's scope beyond surveys to any technique for gauging public opinion, including social media analysis and data mining.
"Whether you are producing or consuming public opinion information, how do you know it is right?" asked Link. He then answered: "Standards and transparency are the key."
Brent Benson analyzes politics and public policy in the state of Massachusetts using a quantitative approach on the Mass. Numbers blog.
Correction: An earlier version of this post incorrectly reported Brian Schaffner's first name. We regret the error.