“If my Facebook feed represented political thought in America,” wrote Marjorie Winther recently in Daily Kos, “Bernie Sanders would be president having won 92% of the vote. Trump would have gotten 0%, and 8% would have written in Cubs third baseman Kris Bryant.”
As all of the recent commentary about “filter bubbles” has rightly noted, social media has made us more insular, not less. And the spectacular failure of large-scale, repeated survey research has shaken our faith in the value of quantitative data as a tool for prediction, let alone insight.
In the stunning aftermath of the election, many of us who work in the consumer insights and market research industries are questioning how the pollsters could have been so mistaken. Was the failure to predict Trump’s Electoral College win a problem of methodology? Of bias? Were the wrong questions being asked of the wrong people? Were pollsters following unrepresentative samples or mining the wrong social media sites?
The answer is all of the above, and more. Ultimately everyone -- pollsters, commentators and voters alike -- fell victim to the inherently false promise of binary, quantifiable answers.
So let’s start with methodology, then move on to philosophy.
There was one poll -- frequently derided for its unconventional methodology -- that got it right. The 2016 USC Dornsife/Los Angeles Times Presidential Election Poll differed from conventional polls in three crucial respects:
- Rather than seeking a “fresh sample” each time, the pollsters followed the same group of 3,000 people over time (cycling through week-by-week in groups of 400). That longitudinal approach made it easier to discern changes in attitude over time.
- Rather than asking a closed-ended question about voting intentions, the survey allowed respondents to indicate their probability (from 0 to 100 percent) of voting for a specific candidate. In so doing, it implicitly acknowledged that people are changeable, ambivalent, uncertain. It allowed for nuance.
- Contrary to those researchers who argue that past voting behavior shouldn’t be considered because people don’t accurately report it, the USC poll results were weighted based on how respondents said they voted in 2012. That trust in people’s ability and willingness to share past behavior -- and, more important, to share the narrative they’ve created about their own lives -- made for a richer understanding.
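The probability-and-weighting approach described above can be sketched in a few lines of Python. The respondents, weights, and probabilities here are invented for illustration; they are not the USC poll’s actual data or weighting scheme:

```python
# Illustrative sketch: each respondent reports a 0-100 chance of voting
# for each candidate, and a weight adjusts the sample (e.g., toward
# self-reported 2012 vote shares). All figures below are hypothetical.

respondents = [
    # (weight, prob_candidate_A, prob_candidate_B) -- probabilities in percent
    (1.2, 80, 10),
    (0.9, 30, 60),
    (1.0, 50, 45),
    (0.8,  5, 90),
]

def weighted_share(respondents, index):
    """Weighted mean of one candidate's reported vote probability."""
    total_weight = sum(w for w, *_ in respondents)
    weighted_sum = sum(w * probs[index] for w, *probs in respondents)
    return weighted_sum / total_weight

share_a = weighted_share(respondents, 0)
share_b = weighted_share(respondents, 1)
print(f"Candidate A: {share_a:.1f}%  Candidate B: {share_b:.1f}%")
```

The estimate is a weighted average of probabilities rather than a tally of yes/no answers, so a respondent who is 60/40 between candidates contributes partial support to both -- the nuance the closed-ended question throws away.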
Continuity, nuance, and personal history -- all of these factors combined to create not a snapshot of a moment in time, but a portrait of the complex and conflicted people who were casting votes. Those are the human qualities that 20th-century research so relentlessly and fallaciously tried to filter out. In a quest for rigor, researchers screened out complexity.
But there’s a fourth factor as well that can make or break an insight, and that is trust.
The USC poll also asked respondents whether they were comfortable discussing their voting intentions with others. “The survey found that Trump supporters reported themselves as being slightly more comfortable than Clinton voters in talking to family members and acquaintances about their choice,” the LA Times reported on election eve. “But Trump voters were notably less comfortable about telling a telephone pollster about their vote. Voters who backed a third-party candidate were even less comfortable responding to a poll. Women who said they backed Trump were particularly less likely to say they would be comfortable talking to a pollster about their vote.”
Honest disclosure -- and not just about voting intentions -- is founded on trust, on the belief that we know who we are talking to, that we will not be punished or scammed or, yes, marketed to, as a consequence of speaking our hearts and minds. It flourishes only when we advance from being “respondents” to colleagues, partners and fellow citizens.
As any qualitative researcher knows -- indeed, as any person interested in other people knows -- insight, judgment and predictive intuition spring from relationships. And real, two-way relationships, not just those fueled by emojis and shares.
If we’re going to understand each other, let alone predict one another’s behavior, we have to engage in dialogue, not simply harvest tweets. To really understand others, we need to develop long-term connections with human beings rather than simply capture data points.
But perhaps the biggest lesson to derive from this election is that as companies, consumers and citizens, we must move beyond research. Donald Trump's Electoral College win revealed not just an ideological gap, but an experiential one. Urban and coastal dwellers, used to a diversity of people on their streets and in their schools, voted against Trump’s nativist rhetoric. Rural and suburban voters, less disturbed by what is, for them, the distant threat of racial or ethnic violence, voted for his economic populist rhetoric. Both groups were motivated by emotion, deeply held values and, above all, by the limitations of their own experience.
We fear who and what we don’t know. If we are to prosper together, that’s the challenge we have to overcome. We need to use the real-time, borderless power of technology not merely to broadcast views, but to share the experience that touches hearts and changes minds. We need to move from being questioners and respondents to being co-creators of a future that’s welcoming to us all.