Olivia Goldhill
Cambridge Analytica talks a big talk. “We can use ‘big data’ to understand exactly what messages each specific group within a target audience need to hear,” Alexander Nix, the company’s chief executive, said at a marketing conference last year, according to The Wall Street Journal. Documents circulated by SCL Elections, Cambridge Analytica’s parent company, described it as “experts in measurable behavioral change.” The company claimed its methodology “enables us to understand how people think and identify what it would take to change their mindsets and associated voting patterns.”
Can Cambridge Analytica understand people? Yes. Online behavior reveals a huge amount of information, and it’s perfectly possible to analyze Facebook activity to determine everything from health and personality type to political leanings and willingness to vote. If the company did obtain a comprehensive set of user data from Facebook, as has been reported, then it may have gained unique insight into what makes people vote and how. “Facebook allowed them to combine different data sources in a way that allowed them to understand voters maybe better than voters themselves did,” says Dietram Scheufele, science communication professor at the University of Wisconsin-Madison.
Such knowledge would allow for precise, targeted advertising, which, of course, is exactly what Facebook sells its own advertisers, like Procter & Gamble. “Using Facebook words and likes, I can tell a lot about your political orientation, and hence show you an ad you’re likely to respond to (what’s your biggest concern: guns, gays, greens),” Lyle Ungar, a University of Pennsylvania professor who researches the psychology of social media use, writes in an email.
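To make the kind of modeling Ungar describes concrete, here is a minimal, purely illustrative sketch of how a classifier might infer political orientation from page “likes.” The page names, labels, and training data are hypothetical, and this is not Cambridge Analytica’s or Facebook’s actual method.

```python
# Illustrative sketch only: a toy classifier in the spirit of predicting
# political orientation from page "likes". All page names and labels are
# hypothetical; this is not any company's real pipeline.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Each "document" is one user's liked pages, joined into a single string.
users_likes = [
    "national_rifle_association country_music nascar",
    "sierra_club npr farmers_market",
    "fox_news hunting_club church_group",
    "planned_parenthood indie_films yoga_studio",
]
# Hypothetical self-reported orientation labels used to train the model.
labels = ["conservative", "liberal", "conservative", "liberal"]

# One-hot encode which pages each user likes, then fit a simple classifier.
vectorizer = CountVectorizer(binary=True)
X = vectorizer.fit_transform(users_likes)
model = LogisticRegression().fit(X, labels)

# Score a new user; the predicted label would decide which ad to show them.
new_user = vectorizer.transform(["npr farmers_market hiking_group"])
print(model.predict(new_user))        # e.g. ['liberal']
print(model.predict_proba(new_user))  # class probabilities used for targeting
```

In practice, published academic work along these lines used far larger like-matrices and dimensionality reduction, but the underlying logic of mapping behavioral traces to a predicted trait is the same.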
But after understanding the nuances of individuals and paying for precisely targeted marketing, do these advertisements change minds? It’s unlikely. We’re not as manipulable as Cambridge Analytica would like to believe.
Scheufele points to foundational political science research from Columbia University sociologist Paul Lazarsfeld. “They showed in the 1940s that most campaign effects are really reinforcement effects,” he explains. Once someone already holds an opinion, they’ll buy into messages that support their pre-existing view. But ads don’t really make us start thinking differently.
“Very rarely will I sit there and say, ‘I’m a Clinton supporter, now I’m seeing all these pro-Trump messages, let me start voting Trump,’” says Scheufele. “What happens instead is it really reinforces and mobilizes groups of voters. The effects are somewhat limited.”
Such advertising isn’t entirely inconsequential. The early primaries can be decided by a relatively small number of voters, and mobilizing particular groups can have a decisive effect. But, contrary to what Cambridge Analytica might suggest, there’s no precise, detailed science showing that serving particular ads to certain personality types at a specific moment will reliably produce a powerful effect. After all, Cambridge Analytica, hired by the Ted Cruz campaign, failed to make him president.
The research conducted by Cambridge Analytica cannot be replicated in a scientifically solid manner, says Scheufele, not least because both the data and algorithms it used are constantly changing. Facebook alters its algorithm continually, and new users leave and join all the time.
“The claim to effectiveness is largely unproven,” says Scheufele. “There’s little scientific research. There cannot be. If I wanted to replicate the kind of work that Cambridge Analytica claims to have, I wouldn’t be able to. The algorithms that led to their conclusions no longer exist. The data has changed, the population has changed and so on.”
No credible academic paper supports Cambridge Analytica’s suggestion that it can use psychological techniques to get people to vote for a particular candidate. Psychologists in the field have well-educated guesses about the specific papers the Cambridge Analytica scientists relied on, and none of those papers suggest that the level of manipulation the data company promised is possible.
Much of Cambridge Analytica’s work has been attempted by other political operations, notes Scheufele, including Barack Obama’s campaigns. His 2012 campaign hired “predictive modeling and data mining scientists,” according to job advertisements, which read: “Modeling analysts are charged with predicting the behavior of the American electorate. These models will be instrumental in helping the campaign determine which voters to target for turnout and persuasion efforts, where to buy advertising and how to best approach digital media.” Obama’s team even placed ads inside video games during the 2008 election.
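A rough sketch of the kind of voter scoring that job ad describes: rank voters by modeled turnout and support probabilities, then decide who gets a mobilization contact versus a persuasion ad. All field names, thresholds, and scores below are hypothetical, not any campaign’s actual model.

```python
# Hypothetical voter-file scoring: combine a modeled turnout probability and
# a modeled support probability to choose an outreach strategy per voter.
from dataclasses import dataclass

@dataclass
class Voter:
    voter_id: str
    p_turnout: float   # modeled probability the person votes at all
    p_support: float   # modeled probability they back our candidate

def assign_contact(voter: Voter) -> str:
    """Pick an outreach strategy from the two modeled probabilities."""
    if voter.p_support > 0.7 and voter.p_turnout < 0.5:
        return "turnout"      # likely supporter who may stay home: mobilize
    if 0.4 <= voter.p_support <= 0.6 and voter.p_turnout > 0.5:
        return "persuasion"   # likely voter who is genuinely undecided
    return "no contact"       # safe supporters and firm opponents are skipped

voters = [
    Voter("A001", p_turnout=0.3, p_support=0.8),
    Voter("A002", p_turnout=0.9, p_support=0.5),
    Voter("A003", p_turnout=0.9, p_support=0.1),
]
for v in voters:
    print(v.voter_id, assign_contact(v))
```

The point of the sketch is the distinction Scheufele draws: most of the measurable payoff sits in the “turnout” bucket, mobilizing people who already agree, rather than in converting opponents.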
The basic idea behind targeting specific groups is quite old: Procter & Gamble sponsored the creation of afternoon television “soap operas,” says Scheufele, because the company quite literally wanted to sell soap to a certain audience, women doing housework at home, and so created the shows to attract that demographic.
Cambridge Analytica—and Facebook itself—simply takes this to the next level. It knows about us and our social groups—which is crucial, as social contagion has a massive influence on behavior. “That’s why Google has forever tried to buy some social network that was actually successful,” says Scheufele. “They tried Orkut, they tried Google Plus, they bought Waze for a billion dollars even though they already had Google Maps. Why? Because they want not just data on me but on all the friends surrounding me because that allows me to target much more precisely my potential vulnerabilities.”
Social media has allowed advertisers to target us and know more about us than ever before. “It’s the sociology and the psychology together,” says Scheufele. “We’re no longer targeting segments, we’re targeting individuals.” But the psychological techniques used to influence us are neither as precise nor as powerful as Cambridge Analytica bosses have promised. Yet.