By Adrian Chen, Nathan Heller, Andrew Marantz, and Anna Wiener
Last weekend, a pair of exposés in the Times and the Guardian revealed that Cambridge Analytica, the U.K.-based data-mining firm that consulted on Donald Trump’s Presidential campaign, not only used Facebook to harvest demographic information on tens of millions of Americans—something we’ve known since 2015—but also may have acquired and retained that information in violation of Facebook’s terms of service. The harvesting was reportedly carried out in 2014 by Aleksandr Kogan, a lecturer in psychology at the University of Cambridge, using a Facebook app, which was downloaded by about three hundred thousand users. At the time, Facebook’s data-sharing policies were far more permissive than they are now: simply by authorizing an app, users could give developers access not only to their own data—photos, work histories, birthdays, religious and political affiliations—but also to the data of all their friends.
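To make the mechanism concrete, here is a rough sketch, in Python, of the sort of request a third-party app could make once a single user authorized it under the old model. The endpoint and field names approximate the long-retired v1.0 Graph A.P.I. and should be read as illustrative, not exact; the token is a placeholder.

```python
# Illustrative only: approximates the pre-2015 permission model, under which
# an authorized app could read profile fields not just for the authorizing user
# but for that user's friends as well. Endpoint and field names are rough
# approximations of the retired v1.0 Graph API, not an exact reconstruction.
import requests

ACCESS_TOKEN = "token-granted-when-one-user-authorized-the-app"  # placeholder

# The authorizing user's own profile fields...
me = requests.get(
    "https://graph.facebook.com/v1.0/me",
    params={"access_token": ACCESS_TOKEN, "fields": "id,name,birthday,work,religion"},
).json()

# ...and, under the old rules, basic fields for every one of their friends,
# none of whom ever installed the app themselves.
friends = requests.get(
    "https://graph.facebook.com/v1.0/me/friends",
    params={"access_token": ACCESS_TOKEN, "fields": "id,name,birthday,location"},
).json().get("data", [])

print(f"{me.get('name')} opted in; {len(friends)} friends' records came along with them.")
```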
Facebook has bristled at the suggestion that what Kogan and Cambridge Analytica did constitutes a breach. “People knowingly provided their information, no systems were infiltrated, and no passwords or sensitive pieces of information were stolen or hacked,” Paul Grewal, the company’s deputy general counsel, stated last Saturday. But the furor has continued, amid reports that Stephen Bannon, the former chief strategist of Trump’s campaign, oversaw the collection of the data and used it to craft messaging for Trump’s Presidential bid. On Wednesday, Facebook’s C.E.O., Mark Zuckerberg, addressed the “Cambridge Analytica situation” in a post, writing, “We will learn from this experience to secure our platform further and make our community safer for everyone going forward.”
It wasn’t Zuckerberg’s first mea culpa. In the past eighteen months, thanks in no small part to Trump’s victory, he and other Silicon Valley leaders have been forced to reckon with the sometimes negative role that their companies play in Americans’ lives—fake news, ideological echo chambers, Russian bots. Now Facebook and the public face two crucial questions: How did we get here? And can the platform be fixed?
We asked a handful of writers who have covered technology for The New Yorker—Adrian Chen, Nathan Heller, Andrew Marantz, and Anna Wiener—to discuss these questions over e-mail and suggest something like a way forward. We began with Zuckerberg’s recent post. The participants’ remarks have been edited for length and clarity.
Adrian Chen: Zuckerberg’s apology jumps between two distinct issues—one technical, one human—in a way that gives me whiplash. The first issue he lays out and shuts down with aplomb. He explains that, in 2014, Facebook made it so that third-party apps like Kogan’s could harvest the data of users’ friends only if those friends also opted into the app. If that were the only problem, then this would be an open-and-shut case. Alas, there’s also the human problem—the problem of shady developers. The only thing that Zuckerberg offers here is harm reduction. Going forward, he writes, Facebook will “require developers to not only get approval but also sign a contract in order to ask anyone for access to their posts or other private data.” Of course, Cambridge Analytica formally certified to Facebook that it had destroyed the data from Kogan’s app but, according to the Times report, may not have actually done so. The larger problem, though, is that the winding path the data took, from Facebook to Kogan to Cambridge Analytica, suggests that protecting users’ information once it is in the developer ecosystem is incredibly difficult. You get a real putting-the-genie-back-in-the-bottle feeling.
Maybe Zuckerberg hopes that, by laying out this narrative, he’ll offer a picture of a technical loophole that has already been closed. But the human vulnerability is as present as ever. The only good news, it seems, is that it will be a little less violating next time.
Anna Wiener: That statement! There’s so much going on there, and also not that much at all. It reads to me like a postmortem for a software project, run through the communications and legal departments. It’s a gesture at transparency, but it’s very slippery.
Adrian, I think you’re exactly right that this is both a technical problem and a human problem, and that Zuckerberg is pushing the narrative of bad actors who exploited a loophole. But if we can call it a loophole at all, then it’s a policy loophole: Facebook was operating exactly as it was intended to. It was and is an ad network. The scope of the metadata that developers could harvest (and retain) probably isn’t surprising to anyone who has worked in ad tech, or at any tech company, really. Facebook trusted developers to do the right thing, and I think this reliance on good faith—a phrase that gets a lot of exercise in the tech industry—tracks with a sort of tech-first, developer-is-king mind-set.
In some ways, this trust in developers is a product of carelessness, but it’s also a product of a lack of imagination: it rests on the assumption that what begins as a technical endeavor remains a technical endeavor. It also speaks to a greater tension in the industry, I think, between technical interests (what’s exciting, new, useful for developers) and the social impact of these products. I don’t know how software is built at Facebook, but I imagine that the engineering team working on the Graph A.P.I., a developer tool that enables interaction with the platform’s user relationships, probably wasn’t considering the ways in which metadata could be exploited. It’s not necessarily their job to hypothesize about developers who might create, say, fifteen apps, then correlate the data sets in order to build out comprehensive user profiles. That said, maybe it should be the job of the product-management team. I don’t mean to lean too heavily on conjecture; Facebook is a black box, and it’s nearly impossible to know the company’s internal politics.
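To illustrate the sort of correlation Anna describes, here is a minimal, entirely hypothetical sketch in Python; none of it is Facebook code. Each dictionary stands in for a separate app’s data export, keyed by the same made-up user ID, and merging them yields a fuller profile than any single app ever collected on its own.

```python
# Hypothetical sketch: three invented app exports, each holding a different
# slice of metadata about the same user, merged into one composite profile.
from collections import defaultdict

quiz_app      = {"user_123": {"age": 34, "pages_liked": ["cooking", "local news"]}}
horoscope_app = {"user_123": {"hometown": "Dayton, OH", "relationship": "married"}}
game_app      = {"user_123": {"active_hours": "late night", "friend_count": 412}}

profiles = defaultdict(dict)
for export in (quiz_app, horoscope_app, game_app):
    for user_id, fields in export.items():
        profiles[user_id].update(fields)  # each app contributes its slice

print(profiles["user_123"])
# {'age': 34, 'pages_liked': ['cooking', 'local news'], 'hometown': 'Dayton, OH',
#  'relationship': 'married', 'active_hours': 'late night', 'friend_count': 412}
```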
In any case, the underlying issues aren’t specific to Facebook. The question of good faith is an industry-wide problem. Data retention is an industry-wide problem. Transparency is touted as a virtue in Silicon Valley, but when it comes to the end user, transparency is still treated as more of a privilege than a right.
Andrew Marantz: To Adrian’s point: harm reduction is better than nothing. I’m all for harm reduction. And yet I agree with both of you that the human problem isn’t going away, because the human problem is: humans. We’ll eventually have driverless cars, even if a few pedestrians get killed in the process; we’ll eventually have bankless currency, whether it drives the banks out of business or not; but we’ll never have personless social media. (Or, if we do, it won’t be profitable for long; advertisers can’t sell widgets to scripts and bots.)
Zuckerberg, in his recent apology mini-tour, left behind a lot of tea leaves. The statement that most piqued my interest was this one, from his interview on CNN: “For most of the last ten years, this idea that the world should be more connected was not very controversial. And now I think that there are starting to be some people that question whether that is good.” As I noted in a recent piece about Reddit, Facebook’s mission, for most of its lifespan, was to “make the world more open and connected.” (That changed slightly last June.) Underlying the mission was a tacit assumption about human nature—that people are basically trustworthy and, therefore, that a more open and connected world will naturally, perhaps automatically, become a better one. Now Zuckerberg is prepared to make that assumption explicit and to admit that “some people” are starting to lose faith in it. Who are “some people”? Is Zuckerberg one of them? Does he find them persuasive? I want to know more.
Are human beings essentially good? Are they, to use Adrian’s word, essentially shady? The fact that we can’t know the answer is, I think, part of the answer. When you place an enormous bet on the kindness of strangers, you might be unpleasantly surprised. And you might break a few democracies along the way.
Nathan Heller: Human nature, indeed. It occurs to me that a lot of this slices into three big problems, and that they start to indicate what Facebook might do better going forward.
First, the trust problem. The good-faith presumption you were pinpointing, Anna, obviously can’t hold in the eyes of users. There have been repeated issues. It doesn’t help that Facebook is commercially oriented—and that few users understand what, data-wise, goes on under its hood. Say the transmission goes out on your car, and you take it to your mechanic, and the mechanic says, “Christ! Let’s get this fixed,” and spends a few days with it. Then, three months later, the transmission goes out again, and the mechanic says, “Christ! Let’s get this fixed,” and you pay for the repair again. And then it happens a third time! You would not be crazy to suspect that your mechanic is profiting from oversight. How can the mechanic win back your trust? Well, maybe the third repair is free. Should Facebook consider a step plainly against its commercial interests, as a show of good faith?
Second, the culture problem. As you suggest, Andrew, since people can be trustworthy and shady, both possibilities should probably be built into planning. Recently, for a piece, I talked with some people at data-security companies, and the thing they kept emphasizing was that you can’t completely prevent data breaches. They happen. What really matters is the response: how quickly you know about them, how comprehensively and transparently you react. Our very brilliant sawbones colleague Atul Gawande once gave a speech about surgical rescue. It turns out that complication rates at all hospitals are basically the same, and what differs—what makes people die less in some than in others—is “rescue” rates. Facebook seems to have a bad rescue culture. It notices (or publicly acknowledges) major problems too late. It tries to do things with scrap following an explosion.
Third, the conceptual problem—and I think this really goes to the heart of the human-technical fuzziness you point out, Adrian. Platforms are always talking about scalability, but they usually mean it only in a narrow business sense. (“Social,” for them, is a structural term, not a human one.) A platform with 2.2 billion monthly active users should probably be assessing scalability in some broader frame. What about societal scalability? Facebook’s year of problems suggests that it’s not woo-woo to think this way; for all its cogitation, the tech world rarely frets about second-order societal effects, let alone third-order and fourth-order. What if Facebook convened a cabinet of unfriendly social economists, media scholars, data historians, whatever—people who think systemically in other frames—and let them kick tires and raise a fuss? This strikes me as only a medium-insane idea.
Finally, I just want to say, lest I seem to run negative, that I was strangely charmed by Zuckerberg’s as-a-father-of-daughters answer. Say what you will; it’s on Zeitgeist.
Anna Wiener: Nathan, I think your point about Facebook’s commercial orientation is really important. Facebook’s customers are not its users. It’s a developer-oriented attention magnet that makes its money from advertisers based on the strength of its users’ data. For Facebook to truly prioritize user privacy could mean the collapse of its revenue engine. So when Zuckerberg says, “We have a responsibility to protect your data, and if we can’t then we don’t deserve to serve you,” it’s very strange, because it assumes that Facebook’s primary orientation is toward users. Zuckerberg runs a business, not a community; my understanding is that Facebook sees itself as a software company, not a social institution, and behaves accordingly. (Also, I was tripped up by “deserve”: Facebook is a default. Aside from Instagram, a social network owned by Facebook, or a revival of Path, users don’t have much of a choice. Legislation recently passed in Congress is likely to compound this problem.)
To “fix” Facebook would require a decision on Facebook’s part about whom the company serves. It’s now in the unenviable (if totally self-inflicted) position of protecting its users from its customers. If Facebook users could see themselves as advertisers likely do, they would probably modify their sharing behaviors. Can the platform be both a functional social network and a lucrative ad network?
Andrew Marantz: Nathan asks whether Facebook might act against its commercial self-interest as a show of good faith. In January, the company announced a change to its News Feed algorithm: it would, somewhat ironically, show users less news, or at least less news of the professionally produced, non-personal variety. (It’s hard to tell real news from fake news, and even the real news is a bummer these days.) “I expect the time people spend on Facebook and some measures of engagement will go down,” Zuckerberg wrote in a post explaining the change. Translation: We’ll make less money, at least in the short term. And yet, he added, “if we do the right thing, I believe that will be good for our community and our business over the long term.”
So was this a show of good faith, or just a canny long-term strategy? Does it go any distance, Anna, toward making Facebook seem like less of a software company and more of a social institution? Companies are designed to make money, but the people who run them are not immune to such human motivations as shame and pride. Adam Mosseri, the head of News Feed, was recently asked about Facebook’s role in exacerbating violence against Myanmar’s Rohingya minority. “We lose some sleep over this,” he responded. That’s believable, no?
Adrian Chen: I don’t buy the idea that whenever Facebook makes a decision that lowers its revenue it is some meaningful sign of social responsibility, or that it will necessarily lead to a better outcome. I’m thinking of how Zuckerberg, for the first years of Facebook’s existence, absolutely refused to take advertising, because it might corrupt the network. He lost a ton of money! The story is told all the time to frame Zuckerberg as a sort of reluctant capitalist. But now he’s out there defending the advertising model as a way to bring Facebook to the masses. Maybe I’m too cynical, but it seems as though Facebook has been very good at spinning whatever design or business decision is most advantageous to it at the time as good for the world, even as it grows bigger and more powerful, less actually accountable to the users it supposedly serves. I can’t be bothered with the performance of altruism any more.
Anna Wiener: I agree with you, Adrian: there’s nothing morally superior about making product changes that lead to short-term revenue dips. I do want to pick up on something that Andrew brought up—Adam Mosseri’s comment about losing sleep. I imagine that Facebook largely employs smart, thoughtful people who want to do the right thing. For me, the question is who, internally, is empowered to make product and policy decisions. I suspect transitioning from operating primarily as a software company to operating as a social institution would require an audit of the organizational DNA. In an interview yesterday, with Recode, Zuckerberg addressed Facebook’s challenges around content and free speech: “Things like, ‘Where’s the line on hate speech?’ I mean, who chose me to be the person that did that? I guess I have to, because we’re here now, but I’d rather not.” I’d argue that Zuckerberg himself doesn’t have to be the person making those decisions. Bring in some new voices, and empower them. Cede the reins a bit. The traditional hierarchy of the average software company might have its limits when it comes to Facebook.
Andrew Marantz: I should point out, as long as we’re talking about cynicism and who is empowered to make decisions within the company, that the Times recently followed up on that Mosseri comment by asking Zuckerberg directly, “Are you losing any sleep? Do you feel guilty about the role Facebook is playing in the world?” Zuckerberg’s response was more than two hundred words long, but it basically amounted to, “Nah, not really.”
Nathan Heller: “Every man’s insomnia is as different from his neighbor’s as are their daytime hopes and aspirations.” That’s F. Scott Fitzgerald, from his cracked period. An emphasis may fall on the difference in hope.
You make an acute point, Anna—and I certainly don’t think you’re cynical, Adrian. Facebook is a big company with an even bigger platform; it has commercial momentum and pressures presumably larger than any one person’s general good intent. How do you get the apparatus on the right side of your supposed values? Andrew, I think this pertains to the January News Feed change. I guess I should have said “plainly against its commercial interests and in support of a long-term benefit for users” (who, as you point out, Anna, have had mostly sweet nothings whispered in their ears so far). Surely, for users, the best solution to fake news isn’t less news. A show of good faith, I’d imagine, would require a deeper cut than that.
I like that we’ve landed on the phrase “social responsibility.” It seems like possibly the core of our concerns. What does social responsibility mean for a tech giant like Facebook (but not just for Facebook)? We haven’t really discussed regulatory parameters, but, in a sense, responsibility parameters come first. There has been a tendency in Silicon Valley to think of social responsibility as something on top of the platform: a mission, or a service base, or just something that you do with all your cash. I wonder whether, through Facebook’s missteps, we’re arriving at a reckoning with the idea that social responsibility is inherent in the nitty-gritty of a platform itself: how information travels, how its transfer is guarded, how the platform’s algorithms are designed. That would be a very different notion from what has traditionally obtained, certainly in the public eye. It’s like a social ethics of coding. Is this a healthy, useful reckoning to come to? I’d say yes.
Adrian Chen joined The New Yorker as a staff writer in 2016.
Nathan Heller began contributing to The New Yorker in 2011, and joined the magazine as a staff writer in 2013.
Andrew Marantz, a contributing editor, has written for The New Yorker since 2011.