By Emily Taylor
Omnes cives Googlani sumus. We are all Google citizens now. Google has colonized more countries than the Ancient Romans. In 95 percent of countries, either Google or YouTube (which, of course, is owned by Google) is the most popular website. But instead of taking hundreds of years, Google has done it in a couple of decades without firing a single shot. We were lulled into submission by the digital equivalent of “bread and circuses”: cat videos and PewDiePie.
How did this happen? How can we be sure that Google and other big tech companies will safeguard our rights and freedoms like the cheerful buddies they present themselves as? What can be done to hold big tech to account?
Within 25 years of the invention of the World Wide Web, a handful of private (mostly Silicon Valley) companies have become dominant. Some—Microsoft, Amazon, Apple—have achieved market power through a familiar route: selling products or services for money. Others, like Google and Facebook, have a different kind of power, one that we’re only beginning to understand. They have conquered the world through what Shoshana Zuboff, a professor at Harvard Business School, calls surveillance capitalism, a big data world in which we’re the products.
In her magisterial SPQR, historian Mary Beard considers why the Roman Republic collapsed, despite its mechanisms to rotate officeholders and avoid concentrations of power. Beard concludes that Ancient Rome’s governance structures simply didn’t scale. They were not robust enough to cope with the demands of running a global empire.
Similarly, international governance structures are too feeble to cope with the impact of the internet on societies. So we’ve ceded that power to corporates.
The Google empire has been created by a mixture of great products, corporate ambition, and first-mover advantage. So far, so normal. But governments are contributing by handing over to tech companies stuff that’s too difficult or too expensive, like collecting surveillance data or mediating cross-border freedom of expression issues. We as consumers also play our part, as Al Gore noted at a conference in 2014: “Every time we—collectively—have had a choice between convenience and privacy/security, we’ve chosen convenience.”
For instance: Have you ever read your favorite providers’ terms of service?
When we click “I agree,” we hand over the right to track us, to profile us, to scan our email and chat content; to remove, edit, or share and sell our content without giving us a cut of the revenues. Facebook “likes” alone can reveal an individual’s gender, age, race, sexuality, and political and religious views. Target correctly inferred that a teenage girl was pregnant based on big data profiling—as her father found out when he phoned to complain that she’d been sent coupons for baby products.
We don’t know—because we’re rarely told—how long all this data will hang around. Our words and clicks today may be resurrected in the future, and conclusions drawn by advertisers or governments. Knowing that President-elect Trump will shortly have access to NSA data may or may not renew concerns about the “collect it all” strategies that have shaped counter-terrorism since 2001.
Whoever’s in charge of the free world, we should worry about the powerful alignment of state and corporate interests, and how enmeshed private companies are becoming in the machinery of law enforcement. As Bruce Schneier put it in Data and Goliath: “As long as these companies are already engaging in mass surveillance of their customers and users, it’s easier for them to comply with government demands and share the wealth with the NSA.” It works for everybody, apart from individual citizens.
Today’s popular platforms are increasingly forced to take on roles that used to be done by states. A European court judgment forced Google to implement the controversial right to be forgotten, a process for users to request that historic materials about them be suppressed from search results. It’s not so much the right to be forgotten, as the right not to be found. Users fill in a form, and Google decides. So far, Google has received nearly 600,000 requests under the scheme. Despite being a quasi-judicial process that has an impact on fundamental rights such as access to knowledge, freedom of expression and privacy, Google’s right to be forgotten falls way short of the rule of law gold standard. We don’t know who takes the decisions. There are no public judgments, no appeal process, no evidence about decision-makers’ qualifications or freedom from conflicts of interest.
Another area where big tech impacts our fundamental rights is content moderation. Despite a sincere commitment to free expression, platforms like Facebook and Twitter have found themselves drawn into making decisions about taking down content. That’s why, as Adrian Chen memorably put it in Wired, you don’t see “dick pics and beheadings” in your news feeds. The companies—no doubt uncomfortable that they’ve ended up with de facto censorship powers—are rather secretive about their processes, their workforces, and the criteria they use to remove content.
Sure, we need processes that work at internet speed, not the snail’s pace of most judicial systems, but it’s also vital that there is transparency, due process, and a right of appeal. Decisions about censorship or copyright infringement can be nuanced, and are highly dependent on culture and context. Big tech and their hidden workforces have ended up with these tasks for lack of a better alternative. This is dangerous for democracy and human rights: Private companies have different priorities than, say, the courts. They are duty bound to maximize profit and minimize potential liability. That doesn’t always make for robust legal decision-making.
Rather than having a whinge-orama, what can be done to rebalance the power of Google and other modern-day data empires? The Global Commission on Internet Governance says that it is up to all of us to create a healthy future for the internet. The commission calls for users to have more say in how their data will be used and to be offered meaningful alternatives, rather than the binary choice of sharing data or being excluded from the service.
It also says that big tech should be more accountable. Independent projects such as New America’s Ranking Digital Rights shine a light on some corporate practices. But Rebecca MacKinnon, who leads the project, notes that while companies are becoming more transparent about their responses to government requests for data or content restriction, they are “much more opaque about their internal decision-making processes around how and when their own rules are enforced.”
Another area where there needs to be more choice and transparency is the handling of our personal data. We’ve all had the experience of being followed around the web by products we’ve almost bought. It’s like being stalked by your own underwear, and it feels unsettling. Despite grumbling about the “creepiness” and “spookiness” of data profiling or targeted advertising, we just seem to put up with it.
But we may have more power than we think. If we were a little more feisty, we could, say, demand that companies differentiate between different types of data. If that happened, there could be more frequent deletion of ephemeral data such as chat conversations (like Snapchat’s delete by default), search terms, geolocation, and a “do-not-scan” for inherently confidential data such as legally privileged emails.
Observing the internet’s development can sometimes feel like watching evolution in fast-forward. Whether you’re living in 44 B.C. or 2016, running a giant empire creates governance risks, particularly concentrations of power and autocracy. If we learn from history, we should be wary of empire builders—whether they come armor-clad or in primary colors with Comic Sans fonts.
As Voltaire (or was it Spider-Man?) said, “With great power comes great responsibility.” And it’s a problem for all of us to resolve, not just the big tech companies. We need to wake up to the concentrations of power and demand that both states and corporates shoulder their responsibilities.
This article is part of the “Who Controls the Internet?” installment of Futurography, a series in which Future Tense introduces readers to the technologies that will define tomorrow. Future Tense is a collaboration among Arizona State University, New America, and Slate.