BY JOSHUA FOUST, SIMON FRANKEL PRATT
After the storming of the U.S. Capitol by an insurgent lynch mob driven by far-right social media conspiracy theories and egged on by then President Donald Trump, at least 10 market-dominating tech companies took action through content moderation and account suspension. Chief among those removed were Trump himself, banned from Twitter, and Parler, an alternative social media platform that markets itself to far-right extremists, which was ejected from its host, Amazon Web Services.
The ban had an immediate effect on internet discourse: Within a week, researchers tracked a 73 percent reduction in disinformation about election fraud on Twitter and other platforms. Amazon’s filing against Parler documents months of futile efforts to convince the platform to suppress users’ explicit calls for violence in accordance with its terms of service. While some argue that tech companies should take similar action against other world leaders who use populism to stir up mass violence, critics of the decision are alarmed at the supposed restriction on free speech by tech companies.
This debate is overwrought, but also raises bigger questions. In 2021, losing a Twitter account meaningfully limits the president’s influence, as it would any other figure. That shouldn’t be confused with his freedom of speech, which remains unshackled by the government. But it does point to the way in which big tech has come to dominate and shatter the public sphere. Yet, thanks to the failure of politicians to meaningfully act through legislation, tech firms are policing themselves through inconsistently enforced terms of service.
When leaders call for violence through social media, their influence is especially pernicious. As far back as 2015, Trump’s dehumanizing rhetoric was viewed by many hate groups as a tacit permission slip to engage in hate crimes. Subsequent studies showed that violent metaphors used by political leaders dramatically increase support for political violence and fuel “moral disengagement,” serving to designate certain people or groups as fundamentally unworthy of protection and as legitimate targets of violence. Violent rhetoric is also contagious: A 2017 National Academy of Sciences study likened hate speech to a pathogen. That pathogen manifested at the White House on Jan. 6.
Stopping hateful speech is thus vital to maintaining the public space that non-violent, deliberative democracy needs. Completely unmoderated speech endorsing lies and violence, as Parler cultivated and which has threatened to overwhelm mainstream social media platforms like Facebook and Twitter, risks fragmenting that public space. But navigating the tension between moderation and openness means reexamining basic political commitments.
The American philosopher John Dewey defined a public as a community of “all those who are affected by the indirect consequences of transactions to such an extent that it is deemed necessary to have those consequences systematically cared for.” In a single, unified public, actions affect strangers, which creates an ethical obligation to think about the ripples of behavior. But in a fragmented society, with smaller publics, people are less attuned to how their behavior affects others (see the mask debate).
The United States lacks a single public. Exacerbated urban-rural divides, class differences, and prolonged exposure on the right to a closed media ecology have shrunk the so-called mainstream, while the legacy of apartheid has always meant the exclusion of Black communities from anything resembling a single, universal public. Even within the parochial and breakaway political right there are pronounced fractures that produce a proliferation of mini-publics, with a division—albeit a shrinking one—between supposedly moderate Republicans and those who consume and represent the views advanced in extremist media spaces like Breitbart or One America News Network. Meanwhile, an increasingly revolutionary far-left public has emerged that agitates against Democrats as often as it does Republicans, and on the sidelines are various fringe communities driven by often-violent conspiracies like QAnon and anti-vaccine groups.
Publics are formed and maintained through a public sphere—a space to discuss social problems, debate solutions, and form agreements about collective ideals and goals. German philosopher Jürgen Habermas famously studied how a vibrant, if limited, public sphere formed in the coffee shops and salons of 18th-century Europe, only to collapse in the 19th century as mass printed media rose to prominence. A public sphere, he argued, exists on the basis of inclusivity, a commitment to good faith argument, and a collective willingness to cooperate in the search for meaningful agreement on how the world is and should be. Journalistic elites arguing in op-ed pages are no substitute.
But at least when popular media consumption was restricted to a smaller range of outlets and run according to consistent editorial standards, something like a public sphere could exist. Citizens could broadly be on the same page, so to speak, about which facts and principles were under debate and which were not. This was not an especially egalitarian or inclusive discourse, but it was transparent and coherent enough to allow for some cross-sections of the population to meaningfully engage with one another (though this was not the case for many marginalized groups).
Social media might have offered a solution, as an open digital space where anyone could join, contribute, share information, and learn new ideas and skills. This was the utopian argument of American poet John Perry Barlow’s 1996 manifesto, “A Declaration of the Independence of Cyberspace,” which claimed that the non-material nature of cyberspace exempted it from considerations of place, money, property, and identity. But, as the mass deplatforming of Trump and his insurrectionists demonstrated, cyberspace has never been separate from material concerns, and it is certainly not above politics.
Social media platforms are not like coffee shops or salons. Facebook and Twitter are not a public sphere in any sense of the term. They are ostensibly inclusive—at least until individual members are driven away by threats—but not dedicated to good faith argumentation; they make no commitment toward constructive discussion. This is an intentional design choice, as shown by the domination of outrage content, or the campaigns of harassment that target women and minorities with particular ferocity. Privately owned and in command of vast powers of surveillance and control over what and how users communicate, they are even now reluctant to use those powers to create a healthy public sphere.
The problem is that they also monopolize expression on the internet. The current choice between social media or nothing has led dissidents like Russian opposition leader Alexei Navalny to call Twitter’s ban on Donald Trump a form of censorship on par with government suppression of speech. World leaders expressed alarm as well, from Mexican President Andrés Manuel López Obrador vowing to “fight” Twitter’s policies to German Chancellor Angela Merkel suggesting that the only actor authorized to make decisions about bans should be the government itself—the implication being that Twitter should not be allowed to determine who is allowed to use its platform.
If that sounds absurd, blame social media companies themselves for producing this crisis. Their lame and inconsistent regulation of content is driven not by a commitment to clear principles, and certainly not by the values of the public sphere or free speech. Rather, their self-regulation is driven entirely by the need to monetize data, deliver targeted ads, and evade serious legal liability, with even billion-dollar fines barely amounting to quarterly rounding errors.
Instead of creating a new public sphere, a small number of monopolistic social media companies colonized the existing one, and then shattered it into jagged pieces. They have accelerated and exacerbated the erosion and fracturing of the American public, while facilitating mass right-wing violence.
There is no easy solution to this problem, but there are a few principles that might help us devise one. First, social media companies must regulate and manage their platforms to better secure the conditions for a public sphere: inclusivity, fact-checking, and safety from violence. These alone cannot produce the utopia of Habermas’s “discourse ethics,” but without them, no public can survive. Policymakers should incentivize this through legislation that holds these companies liable for failure and imposes meaningful financial consequences. They should set a clear set of standards for when content crosses the line into threats of violence or hate speech, and they should establish independent review of social media firms’ enforcement, to ensure that it is neither lax nor arbitrary. There is a difficult balance to be struck here between First Amendment rights and the obligation to enforce existing laws prohibiting threats and harassment, but the current approach simply refuses to try—and repealing Section 230, as some have suggested, would not address the problem of radicalization and violence anyway.
Second, social media monopolies must be broken through more effective antitrust legislation. Imagine if every 18th-century coffee house had been a Starbucks! If social media spaces are the only place a public sphere can form in the 21st century, then they must be meaningfully diverse. The old blogosphere had many attributes of a public sphere, just as the earliest days of social media did. But blogs died as the big names became digital magazine columns, and as competition from social media drew more users in. The only plausible competition to social media has come from other social media, and this is where the antitrust case against Facebook becomes salient—think of Mark Zuckerberg’s private dinner with Trump right before Trump announced a ban on TikTok. In their current monopolistic state, social media resembles a government in its absolute power to exclude (and surveil), and produces the same dynamics of power and censorship that have led commentators to now conflate content moderation with institutional repression.
Third, and most broadly, the internet needs to be treated as a social good, as scholars like Ethan Zuckerman argue. This may sound aspirational, although in other countries access to broadband may soon become a public service. People in the developed world are inescapably online—a social transformation that is permanent and should be addressed through more than liberal management or utopian transhumanism. Our first act as a public should be to come up with digital equivalents of parks, community centers, local watering holes, and other places where earlier generations were able to gather and coexist outside of pervasive governmental or corporate control. If we don’t, the institutions of liberal democracy will not survive long enough for us to come up with another solution.