Jihadists and right-wing extremists use remarkably similar social media strategies.
The editorial board represents the opinions of the board, its editor and the publisher. It is separate from the newsroom and the Op-Ed section.
Social media has played a key role in the recent rise of violent right-wing extremism in the United States, including three recent incidents — one in which a man is accused of sending mail bombs to critics of the president, another in which a man shot dead two African-Americans in a Kroger’s grocery store in Kentucky, and a third in which a man is accused of conducting a murderous rampage at a synagogue in Pittsburgh.
Each of these attacks falls under the definition of right-wing extremism by the Global Terrorism Database at the University of Maryland: “violence in support of the belief that personal and/or national way of life is under attack and is either already lost or that the threat is imminent.” Antiglobalism, racial or ethnic supremacy, nationalism, suspicion of the federal government, obsessions over individual liberty — these are all hallmarks of this network of ideologies, which is, of course, shot through with conspiracy theories.
Yet, even as the body count of this fanaticism grows, the nation still lacks a coherent strategy for countering the violent extremism made possible through the internet.
Instead, the fundamental design of social media sometimes exacerbates the problem. It rewards loyalty to one’s own group, providing a dopamine rush of engagement that fuels platforms like Facebook and YouTube, as well as more obscure sites like Gab or Voat. The algorithms that underpin these networks also promote engaging content, in a feedback loop that, link by link, guides new audiences to toxic ideas.
This dynamic plays out around the globe. In Germany, one study showed that towns with heavier Facebook usage saw more anti-refugee attacks. In Sri Lanka and Myanmar, Facebook played a significant role in inciting violence.
While the motivations of violent actors may be different, the paths they travel toward violence are similar. Cesar Sayoc, the accused mail bomber, posted links on Twitter and Facebook to conspiracy theories about Hillary Clinton and illegal immigration. The accused Pittsburgh killer, Robert Bowers, was active on Gab, a social network established to harbor speech censored by mainstream platforms — including speech that many other platforms found too extremist. Two hours before the shooting, Mr. Bowers posted that a Jewish organization that aids refugees “likes to bring invaders in that kill our people. I can’t sit by and watch my people get slaughtered. Screw your optics, I’m going in.”
Gregory Bush — the man accused of shooting the two people in a Kroger’s and saying, when confronted by a white man, “Whites don’t kill whites” — was a more passive consumer of social media. But his most recent likes on Facebook skewed heavily to conservative media, and a longtime online acquaintance said that Mr. Bush’s tweets — which had long been peppered with infrequent casual racism — became more and more vitriolic over the course of the 2016 election.
There was no organization behind these attacks. The three suspects most likely never met or interacted with one another. This is the new shape of extremism: self-directed, internet-inflamed terrorists.
Radicalization might start with casual conversations among video gamers. What begins with a few racist slurs may lead to exposure to overt white supremacist propaganda. A seemingly innocuous YouTube channel may recommend other, more inflammatory channels, which in turn may recommend ever more extremist content — a network identified by the Data & Society Research Institute as the Alternative Influence Network.
We already know how dangerous this cycle of radicalization can be, because similar mechanisms have fed Islamist terrorism in recent years. Anwar al-Awlaki, the cleric who communicated with the 2009 Fort Hood shooter and coached a young man to try to blow up an airliner over Detroit, left a digital footprint that survived on YouTube for years after his assassination by an American drone strike in Yemen. Videos of his sermons, even anodyne history lectures or self-help coaching, were always popular, thanks to his pleasant voice and serious demeanor. Now they also have a martyr’s allure.
If a viewer clicked on the cleric’s earlier, gentler talks, YouTube’s algorithms would point the viewer to one of his later sermons, like one describing why it’s a Muslim’s duty to kill Americans. Dzhokhar Tsarnaev, one of the two Boston Marathon bombers, tweeted approvingly about Mr. Awlaki’s lectures. Chérif Kouachi, one of the shooters who killed 12 people at the Paris offices of the magazine Charlie Hebdo in 2015, name-dropped Mr. Awlaki in a phone interview with a reporter before being shot by police. In death, as in life, Anwar al-Awlaki’s words inspired lonely, disturbed, or disaffected young men to kill.
By 2017, YouTube had begun to rethink its policies, and now all of Mr. Awlaki’s material — unless presented as news commentary or in a critical context — is banned from the platform. Facebook has long banned all of Mr. Awlaki’s videos. Both avow a commitment to combat hate speech, extremism and misinformation.
But platforms have been more tentative in dealing with the kind of right-wing extremism that focuses on white supremacy. Although organizations like the Anti-Defamation League and the Center for Strategic and International Studies provide information about these groups, official government sources are still crucial if there is to be an effective crackdown. Vast federal resources, for example, went into identifying the networks around Mr. Awlaki, who has been on a designated terrorist list since 2010.
But the government does not officially designate domestic terrorist organizations. The Trump administration has reduced or eliminated modest programs begun under President Barack Obama to counter violent extremism and deter recruitment, including among white supremacists. Mr. Trump has focused on Islamic extremism to the exclusion of other threats. Federal agencies do not even have common definitions of “domestic terrorist” and “domestic terrorism.”
Tech companies often draw on government lists to police their platforms for violent extremism. YouTube, for example, has long prohibited designated terrorists from having their own channels. For years, Facebook has banned the praise or support of organizations deemed dangerous or violent — a list at least partly informed by governments. (Facebook claims that it does not heavily rely on government lists.) Both platforms, along with Twitter and other technology companies, use a shared database of terrorist content — coordinated through the nonprofit Global Internet Forum to Counter Terrorism — to help take down extremist content faster. What the forum is capable of identifying is informed by what kind of information official organizations have about extremism.
While international terrorism has been the target of considerable attention and national resources, the threat from domestic terrorism has grown. Domestic terrorist attacks have been on the rise since 2008, and in 2017 alone there was a 57 percent increase in anti-Semitic incidents.
Past decades saw violence by left-wing groups, environmental extremists and black nationalists, but while attacks from those groups have fallen dramatically, violence from the right has risen. Right-wing extremists in the United States, particularly white supremacists, have been responsible for the vast majority of at least 387 domestic terrorist murders in the last decade. Last year, 20 of the 34 terrorist murders in the United States were connected to right-wing extremism.
These are statistics compiled by the Anti-Defamation League’s Center on Extremism, the most authoritative source for documenting the phenomenon, since the government doesn’t even keep good track of the danger. During the Obama years, conservative media manufactured a controversy over a 2009 Department of Homeland Security report about right-wing extremism, claiming politicized oppression. Under pressure from Republican lawmakers, Janet Napolitano, then the homeland security secretary, rescinded the report, and her department rolled back its work on violent right-wing extremism.
So the tech industry’s failings are not its alone. (Of course, the way Facebook dragged its heels and played down the extent of Russian influence on its platform does not inspire optimism that the industry is doing its best.) The complex interplay of terrorism, propaganda and technology requires a concerted response by government and business. Private corporations should not be put in the position of trying to thwart extremism with help from only a handful of nonprofit groups.
Major platforms are applying machine learning and other techniques to remove noxious content, but what good is the most sophisticated artificial intelligence when the actual intelligence that feeds it is inadequate and skewed by biases in American society?
These biases are reflected in government lists, in policy decisions by tech companies and in the enforcement of those policies by moderators. Yet it’s quite clear that while the core philosophies of white supremacists and jihadists differ, their recruitment strategies and propaganda efforts are frequently similar.
Will Fears, who was arrested at a Gainesville, Fla., rally in support of the alt-right personality Richard Spencer, compared himself to the Boston Marathon bombers, the Tsarnaev brothers, in an interview with The New York Times Magazine. “Maybe he saw a lot of things in the world that bothered him and just didn’t know how to deal with it,” Mr. Fears said of Dzhokhar Tsarnaev, the young man who so loved Anwar al-Awlaki’s lectures. “I can sort of relate to that.”