Twitter stops enforcing its covid-19 misinformation policy, and Tim Cook plans meetings in Washington. First:
As Twitter defends its counterterror work, experts fear a spike under Musk
Twitter and other social media platforms took to the Supreme Court this week to defend their efforts to police terrorist content, arguing that a lawsuit alleging they bear responsibility for aiding a 2017 terrorist attack in Istanbul is meritless.
But as Twitter defends its past work to crack down on terrorism, there’s mounting concern among researchers that the company’s protections against violent extremism could be weakened under new owner Elon Musk.
In a pair of filings to the court on Tuesday, Twitter, Facebook and Google argued that a lower court erred in finding that they could be held liable for allegedly aiding and abetting the killing of a Jordanian citizen during an Islamic State attack by hosting the group online. The case is set to test what responsibility platforms bear to curb such content under anti-terrorism laws.
Twitter argued in its filing that the plaintiffs — the family of the victim — failed to show that the company substantially assisted in the attack itself, and that a company should not be held liable simply for knowing that “terrorist adherents” were among its user base.
Twitter also cited its policies prohibiting the promotion of terrorist content and said that it has “terminated over 1.7 million accounts for violating those rules since August 2015.”
Experts on online extremism say those efforts are now up in the air under Musk, who has downsized the company’s content moderation teams and pledged to bring back a slew of suspended accounts.
Musk last week announced that the company would provide a “general amnesty” to suspended accounts “provided that they have not broken the law or engaged in egregious spam.” Musk has also said that “incitement to violence” will result in suspensions.
It’s unclear, however, how that policy may apply to accounts that were previously suspended under Twitter’s rules banning users that “threaten or promote terrorism or violent extremism.” (Twitter, which gutted its communications team, and Musk did not respond to requests for comment.)
Researchers said that gaps in Musk’s stated amnesty plans could leave room for accounts engaged in radicalization or recruitment efforts that might not be illegal to make a comeback.
“A lot of terrorist content is about recruitment rather than calls to action … so that seems like a major gap in how they’re approaching it,” said Graham Brookie, senior director of the Atlantic Council’s Digital Forensic Research Lab, a think tank that researches extremism.
While Twitter is defending its counterterror work in court, researchers said the company under Musk has yet to reaffirm its commitment to tackling the issue meaningfully.
“Musk's rhetoric on the issue and even a lot of the tweets that he's posted himself are clearly signaling either a disinterest or clearly a disengagement from counterterrorism,” said Pauline Moore, a political scientist at the Rand Corp. think tank.
The researchers said it’s still too early to assess the full impact Musk’s overhaul may have on the company’s counterterrorism efforts.
But one early analysis by the Institute for Strategic Dialogue think tank found 450 Islamic State accounts on Twitter in the first 12 days after Musk’s takeover, “a 69 per cent increase over the previous 12 days.”
“There's some early data that shows that extremist actors are recognizing that they have an opportunity to get back onto a platform that they had had success on previously and then lost their foothold on over the last few years,” said William Braniff, director of the University of Maryland’s National Consortium for the Study of Terrorism and Responses to Terrorism.
Even if Twitter’s counterterror and anti-extremism policies don’t change under Musk, Braniff said, his cuts to key content teams will probably mean more content goes undetected.
Sen. Mark Warner (D-Va.), who chairs the Senate Intelligence Committee, signaled he shares that concern.
“I’m incredibly concerned about the impact of the significant reductions in Twitter’s content moderation workforce,” Warner said in a statement, adding that platforms "have a responsibility to protect their users and prevent their sites from becoming tools for spreading hate, facilitating scams, and allowing violent extremists and terrorists to cause injury and harm.”