By Sheera Frenkel, Kate Conger and Kevin Roose
Russia created a playbook for spreading disinformation on social media. Now the rest of the world is following it.
Twitter said on Thursday that countries including Bangladesh and Venezuela had been using social media to disseminate government talking points, while Facebook detailed a broad Iranian disinformation campaign that touched on everything from the conflict in Syria to conspiracy theories about the Sept. 11 attacks.
The campaigns tied to various governments — as well as privately held accounts in the United States — followed a pattern similar to Russian disinformation efforts before and after the 2016 presidential election. Millions of people were targeted by content designed to widen political and social divisions among Americans.
The global spread of social media disinformation comes in a year when major elections are set to take place in countries including India and Ukraine. Last year, social media disinformation played a role in a number of campaigns, including the highly contested presidential election in Brazil.
“Elections are coming up around the world, and our goal is to protect their integrity to the best of our ability and to take the learnings from each with us,” said Carlos Monje Jr., Twitter’s director of public policy in the United States and Canada, in a blog post.
Twitter also described a spike in domestic disinformation, or Americans targeting fellow Americans with false or misleading information.
During the midterm elections in the United States last year, most of the false content on its site came from within the country itself, Twitter said. Many of the misleading messages focused on voter suppression, with the company deleting almost 6,000 tweets that included incorrect dates for the election or that falsely claimed that Immigration and Customs Enforcement was patrolling polling stations.
Twitter users posted 99 million tweets about the midterms — more than the social media company has observed during any prior election, Mr. Monje said.
The company said it was still finding new suspicious activity linked to Russia, and that it had removed 418 such accounts between last October and December. Previously, Twitter removed 3,843 accounts linked to the Internet Research Agency, the troll farm associated with the Russian government.
The 418 new accounts mimicked the behavior of the 3,843 accounts run by the I.R.A., but Yoel Roth, Twitter’s head of site integrity, said in the blog post that the company could not prove the new accounts were run by the I.R.A.
Though Twitter and Facebook announced their findings separately, the companies — both under pressure to crack down on disinformation on their services — collaborated on the investigation.
The most successful and ambitious of the disinformation efforts detailed on Thursday was believed to be an Iranian-led campaign that used Facebook and Twitter to reach millions of people across dozens of countries.
The Iranian campaign had sought to sway public discourse in countries across the Middle East, Europe and Asia, Twitter and Facebook said. Some of the social media accounts involved in the campaign had been active for over a decade. Facebook said it had removed 783 pages, groups and accounts with ties to Iran, while Twitter removed 2,617 Iranian-linked accounts.
Facebook’s investigation focused on pages tied to Iran that in some cases were nearly nine years old. The page administrators and account owners claimed they were local and posted items on topics like Israeli-Palestinian relations and the conflicts in Syria and Yemen.
The Iranian effort had a number of goals, according to the Atlantic Council’s DFR Lab, which studies disinformation. The Facebook pages “promoted or amplified views in line with Iranian government’s international stances,” the DFR Lab wrote in its initial analysis. Researchers noted that the shared content carried a strong pro-Iranian government slant and sought to advance Iranian interests.
In several examples viewed by the DFR Lab, the campaign shared content as varied as pro-Palestinian images and conspiracy theory videos arguing that the Sept. 11 attacks were an “inside job” carried out by the United States government.
Facebook shared information about the campaign with the lab before the posts were removed.
Last year, Facebook announced it had taken down two separate Iranian-linked disinformation campaigns. In October, the company said a campaign originating in Iran had been targeting people in the United States and Britain. In August, Facebook said it had found an influence operation that originated in Iran and Russia.
Two other disinformation campaigns that Twitter removed were from Venezuela, which is currently grappling with political turmoil as Juan Guaidó, the opposition leader, has declared himself the country’s acting president in a challenge to the incumbent, Nicolás Maduro. (Both men have taken to Twitter to champion themselves.)
One Venezuelan campaign that Twitter uncovered was made up of 764 accounts that posted about American politics and the midterm elections, while another network of 1,196 accounts posted political content targeted at Venezuelan citizens.
Twitter was able to determine that the domestic Venezuelan campaign was organized by the Venezuelan government because of digital clues linking the accounts to the country. The activity also followed specific guidelines that were laid out in a troll farm guide compiled by the country’s government and obtained by Bloomberg, a person familiar with the campaign said.
Twitter has said it is difficult to definitively tie accounts to specific countries or governments, though it uses information about how someone logs in and what kind of content is posted to the account to determine its origin.
Twitter and Facebook made their announcements Thursday as part of an effort to increase transparency around the fake accounts the companies find on their platforms. Twitter, for example, has published new data on the issue periodically since last October as it has faced scrutiny over how its service can be gamed to sway people’s thinking. Twitter said that it challenges 8 million to 10 million suspicious accounts every week.
Twitter, Facebook and Google have been criticized by lawmakers, regulators and users around the world for not doing enough to curb disinformation.
Social media executives, including Sheryl Sandberg, Facebook’s chief operating officer, and Jack Dorsey, Twitter’s chief executive, have since been called to testify about the problem before Congress. All have vowed to take measures to minimize the distribution of disinformation on their sites, many by using automated tools to detect fake and suspicious accounts.
“This is an encouraging example of the type of collaboration we’re working to build across industry,” said Nathaniel Gleicher, Facebook’s head of cybersecurity policy.