by Karen Kornbluh
This week’s Senate Intelligence Committee hearing with top executives of Facebook and Twitter was the most serious to date on digital disinformation. Rather than grandstanding or posing “gotcha” questions, the senators and the executives largely agreed that policymakers, the companies, the intelligence community, and users all have roles to play in addressing the problem. Any effective action, however, is more likely to come from the companies themselves, since new legislation is unlikely amid the Trump administration’s accusations that the platforms are suppressing conservative political speech.
Facebook’s Sheryl Sandberg and Twitter’s Jack Dorsey admitted that their companies were late to move against widespread disinformation campaigns by Russian entities aimed at influencing the 2016 U.S. election. What’s more, they said, their companies are still not finding all fake accounts fast enough. While this is unacceptable only weeks before the midterm elections, the executives were acknowledging what is generally known, and they accepted their obligation to do more. The two seemed open to working on policy proposals [PDF] from Senator Mark Warner (D-VA), the committee’s ranking member. These include steps to more clearly label large-scale bot networks so that users are not duped, as well as increased transparency about the collection of users’ data.
Between a rock and a hard place?
During the Senate hearing, news broke that Attorney General Jeff Sessions would meet with state attorneys general to discuss investigating the digital companies for abusing their market size to stifle speech. In the afternoon, the House Energy and Commerce Committee held a separate hearing with Dorsey focused on unsubstantiated claims that the platforms suppress conservative content for political reasons.
The companies face a test. They can agree to reforms, or they can use the threats from the Trump administration as a rationale to largely continue business as usual, permitting disinformation to proliferate on their platforms to continued corrosive effect. One immediate signal came from Twitter, which yesterday banned Infowars, the media arm of online conspiracy peddler Alex Jones.
Content moderation raises important issues of free expression, but the digital platforms could protect themselves against accusations of bias by being consistent and clear in how they enforce their terms of service, and by giving those whose accounts are frozen a mechanism for appeal. Increasing their own transparency and cracking down on fraudulent accounts, as discussed at the hearing, can help combat disinformation without threatening civil liberties.
As the executives made clear at the Senate hearing, failure to clean up their platforms will erode user trust. The Pew Research Center reports that 44 percent of Americans between the ages of eighteen and twenty-nine deleted the Facebook app during the last year.
What comes next?
The platforms could wait for the November 6 midterm election results before deciding how to act. But disinformation is ramping up ahead of the election, and ongoing investigations at the U.S. Federal Trade Commission and the UK Information Commissioner’s Office stemming from the Cambridge Analytica scandal are likely to fuel public pressure. The platforms could instead decide to accelerate their efforts to clean house, with more collaboration and transparency.
The Senate hearing may have been just one more instance of Washington theater, or it may one day be seen as the turning point when policymakers and industry leaders decided to work together to update the digital policy framework.