1 February 2021

Regulate Social-Media Companies

By DIVYA RAMJEE and ELSA B. KANIA

The events of January 6 showed that existing approaches to quelling disinformation and incitement to violence on social media platforms have failed, badly. Even though the companies that run these platforms are displaying a new willingness to police them, up to and including banning the worst offenders, claims that U.S. tech companies can comprehensively self-regulate and moderate dangerous content should be regarded with extreme skepticism. Likewise, Twitter's recent launch of Birdwatch, a crowd-sourced forum to combat misinformation, is a welcome measure but at best a partial and imperfect answer to a far more systemic problem. Instead, it is time, at long last, to regulate.

These reforms must extend beyond stronger antitrust regulation and enforcement against Big Tech companies, which, while worthwhile, would do little to fundamentally restructure how social media platforms operate. New rules must be introduced for the algorithms that decide what users see, for the data these companies collect for themselves, and for the scraping of that data by third parties.

Algorithms designed and implemented by social media companies, particularly artificial intelligence algorithms, use what people click on and respond to in a bid to increase traffic and help advertisers, serving up more of the same content to keep users engaged and fill the platforms' pockets. For instance, Twitter uses AI to promote the "best" tweets and content deemed "more relevant" into users' timelines, and the introduction of this algorithmic timeline reportedly helped Twitter add millions of new users. Since people are more likely to engage with content, whether true or false, that reinforces their own biases and fears, social media companies have strong incentives to present such content and thereby maximize their growth and profits.
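To make that dynamic concrete, the sketch below shows in miniature how an engagement-optimized feed ranker behaves. It is a deliberately simplified illustration, not Twitter's or any platform's actual system; the Post fields, the affinity weights, and the scoring rule are all assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    topic: str
    predicted_engagement: float  # model's estimated probability of a click or reply

def rank_feed(candidates: list[Post],
              user_topic_affinity: dict[str, float]) -> list[Post]:
    """Order candidate posts by expected engagement, weighted by the user's
    historical affinity for each topic. Content resembling what the user
    already engages with rises to the top, whether or not it is accurate."""
    def score(post: Post) -> float:
        affinity = user_topic_affinity.get(post.topic, 0.1)
        return post.predicted_engagement * affinity
    return sorted(candidates, key=score, reverse=True)

# A user who has engaged heavily with one conspiratorial topic sees more
# of it: nothing in the objective penalizes falsehood.
feed = rank_feed(
    [Post("a", "election_fraud", 0.9), Post("b", "local_news", 0.6)],
    user_topic_affinity={"election_fraud": 0.8, "local_news": 0.3},
)
print([p.post_id for p in feed])  # ['a', 'b']
```

Nothing in this objective distinguishes true from false content; a post that reliably provokes clicks outranks a more accurate one, which is precisely the incentive problem that regulation would need to address.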

When social media companies develop and deploy algorithms that are biased toward their own benefit, trained on user data from their respective platforms, the marketplace of ideas suffers, to the detriment of democratic discourse. We are seeing the dangers of a business model that has failed to account for adverse impacts on users' mental health or for societal harms, including misinformation and political polarization. Once algorithms across social media platforms have amplified a false narrative by suggesting and promoting related content, it can scarcely be stopped from taking hold. Beyond jokes about Twitter addictions and anxious "doomscrolling," the national security concerns that arise from such design and technical decisions can be far graver, as in multiple accounts of radicalization on YouTube.

New regulations could mandate improved data protection standards for consumers and require impact assessments of algorithms before their launch, particularly for AI algorithms that learn from and leverage consumer data. Unlike the European Union, the United States has no comprehensive federal privacy law dictating data protection standards for data aggregated by commercial entities, including social media companies. To date, progress has occurred only at the state level, as in California and Washington, and there are urgent reasons to introduce federal laws that protect consumers' sensitive data and personal information and dictate how companies may use and aggregate it.
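What might a pre-launch impact assessment actually check? The sketch below illustrates one candidate metric under assumed inputs: whether a new ranking algorithm surfaces content flagged as misinformation at a substantially higher rate than a chronological baseline. The function names, the flagged-content labels, and the 1.5x threshold are hypothetical; a real mandate would specify many such metrics.

```python
def flagged_share(ranked_ids: list[str], flagged_ids: set[str], k: int = 10) -> float:
    """Fraction of the top-k feed slots occupied by posts that a
    trust-and-safety review has flagged as misinformation."""
    top_k = ranked_ids[:k]
    if not top_k:
        return 0.0
    return sum(1 for pid in top_k if pid in flagged_ids) / len(top_k)

def passes_impact_assessment(algorithmic_feed: list[str],
                             chronological_feed: list[str],
                             flagged_ids: set[str],
                             max_amplification: float = 1.5) -> bool:
    """Fail the assessment if the candidate ranker surfaces flagged content
    at more than `max_amplification` times the chronological baseline."""
    algo_rate = flagged_share(algorithmic_feed, flagged_ids)
    base_rate = flagged_share(chronological_feed, flagged_ids)
    if base_rate == 0.0:
        return algo_rate == 0.0  # any amplification from a zero baseline fails
    return algo_rate / base_rate <= max_amplification
```

The point of such a test is not the particular threshold but the principle: a measurable, auditable standard that a platform must meet before an algorithm reaches users, rather than cleanup after the harm is done.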

We have seen the adverse impacts of persistent failures to hold companies accountable for the privacy and aggregation of user data. Even publicly available information, when collected at scale, can be readily exploited. The poor security practices that recently enabled a massive breach at Parler, which incidentally helped researchers seeking to identify those complicit in provoking or participating in the insurrection, might be regarded as exceptional, but such incidents have been commonplace in recent history. For instance, a breach of a social media data broker in August 2020 exposed 235 million social media profiles.

Often, commercially available data can be accessed and potentially exploited by any number of actors. Already in 2021, a leak by a Chinese start-up is reported to have exposed over 400 gigabytes of data, including personally identifiable information, on social media users worldwide. Recurrent controversy has also arisen over the reported purchase of data from commercially available databases by the U.S. military and federal agencies, raising serious questions of civil rights and privacy.

There are urgent reasons for the United States to enact robust legislation not only on data protection standards for users of social media but also on data ethics as applied to data aggregation and to the development and implementation of AI algorithms. However, Congress has so far proven ill-equipped to handle these issues, often lacking the scientific and technical expertise to legislate or regulate effectively. Instead, predictably, there has been a backlash against "Big Tech" that risks producing policy measures that are inadequate or counterproductive.

The U.S. government has a long way to go to prepare for such emerging challenges, especially in AI. By contrast, the European Union's regulations provide for technology assessments and consumer impact assessments, and China's policymaking leverages a major brain trust and various expert groups in approaching the governance of emerging technologies. As the latest advances in AI are increasingly deployed across industry, healthcare, the criminal justice system, and the military, among other use cases, urgent policy questions and concerns will continue to arise.

To improve its capacity to craft legislation on technical issues, Congress should revisit the recurrent proposals to revive the Office of Technology Assessment, which once provided specialized expertise and independent technology assessments under a bipartisan board. Another possibility is to expand the role of the Government Accountability Office's Science, Technology Assessment, and Analytics team, launched in 2019, which provides technical services to Congress.

To this end, Congress or the executive branch should create an advisory board to provide ongoing guidance on tech policy, data security, and algorithmic transparency. This effort could convene scientists, attorneys, technical experts, ethicists, and private-sector specialists, and its work would include reviews of social media companies' data aggregation methods and use of AI algorithms. As Congress tackles far-right extremism and white supremacist violence, which have become global threats, the U.S. Privacy and Civil Liberties Oversight Board could also be leveraged for guidance on balancing these concerns and priorities.

As social media continues to enable radicalization and mobilization by a range of threat actors, the questions that arise for tech policy and regulation are no longer merely abstruse concerns for technocrats. On the contrary, these are core issues for U.S. national security that require attention and robust responses from multiple stakeholders. The attacks on the U.S. Capitol were coordinated across several platforms and fueled by falsehoods about a "rigged" election that spread online. These platforms have consistently facilitated the rapid diffusion of groundless conspiracies, often preying on those who encounter such content out of curiosity or through mere passive exposure. The platforms' attempts to self-regulate and to constrain the diffusion of dangerous information, often removing it post by post or tweet by tweet, remain far from adequate to the challenge.

For accountability, Congress should expand the Federal Trade Commission's powers to impose civil penalties for violations of broader privacy standards. To do so, Congress must appropriate the additional resources necessary for investigations. The FTC has already imposed on Facebook an unprecedented settlement of $5 billion in civil penalties for privacy violations. The agency has also released best practices for the use of AI algorithms and has ordered Amazon, Facebook, Twitter, and TikTok to provide details on their handling of personal data and on their advertising and user-engagement algorithms. Continuing these efforts requires robust, bipartisan support and funding.

The expansion of FTC oversight should concentrate on protecting consumer privacy and should require impact assessments and disclosure of the technical features of algorithms developed for use on these platforms. Furthermore, the Securities and Exchange Commission should require transparency from social media platforms on such points as the counts and views of advertisements and how these relate to their algorithms and ultimate revenues. Any potential infringement of securities laws could then trigger criminal investigation, introducing another dimension of accountability.
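As a purely illustrative sketch of what such a disclosure might contain, the record below pairs basic advertising metrics with the version of the ranking algorithm that produced them, so that regulators and investors could trace how algorithmic changes track with revenue. No such filing format exists today; every field name and figure here is a hypothetical assumption.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class AdTransparencyFiling:
    """Hypothetical quarterly disclosure linking advertising metrics to the
    ranking algorithm in production during the reporting period."""
    platform: str
    quarter: str
    ads_served: int             # total advertisements delivered
    ad_views: int               # total recorded views of those ads
    ad_revenue_usd: float       # revenue attributable to those ads
    ranking_model_version: str  # identifies the algorithm behind the numbers

filing = AdTransparencyFiling(
    platform="ExamplePlatform",
    quarter="2021-Q1",
    ads_served=1_200_000,
    ad_views=9_800_000,
    ad_revenue_usd=5_400_000.0,
    ranking_model_version="feed-ranker-v42",
)
print(json.dumps(asdict(filing), indent=2))
```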

Such calls to reform and regulate will likely encounter resistance on several fronts or be critiqued as a departure from Internet freedom. Too often, a false dichotomy has been drawn between an entirely ungoverned Internet and the "sovereign Internet" or "cyber sovereignty" that Russia and China, respectively, have promoted, models that rely on pervasive censorship and propaganda. Certainly, recent events have demonstrated that a status quo that eschews regulation or legislation is unsustainable.

Nonetheless, state control over online discourse is not the answer. While certain conservatives have critiqued recent moves by social media companies to "deplatform" Mr. Trump and other right-wing extremists as censorship or impingements on freedom of speech, these measures in fact fall within the companies' legal purview as private entities, and they are necessary, if inadequate, steps toward countering current threats.

Meanwhile, even as it reaffirms its commitment to Internet freedom and freedom of speech, the U.S. government has viable policy instruments to regulate the behavior of platforms and mitigate the threats to American society and democracy. The United States can create a workable alternative to Internet sovereignty, based on reasonable regulations geared to sustain free and open discourse across our information ecosystem.

The Biden-Harris administration will have an opportunity to act at this moment of crisis, and Congress has both an opportunity and an obligation to step up. While there are no perfect solutions, there are many measures the U.S. government can start or should continue: public-private cooperation, increased federal regulation, design reform and oversight for the AI algorithms used by social media, federal support for research on countermeasures to misinformation and disinformation, and investment in evidence-based public intervention and education efforts. The stakes are too high to continue on our current trajectory.

Divya Ramjee is a Ph.D. candidate and adjunct professor in American University’s Department of Justice, Law and Criminology. She is also a Senior Fellow at the Center for Security, Innovation, and New Technology. Her views are her own.

Elsa Kania is an Adjunct Senior Fellow with the Technology and National Security Program at the Center for a New American Security. She is also a Ph.D. candidate in Harvard University’s Department of Government. Her views are her own.
