
14 October 2022

The Supreme Court and social media platform liability

John Villasenor

Over a quarter of a century after its 1996 enactment, the liability shield known as Section 230 is heading to the Supreme Court. Section 230(c)(1) provides, with some exceptions, that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

This sentence is sometimes referred to as the “26 words that created the internet,” because it freed websites that host third-party content from the impossible task of accurately screening everything posted by their users. For example, if your neighbor posts a tweet falsely alleging that you are embezzling money from your employer, you can sue your neighbor for defamation. But a suit against Twitter will go nowhere. As the text of Section 230 makes clear, it is your neighbor and not Twitter that bears the liability for the defamatory tweet.

ARE TARGETED RECOMMENDATIONS PROTECTED BY SECTION 230?

But what about targeted recommendation decisions made by social media companies? For instance, a social media company will often recommend sports content to users who have a history of seeking out sports content. Are decisions about what content to recommend protected by Section 230? That question will soon be argued before the Supreme Court.

On October 3, the Supreme Court agreed to hear Gonzalez v. Google, a case initiated by relatives of Nohemi Gonzalez, a U.S. citizen killed by ISIS terrorists in a November 2015 attack at a Paris restaurant. After the attack, the plaintiffs filed a claim in a California federal district court against YouTube’s owner Google under the Anti-Terrorism Act, which provides a cause of action for the “estate, survivors, or heirs” of a U.S. national killed in “an act of international terrorism.”

In their complaint, the plaintiffs alleged that “by recommend[ing] ISIS videos to users, Google assists ISIS in spreading its message and thus provides material support to ISIS.” The district court dismissed the complaint, finding that several of the plaintiffs’ claims “fall within the scope of [Section 230’s] immunity provision” and that “Google’s provision of neutral tools such as its content recommendation feature does not make Google into a content developer under section 230.” The plaintiffs then appealed to the Ninth Circuit.

Citing its own 2019 precedent in Dyroff v. Ultimate Software Group, the Ninth Circuit affirmed in 2021, writing that a “website’s use of content-neutral algorithms, without more, does not expose it to liability for content posted by a third party.” Notably, two of the three judges on the Gonzalez v. Google panel—one in concurrence and one in dissent—argued that recommendation decisions should not be protected by Section 230. After the Ninth Circuit’s decision, the plaintiffs sought and have now been granted Supreme Court review.

A NARROWED SCOPE OF LIABILITY PROTECTION

The Supreme Court appears highly likely to use Gonzalez v. Google to limit the scope of Section 230’s liability protections. When the Court declined in October 2020 to review Malwarebytes v. Enigma Software Group (a case that involved a different provision of Section 230), Justice Thomas penned a detailed criticism of broad interpretations of Section 230. After noting that such interpretations “confer sweeping immunity on some of the largest companies in the world,” he concluded that “we need not decide today the correct interpretation of §230. But in an appropriate case, it behooves us to do so.” Two years later, at least four justices—almost certainly including Justice Thomas—have agreed that Gonzalez v. Google presents the right opportunity for the Court to weigh in on the scope of Section 230.

Even the language of the Supreme Court’s description of the question at issue gives a nod to the likely nature of the decision. Noting that three appeals court decisions have considered whether targeted recommendations are protected by Section 230, the description states that “[f]ive courts of appeals judges have concluded that section 230(c)(1) creates such immunity. Three courts of appeals judges have rejected such immunity. One appellate judge has concluded only that circuit precedent precludes liability for such recommendations.”

That statement is of course true, but it leaves unsaid that all three decisions (Dyroff and Gonzalez v. Google in the Ninth Circuit; Force v. Facebook in the Second Circuit) found recommendation decisions to be protected under Section 230. In short, there is no circuit split on this topic. Instead, the Supreme Court’s description of the background regarding the “question presented” in Gonzalez v. Google highlights the differing views among the nine individual appellate judges across the multiple three-judge panels that issued those appellate court decisions.

UNINTENDED CONSEQUENCES

It is not particularly difficult to write a decision holding that 230(c)(1) does not apply to targeted recommendations. The challenge would lie in interpreting and applying that decision. It would trigger a wave of litigation, and with it, endlessly complex arguments over what constitutes a “targeted recommendation” to which liability protections do not apply.

It may also reopen the question, long settled in the affirmative, of whether traditional search engines are protected by 230(c)(1). For instance, if the Supreme Court rules that offering targeted recommendations creates liability exposure, does offering search results shaped in part by data about a user’s prior online viewing also create exposure?

In addition to creating new legal complexities, reversing the Ninth Circuit would complicate the ongoing legislative dialogue regarding potential changes to Section 230. And, it would create difficult technological and policy challenges. In its decisions in Dyroff and Gonzalez v. Google, the Ninth Circuit endorsed content neutrality as a means to ensure that targeted recommendation decisions are protected under Section 230. A content-neutral recommendation engine operates the same way, regardless of the specific nature of the underlying content.
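To make that distinction concrete, here is a minimal Python sketch of what a content-neutral recommender might look like. Everything in it, including the function names and the simple co-engagement heuristic, is hypothetical and chosen only for illustration; it is not a description of how YouTube or any other platform actually works. The point is that the ranking logic consumes only behavioral signals and never inspects what the content says.

```python
# Hypothetical sketch of a "content-neutral" recommender. The algorithm
# ranks items purely by co-engagement patterns across users; it never
# examines what any item actually contains.

from collections import Counter

def recommend(user_history, all_histories, top_n=3):
    """Recommend item IDs that co-occur with the user's viewed items
    in other users' histories, without examining the items' content."""
    scores = Counter()
    seen = set(user_history)
    for other in all_histories:
        overlap = seen & set(other)
        if not overlap:
            continue
        for item in other:
            if item not in seen:
                # Weight by how much this history overlaps with the user's.
                scores[item] += len(overlap)
    return [item for item, _ in scores.most_common(top_n)]

# The same logic applies whether the item IDs point to sports clips or
# anything else; the recommendation depends only on behavior, not content.
histories = [
    ["v1", "v2", "v3"],
    ["v2", "v3", "v4"],
    ["v1", "v4", "v5"],
]
print(recommend(["v2", "v3"], histories))  # -> ['v1', 'v4']
```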

If, instead, social media sites (and other websites that recommend content posted by their users) are required to consider liability risk when providing targeted recommendations, they would need to decide what types of content might expose them to liability. They would also need to develop algorithms to automatically classify content, and policies to act on the resulting classifications when providing recommendations.
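For illustration only, here is a hypothetical sketch of the classify-then-gate pipeline such a regime would push sites toward. The labels, the confidence threshold, and the toy keyword “classifier” are all stand-ins for what would, in practice, be machine-learning models, with exactly the error modes (such as misreading parody) discussed below.

```python
# Hypothetical classify-then-gate pipeline. The classifier here is a crude
# keyword stand-in, deliberately simplistic, to show where classification
# errors would feed directly into recommendation decisions.

RISKY_LABELS = {"terrorism", "defamation"}

def classify(text):
    """Toy stand-in for an ML content classifier: returns (label, confidence)."""
    if "attack" in text.lower():
        return ("terrorism", 0.6)   # crude and error-prone by design
    return ("benign", 0.9)

def eligible_for_recommendation(text, threshold=0.8):
    """Policy layer: only recommend content confidently classified as safe."""
    label, confidence = classify(text)
    if label in RISKY_LABELS:
        return False
    # Low-confidence "benign" calls are also withheld, trading reach for risk.
    return confidence >= threshold

print(eligible_for_recommendation("Highlights from last night's game"))  # True
print(eligible_for_recommendation("News report analyzing the attack"))   # False
```

Note that the second example, a legitimate news report, is withheld from recommendation: exactly the kind of over-removal that subjective, automated classification at internet scale would make routine.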

This would be impossible to do perfectly given the subjective judgment sometimes involved in those decisions and the difficulty that algorithms have in understanding concepts like parody. And, given the nature of the internet and the number of people who use it, almost any form of content could lead to a liability claim. It is unclear how this would alter the many services that help us find online content.

The upshot is that the unintended consequences of Gonzalez v. Google may be profound. Hopefully, the Supreme Court will give due consideration to these issues as it formulates its decision.
