Introduction
On February 21, 2023, the Supreme Court of the United States (the "Supreme Court") heard oral arguments in Gonzalez v. Google (the "Case"). The dispute underlying the Case was brought by Reynaldo Gonzalez, whose daughter, an exchange student, was killed in the ISIS terror attack in Paris in 2015. He alleges that YouTube's algorithm aided in the attack by recommending ISIS recruitment videos to the people most susceptible to their message. Because the dispute, and subsequently the Case, also concerns Section 230 of the Communications Decency Act, the outcome is widely reported to have the potential to decide the future of social media platforms worldwide.
Background and Procedural History
As briefly stated above, the petitioners in the Case are the estate and family of Nohemi Gonzalez, an American killed during the 2015 terrorist attack in Paris for which ISIS claimed responsibility. In 2016, the petitioners initially sued Google under the Anti-Terrorism Act (18 U.S.C. § 2333), alleging that, by operating YouTube, Google committed or abetted "an act of international terrorism" that caused Ms. Gonzalez's death. Their complaint also focused on Section 230(c)(1) and YouTube's alleged failure to prevent ISIS from posting content on the website and to promptly delete all ISIS content.
Google then successfully moved to dismiss the case by invoking Section 230 protection. The U.S. Court of Appeals for the Ninth Circuit affirmed this approach (in line with earlier precedents) in its decision dated June 22, 2021, concluding that websites such as YouTube or Twitter should not be held liable when their algorithms surface illegal content, so long as those algorithms are not intentionally designed to promote such content.
The Supreme Court then took the case upon Gonzalez's appeal to evaluate whether the lower courts' interpretation of Section 230 was too broad. This is the first time the Supreme Court has agreed to interpret the provision's scope, and the Case gives it a chance to reshape, redefine, or even repeal the foundational law of the internet:
Section 230 of the Communications Decency Act
In the 1990s, early social and news websites faced a wave of legal challenges for featuring third-party content on their platforms, while users of these platforms faced liability for merely reposting articles. Moreover, by the mid-1990s, many such websites accepted all third-party content without organizing or limiting it in order to avoid liability, which gave rise to a proliferation of hate speech, illegal material, and pornography.
To address this issue, the US Congress enacted Section 230 of the Communications Decency Act (47 U.S.C. § 230). Section 230(c)(1) states that "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider," while Section 230(c)(2) encourages the good-faith removal of "objectionable" content.
While the primary purpose of the Telecommunications Act of 1996, of which Section 230 forms a part, was to increase competition in the broadcasting and telecommunications markets, the provision has also become a shield protecting companies whose platforms have enormous reach and influence from being held responsible for harms caused by extremist content and/or disinformation.
Without Section 230, any provider or user of an "interactive computer service" (such as a website or social media platform) that broadcasts, hosts, or recommends third-party content could face liability. In other words, Section 230 was intended to prevent platforms from being sanctioned whenever online speech is alleged to have exceeded the bounds of appropriate free expression to the detriment of individuals or the general public. It is, however, extremely hard to draw the line where free expression is no longer appropriate while also upholding freedom of expression, a legal right protected by the First Amendment to the US Constitution.
Section 230 has been heavily debated for nearly two decades, but those debates were largely confined to the lower levels of the US federal court system and did not reach the Supreme Court. Beginning with the 2016 US presidential election, and escalating since, conservatives in particular have demanded greater scrutiny of, and restrictions on, how platforms police the content published on them.
Issues and Analysis
The Case, however, takes a different approach from those earlier debates and arguments against Section 230. It focuses on the platform's handling of extremist content, accusing it not only of failing to prevent such content but also of facilitating hate speech and calls to violence.
The main question under scrutiny in the Case is whether "Section 230(c)(1) immunize[s] interactive computer services when they make targeted recommendations of information provided by another information content provider, or only limit[s] the liability of interactive computer services when they engage in traditional editorial functions (such as deciding whether to display or withdraw) with regard to such information?"1
In its current form, Section 230 provides that an interactive computer service is not liable for information provided by another information content provider that appears on the service's website (with a few exceptions). Therefore, if the Supreme Court were to uphold the decisions of the lower courts and preserve the existing practice under Section 230, social platforms (or "interactive computer services," as the parties to the Case refer to them) would continue to benefit from this protective legal shield, which in most cases immunizes them against claims arising from third-party content on their platforms.
If the Supreme Court were instead to issue a broad ruling, it would break the settled precedent on the matter and invite a series of potential cases challenging every content moderation decision made by internet platforms of all sizes, from Twitter to local platforms. This would no doubt significantly impact large social network providers such as the platforms of Meta and TikTok; however, it is expected to be even more devastating for small platforms, which may be forced to shut down comments and interactions on their platforms due to the expense and uncertainty of monitoring user submissions.
On this matter, Google argues in the brief it submitted to the Supreme Court on January 12, 2023 that if the Supreme Court decides to change the current and widely accepted application and scope of Section 230 in the sense described above, the result will be a digital experience that reflects "the exact opposite of Congress's legislative intent" and would "impede access to information, limit free expression, hurt the economy, and leave consumers more vulnerable to harmful online content."
Reactions, Comments and Expectations
a. Executive Branch
Former US President Trump was a vocal critic of Section 230, most likely because Twitter and Facebook deleted or tagged his posts containing inaccuracies about Covid-19 and mail-in voting. Consequently, he issued an executive order stating that Section 230 protection should apply only to platforms that moderate in "good faith," and called on the FCC to make rules defining what constitutes good faith. The order asked regulators to redefine Section 230 more narrowly, bypassing the authority of Congress and the courts.
These attempts did not succeed, however, as President Biden revoked the executive order just a few months after taking office. That said, President Biden is not supportive of Section 230 either: he had proposed revoking the section entirely until Congress could agree on how to remedy its shortcomings.
b. Reactions from the Sector
Many players in the tech, news and innovation sectors have submitted amicus briefs to help inform the court in the Case. Many of these argued that technology companies and social network providers were no longer acting "voluntarily" or "in good faith" to restrict objectionable content, as Section 230 requires, and thus should no longer be entitled to the protection it offers.
Some NGOs, on the other hand, argued that limiting Section 230 would be a profound policy change that should be left to Congress, not the judiciary.
Conclusion
Once the Supreme Court drafts a decision and a majority of the justices agree on it, it will become controlling precedent. The Supreme Court is expected to issue its decision in late June or early July 2023. Depending on the outcome of the Case, interactive computer services, social network providers and even interactive news websites could be deprived of the protection conferred by Section 230, which could accelerate the global trend toward regulating big tech and the way platforms moderate and control content. The ruling in the Case may therefore not only directly impact the platforms but also shape how the internet works at large.
Footnote
1. U.S. Supreme Court, Gonzalez v. Google LLC, Petition for a Writ of Certiorari to the United States Court of Appeals for the Ninth Circuit, filed April 4, 2022.