The Thin Line Between Countering Online Disinformation and Restricting Free Speech

Number: 84
Year: 2024
Author(s): Marco Bassini

The U.S. Supreme Court will soon rule on three cases with important implications for the ability of social networks to moderate content, particularly when it comes to disinformation and political propaganda. These developments may also have an impact in Europe


A few years ago, the U.S. Supreme Court, in a case well known to Internet law scholars (Packingham v. North Carolina), described cyberspace as the modern public square. The Court was simply reaching for an apt metaphor to highlight the essential role of cyberspace, and especially social media, in modern society, and to underscore the shift from physical to virtual places as the predominant forums for the exercise of free speech.

The Court could hardly have imagined that this wording, far from implying a normative claim, would trigger a key debate on the legal status of social networks. Behind this simple metaphor lies the dilemma of whether social networks, given their vital role, should be held to the same standards of protection of freedom of expression that bind state authorities, under which only illegal content can be silenced.

The constitutional background

In the U.S., as in Europe and other jurisdictions, interferences with free speech rights are constitutionally permissible only for illegal content, i.e., content that the law qualifies as such (defamation, for example). ‘Censorship’ by government actors must therefore be limited to content so qualified by law. Any other content-related restriction that is not based on law, as in the case of disinformation, would amount to impermissible viewpoint discrimination.

However, this is not the case on social networks, where content moderation is primarily based on the terms of service (a form of private governance, in Balkin’s words), and service providers therefore enjoy greater discretion. This means that even content that is not illegal may be ‘censored’ simply because it violates the applicable community standards or guidelines: disinformation that does not harm a legally protected societal interest is an example of content that is not illegal but, at least under certain conditions, harmful.

As a result, individuals may enjoy less protection when their speech is hosted by private platforms, as the latter can silence what state authorities cannot. While this conclusion makes sense from a legal perspective, since private actors are not obliged to protect free speech in the absence of horizontal effects of constitutional obligations, one may question whether it is sufficiently future-proof in light of the key role of social networks in the modern public sphere. Aren’t social networks essential facilities for the exercise of freedom of expression (the ‘modern public square’)?

A right to content moderation? 

In the wake of cases involving, among others, former President Donald Trump, US lawmakers have taken initiatives to limit the ability of social networks to engage in content moderation, especially when it comes to the speech of political figures.

In 2021, Florida (through Senate Bill 7072) and Texas (through House Bill 20) passed laws combining transparency requirements with restrictions on content moderation, including a prohibition on ‘deplatforming’ candidates running for office. These initiatives were driven by concerns that social networks’ alleged bias toward certain political views might shape the way they police content, particularly during election periods. Both laws have now come under the scrutiny of the Supreme Court after being challenged and enjoined in NetChoice v. Moody and NetChoice v. Paxton, respectively.

Interestingly, the federal appellate courts took very different views on the laws’ compatibility with the First Amendment and on the existence of a ‘right’ to private content moderation as part of social media platforms’ free speech rights: the Eleventh Circuit largely upheld the injunction against the Florida law, while the Fifth Circuit sided with Texas and upheld its law.

The Supreme Court’s upcoming decision on the two cases will be an opportunity to test the traditional understanding of social networks and their relationship to free speech rights, and to weigh in on the fit between private content moderation and the standards applicable to government actors.

‘I can’t do it but perhaps you can…’

Another case pending before the Supreme Court will soon address a similar issue, albeit from a slightly different perspective. In Murthy v. Missouri, the Court will review a preliminary injunction against the federal government that prevents some of its officials from influencing how social networks moderate certain types of content that are not illegal.

In the case at hand, officials in various federal agencies were reported to have pressured platforms over Covid-19-related disinformation during the pandemic, giving rise to some of the complaints that ultimately led to the Fifth Circuit Court of Appeals injunction now under review. In a nutshell, the government allegedly tried to have social media platforms silence content that it could not directly censor.

While Moody and Paxton concern whether it is consistent with the First Amendment to require social networks to host speech that they would remove, Murthy asks the Court to rule on the constitutionality of inducing the opposite, i.e., the removal of content that service providers would otherwise leave up. In short, the question is whether social networks enjoy editorial freedom even though they are not publishers, which is not far from the question already at stake in Moody and Paxton.

Recall that the First Amendment prevents the government from abridging the freedom of speech. But what if the abridgment is not directly caused by the government, but is prompted by its conduct?

In Murthy, the Fifth Circuit held that a First Amendment violation occurs, through the restriction of permissible speech, ‘when a private party is coerced or significantly encouraged by the government to such an extent that his “choice”, which would be unconstitutional if made by the government, must as a matter of law be deemed to be that of the state’. Regardless of the validity of this assumption, which remains to be proven, the case raises the question of whether a line can be drawn between permissible persuasion and coercion or significant encouragement, with potential implications for the right of platforms to engage in content moderation. And, of course, for the global fight against disinformation, which is a litmus test for the success of recent legislative initiatives in Europe and the US.

The European understanding of free speech online

The public versus private dilemma regarding the nature of online platforms is a common one for regulators and courts in different jurisdictions. It is no coincidence that back in 2022 the EU passed the Digital Services Act (DSA), a regulation that, among other things, introduced stricter obligations for very large online platforms to increase transparency in content moderation. 

Without questioning the private nature of these platforms and services, the DSA requires the major players in the digital services market to comply with specific requirements that reflect the key role they play in the modern public sphere. A clue to this rationale lies in Article 14, which requires hosting providers (whether very large or not) to ‘take due account of the rights and legitimate interests of all parties concerned, including the fundamental rights of recipients of the service’ when moderating content in accordance with their terms and conditions. 

This provision does not, per se, make freedom of expression directly enforceable against social networks, but it reflects a commitment to placing some constraints on private content moderation in light of its huge impact on the public sphere and, ultimately, on democracy. This is why the DSA, along with other co-regulatory measures (such as the 2022 Strengthened Code of Practice on Disinformation), is among the pillars of the EU strategy against disinformation. Whether the upcoming decisions of the US Supreme Court will pave the way for revisiting this approach on the eve of the European Parliament and US presidential elections, however, remains to be seen.



