About the Author: Shubham Singh, Fourth Year, B.A. LL.B. (Hons.) Candidate, Hidayatullah National Law University, Raipur.

On May 26, 2020, President Donald J. Trump tweeted that mail-in ballots would be substantially fraudulent and subject to robbery and forgery. Pursuant to its terms of use and its policy against election interference and manipulation, Twitter appended a fact-check label to the tweet: a clickable link directing readers to other sources with more information on the subject. President Trump retaliated against Twitter's action by issuing the Executive Order on Preventing Online Censorship (the "Order"). The Order seeks to impose liability on such platforms if they act beyond good faith in removing objectionable content and instead engage in deceptive or pretextual conduct. In this article, the author argues that the Order would potentially lead to an extensive form of censorship, consequently curtailing free speech in violation of the First Amendment.

To interact over the internet, individuals depend on internet intermediaries such as internet service providers, social media platforms, and websites. These intermediaries subject individuals who share information, opinions, and ideas on their platforms (such as Twitter or Facebook) to certain terms of use, as part of their corporate social responsibility, and censor content that does not meet community standards. However, they are under no legal obligation to censor and are, in fact, protected from liability for content published on their platforms. This protection emanates from § 230(c) of the Communications Decency Act of 1996, under which internet intermediaries that publish third-party content are shielded from any law that would otherwise hold them liable for what those third parties share through them.

The Order puts this protection at risk of dissolution, which may lead to what Professor Jack Balkin has termed "collateral censorship." It occurs when one entity censors another out of fear that the government will hold the former liable for the effects of the latter's speech. Samuel S. Sadeghi provides an illustration:

“Imagine a political candidate who wishes to place an advertisement in the local newspaper that criticizes his opponent. As the intermediary, the newspaper will engage in a rough cost-benefit analysis to decide whether running the advertisement is a profitable transaction. One would expect the newspaper to agree to run the advertisement in return for a profitable sum. But what if the newspaper believes that the advertisement may expose it to a defamation lawsuit and for that reason refuses to take the deal? This is an illustration of collateral censorship.”

Before concluding that collateral censorship would curtail free speech, it is pertinent to understand why internet intermediaries would adopt it. If an intermediary has to moderate content to check whether it is defamatory, it must grapple with questions of both law and fact. As to the questions of law, it must sift through fifty different state defamation laws to determine which applies. Further, defamation is subject to exceptions and preconditions that can be resolved only through a thorough investigation of the facts. Not only is it difficult to quickly determine whether certain speech is merely critical or actionable defamation, but the difficulty is amplified by the sheer volume of content websites face.[i] A large investment of labour, time, and technology would be required to reach optimal accuracy in moderation.

In New York Times Co. v. Sullivan,[ii] the Court observed that speakers may be deterred from publishing certain content, even though it is believed to be true and even though it is, in fact, true, because of doubt whether it can be proved in court or fear of the expense of having to do so. The same logic applies to websites: whether or not they believe a potential lawsuit is meritorious, they will often default to removal because of the potential costs of litigation or an adverse result. In general, websites would err on the side of caution, removing allegedly defamatory content instead of engaging in a costly legal and factual investigation.

Another possible solution is employing complex artificial intelligence (AI) systems for moderation. However, studies show that depending on AI may lead to inequitable criminalisation on a population-wide level. The White House's Office of Science and Technology Policy has noted the threat that our increasing reliance on opaque artificial agents poses to privacy, civil rights, and individual autonomy, warning that discrimination may become encoded in automated decision-making. Given the risk of costly litigation and adverse judgments, the expense of not censoring content far exceeds the comparatively low cost of collateral censorship.

This is where the Order falls foul of the First Amendment, which protects free speech. In Bantam Books, Inc. v. Sullivan,[iii] the Court held that the government may not censor lawful speech through threats of liability directed at intermediaries; this principle extends to the online speech of ordinary people when such threats are aimed at the internet intermediaries that provide the platform for that speech. With the Order pushing intermediaries toward collateral censorship, the first speech to be suppressed would be that of the most vulnerable and marginalised groups, whose views cannot be published anywhere else. Non-profit websites like Wikipedia, which depend on § 230 of the Communications Decency Act, would no longer be able to provide access to free information. And lastly, any non-defamatory speech of individuals carrying even an iota of controversy would be curtailed, leaving majority views to thrive over minority expression.


[i] Zeran v. America Online, Inc., 129 F.3d 327 (4th Cir. 1997).

[ii] 376 U.S. 254 (1964).

[iii] 372 U.S. 58 (1963).