By Anushka Verma

The 21st century is the age of what can be termed an ‘infodemic’. A constant flow of information has left no corner of the world untouched, facilitated by social media platforms like Facebook, Twitter, Instagram, WhatsApp, Snapchat etc. Freedom of speech and expression has attained new heights, with everyone able to post to the internet whatever they feel, their works of art, and their opinions with a single click. However, this has also given rise to content that may be inappropriate for wide circulation on the internet.

For the last few years, Facebook has been in the thick of things. It was called out for the wide dissemination of posts that encouraged genocide in Myanmar, for not fact-checking speech in political ads, for banning historians who shared relevant but disturbing photos, for removing a Pulitzer-Prize-winning photograph on account of ‘nudity’, for failing to shut down fake news and misinformation during the US Presidential elections, and most famously for the Cambridge Analytica data breach scandal. To address these concerns, Facebook announced in November 2018 that it would set up an independent oversight body to decide issues regarding posts that have been taken down by Facebook. This body shall essentially regulate Facebook’s content, and its decisions shall have to be accepted by the social media platform.

The Oversight Board: An Overview

The Oversight Board is a major step towards corporate self-governance. Its charter was unveiled in June 2019, bylaws were released in January 2020 and on 6 May 2020, 20 members were appointed to the Board. The chosen members form an international panel of free expression advocates, journalists, a former prime minister, a Nobel laureate, and law professors. A notable appointment to the Board is Prof. Sudhir Krishnaswamy, the Vice-Chancellor of NLSIU, Bangalore. An essential feature of this Board is its independence. Although the Board is supported by a trust fund of $130 million granted by Facebook, it can accept contributions from other entities, and is completely independent of Facebook. It shall work initially for a term of 6 years, and this tenure cannot be prematurely ended.

The Board shall act as a ‘Supreme Court’. Whenever Facebook’s content moderation team takes down a post according to the platform’s policies, users shall have a right to appeal against it. The channel of appeal ultimately leads to the Board, which will decide whether the post shall stay down or stay up on the platform. The decision shall be within the purview of freedom of speech and expression, but circumscribed by the greater concerns of international human rights norms. Facebook is bound to comply with the decisions of the Board. While the Board does not have the power to change Facebook’s content policy, it can issue non-binding advisory opinions on the same. In its initial phase, the Board will only rule on disputes involving posts that have been taken down. As of now, no provision has been made for disputes over posts that have been left up on the platform. However, it has been stated that the Board’s scope shall gradually be expanded to include posts that have been left up as well.

In an opinion-editorial piece to the New York Times, the newly appointed Board members have said, “We will not be able to offer a ruling on every one of the many thousands of cases that we expect to be shared with us each year. We will focus on identifying cases that have a real-world impact, are important for public discourse and raise questions about current Facebook policies. Cases that examine the line between satire and hate speech, the spread of graphic content after tragic events, and whether manipulated content posted by public figures should be treated differently from other content are just some of those that may come before the board.”

The Way Forward or a Façade of Oversight?

As with every development, there are two streams of thought with regard to the internet ‘Supreme Court’ as well. Supporters believe that it shall bring a level of accountability to Facebook. People aggrieved by the platform’s decisions now have an independent body that shall address their grievances. It shall also add a level of transparency to Facebook’s actions and its content moderation policies: once the Board has made policy change recommendations, Facebook shall have to respond to such suggestions publicly and give reasons why they will or will not be adopted. Another point raised is that this can be adopted as a general model for platform governance across various social media platforms. It has also been suggested that, since the Board is an independent body, it can gradually expand its operations to move beyond Facebook.

However, critics also point out the faults in this system. One major bone of contention is the limited scope of what content the Board can oversee, by virtue of Article 2 of the bylaws. It states that the following content shall not be within the purview of the Board’s powers:

Content types: content posted through Marketplace, fundraisers, Facebook Dating, messages, and spam.

Decision types: decisions made on reports involving intellectual property or pursuant to legal obligations.

Services: content on WhatsApp, Messenger, Instagram Direct, and Oculus.

This considerably narrows the scope of content to be analysed by the Board, limiting it to posts on Facebook and Instagram feeds. It does not take into account the fear-mongering content, hate speech and fake news that spread through what is now being called ‘WhatsApp University’. (A contrary point is that interference in private messaging spaces can be termed a violation of privacy.)

This decision to exclude WhatsApp is interesting to note, particularly from an Indian standpoint. Since June 2019, Facebook has been involved in a tussle regarding the circulation of messages on WhatsApp. Cases were filed in the Madras, Bombay and Madhya Pradesh High Courts with a similar premise: the petitioners asked Facebook to trace where messages originated, citing public order and the maintenance of peace and security as an exception carved out to the right to freedom of speech. To the contrary, Facebook asserted that due to its end-to-end encryption policy, it cannot pinpoint where messages originate. It then appealed to the Supreme Court to club all these cases together, pointing out that the platform has users across India and that the apex court would be the appropriate forum for such decisions. In January 2020, the Supreme Court directed the Madras High Court to transfer all files related to the ‘WhatsApp traceability’ case to the apex court. Although the case has not progressed in the Supreme Court, it is interesting to note that even when Facebook formulated the bylaws for the Oversight Board, it chose to overlook the ongoing litigation and did not carve out an exception for WhatsApp, especially in cases concerning public order and the maintenance of peace and security of states. It is also an indicator of Facebook’s stance of not divulging users’ details or messages, and of not wishing to step on the toes of their privacy in private messaging spaces.

Further, pursuant to Article 5, the Board cannot amend critical bylaws without the approval of a majority of the individual trustees, the agreement of Facebook, and a majority of the Board. This casts doubt on whether the Board is truly independent. Article 2 also provides that the Board’s decisions shall not set a precedent, as they shall apply only to the factual matrix being assessed. While Facebook shall analyse similar fact-sets, it is not bound to take action on them.

Another concern is that the bylaws only allow moderation of content that has been taken down by Facebook. However, critics suggest that the problem lies much deeper, i.e., in Facebook’s algorithms, which rely on personal data to curate feeds according to what they predict the viewer will like, and to serve the ads that will generate maximum profit. Algorithms that allow such information to be freely disseminated must be kept in check. The present model only decides individual grievances, not these deep-seated problems.

Another riveting debate reignited by the formation of the Oversight Board is the freedom of speech and expression v. censorship debate. Will the Board have an overarching impact on individuals’ freedom of speech? How will it maintain uniformity across countries with different standards of what is acceptable? Something acceptable under American freedom of speech standards may not be permissible under Indian freedom of speech and expression standards. It remains to be seen whether an international panel can reach a consensus on how posts shall be moderated.

It is the opinion of the author that the Oversight Board is the first step towards platform governance. While critics may see it as a measure of platform interference, we must also understand that the need to regulate content that may violate international human rights norms is commensurate with the burgeoning infodemic the world is facing. While I understand that multiple concerns about the supremacy of free speech shall be raised, it is also imperative to understand that free speech cannot trump the principles of international harmony and co-existence. The Board must soon expand its operations to decide on posts that have been left up, as well as on Facebook’s content governance policies themselves.