GUEST POST: MANIKA DAYAL - CHALLENGES TO AUTOMATED FILTERING: IS THERE AN ALTERNATIVE?

About the Author: Manika Dayal is a recent graduate of Jindal Global Law School, and has a key interest in Intellectual Property Laws and Policy.

Public demands for internet platforms to intervene more aggressively in online content are steadily mounting. Calls for companies like YouTube and Facebook to tackle issues ranging from “fake news” to virulent misogyny to online radicalization seem to make daily headlines. Some of the most emphatic and politically ascendant messages concern countering violent extremism. Platforms now face increasing pressure to detect and remove illegal content. In the United States, for example, bills pending in the House would remove safe harbour protection from platforms that fail to remove illegal content related to sex trafficking. This growing pressure on online intermediaries has, in turn, drawn mounting attention to a single question: are algorithms that automate the detection and removal of illegal content actually suited to the subject matter?

The hunger for information is never-ending, and the recommendation algorithms on social media platforms often reflect the narrative that users want to weave, even when the final narrative is not desirable. Filter bubbles on these platforms pose a further impediment, which makes it pertinent to think about algorithmic transparency and accountability. Discussions about the impact of artificial intelligence on information intermediaries are often overstated, given the practical realities of the technology: on several occasions, AI-backed content moderation systems have failed in ways that show they are not sophisticated enough to moderate content without human intervention. In other areas, however, AI systems have already had a significant impact, raising concerns about issues like AI bias and algorithmic discrimination.

Laws Governing Content Moderation in the United States


In the early 1990s, with growing concerns about the proliferation of illicit content on the web, especially pornography and piracy, policymakers in the United States responded by enacting the Communications Decency Act (CDA) in 1996, which made it illegal to provide “obscene or indecent” material to minors. The next year, however, in Reno v. ACLU, the Supreme Court held the ban unconstitutional. Part of the law nonetheless survived: Section 230 of the CDA, which shields “interactive computer service” providers from liability for harmful material their users might provide. The section has two parts. First, intermediaries are not liable for the speech of their users, which means they need not continuously monitor and police content. Second, if they do choose to moderate, they do not lose the safe harbour for having done so. The primary conclusion to be drawn from the section is that it grants platforms broad immunity, with the express goals of promoting economic development and free expression. Like Section 230, the safe harbour provision in Section 512(c) of the Digital Millennium Copyright Act (“DMCA”) also protects online service providers, in the form of an affirmative defence to copyright infringement claims arising out of user content displayed at the direction of a user, provided certain “safe harbour” conditions are met. As with the CDA, the legislative intent behind Section 512 was to foster the growth of Internet-based services by limiting the liability of compliant service providers.

How Platforms are Governed: From Intermediary Liability to ‘Platform Responsibility’

Intermediaries have been a subject of consideration since the inception of the internet, and a number of approaches have been developed for governing their responsibilities and liabilities. It is clear that the private ordering of speech by platforms demands greater regulation than merely framing conditions for their liability for third-party content. The discussion surrounding intermediary liability is fast changing into one that demands more from platforms, moving towards what the Internet Governance Forum has termed “platform responsibility”. In the U.S., the regulations discussed above are limited by a constant reluctance to constrain speech, whereas internationally the same platforms face a wider array of restrictions. Social media platforms’ goals are not limited to meeting legal requirements or avoiding the imposition of additional policies; they also want to avoid losing offended or harassed users, to placate advertisers eager to associate their brands with a healthy online community, to protect their corporate image, and to honour their own personal and institutional ethics. Platforms vary, both in the influence they can exert over users and in how they should be governed. Legally speaking, broad and conditional safe harbours are clearly advantageous for internet intermediaries; takedown requirements, however, generate real challenges for platforms and are often prone to abuse.

Most of these laws were not designed with social media platforms in mind: the legislation did not contemplate platforms where user anonymity is the norm and where illicit content moves freely across jurisdictions. Social media platforms in particular stand not only between user and user, and user and public, but also between citizens and the law enforcement agencies, policymakers, and regulators charged with governing their behaviour. Despite this lack of anticipation, social media platforms continue to enjoy the safe harbour provisions, even though, unlike other ISPs, they are legal and corporate entities in the United States while serving millions of users living in countries that impose much stricter liability or have specific requirements about responding to state or court requests to remove content. Major social media platforms have therefore had to develop their own policies on how to respond to requests from foreign governments to remove content. Section 230 of the CDA acts as a shield against almost any claim that would hold a platform responsible for its users’ speech; platforms lose the immunity only if they help create or develop that speech. In the name of promoting economic development and free speech, Section 230 also leaves room for lax enforcement and weak remedies against users who create illicit content, and it provides no recourse for those harmed by anonymous online speech or online trolls. In practice, therefore, platforms struggle to adhere to the law without also taking on the task of moderating “online trolling”.

The DMCA, on the other hand, has undeniably played a major role in the unprecedented growth of the internet. Several critics have highlighted, however, that this growth has also led to delays in responding to complaints against illicit content: infringing content is posted faster than it can be identified and flagged for removal. The DMCA imposes an obligation to disable access to specifically identified infringing files, but no obligation to locate and remove other copies of the works identified in particular takedown requests. Consequently, a rightsholder must send a takedown notice for each copy of a particular work on any given website, a burden more extensive than the statute appears to have contemplated. It has been suggested, including in litigation before United States courts, that implementing content filtering technologies to pre-screen all user uploads should be a requirement for qualifying for the safe harbour. Automated moderation of online content quickly presents itself as the easy solution to this problem.

Automated Filtering and its Challenges

There have been significant advances in content filtering technologies since the adoption of the DMCA. To handle the expanded volume of takedowns, both major notifiers and major platforms rely increasingly on automation rather than human review. However, all of the tools tasked with locating infringing material suffer from severe limitations in accuracy and adaptability. It is also pertinent to note the nature of platforms themselves: recommendation is itself a form of moderation, in that a platform tends to “promote” the categories of content a user prefers, based on his or her activity on the platform. The problem arises when a user’s preferences include partly or wholly illicit content, and it is aggravated by filter bubbles and echo chambers. This underlying structure creates an unending series of crises that moderation is then expected to resolve, as the sketch below illustrates. Even if each individual incident can be managed by removing the most harmful content, the repercussions will already have been set in motion. This basic dynamic highlights the predominant problem with platforms: virality.
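As a rough illustration of that dynamic, here is a minimal, hypothetical sketch of preference-based promotion. The category names, engagement history, and ranking rule are invented for illustration and do not describe any platform’s actual recommendation system.

```python
from collections import Counter

def rank_feed(candidate_posts, user_history):
    """Order candidate posts by how often the user engaged with each category.

    candidate_posts: list of (post_id, category) tuples.
    user_history: list of categories the user previously engaged with.
    """
    # Count past engagement per category; heavily-engaged categories float up.
    preference = Counter(user_history)
    return sorted(candidate_posts,
                  key=lambda post: preference[post[1]],
                  reverse=True)

# A user who has mostly engaged with one fringe category...
history = ["conspiracy", "conspiracy", "conspiracy", "news"]
feed = rank_feed([("p1", "news"), ("p2", "sports"), ("p3", "conspiracy")],
                 history)
print(feed)  # [('p3', 'conspiracy'), ('p1', 'news'), ('p2', 'sports')]
# ...sees more of that category, engages with it more, and the loop tightens:
# the "filter bubble" described above. Removing an individual post does not
# change the loop itself.
```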

To help explain why filtering technologies cannot realistically be expected to accurately identify and eliminate copyright infringement on online platforms, it is helpful to examine the basic functionality of the most commonly employed filtering technologies and to describe their uses and limitations.

Automated filtering, while making the enforcement of rights easier, also creates room for more error. Individual platforms and industry groups therefore build their own filtering or monitoring technologies to identify and remove unwanted content, and these have featured in several current debates about platform responsibility. In his April 2018 testimony before Congress, Facebook CEO Mark Zuckerberg said that filters aided by artificial intelligence would one day block harmful content ranging from fake news to terrorist propaganda. But the technology behind content filters is often poorly understood. The most technologically advanced automated language filters use natural language processing (NLP) and sentiment analysis to identify illicit text, often alongside a blocklist of forbidden phrases, yet their accuracy rates have hovered in the 70–80 percent range. A persistent problem is that these filters inevitably miss humour and sarcasm, and they often perform poorly on languages other than those their developers speak. More sophisticated technologies, like hash-based identification and audio/visual fingerprinting, allow a platform to search for duplicates of known content and automatically remove them or flag them for human evaluation. YouTube built on these technologies to develop its own Content ID system, which reportedly cost 60 million dollars, yet it remains subject to human and technical error: among its removals of non-infringing content, Ariana Grande’s benefit concert reportedly disappeared midstream from the artist’s own YouTube account.
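To make those limitations concrete, here is a minimal, hypothetical sketch of the two approaches described above: a blocklist filter that matches forbidden phrases, and a hash-based check against files already identified as infringing. The phrases, file bytes, and examples are invented for illustration and are not any platform’s actual system.

```python
import hashlib

# Hypothetical blocklist filter: flags text containing forbidden phrases.
# The phrases are invented examples, not any platform's real list.
BLOCKLIST = {"buy illegal weapons", "pirated stream here"}

def blocklist_flag(text: str) -> bool:
    lowered = text.lower()
    return any(phrase in lowered for phrase in BLOCKLIST)

# Hypothetical hash-based matcher: compares an upload's SHA-256 digest
# against digests of files already identified as infringing.
KNOWN_INFRINGING_HASHES = set()  # would be populated from past takedowns

def hash_flag(upload_bytes: bytes) -> bool:
    return hashlib.sha256(upload_bytes).hexdigest() in KNOWN_INFRINGING_HASHES

# The limitations described above, in miniature:
satire = "a satirical post quoting 'pirated stream here' in order to mock it"
print(blocklist_flag(satire))  # True: the filter cannot read sarcasm or context.

original = b"some copyrighted file bytes"
KNOWN_INFRINGING_HASHES.add(hashlib.sha256(original).hexdigest())
print(hash_flag(original))            # True: an exact copy is caught.
print(hash_flag(original + b"\x00"))  # False: one changed byte defeats
                                      # exact hashing.
```

Perceptual audio/visual fingerprinting, of the kind Content ID relies on, addresses the single-changed-byte problem by matching on features rather than raw bytes, but it is no better than exact hashing at judging context or legality.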

Platforms inherently have dual motives, political and commercial. The ever-present obstacle for automated moderation tools is that they lack the ability to “judge” content, to decide from mere inspection whether it violates the law. There is little evidence to suggest that automated tools are developed enough to be put in charge of deciding what content is illegal. Laws governing speech also vary across jurisdictions, and different countries have their own interpretation of what constitutes “unlawful” speech: in Germany, Nazi propaganda is illegal; in Spain, it is illegal to insult the king. Applying this observation to the automated detection of illegal content complicates things considerably. It becomes impossible to train a single classifier that can be applied generally; essentially, each jurisdiction needs its own classifier. And because speech and norms evolve over time, a word that is innocuous today could carry a new meaning tomorrow, so a model that bases classification on the presence of certain keywords can produce unexpected false positives, especially in the absence of context. A different threshold may be possible for what we call “hate speech”, which may be more readily identifiable, but that is a specific case and a discussion for another piece.
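A toy sketch of why a single classifier does not travel across jurisdictions, assuming made-up keyword rules per country; these stand in for legal tests that are in reality far more nuanced than keyword presence.

```python
# Hypothetical per-jurisdiction keyword rules, standing in for the fact that
# "unlawful speech" is defined differently in each country.
JURISDICTION_RULES = {
    "DE": {"nazi propaganda"},   # illegal in Germany
    "ES": {"insult the king"},   # illegal in Spain
    "US": set(),                 # broadly protected in the United States
}

def flag_for_jurisdiction(text: str, country: str) -> bool:
    lowered = text.lower()
    return any(keyword in lowered for keyword in JURISDICTION_RULES[country])

post = "Historians debate how Nazi propaganda shaped the 1930s."

# The same post is flagged in one jurisdiction and not another, so no single
# classifier can be "generally" correct...
print(flag_for_jurisdiction(post, "DE"))  # True
print(flag_for_jurisdiction(post, "US"))  # False
# ...and the German result is itself a false positive: keyword presence alone
# cannot distinguish historical commentary from advocacy without context.
```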

Fundamentally, automated filters can hardly substitute for human judgment, particularly on legal questions. Simply put, an algorithm is bound to perceive an album cover image promoting illicit content the same way it perceives an album cover for a concert. One example is YouTube’s automated systems taking down the channel of a UK-based human rights organization documenting war crimes in Syria. For content such as child pornography, where no legal context can make it lawful, algorithm-based moderation does not risk curbing legitimate speech; for complex speech that demands context, however, substantial problems may arise.

In order to combat impediments such as those mentioned above, and after several years of criticism of Facebook’s content moderation policies and algorithms, Mark Zuckerberg recently announced an initiative towards more effective content moderation. Dubbed the Facebook ‘Supreme Court’, the independent board, comprising 20 members from 27 countries, will hold the power to overrule decisions made by the company’s CEO about the content displayed on Facebook and Instagram.

Conclusion

We tend to defend platforms as free conduits of speech until we are too troubled by something that moves freely through their systems. When the government, an aggrieved user, or the population at large demands that the platform “do something” about the problem, the request generally lies somewhere between a genuine belief in the platform’s responsibility and the practicality of looking to it to intervene. Before considering dangerous mandatory content filtering rules, policymakers should understand the inherent limitations of filtering technologies. It is evident that human presence and intervention will be necessary in most cases; the immediate solution that presents itself is to treat automated moderation as an assistive tool rather than the only tool. Figuring out the appropriate division of labour between machines and humans is a challenging technical, social, and legal problem. Modifying the principle of safe harbour for social media platforms, rather than borrowing it in its entirety from a law designed for ISPs, search engines, and other online intermediaries, would also be a step in the right direction.