
On 25 February 2021, the Government of India notified the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. The new Rules broaden the scope of the entities they cover to include curated-content platforms (such as Netflix) as well as digital news publications. This blogpost analyses the rule on automated filtering in the context of the growing use of automated content moderation.

This article first appeared on KU Leuven's Centre for IT and IP (CITIP) blog. Cross-posted with permission.

----

Matthew Sag, in his 2018 paper on internet safe harbours, discussed how the internet shifted control over what knowledge gets showcased from the traditional gatekeepers (publishing houses) to anyone with access to the internet who wishes to present their work. A “content creator” today ranges from legacy media companies to any person with a smartphone and an internet connection. Along a similar trajectory, as websites and mobile apps have multiplied along with the functions they serve, the scope of what counts as an internet intermediary has widened all over the world.

Who is an Intermediary?

In India the definition of “intermediary” is found under Section 2(w) of the Information Technology (IT) Act, 2000, which defines an intermediary, “with respect to any particular electronic records”, as any person who “on behalf of another person receives, stores or transmits that record or provides any service with respect to that record and includes telecom service providers, network service providers, internet service providers, web-hosting service providers, search engines, online payment sites, online-auction sites, online-marketplaces and cyber cafes”. The all-encompassing nature of this definition has allowed the evolving forms of intermediaries to remain covered by the Act and by the Guidelines published under it periodically (2011, 2018 and 2021). With more websites, more social media companies, and even more content creators online today, there is a need to look at the ways in which intermediaries can remove illegal content or content that goes against their community guidelines.

Along with the definition of an intermediary, the IT Act, under Section 79, grants safe harbour to internet intermediaries by exempting them from liability for third-party content, and further empowers the central government to make Rules that act as guidelines for intermediaries to follow. The Intermediary Liability (IL) Rules hence seek to regulate content and lay down the conditions under which intermediaries and internet service providers retain that safe harbour, allowing the framework to keep pace with the changing nature of the internet and of internet intermediaries. India has so far published three versions of the IL Rules: the first in 2011, followed by draft amendments in 2018, and finally the 2021 version, which supersedes the earlier Rules of 2011.

The Growing Use of Automated Content Moderation 

Each version of the Rules introduced changes meant to keep them abreast of the changing face of the internet and the changing nature of both content and content creators. The 2018 version of the Rules accordingly showcased a push towards automated content filtering. The text of Rule 3(9) read as follows: “The Intermediary shall deploy technology based automated tools or appropriate mechanisms, with appropriate controls, for proactively identifying and removing or disabling public access to unlawful information or content”.

Under Rule 3(9), intermediaries were required to deploy automated tools or appropriate mechanisms to proactively identify, remove or disable public access to unlawful content. However, neither the 2018 IL Rules nor the parent Act (the IT Act) specified which content could be deemed unlawful. The 2018 Rules also failed to establish the specific responsibilities of the intermediaries, relying instead on vague terms such as “appropriate mechanisms” and “appropriate controls”. Hence, though the Rules mandated the use of automated tools, neither they nor the IT Act provided clear guidelines on what could be removed.

This lack of clear guidelines, and of a list of content that could be removed, left it to the intermediaries to decide which content, if not actively removed, could cost them their immunity. It has previously been documented that the lack of clear guidelines in the 2011 version of the Rules led intermediaries to over-comply with takedown notices, often taking down content that did not warrant it. This existing tendency to over-comply, combined with automated filtering, could have resulted in a number of unwarranted takedowns.

While the 2018 Rules mandated the deployment of automated tools, it was in 2020 (possibly due to pandemic-induced work-from-home safety protocols and global lockdowns) that major social media companies announced a move towards fully automated content moderation. Though automated content removal seems like the right step considering the trauma that human moderators have to go through, the algorithms now used to remove content rely on the parameters, practices and data from earlier removals made by human moderators. More recently in India, with the emergence of the second wave of COVID-19, the Ministry of Electronics and Information Technology asked social media platforms to remove “unrelated, old and out of the context images or visuals, communally sensitive posts and misinformation about COVID19 protocols”.
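
To make that dependence on past human decisions concrete, the sketch below shows, in a very simplified form, how a moderation classifier might be trained on an archive of earlier human takedown decisions. It is purely illustrative: the sample posts, labels, threshold and scikit-learn pipeline are assumptions made for the example, not a description of any platform's actual system.

# Illustrative sketch only: a toy text classifier trained on past human-moderator
# decisions, showing how automated moderation inherits the labels, biases and
# thresholds of earlier manual removals. The sample posts, labels and the 0.8
# threshold are invented for illustration; real systems are far more complex.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical archive of posts previously reviewed by human moderators:
# 1 = removed by a moderator, 0 = left up.
past_posts = [
    "buy followers cheap, click this link now",
    "breaking news report on local elections",
    "graphic violent threat against a user",
    "photos from our community clean-up drive",
]
past_decisions = [1, 0, 1, 0]

vectoriser = TfidfVectorizer()
features = vectoriser.fit_transform(past_posts)

# The model generalises whatever patterns the human decisions contained,
# including any over-removal tendencies present in that history.
model = LogisticRegression().fit(features, past_decisions)

def should_auto_remove(new_post: str, threshold: float = 0.8) -> bool:
    """Flag a new post for automated removal if the model's confidence
    that it resembles previously removed content exceeds the threshold."""
    probability = model.predict_proba(vectoriser.transform([new_post]))[0][1]
    return probability >= threshold

Whatever over-compliance existed in the historical takedown record is, in a setup like this, simply reproduced at scale and speed.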

The New IL Rules - A ray of hope?

The 2021 version of the IL Rules provides a more nuanced approach to the use of automated content filtering than the earlier version. Rule 4(4) now requires only “significant social media intermediaries” to use automated tools to identify and take down content pertaining to “child sexual abuse material”, or “depicting rape”, or any information identical to content that has already been removed through a take-down notice. The Rules define a social media intermediary as an “intermediary which primarily or solely enables interaction between two or more users and allows them to create, upload, share, disseminate, modify or access information using its services”. The Rules go a step further to create another category, the significant social media intermediary (SSMI), defined as one “having a number of registered users in India above such threshold as notified by the Central Government”. Hence which social media intermediaries qualify as significant ones could change at any time.
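
The requirement to flag information identical to content that has already been removed is, in practice, often implemented by matching content hashes against a blocklist of previously removed material. The sketch below is a deliberately simplified illustration of that idea; the file paths, the blocklist entry and the use of exact SHA-256 matching are assumptions made for the example, since real deployments typically rely on perceptual hashes that also catch resized or re-encoded copies.

# Illustrative sketch only: detecting re-uploads of previously removed material
# by matching content hashes against a blocklist. This toy version uses exact
# SHA-256 matching; the blocklist entry below is a hypothetical value.
import hashlib

def sha256_of_file(path: str) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hashes of items already taken down following a take-down notice (hypothetical).
removed_content_hashes = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def is_previously_removed(upload_path: str) -> bool:
    """Block a new upload if its hash matches previously removed content."""
    return sha256_of_file(upload_path) in removed_content_hashes

The practical consequence is that a single (possibly mistaken) takedown can propagate automatically to every future copy of that content, which is why the over-compliance concerns discussed below persist.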

Along with adding this new threshold (qualifying as a significant social media intermediary), the Rules, in contrast to the 2018 version, also emphasise the need for such removal to be proportionate to the interests of freedom of speech and expression and the privacy of users. The Rules also call for “appropriate human oversight” as well as a periodic review of the tools used for content moderation. By using the term “shall endeavour”, the Rules reduce the pressure on the intermediary to set up these mechanisms: the requirement is now on a best-effort basis, as opposed to the word “shall” in the 2018 version of the Rules, which made it mandatory.

Although the Rules now narrow down the instances where automated content removal can take place, concerns around over-compliance and censorship still loom. One reason for concern is that the Rules still fail to require intermediaries to set up a mechanism for redress or appeals against such removal. Additionally, the provision allowing automated systems to remove content that has previously been taken down creates cause for worry, as the propensity of intermediaries to over-comply and take down content has already been documented. This brings us back to the earlier problem of social media companies’ automated systems removing legitimate news sources. Though the 2021 Rules try to clarify certain provisions related to automated filtering, such as the addition of safeguards, they also suffer from vague provisions that could cause compliance issues. The use of terms such as “proportionate”, “having regard to free speech”, etc. fails to lay down definitive directions for the intermediaries (in this case SSMIs) to comply with. Additionally, as stated earlier, whether an intermediary qualifies as an SSMI can change at any time, based either on a change in its number of users or on a change in the user threshold notified by the government. The absence of human intervention during removal, vague guidelines and the fear of losing safe harbour protections add to the already increasing trend of censorship on social media. With proactive filtering through automated means, content can be removed almost immediately, which could mean that certain content creators are effectively unable to post their content online at all. Given India’s current influx of new internet users, some of these creators would also be first-time users of the internet.

Conclusion

The need for automated removal of content is understandable, based not only on the sheer volume of content but also on the nightmare stories of the toll it takes on human content moderators, who otherwise have to go through hours of disturbing content. Though the Indian Intermediary Liability Guidelines have improved on the earlier versions by moving away from mandating proactive filtering, there still needs to be consideration of how these technologies are used, and the law should take into account the shift in who counts as a content creator. There need to be avenues of recourse against unfair removal of content, and a means of getting an explanation of why the content was removed, via notices to the user. In the case of India, the notices should be in Indian languages as well, so that people are able to understand them.

In the absence of clearer guidelines, the peril of over-censorship by intermediaries seeking to stay out of trouble could further stifle not just freedom of speech but also access to information. In addition, the fear of content being taken down, or even of potential prosecution, could lead people to self-censor, preventing them from exercising their fundamental rights to freedom of speech and expression as guaranteed by the Indian Constitution. We hope that the next version of the Rules takes a more nuanced approach to automated content removal and provides adequate and specific safeguards to ensure a conducive environment for both intermediaries and content creators.
