In retaliation for Twitter labelling one of his tweets as misleading, US President Donald Trump signed an executive order on May 28 that seeks to dilute the protections that social media companies in the US enjoy with respect to third-party content on their platforms.

The order argues that social media companies that engage in censorship stop functioning as ‘passive bulletin boards’: they must consequently be treated as ‘content creators’, and be held liable for content on their platforms as such. The shockwaves of the decision soon reached India, with news coverage debating the consequences of Trump’s order for how India regulates internet services and social media companies.

The debate on the responsibilities of online platforms is not new to India, and took centre stage in December 2018 when the Ministry of Electronics and Information Technology (MeitY) published a draft set of guidelines that most online services – ‘intermediaries’ – must follow. The draft rules, which haven’t been notified yet, propose to significantly expand the obligations on intermediaries.

Trump’s executive order, however, comes in the context of content moderation practices by social media platforms, i.e. when platforms censor speech of their own volition, and not because of legal requirements. The legal position of content moderation remains relatively under-discussed in India.

In contrast to commentators who have implicitly assumed that Indian law permits content moderation by social media companies, we believe Indian law fails to adequately account for the content moderation and curation practices of social media companies. There may be adverse consequences for the exercise of freedom of expression in India if this lacuna is not filled soon.

India vs US

A useful starting point for the analysis is to compare how the US and India regulate liability for online services. In the US, Section 230 of the Communications Decency Act provides online services with broad immunity from liability for third-party content that they host or transmit.

There are two critical components to what is generally referred to as Section 230.

First, providers of an ‘interactive computer service’, like your internet service provider or a company like Facebook, will not be treated as publishers or speakers of third-party content. This system has allowed internet speech and the internet economy to flourish, since it allows companies to focus on their service without constantly worrying about what users are transmitting through it.

The second part of Section 230 states that services are allowed to moderate and remove, in ‘good faith’, such third-party content that they may deem offensive or obscene. This allows online services to institute their own community guidelines or content policies.

In India, section 79 of the Information Technology Act is the analogous provision: it grants intermediaries conditional ‘safe harbour’. This means intermediaries, again like Facebook or your internet provider, are exempt from liability for third-party content – like messages or videos posted by ordinary people – provided their functioning meets certain requirements, and they comply with the allied rules, known as Intermediary Guidelines.

The notable and stark difference between Indian law and Section 230 is that India’s IT Act is largely silent on content moderation practices. As Rahul Matthan points out, there is no explicit allowance in Indian law for platforms to take down content based on their own policies, even if such actions are done in good faith.

Safe harbour

One may argue that the absence of an explicit permission does not necessarily mean that any platform engaging in content moderation practices will lose its safe harbour. However, the language of Section 79 and the allied rules may even create room for divesting social media platforms of their safe harbour.

The first such indication lies in the conditions to qualify for safe harbour: intermediaries must not modify the content, must not select the recipients of particular content, and must take information down when it is brought to their notice by governments or courts.

Most of these conditions are an almost verbatim copy of the definition of a ‘mere conduit’ in the EU Directive on E-Commerce, 2000. That definition was meant to encapsulate the functioning of services like infrastructure providers, which transmit content without exerting any real control. Thus, by adopting this definition for all intermediaries, Indian law mostly considers internet services, even social media platforms, to be passive plumbing through which information flows.

It is easy to see how this narrow conception of online services is severely lacking.

Most prominent social media platforms remove or hide content, algorithmically curate news-feeds to make users keep coming back for more, and increasingly add labels to content. If the law is interpreted strictly, these practices may be adjudged to run afoul of the aforementioned conditions that intermediaries need to satisfy in order to qualify for safe harbour.

Platforms or editors?

For instance, it can be argued that social media platforms initiate transmission in some form when they pick and ‘suggest’ relevant third-party content to users. When it comes to newsfeeds, neither the content creator nor the consumer has as much control over how content is disseminated or curated as the platform does. By curating newsfeeds, social media platforms can thus be said to be ‘selecting the receiver’ of transmissions.

The Intermediary Guidelines further complicate matters by specifically laying out what is not to be construed as ‘editing’ under the law. Under rule 3(3), the act of taking down content pursuant to orders under the Act will not be considered as ‘editing’ of said content.

Since the term ‘editing’ has been left undefined beyond the negative qualification, several social media intermediaries may well qualify as editors. They use algorithms that curate content for their users; like traditional news editors, these algorithms use certain ‘values’ to determine what is relevant to their audiences. In other words, one can argue that it is difficult to draw a bright line between editorial and algorithmic acts.

To retain their safe harbour, the counter-argument that social media platforms can rely on is that Rule 3(5) of the Intermediary Guidelines requires intermediaries to inform users that they reserve the right to take down user content that relates to a wide variety of acts, including content that threatens national security, or is “[...] grossly harmful, harassing, blasphemous, [etc.]”.

In practice, however, the content moderation practices of some social media companies may go beyond these categories. Additionally, the rule does not address the legal questions created by these platforms’ curation of news-feeds.

In highlighting how Section 79 treats the practices of social media platforms, we do not intend to argue that these platforms should be held liable for user-generated content. The online spaces created by social media platforms have allowed individuals to express themselves and participate in political organisation and debate.

A level of immunity from liability for intermediaries is therefore critical for the protection of several human rights, especially the right to freedom of speech. This piece only serves to highlight that Section 79 is antiquated and unfit to deal with modern online services. The interpretative dangers in the provision create regulatory uncertainty for organisations operating in India.

Dangers to speech

These dangers may not just be theoretical.

Only last year, Twitter CEO Jack Dorsey was summoned by the Parliamentary Committee on Information Technology to answer accusations of the platform having a bias against ‘right-wing’ accounts. More recently, BJP politician Vinit Goenka encouraged people to file cases against Twitter for promoting separatist content.

Recent interventions from the Supreme Court have imposed proactive filtration and blocking requirements on intermediaries, but these have been limited to reasonable restrictions that may be imposed on free speech under Article 19 of India’s Constitution. Content moderation policies of intermediaries like Twitter and Facebook go well beyond the scope of Article 19 restrictions, and the apex court has not yet addressed this.

The Delhi High Court, in Christian Louboutin v. Nakul Bajaj, has already laid out criteria for when e-commerce intermediaries can stake claim to Section 79 safe harbour protections, based on the active (or passive) nature of their services. While the order came in the context of intellectual property violations, nothing prevents a court from similarly finding that Facebook and Twitter play an ‘active’ role when it comes to content moderation and curation.

These companies may one day find the ‘safe harbour’ rug pulled from under their feet if a court reads Section 79 more strictly. In fact, judicial intervention may not even be required: the threat of such an interpretation may simply be exploited by the government as leverage to get social media platforms to toe its line.

Protection and responsibility

Unfortunately, the amendments to the intermediary guidelines proposed in 2018 do not address the legal position of content moderation either. More recent developments suggest that MeitY may be contemplating amending the IT Act itself. This presents an opportunity for a more comprehensive reworking of the Indian intermediary liability regime than what is possible through delegated legislation like the intermediary rules.

Intermediaries, rather than being treated uniformly, should be classified based on their function and the level of control they exercise over the content they process. For instance, network infrastructure should continue to be treated as ‘mere conduits’ and enjoy broad immunity from liability for user-generated content.

More complex services like search engines and online social media platforms can have differentiated responsibilities based on the extent to which they can contextualise and change content. The law should carve out an explicit permission for platforms to moderate content in good faith. Such an allowance should be accompanied by best practices that these platforms can follow to ensure transparency and accountability to their users.

For a robust and rights-respecting public sphere, India needs to ensure that large social media platforms receive adequate protections and are made more responsible to their users.

Anna Liz Thomas is a law graduate and a policy researcher, currently working with the Centre for Internet and Society. Gurshabad Grover manages research in the freedom of expression and internet governance team at CIS.