Role of Intermediaries in Countering Online Abuse
The Internet can be a hostile space, and protecting users from abuse without curtailing freedom of expression requires a balancing act on the part of online intermediaries.
This piece was published as two blog entries on the NALSAR Law Tech Blog. Part 1 can be accessed here and Part 2 here.
As platforms and services coalesce around user-generated content (UGC) and entrench themselves in the digital publishing universe, they are increasingly taking on the duties and responsibilities of protecting rights, including taking reasonable measures to restrict unlawful speech. Arguments around the role of intermediaries in tackling unlawful content usually center on the issue of regulation: when is it feasible to regulate speech, and how should such regulation best be enforced?
Recently, Twitter found itself at the periphery of such questions when an anonymous user of the platform, @LutyensInsider, began posting slanderous and sexually explicit comments about Swati Chaturvedi, a Delhi-based journalist. The online spat, which began in February last year, culminated in Swati filing an FIR against the anonymous user last week. Within hours of the FIR, the anonymous user deleted the tweets and went silent. Predictably, Twitter users hailed this as a much-needed deterrent to online harassment. Swati’s personal victory is worth celebrating; it is an encouragement for the many women bullied daily on the Internet, where harassment is rampant. However, while Swati might be well within her legal rights to counter slander, the rights and liabilities of private companies in such circumstances are often not as clear-cut.
Should platforms like Twitter take on the mantle of deciding what speech is permissible or not? When and how should the limits on speech be drawn? Does this amount to private censorship? The answers are not easy, and as the recent Grand Chamber judgment of the European Court of Human Rights (ECtHR) in Delfi AS v. Estonia confirms, the role of UGC platforms in balancing user rights is an issue far from settled. In its ruling, the ECtHR reasoned that, because of their role in facilitating expression, requiring online platforms “to take effective measures to limit the dissemination of hate speech and speech inciting violence” was not “private censorship”.
This is problematic because the decision moves the regime away from a framework that grants immunity from liability as long as platforms meet certain criteria and procedures. In other words, the ruling establishes strict liability for intermediaries in relation to manifestly illegal content, even where they have no knowledge of it. The 'obligation' placed on the intermediary does not grant them safe harbour and is not proportionate to the monitoring and blocking capacity it necessitates. Consequently, platforms might be incentivized to err on the side of caution and restrict comments or confine speech, resulting in censorship. The ruling is especially worrying because the standard of care it imposes does not recognize the different roles played by intermediaries in the detection and removal of unlawful content. Further, intermediary liability is its own legal regime and, at the same time, a subset of various legal issues that require an understanding of how scenarios, mediums and technology vary, both globally and in India.
Law and Short of IT
Earlier this year, in a leaked memo, Twitter CEO Dick Costolo took personal responsibility for his platform's chronic failure to deal with harassment and abuse. In Swati's case, Twitter did not intervene or take steps to address the harassment. If it had to, Twitter (India), like all online intermediaries, would be bound by the provisions established under Section 79 of the Information Technology Act and the accompanying Rules. These provisions outline the obligations and conditions that intermediaries must fulfill to claim immunity from liability for third-party content. Under the regime, upon receiving actual knowledge of unlawful information on their platform, the intermediary must comply with the notice and takedown (NTD) procedure for blocking and removal of content.
Private complainants could invoke the NTD procedure, forcing intermediaries to act as adjudicators of an unlawful act, a role they are clearly ill-equipped to perform, especially when the content relates to political speech or alleged defamation or obscenity. The SC judgment in Shreya Singhal addressed this issue by reading down the provision (Section 79), holding that a takedown notice can only be effected if the complainant secures a court order to support her allegation. Further, it held that the scope of restrictions under the mechanism is confined to the specific categories identified under Article 19(2). Effectively, this means Twitter need not take down content in the absence of a court order.
Content Policy as Due Diligence
Another provision, Rule 3(2), prescribes a content policy which, prior to the Shreya Singhal judgment, was a criterion for administering takedowns. This content policy includes an exhaustive list of types of restricted expression, though, worryingly, the terms included in it are not clearly defined and go beyond the reasonable restrictions envisioned under Article 19(2). Terms such as “grossly harmful”, “objectionable”, “harassing”, “disparaging” and “hateful” are not defined anywhere in the Rules; they are subjective and contestable, as alternative interpretations and standards could be offered for the same term. Further, this content policy does not apply to content created by the intermediary itself.
Prior to the SC verdict in Shreya Singhal, actual knowledge could have been interpreted to mean that the intermediary was called upon to exercise its own judgement under sub-rule (4) and restrict the impugned content in order to seek exemption from liability. While the liability that accrued from not complying with takedown requests under the content policy was once clear, this is not the case anymore. By reading down S. 79(3)(b), the court has placed limits both on private censorship by intermediaries and on the invisible censorship of opaque government takedown requests, as both must adhere to the boundaries set by Article 19(2). Following the SC judgment, intermediaries do not have to administer takedowns without a court order, thereby rendering this content policy redundant. As it stands, the content policy is an obligation that intermediaries must fulfill in order to be exempted from liability for UGC, and this due diligence is limited to publishing rules and regulations, terms and conditions or a user agreement informing users of the restrictions on content. The penalties for not publishing this content policy should also be clarified.
Further, having been informed of what is permissible, users agree to comply with the policy outlined simply by signing up to and using these platforms and services. The requirement of publishing a content policy as due diligence is unnecessary, given that mandating such ‘standard’ terms of use negates the differences between types of intermediaries, which attract different kinds of liability. It also places an extraordinary power of censorship in the hands of the intermediary, which could easily stifle freedom of speech online. Such heavy-handed regulation could make it impossible to publish critical views about anything without the risk of being summarily censored.
Twitter may have complied with its duties by publishing the content policy, though the obligation does not seem to be an effective deterrent. Strong safe harbour provisions for intermediaries are a crucial element in the promotion and protection of the right to freedom of expression online. That platforms are absolved of responsibility for UGC so long as they publish a content policy that is vague and subjective is the very reason why India’s IT Rules are, in fact, in urgent need of improvement.
Size Matters
The standards for blocking, reporting and responding to abuse vary across different categories of platforms. For example, it may be easier to counter trolls and abuse on blogs or forums where the owner or an administrator monitors comments and UGC. Usually, platforms outline monitoring and reporting policies and procedures, including the recourse available to victims and the action to be taken against violators. However, these measures are not always effective in curbing abuse, as it is possible for users to create new accounts under different usernames. For example, in Swati’s case the anonymous user behind the @LutyensInsider account changed their handle to @gregoryzackim and @gzackim before deleting all tweets. Here, perhaps the fear of impending criminal charges was enough to silence the anonymous user, which may not always be the case.
Tackling the Trolls
As a general measure against online harassment, most large intermediaries have privacy settings which restrict the audience for user posts and prevent strangers from contacting them. Platforms also publish a monitoring policy outlining the procedures and mechanisms for users to register complaints or report abuse. Often, reporting and blocking mechanisms rely on community standards and on users reporting unlawful content. Last week, Twitter announced a new feature allowing lists of blocked users to be shared between users. An improvement on the existing blocking mechanism and on standard policies defining permissible limits on content, the feature is aimed at making the service safer for people facing similar issues; even so, such efforts have their limitations.
These mechanisms follow a one-size-fits-all policy, and such community-driven efforts do not address differences in opinion and subjectivity. Swati, in defending her actions, stressed the “coarse discourse” prevalent on social media, though, as this article points out, she might herself be judged guilty of using offensive and abusive language. Subjectivity and the many possible interpretations of the same opinion pave the way for many to take offense online. Earlier this month, Nikhil Wagle’s tweets criticising Prime Minister Narendra Modi as a “pervert” were interpreted as “abusive”, “offensive” and “spreading religious disharmony”. While platforms are within their rights to establish policies for dealing with issues faced by users, there is a real danger of them doing so for “political reasons” or based on “popularity” measures, which may chill free speech. When many get behind a particular interpretation of an opinion, lawful speech may also be stifled, as Sreemoyee Kundu found out. A victim of online abuse, she had her account blocked by Facebook owing to multiple reports from a “faceless fanatical mob”. Allowing users to set the standards of permissible speech is an improvement, though it runs the risk of mob justice, and platforms need to be vigilant in applying such standards.
While it may be in the interest of platforms to keep a hands-off approach to community policies, certain kinds of content may necessitate intervention by the intermediary. There has been an increase in private companies modifying their content policies to place reasonable restrictions on certain hateful behaviour in order to protect vulnerable or marginalised voices. Twitter's and Reddit's policy changes addressing revenge porn reflect a growing understanding amongst stakeholders that, in order to promote the free expression of ideas, the recognition and protection of certain rights on the Internet may be necessary. However, any approach to regulating user content must assess the effect of policy decisions on user rights. Google's stand on tackling revenge porn may be laudable, though its decision to push down 'piracy' sites in its search results could be seen to adversely impact the choices that users have. Terms of service implemented subjectively and without transparency can and do lead to private censorship.
The Way Forward
Harassment is damaging because of the feeling of powerlessness it invokes in victims, and online intermediaries represent new forms of power through which users negotiate and manage their online identity. Content restriction policies and practices must address this power imbalance by adopting baseline safeguards and best practices. It is only fair that, based on principles of equality and justice, intermediaries be held responsible for the damage caused to users by the wrongdoing of other users, or when they fail to carry out their operations and services as prescribed by law. However, in its present state, the intermediary liability regime in India is not sufficient to deal with online harassment and needs to evolve into a more nuanced form of governance.
Any liability framework must evolve bearing in mind the slippery slope of overbroad regulation and differing standards of community responsibility. A balanced framework would therefore need to include elements of both targeted regulation and softer forms of governance, as liability regimes must balance fundamental human rights against the interests of private companies. Achieving this balance is often problematic, given that these companies are expected to act as adjudicators and may also be the target of the alleged breach of rights, as in Delfi v. Estonia. Global frameworks such as the Manila Principles can be a way forward in developing effective mechanisms. Content restriction practices should always adopt the least restrictive means available and distinguish between classes of intermediary. They must evolve with regard to the proportionality of the harm, the nature of the content and the impact on affected users, including the proximity of the affected party to the content uploader.
Further, intermediaries and governments should communicate a clear mechanism for the review and appeal of restriction decisions. Content restriction policies should incorporate an effective right to be heard. In exceptional circumstances where this is not possible, a post facto review of the restriction order and its implementation must take place as soon as practicable. Where unlawful content is restricted for a limited duration or within a specific geography, the restriction must not extend beyond those limits, and periodic review should ensure that it remains valid. Regular, systematic review of the rules and guidelines governing intermediary liability will go a long way towards ensuring that such frameworks are not overly burdensome and remain effective.