The Centre for Internet and Society
https://cis-india.org
Comments to the draft amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021
https://cis-india.org/internet-governance/blog/comments-to-draft-amendments-to-the-it-rules-2021
<b>The Centre for Internet & Society (CIS) presented its comments on the draft amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (‘the rules’), which were released for public comment on 6 June 2022.</b>
<p>These comments examine whether the proposed amendments adhere to established principles of constitutional law, intermediary liability and other relevant legal doctrines. We thank the Ministry of Electronics and Information Technology (MEITY) for allowing us this opportunity. Our comments are divided into two parts. In the first part, we reiterate some of our comments on the existing version of the rules, which we believe hold relevance for the proposed amendments as well. In the second part, we provide issue-wise comments on matters that we believe need to be addressed prior to finalising the amendments to the rules.</p>
<hr />
<p>To access the full text of the Comments to the draft amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, <a href="https://cis-india.org/internet-governance/blog/comments-to-draft-amendments-to-it-rules-2021.pdf">click here</a>.</p>
Authors: Anamika Kundu, Digvijay Chaudhary, Divyansha Sehgal, Isha Suri and Torsha Sarkar | Topics: Digital Media, Internet Governance, Intermediary Liability, Information Technology | Blog Entry, published 2022-07-07

Finding Needles in Haystacks - Discussing the Role of Automated Filtering in the New Indian Intermediary Liability Rules
https://cis-india.org/internet-governance/blog/finding-needles-in-haystacks-discussing-the-role-of-automated-filtering-in-the-new-indian-intermediary-liability-rules
<b>On 25 February 2021, the Government of India notified the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. The new Rules broaden the scope of entities that can be considered intermediaries, now including curated-content platforms (such as Netflix) as well as digital news publications. This blogpost analyzes the rule on automated filtering in the context of the growing use of automated content moderation.</b>
<p>This article first <a class="external-link" href="https://www.law.kuleuven.be/citip/blog/finding-needles-in-haystacks/">appeared</a> on the blog of KU Leuven's Centre for IT & IP Law (CiTiP). Cross-posted with permission.</p>
<hr />
<p>Matthew Sag, in his 2018 <a href="https://scholarship.law.nd.edu/cgi/viewcontent.cgi?article=4761&context=ndlr">paper</a> on internet safe harbours, discussed how the internet shifted control over knowledge from the traditional gatekeepers (publishing houses), which used to decide what knowledge could be showcased, to a system in which everybody with access to the internet can showcase their work. A “<em>content creator</em>” today ranges from a legacy media company to any person with a smartphone and an internet connection. Along a similar trajectory, as websites and mobile apps have multiplied along with the functions they serve, the scope of what counts as an internet intermediary has widened all over the world.</p>
<p><strong>Who is an Intermediary?</strong></p>
<p>In India, the definition of “<em>intermediary</em>” is found in Section 2(w) of the <a href="https://www.meity.gov.in/writereaddata/files/itbill2000.pdf">Information Technology (IT) Act, 2000</a>, which provides that an intermediary, <em>“with respect to any particular electronic records, means any person who on behalf of another person receives, stores or transmits that record or provides any service with respect to that record and includes telecom service providers, network service providers, internet service providers, web-hosting service providers, search engines, online payment sites, online-auction sites, online-marketplaces and cyber cafes”</em>. The all-encompassing nature of this definition has allowed the evolving forms of intermediaries to be accommodated under the Act and under the Guidelines published periodically (<a href="https://www.meity.gov.in/writereaddata/files/GSR314E_10511%25281%2529_0.pdf">2011</a>, <a href="https://www.meity.gov.in/writereaddata/files/Draft_Intermediary_Amendment_24122018.pdf">2018</a> and <a href="https://www.meity.gov.in/writereaddata/files/Intermediary_Guidelines_and_Digital_Media_Ethics_Code_Rules-2021.pdf">2021</a>). With more websites and social media companies, and even more content creators online today, there is a need to look at ways in which intermediaries can remove illegal content or content that goes against their community guidelines.</p>
<p>Along with the definition of an intermediary, Section 79 of the IT Act grants internet intermediaries a safe harbour from liability for third-party content, and further empowers the central government to make Rules that act as guidelines for intermediaries to follow. The Intermediary Liability (IL) Rules hence seek to regulate content and lay down safe harbour provisions for intermediaries and internet service providers. To keep up with the changing nature of the internet and internet intermediaries, India relies on the IL Rules to regulate intermediaries and provide a conducive environment for them. Under this provision, India has so far published three versions of the IL Rules: the first came out in <a href="https://www.meity.gov.in/writereaddata/files/GSR314E_10511%25281%2529_0.pdf">2011</a>, followed by draft amendments to the law in <a href="https://www.meity.gov.in/writereaddata/files/Draft_Intermediary_Amendment_24122018.pdf">2018</a>, and finally the latest <a href="https://www.meity.gov.in/writereaddata/files/Intermediary_Guidelines_and_Digital_Media_Ethics_Code_Rules-2021.pdf">2021</a> version, which supersedes the 2011 Rules.</p>
<p><strong>The Growing Use of Automated Content Moderation</strong></p>
<p>Each version of the Rules introduced changes meant to keep them abreast of the changing face of the internet and the changing nature of both content and content creators. The 2018 version of the Rules accordingly showcased a push towards automated content filtering. The text of Rule 3(9) read as follows: “<em>The Intermediary shall deploy technology based automated tools or appropriate mechanisms, with appropriate controls, for proactively identifying and removing or disabling public access to unlawful information or content</em>”.</p>
<p>Under Rule 3(9), intermediaries were required to deploy automated tools or appropriate mechanisms to proactively identify, remove or disable public access to unlawful content. However, neither the 2018 IL Rules nor the parent Act (the IT Act) specified which content could be deemed unlawful. The 2018 Rules also failed to establish the specific responsibilities of intermediaries, relying instead on vague terms like “<em>appropriate mechanisms</em>” and “<em>appropriate controls</em>”. Thus, though the Rules mandated the use of automated tools, neither they nor the IT Act provided clear guidelines on what could be removed.</p>
<p>The absence of clear guidelines, or of a list of removable content, left intermediaries to decide for themselves which content, if not proactively removed, could cost them their immunity. It has been previously documented that the lack of clear guidelines in the 2011 version of the <a href="https://cis-india.org/internet-governance/chilling-effects-on-free-expression-on-internet">Rules</a> led intermediaries to over-comply with takedown notices, often taking down content that did not warrant it. This existing tendency to over-comply, combined with automated filtering, could have resulted in a number of <a href="https://cis-india.org/internet-governance/how-india-censors-the-web-websci#:~:text=One%2520of%2520the%2520primary%2520ways,certain%2520websites%2520for%2520its%2520users.">unwarranted takedowns</a>.</p>
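<p>To make the over-blocking risk concrete, here is a minimal sketch, in Python, of the kind of blunt filter that a vague mandate like Rule 3(9) invites. It is entirely hypothetical: the Rules never specified what a “technology based automated tool” looks like, and the blocklist, posts and function names below are invented for illustration. Because the filter matches strings rather than meaning, a news report about unlawful content is flagged just as readily as the unlawful content itself:</p>
<pre>
# Hypothetical sketch of a naive "automated tool" under the 2018 draft's
# Rule 3(9). The blocklist and sample posts are invented for illustration.

BLOCKLIST = {"leaked exam paper", "pirated movie link"}

def should_remove(post: str) -> bool:
    """Flag a post if it contains any blocklisted phrase, ignoring context."""
    text = post.lower()
    return any(term in text for term in BLOCKLIST)

posts = [
    "Selling a leaked exam paper, DM me",                      # plausibly unlawful
    "Report: police arrest gang selling a leaked exam paper",  # legitimate journalism
]

for post in posts:
    print(should_remove(post), "->", post)
# Both print True: the filter cannot tell reporting apart from the offence,
# so an intermediary afraid of losing safe harbour removes both.
</pre>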
<p>While the 2018 Rules mandated the deployment of automated tools, 2020 (possibly due to pandemic-induced work-from-home safety protocols and global lockdowns) saw major social media companies announce moves towards fully automated content <a href="https://www.medianama.com/2020/03/223-facebook-content-moderation-coronavirus-medianamas-take/">moderation</a>. Though automated content removal seems like the right step considering the <a href="https://www.businessinsider.in/tech/news/facebook-content-moderator-who-quit-reportedly-wrote-a-blistering-letter-citing-stress-induced-insomnia-among-other-trauma/articleshow/82075608.cms">trauma</a> that human moderators have had to go through, the algorithms now used to remove content rely on the parameters, practices and data from earlier removals made by those human moderators. More recently in India, with the emergence of the second wave of COVID-19, the Ministry of Electronics and Information Technology <a href="https://www.thehindu.com/news/national/govt-asks-social-media-platforms-to-remove-100-covid-19-related-posts/article34406733.ece">asked</a> social media platforms to remove “<em>unrelated, old and out of the context images or visuals, communally sensitive posts and misinformation about COVID19 protocols</em>”.</p>
<p><strong>The New IL Rules - A ray of hope?</strong></p>
<p>The 2021 version of the IL Rules takes a more nuanced approach to automated content filtering than the earlier version. Rule 4(4) now requires only “significant social media intermediaries” to use automated tools to identify and take down content pertaining to “child sexual abuse material”, or “depicting rape”, or any information identical to content that has already been removed through a takedown notice. The Rules define a social media intermediary as an “<em>intermediary which primarily or solely enables interaction between two or more users and allows them to create, upload, share, disseminate, modify or access information using its services</em>”. The Rules go a step further to create another category, the significant social media intermediary (SSMI), defined as one “<em>having a number of registered users in India above such threshold as notified by the Central Government</em>”. Hence, which social media intermediaries qualify as significant ones could change at any time.</p>
<p>Along with adding this new threshold (qualifying as a significant social media intermediary), the Rules, in contrast to the 2018 version, also emphasise that such removal must be proportionate to the interests of freedom of speech and expression and the privacy of users. The Rules also call for “<em>appropriate human oversight</em>” as well as periodic review of the tools used for content moderation. By using the phrase “<em>shall endeavour</em>”, the Rules reduce the pressure on intermediaries to set up these mechanisms: the requirement is now on a best-effort basis, as opposed to the word “<em>shall</em>” in the 2018 version, which made it mandatory.</p>
<p>Although the Rules now narrow the instances in which automated content removal can take place, concerns around over-compliance and censorship still loom. One reason for concern is that the Rules still fail to require intermediaries to set up a mechanism for redress or appeal against such removal. Additionally, the provision allowing automated systems to remove content that has previously been taken down is worrying, as the propensity of intermediaries to over-comply and take down content has already been documented; this brings us back to earlier episodes in which social media companies' automated systems removed legitimate news sources. Though the 2021 Rules try to clarify certain provisions related to automated filtering, such as the added safeguards, they also suffer from vague provisions that could cause compliance problems. Terms such as “<em>proportionate</em>” and “<em>having regard to free speech</em>” fail to lay down definitive directions for intermediaries (in this case, SSMIs) to comply with. Additionally, as stated earlier, whether a platform qualifies as an SSMI can change at any time, either with a change in its number of users or with a change in the government-notified threshold. The absence of human intervention during removal, vague guidelines, and the fear of losing safe harbour protection add to the already growing trend of censorship on social media. Proactive filtering through automated means removes content almost <a href="https://www.eff.org/wp/unfiltered-how-youtubes-content-id-discourages-fair-use-and-dictates-what-we-see-online">immediately</a>, which means that certain content creators might not even be able to post their content online. Given India's current trend of new internet users, some of these creators would also be <a href="https://timesofindia.indiatimes.com/business/india-business/for-the-first-time-india-has-more-rural-net-users-than-urban/articleshow/75566025.cms">first-time users</a> of the internet.</p>
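<p>A minimal sketch, again hypothetical, illustrates why the “identical content” idea in Rule 4(4) cuts both ways. Re-upload filters of this kind are often imagined as exact-match systems built on cryptographic fingerprints of previously removed content (the SHA-256 design below is an assumption for illustration, not a description of any platform's actual system). Such a filter re-blocks any bit-for-bit copy regardless of who posts it or why, yet is defeated by a single changed byte:</p>
<pre>
# Hypothetical exact-match re-upload filter for Rule 4(4)'s "identical
# information" clause, using SHA-256 fingerprints of removed content.
import hashlib

def fingerprint(content: bytes) -> str:
    return hashlib.sha256(content).hexdigest()

# Fingerprints of content already taken down (invented example).
removed = {fingerprint(b"offending image bytes")}

def is_reupload(content: bytes) -> bool:
    return fingerprint(content) in removed

print(is_reupload(b"offending image bytes"))   # True: any exact copy is blocked,
                                               # even one shared for news reporting
print(is_reupload(b"offending image bytes."))  # False: one extra byte evades the filter
</pre>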
<p><strong>Conclusion</strong></p>
<p>The need for automated removal of content is understandable, given not only the sheer volume of content but also the nightmare stories of the toll it takes on human content moderators, who otherwise have to sit through hours of disturbing content. Though the Indian intermediary liability guidelines have improved on earlier versions by moving away from mandated proactive filtering, there still needs to be consideration of how these technologies are used, and the law should register the shift in who counts as a content creator. There need to be avenues of recourse against unfair removal of content, and a means of getting an explanation of why the content was removed, via notices to the user. In the Indian context, these notices should also be in Indian languages, so that people are able to understand them.</p>
<p>In the absence of clearer guidelines, the peril of over-censorship by intermediaries seeking to stay out of trouble could further stifle not just freedom of speech but also access to information. In addition, the fear of content being taken down, or even of potential prosecution, could drive people to self-censorship, preventing them from exercising their fundamental rights to freedom of speech and expression as guaranteed by the Indian Constitution. We hope that the next version of the Rules takes a more nuanced approach to automated content removal and lays down adequate, specific safeguards to ensure a conducive environment for both intermediaries and content creators.</p>
Authors: Shweta Mohandas and Torsha Sarkar | Topics: Internet Governance, Intermediary Liability, Artificial Intelligence | Blog Entry, published 2021-08-03

The Ministry And The Trace: Subverting End-To-End Encryption
https://cis-india.org/internet-governance/blog/the-ministry-and-the-trace-subverting-end-to-end-encryption
<b>A legal and technical analysis of the 'traceability' rule and its impact on messaging privacy.</b>
<p>The paper was published in the <a class="external-link" href="http://nujslawreview.org/2021/07/09/the-ministry-and-the-trace-subverting-end-to-end-encryption/">NUJS Law Review Volume 14 Issue 2 (2021)</a>.</p>
<hr />
<h2>Abstract</h2>
<p>End-to-end encrypted messaging allows individuals to hold confidential conversations free from the interference of states and private corporations. To aid surveillance and prosecution of crimes, the Indian Government has mandated that online messaging providers enable identification of the originators of messages that traverse their platforms. This paper establishes how the different ways in which this ‘traceability’ mandate can be implemented (dropping end-to-end encryption, hashing messages, and attaching originator information to messages) come with serious costs to usability, security and privacy. Through a legal and constitutional analysis, we contend that traceability exceeds the scope of delegated legislation under the Information Technology Act, and is at odds with the fundamental right to privacy.</p>
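<p>As a rough illustration of one of the designs the paper analyses, here is a minimal, hypothetical sketch of the “hashing messages” approach to traceability: the provider keeps a table mapping a hash of each message to its first sender, and looks up the plaintext supplied in a legal order. The names and data below are invented; the point is only that, even without weakening the cipher itself, the escrowed table lets anyone who holds it test guesses about what users said, which is one reason such designs carry the privacy costs the paper describes:</p>
<pre>
# Hypothetical hash-escrow traceability scheme (illustration only; not any
# provider's actual design). Maps SHA-256(message) -> first sender seen.
import hashlib

hash_escrow = {}

def record(message: str, sender: str) -> None:
    digest = hashlib.sha256(message.encode()).hexdigest()
    hash_escrow.setdefault(digest, sender)   # remember only the first sender

def trace(plaintext_from_order: str):
    """Given plaintext from a court order, return the claimed 'originator'."""
    digest = hashlib.sha256(plaintext_from_order.encode()).hexdigest()
    return hash_escrow.get(digest)

record("protest at the town square, 5 pm", "user-42")   # original sender
record("protest at the town square, 5 pm", "user-99")   # a mere forwarder
print(trace("protest at the town square, 5 pm"))        # -> 'user-42'
# Anyone holding the table can also confirm guesses of message content offline.
</pre>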
<p>Click here to read the <a class="external-link" href="http://nujslawreview.org/2021/07/09/the-ministry-and-the-trace-subverting-end-to-end-encryption/">full paper</a>.</p>
Authors: Gurshabad Grover, Tanaya Rajwade and Divyank Katira | Topics: Cryptography, Intermediary Liability, Constitutional Law, Internet Governance, Messaging, Encryption Policy | Blog Entry, published 2021-07-12

Right to Exclusion, Government Spaces, and Speech
https://cis-india.org/internet-governance/blog/right-to-exclusion-government-spaces-and-speech
<b>The conclusion of the litigation surrounding Trump blocking his critics on Twitter brings to the forefront two less-discussed aspects of intermediary liability: a) whether social media platforms could be compelled to ‘carry’ speech under any established legal principles, thereby limiting their right to exclude users or speech, and b) whether users have a constitutional right to access the social media spaces of elected officials. This essay analyzes these issues under American law, and draws parallels for India in light of the ongoing litigation around the suspension of advocate Sanjay Hegde’s Twitter account.</b>
<p>This article first appeared on the Indian Journal of Law and Technology (IJLT) blog, and can be accessed <a class="external-link" href="https://www.ijlt.in/post/right-to-exclusion-government-controlled-spaces-and-speech">here</a>. Cross-posted with permission. </p>
<hr />
<h2>Introduction</h2>
<p>On April 8, the Supreme Court of the United States (SCOTUS) vacated the judgment of the US Court of Appeals for the Second Circuit in <a href="https://int.nyt.com/data/documenthelper/1365-trump-twitter-second-circuit-r/c0f4e0701b087dab9b43/optimized/full.pdf%23page=1"><em>Knight First Amendment Institute v Trump</em></a>. In that case, the Court of Appeals had precluded Donald Trump, then-POTUS, from blocking his critics from his Twitter account, on the ground that such action amounted to an erosion of his critics' constitutional rights. The Court of Appeals had held that his use of @realDonaldTrump in his official capacity had transformed the nature of the account from private to public, and that blocking users he disagreed with therefore amounted to viewpoint discrimination, something incompatible with the First Amendment.</p>
<p>The SCOTUS <a href="https://www.supremecourt.gov/opinions/20pdf/20-197_5ie6.pdf">ordered</a> the case to be dismissed as moot, on account of Trump no longer being in office. Justice Clarence Thomas issued a ten-page concurrence that went into additional depth on the nature of social media platforms and user rights. It must be noted that the concurrence does not hold any direct precedential weight, since Justice Thomas was not joined by any of his colleagues on the bench. However, given that similar questions of public import are currently being deliberated in the ongoing <em>Sanjay Hegde</em> <a href="https://www.barandbench.com/news/litigation/delhi-high-court-sanjay-hegde-challenge-suspension-twitter-account-hearing-july-8">litigation</a> in the Delhi High Court, Justice Thomas' concurrence might hold some persuasive weight in India. While the facts of these litigations might be starkly different, both are nevertheless characterized by important questions about applying constitutional doctrines to private parties like Twitter and about the supposedly ‘public’ nature of social media platforms.</p>
<p>In this essay, we consider the legal questions raised in the opinion as possible learnings for India. In the first part, we analyze the key points raised by Justice Thomas vis-a-vis the American legal position on intermediary liability and freedom of speech. In the second part, we apply these deliberations to the <em>Sanjay Hegde</em> litigation, as a case study and a roadmap for the legal jurisprudence to be developed.</p>
<h2>A flawed analogy</h2>
<p>At the outset, let us briefly refresh the timeline of Trump's tryst with Twitter and the history of this litigation: the Court of Appeals decision was <a href="https://int.nyt.com/data/documenthelper/1365-trump-twitter-second-circuit-r/c0f4e0701b087dab9b43/optimized/full.pdf%23page=1">issued</a> in 2019, when Trump was still in office. After the November 2020 presidential election, in which he was voted out, his supporters <a href="https://indianexpress.com/article/explained/us-capitol-hill-siege-explained-7136632/">broke</a> into Capitol Hill. Much of the blame for the attack was pinned on Trump's use of social media channels (including Twitter) to instigate the violence, and following this, Twitter <a href="https://blog.twitter.com/en_us/topics/company/2020/suspension">suspended</a> his account permanently.</p>
<p>It is this final fact that seized Justice Thomas' reasoning. He noted that the power of a private party like Twitter to do away with Trump's account altogether was at odds with the Court of Appeals' earlier finding about the public nature of the account. He deployed a hotel analogy to justify this: government officials renting a hotel room for a public hearing on regulation could not kick out a dissenter, but if the same officials gathered informally in the hotel lounge, they would be within their rights to ask the hotel to eject a heckler. The difference between the two situations is that <em>“the government controls the space in the first scenario, the hotel, in the latter.”</em> He noted that Twitter's conduct was similar to the second situation, where it “<em>control(s) the avenues for speech</em>”. Accordingly, he dismissed the idea that the original respondents (the users whose accounts were blocked) had any First Amendment claims against Trump's initial blocking action, since ultimate control of the ‘avenue’ lay with Twitter, not Trump.</p>
<p>On the facts of the case, however, this analogy was not justified. The Court of Appeals had not concerned itself with the question of private ‘control’ of entire social media spaces, and given the timeline of the litigation, it was impossible for it to pre-empt such considerations within the judgment. In fact, the only takeaway from the original decision had been that an elected representative's utilization of his social media account for official purposes transformed <em>only that particular space</em> into a public forum where constitutional rights would find applicability. In delving into questions of ‘control’ and ‘avenues of speech’, issues that had previously been unexplored, Justice Thomas conflated a rather specific point with a much bigger, general conundrum. Further deliberations in the concurrence are accordingly built upon this flawed premise.</p>
<h2>Right to exclusion (and must-carry claims)</h2>
<p>From here, Justice Thomas identified the problem to be “<em>private, concentrated control over online content and platforms available to the public</em>”, and brought forth two alternative regulatory systems — common carrier and public accommodation — to argue for ‘equal access’ to social media spaces. He posited that successful application of either of the two analogies would effectively restrict a social media platform's right to exclude its users, and “<em>an answer may arise for dissatisfied platform users who would appreciate not being blocked</em>”. Essentially, this would mean that platforms would be obligated to carry <em>all</em> forms of (presumably) legal speech, and users would be entitled to sue platforms when they feel their content has been unfairly taken down, a phenomenon Daphne Keller <a href="http://cyberlaw.stanford.edu/blog/2018/09/why-dc-pundits-must-carry-claims-are-relevant-global-censorship">describes</a> as ‘must-carry claims’.</p>
<p>Again, this is a strange direction for the argument to take, since the original facts of the case were not about ‘<em>dissatisfied platform users</em>’, but about an elected representative's account being used to disseminate official information. Beyond the initial deliberation on ‘private’ control, Justice Thomas did not seem interested in exploring this original legal position, and instead emphasized analogizing social media platforms in order to enforce ‘equal access’, finally arriving at a position that would be legally untenable in the USA.</p>
<p>The American law on intermediary liability, as embodied in Section 230 of the Communications Decency Act (CDA), has two key components: first, intermediaries are <a href="https://www.eff.org/issues/cda230">protected</a> from liability for content posted by their users, under a legal model <a href="https://www.article19.org/wp-content/uploads/2018/02/Intermediaries_ENGLISH.pdf">termed</a> ‘broad immunity’; and second, an intermediary does not stand to lose its immunity if it chooses to moderate and remove speech it finds objectionable, popularly <a href="https://intpolicydigest.org/section-230-how-it-actually-works-what-might-change-and-how-that-could-affect-you/">known</a> as the Good Samaritan protection. It is the combined effect of these two components that allows platforms to take calls on what to remove and what to keep, translating into a ‘right to exclusion’. Legally compelling them to carry speech, under the garb of ‘access’, would therefore strike at the heart of the protection granted by the CDA.</p>
<h2>Learnings for India</h2>
<p>In his petition to the Delhi High Court, Senior Advocate Sanjay Hegde contended that the suspension of his Twitter account for sharing anti-authoritarian imagery was arbitrary, and that:</p>
<ol style="list-style-type: lower-alpha;"><li>Twitter was carrying out a public function and would therefore be amenable to writ jurisdiction under Article 226 of the Indian Constitution;</li><li>The suspension of his account amounted to a violation of his right to freedom of speech and expression under Article 19(1)(a) and his rights to assembly and association under Articles 19(1)(b) and 19(1)(c); and</li><li>The government has a positive obligation to ensure that any censorship on social media platforms is done in accordance with Article 19(2).</li></ol>
<p>The first two prongs of the original petition are perhaps easily disputed: as previous <a href="https://indconlawphil.wordpress.com/2020/01/28/guest-post-social-media-public-forums-and-the-freedom-of-speech-ii/">commentary</a> has pointed out, existing Indian constitutional jurisprudence on ‘public function’ does not implicate Twitter, and accordingly it would be difficult to make out a case that account suspensions, no matter how arbitrary, would amount to a violation of the user's fundamental rights. It is the third contention that requires some additional insight in the context of our previous discussion.</p>
<h3>Does the Indian legal system support a right to exclusion?</h3>
<p>Suing Twitter to reinstate a suspended account, on the ground that the suspension was arbitrary and illegal, is in essence a request to limit Twitter's right to exclude its users. The petition serves as an example of a must-carry claim in the Indian context, and vindicates Justice Thomas' (misplaced) defence of ‘<em>dissatisfied platform users</em>’. Legally, such claims perhaps have a better chance of succeeding here, since the expansive protection granted to intermediaries via Section 230 of the CDA is noticeably absent in India. Instead, intermediaries are bound by conditional immunity, where availment of a ‘safe harbour’, i.e., exemption from liability, is contingent on fulfilment of statutory conditions made under <a href="https://indiankanoon.org/doc/844026/">section 79</a> of the Information Technology (IT) Act and the rules made thereunder. Interestingly, in his opinion, Justice Thomas briefly visited a situation in which the immunity under Section 230 would be made conditional: to gain Good Samaritan protection, platforms might be induced to satisfy specific conditions, including ‘nondiscrimination’. This is controversial (and, as commentators have noted, <a href="https://www.lawfareblog.com/justice-thomas-gives-congress-advice-social-media-regulation">wrong</a>), since it has the potential to whittle down the US ‘broad immunity’ model of intermediary liability to a system resembling the Indian one.</p>
<p>It is worth noting that in the newly issued Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, the proviso to Rule 3(1)(d) allows for “<em>the removal or disabling of access to any information, data or communication link [...] under clause (b) on a voluntary basis, or on the basis of grievances received under sub-rule (2) [...]</em>” without dilution of statutory immunity. This does provide intermediaries a right to exclude, albeit a limited one, since its scope is restricted to content removed under the operation of specific sub-clauses within the rules, as opposed to Section 230, which is couched in more general terms. Of course, none of this precludes the government from further prescribing obligations similar to those prayed for in the petition.</p>
<p>On the other hand, it is a difficult proposition to support that Twitter's right to exclusion should be circumscribed by the Constitution, as prayed for. In the petition, this argument is built on the judgment in <a href="https://indiankanoon.org/doc/110813550/"><em>Shreya Singhal v Union of India</em></a>, where it was held that takedowns under section 79 are to be carried out only on receipt of a court order or a government notification, and that the scope of such an order would be restricted to Article 19(2). This, in the petitioner's submission, meant that “<em>any suo-motu takedown of material by intermediaries must conform to Article 19(2)</em>”.</p>
<p>To understand why this argument does not work, it is important to consider the context in which the <em>Shreya Singhal</em> judgment was issued. Previously, intermediary liability was governed by the Information Technology (Intermediaries Guidelines) Rules, 2011, issued under section 79 of the IT Act. Rule 3(4) made provisions for sending takedown orders to the intermediary, and the prerogative to send such orders lay with ‘<em>an affected person</em>’. On receipt of these orders, the intermediary was bound to remove content, and neither the intermediary nor the user whose content was being censored had the opportunity to dispute the takedown.</p>
<p>As a result, the potential for misuse was wide open. Rishabh Dara's <a href="https://cis-india.org/internet-governance/intermediary-liability-in-india.pdf">research</a> provided empirical evidence of this: intermediaries were found to act on flawed takedown orders out of apprehension of being sanctioned under the law, essentially chilling free expression online. The <em>Shreya Singhal</em> judgment, in essence, reined in this misuse by stating that an intermediary is legally obliged to act <em>only when</em> a takedown order is sent by the government or a court. The intent of this was, in the court's words: “<em>it would be very difficult for intermediaries [...] to act when millions of requests are made and the intermediary is then to judge as to which of such requests are legitimate and which are not.</em>”</p>
<p>In light of this, if Hegde's petition succeeds, intermediaries would be obligated to subsume the entirety of Article 19(2) jurisprudence into their decision-making, interpret and apply it perfectly, and be open to petitions from users when they fail to do so. This might be a startling undoing of the court's original intent in <em>Shreya Singhal</em>. Such a reading also means limiting an intermediary's prerogative to remove speech that may not necessarily fall within the scope of Article 19(2) but is still systematically problematic, including unsolicited commercial communications. Further, most platforms today are dealing with an unprecedented spread and consumption of harmful, misleading information. By limiting their right to exclude speech in this manner, we might be <a href="https://www.hoover.org/sites/default/files/research/docs/who-do-you-sue-state-and-platform-hybrid-power-over-online-speech_0.pdf">exacerbating</a> this problem.</p>
<h3>Government-controlled spaces on social media platforms</h3>
<p>On the other hand, the original finding of the Court of Appeals, regarding the public nature of an elected representative's social media account and the First Amendment right of the people to access such an account, might yet prove instructive for India. While the primary SCOTUS order erases the precedential weight of the original case, similar judgments have been issued by other courts in the USA, including by the <a href="https://globalfreedomofexpression.columbia.edu/cases/davison-v-randall/">Fourth Circuit</a> and as a result of a <a href="https://knightcolumbia.org/content/texas-attorney-general-unblocks-twitter-critics-in-knight-institute-v-paxton">lawsuit</a> against the Texas Attorney General.</p>
<p>A similar situation can be envisaged in India as well. The Supreme Court has <a href="https://indiankanoon.org/doc/591481/">repeatedly</a> <a href="https://indiankanoon.org/doc/27775458/">held</a> that Article 19(1)(a) encompasses not just the right to disseminate information but also the right to <em>receive</em> information, including <a href="https://indiankanoon.org/doc/438670/">receiving</a> information on matters of public concern. Additionally, in <a href="https://indiankanoon.org/doc/539407/"><em>Secretary, Ministry of Information and Broadcasting v Cricket Association of Bengal</em></a>, the Court held that the right of dissemination includes the right of communication through any media: print, electronic or audio-visual. If we assume, then, that government-controlled spaces on social media platforms, used in the dissemination of official functions, are ‘public spaces’, the government's denial of public access to such spaces can be construed as a violation of Article 19(1)(a).</p>
<h2>Conclusion</h2>
<p>As indicated earlier, despite the facts of the two litigations being different, the legal questions embodied within them converge startlingly, inasmuch as both are examples of the growing discontent around the power wielded by social media platforms, and of the flawed attempts at fixing it.</p>
<p>While the above discussion might throw some light on the relationship between an individual, the state and social media platforms, many questions remain unanswered. For instance, once we establish that users have a fundamental right to access certain spaces within a social media platform, does the platform have a right to remove that space altogether? If it does so, can a constitutional remedy lie against the platform? Initial <a href="https://indconlawphil.wordpress.com/2018/07/01/guest-post-social-media-public-forums-and-the-freedom-of-speech/">commentary</a> on the Court of Appeals' decision had contended that the takeaway from that judgment was that constitutional norms have primacy over a platform's own norms of governance. In that light, would the platform be constitutionally obligated <em>not</em> to suspend a government account, even if the content on such an account continues to be harmful, in violation of its own moderation standards?</p>
<p>This is an incredibly tricky dimension of the law, made trickier still by the dynamic nature of the platforms, the intense political interests permeating the need for governance, and the impact on users of a flawed solution. Continuous engagement, scholarship, and an emphasis on a human rights-respecting framework underpinning the regulatory system are the only ways forward.</p>
<hr />
<p>The author would like to thank Gurshabad Grover and Arindrajit Basu for reviewing this piece. </p>
Author: Torsha Sarkar | Topics: Freedom of Speech and Expression, Intermediary Liability, Information Technology | Blog Entry, published 2021-07-02

Submission to the Facebook Oversight Board in Case 2021-008-FB-FBR: Brazil, Health Misinformation and Lockdowns
https://cis-india.org/internet-governance/blog/submission-to-the-facebook-oversight-board-in-case-2021-008-fb-fbr-brazil-health-misinformation-and-lockdowns
<b>In this note, we answer questions set out by the Board, pursuant to case 2021-008-FB-FBR, which concerned a post made by a Brazilian sub-national health official, and raised questions on health misinformation and enforcement of Facebook's community standards. </b>
<h1>Background</h1>
<p dir="ltr">The <a href="https://about.fb.com/news/tag/oversight-board/">Oversight Board</a> is an expert body created to exercise oversight over Facebook’s content moderation decisions and enforcement of community guidelines. It is entirely independent from Facebook in its funding and administration and provides decisions on questions of policy as well as individual cases. It can also make recommendations on Facebook’s content policies. Its decisions are binding on Facebook, unless implementing them could violate the law. Accordingly, Facebook <a href="https://transparency.fb.com/oversight/oversight-board-cases/">implements</a> these decisions across identical content with parallel context, when it is technically and operationally possible to do so. </p>
<p dir="ltr">In June 2021, the Board made an <a href="https://oversightboard.com/news/170403765029629-announcement-of-case-2021-008-fb-fbr/">announcement</a> soliciting public comments on case 2021-008-FB-FBR, concerning a Brazilian state level medical council’s post questioning the effectiveness of lockdowns during the COVID-19 pandemic. Specifically, the post noted that lockdowns (i) are ineffective; (ii) lead to an increase in mental disorders, alcohol abuse, drug abuse, economic damage etc.; (iii) are against fundamental rights under the Brazilian Constitution; and (iv) are condemned by the World Health Organisation (“WHO”). These assertions were backed up by two statements (i) an alleged quote by Dr. Nabarro (WHO) stating that “the lockdown does not save lives and makes poor people much poorer”; and (ii) an example of how the Brazilian state of Amazonas had an increase in deaths and hospital admissions after lockdown. Ultimately, the post concluded that effective COVID-19 preventive measures include education campaigns about hygiene measures, use of masks, social distancing, vaccination and extensive monitoring by the government — but never the decision to adopt lockdowns. The post was viewed around 32,000 times and shared over 270 times. It was not reported by anyone. </p>
<p>Facebook did not take any action against the post, since it took the view that the post did not violate its community standards. Moreover, the WHO has not advised Facebook to remove claims against lockdowns. In this scenario, Facebook referred the case to the Oversight Board, citing its public importance.</p>
<p dir="ltr">In its announcement, the Board sought answers on the following points: </p>
<ol><li style="list-style-type: decimal;" dir="ltr">
<p dir="ltr">Whether Facebook’s decision to take no action against the content was consistent with its Community Standards and other policies, including the Misinformation and Harm policy (which sits within the rules on <a href="https://www.facebook.com/communitystandards/credible_violence">Violence and Incitement</a>). </p>
</li><li style="list-style-type: decimal;" dir="ltr">
<p dir="ltr">Whether Facebook’s decision to take no action is consistent with the company’s stated values and human rights commitments. </p>
</li><li style="list-style-type: decimal;" dir="ltr">
<p dir="ltr">Whether, in this case, Facebook should have considered alternative enforcement measures to removing the content (e.g., the <a href="https://www.facebook.com/communitystandards/false_news">False News</a> Community Standard places an emphasis on “reduce” and “inform,” including: labelling, downranking, providing additional context etc.), and what principles should inform the application of these measures. </p>
</li><li style="list-style-type: decimal;" dir="ltr">
<p dir="ltr">How Facebook should treat content posted by the official accounts of national or sub-national level public health authorities, including where it may diverge from official guidance from international health organizations. </p>
</li><li style="list-style-type: decimal;" dir="ltr">
<p dir="ltr">Insights on the post’s claims and their potential impact in the context of Brazil, including on national efforts to prevent the spread of COVID-19. </p>
</li><li style="list-style-type: decimal;" dir="ltr">
<p dir="ltr">Whether Facebook should create a new Community Standard on health misinformation, as recommended by the Oversight Board in case decision <a href="https://oversightboard.com/decision/FB-XWJQBU9A/">2020-006-FB-FBR</a>.</p>
</li></ol>
<h1>Submission to the Board</h1>
<p>Facebook's decision to take no action against the post is consistent with its (i) <a href="https://www.facebook.com/communitystandards/credible_violence">Violence and Incitement</a> community standard, read with the <a href="https://www.facebook.com/help/230764881494641">COVID-19 Policy Updates and Protections</a>; and (ii) <a href="https://www.facebook.com/communitystandards/false_news">False News</a> community standard. Facebook's <a href="https://about.fb.com/news/2018/08/hard-questions-free-expression/">website</a>, as well as all of the Board's <a href="https://oversightboard.com/decision/FB-6YHRXHZR/">past</a> <a href="https://oversightboard.com/decision/FB-QBJDASCV/">decisions</a>, refer to the <a href="https://www2.ohchr.org/english/bodies/hrc/docs/gc34.pdf">three-pronged test</a> of legality, legitimate aim, and necessity and proportionality, drawn from International Covenant on Civil and Political Rights (ICCPR) jurisprudence, in determining violations of Facebook's community standards. Facebook must apply the same principles to guide the use of its enforcement actions too, keeping in mind the context, intent, tone and impact of the speech.</p>
<p>First, none of Facebook's aforementioned rules contains an explicit prohibition on content questioning lockdown effectiveness. There is nothing to indicate that “misinformation”, which is undefined, includes within its scope information about the effectiveness of lockdowns. The World Health Organisation has also not advised against such posts. Applying the principle of legality, no person could reasonably foresee that such content is prohibited. Accordingly, Facebook's community standards have not been violated.</p>
<p>Second, the post does not meet the threshold of causing “imminent” harm stipulated in the community standards. Case decision <a href="https://oversightboard.com/decision/FB-XWJQBU9A/">2020-006-FB-FBR</a> notes that an assessment of “imminence” is made with reference to factors like context, speaker credibility and language. Here, the post's language and tone, including its quoting of experts and case studies, indicate that its intent is to encourage informed, scientific debate on lockdown effectiveness.</p>
<p>Third, Facebook's False News community standard does not contain any explicit prohibitions, so there is no question of its violation. Any decision to the contrary may go against the standard's stated policy rationale of not stifling public discourse, and create a chilling effect on posts questioning lockdown efficacy. This would set a problematic precedent that Facebook would be mandated to implement.</p>
<p>Presently, Facebook cannot remove the post, since no community standards have been violated. Facebook must not reduce the post's circulation either, since this may stifle public discussion around lockdown effectiveness. Further, removal would result in a violation of the user's right to freedom of opinion and expression, as guaranteed by the Universal Declaration of Human Rights (UDHR) and the ICCPR, which are in turn part of Facebook's Corporate Human Rights Policy.</p>
<p>Instead, Facebook can provide additional context along with the post through its “<a href="https://about.fb.com/news/2018/04/inside-feed-article-context/">related articles</a>” feature, by showing fact-checked articles discussing the benefits of lockdowns. This approach is the most beneficial since (i) it is less restrictive than reducing the circulation of the post; and (ii) it balances interests better than taking no action at all, by allowing people to be informed about both sides of the lockdown debate so that they can make an informed assessment.</p>
<p>Further, Facebook's treatment of content posted by official accounts of national or sub-national health authorities should be circumscribed by its updated <a href="https://transparency.fb.com/features/approach-to-newsworthy-content/">Newsworthy Content Policy</a> and the Board's decision in <a href="https://oversightboard.com/decision/FB-691QAMHJ/">2021-001-FB-FBR</a>, which adopted the <a href="https://www.ohchr.org/en/issues/freedomopinion/articles19-20/pages/index.aspx">Rabat Plan of Action</a> to determine whether a restriction on freedom of expression is required to prevent incitement. The Rabat Plan of Action proposes a six-prong test that considers: a) the social and political context, b) the status of the speaker, c) intent to incite the audience against a target group, d) the content and form of the speech, e) the extent of its dissemination, and f) the likelihood of harm, including imminence. Apart from taking these factors into consideration, Facebook must <a href="https://transparency.fb.com/features/approach-to-newsworthy-content/">perform</a> a balancing test to determine whether the public interest in the information in the post outweighs the risk of harm.</p>
<p>In its decision in <a href="https://oversightboard.com/decision/FB-XWJQBU9A/">2020-006-FB-FBR</a>, the Board recommended that Facebook: a) set out a clear and accessible Community Standard on health misinformation; b) consolidate and clarify the existing rules in one place (including by defining key terms such as misinformation); and c) provide "detailed hypotheticals that illustrate the nuances of interpretation and application of [these] rules" to give further clarity to users. Following this, Facebook <a href="https://assets.documentcloud.org/documents/20491921/covid-19-response-full.pdf">notified</a> its implementation measures, fully implementing these recommendations and thereby bringing itself into compliance.</p>
<p>Finally, Brazil is one of the <a href="https://www.bbc.com/news/world-51235105">worst affected</a> countries in the pandemic. It has also been <a href="https://www.ft.com/content/ea62950e-89c0-4b8b-b458-05c90a55b81f">struggling</a> to combat the spread of fake news during the pandemic. President Bolsonaro has been <a href="https://www.hrw.org/news/2021/01/28/brazil-crackdown-critics-covid-19-response">criticised</a> for <a href="https://www.theguardian.com/commentisfree/2020/feb/07/democracy-and-freedom-of-expression-are-under-threat-in-brazil">curbing free speech</a> by using a dictatorship-era <a href="http://www.iconnectblog.com/2021/02/undemocratic-legislation-to-undermine-freedom-of-speech-in-brazil/">national security law</a>, and questioned on his handling of the pandemic, including his own controversial <a href="https://www.bbc.com/news/world-latin-america-56479614">statements</a> questioning lockdown effectiveness. In such a scenario, the post may be perceived in a political light rather than as an attempt at scientific discussion. However, it is unlikely that the post will lead to any knee-jerk reactions, since people are already familiar with the lockdown debate, on which much has already been said and done. A post like this, which merely reiterates one side of an ongoing debate, is not likely to cause people to take any action in violation of lockdown.</p>
<p dir="ltr">For detailed explanation on these questions, please see <a class="external-link" href="https://cis-india.org/internet-governance/facebook-oversight-board-submission-brazil">here</a>.</p>
Authors: Tanvi Apte and Torsha Sarkar | Topics: Internet Freedom, Misinformation, Intermediary Liability, Information Technology | Blog Entry, published 2021-07-01

On the legality and constitutionality of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021
https://cis-india.org/internet-governance/blog/on-the-legality-and-constitutionality-of-the-information-technology-intermediary-guidelines-and-digital-media-ethics-code-rules-2021
<b>This note examines the legality and constitutionality of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. The analysis is consistent with previous work carried out by CIS on issues of intermediary liability and freedom of expression. </b>
<p><span id="docs-internal-guid-6127737f-7fff-b2eb-1b4a-ff9009a1050f"></span></p>
<p dir="ltr">On 25 February 2021, the Ministry of Electronics and Information Technology (Meity) notified the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (hereinafter, ‘the rules’). In this note, we examine whether the rules meet the tests of constitutionality under Indian jurisprudence, whether they are consistent with the parent Act, and discuss potential benefits and harms that may arise from the rules as they are currently framed. Further, we make some recommendations to amend the rules so that they stay in constitutional bounds, and are consistent with a human rights based approach to content regulation. Please note that we cover some of the issues that CIS has already highlighted in comments on previous versions of the rules.</p>
<p dir="ltr"> </p>
<p dir="ltr">The note can be downloaded <a class="external-link" href="https://cis-india.org/internet-governance/legality-constitutionality-il-rules-digital-media-2021">here</a>.</p>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/blog/on-the-legality-and-constitutionality-of-the-information-technology-intermediary-guidelines-and-digital-media-ethics-code-rules-2021'>https://cis-india.org/internet-governance/blog/on-the-legality-and-constitutionality-of-the-information-technology-intermediary-guidelines-and-digital-media-ethics-code-rules-2021</a>
</p>
No publisherTorsha Sarkar, Gurshabad Grover, Raghav Ahooja, Pallavi Bedi and Divyank KatiraFreedom of Speech and ExpressionInternet GovernanceIntermediary LiabilityInternet FreedomInformation Technology2021-06-21T11:52:39ZBlog EntryNew intermediary guidelines: The good and the bad
https://cis-india.org/internet-governance/blog/new-intermediary-guidelines-the-good-and-the-bad
<b>In pursuance of the government releasing the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, this blogpost offers a quick rundown of some of the changes brought about by the Rules, and how they line up with existing best practices in content moderation, among others. </b>
<p>This article originally appeared in the Down to Earth <a class="external-link" href="https://www.downtoearth.org.in/blog/governance/new-intermediary-guidelines-the-good-and-the-bad-75693">magazine</a>. Reposted with permission.</p>
<p>-------</p>
<p>The Government of India notified the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. These rules operate in supersession of the existing intermediary liability rules under the Information Technology (IT) Act, which were made back in 2011.</p>
<p>These IL rules would have a significant impact on our relationships with internet ‘intermediaries’, i.e. gatekeepers and gateways to the internet, including social media platforms and communication and messaging channels.</p>
<p>The rules also make a bid to include entities that have not traditionally been considered ‘intermediaries’ within the law, including curated-content platforms such as Netflix and Amazon Prime as well as digital news publications.</p>
<p>These rules are a significant step up from the draft version of the amendments floated by the Union government two years ago; in this period, the relationship between governments around the world and major intermediaries has changed significantly.</p>
<p>The insistence of these entities in the past that they are not ‘arbiters of truth’, for instance, has not always held water in their own decision-making.</p>
<p>Both Twitter and Facebook, for instance, have locked the former United States president Donald Trump out of their platforms. Twitter has also resisted fully complying with government censorship requests in India, spilling into an interesting policy tussle between the two entities. It is in the context of these changes, therefore, that we must consider the new rules.</p>
<p><strong>What changed for the good?</strong></p>
<p>One of the immediate standouts of these rules is the more granular way in which they aim to approach the problem of intermediary regulation. The previous draft — and in general the entirety of the law — had continued to treat ‘intermediaries’ as a monolithic entity, entirely definable by section 2(w) of the IT Act, which in turn derived much of its legal language from the EU E-commerce Directive of 2000.</p>
<p>Intermediaries in the directive were treated more like ‘simple conduits’ or dumb, passive carriers who did not play any active role in the content. While that might have been the truth of the internet when these laws and rules were first enacted, the internet today looks much different.</p>
<p>Not only is there a diversification of services offered by these intermediaries, there’s also a significant issue of scale, wielded by a few select players, either through centralisation or through the sheer size of their user bases. A broad, general mandate would, therefore, miss out on many of these nuances, leading to imperfect regulatory outcomes.</p>
<p>The new rules, therefore, envisage three types of entities:</p>
<ul><li>There are the ‘intermediaries’ within the traditional, section 2(w) meaning of the IT Act. This would be the broad umbrella term for all entities that would fall within the ambit of the rules.</li><li>There are the ‘social media intermediaries’ (SMI), as entities, which enable online interaction between two or more users.</li><li>The rules identify ‘significant social media intermediaries’ (SSMI), which would mean entities with user-thresholds as notified by the Central Government.</li></ul>
<p>The levels of obligations vary based on these hierarchies of classification. For instance, an SSMI would be held to a much higher standard of transparency and accountability towards its users. They would have to comply by publishing six-monthly transparency reports, outlining how they dealt with requests for content removal, how they deployed automated tools to filter content, and so on.</p>
<p>I have previously argued that transparency reports, when done well, are an excellent way of understanding the breadth of government and social media censorship. Legally mandating this is then perhaps a step in the right direction.</p>
<p>Some other requirements under this transparency principle include giving notice to users whose content has been disabled, allowing them to contest such removal, etc.</p>
<p>One of the other rules from the older draft that had raised a significant amount of concern was the proactive filtering mandate, under which intermediaries were effectively required to filter for all unlawful content. This was problematic on two counts:</p>
<ul><li>Developments in machine learning technologies are simply not mature enough to make this feasible, which means there would always be a chance that legitimate and legal content would get censored, leading to a general chilling effect on digital expression</li><li>The technical and financial burden this would impose on intermediaries would have impacted competition in the market.</li></ul>
<p>The new rules seem to have lessened this burden: first, by reducing the requirement from a mandatory obligation to a best-endeavour basis; and second, by reducing the ambit of ‘unlawful content’ to only include content depicting sexual abuse, child sexual abuse material (CSAM) and content duplicating material that has already been disabled or removed.</p>
<p>This specificity would be useful for better deployment of such technologies, since previous research has shown that it’s considerably easier to train a machine learning tool on a corpus of CSAM or abuse imagery than on more contextual, subjective matters such as hate speech.</p>
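<p>To illustrate why duplicates of already-removed content are the tractable case, below is a minimal, hypothetical sketch of hash-based duplicate detection in Python. It is not any platform's actual system: production tools rely on perceptual hashes (such as Microsoft's PhotoDNA or Meta's PDQ) so that re-encoded or slightly altered copies still match, whereas the plain cryptographic hash used here only catches byte-identical re-uploads.</p>
<pre>
import hashlib

# Hypothetical blocklist: hashes of content already taken down.
removed_hashes = set()

def register_removed(content: bytes) -> None:
    """Record the hash of removed content so re-uploads can be flagged."""
    removed_hashes.add(hashlib.sha256(content).hexdigest())

def is_duplicate_of_removed(upload: bytes) -> bool:
    """Check a new upload against the blocklist of removed content."""
    return hashlib.sha256(upload).hexdigest() in removed_hashes

# Once a takedown is executed, the content's hash is registered;
# byte-identical re-uploads are then flagged automatically.
register_removed(b"bytes of the removed image")
print(is_duplicate_of_removed(b"bytes of the removed image"))  # True
print(is_duplicate_of_removed(b"a different image"))           # False
</pre>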
<p><strong>What should go?</strong></p>
<p>That being said, it is concerning that the new rules choose to bring online curated content platforms (OCCPs) within the ambit of the law, through proposals for a three-tiered self-regulatory structure and schedules outlining guidelines for the rating system these entities should deploy.</p>
<p>In the last two years, several attempts have been made by the Internet and Mobile Association of India (IAMAI), an industry body consisting of representatives of these OCCPs, to bring about a self-regulatory code that fills in the supposed regulatory gap in the Indian law.</p>
<p>It is not known if these stakeholders were consulted before the enactment of these provisions. Some of this framework would also apply to publishers of digital news portals.</p>
<p>Noticeably, this entire chapter was also missing from the old draft, and introducing it in the final form of the law without due public consultations is problematic.</p>
<p>Part III and onwards of the rules, which broadly deal with the regulation of these entities, therefore, should be put on hold and opened up for a period of public and stakeholder consultations to adhere to the true spirit of democratic participation.</p>
<p><em>The author would like to thank Gurshabad Grover for his editorial suggestions. </em></p>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/blog/new-intermediary-guidelines-the-good-and-the-bad'>https://cis-india.org/internet-governance/blog/new-intermediary-guidelines-the-good-and-the-bad</a>
</p>
No publisherTorSharkIT ActIntermediary LiabilityInternet GovernanceCensorshipArtificial Intelligence2021-03-15T13:52:46ZBlog EntryTwitter's India troubles show tough path ahead for digital platforms
https://cis-india.org/internet-governance/news/dw-june-21-2021-aditya-sharma-twitter-india-troubles-show-tough-path-ahead-for-digital-platforms
<b>Twitter is in a standoff with Indian authorities over the government's new digital rules. Critics see the rules as an attempt to curb free speech, while others say more action is needed to hold tech giants accountable.
</b>
<p style="text-align: justify; ">The blog by Aditya Sharma <a class="external-link" href="https://www.dw.com/en/twitters-india-troubles-show-tough-path-ahead-for-digital-platforms/a-57980916">was published by DW</a> on 21 June 2021. Torsha Sarkar was quoted.</p>
<hr style="text-align: justify; " />
<p style="text-align: justify; "><img src="https://cis-india.org/home-images/Intermediary.jpg/@@images/08eb8de3-4fd6-408f-94d2-3f202da0e730.jpeg" alt="Intermediary" class="image-right" title="Intermediary" /></p>
<p style="text-align: justify; ">Twitter holds a relatively low share of India's social media market. But, since 2017, the huge nation has emerged as Twitter's fastest-growing market, becoming critical to its global expansion plans.</p>
<p style="text-align: justify; ">In February, the Indian government <a href="https://www.dw.com/en/india-targets-twitter-whatsapp-with-new-regulatory-rules/a-56708566">introduced new guidelines</a> to regulate digital content on rapidly growing social media platforms.</p>
<p style="text-align: justify; ">The so-called Intermediary Guidelines are aimed at regulating content on internet platforms such as Twitter and Facebook, making them more accountable to legal requests for the removal of posts and sharing information about the originators of messages.</p>
<p style="text-align: justify; ">Employees at these companies can be held criminally liable for not complying with the government's requests.</p>
<p style="text-align: justify; ">Large social media firms must also set up mechanisms to address grievances and appoint executives to liaise with law enforcement under the new rules, as well as appoint an India-based compliance officer who would be held criminally liable for the content on their platforms.</p>
<p style="text-align: justify; ">The Indian government says the rules empower "users who become victims of defamation, morphed images, sexual abuse," among other online crimes. It also said that the rules seek to tackle the problem of disinformation.</p>
<p style="text-align: justify; ">But critics fear that the rules could be used to target government opponents and make sure dissidents don't use the platforms.</p>
<p style="text-align: justify; ">Social media companies were expected to comply with the new rules by May 25.</p>
<p style="text-align: justify; ">Some Indian media reports have recently said that Twitter lost its status as an "intermediary" and the legal protection that came with it, due to its failure to comply with the new rules.</p>
<h3 style="text-align: justify; ">Failure to comply and serious implications</h3>
<p style="text-align: justify; ">Apar Gupta, the executive director of the Internet Freedom Foundation, a New Delhi-based digital rights advocacy group, says failure to comply with the rules could threaten Twitter's India operations.</p>
<p style="text-align: justify; ">"Not complying with the rules would pose a real risk to Twitter's operational environment," he told DW.</p>
<p style="text-align: justify; ">"It will need to go to court to defend itself each time criminal prosecutions are launched against it," he added.</p>
<p style="text-align: justify; ">The first case against Twitter was filed last week, where it was charged with failing to stop the spread of a video on its platform that allegedly incited "hate and enmity" between two religious groups.</p>
<p style="text-align: justify; "><span>'Heavy censorship'</span></p>
<p style="text-align: justify; ">Gupta says adhering to all the government's demands would substantially change Twitter.</p>
<p style="text-align: justify; ">"Absolute compliance would mean heavy censorship of individual tweets, removal of the manipulated media tags, and blocking/suspension of accounts at the government's behest," he said.</p>
<p style="text-align: justify; ">Torsha Sarkar, policy officer at the Bengaluru-based Centre for Internet and Society, fears that Twitter might at times be compelled to overcomply with government demands, threatening user rights.</p>
<p style="text-align: justify; ">"This can be either by over-complying with flawed information requests, thereby selling out its users, or taking down content that offends the majoritarian sensibilities," she told DW.</p>
<p style="text-align: justify; ">Last week, three special rapporteurs appointed by a top UN human rights body expressed "serious concerns" that certain parts of the guidelines "may result in the limiting or infringement of a wide range of human rights."</p>
<p style="text-align: justify; ">They urged New Delhi to review the rules, adding that they did not conform to India's international human rights obligations and could threaten the digital rights of Indians.</p>
<h3 style="text-align: justify; ">Twitter's balancing act</h3>
<p style="text-align: justify; ">It is not the first time that Twitter has been accused of giving in to government pressure to censor content on its platform.</p>
<p style="text-align: justify; ">At the height of the long-running farmer protests, <a href="https://www.dw.com/en/farmer-protests-india-blocks-prominent-twitter-accounts-detains-journalists/a-56411354">Twitter blocked hundreds of tweets</a> and accounts, including the handle of a prominent news magazine. It subsequently unblocked them following public outrage.</p>
<p style="text-align: justify; ">The US company stopped short of complying with demands to block the accounts of activists, politicians and journalists, arguing that such a move would "violate their fundamental right to free expression under Indian law."</p>
<p style="text-align: justify; ">According to local media reports, Twitter's Indian executives were reportedly threatened with fines and imprisonment if the accounts were not taken down.</p>
<h3 style="text-align: justify; ">Special police notify Twitter offices</h3>
<p style="text-align: justify; ">Last month, the labeling of a tweet by a politician from the ruling BJP as "manipulated media" prompted a special unit of the <a href="https://www.dw.com/en/india-police-visit-twitter-offices-over-manipulated-tweet/a-57650193">Delhi police to visit Twitter's offices</a> in the capital and neighboring Gurgaon. Police notified the offices about an investigation into the labeling of the post.</p>
<p style="text-align: justify; ">Twitter India's managing director, Manish Maheswari, was said to have been asked to appear before the police for questioning, according to media reports.</p>
<p style="text-align: justify; ">Some Twitter employees have refused to talk about the ongoing tensions for fear of government reprisals.</p>
<p style="text-align: justify; ">"Such kind of intimidation does not happen every day. (But) Everyone at Twitter India is terrified," people familiar with the matter told DW on the condition of anonymity.</p>
<h3 style="text-align: justify; ">Big Tech VS sovereign power?</h3>
<p style="text-align: justify; ">Those calling for better regulation of tech giants say transnational <a href="https://www.dw.com/en/india-social-media-conflict/a-57702394">social media companies like Twitter lack accountability</a>, blaming them for the alleged inaction against online abuse and disinformation campaigns.</p>
<p style="text-align: justify; ">"The problem with these rules is that they centralize greater power toward the government without providing for the objective benefit of rights toward users," Gupta said.</p>
<p style="text-align: justify; ">"If Twitter were to comply with these rules, it would make a bad situation worse," he said.</p>
<p style="text-align: justify; ">Twitter is unlikely to ditch a major market such as India.</p>
<p style="text-align: justify; ">Sarkar from the Centre for Internet and Society said "It might be difficult to say how the powers of big tech are going to collide with sovereign nations, especially in light of flawed legal interventions around the world."</p>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/news/dw-june-21-2021-aditya-sharma-twitter-india-troubles-show-tough-path-ahead-for-digital-platforms'>https://cis-india.org/internet-governance/news/dw-june-21-2021-aditya-sharma-twitter-india-troubles-show-tough-path-ahead-for-digital-platforms</a>
</p>
No publisherAditya SharmaSocial MediaInternet GovernanceIntermediary LiabilityInformation Technology2021-06-26T02:54:19ZNews ItemDonald Trump is attacking the social media giants; here’s what India should do differently
https://cis-india.org/internet-governance/blog/donald-trump-is-attacking-the-social-media-giants-here2019s-what-india-should-do-differently
<b>For a robust and rights-respecting public sphere, India needs to ensure that large social media platforms receive adequate protections, and are made more responsible to their users.</b>
<p>This piece was first published at <a class="external-link" href="https://scroll.in/article/965151/donald-trump-is-attacking-the-social-media-giants-heres-what-india-should-do-differently">Scroll</a>. The authors would like to thank Torsha Sarkar for reviewing and editing the piece, and to Divij Joshi for his feedback.</p>
<hr />
<div id="article-contents" class="article-body">
<p>In retaliation to Twitter <a class="link-external" href="https://www.nytimes.com/2020/05/26/technology/twitter-trump-mail-in-ballots.html" rel="nofollow noopener" target="_blank">labelling</a> one of US President Donald Trump’s tweets as being misleading, the White House signed an <a class="link-external" href="https://www.whitehouse.gov/presidential-actions/executive-order-preventing-online-censorship/" rel="nofollow noopener" target="_blank">executive order</a>
on May 28 that seeks to dilute protections that social media companies
in the US have with respect to third-party content on their platforms.</p>
<p>The
order argues that social media companies that engage in censorship stop
functioning as ‘passive bulletin boards’: they must consequently be
treated as ‘content creators’, and be held liable for content on their
platforms as such. The shockwaves of the decision soon reached India,
with news coverage of the event <a class="link-external" href="https://www.business-standard.com/article/companies/trump-twitter-spat-debate-rages-on-role-of-social-media-companies-120053100055_1.html" rel="nofollow noopener" target="_blank">starting</a> to <a class="link-external" href="https://economictimes.indiatimes.com/tech/internet/feud-between-donald-trump-and-jack-dorsey-can-have-long-lasting-effects-on-how-we-consume-media-in-india/articleshow/76111556.cms" rel="nofollow noopener" target="_blank">debate</a> the <a class="link-external" href="https://economictimes.indiatimes.com/tech/internet/trumps-move-against-social-media-cos-unlikely-to-change-indias-stand/articleshow/76094586.cms?from=mdr" rel="nofollow noopener" target="_blank">consequences</a> of Trump’s order on how India regulates internet services and social media companies.</p>
<p>The
debate on the responsibilities of online platforms is not new to India,
and recently took main stage in December 2018 when the Ministry of
Electronics and Information Technology, Meity, published a draft set of
guidelines that most online services – ‘intermediaries’ – must follow.
The draft rules, which haven’t been notified yet, propose to
significantly expand the obligations on intermediaries.</p>
<p>Trump’s
executive order, however, comes in the context of content moderation
practices by social media platforms, i.e. when platforms censor speech
of their volition, and not because of legal requirements. The legal
position of content moderation is relatively under-discussed, at least
in legal terms, when it comes to India.</p>
<p>In contrast to
commentators who have implicitly assumed that Indian law permits content
moderation by social media companies, we believe Indian law fails to
adequately account for content moderation and curation practices
performed by social media companies. There may be adverse consequences
for the exercise of freedom of expression in India if this lacuna is not
filled soon.</p>
<h3 class="cms-block cms-block-heading">India vs US<br /></h3>
<p>A
useful starting point for the analysis is to compare how the US and
India regulate liability for online services. In the US, Section 230 of
the Communications Decency Act provides online services with broad
immunity from liability for third party content that they host or
transmit.</p>
<p>There are two critical components to what is generally referred to as Section 230.</p>
<p>First,
providers of an ‘interactive computer service’, like your internet
service provider or a company like Facebook, will not be treated as
publishers or speakers of third-party content. This system has allowed
internet speech and the internet economy to <a class="link-external" href="https://law.emory.edu/elj/content/volume-63/issue-3/articles/how-law-made-silicon-valley.html" rel="nofollow noopener" target="_blank">flourish</a>,
since it allows companies to focus on their service without constant
paranoia about what users are transmitting through their service.</p>
<p>The
second part of Section 230 states that services are allowed to moderate
and remove, in ‘good faith’, such third-party content that they may
deem offensive or obscene. This allows for online services to instate
their own community guidelines or content policies.</p>
<p>In India,
section 79 of the Information Technology Act is the analogous provision:
it grants intermediaries conditional ‘safe harbour’. This means
intermediaries, again like Facebook or your internet provider, are
exempt from liability for third-party content – like messages or videos
posted by ordinary people – provided their functioning meets certain
requirements, and they comply with the allied rules, known as
Intermediary Guidelines.</p>
<p>The notable and stark difference between
Indian law and Section 230 is that India’s IT Act is largely silent on
content moderation practices. As Rahul Matthan <a class="link-external" href="https://www.livemint.com/opinion/columns/shield-online-platforms-for-content-moderation-to-work-11591116270685.html" rel="nofollow noopener" target="_blank">points out</a>,
there is no explicit allowance in Indian law for platforms to take down
content based on their own policies, even if such actions are done in
good faith.</p>
<h3 class="cms-block cms-block-heading">Safe harbour</h3>
<p>One
may argue that the absence of an explicit permission does not
necessarily mean that any platform engaging in content moderation
practices will lose its safe harbour. However, the language of Section
79 and the allied rules may even create room for divesting social media
platforms of their safe harbour.</p>
<p>The first such indication is
that, among the conditions to qualify for safe harbour, intermediaries must not
modify said content, must not select the recipients of particular content,
and must take information down when it is brought to their notice by
governments or courts.</p>
<p>Most of the conditions are an almost
verbatim copy of those for a ‘mere conduit’ as defined by the EU Directive on
E-Commerce, 2000. This definition was meant to encapsulate the
functioning of services like infrastructure providers, which transmit
content without exerting any real control. Thus, by adopting this
definition for all intermediaries, Indian law mostly considers internet
services, even social media platforms, to be passive plumbing through
which information flows.</p>
<p>It is easy to see how this narrow conception of online services is severely <a class="link-external" href="https://georgetownlawtechreview.org/wp-content/uploads/2018/07/2.2-Gilespie-pp-198-216.pdf" rel="nofollow noopener" target="_blank">lacking</a>.</p>
<p>Most prominent social media platforms <a class="link-external" href="http://guidelines." rel="nofollow noopener" target="_blank">remove</a> or <a class="link-external" href="https://techcrunch.com/2019/12/16/instagram-fact-checking/" rel="nofollow noopener" target="_blank">hide</a> content, <a class="link-external" href="https://about.fb.com/news/2016/06/building-a-better-news-feed-for-you/" rel="nofollow noopener" target="_blank">algorithmically curate</a> news-feeds to make users keep coming back for more, and increasingly add <a class="link-external" href="https://blog.twitter.com/en_us/topics/product/2020/updating-our-approach-to-misleading-information.html" rel="nofollow noopener" target="_blank">labels</a>
to content. If the law is interpreted strictly, these practices may be
adjudged to run afoul of the aforementioned conditions that
intermediaries need to satisfy in order to qualify for safe harbour.</p>
<h3 class="cms-block cms-block-heading">Platforms or editors?<br /></h3>
<p>For
instance, it can be argued that social media platforms initiate
transmission in some form when they pick and ‘suggest’ relevant
third-party content to users. When it comes to newsfeeds, neither the
content creator nor the consumer have as much control over how their
content is disseminated or curated as much as the platform does. By
curating newsfeeds, social media platforms can be said to essentially be
‘selecting the receiver’ of transmissions.</p>
<p>The Intermediary
Guidelines further complicate matters by specifically laying out what is
not to be construed as ‘editing’ under the law. Under rule 3(3), the
act of taking down content pursuant to orders under the Act will not be
considered as ‘editing’ of said content.</p>
<p>Since the term ‘editing’
has been left undefined beyond the negative qualification, several
social media intermediaries may well qualify as editors. They use
algorithms that curate content for their users; like traditional news
editors, these algorithms use certain <a class="link-external" href="https://www.researchgate.net/profile/Michael_Devito/publication/302979999_From_Editors_to_Algorithms_A_values-based_approach_to_understanding_story_selection_in_the_Facebook_news_feed/links/5a19cc3d4585155c26ac56d4/From-Editors-to-Algorithms-A-values-based-approach-to-understanding-story-selection-in-the-Facebook-news-feed.pdf" rel="nofollow noopener" target="_blank">‘values’</a>
to determine what is relevant to their audiences. In other words, one
can argue that it is difficult to draw a bright line between editorial
and algorithmic acts.</p>
<p>To retain their safe harbour, the
counter-argument that social media platforms can rely on is the fact that
Rule 3(5) of the Intermediary Guidelines requires intermediaries to
inform users that intermediaries reserve the right to take down user
content that relates to a wide variety of acts, including content
that threatens national security, or is “[...] grossly harmful,
harassing, blasphemous, [etc.]”.</p>
<p>In practice, however, the
content moderation practices of some social media companies may go
beyond these categories. Additionally, the rule does not address the
legal questions created by these platforms’ curation of news-feeds.</p>
<p>The
purpose of highlighting how Section 79 treats the practices of social
media platforms is not to argue that these
platforms should be held liable for user-generated content. Online
spaces created by social media platforms have allowed individuals to
express themselves and participate in political organisation and <a class="link-external" href="https://www.pewresearch.org/internet/2018/07/11/public-attitudes-toward-political-engagement-on-social-media/" rel="nofollow noopener" target="_blank">debate</a>.</p>
<p>A
level of protection of intermediaries from liability is therefore
critical for the protection of several human rights, especially the
right to freedom of speech. This piece only serves to highlight that
section 79 is antiquated and unfit to deal with modern online services.
The interpretative dangers that exist in the provision create regulatory
uncertainty for organisations operating in India.</p>
<h3 class="cms-block cms-block-heading">Dangers to speech<br /></h3>
<p>These dangers may not just be theoretical.</p>
<p>Only last year, Twitter CEO Jack Dorsey was <a class="link-external" href="https://www.hindustantimes.com/india-news/twitter-ceo-jack-dorsey-summoned-by-parliamentary-panel-on-feb-25-panel-refuses-to-hear-other-officials/story-8x9OUbNBo36uvp92L5nOKI.html" rel="nofollow noopener" target="_blank">summoned</a>
by the Parliamentary Committee on Information Technology to answer
accusations of the platform having a bias against ‘right-wing’ accounts.
More recently, BJP politician Vinit Goenka <a class="link-external" href="https://www.medianama.com/2020/06/223-vinit-goenka-twitter-khalistan/" rel="nofollow noopener" target="_blank">encouraged people to file cases against Twitter</a> for promoting separatist content.</p>
<p>Recent <a class="link-external" href="https://sflc.in/sites/default/files/reports/Intermediary_Liability_2_0_-_A_Shifting_Paradigm.pdf" rel="nofollow noopener" target="_blank">interventions</a>
from the Supreme Court have imposed proactive filtration and blocking
requirements on intermediaries, but these have been limited to
reasonable restrictions that may be imposed on free speech under Article
19 of India’s Constitution. Content moderation policies of
intermediaries like Twitter and Facebook go well beyond the scope of
Article 19 restrictions, and the apex court has not yet addressed this.</p>
<p>The
Delhi High Court, in <em>Christian Louboutin v. Nakul Bajaj</em>, has already
highlighted criteria for when e-commerce intermediaries can stake claim
to Section 79 safe harbour protections based on the active (or passive)
nature of their services. While the order came in the context of
intellectual property violations, nothing keeps a court from similarly
finding that Facebook and Twitter play an ‘active’ role when it comes to
content moderation and curation.</p>
<p>These companies may one day
find the ‘safe harbour’ rug pulled from under their feet if a court
reads section 79 more strictly. In fact, judicial intervention may not
even be required. The threat of such an interpretation may simply be
exploited by the government, and used as leverage to get social media
platforms to toe the government line.</p>
<h3 class="cms-block cms-block-heading">Protection and responsibility<br /></h3>
<p>Unfortunately,
the amendments to the intermediary guidelines proposed in 2018 do not
address the legal position of content moderation either. More recent
developments <a class="link-external" href="https://www.medianama.com/2020/04/223-meity-information-technology-act-amendments/" rel="nofollow noopener" target="_blank">suggest</a>
that Meity may be contemplating amending the IT Act. This presents
an opportunity for a more comprehensive reworking of the Indian
intermediary liability regime than what is possible through delegated
legislation like the intermediary rules.</p>
<p>Intermediaries, rather
than being treated uniformly, should be classified based on their
function and the level of control they exercise over the content they
process. For instance, network infrastructure should continue to be
treated as ‘mere conduits’ and enjoy broad immunity from liability for
user-generated content.</p>
<p>More complex services like search engines
and online social media platforms can have differentiated
responsibilities based on the extent to which they can contextualise and change
content. The law should carve out an explicit permission for platforms to
moderate content in good faith. Such an allowance should be accompanied
by an outline of best practices that these platforms can follow to ensure <a class="link-external" href="https://santaclaraprinciples.org/" rel="nofollow noopener" target="_blank">transparency and accountability</a> to their users.</p>
<p>For
a robust and rights-respecting public sphere, India needs to ensure
that large social media platforms receive adequate protections, and are
made more responsible to their users.</p>
<p><em>Anna Liz Thomas is a law
graduate and a policy researcher, currently working with the Centre for
Internet and Society. Gurshabad Grover manages research in the freedom
of expression and internet governance team at CIS</em>.</p>
</div>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/blog/donald-trump-is-attacking-the-social-media-giants-here2019s-what-india-should-do-differently'>https://cis-india.org/internet-governance/blog/donald-trump-is-attacking-the-social-media-giants-here2019s-what-india-should-do-differently</a>
</p>
No publisherAnna Liz Thomas and Gurshabad GroverContent takedownFreedom of Speech and ExpressionIntermediary Liability2020-06-25T09:07:52ZBlog EntryWhy should we care about takedown timeframes?
https://cis-india.org/internet-governance/blog/why-should-we-care-about-takedown-timeframes
<b>The issue of content takedown timeframe - the time period an intermediary is allotted to respond to a legal takedown order - has received considerably less attention in conversations about intermediary liability. This article examines the importance of framing an appropriate timeframe towards ensuring that speech online is not over-censored, and frames recommendations towards the same.
</b>
<p><em>This article first <a class="external-link" href="https://cyberbrics.info/why-should-we-care-about-takedown-timeframes/">appeared</a> on the CyberBRICS website. It has since been <a class="external-link" href="https://www.medianama.com/2020/04/223-content-takedown-timeframes-cyberbrics/">cross-posted</a> to Medianama.</em></p>
<p><em>The findings and opinions expressed in this article are derived from the larger research report 'A deep dive into content takedown timeframes', which can be accessed <a class="external-link" href="https://cis-india.org/internet-governance/files/a-deep-dive-into-content-takedown-frames">here</a>.</em></p>
<p><strong>Introduction</strong></p>
<p>Since the Ministry of Electronics and Information Technology (MeitY) proposed the draft amendments to the intermediary liability guidelines in December 2018, speculations regarding their potential effects have been numerous. These have included <a class="external-link" href="http://www.medianama.com/2020/01/223-traceability-accountability-necessary-intermediary-liability/">mapping</a> the requirement of traceability of originators vis-a-vis the chilling effect on free speech online, and <a class="external-link" href="http://cyberbrics.info/rethinking-the-intermediary-liability-regime-in-india/">critiquing</a> the proactive filtering requirement as potentially leading to censorship.</p>
<p>One aspect, however, that has received less attention is encoded within Rule 3(8) of the draft amendments. By virtue of that rule, the time-limit given to intermediaries to respond to a legal content takedown request (“turnaround time”) has been reduced from 36 hours (as it was in the older version of the rules) to 24 hours. In essence, intermediaries, when faced with a takedown order from the government or a court, would now have to remove the concerned piece of content within 24 hours of receipt of the notice.</p>
<p>Why is this important? Consider this: the <a class="external-link" href="http://indiacode.nic.in/bitstream/123456789/1999/3/A2000-21.pdf">definition</a> of an ‘intermediary’ within the Indian law encompasses a vast array of entities – cyber cafes, online marketplaces, internet service providers and more. Governance of any intermediary liability norms would accordingly require varying levels of regulation, each of which recognizes the different composition of these entities. In light of that, the content takedown requirement, and specifically the turnaround time, becomes problematic. Not only would many of the entities under the definition of intermediaries probably find it impossible to implement this obligation due to their technical architecture; the obligation also seems to erase the nuances existing among the entities which would actually fall within its scope.</p>
<p>Each category of online content, and more importantly, each category of intermediary, is different, and any content takedown requirement must appreciate these differences. A smaller intermediary may find it more difficult to adhere to a stricter, shorter timeframe than an incumbent. A piece of ‘terrorist’ content may need to be treated with more urgency than something that is defamatory. These contextual cues are critical, and must be accordingly incorporated in any law on content takedown.</p>
<p>While making our submissions to the draft amendments, we found that there was a lack of research from the government’s side justifying the shortened turnaround time, nor was there any literature which focused on turnaround time-frames as a critical point of regulation of intermediary liability. Accordingly, I share some findings from our research in the subsequent sections, which throw light on certain nuances that must be considered before proposing any content takedown time-frame. It is important to note that our research has not yet found what an appropriate turnaround time should be in a given situation. However, the following findings will hopefully start a preliminary conversation which may ultimately lead us to the right answer.</p>
<p><strong>What to consider when regulating takedown time-frames?</strong></p>
<p>I classify the findings from our research into a chronological sequence: a) broad legal reforms, b) correct identification of the scope and extent of the law, c) institution of proper procedural safeguards, and d) post-facto review of the time-frame for evidence-based policy-making.</p>
<p><em>1. Broad legal reforms: Harmonize the law on content takedown.</em></p>
<p>The Indian law for content takedown is administered through two different provisions under the Information Technology (IT) Act, each with its own legal procedure and scope. While the 24-hour turnaround time would be applicable to the procedure under one of them, there would continue to <a class="external-link" href="http://cis-india.org/internet-governance/resources/information-technology-procedure-and-safeguards-for-blocking-for-access-of-information-by-public-rules-2009">exist</a> a completely different legal procedure under which the government could still effectuate content takedown. Under the latter, intermediaries would be given a 48-hour timeframe to respond to a government request with clarifications (if any).</p>
<p>Such differing procedures contribute to the creation of a confusing legal ecosystem surrounding content takedown, leading to arbitrary ways in which Indian users experience internet censorship. Accordingly, it is important to harmonize the existing law so that the procedures and safeguards are seamless, and the regulatory process of content takedown is streamlined.</p>
<p><em>2. Correct identification of scope and extent of the law: Design a liability framework on the basis of the differences in the intermediaries, and the content in question.</em></p>
<p>As I have highlighted before, regulation of illegal content online cannot be <a class="external-link" href="https://blog.mozilla.org/netpolicy/2018/07/11/sustainable-policy-solutions-for-illegal-content/">one-size-fits-all</a>. Accordingly, a good law on content takedown must account for the nuances existing in the way intermediaries operate and the diversity of speech online. More specifically, there are two levels of classification that are critical.</p>
<p><em>One</em>, the law must make a fundamental classification between the intermediaries within the scope of the law. An obligation to remove illegal content can be implemented only by those entities whose technical architecture allows them to. While a search engine would be able to delink websites that are declared ‘illegal’, it would be absurd to expect a cyber cafe to follow a similar route of responding to a legal takedown order within a specified timeframe.</p>
<p>Therefore, one basis of classification must incorporate this difference in the technical architecture of these intermediaries. Apart from this, the law must also design liability for intermediaries on the basis of their user-base, annual revenue generated, and the reach, scope and potential impact of the intermediary’s actions.</p>
<p><em>Two, </em>it is important that the law recognizes that certain types of content would require more urgent treatment than others. Several regulations across jurisdictions, including the NetzDG and the EU Regulation on Preventing the Dissemination of Terrorist Content Online, while problematic in their own right, attempt to either limit their scope of application or frame liability based on the nature of the content targeted.</p>
<p>The Indian law, on the other hand, encompasses within its scope a vast, varying array of content that is ‘illegal’, which includes, on one hand, critical items like threats to ‘the sovereignty and integrity of India’ and, on the other hand, more subjective speech elements like ‘decency or morality’. While an expedited time-frame may be permissible for the former category of speech, it is difficult to justify the same for the latter. More contextual judgments may be needed to assess the legality of content that is alleged to be defamatory or obscene, thereby making it problematic to have a shorter time-frame for the same.</p>
<p><em>3. Institution of proper procedural safeguards: Make notices mandatory and make sanctions gradated</em>.</p>
<p>Apart from the correct identification of scope and extent, it is important that there are sufficient procedural safeguards to ensure that the interests of intermediaries and users are not curtailed. While these may seem ancillary to the main point, how the law chooses to legislate on these issues (or not) nevertheless has a direct bearing on the issue of content takedown and time-frames.</p>
<p>Firstly, while the Indian law mandates content takedown, it does not mandate a process through which a user is notified of such an action being taken. The mere fact that an incumbent intermediary is able to respond to removal notifications within a specified time-frame does not imply that its actions would not have ramifications for free speech. The ability to take down content does not translate into accuracy of the action taken, and the Indian law fails to take this into account.</p>
<p>Therefore, an additional obligation of informing users when their content has been taken down would institute due process in the procedure. In the context of legal takedowns, such notice mechanisms also <a class="external-link" href="http://www.eff.org/wp/who-has-your-back-2019">empower</a> users to draw attention to government censorship and targeting.</p>
<p>Secondly, a uniform time-frame of compliance, coupled with severe sanctions, disrupts competition to the detriment of smaller intermediaries. While the current law does not clearly elaborate upon the nature of sanctions that would be imposed, general principles of the doctrine of safe harbour dictate that upon failure to remove the content, the intermediary would be subject to the same level of liability as the person uploading the content. This threat of sanctions may have adverse effects on free speech online, resulting in potential <a class="external-link" href="http://cis-india.org/internet-governance/intermediary-liability-in-india.pdf">over-censorship</a> of legitimate speech.</p>
<p>Accordingly, sanctions should be restricted to instances of systematic violations. For critical content, the contours of what constitutes systematic violation may differ. The regulator must accordingly take into account the nature of content which the intermediary failed to remove, while assessing their liability.</p>
<p><em>4. Post-facto review of the time-frame for evidence-based policy-making: Mandate transparency reporting.</em></p>
<p>Transparency reporting, apart from ensuring accountability for intermediary action, is also a useful tool for understanding the impact of the law, specifically in relation to the time period of response. The NetzDG, for all its criticism, has received <a class="external-link" href="https://www.article19.org/wp-content/uploads/2017/09/170901-Legal-Analysis-German-NetzDG-Act.pdf">support</a> for requiring intermediaries to produce bi-annual transparency reports. These reports provide us with important insight into the efficacy of any proposed turnaround time, which in turn helps us to propose more nuanced reforms to the law.</p>
<p>However, to draw the optimal amount of information from these reports, it is important that these reporting practices are standardized. There exists an international body of work which proposes methodologies for standardizing transparency reports, including the Santa Clara Principles and the Electronic Frontier Foundation’s (EFF) ‘Who has your back?’ reports. We have also previously proposed a methodology that utilizes some of these pointers.</p>
<p>Additionally, due to the experimental nature of the provision, including a review provision in the law would ensure that the efficacy of the exercise can be periodically assessed. If the discussion in the preceding section is any indication, the issue of an appropriate turnaround time is currently in regulatory flux, with no correct answer. In such a scenario, periodic assessments compel policymakers and stakeholders to discuss the effectiveness of solutions and the nature of the problems faced, leading to <a class="external-link" href="http://www.livemint.com/Opinion/svjUfdqWwbbeeVzRjFNkUK/Making-laws-with-sunset-clauses.html">evidence-based</a> policymaking.</p>
<p><strong>Why should we care?</strong></p>
<p>There is a lot at stake while regulating any aspect of intermediary liability, and the lack of smart policy-making may end up harming the interests of any one of the stakeholder groups involved. As the submissions to the draft amendments by various civil society and industry groups show, the updated turnaround time suffers from issues which, if not addressed, may lead to over-removal and a lack of due process in the content removal procedure.</p>
<p>Among others, these submissions pointed out that the shortened time-frame did not allow intermediaries sufficient time to scrutinize a takedown request to ensure that all technical and legal requirements were adhered to. This, in turn, may also prompt third-party action against users. Additionally, the significantly short time-frame raised several implementational challenges. For smaller companies with fewer employees, such a timeframe can be burdensome from both a financial and a capability point of view. This, in turn, may result in over-censorship of speech online.</p>
<p>Failing to recognize and incorporate contextual nuances into any law on intermediary liability therefore, may critically alter the way we interact with online intermediaries, and in a larger scheme, with the internet.</p>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/blog/why-should-we-care-about-takedown-timeframes'>https://cis-india.org/internet-governance/blog/why-should-we-care-about-takedown-timeframes</a>
</p>
No publisherTorSharkContent takedownIntermediary LiabilityChilling Effect2020-04-10T04:58:56ZBlog EntryContent takedown and users' rights
https://cis-india.org/internet-governance/blog/content-takedown-and-users-rights-1
<b>After Shreya Singhal v Union of India, commentators have continued to question the constitutionality of the content takedown regime under Section 69A of the IT Act (and the Blocking Rules issued under it). There has also been considerable debate around how the judgement has changed this regime: specifically about (i) whether originators of content are entitled to a hearing, (ii) whether Rule 16 of the Blocking Rules, which mandates confidentiality of content takedown requests received by intermediaries from the Government, continues to be operative, and (iii) the effect of Rule 16 on the rights of the originator and the public to challenge executive action. In this opinion piece, we attempt to answer some of these questions.</b>
<p style="text-align: justify;" class="normal"> </p>
<p style="text-align: justify;" class="normal">This article was first <a class="external-link" href="http://https://theleaflet.in/content-takedown-and-users-rights/">published</a> at the Leaflet. It has subsequently been republished by <a class="external-link" href="https://scroll.in/article/953146/how-india-is-using-its-information-technology-act-to-arbitrarily-take-down-online-content">Scroll.in</a>, <a class="external-link" href="https://kashmirobserver.net/2020/02/15/content-takedown-and-users-rights/">Kashmir Observer</a> and the <a class="external-link" href="https://cyberbrics.info/content-takedown-and-users-rights/">CyberBRICS blog</a>. </p>
<p style="text-align: justify;" class="normal"><strong><br /></strong></p>
<p style="text-align: justify;" class="normal"><strong>Introduction</strong></p>
<p style="text-align: justify;" class="normal">Last year, several Jio users from different states <a href="https://www.medianama.com/2019/03/223-indiankanoon-jio-block/">reported</a> that sites like Indian Kanoon, Reddit and Telegram were inaccessible through their connections. While attempting to access the website, the users were presented with a notice that the websites were blocked on orders from the Department of Telecommunications (DoT). When contacted by the founder of Indian Kanoon, Reliance Jio <a href="https://in.reuters.com/article/us-india-internet-idINKCN1RF14D">stated</a> that the website had been blocked on orders of the government, and that the order had been rescinded the same evening. However, in response to a Right to Information (RTI) request, the DoT <a href="https://twitter.com/indiankanoon/status/1218193372210323456">said</a> they had no information about orders relating to the blocking of Indian Kanoon.</p>
<p style="text-align: justify;" class="normal">Alternatively, consider that the Committee to Protect Journalists (CPJ) <a href="https://cpj.org/blog/2019/10/india-opaque-legal-process-suppress-kashmir-twitter.php">expressed concern</a> last year that the Indian government was forcing Twitter to suspend accounts or remove content relating to Kashmir. They reported that over the last two years, the Indian government suppressed a substantial amount of information coming from the area, and prevented Indians from accessing more than five thousand tweets.</p>
<p style="text-align: justify;" class="normal">These instances are <a href="https://www.hindustantimes.com/analysis/to-preserve-freedoms-online-amend-the-it-act/story-aC0jXUId4gpydJyuoBcJdI.html">symptomatic</a> of a larger problem of opaque and arbitrary content takedown in India, enabled by the legal framework under the Information Technology (IT) Act. The Government derives its powers to order intermediaries (entities storing or transmitting information on behalf of others, a definition which includes internet service providers and social media platforms alike) to block online resources through <a href="https://indiankanoon.org/doc/10190353/">section 69A</a> of the IT Act and the <a href="https://meity.gov.in/writereaddata/files/Information%20Technology%20%28%20Procedure%20and%20safeguards%20for%20blocking%20for%20access%20of%20information%20by%20public%29%20Rules%2C%202009.pdf">rules</a> [“the blocking rules”] notified thereunder. Apart from this, <a href="https://indiankanoon.org/doc/844026/">section 79</a> of the IT Act and its allied rules also prescribe a procedure for content removal. <a href="https://cis-india.org/internet-governance/files/a-deep-dive-into-content-takedown-frames">Conversations</a> with one popular intermediary revealed that the government usually prefers to use its powers under section 69A, possibly because of the opaque nature of the procedure that we highlight below.</p>
<p style="text-align: justify;" class="normal">Under section 69A, a content removal request can be sent by authorised personnel in the Central Government not below the rank of a Joint Secretary. The grounds for issuance of blocking orders under section 69A are: “<em>the interest of the sovereignty and integrity of India, defence of India, the security of the state, friendly relations with foreign states or public order or for preventing incitement to the commission of any cognisable offence relating to the above.</em>” Specifically, the blocking rules envisage the process of blocking to be largely executive-driven, and require strict confidentiality to be maintained around the issuance of blocking orders. This shrouds content takedown orders in a cloak of secrecy, and makes it impossible for users and content creators to ascertain the legitimacy or legality of the government action in any instance of blocking.</p>
<p style="text-align: justify;" class="normal"><strong>Issues</strong></p>
<p style="text-align: justify;" class="normal">The Supreme Court had been called to determine the constitutional validity of section 69A and the allied rules in <a href="https://indiankanoon.org/doc/110813550/"><em>Shreya Singhal v Union of India</em></a>. The petitioners had contended that as per the procedure laid down by these rules, there was no guarantee of pre-decisional hearing afforded to the originator of the information. Additionally, the petitioners pointed out that the safeguards built into section 95 and 96 of the Code of Criminal Procedure (CrPC), which allow state governments to ban publications and persons to initiate legal challenges to those actions respectively, were absent from the blocking procedures. Lastly, the petitioners assailed rule 16 of the blocking rules, which mandated confidentiality of blocking procedures, on the grounds that it was affecting their fundamental rights.</p>
<p style="text-align: justify;" class="normal">The Court, however, found little merit in these arguments. Specifically, the Court found that section 69A was narrowly drawn and had sufficient procedural safeguards, which included the grounds of issuance of a blocking order being specifically drawn, and mandating that the reasons of the website blocking be in writing, thus making it amenable to judicial review. Further, the Court also found that the provision of setting up of a review committee saved the law from being constitutional infirmity. In the Court’s opinion, the mere absence of additional safeguards, as the ones built into the CrPC, did not mean that the law was unconstitutional.</p>
<p style="text-align: justify;" class="normal">But do the ground realities align with the Court’s envisaged implementation of these principles? Apar Gupta, a counsel for the petitioners, <a href="https://indianexpress.com/article/opinion/columns/but-what-about-section-69a/">pointed</a> out that there was no recorded instance of pre-decisional hearing being granted to show that this safeguard contained in the rules was actually being implemented. However, Gautam Bhatia <a href="https://indconlawphil.wordpress.com/2015/03/25/the-supreme-courts-it-act-judgment-and-secret-blocking/">read</a> <em>Shreya Singhal </em>to make an important advance: that the right of hearing be mandatorily extended to the ‘originator’, i.e. the content creator.</p>
<p style="text-align: justify;" class="normal">Additionally, Bhatia also noted that the Court, while upholding the constitutionality of the procedure under section 69A, held that the “<em>reasons have to be recorded in writing in such blocking order so that they may be assailed in a writ petition under Article 226 of the Constitution.</em>”</p>
<p style="text-align: justify;" class="normal">There are two important takeaways from this. <em>Firstly</em>, he argued that the broad contours of the judgment invoke an established constitutional doctrine — that the fundamental right under Article 19(1)(a) does not merely include the right of expression, but also the <em>right of access to information. </em>Accordingly, the right of challenging a blocking order was not only vested in the originator or the concerned intermediary, but may rest with the general public as well. And <em>secondly</em>, by the doctrine of necessary implication, it followed that for the general public to challenge any blocking order under Article 226, the blocking orders must be made public. While Bhatia concedes that public availability of blocking orders may be an over-optimistic reading of the judgment, recent events suggest that even the commonly-expected result, i.e. that the content creators having the right to a hearing, has not been implemented by the Government.</p>
<p style="text-align: justify;" class="normal">Consider the <a href="https://internetfreedom.in/delhi-hc-issues-notice-to-the-government-for-blocking-satirical-dowry-calculator-website/">blocking</a> of the satirical website DowryCalculator.com in September 2019 on orders from the government. The website displayed a calculator that suggests a ‘dowry’ depending on the salary and education of a prospective groom: even if someone misses the satire, the contents of the website are not immediately relatable to any grounds of removal listed under section 69A of the IT Act.</p>
<p style="text-align: justify;" class="normal"> Tanul Thakur, the creator of the website, was not granted a hearing despite the fact that he had publicly claimed the ownership of the website at various times and that the website had been covered widely by the press. The information associated with the domain name also publicly lists Thakur’s name and contact information. Clearly, the government made no effort to contact Thakur when passing the order. Perhaps even more worryingly, when he <a href="https://internetfreedom.in/delhi-hc-issues-notice-to-the-government-for-blocking-satirical-dowry-calculator-website/">tried</a> to access a copy of the blocking order by filing a RTI, the MeitY cited the confidentiality rule to deny him the information.</p>
<p style="text-align: justify;" class="normal">This incident documents a fundamental problem plaguing the rules: the confidentiality clause is still being used to deny disclosure of key information on content takedown orders. The government has also used the provision to deny citizens a list of blocked websites , as responses to RTI requests have proven <a href="https://cis-india.org/internet-governance/blog/rti-application-to-bsnl-for-the-list-of-websites-blocked-in-india">time</a> and <a href="https://sflc.in/deity-provides-list-sites-blocked-2013-withholds-orders">again</a>.</p>
<p style="text-align: justify;" class="normal">Clearly, the Supreme Court’s rationale in considering Section 69A and the blocking rules as constitutional is not one that is implemented in reality. The confidentiality clause is preventing legal challenges to content blocking in totality: content creators are unable access the orders, and hence are unable to understand the executive’s reasoning in ordering their content to be blocked from public access.</p>
<p style="text-align: justify;" class="normal">As we noted earlier, the grounds of issuing a blocking order under section 69A pertain to certain reasonable restrictions on expression permitted by Article 19(2), which are couched in broad terms. The government’s implementation of section 69A and the rules make it impossible for any judicial review or accountability on the conformity of blocking orders with the mentioned grounds under the rules, or any reasonable restriction at all.</p>
<p style="text-align: justify;" class="normal"><strong>The Way Forward</strong></p>
<p style="text-align: justify;" class="normal">From the opacity of proceedings under the law, to the lack of information regarding the same on public domain, the Indian content takedown regime leaves a lot to be desired from both the government and intermediaries at play. </p>
<p style="text-align: justify;" class="normal">First, we believe the Supreme Court’s decision in <em>Shreya Singhal v. Union of India</em> casts an obligation on the government to attempt to contact the content creator if they are passing a content takedown order to an intermediary. <em>Second</em>, even if the content creator is unavailable for a hearing at that instance, the confidentiality clause should not be used to prevent future disclosure of information to the content creator, so that affected citizens can access and challenge these orders.</p>
<p style="text-align: justify;" class="normal">While we wait for legal reform, intermediaries can also step up to ensure the rights of users online are upheld. On receiving formal orders, intermediaries should <a href="https://cis-india.org/internet-governance/blog/torsha-sarkar-suhan-s-and-gurshabad-grover-october-30-2019-through-the-looking-glass">assess</a> the legality of the received request. This should involve ensuring that only authorised agencies and personnel have sent the content removal orders, that the order specifically mentions what provision the government is exercising the power under, and that the content removal requests relate to the grounds of removal that are permissible under section 69A. For instance, intermediaries should refuse to entertain content removal requests under section 69A of the IT Act if they relate to obscenity, a ground not covered by the provision.</p>
<p style="text-align: justify;" class="normal">The representatives of the intermediary should also push for the committee to grant a hearing to the content creator. Here, the intermediary can act as a liaison between the uploader and the governmental authorities.</p>
<p style="text-align: justify;" class="normal">The Supreme Court’s recent decision in <a href="https://indiankanoon.org/doc/82461587/"><em>Anuradha Bhasin v. Union of India</em></a><em> </em>offers a glimmer of hope for user rights online<em>. </em>While the case primarily challenged the orders imposing section 144 of the CrPC and a communication blockade in Jammu and Kashmir, the final decision does affirm the fundamental principle that government-imposed restrictions on the freedom of expression and assembly must be made available to the public and affected parties to enable challenges in a court of law.</p>
<p style="text-align: justify;" class="normal"> The judiciary has yet another opportunity to consider the provision and the rules: late last year, Tanul Thakur <a href="https://internetfreedom.in/delhi-hc-issues-notice-to-the-government-for-blocking-satirical-dowry-calculator-website/">approached</a> the Delhi High Court to challenge the orders passed by the government to ISPs to block his website. One hopes that the future holds robust reforms to the content takedown regime.</p>
<p style="text-align: justify;" class="normal"> We live in an era where the ebb and flow of societal discourse is increasingly channeled through intermediaries on the internet. In the absence of a mature, balanced and robust framework that enshrines the rule of law, we risk arbitrary modulation of the marketplace of ideas by the executive.</p>
<p style="text-align: justify;" class="normal"><em> </em></p>
<p style="text-align: justify;" class="normal"><em>Torsha Sakar and Gurshabad Grover are researchers at the Centre for Internet and Society.</em></p>
<p style="text-align: justify;" class="normal"><em>Disclosure: The Centre for Internet and Society is a recipient of research grants from Facebook and Google.</em></p>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/blog/content-takedown-and-users-rights-1'>https://cis-india.org/internet-governance/blog/content-takedown-and-users-rights-1</a>
</p>
No publisherTorsha Sarkar, Gurshabad GroverInternet FreedomInternet GovernanceIntermediary LiabilityCensorship2020-02-17T05:18:25ZBlog EntryA Deep Dive into Content Takedown Timeframes
https://cis-india.org/internet-governance/blog/torsha-sarkar-november-30-2019-a-deep-dive-into-content-takedown-timeframes
<b>Since the 1990s, internet usage has seen massive growth, facilitated in part by the growing importance of intermediaries that act as gateways to the internet. Intermediaries such as Internet Service Providers (ISPs), web-hosting providers, social-media platforms and search engines provide key services which propel social, economic and political development. However, these developments are also offset by instances of users engaging with the platforms in an unlawful manner. The scale and openness of the internet makes regulating such behaviour challenging, and in turn poses several interrelated policy questions.</b>
<p style="text-align: justify;">In this report, we will consider one such question by examining the appropriate time frame for an intermediary to respond to a government content removal request. The way legislations around the world choose to frame this answer has wider ramifications on issues of free speech and ease of carrying out operations for intermediaries. Through the course of our research, we found, for instance:</p>
<ol>
<li style="text-align: justify;">An one-size-fits-all model for illegal content may not be productive. The issue of regulating liability online contain several nuances, which must be considered for more holistic law-making. If regulation is made with only the tech incumbents in mind, then the ramifications of the same would become incredibly burdensome for the smaller companies in the market. </li>
<li style="text-align: justify;">Determining an appropriate turnaround time for an intermediary must also consider the nature and impact of the content in question. For instance, the Impact Assessment on the Proposal for a Regulation of the European Parliament and of the Council on preventing the dissemination of terrorist content online cites research that shows that one-third of all links to Daesh propaganda were disseminated within the first one-hour of its appearance, and three-fourths of these links were shared within four hours of their release. This was the basic rationale for the subsequent enactment of the EU Terrorism Regulation, which proposed an one-hour time-frame for intermediaries to remove terrorist content.</li>
<li style="text-align: justify;">Understanding the impact of specific turnaround times on intermediaries requires the law to introduce in-built transparency reporting mechanisms. Such an exercise, performed periodically, generates useful feedback, which can be, in turn used to improve the system.</li></ol>
<div style="text-align: justify;"> </div>
<div style="text-align: justify;"><strong>Corrigendum: </strong>Please note that in the section concerning 'Regulation on Preventing the Dissemination of Terrorist Content Online', the report mentions that the Regulation has been 'passed in 2019'. At the time of writing the report, the Regulation had only been passed in the European Parliament, and as of May 2020, is currently in the process of a trilogue. </div>
<div style="text-align: justify;"> </div>
<div style="text-align: justify;"><strong>Disclosure</strong>: CIS is a recipient of research grants from Facebook India. </div>
<div style="text-align: justify;"> </div>
<hr />
<p style="text-align: justify;"><a class="external-link" href="http://cis-india.org/internet-governance/files/a-deep-dive-into-content-takedown-frames">Click to download the research paper</a> by Torsha Sarkar (with research assistance from Keying Geng and Merrin Muhammed Ashraf; edited by Elonnai Hickok, Akriti Bopanna, and Gurshabad Grover; inputs from Tanaya Rajwade)</p>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/blog/torsha-sarkar-november-30-2019-a-deep-dive-into-content-takedown-timeframes'>https://cis-india.org/internet-governance/blog/torsha-sarkar-november-30-2019-a-deep-dive-into-content-takedown-timeframes</a>
</p>
No publishertorshaFreedom of Speech and ExpressionInternet GovernanceIntermediary Liability2020-06-26T11:59:06ZBlog EntryRoundtable Discussion on Intermediary Liability
https://cis-india.org/internet-governance/news/roundtable-discussion-on-intermediary-liability
<b>Tanaya Rajwade participated in a roundtable discussion on intermediary liability organised by SFLC and the Dialogue in New Delhi on October 17, 2019.</b>
<p>Click to view the <a class="external-link" href="http://cis-india.org/internet-governance/files/internet-liability">agenda</a>.</p>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/news/roundtable-discussion-on-intermediary-liability'>https://cis-india.org/internet-governance/news/roundtable-discussion-on-intermediary-liability</a>
</p>
No publisherAdminFreedom of Speech and ExpressionInternet GovernanceIntermediary Liability2019-10-20T07:08:11ZNews ItemRethinking the intermediary liability regime in India
https://cis-india.org/internet-governance/blog/cyber-brics-august-12-2019-torsha-sarkar-rethinking-the-intermediary-liability-regime-in-india
<b>The article consolidates some of our broad thematic concerns with the draft amendments to the intermediary liability rules, published by MeitY last December.
</b>
<p>The blog post by Torsha Sarkar was <a class="external-link" href="https://cyberbrics.info/rethinking-the-intermediary-liability-regime-in-india/">published by CyberBRICS</a> on August 12, 2019.</p>
<hr />
<h3 style="text-align: justify; ">Introduction</h3>
<p style="text-align: justify; ">In December 2018, the Ministry of Electronics and Information Technology (“MeitY”) released the Intermediary Liability Guidelines (Amendment) Rules (“the Guidelines”), which would be significantly altering the intermediary liability regime in the country. While the Guidelines has drawn a considerable amount of attention and criticism, from the perspective of the government, the change has been overdue.</p>
<p style="text-align: justify; ">The Indian government has been determined to overhaul the pre-existing safe harbour regime since last year. The draft<a href="https://www.medianama.com/wp-content/uploads/Draft-National-E-commerce-Policy.pdf">version</a> of the e-commerce policy, which were leaked last year, also hinted at similar plans. As effects of mass dissemination of disinformation, propaganda and hate speech around the world spill over to offline harms, governments have been increasingly looking to enact interventionist laws that leverage more responsibility on the intermediaries. India has not been an exception.</p>
<p style="text-align: justify; ">A major source of these harmful and illegal content in India come through the popular communications app WhatsApp, despite the company’s enactment of several anti-spam measures over the past few years. Last year, rumours circulated on WhatsApp prompted a series of lynchings. In May, Reuters <a href="https://in.reuters.com/article/india-election-socialmedia-whatsapp/in-india-election-a-14-software-tool-helps-overcome-whatsapp-controls-idINKCN1SL0PZ" rel="noreferrer noopener" target="_blank">reported</a> that clones and software tools were available at minimal cost in the market, for politicians and other interested parties to bypass these measures, and continue the trend of bulk messaging.</p>
<p style="text-align: justify; ">These series of incidents have made it clear that disinformation is a very real problem, and the current regulatory framework is not enough to address it. The government’s response to this has been accordingly, to introduce the Guidelines. This rationale also finds a place in its preliminary<a href="https://www.meity.gov.in/comments-invited-draft-intermediary-rules" rel="noreferrer noopener" target="_blank">statement of reasons</a>.</p>
<p style="text-align: justify; ">While enactment of such interventionist laws has triggered fresh rounds of debate on free speech and censorship, it would be wrong to say that such laws were completely one-sided, or uncalled for.</p>
<p style="text-align: justify; ">On one hand, automated amplification and online mass circulation of purposeful disinformation, propaganda, of terrorist attack videos, or of plain graphic content, are all problems that the government would concern itself with. On the other hand, several online companies (including <a href="https://www.blog.google/outreach-initiatives/public-policy/oversight-frameworks-content-sharing-platforms/" rel="noreferrer noopener" target="_blank">Google</a>) also seem to be in an uneasy agreement that simple self-regulation of content would not cut it. For better oversight, more engagement with both government and civil society members is needed.</p>
<p style="text-align: justify; ">In March this year, Mark Zuckerberg wrote an<a href="https://www.washingtonpost.com/opinions/mark-zuckerberg-the-internet-needs-new-rules-lets-start-in-these-four-areas/2019/03/29/9e6f0504-521a-11e9-a3f7-78b7525a8d5f_story.html?utm_term=.4d177c66782f" rel="noreferrer noopener" target="_blank">op-ed</a> for the Washington Post, calling for more government involvement in the process of content regulation on its platform. While it would be interesting to consider how Zuckerberg’s view aligns with those similarly placed, it would nevertheless be correct to say that online intermediaries are under more pressure than ever to keep their platforms clean of content that is ‘illegal, harmful, obscene’. And this list only grows.</p>
<p style="text-align: justify; ">That being said, the criticism from several stakeholders is sharp and clear in instances of such law being enacted – be it the ambitious <a href="https://www.ivir.nl/publicaties/download/NetzDG_Tworek_Leerssen_April_2019.pdf" rel="noreferrer noopener" target="_blank">NetzDG</a> aimed at combating Nazi propaganda, hate speech and fake news, or the controversial new European Copyright Directive which has been welcomed by journalists but has been severely critiqued by online content creators and platforms as detrimental against user-generated content.</p>
<p style="text-align: justify; ">In the backdrop of such conflicting interests on online content moderation, it would be useful to examine the Guidelines released by MeitY. In the first portion we would be looking at certain specific concerns existing within the rules, while in the second portion, we would be pushing the narrative further to see what an alternative regulatory framework may look like.</p>
<p style="text-align: justify; ">Before we jump to the crux of this discussion, one important disclosure must be made about the underlying ideology of this piece. It would be unrealistic to claim that the internet should be absolutely free from regulation. Swathes of content on child sexual abuse, or terrorist propaganda, or even the hordes of death and rape threats faced by women online are and should be concerns of a civil society. While that is certainly a strong driving force for regulation, this concern should not override the basic considerations for human rights (including freedom of expression). These ideas would be expanded a bit more in the upcoming sections.</p>
<h3 style="text-align: justify; ">Broad, thematic concerns with the Rules</h3>
<h3 style="text-align: justify; ">A uniform mechanism of compliance</h3>
<h3 style="text-align: justify; ">Timelines</h3>
<p style="text-align: justify; ">Rule 3(8) of the Guidelines mandates intermediaries, prompted by <em>a</em> <em>court order or a government notification</em>, to take down content relating to unlawful acts within 24 hours of such notification. In case they fail to do so, the safe harbour applicable to them under section 79 of the Information Technology Act (“the Act”) would cease to apply, and they would be liable. Prior to the amendment, this timeframe was 36 hours.</p>
<p style="text-align: justify; ">There is a visible lack of research which could rationalize that a 24-hour timeline for compliance is the optimal framework, for <em>all</em> intermediaries, irrespective of the kind of services they provide, or the sizes or resources available to them. As Mozilla Foundation has <a href="https://blog.mozilla.org/netpolicy/2018/07/11/sustainable-policy-solutions-for-illegal-content/" rel="noreferrer noopener" target="_blank">commented</a>, regulation of illegal content online simply cannot be done in an one-size-fits-all approach, nor can <a href="https://blog.mozilla.org/netpolicy/2019/04/10/uk_online-harms/" rel="noreferrer noopener" target="_blank">regulation be made</a> with only the tech incumbents in mind. While platforms like YouTube can comfortably <a href="https://www.bmjv.de/SharedDocs/Pressemitteilungen/DE/2017/03142017_Monitoring_SozialeNetzwerke.html" rel="noreferrer noopener" target="_blank">remove</a> criminal prohibited content within a span of 24 hours, this still can place a large burden on smaller companies, who may not have the necessary resources to comply within this timeframe. There are a few unintended consequences that would arise out of this situation.</p>
<p style="text-align: justify; ">One, sanctions under the Act, which would include both organisational ramifications like website blocking (under section 69A of the Act) as well as individual liability, would affect the smaller intermediaries more than it would affect the bigger ones. A bigger intermediary like Facebook may be able to withstand a large fine in lieu of its failure to control, say, hate speech on its platform. That may not be true for a smaller online marketplace, or even a smaller online social media site, targeted towards a very specific community. This compliance mechanism, accordingly, may just go on to strengthen the larger companies, and eliminating the competition from the smaller companies.</p>
<p style="text-align: justify; ">Two, intermediaries, in fear of heavy criminal sanctions would err on the side of law. This would mean that the decisions involved in determining whether a piece of content is illegal or not would be shorter, less nuanced. This would also mean that legitimate speech would also be under risk from censorship, and intermediaries would pay <a href="https://cis-india.org/internet-governance/intermediary-liability-in-india.pdf" rel="noreferrer noopener" target="_blank">less heed</a> to the technical requirements or the correct legal procedures required for content takedown.</p>
<h3 style="text-align: justify; ">Utilization of ‘automated technology’</h3>
<p style="text-align: justify; ">Another place where the Guidelines assume that all intermediaries operating in India are on the same footing is Rule 3(9). This mandates these entities to proactively monitor for ‘unlawful content’ on their platforms. Aside the unconstitutionality of this provision, this also assumes that all intermediaries would have the requisite resource to actually set up this tool and operate it successfully. YouTube’s ContentID, which began in 2007, has already seen a whopping <a href="https://www.blog.google/outreach-initiatives/public-policy/protecting-what-we-love-about-internet-our-efforts-stop-online-piracy/" rel="noreferrer noopener" target="_blank">100 million dollars investment by 2018</a>.</p>
<p style="text-align: justify; ">Funnily enough, ContentID is a tool exclusively dedicated to finding copyright violation of rights-holder, and even then, it has been proven to be not <a href="https://www.plagiarismtoday.com/2019/01/10/youtubes-copyright-insanity/" rel="noreferrer noopener" target="_blank">infallible</a>. The Guidelines’ sweeping net of ‘unlawful’ content include far many more categories than mere violations of IP rights, and the framework assumes that intermediaries would be able to set up and run an automated tool that would filter through <em>all</em> these categories of ‘unlawful content’ at one go.</p>
<h3 style="text-align: justify; ">The problems of AI</h3>
<p style="text-align: justify; ">Aside the implementation-related concerns, there are also technical challenges related with Rule 3(9). Supervised learning systems (like the one envisaged under the Guidelines) use training data sets for pro-active filtering. This means if the system is taught that for ten instances of A being the input, the output would be B, then for the eleventh time, it sees A, it would give the output B. In the lingo of content filtering, the system would be taught, for example, that nudity is bad. The next time the system encounters nudity in a picture, it would automatically flag it as ‘bad’ and violating the community standards.</p>
<p style="text-align: justify; "><a href="https://www.theguardian.com/technology/2016/sep/08/facebook-mark-zuckerberg-napalm-girl-photo-vietnam-war" rel="noreferrer noopener" target="_blank">Except, that is not how it should work</a>. For every post that is under the scrutiny of the platform operators, numerous nuances and contextual cues act as mitigating factors, none of which, at this point, would be<a href="https://scholarship.law.nd.edu/cgi/viewcontent.cgi?referer=https://www.google.co.in/&httpsredir=1&article=1704&context=ndlr" rel="noreferrer noopener" target="_blank">understandable</a> by a machine.</p>
<p style="text-align: justify; ">Additionally, the training data used to feed the system <a href="https://www.cmu.edu/dietrich/philosophy/docs/london/IJCAI17-AlgorithmicBias-Distrib.pdf" rel="noreferrer noopener" target="_blank">can be biased</a>. A self-driving car who is fed training data from only one region of the country would learn the customs and driving norms of that particular region, and not the patterns that apply across the intended purpose of driving throughout the country.</p>
<p style="text-align: justify; ">Lastly, it is not disputed that bias would be completely eliminated in case the content moderation was undertaken by a human. However, the difference between a human moderator and an automated one, would be that there would be a measure of accountability in the first one. The decision of the human moderator can be disputed, and the moderator would have a chance to explain his reasons for the removal. Artificial intelligence (“AI”) is identified by the algorithmic ‘<a href="http://raley.english.ucsb.edu/wp-content/Engl800/Pasquale-blackbox.pdf" rel="noreferrer noopener" target="_blank">black box</a>’ that processes inputs, and generates usable outputs. Implementing workable accountability standards for this system, including figuring out appeal and grievance redressal mechanisms in cases of dispute, are all problems that the regulator must concern itself with.</p>
<p style="text-align: justify; ">In the absence of any clarity or revision, it seems unlikely that the provision would actually ever see full implementation. Neither would the intermediaries know what kind of ‘automated technology’ they are supposed to use for filtering ‘unlawful content’, nor would there be any incentives for them to actually deploy this system effectively for their platforms.</p>
<h3 style="text-align: justify; ">What can be done?</h3>
<p style="text-align: justify; ">First, more research is needed to understand the effect of compliance timeframes on the accuracy of content takedown. Several jurisdictions are operating now on different timeframes of compliance, and it would be a far more holistic regulation should the government consider the dialogue around each of them and see what it means for India.</p>
<p style="text-align: justify; ">Second, it might be useful to consider the concept of an independent regulator as an alternative and as a compromise between pure governmental regulation (which is more or less what the system is) or self-regulation (which the Guidelines, albeit problematically, also espouse through Rule 3(9)).</p>
<p style="text-align: justify; ">The <a href="https://www.gov.uk/government/consultations/online-harms-white-paper" rel="noreferrer noopener" target="_blank">UK White Paper on Harms</a>, a piece of important document in the system of liability overhaul, proposes an arms-length regulator who would be responsible for drafting codes of conduct for online companies and responsible for their enforcement. While the exact merits of the system is still up for debate, the concept of having a separate body to oversee, formulate and also possibly<a href="https://medium.com/adventures-in-consumer-technology/regulating-social-media-a-policy-proposal-a2a25627c210" rel="noreferrer noopener" target="_blank">arbitrate</a> disputes regarding content removal, is finding traction in several parallel developments.</p>
<p style="text-align: justify; ">One of the Transatlantic Working Group Sessions seem to discuss this idea in terms of having an ‘<a href="https://medium.com/whither-news/proposals-for-reasonable-technology-regulation-and-an-internet-court-58ac99bec420" rel="noreferrer noopener" target="_blank">internet court</a>’ for illegal content regulation. This would have the noted advantage of a) formulating norms of online content in a transparent, public fashion, something previously done behind closed doors of either the government or the tech incumbents and b) having specially trained professionals who would be able to dispose of matters in an expeditious manner.</p>
<p style="text-align: justify; ">India is not unfamiliar to the idea of specialized tribunals, or quasi-judicial bodies for dealing with specific challenges. In 2015, for example, the Government of India passed the Commercial Courts Act, by which specific courts were tasked to deal with matters of very large value. This is neither an isolated instance of the government choosing to create new bodies for dealing with a specific problem, nor would it be inimitable in the future.</p>
<p style="text-align: justify; ">There is no<a href="https://www.thehindubusinessline.com/opinion/resurrecting-the-marketplace-of-ideas/article26313605.ece" rel="noreferrer noopener" target="_blank"> silver bullet</a> when it comes to moderation of content on the web. However, in light of these parallel convergence of ideas, the appeal of an independent regulatory system as a sane compromise between complete government control and <em>laissez-faire</em>autonomy, is worth considering.</p>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/blog/cyber-brics-august-12-2019-torsha-sarkar-rethinking-the-intermediary-liability-regime-in-india'>https://cis-india.org/internet-governance/blog/cyber-brics-august-12-2019-torsha-sarkar-rethinking-the-intermediary-liability-regime-in-india</a>
</p>
No publishertorshaInternet GovernanceIntermediary LiabilityArtificial Intelligence2019-08-16T01:49:47ZBlog EntryWebinar on counter-comments to the draft Intermediary Guidelines
https://cis-india.org/internet-governance/news/webinar-on-counter-comments-to-the-draft-intermediary-guidelines
<b>CCAOI and the ISOC Delhi Chapter organised a webinar on February 11 to discuss the comments submitted to the Information Technology [Intermediary Guidelines (Amendment) Rules] 2018, and counter-comments that were due by February 14. </b>
<p>The agenda of the discussion was:</p>
<ul>
<li>A brief introduction to the counter comment process [Shashank Mishra]</li>
<li>Invited stakeholders comment on key issues and perspectives on the submissions and the points to be countered.</li>
</ul>
<p>The following people participated:</p>
<ul>
<li>Amba Kak, Mozilla</li>
<li>Rajesh Chharia, ISPAI</li>
<li>Gurshabad Grover, CIS</li>
<li>Priyanka Chaudhari, SFLC</li>
<li>Divij Joshi, Vidhi Centre for Legal Policy</li>
</ul>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/news/webinar-on-counter-comments-to-the-draft-intermediary-guidelines'>https://cis-india.org/internet-governance/news/webinar-on-counter-comments-to-the-draft-intermediary-guidelines</a>
</p>
No publisherAdminInternet GovernanceIntermediary LiabilityInformation Technology2019-02-22T01:51:19ZNews Item