The Centre for Internet and Society
https://cis-india.org
These are the search results for the query, showing results 1 to 6.
Transference: Reimagining Data Systems: Beyond the Gender Binary
https://cis-india.org/internet-governance/events/transference-reimagining-data-systems-beyond-the-gender-binary
<b>The Centre for Internet and Society (CIS) invites you to participate in a day-long convening on the rights of transgender persons, specifically the right to privacy and digital rights. Through this convening, we hope to highlight the concerns of transgender persons in accessing digital data systems and the privacy challenges faced by the community. These challenges include barriers to exercising their rights, such as the right to self-identify their gender and to access welfare services offered by the State, as well as the privacy challenges transgender and intersex persons face in revealing their identity.</b>
<p dir="ltr" style="text-align: justify; ">True to the meaning of the word ‘Transference’, through this convening we hope to capture and transfer the realities of transgender persons in engaging with, and being a part of, digital data systems in India. Given the rapid digitisation of different public and private data systems in India, we hope to initiate a conversation that understands their struggles and challenges, so as to realistically begin re-imagining data systems, digital and otherwise, in a way that is mindful of their everyday struggles with privacy and access.</p>
<p dir="ltr" style="text-align: justify; ">Owing to the history of systemic exclusions faced by transgender persons, it is important to highlight their difficulties in accessing technological systems, and the impact on their privacy, as central issues that require serious consideration. At present, the State appears to ignore their realities while designing most technology laws and policies governing digital systems.</p>
<h3 dir="ltr" style="text-align: justify; ">Background</h3>
<p dir="ltr" style="text-align: justify; "><span>In its landmark 2014 verdict in NALSA v. Union of India, the Supreme Court of India for the first time recognised the right of an individual to self-identify their gender as male, female or transgender. The verdict detailed nine directives to be implemented by the central and state governments in India for the inclusion of transgender persons.</span></p>
<p dir="ltr" style="text-align: justify; "><span>Similarly, 2017 was a watershed moment in India’s constitutional history when the Supreme Court held the right to privacy to be a fundamental right. More importantly, the Court expounded on this right and held that the protection of an individual’s gender identity is an essential component of the right to privacy and that privacy at its core includes the preservation of personal intimacies, autonomy, the sanctity of family life, marriage, procreation, the home and sexual orientation.</span></p>
<p dir="ltr" style="text-align: justify; "><span>The 2017 privacy judgement paved the way for the Supreme Court’s pronouncement in </span><span>Navtej Singh Johar v Union of India in 2018</span><span>, striking down the </span><span>Koushal </span><span>judgement and decriminalising consensual non-heterosexual acts of intimacy. In 2019, the Personal Data Protection Bill, 2019 was introduced in Parliament for the regulation and protection of personal data. The PDP Bill classifies data into two categories: (i) personal data; and (ii) sensitive personal data. Under the PDP Bill, data identifying transgender status and intersex status falls within the ambit of sensitive personal data. Around the time the PDP Bill was tabled in Parliament, the Transgender Persons (Protection of Rights) Act, 2019 was passed by Parliament despite </span><a href="https://scroll.in/article/944943/explainer-despite-criticism-the-transgender-persons-bill-was-just-passed-whats-next"><span>severe opposition</span></a><span> to the Bill from civil society members as well as members of Parliament.</span></p>
<p dir="ltr" style="text-align: justify; "><span>There is a lack of clarity on the interplay between the PDP Bill and the Transgender Act, and on the challenges the PDP Bill may pose to the transgender community. Moving beyond mere mentions in legal definitions framed through a cisgendered, heteronormative lens, it is important for the discourse on data and privacy to broaden its scope to realistically include people of different sexual orientations, gender and sexual identities, gender expressions and sex characteristics.</span></p>
<h3><span>About the Event</span></h3>
<p dir="ltr" style="text-align: justify; ">Through these panel discussions, we propose to highlight the concerns of transgender persons in accessing digital data systems and the privacy challenges they face. These challenges include barriers to exercising their rights, such as the right to self-identify their gender and to access welfare services offered by the State, as well as the privacy challenges transgender persons face in revealing their identity.</p>
<p dir="ltr" style="text-align: justify; ">The objective of these discussions is to initiate more conversations about the technological and data exclusions faced by this historically marginalised community in India. The intent is to better understand the realities of transgender persons and contribute to the larger advocacy on privacy, intersectionality and (digital) systems design.</p>
<hr />
<p>Click to register for the event <a class="external-link" href="https://us06web.zoom.us/meeting/register/tZUpcOiqrD8uG9X_4L6EIzXI-QFCipmFqqDV"><b>here</b></a></p>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/events/transference-reimagining-data-systems-beyond-the-gender-binary'>https://cis-india.org/internet-governance/events/transference-reimagining-data-systems-beyond-the-gender-binary</a>
</p>
Event · Internet Governance · Gender, Welfare, and Privacy · 2021-12-15

Media Market Risk Ratings
https://cis-india.org/internet-governance/media-market-risk-ratings.pdf
<p>
For more details visit <a href='https://cis-india.org/internet-governance/media-market-risk-ratings.pdf'>https://cis-india.org/internet-governance/media-market-risk-ratings.pdf</a>
</p>
File · 2021-07-12

In Twitter India’s Arbitrary Suspensions, a Question of What Constitutes a Public Space
https://cis-india.org/internet-governance/blog/the-wire-torsha-sarkar-december-7-2019-twitter-arbitrary-suspension-public-space
<b>A discussion is underway about the way social media platforms may have to operate within the tenets of constitutional protections of free speech.</b>
<p style="text-align: justify; ">The article by Torsha Sarkar was <a class="external-link" href="https://thewire.in/tech/twitter-arbitrary-suspension-public-space">published in the Wire</a> on December 7, 2019.</p>
<hr />
<p style="text-align: justify; ">On October 26, 2019, Twitter suspended the account of senior advocate Sanjay Hegde. The reason? He had previously put up the famous photo of August Landmesser refusing to perform the Nazi salute amid a crowd at the Blohm+Voss shipyard.</p>
<p style="text-align: justify; ">According to the social media platform, the image violated Twitter’s ‘hateful imagery’ guidelines, despite the photo having been around for decades and being widely recognised as a sign of resistance against blind authoritarianism.</p>
<p style="text-align: justify; "><img src="https://cis-india.org/home-images/AugustLandmesser.png/@@images/bf841f6d-fd25-4bd8-b421-8e55d81c021b.png" alt="August Landmesser" class="image-inline" title="August Landmesser" /></p>
<p style="text-align: justify; "><i>August Landmesser. Photo: Public Domain</i></p>
<p style="text-align: justify; ">Twitter briefly revoked the suspension on October 27, but promptly suspended Hegde’s account again. This time, the action was prompted by Hegde quote-tweeting parts of a poem by Gorakh Pandey, titled ‘Hang him’, which was written in protest of the first death penalties given to two peasant revolutionaries in an independent India. This time, Hegde was informed that his account would not be restored.</p>
<p style="text-align: justify; ">Spurred by what he believed was Twitter’s arbitrary exercise of power, he proceeded to serve a legal notice on Twitter, and <a href="https://www.livelaw.in/news-updates/sr-adv-sanjay-hegde-serves-legal-notice-on-twitter-for-restoration-of-account-149579">asked</a> the Ministry of Electronics and Information Technology (MeitY) to intervene in the matter. It is the subject matter of this request that is of interest here.</p>
<p style="text-align: justify; ">In his complaint, Hegde first outlines how the content shared by him did not violate any of Twitter’s community guidelines. He then goes on to highlight how his fundamental right to disseminate and receive information under Article 19(1)(a) was obstructed by Twitter’s action. Here, he places reliance on several key decisions of the Indian and US Supreme Courts on media freedom, which lend weight to his argument that a citizen’s right to free speech is meaningless if control is concentrated in the hands of a few private parties.</p>
<h3 style="text-align: justify; ">Vertical or horizontal?</h3>
<p style="text-align: justify; ">One of the first things we learn about fundamental rights is that they are enforceable against the government, and that they allow the individual a remedy against the excesses of the all-powerful state. This understanding of fundamental rights is usually called the ‘vertical’ approach – where the state, or an allied public authority, is at the top and the individual, a non-public entity, is at the bottom.</p>
<p style="text-align: justify; ">However, there is another, albeit underdeveloped, thread of constitutional jurisprudence that argues that in certain circumstances these rights can be claimed against another private entity. This is called the ‘horizontal’ application of fundamental rights.</p>
<p style="text-align: justify; ">On that note, Hegde’s contention essentially becomes this: claiming an enforceable remedy against a private entity for supposedly violating his fundamental rights. This is clearly a request for the Centre to consider a horizontal application of Article 19(1)(a) against large social media companies.</p>
<h3 style="text-align: justify; ">What could this mean?</h3>
<p style="text-align: justify; ">Lawyer Gautam Bhatia has <a href="https://indconlawphil.wordpress.com/2015/05/24/horizontality-under-the-indian-constitution-a-schema/">argued</a> that there are several ways in which a fundamental right can be enforced against another private entity. It must be noted that he derives this classification on the touchstone of existing judicial decisions, which is different from seeking an executive intervention. Nevertheless, it is interesting to consider the logic of his arguments as a thought exercise. Bhatia points out that one of the ways in which fundamental rights can be applied to a private entity is by assimilating the concerned entity as a ‘state’ as per Article 12.</p>
<p style="text-align: justify; ">There is a considerable amount of jurisprudence on the nature of the test to determine whether the assailed entity is state. In 2002, the Supreme Court <a href="https://indiankanoon.org/doc/471272/">held</a> that for an entity to be deemed state, it must be ‘functionally, financially and administratively dominated by or under the control of the Government’. If we go by this test, then a social media platform would most probably not come within the ambit of Article 12.</p>
<p style="text-align: justify; ">However, there is a thread of recent developments that might be interesting to consider. Earlier this year, a federal court of appeals in the US <a href="https://int.nyt.com/data/documenthelper/1365-trump-twitter-second-circuit-r/c0f4e0701b087dab9b43/optimized/full.pdf#page=1">ruled</a> that the First Amendment prohibits President Donald Trump, who used his Twitter account for government purposes, from blocking his critics. The court further held that when a public official uses their account for official purposes, the account ceases to be a mere private account. This judgment has a sharp bearing on the current discussion, and on the way social media platforms may have to operate within the tenets of constitutional protections of free speech.</p>
<p style="text-align: justify; ">Although the opinion of the federal court clearly noted that it did not concern itself with the application of First Amendment rights to the social media platforms themselves, one cannot help but wonder – if the court rules that certain spaces in a social media account are ‘public’ by default, and that politicians cannot exclude critics from those spaces, then <a href="https://www.forbes.com/sites/kalevleetaru/2017/08/01/is-social-media-really-a-public-space/#2ca9795b2b80">can</a> the company itself block or impede certain messages? If the company does so, can an enforceable remedy then be claimed against it?</p>
<p style="text-align: justify; "><img src="https://cis-india.org/home-images/Trump.png/@@images/9bd98eba-124f-4be0-b60c-13482b76ae80.png" alt="Trump" class="image-inline" title="Trump" /></p>
<p style="text-align: justify; "><span style="text-align: center; "><i>A US court ruled that Donald Trump cannot block people on his Twitter account. Photo: Reuters</i></span></p>
<h3 style="text-align: justify; ">What can be done?</h3>
<p style="text-align: justify; ">Of course, there is no straight answer to this question. On the one hand, social media platforms, owing to their enormous concentration of power and opaque moderation policies, have to a large extent become gatekeepers of online speech. If such power is left unchecked, then, as Hegde’s request demonstrates, a citizen’s free speech rights are meaningless.</p>
<p class="_yeti_done" style="text-align: justify; ">On the other hand, if we definitively agree that in certain circumstances citizens should be allowed to claim remedies against these companies’ arbitrary exercise of power, then are we setting ourselves up for a slippery slope? Would we make exceptions to the nature of spaces on social media based on who is using them? If we do, then to what extent would we limit the company’s power to regulate speech in such spaces? How would such limitation work in consonance with the company’s need to protect public officials from targeted harassment?</p>
<p style="text-align: justify; ">At this juncture, given the novelty of the situation, our decisions should also be measured. One way of addressing this obvious paradigm shift is by considering the idea of oversight structures more seriously.</p>
<p style="text-align: justify; ">I have previously <a href="https://cyberbrics.info/rethinking-the-intermediary-liability-regime-in-india/">written</a> about the possibility of having an independent regulator as a compromise between overly stern government regulation and allowing social media companies free rein over what goes on their platforms. In light of recent events, this might be a useful alternative to consider.</p>
<p style="text-align: justify; ">Hegde had also asked the MeitY to issue guidelines to ensure that any censorship of speech in these social media platforms is to be done in accordance with the principles of Article 19.</p>
<p style="text-align: justify; ">If we presume that certain social media platforms are large and powerful enough to be treated akin to public spaces, then having an oversight authority to arbitrate and ensure the enforcement of constitutional principles for future disputes may just be the first step towards more evidence-based policymaking.</p>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/blog/the-wire-torsha-sarkar-december-7-2019-twitter-arbitrary-suspension-public-space'>https://cis-india.org/internet-governance/blog/the-wire-torsha-sarkar-december-7-2019-twitter-arbitrary-suspension-public-space</a>
</p>
Blog Entry · Internet Governance · Freedom of Speech and Expression · 2019-12-12

A Deep Dive into Content Takedown Frames
https://cis-india.org/internet-governance/files/a-deep-dive-into-content-takedown-frames
<p>
For more details visit <a href='https://cis-india.org/internet-governance/files/a-deep-dive-into-content-takedown-frames'>https://cis-india.org/internet-governance/files/a-deep-dive-into-content-takedown-frames</a>
</p>
File · 2019-12-03

A Deep Dive into Content Takedown Timeframes
https://cis-india.org/internet-governance/blog/torsha-sarkar-november-30-2019-a-deep-dive-into-content-takedown-timeframes
<b>Since the 1990s, internet usage has seen massive growth, facilitated in part by the growing importance of intermediaries that act as gateways to the internet. Intermediaries such as Internet Service Providers (ISPs), web-hosting providers, social media platforms and search engines provide key services which propel social, economic and political development. However, these developments are also offset by instances of users engaging with platforms in an unlawful manner. The scale and openness of the internet make regulating such behaviour challenging, and in turn pose several interrelated policy questions.</b>
<p style="text-align: justify;">In this report, we consider one such question by examining the appropriate time frame for an intermediary to respond to a government content removal request. The way legislation around the world chooses to frame this answer has wider ramifications for free speech and for the ease with which intermediaries can carry out their operations. Through the course of our research, we found, for instance, that:</p>
<ol>
<li style="text-align: justify;">A one-size-fits-all model for illegal content may not be productive. The issue of regulating liability online contains several nuances, which must be considered for more holistic law-making. If regulation is made with only the tech incumbents in mind, its ramifications would become incredibly burdensome for smaller companies in the market. </li>
<li style="text-align: justify;">Determining an appropriate turnaround time for an intermediary must also consider the nature and impact of the content in question. For instance, the Impact Assessment on the Proposal for a Regulation of the European Parliament and of the Council on preventing the dissemination of terrorist content online cites research showing that one-third of all links to Daesh propaganda were disseminated within the first hour of their appearance, and three-fourths of these links were shared within four hours of their release. This was the basic rationale for the proposed EU Terrorism Regulation, which envisaged a one-hour timeframe for intermediaries to remove terrorist content.</li>
<li style="text-align: justify;">Understanding the impact of specific turnaround times on intermediaries requires the law to introduce in-built transparency reporting mechanisms. Such an exercise, performed periodically, generates useful feedback, which can, in turn, be used to improve the system.</li></ol>
<div style="text-align: justify;"> </div>
<div style="text-align: justify;"><strong>Corrigendum: </strong>Please note that in the section concerning 'Regulation on Preventing the Dissemination of Terrorist Content Online', the report mentions that the Regulation has been 'passed in 2019'. At the time of writing the report, the Regulation had only been passed in the European Parliament, and as of May 2020, is currently in the process of a trilogue. </div>
<div style="text-align: justify;"> </div>
<div style="text-align: justify;"><strong>Disclosure</strong>: CIS is a recipient of research grants from Facebook India. </div>
<div style="text-align: justify;"> </div>
<hr />
<p style="text-align: justify;"><a class="external-link" href="http://cis-india.org/internet-governance/files/a-deep-dive-into-content-takedown-frames">Click to download the research paper</a> by Torsha Sarkar (with research assistance from Keying Geng and Merrin Muhammed Ashraf; edited by Elonnai Hickok, Akriti Bopanna, and Gurshabad Grover; inputs from Tanaya Rajwade)</p>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/blog/torsha-sarkar-november-30-2019-a-deep-dive-into-content-takedown-timeframes'>https://cis-india.org/internet-governance/blog/torsha-sarkar-november-30-2019-a-deep-dive-into-content-takedown-timeframes</a>
</p>
Blog Entry · Internet Governance · Intermediary Liability · Freedom of Speech and Expression · 2020-06-26

Rethinking the intermediary liability regime in India
https://cis-india.org/internet-governance/blog/cyber-brics-august-12-2019-torsha-sarkar-rethinking-the-intermediary-liability-regime-in-india
<b>The article consolidates some of our broad thematic concerns with the draft amendments to the intermediary liability rules, published by MeitY last December.
</b>
<p>The blog post by Torsha Sarkar was <a class="external-link" href="https://cyberbrics.info/rethinking-the-intermediary-liability-regime-in-india/">published by CyberBRICS</a> on August 12, 2019.</p>
<hr />
<h3 style="text-align: justify; ">Introduction</h3>
<p style="text-align: justify; ">In December 2018, the Ministry of Electronics and Information Technology (“MeitY”) released the Intermediary Liability Guidelines (Amendment) Rules (“the Guidelines”), which would significantly alter the intermediary liability regime in the country. While the Guidelines have drawn a considerable amount of attention and criticism, from the perspective of the government, the change has been overdue.</p>
<p style="text-align: justify; ">The Indian government has been determined to overhaul the pre-existing safe harbour regime since last year. The draft <a href="https://www.medianama.com/wp-content/uploads/Draft-National-E-commerce-Policy.pdf">version</a> of the e-commerce policy, which was leaked last year, also hinted at similar plans. As the effects of mass dissemination of disinformation, propaganda and hate speech around the world spill over into offline harms, governments have been increasingly looking to enact interventionist laws that place more responsibility on intermediaries. India has not been an exception.</p>
<p style="text-align: justify; ">A major source of such harmful and illegal content in India is the popular communications app WhatsApp, despite the company’s enactment of several anti-spam measures over the past few years. Last year, rumours circulating on WhatsApp prompted a series of lynchings. In May, Reuters <a href="https://in.reuters.com/article/india-election-socialmedia-whatsapp/in-india-election-a-14-software-tool-helps-overcome-whatsapp-controls-idINKCN1SL0PZ" rel="noreferrer noopener" target="_blank">reported</a> that clones and software tools were available in the market at minimal cost, allowing politicians and other interested parties to bypass these measures and continue the trend of bulk messaging.</p>
<p style="text-align: justify; ">This series of incidents has made it clear that disinformation is a very real problem, and that the current regulatory framework is not enough to address it. The government’s response has accordingly been to introduce the Guidelines. This rationale also finds a place in its preliminary <a href="https://www.meity.gov.in/comments-invited-draft-intermediary-rules" rel="noreferrer noopener" target="_blank">statement of reasons</a>.</p>
<p style="text-align: justify; ">While enactment of such interventionist laws has triggered fresh rounds of debate on free speech and censorship, it would be wrong to say that such laws were completely one-sided, or uncalled for.</p>
<p style="text-align: justify; ">On one hand, automated amplification and online mass circulation of purposeful disinformation, propaganda, of terrorist attack videos, or of plain graphic content, are all problems that the government would concern itself with. On the other hand, several online companies (including <a href="https://www.blog.google/outreach-initiatives/public-policy/oversight-frameworks-content-sharing-platforms/" rel="noreferrer noopener" target="_blank">Google</a>) also seem to be in an uneasy agreement that simple self-regulation of content would not cut it. For better oversight, more engagement with both government and civil society members is needed.</p>
<p style="text-align: justify; ">In March this year, Mark Zuckerberg wrote an <a href="https://www.washingtonpost.com/opinions/mark-zuckerberg-the-internet-needs-new-rules-lets-start-in-these-four-areas/2019/03/29/9e6f0504-521a-11e9-a3f7-78b7525a8d5f_story.html?utm_term=.4d177c66782f" rel="noreferrer noopener" target="_blank">op-ed</a> for the Washington Post, calling for more government involvement in the process of content regulation on its platform. While it would be interesting to consider how Zuckerberg’s view aligns with those of similarly placed companies, it would nevertheless be correct to say that online intermediaries are under more pressure than ever to keep their platforms clean of content that is ‘illegal, harmful, obscene’. And this list only grows.</p>
<p style="text-align: justify; ">That being said, criticism from several stakeholders is sharp and clear whenever such laws are enacted – be it the ambitious <a href="https://www.ivir.nl/publicaties/download/NetzDG_Tworek_Leerssen_April_2019.pdf" rel="noreferrer noopener" target="_blank">NetzDG</a>, aimed at combating Nazi propaganda, hate speech and fake news, or the controversial new European Copyright Directive, which has been welcomed by journalists but severely critiqued by online content creators and platforms as detrimental to user-generated content.</p>
<p style="text-align: justify; ">In the backdrop of such conflicting interests on online content moderation, it would be useful to examine the Guidelines released by MeitY. In the first portion we would be looking at certain specific concerns existing within the rules, while in the second portion, we would be pushing the narrative further to see what an alternative regulatory framework may look like.</p>
<p style="text-align: justify; ">Before we jump to the crux of this discussion, one important disclosure must be made about the underlying ideology of this piece. It would be unrealistic to claim that the internet should be absolutely free from regulation. Swathes of content on child sexual abuse, or terrorist propaganda, or even the hordes of death and rape threats faced by women online are and should be concerns of a civil society. While that is certainly a strong driving force for regulation, this concern should not override the basic considerations for human rights (including freedom of expression). These ideas would be expanded a bit more in the upcoming sections.</p>
<h3 style="text-align: justify; ">Broad, thematic concerns with the Rules</h3>
<h3 style="text-align: justify; ">A uniform mechanism of compliance</h3>
<h3 style="text-align: justify; ">Timelines</h3>
<p style="text-align: justify; ">Rule 3(8) of the Guidelines mandates intermediaries, prompted by <em>a</em> <em>court order or a government notification</em>, to take down content relating to unlawful acts within 24 hours of such notification. In case they fail to do so, the safe harbour applicable to them under section 79 of the Information Technology Act (“the Act”) would cease to apply, and they would be liable. Prior to the amendment, this timeframe was 36 hours.</p>
<p style="text-align: justify; ">There is a visible lack of research to rationalise that a 24-hour compliance timeline is the optimal framework for <em>all</em> intermediaries, irrespective of the kind of services they provide or the size or resources available to them. As the Mozilla Foundation has <a href="https://blog.mozilla.org/netpolicy/2018/07/11/sustainable-policy-solutions-for-illegal-content/" rel="noreferrer noopener" target="_blank">commented</a>, regulation of illegal content online simply cannot be done in a one-size-fits-all approach, nor can <a href="https://blog.mozilla.org/netpolicy/2019/04/10/uk_online-harms/" rel="noreferrer noopener" target="_blank">regulation be made</a> with only the tech incumbents in mind. While platforms like YouTube can comfortably <a href="https://www.bmjv.de/SharedDocs/Pressemitteilungen/DE/2017/03142017_Monitoring_SozialeNetzwerke.html" rel="noreferrer noopener" target="_blank">remove</a> criminally prohibited content within a span of 24 hours, this can still place a large burden on smaller companies, who may not have the necessary resources to comply within this timeframe. There are a few unintended consequences that would arise out of this situation.</p>
<p style="text-align: justify; ">One, sanctions under the Act, which include both organisational ramifications like website blocking (under section 69A of the Act) and individual liability, would affect smaller intermediaries more than bigger ones. A bigger intermediary like Facebook may be able to withstand a large fine for its failure to control, say, hate speech on its platform. That may not be true for a smaller online marketplace, or even a smaller social media site targeted at a very specific community. This compliance mechanism may accordingly go on to strengthen the larger companies and eliminate competition from the smaller ones.</p>
<p style="text-align: justify; ">Two, intermediaries, in fear of heavy criminal sanctions, would err on the side of caution. This would mean that the decisions involved in determining whether a piece of content is illegal would be quicker and less nuanced. It would also mean that legitimate speech would be at risk of censorship, and that intermediaries would pay <a href="https://cis-india.org/internet-governance/intermediary-liability-in-india.pdf" rel="noreferrer noopener" target="_blank">less heed</a> to the technical requirements or the correct legal procedures required for content takedown.</p>
<h3 style="text-align: justify; ">Utilization of ‘automated technology’</h3>
<p style="text-align: justify; ">Another place where the Guidelines assume that all intermediaries operating in India are on the same footing is Rule 3(9). This mandates these entities to proactively monitor their platforms for ‘unlawful content’. Aside from the unconstitutionality of this provision, it also assumes that all intermediaries have the requisite resources to actually set up such a tool and operate it successfully. YouTube’s ContentID, which began in 2007, had already seen a whopping <a href="https://www.blog.google/outreach-initiatives/public-policy/protecting-what-we-love-about-internet-our-efforts-stop-online-piracy/" rel="noreferrer noopener" target="_blank">100 million dollars of investment by 2018</a>.</p>
<p style="text-align: justify; ">Funnily enough, ContentID is a tool dedicated exclusively to finding copyright violations against rights-holders, and even then, it has proven to be far from <a href="https://www.plagiarismtoday.com/2019/01/10/youtubes-copyright-insanity/" rel="noreferrer noopener" target="_blank">infallible</a>. The Guidelines’ sweeping net of ‘unlawful’ content includes far more categories than mere violations of IP rights, and the framework assumes that intermediaries would be able to set up and run an automated tool that filters through <em>all</em> these categories of ‘unlawful content’ at one go.</p>
<h3 style="text-align: justify; ">The problems of AI</h3>
<p style="text-align: justify; ">Aside from the implementation-related concerns, there are also technical challenges associated with Rule 3(9). Supervised learning systems (like the one envisaged under the Guidelines) use training data sets for proactive filtering. This means that if the system is taught that for ten instances of input A the output is B, then the eleventh time it sees A, it will give the output B. In the lingo of content filtering, the system would be taught, for example, that nudity is bad. The next time the system encounters nudity in a picture, it would automatically flag it as ‘bad’ and violating the community standards.</p>
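<p style="text-align: justify; ">The pattern-matching described above can be made concrete with a minimal sketch. The snippet below is a hypothetical, deliberately naive word-count classifier, not any platform’s actual system: it tallies which words appeared under which moderation label in training data, then labels new posts by those tallies alone.</p>
<pre>
```python
from collections import Counter

def train(examples):
    """Count how often each word appears under each label in the training set."""
    counts = {"ok": Counter(), "flagged": Counter()}
    for text, label in examples:
        counts[label].update(text.lower().split())
    return counts

def classify(counts, text):
    """Label a new post by which label its words were seen under more often."""
    scores = {label: sum(c[w] for w in text.lower().split())
              for label, c in counts.items()}
    return max(scores, key=scores.get)

# Invented training examples, standing in for past moderation decisions.
training = [
    ("historic war photograph", "flagged"),
    ("violent propaganda clip", "flagged"),
    ("holiday beach photograph", "ok"),
    ("cute cat clip", "ok"),
]
model = train(training)

# "war" only ever appeared under "flagged", so any post mentioning war
# is flagged, regardless of context.
print(classify(model, "museum exhibit on war photography"))
```
</pre>
<p style="text-align: justify; ">A system like this reproduces its training labels faithfully but has no notion of context: a museum exhibit about war photography is treated the same as war propaganda, which is precisely the limitation the next paragraph describes.</p>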
<p style="text-align: justify; "><a href="https://www.theguardian.com/technology/2016/sep/08/facebook-mark-zuckerberg-napalm-girl-photo-vietnam-war" rel="noreferrer noopener" target="_blank">Except, that is not how it should work</a>. For every post that comes under the scrutiny of the platform operators, numerous nuances and contextual cues act as mitigating factors, none of which, at this point, would be <a href="https://scholarship.law.nd.edu/cgi/viewcontent.cgi?referer=https://www.google.co.in/&httpsredir=1&article=1704&context=ndlr" rel="noreferrer noopener" target="_blank">understandable</a> to a machine.</p>
<p style="text-align: justify; ">Additionally, the training data used to feed the system <a href="https://www.cmu.edu/dietrich/philosophy/docs/london/IJCAI17-AlgorithmicBias-Distrib.pdf" rel="noreferrer noopener" target="_blank">can be biased</a>. A self-driving car that is fed training data from only one region of the country would learn the customs and driving norms of that particular region, not the patterns that apply to its intended purpose of driving throughout the country.</p>
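<p style="text-align: justify; ">A toy calculation illustrates the sampling problem. The figures below are invented solely for illustration: a model fit only on one region’s typical driving speeds badly misestimates the nationwide norm.</p>

```python
# Hypothetical, illustrative data: average driving speeds (km/h)
# sampled from two very different regions of a country.
def mean(xs):
    return sum(xs) / len(xs)

region_a = [95, 100, 105, 100]   # highway-heavy region
region_b = [40, 45, 38, 42]      # dense urban region

biased_model = mean(region_a)              # 'trained' on one region only
representative = mean(region_a + region_b) # 'trained' on both regions

print(biased_model)     # 100.0
print(representative)   # 70.625
```

<p style="text-align: justify; ">The biased estimate is not wrong for region A; it is simply wrong for everywhere else, which is exactly the risk of unrepresentative training data.</p>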
<p style="text-align: justify; ">Lastly, this is not to suggest that bias would be completely eliminated if the content moderation were undertaken by a human. The difference between a human moderator and an automated one, however, is that there would be a measure of accountability with the former. The decision of a human moderator can be disputed, and the moderator would have a chance to explain their reasons for the removal. Artificial intelligence (“AI”), by contrast, is characterised by the algorithmic ‘<a href="http://raley.english.ucsb.edu/wp-content/Engl800/Pasquale-blackbox.pdf" rel="noreferrer noopener" target="_blank">black box</a>’ that processes inputs and generates usable outputs. Implementing workable accountability standards for such a system, including figuring out appeal and grievance redressal mechanisms in cases of dispute, are all problems that the regulator must concern itself with.</p>
<p style="text-align: justify; ">In the absence of any clarity or revision, it seems unlikely that the provision would actually ever see full implementation. Neither would the intermediaries know what kind of ‘automated technology’ they are supposed to use for filtering ‘unlawful content’, nor would there be any incentives for them to actually deploy this system effectively for their platforms.</p>
<h3 style="text-align: justify; ">What can be done?</h3>
<p style="text-align: justify; ">First, more research is needed to understand the effect of compliance timeframes on the accuracy of content takedown. Several jurisdictions now operate on different compliance timeframes, and the regulation would be far more holistic if the government considered the dialogue around each of them and what it means for India.</p>
<p style="text-align: justify; ">Second, it might be useful to consider the concept of an independent regulator as an alternative to, and a compromise between, pure governmental regulation (which is more or less what the current system is) and self-regulation (which the Guidelines, albeit problematically, also espouse through Rule 3(9)).</p>
<p style="text-align: justify; ">The <a href="https://www.gov.uk/government/consultations/online-harms-white-paper" rel="noreferrer noopener" target="_blank">UK White Paper on Online Harms</a>, an important document in the overhaul of liability systems, proposes an arm’s-length regulator who would be responsible for drafting codes of conduct for online companies and for their enforcement. While the exact merits of the system are still up for debate, the concept of having a separate body to oversee, formulate and possibly even <a href="https://medium.com/adventures-in-consumer-technology/regulating-social-media-a-policy-proposal-a2a25627c210" rel="noreferrer noopener" target="_blank">arbitrate</a> disputes regarding content removal is finding traction in several parallel developments.</p>
<p style="text-align: justify; ">One of the Transatlantic Working Group sessions seems to discuss this idea in terms of having an ‘<a href="https://medium.com/whither-news/proposals-for-reasonable-technology-regulation-and-an-internet-court-58ac99bec420" rel="noreferrer noopener" target="_blank">internet court</a>’ for the regulation of illegal content. This would have the noted advantages of a) formulating norms of online content in a transparent, public fashion, something previously done behind the closed doors of either the government or the tech incumbents, and b) having specially trained professionals who would be able to dispose of matters expeditiously.</p>
<p style="text-align: justify; ">India is not unfamiliar with the idea of specialised tribunals or quasi-judicial bodies for dealing with specific challenges. In 2015, for example, the Government of India passed the Commercial Courts Act, under which specific courts were tasked with dealing with matters of very large value. This is neither an isolated instance of the government choosing to create new bodies to deal with a specific problem, nor is it likely to be the last.</p>
<p style="text-align: justify; ">There is no <a href="https://www.thehindubusinessline.com/opinion/resurrecting-the-marketplace-of-ideas/article26313605.ece" rel="noreferrer noopener" target="_blank">silver bullet</a> when it comes to moderation of content on the web. However, in light of this parallel convergence of ideas, the appeal of an independent regulatory system as a sane compromise between complete government control and <em>laissez-faire</em> autonomy is worth considering.</p>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/blog/cyber-brics-august-12-2019-torsha-sarkar-rethinking-the-intermediary-liability-regime-in-india'>https://cis-india.org/internet-governance/blog/cyber-brics-august-12-2019-torsha-sarkar-rethinking-the-intermediary-liability-regime-in-india</a>
</p>
torsha · Internet Governance · Intermediary Liability · Artificial Intelligence · 2019-08-16T01:49:47Z · Blog Entry