The Centre for Internet and Society
https://cis-india.org
These are the search results for the query, showing results 21 to 35.
Rethinking the intermediary liability regime in India
https://cis-india.org/internet-governance/blog/cyber-brics-august-12-2019-torsha-sarkar-rethinking-the-intermediary-liability-regime-in-india
<b>The article consolidates some of our broad thematic concerns with the draft amendments to the intermediary liability rules, published by MeitY last December.
</b>
<p>The blog post by Torsha Sarkar was <a class="external-link" href="https://cyberbrics.info/rethinking-the-intermediary-liability-regime-in-india/">published by CyberBRICS</a> on August 12, 2019.</p>
<hr />
<h3 style="text-align: justify; ">Introduction</h3>
<p style="text-align: justify; ">In December 2018, the Ministry of Electronics and Information Technology (“MeitY”) released the Intermediary Liability Guidelines (Amendment) Rules (“the Guidelines”), which would significantly alter the intermediary liability regime in the country. While the Guidelines have drawn considerable attention and criticism, from the government’s perspective the change has been overdue.</p>
<p style="text-align: justify; ">The Indian government has been determined to overhaul the pre-existing safe harbour regime since last year. The draft <a href="https://www.medianama.com/wp-content/uploads/Draft-National-E-commerce-Policy.pdf">version</a> of the e-commerce policy, which was leaked last year, also hinted at similar plans. As the effects of the mass dissemination of disinformation, propaganda and hate speech around the world spill over into offline harms, governments have increasingly looked to enact interventionist laws that place more responsibility on intermediaries. India has been no exception.</p>
<p style="text-align: justify; ">A major source of such harmful and illegal content in India is the popular communications app WhatsApp, despite the company’s enactment of several anti-spam measures over the past few years. Last year, rumours circulating on WhatsApp prompted a series of lynchings. In May, Reuters <a href="https://in.reuters.com/article/india-election-socialmedia-whatsapp/in-india-election-a-14-software-tool-helps-overcome-whatsapp-controls-idINKCN1SL0PZ" rel="noreferrer noopener" target="_blank">reported</a> that clones and software tools were available in the market at minimal cost, allowing politicians and other interested parties to bypass these measures and continue the trend of bulk messaging.</p>
<p style="text-align: justify; ">This series of incidents has made it clear that disinformation is a very real problem, and that the current regulatory framework is not enough to address it. The government’s response, accordingly, has been to introduce the Guidelines. This rationale also finds a place in its preliminary <a href="https://www.meity.gov.in/comments-invited-draft-intermediary-rules" rel="noreferrer noopener" target="_blank">statement of reasons</a>.</p>
<p style="text-align: justify; ">While the enactment of such interventionist laws has triggered fresh rounds of debate on free speech and censorship, it would be wrong to say that such laws are completely one-sided, or uncalled for.</p>
<p style="text-align: justify; ">On one hand, automated amplification and online mass circulation of purposeful disinformation, propaganda, terrorist attack videos, or plain graphic content are all problems that the government would concern itself with. On the other hand, several online companies (including <a href="https://www.blog.google/outreach-initiatives/public-policy/oversight-frameworks-content-sharing-platforms/" rel="noreferrer noopener" target="_blank">Google</a>) also seem to be in uneasy agreement that simple self-regulation of content will not cut it: for better oversight, more engagement with both government and civil society is needed.</p>
<p style="text-align: justify; ">In March this year, Mark Zuckerberg wrote an <a href="https://www.washingtonpost.com/opinions/mark-zuckerberg-the-internet-needs-new-rules-lets-start-in-these-four-areas/2019/03/29/9e6f0504-521a-11e9-a3f7-78b7525a8d5f_story.html?utm_term=.4d177c66782f" rel="noreferrer noopener" target="_blank">op-ed</a> for the Washington Post, calling for more government involvement in the process of content regulation on his platform. While it would be interesting to consider how Zuckerberg’s view aligns with those of similarly placed companies, it would nevertheless be correct to say that online intermediaries are under more pressure than ever to keep their platforms clean of content that is ‘illegal, harmful, obscene’. And this list only grows.</p>
<p style="text-align: justify; ">That being said, criticism from several stakeholders has been sharp and clear whenever such laws are enacted – be it the ambitious <a href="https://www.ivir.nl/publicaties/download/NetzDG_Tworek_Leerssen_April_2019.pdf" rel="noreferrer noopener" target="_blank">NetzDG</a>, aimed at combating Nazi propaganda, hate speech and fake news, or the controversial new European Copyright Directive, which has been welcomed by journalists but severely critiqued by online content creators and platforms as detrimental to user-generated content.</p>
<p style="text-align: justify; ">Against the backdrop of such conflicting interests in online content moderation, it is useful to examine the Guidelines released by MeitY. The first portion of this piece looks at certain specific concerns within the rules, while the second pushes the narrative further to see what an alternative regulatory framework may look like.</p>
<p style="text-align: justify; ">Before we jump to the crux of this discussion, one important disclosure must be made about the underlying ideology of this piece. It would be unrealistic to claim that the internet should be absolutely free from regulation. Swathes of content involving child sexual abuse, terrorist propaganda, or the hordes of death and rape threats faced by women online are, and should be, concerns of civil society. While that is certainly a strong driving force for regulation, this concern should not override basic considerations of human rights (including freedom of expression). These ideas are expanded in the upcoming sections.</p>
<h3 style="text-align: justify; ">Broad, thematic concerns with the Rules</h3>
<h3 style="text-align: justify; ">A uniform mechanism of compliance</h3>
<h3 style="text-align: justify; ">Timelines</h3>
<p style="text-align: justify; ">Rule 3(8) of the Guidelines mandates intermediaries, when prompted by <em>a court order or a government notification</em>, to take down content relating to unlawful acts within 24 hours of such notification. If they fail to do so, the safe harbour applicable to them under section 79 of the Information Technology Act (“the Act”) ceases to apply, and they become liable. Prior to the amendment, this timeframe was 36 hours.</p>
<p style="text-align: justify; ">There is a visible lack of research rationalising a 24-hour compliance timeline as the optimal framework for <em>all</em> intermediaries, irrespective of the kind of services they provide or the sizes and resources available to them. As the Mozilla Foundation has <a href="https://blog.mozilla.org/netpolicy/2018/07/11/sustainable-policy-solutions-for-illegal-content/" rel="noreferrer noopener" target="_blank">commented</a>, regulation of illegal content online simply cannot be done in a one-size-fits-all approach, nor can <a href="https://blog.mozilla.org/netpolicy/2019/04/10/uk_online-harms/" rel="noreferrer noopener" target="_blank">regulation be made</a> with only the tech incumbents in mind. While platforms like YouTube can comfortably <a href="https://www.bmjv.de/SharedDocs/Pressemitteilungen/DE/2017/03142017_Monitoring_SozialeNetzwerke.html" rel="noreferrer noopener" target="_blank">remove</a> criminally prohibited content within a span of 24 hours, this can still place a large burden on smaller companies, which may not have the necessary resources to comply within this timeframe. A few unintended consequences would arise out of this situation.</p>
<p style="text-align: justify; ">One, sanctions under the Act, which include both organisational ramifications like website blocking (under section 69A of the Act) and individual liability, would affect smaller intermediaries more than bigger ones. A bigger intermediary like Facebook may be able to withstand a large fine for its failure to control, say, hate speech on its platform. That may not be true for a smaller online marketplace, or even a smaller social media site targeted at a very specific community. This compliance mechanism, accordingly, may simply strengthen the larger companies and eliminate competition from the smaller ones.</p>
<p style="text-align: justify; ">Two, intermediaries, in fear of heavy criminal sanctions, would err on the side of caution. This means that decisions about whether a piece of content is illegal would be hastier and less nuanced. Legitimate speech would also be at risk of censorship, and intermediaries would pay <a href="https://cis-india.org/internet-governance/intermediary-liability-in-india.pdf" rel="noreferrer noopener" target="_blank">less heed</a> to the technical requirements or the correct legal procedures for content takedown.</p>
<h3 style="text-align: justify; ">Utilization of ‘automated technology’</h3>
<p style="text-align: justify; ">Another place where the Guidelines assume that all intermediaries operating in India are on the same footing is Rule 3(9), which mandates these entities to proactively monitor for ‘unlawful content’ on their platforms. Aside from the unconstitutionality of this provision, it also assumes that all intermediaries have the requisite resources to actually set up such a tool and operate it successfully. YouTube’s ContentID, which began in 2007, had already seen a whopping <a href="https://www.blog.google/outreach-initiatives/public-policy/protecting-what-we-love-about-internet-our-efforts-stop-online-piracy/" rel="noreferrer noopener" target="_blank">100 million dollars of investment by 2018</a>.</p>
<p style="text-align: justify; ">Funnily enough, ContentID is a tool exclusively dedicated to finding copyright violations of rights-holders, and even then, it has proven to be far from <a href="https://www.plagiarismtoday.com/2019/01/10/youtubes-copyright-insanity/" rel="noreferrer noopener" target="_blank">infallible</a>. The Guidelines’ sweeping net of ‘unlawful’ content includes far more categories than mere violations of IP rights, and the framework assumes that intermediaries would be able to set up and run an automated tool that filters through <em>all</em> these categories of ‘unlawful content’ at one go.</p>
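The limits of such matching tools can be sketched with a deliberately naive filter. This is illustrative only: ContentID actually uses robust perceptual fingerprinting rather than cryptographic hashes, but even perceptual systems are evaded or over-triggered in analogous ways.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    # Exact-match fingerprint: any change to the bytes changes the hash entirely.
    return hashlib.sha256(data).hexdigest()

# Hypothetical reference clip supplied by a rights-holder.
reference = b"copyrighted clip"
blocklist = {fingerprint(reference)}

exact_copy = b"copyrighted clip"
modified_copy = b"copyrighted clip (re-encoded)"  # trivially altered upload

print(fingerprint(exact_copy) in blocklist)     # True: identical upload is caught
print(fingerprint(modified_copy) in blocklist)  # False: a trivial change evades the filter
```

The gap between the two results is the core difficulty: a filter strict enough to catch exact copies misses modified ones, while a filter loose enough to catch modifications starts flagging lawful content.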
<h3 style="text-align: justify; ">The problems of AI</h3>
<p style="text-align: justify; ">Aside from the implementation-related concerns, there are also technical challenges associated with Rule 3(9). Supervised learning systems (like the one envisaged under the Guidelines) use training data sets for proactive filtering. This means that if the system is taught that for ten instances of input A the output is B, then on the eleventh time it sees A, it will output B. In the lingo of content filtering, the system would be taught, for example, that nudity is bad. The next time the system encounters nudity in a picture, it would automatically flag it as ‘bad’ and violating community standards.</p>
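The "ten instances of A, label B" pattern described above can be made concrete with a toy classifier. This is a sketch, not how production moderation systems are built (they use statistical models rather than lookup tables), but it shows the mechanical input-to-label mapping the article describes:

```python
from collections import Counter, defaultdict

class LookupClassifier:
    """Toy supervised learner: predict the majority label seen for an input."""

    def __init__(self):
        self._seen = defaultdict(Counter)

    def fit(self, inputs, labels):
        # "Training" is just counting which label each input received.
        for x, y in zip(inputs, labels):
            self._seen[x][y] += 1
        return self

    def predict(self, x):
        if x not in self._seen:
            return "no-decision"  # never seen: the system has no basis to judge
        return self._seen[x].most_common(1)[0][0]

# Ten instances of input "A" labelled "B"...
clf = LookupClassifier().fit(["A"] * 10, ["B"] * 10)
# ...so the eleventh "A" is mechanically labelled "B", whatever its context.
print(clf.predict("A"))  # B
```

The point of the toy is precisely its flaw: the prediction depends only on past input-label pairs, with no channel for the contextual cues (newsworthiness, satire, historical value) that a human would weigh.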
<p style="text-align: justify; "><a href="https://www.theguardian.com/technology/2016/sep/08/facebook-mark-zuckerberg-napalm-girl-photo-vietnam-war" rel="noreferrer noopener" target="_blank">Except, that is not how it should work</a>. For every post that is under the scrutiny of the platform operators, numerous nuances and contextual cues act as mitigating factors, none of which, at this point, would be <a href="https://scholarship.law.nd.edu/cgi/viewcontent.cgi?referer=https://www.google.co.in/&httpsredir=1&article=1704&context=ndlr" rel="noreferrer noopener" target="_blank">understandable</a> by a machine.</p>
<p style="text-align: justify; ">Additionally, the training data used to feed the system <a href="https://www.cmu.edu/dietrich/philosophy/docs/london/IJCAI17-AlgorithmicBias-Distrib.pdf" rel="noreferrer noopener" target="_blank">can be biased</a>. A self-driving car that is fed training data from only one region of the country would learn the customs and driving norms of that particular region, not the patterns needed to drive throughout the country.</p>
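The self-driving-car analogy can be reduced to a few lines. The data below is hypothetical and the "model" is deliberately trivial, but it shows how a system trained on one region's observations inherits that region's norms wholesale:

```python
from collections import Counter

def train_majority(labels):
    # Toy "model": learn whatever norm dominates the training data.
    return Counter(labels).most_common(1)[0][0]

# Hypothetical driving-norm observations, sampled from one region only.
region_a = ["keep_left"] * 95 + ["keep_right"] * 5
region_b = ["keep_right"] * 95 + ["keep_left"] * 5

model = train_majority(region_a)  # trained exclusively on region A
accuracy_elsewhere = sum(y == model for y in region_b) / len(region_b)
print(model, accuracy_elsewhere)  # keep_left 0.05: the learned norm fails outside its region
```

Nothing in the training step is "wrong"; the failure comes entirely from the unrepresentative sample, which is exactly the bias problem the cited paper describes.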
<p style="text-align: justify; ">Lastly, no one suggests that bias would be completely eliminated if content moderation were undertaken by a human. The difference between a human moderator and an automated one, however, is that there is a measure of accountability in the former. The decision of a human moderator can be disputed, and the moderator has a chance to explain their reasons for the removal. Artificial intelligence (“AI”) is characterised by the algorithmic ‘<a href="http://raley.english.ucsb.edu/wp-content/Engl800/Pasquale-blackbox.pdf" rel="noreferrer noopener" target="_blank">black box</a>’ that processes inputs and generates usable outputs. Implementing workable accountability standards for such a system, including figuring out appeal and grievance redressal mechanisms in cases of dispute, are all problems that the regulator must concern itself with.</p>
<p style="text-align: justify; ">In the absence of any clarity or revision, it seems unlikely that the provision will ever see full implementation. Intermediaries would neither know what kind of ‘automated technology’ they are supposed to use for filtering ‘unlawful content’, nor have any incentive to actually deploy such a system effectively on their platforms.</p>
<h3 style="text-align: justify; ">What can be done?</h3>
<p style="text-align: justify; ">First, more research is needed to understand the effect of compliance timeframes on the accuracy of content takedowns. Several jurisdictions now operate on different compliance timeframes, and regulation would be far more holistic if the government considered the dialogue around each of them and what it means for India.</p>
<p style="text-align: justify; ">Second, it might be useful to consider an independent regulator as an alternative and a compromise between pure governmental regulation (which is more or less what the current system is) and self-regulation (which the Guidelines, albeit problematically, also espouse through Rule 3(9)).</p>
<p style="text-align: justify; ">The <a href="https://www.gov.uk/government/consultations/online-harms-white-paper" rel="noreferrer noopener" target="_blank">UK White Paper on Harms</a>, an important document in the movement to overhaul liability systems, proposes an arm’s-length regulator who would be responsible for drafting codes of conduct for online companies and for their enforcement. While the exact merits of the system are still up for debate, the concept of having a separate body to oversee, formulate and possibly <a href="https://medium.com/adventures-in-consumer-technology/regulating-social-media-a-policy-proposal-a2a25627c210" rel="noreferrer noopener" target="_blank">arbitrate</a> disputes regarding content removal is finding traction in several parallel developments.</p>
<p style="text-align: justify; ">One of the Transatlantic Working Group sessions seems to have discussed this idea in terms of an ‘<a href="https://medium.com/whither-news/proposals-for-reasonable-technology-regulation-and-an-internet-court-58ac99bec420" rel="noreferrer noopener" target="_blank">internet court</a>’ for illegal content regulation. This would have the noted advantages of a) formulating norms of online content in a transparent, public fashion, something previously done behind the closed doors of either the government or the tech incumbents, and b) having specially trained professionals who would be able to dispose of matters expeditiously.</p>
<p style="text-align: justify; ">India is not unfamiliar with the idea of specialised tribunals or quasi-judicial bodies for dealing with specific challenges. In 2015, for example, the Government of India passed the Commercial Courts Act, under which specific courts were tasked with commercial matters of high value. This is neither an isolated instance of the government creating new bodies to deal with a specific problem, nor is it likely to be the last.</p>
<p style="text-align: justify; ">There is no <a href="https://www.thehindubusinessline.com/opinion/resurrecting-the-marketplace-of-ideas/article26313605.ece" rel="noreferrer noopener" target="_blank">silver bullet</a> when it comes to moderating content on the web. However, in light of this parallel convergence of ideas, the appeal of an independent regulatory system as a sane compromise between complete government control and <em>laissez-faire</em> autonomy is worth considering.</p>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/blog/cyber-brics-august-12-2019-torsha-sarkar-rethinking-the-intermediary-liability-regime-in-india'>https://cis-india.org/internet-governance/blog/cyber-brics-august-12-2019-torsha-sarkar-rethinking-the-intermediary-liability-regime-in-india</a>
</p>
No publisher · torsha · Internet Governance · Intermediary Liability · Artificial Intelligence · 2019-08-16T01:49:47Z · Blog Entry

Responsible AI Workshop
https://cis-india.org/internet-governance/news/responsible-ai-workshop
<b>Sunil Abraham participated in this meeting organized by Facebook on September 17, 2019 in New Delhi. </b>
<p><a class="external-link" href="http://cis-india.org/internet-governance/files/responsible-ai">Click to view the agenda</a></p>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/news/responsible-ai-workshop'>https://cis-india.org/internet-governance/news/responsible-ai-workshop</a>
</p>
No publisher · Admin · Internet Governance · Artificial Intelligence · 2019-09-20T14:50:47Z · News Item

Practicing Feminist Principles
https://cis-india.org/raw/practicing-feminist-principles
<b>AI can serve to challenge social inequality and dismantle structures of power.</b>
<p style="text-align: justify; "><span>Artificial intelligence systems have been heralded as a tool to purge our systems of social biases, opinions, and behaviour, and produce ‘hard objectivity’. On the contrary, however, it has become evident that AI systems can sharpen inequalities and bias by hard-coding them. If left unattended, automated decision-making can be dangerous and dystopian.</span></p>
<p style="text-align: justify; "><strong>However, when appropriated by feminists, AI can serve to challenge social inequality and dismantle structures of power. There are many routes to such appropriation – resisting authoritarian uses through movement-building and creating our own alternative systems that harness the strength of AI towards achieving social change.</strong></p>
<p style="text-align: justify; "><strong>Feminist principles can be a handy framework to understand and transform the impact of AI systems. Key principles include reflexivity, participation, intersectionality, and working towards structural change.</strong> When operationalised, these principles can be used to enhance the capacities of local actors and institutions working towards developmental goals. They can also be used to theoretically ground collective action against the use of AI systems by institutions of power.</p>
<p style="text-align: justify; "><strong>Reflexivity</strong> in the design and implementation of AI would imply a check on the privilege and power, or lack thereof, of the various stakeholders involved in an ecosystem. By being reflexive, designers can take steps to account for power hierarchies in the process of design. A popular example of the impact of power differentials is in national statistics. Collected largely by male surveyors speaking to male heads of households, national statistics can often undervalue or misrepresent women’s labour and health. See Data2x. “<a class="external-link" href="https://www.data4sdgs.org/sites/default/files/2017-09/Gender%20Data%20-%20Data4SDGs%20Toolbox%20Module.pdf">Gender Data: Sources, Gaps, and Measurement Opportunities</a>,” March 2017 and Statistics Division. “Gender, Statistics and Gender Indicators Developing a Regional Core Set of Gender Statistics and Indicators in Asia and the Pacific.” <a class="external-link" href="https://www.unescap.org/sites/default/files/Framework-and-Indicator-set.pdf">United Nations Economic and Social Commission for Asia and the Pacific, 2013</a>. <span>AI systems would need to be reflexive of such gaps and plan steps to mitigate them.</span></p>
<p style="text-align: justify; "><strong>Participation</strong> as a principle focuses on the process. A participatory process would account for the perspectives and lived experiences of various stakeholders, including those most impacted by its deployment. <strong>In the health ecosystem, for instance, this would include policymakers, public and private healthcare providers, frontline workers, and patients. A health information system with a bottom-up design would account for metrics of success determined by not just high-level organisations such as the World Health Organisation and national governments, but also by providers and frontline workers</strong>. Among other benefits, participation in designing AI systems also leads to buy-in and ownership of the technology right at the outset, promoting widespread adoption.</p>
<p style="text-align: justify; "><strong>Intersectionality</strong> calls for addressing the social difference in the datasets, design, and deployment of AI. <strong>Research across fields has shown the perpetuation of inequality based on gender, income, race, and other characteristics through AI that is based on biased datasets.</strong></p>
<p style="text-align: justify; ">The most critical principle is to ensure that AI systems are working to challenge inequality, including inequality perpetrated by patriarchal, racist, and capitalist systems. Aligning with feminist objectives means that systems whose objectives do not align with feminist goals – such as those that enhance state capacities to surveil and police – would immediately be excluded. Systems that are designed to exclude and oppress will not work to further feminist goals, even if they integrate other progressive elements such as intersectional datasets or dynamic consent architecture (which would allow users to opt in and out easily).</p>
<p style="text-align: justify; ">We must work towards decreasing social inequality and achieving egalitarian outcomes in and through the practice of AI. Thus, while explicitly feminist projects such as those that produce better datasets or advocate for participatory mechanisms are of course practicing this principle, I would argue that it is also practiced by any project that furthers feminist goals. Take for example AI projects that aim to reduce hate speech and misinformation online. Given that women and other marginalised groups are often at the receiving end of violence, such work can be classified as feminist even if it doesn’t actively target gender-based violence.</p>
<p style="text-align: justify; ">All technology is embedded in social relations. Practicing feminist principles in the design of AI only serves to account for these social relations and design better, more robust systems. <strong>Feminist practitioners can mobilise these to ensure a future of AI with inclusive, community-owned, participatory systems, combined with collective challenges to systems of domination.</strong></p>
<hr />
<p>Link to the original article <a class="external-link" href="https://feministai.pubpub.org/pub/practicing-feminist-principles/release/1?readingCollection=c218d365">here</a></p>
<p>
For more details visit <a href='https://cis-india.org/raw/practicing-feminist-principles'>https://cis-india.org/raw/practicing-feminist-principles</a>
</p>
No publisher · ambika · Gender, Welfare, and Privacy · CISRAW · Researchers at Work · Artificial Intelligence · 2021-12-07T00:54:54Z · Blog Entry

Policy Lab on Artificial Intelligence & Democracy
https://cis-india.org/telecom/news/policy-lab-on-artificial-intelligence-democracy
<b>Shweta Mohandas participated in a policy lab on Artificial Intelligence & Democracy in India organised by Tandem Research, in partnership with Microsoft Research and Friedrich-Ebert-Stiftung on 2 & 3 April, 2019, in Bangalore.
</b>
<p>
For more details visit <a href='https://cis-india.org/telecom/news/policy-lab-on-artificial-intelligence-democracy'>https://cis-india.org/telecom/news/policy-lab-on-artificial-intelligence-democracy</a>
</p>
No publisher · Admin · Internet Governance · Artificial Intelligence · 2019-04-12T01:32:32Z · News Item

Policies for the Platform Economy
https://cis-india.org/internet-governance/news/policies-for-the-platform-economy
<b>Anubha Sinha and Amber Sinha will be panelists in this event being organized by IT for Change at India Habitat Centre in New Delhi on August 30, 2019. </b>
<p>The agenda for the event <a class="external-link" href="http://cis-india.org/internet-governance/files/agenda-for-policies-for-the-platform-economy">is here</a>.</p>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/news/policies-for-the-platform-economy'>https://cis-india.org/internet-governance/news/policies-for-the-platform-economy</a>
</p>
No publisher · Admin · Internet Governance · Artificial Intelligence · 2019-08-27T00:19:26Z · News Item

Participation in the meetings of ISO/IEC JTC 1/SC 27 'IT Security techniques'
https://cis-india.org/internet-governance/news/participation-in-the-meetings-of-iso-iec-jtc-1-sc-27-it-security-techniques
<b>From 30 September 2018 to 4 October 2018, Gurshabad Grover participated in the meetings of the working groups of ISO/IEC JTC 1/SC 27 'IT Security techniques' held in Gjøvik, Norway. The meetings were organized by Standards Norway with support from NTNU, Microsoft, Telenor, et al.</b>
<p>Gurshabad mainly focused on the meetings of Working Group 5, responsible for standards and research in "Identity management and privacy technologies" in SC 27. He attended sessions discussing work related to current ISO/IEC standards and upcoming work in the WG, such as:</p>
<ul>
<li>Establishing a PII deletion concept in organizations</li>
<li>Privacy guidelines for smart cities</li>
<li>Additional privacy-enhancing data de-identification standards</li>
<li>Extension to ISO/IEC 27001 and ISO/IEC 27002 for privacy information management</li>
<li>User-centric framework for PII handling based on user privacy preferences</li>
</ul>
<p>Gurshabad will be a co-rapporteur on a 12-month study period to investigate the 'Impact of Artificial Intelligence on Privacy', which was initiated by the WG at the meeting. Additionally, he was part of the drafting committee that prepared the final resolutions and liaison statements from the meeting.</p>
<p style="text-align: justify; ">Gurshabad also attended the Norwegian Business Forum on cyber security, held on October 4th, which featured talks by professionals and academics working in cyber security across different sectors. The agenda for the business forum can be <a class="external-link" href="http://www.standard.no/en/kurs-og-arrangementer/arrangement-standard-norge-og-nek/arrangement-fra-standard-norge/business-forum---cyber-security/">found here</a>.</p>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/news/participation-in-the-meetings-of-iso-iec-jtc-1-sc-27-it-security-techniques'>https://cis-india.org/internet-governance/news/participation-in-the-meetings-of-iso-iec-jtc-1-sc-27-it-security-techniques</a>
</p>
No publisher · Admin · Internet Governance · Artificial Intelligence · Privacy · 2018-10-31T01:28:29Z · News Item

Panelist at launch of Google-UNESCAP AI Report
https://cis-india.org/internet-governance/news/panelist-at-launch-of-google-unescap-ai-report
<b>Arindrajit Basu was a speaker at the panel launching the Google-UNESCAP AI Report at the GovInsider Forum held at the United Nations Convention Centre in Bangkok on October 16, 2019. </b>
<p>Click to <a class="external-link" href="http://cis-india.org/internet-governance/files/launch-the-ai-report">view the agenda</a>.</p>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/news/panelist-at-launch-of-google-unescap-ai-report'>https://cis-india.org/internet-governance/news/panelist-at-launch-of-google-unescap-ai-report</a>
</p>
No publisher · Admin · Internet Governance · Artificial Intelligence · 2019-11-02T06:48:25Z · News Item

OWASP Seasides Conference
https://cis-india.org/internet-governance/news/owasp-seasides-conference
<b>Karan Saini attended the OWASP Seasides security conference held on February 27 and 28, 2019 at Cavelossim, Goa. The event was organized by OWASP Seasides.</b>
<p>For conference details <a class="external-link" href="https://www.owaspseasides.com/schedule/workshops">click here</a>.</p>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/news/owasp-seasides-conference'>https://cis-india.org/internet-governance/news/owasp-seasides-conference</a>
</p>
No publisher · Admin · Internet Governance · Artificial Intelligence · 2019-03-07T23:53:47Z · News Item

NITI Aayog Discussion Paper: An aspirational step towards India’s AI policy
https://cis-india.org/internet-governance/blog/niti-aayog-discussion-paper-an-aspirational-step-towards-india2019s-ai-policy
<b>The National Strategy for Artificial Intelligence — a discussion paper on India’s path forward in AI — is a welcome step towards a comprehensive document that reflects the government's AI ambitions. The 115-page discussion paper attempts to be an all-encompassing document, looking at a host of AI-related issues including privacy, security, ethics, fairness, transparency and accountability.</b>
<p style="text-align: justify; "><a class="external-link" href="http://cis-india.org/internet-governance/files/niti-aayog-discussion-paper"><strong>Download the Report</strong></a></p>
<hr />
<p style="text-align: justify; "><span>The 115-page discussion paper attempts to be an all-encompassing document, looking at a host of AI-related issues including privacy, security, ethics, fairness, transparency and accountability. The paper identifies five focus areas where AI could have a positive impact in India. It also focuses on reskilling as a response to the potential problem of job losses due to the future large-scale adoption of AI in the job market. This blog is a follow-up to the comments made by CIS on Twitter on the paper, and seeks to reflect on the National Strategy as a well-researched AI roadmap for India. In doing so, it identifies areas that can be strengthened and built upon.</span></p>
<p><strong>Identified Focus Areas for AI Intervention</strong></p>
<p style="text-align: justify; "><span>The paper identifies five focus areas—Healthcare, Agriculture, Education, Smart Cities and Infrastructure, and Smart Mobility and Transportation—which NITI Aayog believes will benefit most from the use of AI in bringing about social welfare for the people of India. Although these sectors are essential to the development of a nation, the failure to include the manufacturing and services sectors is an oversight. Focusing on manufacturing is fundamental not only in terms of economic development and user base, but also regarding questions of safety and the impact of AI on jobs and economic security. The same holds true for the services sector, particularly since AI products are being made for the use of consumers, not just businesses. Use of AI in the services sector also raises critical questions about user privacy and ethics. Another sector the paper fails to include is defense; this is worrying since India is chairing the Group of Governmental Experts on Lethal Autonomous Weapons Systems (LAWS) in 2018. Across sectors, the report fails to look at how AI could be utilised to ensure accessibility and inclusion for the disabled. This is surprising, as aid for the differently abled and accessibility technology was one of the 10 domains identified in the Task Force Report on AI published earlier this year. This should have been a focus point in the paper, as it aims to identify applications with maximum social impact and inclusion.</span></p>
<p style="text-align: justify; "><span>In its vision for the use of AI in smart cities, the</span><span> paper suggests the adoption of a sophisticated surveillance system as well as the use of social media intelligence platforms to check and monitor people’s movement both online and offline to maintain public safety.</span><span> This is at variance with constitutional standards of due process and criminal law principles of reasonable ground and reasonable suspicion. Further, use of such methods will pose issues of judicial inscrutability. From a rights perspective, state surveillance can directly interfere with fundamental rights including privacy, freedom of expression, and freedom of assembly. Privacy organizations around the world have raised concerns regarding the increased public surveillance through the use of AI.</span><span> Though the paper recognized the impact on privacy that such uses would have, it failed to set a strong and forward looking position on the issue - such as advocating that such surveillance must be lawful and inline with international human rights norms.</span></p>
<p><span><strong>Harnessing the Power of AI and Accelerating Research</strong></span></p>
<p style="text-align: justify; "><span>One of the ways suggested for the proliferation of AI in India was to increase research, both core and applied, to bring about innovation that can be commercialised.</span><span> In order to attain this goal the paper proposes a two-tier integrated approach: the establishment of COREs (Centres of Research Excellence in Artificial Intelligence) and ICTAI (International Centre for Transformational Artificial Intelligence).</span><span> However the roadmap to increase research in AI fails to acknowledge the principles of public funded research such as free and open source software (FOSS), open standards and open data. The report also blames the current Indian Intellectual Property regime for being “unattractive” and averse to incentivising research and adoption of AI.</span><span> Section 3(k) of Patents Act exempts algorithms from being patented, and the Computer Related Inventions (CRI) Guidelines have faced much controversy over the patentability of mere software without a novel hardware component.</span><span> The paper provides no concrete answers to the question of whether it should be permissible to patent algorithms, and if yes, to to what extent. Furthermore, there needs to be a standard either in the CRI Guidelines or the Patent Act, that distinguishes between AI algorithms and non-AI algorithms. Additionally, given that there is no historical precedence on the requirement of patent rights to incentivise creation of AI, innovative investment protection mechanisms that have lesser negative externalities, such as compensatory liability regimes</span><span> would be more desirable. The report further failed to look at the issue holistically and recognize that facilitating rampant patenting can form a barrier to smaller companies from using or developing AI. 
This is important to be cognizant of given the central role of startups to the AI ecosystem in India and because it can work against the larger goal of inclusion articulated by the report.</span></p>
<p><span><strong>Ethics, Privacy, Security and Safety</strong></span></p>
<p style="text-align: justify; "><span>In a positive step forward, the paper addresses a broader range of ethical issues concerning AI including transparency, fairness, privacy and security and safety in more detail when compared to the earlier report of the Task Force.</span><span> Yet despite a dedicated section covering these issues, a number of concerns still remain unanswered.</span></p>
<p><span><strong>Transparency</strong></span></p>
<p style="text-align: justify; "><span>The section on transparency and opening the Black Box has several lacunae.</span><span> First, AI that is used by the government, to an acceptable extent, must be available in the public domain for audit, if not under Free and Open Source Software (FOSS). This should hold true in particular for uses that impinge on fundamental rights. Second, if the AI is utilised in the private sector, there currently exists a right to reverse engineer within the Indian Copyright Act,</span><span> which is not accounted for in the paper. Furthermore, if the AI was involved both in the commission of a crime or the violation of human rights, or in the investigations of such transgressions, questions with regard to judicial scrutability of the AI remain. In addition to explainability, the source code must be made circumstantially available, since explainable AI</span><span> alone cannot solve all the problems of transparency. In addition to availability of source code and explainability, a greater discussion is needed about the tradeoff between a complex and potentially more accurate AI system (with more layers and nodes) vs. an AI system which is potentially not as accurate but is able to provide a human readable explanation.</span><span> It is interesting to note that transparency within human-AI interaction is absent in the paper. Key questions on transparency, such as whether an AI should disclose its identity to a human have not been answered.</span></p>
<p><span><strong>Fairness</strong></span></p>
<p style="text-align: justify; "><span>With regards to fairness, the paper mentions how AI can amplify bias in data and create unfair outcomes.</span><span> However, the paper neither suggests detailed or satisfactory solutions nor does it deal with biased historical data in an Indian context. More specifically, there seems to be no mention of regulatory tools to tackle the problem of fairness, such as:</span></p>
<ul>
<li><span>Self-certification</span></li>
<li><span>Certification by a self-regulatory body</span></li>
<li><span>Discrimination impact assessments</span></li>
<li><span>Investigations by the privacy regulator </span></li>
</ul>
<p><span>Such tools will need to proactively ensure inclusion, diversity, and equity in composition and decisions.</span></p>
<p style="text-align: justify; "><span>Additionally, with reference to correcting bias in AI, it should be noted that the technocratic view that as an AI solution continues to be trained on larger amounts of data , systems will self correct, does not fully recognize the importance of data quality and data curation, and is inconsistent with fundamental rights. Policy objectives of AI innovation must be technologically nuanced and cannot be at the cost of intermediary denial of rights and services.</span></p>
<p style="text-align: justify; "><span>Further, the paper does not deal with issues of multiple definitions and principles of fairness, and that building definitions into AI systems may often involve choosing one definition over the other. For instance, it can be argued that the set of AI ethical principles articulated by Google</span><span> are more consequentialist in nature involving a a cost-benefit analysis, whereas a human rights approach may be more deontological in nature. In this regard, there is a need for interdisciplinary research involving computer scientists, statisticians, ethicists and lawyers.</span></p>
<p><span><strong>Privacy</strong></span></p>
<p style="text-align: justify; "><span>Though the paper underscores the importance of privacy and the need for a privacy legislation in India - the paper limits the potential privacy concerns arising from AI to collection, inappropriate use of data, personal discrimination, unfair gain from insights derived from consumer data (the solution being to explain to consumers about the value they as consumers gain from this), and unfair competitive advantage by collecting mass amounts of data (which is not directly related to privacy).</span><span> In this way the paper fails to discuss the full implications on privacy that AI might have and fails to address the data rights necessary to enable the right to privacy in a society where AI is pervasive. The paper fails to engage with emerging principles from data protection such as right to explanation and right to opt-out of automated processing, which directly relate to AI. Further, there is no discussion on the issues such as data minimisation and purpose limitation which some big data and AI proponents argue against. To that extent, there is a lack of appreciation of the difficult policy questions concerning privacy and AI. The paper is also completely silent on redress and remedy. Further the paper endorses the seven data protection principles postulated by the Justice Srikrishna Committee.</span><span> However CIS has pointed out that these principles are generic and not specific to data protection.</span><span> Moreover, the law chapter of IEEE’s ‘</span><em><span>Global Initiative on Ethics of Autonomous and Intelligent Systems’</span></em><span> has been ignored in favor of the chapter on ‘</span><em><span>Personal Data and Individual Access Control in Ethically Aligned Design</span></em><span>’</span><span> as the recommended international standard.</span><span> Ideally, both chapters should be recommended for a holistic approach to the issue of ethics and privacy with respect to AI. </span></p>
<p><span><strong>AI Regulation and Sectoral Standards</strong></span></p>
<p style="text-align: justify; "><span>The discussion paper’s approach towards sectoral regulation advocates collaboration with industry to formulate regulatory frameworks for each sector. However, the paper is silent on the possibility of reviewing existing sectoral regulation to understand if they require amending. We believe that this is an important solution to consider since amending existing regulation and standards often takes less time than formulating and implementing new regulatory frameworks.</span><span> Furthermore, although the emphasis on awareness in the paper is welcome, it must complement regulation and be driven by all stakeholders, especially given India’s limited regulatory budget. The over reliance on industry self-regulation, by itself, is not advisable, as there is an absence of robust industry governance bodies in India and self-regulation raises questions about the strength and enforceability of such practices. The privacy debate in India has recognized this and reports, like the Report of the Group of Experts on Privacy, recommend a co-regulatory framework with industry developing binding standards that are inline with the national privacy law and that are approved and enforced by the Privacy Commissioner.</span><span> That said, the UN Guiding Principles on Business and Human Rights and its “protect, respect, and remedy” framework should guide any self regulatory action.</span></p>
<p><span><strong>Security and Safety of AI Systems</strong></span></p>
<p style="text-align: justify; "><span>In terms of security and safety of AI systems the paper seeks to shift the discussion of accountability being primarily about liability, to that of one about the explainability of AI.</span><span> Furthermore, there is no recommendation of immunities or incentives for whistleblowers or researchers to report on privacy breaches and vulnerabilities. The report also does not recognize certain uses of AI as being more critical than others because of their potential harm to the human. This would include uses in healthcare and autonomous transportation. A key component of accountability in these sectors will be the evolution of appropriate testing and quality assurance standards. Only then, should safe harbours be discussed as an extension of the negligence test for damages caused by AI software. Additionally, the paper fails to recommend kill switches, which should be mandatory for all kinetic AI systems.</span><span> Finally, there is no mention of mandatory human-in-the-loop in all systems where there are significant risks to safety and human rights. Autonomous AI is only viewed as an economic boost, but its potential risks have not been explored sufficiently. A welcome recommendation would be for all autonomous AI to go through human rights impact assessments.</span></p>
<p><span><strong>Research and Education</strong></span></p>
<p style="text-align: justify; "><span>Being a government think-tank, the NITI Aayog could have dealt in detail with the AI policies of the government and looked at how different arms of the government are aiming to leverage AI and tackle the problems arising out of the use of AI. Instead of tabulating the government’s role in each area and especially research, the report could have also listed out the various areas where each department could play a role in the AI ecosystem through regulation, education, funding research etc. In terms of the recommendations for introducing AI curriculums in schools, and colleges,</span><span> the government could also ensure that ethics and rights are part of the curriculum - especially in technical institutions. A possible course of action could include corporations paying for a pan-Indian AI education campaign.This would also require the government to formulate the required academic curriculum that is updated to include rights and ethics. </span></p>
<p><span><strong>Data Standards and Data Sharing</strong></span></p>
<p style="text-align: justify; "><span>Based on the amount of data the Government of India collects through its numerous schemes, it has the potential to be the largest aggregator of data specific to India. However the paper does not consider the use of this data with enough gravity. For example, the paper recommends Corporate Data Sharing for “social good” and making government datasets from the social sector available publicly.</span><span> Yet this section does not mention privacy enhancing technologies/standards such as pseudonymization, anonymization standards, differential privacy etc. Additionally there should be provisions that allow the government to prevent the formation of monopolies by regulating companies from hoarding user data. The open data standards could also be applicable to the private companies, so that they can also share their data in compliance with the privacy enhancing technologies mentioned above. The paper also acknowledges that AI Marketplaces require monitoring and maintenance of quality. It recognises the need for “continuous scrutiny of products, sellers and buyers”</span><span>, and proposes that the government enable these regulations in a manner that private players could set up the marketplace. This is a welcome suggestion, but the legal and ethical framework of the AI Marketplace requires further discussion and clarification.</span></p>
<p><span><strong>An AI Garage for Emerging Economies</strong></span></p>
<p style="text-align: justify; "><span>The discussion paper also qualifies India as an “ideal test-bed”</span><span> for trying out AI related solutions. This is problematic since questions of regulation in India with respect to AI have yet to be legally clarified and defined and India does not have a comprehensive privacy law. Without a strong ethical and regulatory framework, the use of new and possibly untested technologies in India could lead to unintended and possibly harmful outcomes.The government's ambition to position India as a leader amongst developing countries on AI related issues should not be achieved by using Indians as test subjects for technologies whose effects are unknown.</span></p>
<p><span><strong>Conclusion</strong></span></p>
<p style="text-align: justify; "><span>In conclusion, NITI Aayog’s discussion paper represents a welcome step towards a comprehensive AI strategy for India. However, the trend of inconspicuously releasing reports (this and the AI Task Force) as well as the lack of a call for public comments, seems to be the wrong way to foster discussion on emerging technologies that will be as pervasive as AI. </span></p>
<p style="text-align: justify; "><span>The blanket recommendations were provided without looking at its viability in each sector.</span><span> Furthermore, the discussion paper does not sufficiently explore or, at times, completely omits key areas. It barely touched upon societal, cultural and sectoral challenges to the adoption of AI — research that CIS is currently in the process of undertaking.</span><span>Future reports on Indian AI strategy should pay more attention to the country’s unique legal context and to possible defense applications and take the opportunity to establish a forward looking, human rights respecting, and holistic position in global discourse and developments. Reports should also consider infrastructure investment as an important prerequisite for AI development and deployment. Digitised data and connectivity as well as more basic infrastructure, such as rural electricity and well-maintained roads, require more funding to more successfully leverage AI for inclusive economic growth. Although there are important concerns, the discussion paper is an aspirational step toward India’s AI strategy. </span></p>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/blog/niti-aayog-discussion-paper-an-aspirational-step-towards-india2019s-ai-policy'>https://cis-india.org/internet-governance/blog/niti-aayog-discussion-paper-an-aspirational-step-towards-india2019s-ai-policy</a>
</p>
No publisherSunil Abraham, Elonnai Hickok, Amber Sinha, Swaraj Barooah, Shweta Mohandas, Pranav M Bidare, Swagam Dasgupta, Vishnu Ramachandran and Senthil KumarInternet GovernanceArtificial Intelligence2018-06-13T13:08:47ZBlog EntryNew intermediary guidelines: The good and the bad
https://cis-india.org/internet-governance/blog/new-intermediary-guidelines-the-good-and-the-bad
<b>In pursuance of the government releasing the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, this blogpost offers a quick rundown of some of the changes brought about the Rules, and how they line up with existing principles of best practices in content moderation, among others. </b>
<p> </p>
<p>This article originally appeared in the Down to Earth <a class="external-link" href="https://www.downtoearth.org.in/blog/governance/new-intermediary-guidelines-the-good-and-the-bad-75693">magazine</a>. Reposted with permission.</p>
<p>-------</p>
<p>The Government of India has notified the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. These rules supersede the existing intermediary liability rules under the Information Technology (IT) Act, framed back in 2011.</p>
<p>These IL rules will have a significant impact on our relationships with internet ‘intermediaries’, i.e. gatekeepers and gateways to the internet, including social media platforms and communication and messaging channels.</p>
<p>The rules also make a bid to include entities that have not traditionally been considered ‘intermediaries’ within the law, including curated-content platforms such as Netflix and Amazon Prime as well as digital news publications.</p>
<p>These rules are a significant step up from the draft version of the amendments floated by the Union government two years ago; in this period, the relationship between governments around the world and major intermediaries changed significantly.</p>
<p>The insistence of these entities in the past that they are not ‘arbiters of truth’, for instance, has not always held water in their own decision-making.</p>
<p>Both Twitter and Facebook, for instance, have locked former United States president Donald Trump out of their platforms. Twitter has also resisted fully complying with government censorship requests in India, spilling into an interesting policy tussle between the two entities. It is in the context of these changes, therefore, that we must consider the new rules.</p>
<p><strong>What changed for the good?</strong></p>
<p>One of the immediate standouts of these rules is the more granular way in which they approach the problem of intermediary regulation. The previous draft, and in general the entirety of the law, had continued to treat ‘intermediaries’ as a monolithic entity, entirely definable by section 2(w) of the IT Act, which in turn derived much of its legal language from the EU E-Commerce Directive of 2000.</p>
<p>Intermediaries in the directive were treated more like ‘mere conduits’: dumb, passive carriers that played no active role in the content. While that might have been the truth of the internet when these laws and rules were first enacted, the internet today looks very different.</p>
<p>Not only is there a diversification of services offered by these intermediaries, there is also a significant issue of scale, wielded by a few select players, either through centralisation or through the sheer size of their user bases. A broad, general mandate would, therefore, miss out on many of these nuances, leading to imperfect regulatory outcomes.</p>
<p>The new rules, therefore, envisage three types of entities:</p>
<ul><li>There are the ‘intermediaries’ within the traditional section 2(w) meaning of the IT Act. This is the broad umbrella term for all entities that fall within the ambit of the rules.</li><li>There are the ‘social media intermediaries’ (SMI): entities that enable online interaction between two or more users.</li><li>The rules identify ‘significant social media intermediaries’ (SSMI), meaning entities above user thresholds notified by the Central Government.</li></ul>
<p>The levels of obligations vary based on this hierarchy of classification. For instance, an SSMI would be held to a much higher standard of transparency and accountability towards its users. SSMIs would have to publish six-monthly transparency reports outlining how they dealt with requests for content removal, how they deployed automated tools to filter content, and so on.</p>
<p>I have previously argued that transparency reports, when done well, are an excellent way of understanding the breadth of government and social media censorship. Legally mandating this is perhaps a step in the right direction.</p>
<p>Some other requirements under this transparency principle include giving notice to users whose content has been disabled, allowing them to contest such removal, etc.</p>
<p>One of the other rules from the older draft that had raised a significant amount of concern was the proactive filtering mandate, under which intermediaries were required to proactively filter all unlawful content. This was problematic on two counts:</p>
<ul><li>Developments in machine learning technologies are simply not there yet to make this a possibility, which means there would always be a chance that legitimate and legal content gets censored, leading to a general chilling effect on digital expression</li><li>The technical and financial burden this would impose on intermediaries would have impacted competition in the market.</li></ul>
<p>The new rules seem to have lessened this burden: first, by reducing it from a mandatory obligation to a best-endeavour basis; and second, by reducing the ambit of ‘unlawful content’ to only include content depicting sexual abuse, child sexual abuse material (CSAM), and duplicates of content that has already been disabled or removed.</p>
<p>This specificity would be useful for better deployment of such technologies, since previous research has shown that it is considerably easier to train a machine learning tool on a corpus of CSAM or abuse than on more contextual, subjective matters such as hate speech.</p>
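<p>The narrowed mandate around duplicates of already removed content is, in practice, typically implemented by matching incoming uploads against a database of fingerprints of taken-down material. A minimal Python sketch of the idea (the class and method names are illustrative; real deployments use perceptual rather than exact hashing, so that re-encoded or lightly edited copies still match):</p>

```python
import hashlib

class RemovedContentIndex:
    """Toy index of hashes of content that has already been taken down.

    Exact SHA-256 matching is shown for simplicity; production systems
    use perceptual hashes so near-duplicates of an image also match.
    """

    def __init__(self):
        self._hashes = set()

    def register_removed(self, content: bytes) -> None:
        """Record a fingerprint of content that was disabled or removed."""
        self._hashes.add(hashlib.sha256(content).hexdigest())

    def is_reupload(self, upload: bytes) -> bool:
        """Check an incoming upload against the takedown index."""
        return hashlib.sha256(upload).hexdigest() in self._hashes
```

<p>Because only fingerprints are compared, such a filter can run at upload time without examining the meaning of the content, which is one reason duplicate detection is far more tractable than filtering contextual speech.</p>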
<p><strong>What should go?</strong></p>
<p>That being said, it is concerning that the new rules choose to bring online curated content platforms (OCCPs) within the ambit of the law, through proposals for a three-tiered self-regulatory body and schedules outlining guidelines for the rating system these entities should deploy.</p>
<p>In the last two years, several attempts have been made by the Internet and Mobile Association of India (IAMAI), an industry body consisting of representatives of these OCCPs, to bring about a self-regulatory code that fills in the supposed regulatory gap in the Indian law.</p>
<p>It is not known if these stakeholders were consulted before the enactment of these provisions. Some of this framework would also apply to publishers of digital news portals.</p>
<p>Noticeably, this entire chapter was also missing from the old draft, and introducing it in the final form of the law without due public consultations is problematic.</p>
<p>Part III and onwards of the rules, which broadly deal with the regulation of these entities, therefore, should be put on hold and opened up for a period of public and stakeholder consultations to adhere to the true spirit of democratic participation.</p>
<p><em>The author would like to thank Gurshabad Grover for his editorial suggestions. </em></p>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/blog/new-intermediary-guidelines-the-good-and-the-bad'>https://cis-india.org/internet-governance/blog/new-intermediary-guidelines-the-good-and-the-bad</a>
</p>
No publisherTorSharkIT ActIntermediary LiabilityInternet GovernanceCensorshipArtificial Intelligence2021-03-15T13:52:46ZBlog EntryMWC19 Shanghai AI and Trust in APAC and China
https://cis-india.org/internet-governance/news/mwc19-shanghai-ai-and-trust-in-apac-and-china
<b>Sunil Abraham will be making a presentation at the summit on AI and Trust in APAC and China at MWC19 Shanghai on June 27, 2019. Sunil has been invited as a speaker on panel ‘Framing AI for Digital Upstarts’.</b>
<p style="text-align: justify; ">MWC Shanghai is a three-day conference and exhibition bringing together over 200 AI business leaders, 65,000 attendees, and 550 companies from across different industries and perspectives to address business and technical concerns in the Intelligent Connectivity era and debate tough problems for today and tomorrow. More <a class="external-link" href="http://cis-india.org/internet-governance/files/mwc19-shanghai-ai-and-trust-in-apac-and-china">info here</a>. For event details <a class="external-link" href="https://www.mwcshanghai.com/session/ai-trust-in-apac-and-china/">see this page</a>.</p>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/news/mwc19-shanghai-ai-and-trust-in-apac-and-china'>https://cis-india.org/internet-governance/news/mwc19-shanghai-ai-and-trust-in-apac-and-china</a>
</p>
No publisherAdminInternet GovernanceArtificial Intelligence2019-06-05T07:10:50ZNews ItemInternational Conference on Justice Education:Legal Implications of Artificial Intelligence
https://cis-india.org/internet-governance/news/international-conference-on-justice-education-legal-implications-of-artificial-intelligence
<b>Arindrajit Basu attended the International Conference on Justice Education with the theme "Artificial Intelligence and its Legal Implications" at Institute of Law Nirma University. The event was organized by Nirma University in Ahmedabad on March 15 - 16, 2019. Arindrajit was a theme speaker for the panel on Legal Implications of Artificial Intelligence and was a judge of the presentations in the same session.</b>
<p>Click to <a class="external-link" href="http://cis-india.org/internet-governance/files/icje-conference-schedule">read the agenda</a></p>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/news/international-conference-on-justice-education-legal-implications-of-artificial-intelligence'>https://cis-india.org/internet-governance/news/international-conference-on-justice-education-legal-implications-of-artificial-intelligence</a>
</p>
No publisherAdminInternet GovernanceArtificial Intelligence2019-03-20T15:52:29ZNews ItemInsult to Kannada shows Google AI in a poor light
https://cis-india.org/internet-governance/news/deccan-herald-june-8-2021-krupa-joseph-insult-to-kannada-shows-google-ai-in-a-poor-light
<b>A Google search for ‘the ugliest language in India’ yielded ‘Kannada’ as the answer late last week, causing widespread outrage.
</b>
<p>The article by Krupa Joseph was <a class="external-link" href="https://www.deccanherald.com/metrolife/metrolife-your-bond-with-bengaluru/insult-to-kannada-shows-google-ai-in-a-poor-light-995307.html">published in Deccan Herald</a> on June 8, 2021. Pranesh Prakash and Shweta Mohandas have been quoted.</p>
<hr />
<p>Google has since apologised, saying the answer does not reflect its views, but questions still remain about why this happened at all, and who drafted the answer.</p>
<p style="text-align: justify; ">“When artificial intelligence gets it wrong, things can go really wrong, says tech entrepreneur,”Hari Prasad Nadig, who has worked on Kannada in free and open source soft ware.“Usually, you would expect Google to give an answer based on citings from multiple sources,and at least one or two credible sources.</p>
<p style="text-align: justify; ">Google’s AI should be good enough not to draw answers from opinionated sources,” he says. Google shouldn’t even try to answer prejudiced questions like this in the first place, and the answer shows how flawed it is, he told Metrolife.</p>
<p style="text-align: justify; ">“Usually, you would expect Google to give an answer based on citings from multiple sources, and at least one or two credible sources. Google’s AI should be good enough not to draw answers from opinionated sources,” he says. Google shouldn’t even try to answer prejudiced questions like this in the first place, and the answer shows how flawed it is, he told Metrolife.</p>
<h3 style="text-align: justify; ">Fallible process</h3>
<p style="text-align: justify; ">Pranesh Prakash, Centre for Internet and Society, Bengaluru, says the incident exposes the fallibility of the process by which Google selects its “featured snippets”.</p>
<p style="text-align: justify; ">“It is not an opinion that Google or its employees or its algorithms have come up with, but rather an existing opinion that Google wrongly amplified,” he says.It demonstrates that the snippets that Google features as ‘facts’ aren’t necessarily based on facts, he says.</p>
<h3 style="text-align: justify; ">Periodic checks</h3>
<p style="text-align: justify; ">Shweta Mohandas, researcher with the Center for Internet and Society, says Google does not create content, but only provides content that is available on the Internet.</p>
<p style="text-align: justify; ">“Hence, the biases come from the tags, then used to train the AI. There should be periodic checks on the data fed into the system,” she says. Such blunders can be prevented if the tags and results are audited periodically, and a mechanism is put in place to enable people to report them, she says.</p>
<h3 style="text-align: justify; ">Who was upto mischief?</h3>
<p style="text-align: justify; ">The answer was created on a financial services website whose owners aren’t revealing their names Pavanaja UB, CEO, Vishva Kannada Softech, says the answer was attributed to a website called debt consolidations questions.com — but he was unable to find this post anywhere on the site.“This is a website registered in Russia and it offers questions and answers on many topics. But this particular page could not be found. Maybe it was removed following the outrage,” he says. Pavanaja believes this was a deliberate attempt to upset people. “The website lists no information about the owner and gives no contact details. Even if such a question did exist on the page before, how did it get to the top of the Google search results?” he wonders.</p>
<p style="text-align: justify; ">He suggests that someone planted the answer and kept searching for it until it reached the top.“But who would take so much effort?” he says.</p>
<h3 style="text-align: justify; ">Furore and after</h3>
<p>‘Kannada’ came up as an answer to a query in Google about ‘the ugliest language in India’.</p>
<p style="text-align: justify; ">Aravind Limbavali, minister for Kannada and Culture, demanded an apology from Google, and threatened legal action against the company “for maligning the image of our beautiful language.”</p>
<p>Google removed the answer and issued a statement:</p>
<p style="text-align: justify; ">“We know this is not ideal, but we take swift corrective action when we are made aware of an issue and are continually working to improve our algorithms. Naturally, these are not reflective of the opinions of Google, and we apologise for the misunderstanding and hurting any sentiments."</p>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/news/deccan-herald-june-8-2021-krupa-joseph-insult-to-kannada-shows-google-ai-in-a-poor-light'>https://cis-india.org/internet-governance/news/deccan-herald-june-8-2021-krupa-joseph-insult-to-kannada-shows-google-ai-in-a-poor-light</a>
</p>
No publisher · Krupa Joseph · Internet Governance · Artificial Intelligence · 2021-06-26T05:25:38Z · News Item

Impact of Industrial Revolution 4.0 - IT and Automotive Sector in India by the Dialogue and FES
https://cis-india.org/internet-governance/news/impact-of-industrial-revolution-4-0-it-and-automotive-sector-in-india-by-the-dialogue-and-fes
<b>On August 21, 2019, Aayush Rathi attended a report launch event and focus group discussion on the "Impact of Industrial Revolution 4.0 - IT and Automotive Sector in India". Research conducted by the Dialogue in collaboration with the Friedrich-Ebert-Stiftung (FES) was presented. </b>
<p class="moz-quote-pre" style="text-align: justify; ">At CIS, we have previously produced research on the future of work in these sectors. Aayush attended the event to understand how other researchers are approaching the subject of the future of work in terms of the methodological approach and the questions being asked and policy responses being proposed. In what may be treated as validation of our research design, FES and the Dialogue have addressed similar questions and adopted an empirical+desk based approach to do so as well.</p>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/news/impact-of-industrial-revolution-4-0-it-and-automotive-sector-in-india-by-the-dialogue-and-fes'>https://cis-india.org/internet-governance/news/impact-of-industrial-revolution-4-0-it-and-automotive-sector-in-india-by-the-dialogue-and-fes</a>
</p>
No publisher · Admin · Industry 4.0 · Internet Governance · Information Technology · Artificial Intelligence · 2019-08-27T00:13:32Z · News Item

Future Tech and Future Law
https://cis-india.org/internet-governance/news/future-tech-and-future-law
<b>The Dept. of IT & BT, Government of Karnataka organised the 21st edition of Bengaluru Tech Summit from November 29, 2018 to December 1, 2018 at Palace Grounds, Bengaluru. Arindrajit Basu was a speaker at the panel on 'Future Tech and Future Law'.</b>
<p class="moz-quote-pre" style="text-align: justify; ">The discussion was moderated by Tanvi Ratna. Aayush's co-panelists were Apar Gupta,Jaideep Reddy and Nilesh Trivedi. During his remarks, he attempted to focus on our AI research thus far and our suggestions for AI regulation.</p>
<p class="moz-quote-pre" style="text-align: justify; ">For more details <a class="external-link" href="https://www.bengalurutechsummit.com/">see this page</a>.</p>
<p>
For more details visit <a href='https://cis-india.org/internet-governance/news/future-tech-and-future-law'>https://cis-india.org/internet-governance/news/future-tech-and-future-law</a>
</p>
No publisher · Admin · Internet Governance · Artificial Intelligence · 2019-01-03T01:17:29Z · News Item