Centre for Internet & Society
What’s up with WhatsApp?

Internet Shutdown Stories

Private-public partnership for cyber security

To preserve freedoms online, amend the IT Act
Data Protection: We can innovate, leapfrog

by Sunil Abraham last modified Jan 22, 2018 01:45 AM
About 27% of India's population is still illiterate or barely literate. Most privacy policies and terms of service for web and mobile applications are in English, so only about 10% of us can actually read them before we provide our consent.

The article was published in the Deccan Herald on January 20, 2018.

Even if we can read them, we may not have the necessary legal training to understand them. According to a tweet thread by Pat Walshe (@privacymatters), Tetris, a popular video game app, has a privacy policy that details the third-party advertising companies with which it shares data. These third parties include "123 Ad Networks; 13 Online Analytics companies; 62 Mobile Advertising Networks; 14 Mobile Analytics companies. The linked privacy policies for Tetris run to 407,000 words, compared to 450,000 words for the entire 'Lord of the Rings trilogy'." A child aged four or above who plays the game, and her parents, need an intermediary to deal with the corporations hiding behind Tetris.

Unlike the European Union, which has more than 37 years of history when it comes to data protection law, India is starting with a near blank slate after the Supreme Court confirmed that privacy is a constitutionally-guaranteed fundamental right in the Puttaswamy case judgement. While we would want to maintain adequacy and compatibility with the EU General Data Protection Regulation (GDPR) because it has become the global standard, we must realise that there is an opportunity for leapfrogging. This article attempts to introduce the reader to three different visions for intermediaries that have emerged within the Indian data protection debate around the accountability principle. I will also provide a brief sketch of an idea that we are developing at the Centre for Internet and Society. This is an incomplete list as there must be more proposals for regulatory innovation around the accountability principle that I am currently unaware of.

• Account Aggregators: The 'India Stack' ecosystem that has been built around the Aadhaar programme first proposed intermediaries called Account Aggregators. Account Aggregators manage consent artifacts. India Stack has traditionally been described as having four layers -- presenceless, paperless, cashless and consent. The consent layer is supposed to feature Account Aggregators. If, for example, a data subject wanting an insurance policy visits an insurance portal, the portal would collect personal information and a consent artifact from her and pass it on to multiple insurance companies. These insurance companies would send personalised bids to the portal, which would be displayed on a comparative grid to enable empowered selection.

The data structure of the consent artifact is specified in the RBI Master Direction titled "Non-Banking Financial Company Account Aggregator Directions," published in September 2016. How does this work? The fields include: (i) identity and optional contact information; (ii) nature of the financial information requested; (iii) purpose; (iv) the identity of the recipients, if any; (v) URL/address for notifications when the consent artifact is used; (vi) consent artifact creation date, expiry date, identity and signature/digital signature of the Account Aggregator; and (vii) any other attribute as may be prescribed by the RBI. While Account Aggregators make the grant of consent frictionless (and also its harvesting by data controllers), they do not make it easy for you to manage and revoke your consent.
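As an illustration, the artifact's fields can be sketched as a simple data structure. This is a hedged sketch: the field names and types below are assumptions for illustration, not the RBI's actual schema.

```python
from dataclasses import dataclass
from datetime import date
from typing import List, Optional

@dataclass
class ConsentArtifact:
    """Illustrative sketch of an RBI-style consent artifact.

    Field names here are assumptions; the authoritative schema is in the
    RBI Master Direction on Account Aggregators.
    """
    subject_identity: str             # (i) identity of the data subject
    subject_contact: Optional[str]    # (i) optional contact information
    financial_information: List[str]  # (ii) nature of the information requested
    purpose: str                      # (iii) purpose of the request
    recipients: List[str]             # (iv) identities of recipients, if any
    notification_url: str             # (v) URL notified when the artifact is used
    created_on: date                  # (vi) creation date
    expires_on: date                  # (vi) expiry date
    aggregator_id: str                # (vi) identity of the Account Aggregator
    aggregator_signature: str         # (vi) signature/digital signature

    def is_valid_on(self, when: date) -> bool:
        """A consent artifact should not be honoured after it expires."""
        return self.created_on <= when <= self.expires_on
```

The expiry check is the point of structuring consent this way: an artifact is a bounded, machine-checkable grant rather than an open-ended one.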

• Data Trusts: Most recently, Na.Vijayashankar, a Bengaluru-based cybersecurity and cyberlaw expert, has proposed intermediaries called 'Data Trusts', registered with the regulator, which will (i) work as escrow agents for personal data (classified by type for different degrees of protection); (ii) make privacy notices accessible by translating them into accessible language and formats; (iii) disclose data minimally to different data controllers based on purpose limitation; and (iv) issue tokens or pseudonymous identifiers and monetise the data for the benefit of the data subject. To ensure that Data Trusts truly protect the interests of the data subject, Vijayashankar proposes three requirements: (a) public performance reviews; (b) audits by the regulator; and (c) "an arms-length relationship with the data collectors." In his proposal, Data Trusts are firms with "the ability to process a real-time request from the data subject to supply appropriate data to the data collector."

• Learned Intermediaries: The Takshashila Institution published a paper titled "Beyond Consent: A New Paradigm for Data Protection", authored by Rahul Matthan, partner at the law firm Trilegal. Learned Intermediaries would perform mandatory audits on all data controllers above a particular threshold. Like Vijayashankar, Matthan also requires these intermediaries to be certified by an appropriate authority. The main harm he focuses on is bias or discrimination. He proposes three stages of audit designed for the age of Big Data and Artificial Intelligence: "(i) Database Query Review; (ii) Black Box Audits; and (iii) Algorithm Review". Matthan also tentatively considers a rating system. Learned Intermediaries are a means to address information asymmetry in the market by making data subjects more aware. The impact of churn on their bottom lines, it is hoped, will force data controllers to behave in an accountable manner, protecting rights and mitigating harms.

• Consent Brokers: Finally, I have proposed the model of a 'Consent Broker' by modifying the concept of the Account Aggregator. Like the Account Aggregator proposal, we would want a competitive set of consent brokers who manage consent artifacts for data subjects. However, I believe there should be a 1:1 relationship between data subjects and consent brokers, so that the latter compete for the business of data subjects. Like Vijayashankar, I believe that the consent broker must keep an "arms-length distance" from data controllers and must be prohibited from making any money from them. Consent brokers could also be trusted to take proactive actions for data subjects, such as access and correction.
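The consent-broker model lends itself to a small illustration. The class below is a hypothetical sketch of the proposal (all class and method names are invented for this article): a broker bound to a single data subject, frictionless grant and revocation, and an arm's-length rule on payments.

```python
class ConsentBroker:
    """Illustrative sketch of the consent-broker proposal.

    Not a specification of any real system; names are invented
    for illustration.
    """

    def __init__(self, subject_id: str):
        # 1:1 relationship: one broker serves exactly one data subject,
        # so brokers compete for the business of data subjects.
        self.subject_id = subject_id
        self.artifacts = {}  # controller_id -> consent artifact

    def grant(self, controller_id: str, artifact: dict) -> None:
        """Record consent granted to a data controller."""
        self.artifacts[controller_id] = artifact

    def revoke(self, controller_id: str) -> None:
        """Revocation should be as frictionless as the original grant."""
        self.artifacts.pop(controller_id, None)

    def may_accept_payment_from(self, payer_id: str) -> bool:
        # Arm's-length rule: the broker is paid only by its own data
        # subject, never by a data controller.
        return payer_id == self.subject_id
```

The payment check is where this model differs from Account Aggregators: the broker's revenue can only come from the data subject it serves.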

The need of the hour is the production of regulatory innovations and robust discussions around them for all the nine privacy principles in the Justice AP Shah committee report -- notice, choice and consent, collection limitation, purpose limitation, access and correction, disclosure of information, security, openness and accountability.

Artificial Intelligence in India: A Compendium

by Centre for Internet & Society — last modified Jun 11, 2019 02:24 PM
Artificial Intelligence (AI) is fast emerging as a key technological paradigm in different sectors across the globe, including India.

Towards understanding the state of AI in India, the challenges to its development and adoption, and the ethical concerns that arise from its use, CIS is undertaking research to understand and document national developments, discourse, and impact (actual and potential), to propose ethical and regulatory solutions, and to compare these against global developments in the space. As part of this, CIS is creating a compendium of reports that dive into the use of AI across sectors including healthcare, manufacturing, governance, and finance.

Each report seeks to map the present state of AI in the respective sector. In doing so, it explores:

  • Use: What is the present use of AI in the sector? What is the narrative and discourse around AI in the sector?
  • Actors: Who are the key stakeholders involved in the development, implementation and regulation of AI in the sector?
  • Impact: What is the potential and existing impact of AI in the sector?
  • Regulation: What are the challenges faced in policy making around AI in the sector?

The reports are as follows:

 
 

The research is funded by Google India. Comments and feedback are welcome. The reports are drafts.

CIS Submission to the Committee of Experts on a Data Protection Framework for India

by Amber Sinha last modified Apr 18, 2018 04:39 PM
Contributors: With inputs from Elonnai Hickok and Sunil Abraham, Research assistance from Anushka Sinha, Sunidhi Sawhney and Kanav Burman
This submission presents comments by the Centre for Internet and Society, India (“CIS”) on the ‘White Paper of the Committee of Experts on a Data Protection Framework for India’ (“White Paper”) released by the Ministry of Electronics and Information Technology. The White Paper was drafted by a Committee of Experts (“Committee”) constituted by the Ministry. CIS has conducted research on the issues of privacy, data protection and data security since 2010 and is thankful for the opportunity to put forth its views. The submission was made on January 31, 2018.

The submission is divided into four parts — I. Preliminary, II. Scope and Exemption, III. Grounds of Processing, Obligations of Entities and Individual Rights and IV. Regulation and Enforcement. The submission follows the same order as adopted by the White Paper.

Please access the full submission here.

AI and Manufacturing and Services in India: Looking Forward

by Shweta Mohandas and Pranav M. Bidare — last modified Feb 14, 2018 11:13 AM
This Report provides an overview of the proceedings of the Roundtable on Artificial Intelligence (AI) in Manufacturing and Services: Looking Forward (hereinafter referred to as ‘the Roundtable’), conducted at The Energy and Resources Institute (TERI), in Bangalore on January 19, 2018.

 

Event Report: Download (PDF)


The Roundtable comprised participants from different parts of the AI and manufacturing and services spectrum, including practitioners, representatives from multinational companies, think tanks, academics, and researchers. The Roundtable discussed various questions regarding AI in the manufacturing and services industry in India.

The round of discussions began with initial observations from the in-progress research that the Centre for Internet and Society (CIS) is undertaking on the use of AI in manufacturing and services. Some of the uses of AI that the research had thus far identified across sectors included AI platforms in IT services for accurate business forecasting, AI-driven automation of routine tasks in manufacturing and production, and AI-driven analytics for forecasting in the agriculture sector. The discussion then proceeded to the benefits of using AI, including efficient and effective results, precision, and automation of repetitive maintenance tasks. The draft research also acknowledges that although the use of AI is beneficial in many ways, there are key concerns around job displacement, privacy, lack of awareness, and the capacity needed to fully understand and use new AI technologies. The draft research also identified a few key AI initiatives in India, such as Wipro Holmes, TCS Ignio, and GE, that provide solutions to help automate software maintenance tasks and keep SAP (Systems, Applications & Products) operations running smoothly. Innovative uses of AI in areas such as crop production (M.I.T.R.A.) and dairy optimisation (StellApps) were also identified.

To understand the present state of AI and its impact, the session was opened to discussion on a set of questions; see the full report here.

 

Unpacking Data Protection Law: A Visual Representation

by Amber Sinha last modified Feb 15, 2018 01:22 PM
This visual explainer unpacking data protection law was developed by Amber Sinha (research) and Pooja Saxena (design), and published as part of the Data Privacy Week celebrations on the Privacy International blog. Join the conversation on Twitter using #dataprivacyweek.

 

Cross-posted from Privacy International blog.

Credits: Flag illustrations, when not created by the authors, are from Ibrandify / Freepik.


Data protection law systems are usually seen as a dichotomy between the United States of America and the European Union. This dichotomy is not an accurate representation of the issue. Today, close to a hundred countries follow the omnibus approach, while fewer than a dozen, including the US, use the sectoral approach. If too many laws apply to the same actor, compliance becomes difficult. As a result, the sectoral approach to data protection is becoming less relevant. Data protection regulation involves interaction between regulators and industry. To be an effective data protection regulator, an entire range of regulatory tools is required, which the regulator can use to reward, support and sanction.

 

The Fundamental Right to Privacy - A Visual Guide

by Amber Sinha last modified Feb 16, 2018 05:31 AM
Privacy is the ability of an individual or group to seclude themselves, or information about themselves, and thereby express themselves selectively. This visual guide to the story of privacy law in India and the recent judgement of the Puttaswamy v. Union of India case is developed by Amber Sinha (research and content) and Pooja Saxena (design and conceptualisation).

 

The Fundamental Right to Privacy - A Visual Guide: Download (PDF)


 

A Series of Op-eds on Data Protection

by Amber Sinha last modified Feb 19, 2018 02:08 AM
I wrote a short series of three op-eds for Asia Times this week.

The first article "User consent is the key to data protection in India" examines the debate around consent and the arguments made to discard it. I question the premise of big data exceptionalism, particularly in the absence of any mature governance models which address use regulation.

In the second article "Robust economic argument for a sound Indian data protection law", I examine the substance of the argument of 'innovation' as a legitimate competing interest with respect to privacy, and question the economic arguments made in support of innovation enabled by unregulated access to data.

In the third article "India’s data protection law needs graded enforcement mechanism", I look at the two competing arms of regulation, enforcement and compliance, and how a balance of the two is needed in India, with an empowered regulator and drawing on principles from responsive regulation theory.

People Driven and Tech Enabled – How AI and ML are Changing the Future of Cyber Security in India

by Shweta Mohandas — last modified Mar 11, 2018 03:30 PM
On the 27th of February, Peter Sparkes, Senior Director, Cyber Security Services, Symantec, conducted a webinar on the ‘5 Essentials of Every Next-Gen SOC’. In this webinar, he evaluated the problems that Security Operations Centers (SOCs) are currently facing, and explored possible solutions to these problems. The webinar also put emphasis on AI and ML as tools to improve cyber security. This blog draws key insights from the webinar, and explains how AI and ML can improve the cyber security processes of Indian enterprises.

Introduction

In a study conducted by Cisco, it was found that in the past 12-18 months, cyber attacks have caused Indian companies to incur financial damages amounting to USD 500,000.

There is a need to strengthen the nodal agencies within an enterprise that can deal with these threats, to prevent irreparable damage to enterprises and their customers. An SOC within any organization is the team responsible for detecting, monitoring, analyzing, communicating and remedying security threats. SOC technicians employ a combination of technologies and processes to ensure that an enterprise’s security is not compromised. As instances of cyber attacks increase both in number and sophistication, SOCs need to use state-of-the-art technologies to stay one step ahead of the attackers. Presently, SOCs face a number of infrastructural problems, such as the low priority given to the cyber security budget, slow and passive responses to threats, a dearth of skilled technicians, and the absence of a global intelligence network for cyber threats. This is where technologies such as Artificial Intelligence and Machine Learning help: by monitoring systems to identify cyber attacks, analysing the severity of threats, and in some cases blocking them.

Evolution of Security Operations Centers

In the same study, Cisco looked at the evolution of cyber threats and how companies were using technologies such as AI and ML to counter those threats. Another key insight from the study was that 53 percent and 51 percent of the subject companies were reliant on ML and AI respectively. One of the reasons behind AI and ML’s effectiveness in cyber security is their capacity not only to detect known threats but also to use what they learn from data to detect unknown threats. In his webinar, Peter Sparkes also stated that SOCs were evolving into a ‘people driven and tech enabled’ system.

People Driven and Tech Enabled

In the case of cyber security, which in itself is a relatively new field, technologies such as AI and ML are helping companies to not only overcome infrastructural barriers but also to respond proactively to threats. A study conducted by the Enterprise Strategy Group, revealed that one-third of the respondents believed that ML technology could detect new and unknown malware.

The study also stated that the use of machine learning to detect and prevent threats from unknown malware reduced the number of cases the cyber security team had to investigate.

Similarly, the tasks of monitoring and blocking, which were earlier performed by entry-level analysts, are now done by systems using machine learning. Typically, the AI acts as the first monitoring system, after which the threat is examined by the company’s technicians, who possess the requisite skill set and experience. By delegating the time-consuming task of continuous monitoring to an ML system, the technicians now have time to look at serious threats. In this way AI and humans work together to build a stronger and more responsive security protocol.

Detecting the Unknown

Cyber criminals are becoming increasingly sophisticated, and in order to prevent attacks the monitoring systems (both human and automated) need to be able to detect them before security is compromised. The detection of threats through AI and ML works much like the identification of spam: the system is trained on a large amount of labelled data, which teaches the algorithm to distinguish malicious activity from benign activity.
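As a toy illustration of this spam-style training (not any vendor's actual system), a tiny classifier can be trained on labelled events and then used to score unseen ones. The feature tokens below are invented stand-ins for whatever a real system would extract (process names, domains, byte patterns, and so on).

```python
from collections import Counter

def train(events):
    """Count how often each feature token appears in malicious vs benign events.

    `events` is a list of (tokens, label) pairs with label "malicious"
    or "benign".
    """
    counts = {"malicious": Counter(), "benign": Counter()}
    for tokens, label in events:
        counts[label].update(tokens)
    return counts

def score(counts, tokens):
    """Return a maliciousness score in (0, 1); higher means more suspicious.

    A smoothed count ratio keeps the toy simple; a real system would use
    a proper statistical model over far richer features.
    """
    mal = sum(counts["malicious"][t] for t in tokens)
    ben = sum(counts["benign"][t] for t in tokens)
    return (mal + 1) / (mal + ben + 2)

# Invented labelled training events.
training = [
    (["encrypt", "ransom_note", "smb_scan"], "malicious"),
    (["smb_scan", "lateral_move"], "malicious"),
    (["login", "payroll_report"], "benign"),
    (["login", "email_sync"], "benign"),
]
model = train(training)
```

Once trained, the model scores ransomware-like activity higher than routine logins, which is the property the article describes: learning from labelled data generalises to events the system has not seen before.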

There have been numerous cases of stealthy cyber attacks, such as WannaCry and other ransomware, that have evaded detection by conventional security firewalls and caused crippling damage. There is also a need for deception technology, which involves the automatic detection and analysis of attacks. This technology then tricks the attackers and defeats them, bringing the system back to normalcy.

Systems that can handle threats by themselves do so by following a predetermined procedure, or playbook, where the AI flags activities that deviate from the playbook. This is more effective than the earlier approach, in which technicians would analyse attacks on a case-by-case basis.
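A playbook of this kind can be thought of as a whitelist of permitted actions per service; anything outside the list is flagged for a human analyst. The services, actions and rules below are invented purely for illustration.

```python
# Hypothetical playbook: each service maps to the set of actions the
# procedure permits.
PLAYBOOK = {
    "web_server": {"serve_request", "write_log", "rotate_log"},
    "db_server": {"query", "backup", "write_log"},
}

def flag_deviations(service, observed_actions):
    """Return the observed actions that fall outside the service's playbook.

    An unknown service has no permitted actions, so everything it does
    is flagged.
    """
    allowed = PLAYBOOK.get(service, set())
    return [a for a in observed_actions if a not in allowed]
```

In this framing, the automated system handles the bulk of routine traffic and only the deviations reach a technician, which is the division of labour the webinar describes.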

AI and ML can help reduce the time required to detect threats, enabling technicians to act proactively and prevent damage. As AI and ML systems are less prone to mistakes than human beings, each threat is dealt with promptly and accurately. AI systems also help by categorising attacks based on their propensity for damage. These systems can use the large volumes of data collected about previous attacks and adapt over time to give enterprises a strong line of defence against attacks.

Passive to Active Defense

Threats to cyber security can emerge even in seemingly safe departments, such as Human Resources. It is therefore important to proactively hunt for threats across all departments uniformly.

In order to detect an anomaly, AI and ML systems require both large volumes of data and a significant amount of processing power, which is difficult for smaller companies to provide. A possible solution is a system of sharing SOC data between companies, thereby creating a global database of intelligence. A system of global intelligence and threat-data sharing could help smaller companies combat cyber threats without having to compromise on core business development.

Use of AI in Cyber Security in India

In 2017, Indian enterprises were hit by two lethal cyber attacks: Nyetya, and an attack that crept in through the trusted software CCleaner and infected computers. These attacks may just be the tip of the iceberg, since many other attacks may have gone unreported, or worse, undetected. Cisco reported that less than 55 per cent of Indian enterprises were reliant on AI or ML for combating cyber threats. Although the current numbers seem bleak, a number of Indian enterprises have recently begun using AI and ML in cyber security.

One such example is HDFC Bank, which is in the process of introducing an AI-based Cyber Security Operations Centre (CSOC).

This CSOC is based on a four-point approach to dealing with threats: prevent, detect, respond and recover. The government of India has also taken its first step towards the use of AI in cyber security through a project that aims to provide cyber forensic services to various government agencies, including law enforcement.

Indian intelligence agencies have also entered into an agreement with the tech startup Innefu, which uses AI to process data and decipher threats by analysing the patterns of past threats.

As India becomes increasingly data dense, both private and public organizations need to treat cyber security with the utmost seriousness and protect their data from crippling attacks.

Conclusion

Enterprises have become storehouses of user data, and SOCs have a responsibility to protect this data. Companies' SOCs have been plagued by several problems, such as a lack of skilled technicians, delays in response time and an inability to respond proactively to attacks. AI and ML can help with continuous monitoring and can take over the more repetitive and time-consuming tasks, leaving technicians with more time to work on damage control. It must be kept in mind, however, that AI is not a silver bullet, since attackers will try their best to confuse AI systems through evasion techniques such as adversarial AI (where attackers craft inputs designed to trick a machine learning model into making a mistake).

Hence, human intervention and monitoring of AI and ML systems in cyber security is essential to maintain the defence and protection mechanisms of enterprises.

A few points that Indian SOCs need to consider while using AI and ML:

1. Companies need to understand that AI and ML need human expertise and supervision to be effective; substituting AI for people is therefore not ideal.

2. Companies need to give equal, if not greater, importance to data security.

3. The companies need to constantly upgrade their systems and re-skill their technicians to combat cyber security threats.

4. The AI and ML systems need to be regularly audited to ensure that they are not compromised by cyber attacks and also to ensure that they are not generating false positives.


[1]. Cisco. (2018, February). Annual Cybersecurity Report. Retrieved March 8, 2018, from https://www.cisco.com/c/dam/m/digital/elq-cmcglobal/witb/acr2018/acr2018final.pdf?dtid=odicdc000016&ccid=cc000160&oid=anrsc005679&ecid=8196&elqTrackId=686210143d34494fa27ff73da9690a5b&elqaid=9452&elqat=2

[2]. Ibid.

[3]. Enterprise Strategy Group. (2017, March). Top-of-mind Threats and Their Impact on Endpoint Security Decisions. Retrieved March 8, 2018, from https://www.cylance.com/content/dam/cylance/pdfs/reports/ESG-Research-Insights-Report-Summary-Cylance-Oct-2017.pdf

[4]. Ibid.

[5]. Vorobeychik, Y. (2016). Adversarial AI. Retrieved March 8, 2018, from https://www.ijcai.org/Proceedings/16/Papers/609.pdf

[6]. Quora. (2018, February 15). How Will Artificial Intelligence And Machine Learning Impact Cyber Security? Retrieved March 8, 2018, from https://www.forbes.com/sites/quora/2018/02/15/how-will-artificial-intelligence-and-machine-learning-impact-cyber-security/#569454786147

[7]. Sparkes, P. (2018, February 27). The 5 Essentials of Every Next-Gen SOC. Retrieved March 8, 2018, from https://www.brighttalk.com/webcast/13389/303251/the-5-essentials-of-every-next-gen-soc

[8]. PTI. (2018, February 21). Indian companies lost $500,000 to cyber attacks in 1.5 years: Cisco. Retrieved March 8, 2018, from https://economictimes.indiatimes.com/tech/internet/indian-companies-lost-500000-to-cyber-attacks-in-1-5-years-cisco/articleshow/63019927.cms

[9]. Cisco. (2018, February). Annual Cybersecurity Report. Retrieved March 8, 2018, from https://www.cisco.com/c/dam/m/digital/elq-cmcglobal/witb/acr2018/acr2018final.pdf?dtid=odicdc000016&ccid=cc000160&oid=anrsc005679&ecid=8196&elqTrackId=686210143d34494fa27ff73da9690a5b&elqaid=9452&elqat=2

[10]. Raval, A. (2018, January 30). AI takes cyber security to a new level for HDFC Bank. Retrieved March 8, 2018, from http://computer.expressbpd.com/magazine/ai-takes-cyber-security-to-a-new-level-for-hdfc-bank/23580/

[11]. “The Centre for Development of Advanced Computing (C-DAC) under the Ministry of Electronics and Information Technology (MeitY) is working on a project to provide cyber forensic services to law-enforcing and other government and non-government agencies.” Ohri, R. (2018, February 15). Government readies AI-muscled cyber security plan. Retrieved March 8, 2018, from https://economictimes.indiatimes.com/news/politics-and-nation/government-readies-ai-muscled-cyber-security-plan/articleshow/62922403.cms

[12]. Chowdhury, P.A. (2017, January 30). Cyber Warfare at large in Southeast Asia, India leverages AI for the same cause. Retrieved March 8, 2018, from https://analyticsindiamag.com/cyber-warfare-large-southeast-asia-india-leverages-ai-cause/

[13]. OpenAI. (2017, February 24). Attacking Machine Learning with Adversarial Examples. Retrieved March 8, 2018, from https://blog.openai.com/adversarial-example-research/

Analysis of ICANN revenue shows ambiguity in their records

by Sunil Abraham, Arjun Venkatraman and Akriti Bopanna — last modified Apr 27, 2018 10:01 AM
We, the Centre for Internet and Society, have been instrumental in pushing ICANN to become transparent about its revenue through our persistent requests for its sources of revenue.

Click to download a PDF of the Analysis


In 2014, CIS' Sunil Abraham demanded greater financial transparency from ICANN at both the Asia Pacific IGF and the ICANN Open Forum at the IGF. Later that year, CIS was provided with a list of ICANN's sources of revenue for the financial year 2014, including payments from registries, registrars and sponsors, among others, by ICANN India head Mr. Samiran Gupta. This was a big step for CIS and the Internet community, as before this, no details of granular income had ever been publicly divulged by ICANN on request.[1] Our efforts have resulted in this information now being publicly available from 2012 onwards. We then decided to analyse all these years of financial data, collaborating with Ashoka fellow Arjun Venkatraman. The following are our observations:

ICANN's revenue has been growing steadily over the years. In 2016 it was 1.7 times the revenue it made in 2012.

ICANN's total revenue from 2012 to 2016

A breakdown by country reveals that a significantly higher proportion of their revenue is from sources registered in the United States.

Country wise contribution to ICANN revenue

It is also interesting to note that revenue from China has seen a spike in the past two years, especially in the period 2015-2016. Verisign CEO James Bidzos confirmed in an interview with analysts that Chinese activity had surprised them as well, though they expected the activity to slow down in the second quarter of 2016.[2]

Country wise contribution without USA

Verisign also happens to be ICANN's top-paying customer every year, running the .com/.net names. Its payments are orders of magnitude greater than the payments made by any other single entity, or even by several entities combined.

Highest contributing individual entities

ICANN differentiates its sources of revenue by class of entity, where the classes stand for the following:

  • RYN - Registry
  • OTH - Other
  • RYG - Registry
  • RIR - Regional Internet Registry
  • RYC - ccTLD (country code Top Level Domain)
  • IDN - Internationalized Domain Names
  • RAR - Registrar
  • SPN - Sponsor

Contribution of entities by class

It is evident that the Registries and Registrars contribute the most to revenue; however, the classification of these groups is itself ambiguous. RYG and RYN both stand for registry, but we find no explanation for the double entry for a single group. Secondly, Sponsors are included, yet it is unclear how they have sponsored ICANN, whether through travel and accommodation of personnel or through some other mode of institutional sponsorship. The Regional Internet Registries are clubbed under one heading, and as a consequence it is not possible to determine individual RIR contributions, such as how much APNIC paid for the Asia-Pacific region. The total payment made by the RIRs is a small fraction of the payments made by many other entities, and they all pay through the Number Resource Organization (NRO), which is listed as paying from Uruguay, although the MoU creating the NRO does not specify their location as being there. The NRO website states that "RIRs may be audited by external parties with regards to their financial activities or their operations. RIRs may also allow third parties to report security incidents with regards to their services."[3] Their records show that financial disclosure is done in an inconsistent manner, with the last publication from AFRINIC being for the year 2013,[4] while the RIPE NCC, which coordinates Europe, the Middle East and parts of Central Asia, last published an annual report for the year 2016, but it contained no financial information.[5]

Counting the most frequently occurring words in the names of these sources, which gives an idea of the corporate structure of the contributing entities, yields the following result.

Different structures of the sources
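A frequency count of this kind is straightforward to reproduce. The sketch below shows the idea on invented entity names; the actual ICANN payment records are not reproduced here.

```python
import re
from collections import Counter

def structure_words(entity_names, top=3):
    """Count the most frequent words across entity names.

    Frequent words such as LLC, Inc, Ltd or GmbH hint at the corporate
    structure of the contributing entities.
    """
    words = Counter()
    for name in entity_names:
        words.update(re.findall(r"[A-Za-z]+", name.lower()))
    return words.most_common(top)

# Invented entity names standing in for the real payment records.
sample = [
    "Example Registrars LLC",
    "Acme Domains LLC",
    "Foo Registry Inc",
    "Bar Holdings Inc",
    "Baz Names LLC",
]
```

On this invented sample, "llc" and "inc" dominate the count, which is the kind of pattern the chart above summarises for the real data.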

Several clients have registered multiple corporate entities to increase their payments to ICANN, such as DropCatch, Everest and Camelot.[6] The first of these, DropCatch, is a domain drop-catcher, essentially selling expired domain names to the highest bidder. By the end of 2016, about 43% of all ICANN-accredited registrars were controlled by it.[7]

Many clients have reported themselves from different countries over the years, such as 'Verisign Sarl', which has been reported as originating from Switzerland in one year and from the United States in another.[8] Another curious case is the entity 'Afilias plc', which is reported from Ireland when categorized as a sponsor (SPN), but from the United States as a registry (both RYG and RYN). Some entities have originated in one place, such as the United Arab Emirates, and then moved to other countries, such as India.

To summarize, the key takeaways from the information we have dissected so far are:

- ICANN's revenue has been steadily increasing, with 2016 revenue 1.6 times that generated in 2012.

- The United States is the country from which most of the revenue originates.

- After the US, China is now the largest contributor to ICANN's revenue, having significantly increased its contributions since 2015.

- Verisign is the top contributing entity, with a contribution far greater than that of any other entity.

- Registries and registrars are the main sources of revenue, though there is ambiguity in the classifications provided by ICANN, such as the difference between RYG and RYN. The exact mode of sponsor contributions is not highlighted either.

- Several entities have been listed from different places in different years, sometimes depending on the role they played, such as whether they acted as a sponsor or a registry. Registering multiple corporate entities to acquire more registrars has occurred as well.


1. Venkataraman, P. (2017). CIS' Efforts Towards Greater Financial Disclosure by ICANN. [online] The Centre for Internet and Society. [Accessed 14 Mar. 2018].

3. Nro.net. (2018). RIR Accountability Questions and Answers | The Number Resource Organization. [online] [Accessed 11 Mar. 2018].

7. Id.

8. Detailed list is available on request.

Cambridge Analytica scandal: How India can save democracy from Facebook

by Sunil Abraham last modified Mar 28, 2018 03:44 PM
Hegemonic incumbents like Google and Facebook need to be tackled with regulation; govt should use procurement power to fund open source alternatives.

The article was published in the Business Standard on March 28, 2018


The Cambridge Analytica scandal came to light when whistleblower Christopher Wylie accused the firm of gathering the details of 50 million Facebook users. Cambridge Analytica used this data to psychologically profile these users and manipulate their opinions in favour of Donald Trump. The BJP and the Congress have accused each other of using Cambridge Analytica's services in India as well. How can India safeguard the democratic process against such intervention? The author tries to answer this question in this Business Standard Special.

Those that celebrate the big data/artificial intelligence moment claim that traditional approaches to data protection are no longer relevant and therefore must be abandoned. The Cambridge Analytica episode, if anything, demonstrates how wrong they are. The principles of data protection need to be reinvented and weaponized, not discarded. In this article I shall discuss the reinvention of three such data protection principles. Apart from this I shall also briefly explore competition law solutions.

Collect data only if mandated by regulation

One, data minimization is the principle that requires the data controller to collect data only if mandated to do so by regulation or because it is a prerequisite for providing a functionality. For example, Facebook’s Messenger app on Android harvests call records and metadata without any consumer-facing feature on the app that justifies such collection. This is therefore a clear violation of the data minimization principle. One way to reinvent this principle is by borrowing from the best practices around warnings and labels on packaging introduced by the global anti-tobacco campaign. A permanent bar could be required in all apps, stating ‘Facebook holds W number of records across X databases over the time period Y, which totals Z Gb’. Each of these letters could be a hyperlink, allowing the user to easily drill down to the individual data record.
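The proposed disclosure bar could be generated from a controller's own metadata. A minimal sketch (the function name and all figures below are invented for illustration; no such API exists):

```python
def data_holding_label(records, databases, period, size_gb):
    # Format the always-visible disclosure bar proposed above; in a
    # real app, each value would be a hyperlink into the actual data.
    return (
        f"Facebook holds {records:,} records across {databases} databases "
        f"over the time period {period}, which totals {size_gb} Gb"
    )

# Hypothetical values for a single user's holdings.
print(data_holding_label(12500, 4, "2015-2018", 1.2))
```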

Consent must be explicit, informed and voluntary

Two, the principle of consent requires that the data controller secure explicit, informed and voluntary consent from the data subject unless there are exceptional circumstances. Unfortunately, consent has been reduced to a mockery today through obfuscation by lawyers in verbose “privacy notices” and “terms of service”. To reinvent consent, we need to bring ‘Do Not Dial’ registries into the era of big data. A website maintained by the future Indian data protection regulator could allow individuals to check against their unique identifiers (email, phone number, Aadhaar). The website would provide a list of all data controllers holding personal information against a particular unique identifier. The data subject should then be able to revoke consent with one click. Once consent is revoked, the data controller would have to delete all personal information they hold, unless retention of such information is required under law (for example, in banking law). One-click revocation of consent will make data controllers like Facebook treat data subjects with greater respect.
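The one-click revocation workflow described above can be sketched as follows (the class, its methods and the controller names are all hypothetical; no such registry exists yet):

```python
# Hypothetical sketch of a consent registry: it maps a unique
# identifier (email/phone/Aadhaar) to the data controllers holding
# personal data against it, and supports one-click revocation.
class ConsentRegistry:
    def __init__(self):
        # identifier -> {controller_name: retention_required_by_law}
        self.records = {}

    def register(self, identifier, controller, legally_retained=False):
        self.records.setdefault(identifier, {})[controller] = legally_retained

    def lookup(self, identifier):
        # The data subject sees every controller holding their data.
        return sorted(self.records.get(identifier, {}))

    def revoke(self, identifier):
        # One-click revocation: controllers must delete the data unless
        # retention is required under law (e.g. banking regulation).
        holders = self.records.get(identifier, {})
        self.records[identifier] = {
            c: True for c, retained in holders.items() if retained
        }
        return self.records[identifier]

registry = ConsentRegistry()
registry.register("user@example.com", "SocialApp")
registry.register("user@example.com", "MyBank", legally_retained=True)
print(registry.lookup("user@example.com"))  # both controllers listed
print(registry.revoke("user@example.com"))  # only MyBank may retain data
```

The design choice worth noting is that legally mandated retention survives revocation, mirroring the banking-law exception in the text.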

There must be a right to explanation

Three, the right to explanation, most commonly associated with the General Data Protection Regulation (GDPR) from the EU, is a principle that requires the data controller to make transparent the automated decision-making process when personal information is implicated. So far it has been seen as a reactive measure for user empowerment. In other words, the explanation is provided only when there is a demand for it.

The Facebook feeds that were used for manipulation through micro-targeting of content are an example of such automated decision-making. Regulation in India should require a user empowerment panel accessible through a prominent icon that appears repeatedly in the feed. On clicking the icon, the user would be able to modify the objectives that the algorithm is maximizing for. She could then choose to see content that targets a bisexual rather than a heterosexual, a Muslim rather than a Hindu, a conservative rather than a liberal, etc. At the moment, Facebook only allows the user to stop being targeted for advertisements based on certain categories. However, to be less susceptible to psychological manipulation, the user should be allowed to define these categories, for both content and advertisements.

How to fix the business model?

From a competition perspective, Google and Facebook have destroyed the business model for real news, and replaced it with a business model for fake news, by monopolizing digital advertising revenues. Their algorithms are designed to maximize the amount of time users spend on their platforms, and therefore have no incentive to distinguish between truth and falsehood. This contemporary crisis requires three types of intervention: one, appropriate taxation and transparency to the public, so that the revenue streams for fake news factories can be ended; two, the construction of a common infrastructure that can be shared by all traditional and new media companies in order to recapture digital advertising revenues; and three, immediate action by the competition regulator to protect competition between advertising networks operating in India.

The Google challenge

With Google, the situation is even worse, since Google has dominance in both the ad network market and in the operating system market. During the birth of competition law, policy-makers and decision-makers acted to protect competition per se. This is because they saw competition as an essential component of democracy, open society, innovation, and a functioning market. When the economists from the Chicago school began to influence competition policy in the USA, they advocated for a singular focus on the maximization of consumer interest. The adoption of this ideology has resulted in competition regulators standing powerlessly by while internet giants wreck our economy and polity. We need to return to the foundational principles of competition law, which might even mean breaking Google into two companies. The operating system should be divorced from other services and products to prevent them from taking advantage of vertical integration. We as a nation need to start discussing the possible end stages of such a breakup.

In conclusion, all the fixes listed above require either the enactment of a data protection law or the amendment of our existing competition law. This, as we all know, can take many years. However, there is an opportunity for the government to act immediately if it wishes to. By utilizing procurement power, the central and state governments of India could support free and open source software alternatives to Google’s products, especially in the education sector. The government could also stop using Facebook, Google and Twitter for e-governance, and thereby stop providing free advertising for these companies in print and broadcast media. This would make it easier for emerging firms to dislodge hegemonic incumbents.

DIDP Request #29 - Revenue breakdown by source for FY 2017

by Akriti Bopanna last modified Apr 26, 2018 11:06 AM
We requested ICANN for financial information they have not yet provided for the period ending June 2017.

ICANN's published financial records for 2017 were missing a crucial document: the breakdown of revenue by source, listing all the legal entities that contributed to it, including Regional Internet Registries and various registrars, along with their countries of origin and other details. We have requested this document in order to get a better idea of how these entities contribute to ICANN.

In response to our DIDP, ICANN notified us that they are in the process of compiling this report for the year ending June 2017 and will publish it by 31 May 2018. Further, they remarked that this procedure of making revenue by source public was developed as part of ICANN’s transparency enhancements, in response to an earlier CIS DIDP submitted in 2015.

The said report will be published on their Financial page within the time frame mentioned.

Government gives free publicity worth 40k to Twitter and Facebook

by Akriti Bopanna — last modified Apr 27, 2018 09:52 AM
We conducted a two-week survey of newspapers for links between government advertisements and social media giants. As citizens, we should be worried about the close nexus between the Indian government and digital behemoths such as Facebook, Google and Twitter. It became apparent to us, after a two-week print media analysis, that our government has been providing free publicity worth Rs 40,000 to these entities. There are multiple issues with this, as this article attempts to point out.


We analyzed five English-language newspapers daily for two weeks, from March 12th to 26th: one week of newspapers in Lucknow and the second week in Bangalore. Facebook, Twitter, Instagram and Alphabet-backed services such as YouTube and Google Plus were part of our survey. Of a total of 33 advertisements (14 in Lucknow and 19 in Bangalore), Twitter stands out as the most prominent platform used by government agencies, appearing in 30 ads, though Facebook, at 29, was more expensive. To ascertain the rates of publicity, we used the current advertisement rates of the Times of India, as our purpose was solely to give a rough estimate of how much the government is spending.
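The tallies above can be reproduced with simple arithmetic. A single newspaper ad often carries several social media logos, which is why the per-platform counts (30 and 29) approach the total of 33 ads:

```python
# Aggregate counts as reported in the survey; one ad can mention
# several platforms, so per-platform tallies may exceed the total.
ads_lucknow, ads_bangalore = 14, 19
total_ads = ads_lucknow + ads_bangalore
platform_mentions = {"Twitter": 30, "Facebook": 29}

print(total_ads)  # 33 government advertisements over two weeks
for platform, count in platform_mentions.items():
    share = count / total_ads
    print(f"{platform} appeared in {count} of {total_ads} ads ({share:.0%})")
```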

Advertising of this nature is not merely a problem of favoring some social media companies over others; it is also symptomatic of a bigger problem: the lack of native e-governance mechanisms, which causes the government to rely on and promote these platforms. Where we do have guidelines, they are not being followed. By outsourcing e-governance platforms to Twitter, as with TwitterSeva, a feature created by the Twitter India team to help citizens connect better with government services, there is less impetus for departments to construct better websites of their own.

If this is because we currently do not have the capacity to build them ourselves, then it is imperative that this changes. We should be executing government functions either on digital infrastructure owned by the government or on open and interoperable systems. If anything, the surveyed social media platforms can be used to enhance pre-existing facilities. Currently, however, the converse is true, with these platforms overshadowing the presence of e-governance websites. Officials have started responding to complaints on Twitter, diluting the significance of the complaint mechanisms on their respective departments’ portals; often enough, such features are not even available on the relevant government website.

This sets a dangerous precedent for a citizen management system, as the records of such interactions are then in the hands of companies that may not exist in the future. As a result, they can control access to such records or, worse, tamper with them. The preservation and reliability of such data can be ensured only if it is stored within the government’s reach, or if it is open and public with a first copy stored in government records, which ensures transparency as well.

Data portability is an important facet of this issue, as well as a right consumers should possess. It provides for support of many devices and for transition to alternative technologies, and it ensures that all the data, like other public records, will be available upon request through the Right to Information procedure. This last point is vital to upholding the spirit of transparency envisioned in the RTI process, since interactions of the government with citizens then come under its ambit and are available for disclosure to whomsoever is concerned.

Secondly, such practices by the government enhance the monopoly of these companies in the market, effectively discouraging competition and, eventually, innovation. While a certain elite stratum of the population might opt for Twitter or Facebook as its mode of conveying grievances, this may not hold true for the rest of India’s online population.

Picking players in a free market violates technology and vendor neutrality, a practice essential in e-governance to provide a level playing field for all competing technologies. Projecting only a few platforms as the de facto mediums of communication with the government inhibits citizens’ freedom to air their grievances through a vendor or technology they are comfortable with. At the same time, it makes the government a mouthpiece for companies that gain free publicity and consolidate their popularity. Government apps such as the Swachh Bharat e-governance app do not offer much more in terms of functionality; they either mirror the website or are a less mature version of it. This leads to the problem of fragmentation, with many avenues of complaint: the website, the app, Twitter, and so on. Consequently, it is unclear which platform the people dealing with complaints prioritize when responding. Will I be responded to sooner if I tweet a complaint as opposed to putting it up on the app? An interoperable system can solve this: the government could maintain a dashboard of its various complaints, with responses made out evenly. Such a dashboard could aggregate complaints from Facebook and other channels as well, so that TwitterSeva becomes one platform among equals rather than the favoured channel it is today.
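The interoperable grievance dashboard described above could be sketched as follows (a hypothetical illustration; the class and platform names are invented, and no such system currently exists):

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class Complaint:
    citizen: str
    platform: str   # "Twitter", "Facebook", "department website", app, ...
    text: str

class GrievanceDashboard:
    """Hypothetical dashboard: complaints from any channel land in one
    queue, so no single platform is favoured over another."""

    def __init__(self):
        self.queue = deque()  # first come, first served across platforms

    def file(self, complaint):
        self.queue.append(complaint)

    def next(self):
        # Officials respond in order of arrival, not by platform.
        return self.queue.popleft()

dash = GrievanceDashboard()
dash.file(Complaint("A", "Twitter", "streetlight broken"))
dash.file(Complaint("B", "department website", "water supply issue"))
print(dash.next().platform)  # the earliest complaint, regardless of channel
```

The point of the sketch is the single queue: a tweet and a website form submission are answered in arrival order, removing the incentive to complain on a favoured platform.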

Recent events have illustrated how detrimental the storage of data by these giants can be in terms of privacy. Data security concerns are also a consequence of such leaks. This is not only a long-overdue call for a better data protection law but also a call for the government to realize that these platforms cannot be trusted. The hiring of Cambridge Analytica to influence voters in the US elections, based on their Facebook profiles and ancillary data, effectively put the governance of the country on sale by exploiting these privacy and security issues. By basing e-governance on their backbone, India is not far from inviting similar trouble. It is unnecessary and dangerous to have a go-between for matters between an individual and the state.

As this article was being written, the Election Commission confirmed that it is partnering with Facebook for the Karnataka Assembly elections to promote activities such as Voter ID enrollment and voter participation. Initiatives like these, which tie the government even closer to these companies and cement the latter’s stronghold, are of concern.

Note: Our survey data and results are attached to this post. All research was collected by Shradha Nigam, a fifth-year student at NLSIU, Bangalore.


Survey Data and Results

This report is based on a survey of government advertisements in English language newspapers in relation to their use of social media platforms and dedicated websites (“Survey”). For the purpose of this report, the ambit of the social media platforms has been limited to the use of Facebook, Twitter, YouTube, Google Plus and Instagram. The report was prepared by Shradha Nigam, a student from National Law School of India University, Bangalore. Read the full report here.

Artificial Intelligence in Governance: A Report of the Roundtable held in New Delhi

by Saman Goudarzi and Natallia Khaniejo — last modified May 03, 2018 03:49 PM
This report provides an overview of the proceedings of the Roundtable on Artificial Intelligence (AI) in Governance, conducted at the India Islamic Cultural Centre in New Delhi on March 16, 2018. The main purpose of the Roundtable was to discuss the deployment and implementation of AI in various aspects of governance within the Indian context. The event was attended by participants from academia, civil society, the legal sector, the finance sector, and the government.

Event Report: Download (PDF)


This report provides a summary of the proceedings of the Roundtable on Artificial Intelligence (AI) in Governance (hereinafter referred to as ‘the Roundtable’). The Roundtable took place at the India Islamic Cultural Centre in New Delhi on March 16, 2018 and included participation from academia, civil society, law, finance, and government. The main purpose of the Roundtable was to discuss the deployment and implementation of AI in various aspects of governance within the Indian context.

The Roundtable began with a presentation by Amber Sinha (Centre for Internet and Society - CIS) providing an overview of CIS’s research objectives and findings thus far. During this presentation, he defined both AI and the scope of CIS’s research, outlining law enforcement, defense, education, judicial decision making, and the discharge of administrative functions as the main areas of concern for the study. The presentation then outlined the key AI deployments and implementations that the research has identified in each of these areas. Lastly, the presentation raised some of the ethical and legal concerns related to this phenomenon.

The presentation was followed by the Roundtable discussion, which covered various topics regarding the usages, challenges, ethical considerations and implications of AI in the sector. This report identifies a number of key themes evident throughout these discussions. These themes include: (1) the meaning and scope of AI, (2) AI’s sectoral applications, (3) human involvement in automated decision making, (4) the social and power relations surrounding AI, (5) regulatory approaches to AI and (6) challenges to adopting AI. These themes are explored further below.

Meaning and Scope of AI

One of the first tasks recommended by the group of participants was to define the meaning and scope of AI and the way those terms are used and adopted today. These concerns included the need to establish a distinction between the use of algorithms, machine learning, automation and artificial intelligence. Several participants believed that establishing consensus around these terms was essential before proceeding towards a stage of developing regulatory frameworks around them.

The general consensus was that AI as we understand it does not necessarily extend to complete independence in automated decision making; it refers instead to the varying levels of machine learning (ML), and the automation of certain processes, that have already been achieved. Several concerns that emerged during the discussion centred on the questions of autonomy and transparency in ML and algorithmic processing. Stakeholders noted that, over and above the debates about humans in the loop,[1] on the loop[2] and out of the loop,[3] there were several other gaps with respect to AI and its usage in the industry today which also need to be considered before building a roadmap for future usage. Key issues like information asymmetries, communication lags, a lack of transparency, the increased mystification of the coding process and the centralization of power all need to be examined and analysed in the course of developing regulatory frameworks.

Takeaway Point: The group brought out the need for standardization of terminology as well as the establishment of globally replicable standards surrounding the usage, control and proliferation of AI. The discussion also brought up the problems with the universal applicability of norms. One participant raised the lack of normative frameworks around the usage and proliferation of AI; another responded by alluding to the Asilomar AI Principles,[4] a set of 23 principles aimed at directing and shaping future AI research. The discussion brought out further issues regarding the enforceability, universal applicability and global relevance of these principles. Participants recommended the development of a shorter, more universally applicable regulatory framework that could also address various contextual limitations.

AI Sectoral Applications

Participants mentioned a number of current and potential applications of AI technologies, referencing the defence, financial, and agriculture sectors. Several developments are taking place on the Indian military front, with the Committee on AI and National Security being established by the Ministry of Defence. Through the course of the discussion it was also stated that the Indian Armed Forces are very interested in the possibilities of using AI for their own strategic and tactical purposes. From a technological standpoint, however, there has been limited progress in India in researching and developing AI.

While India does deploy some Unmanned Aerial Vehicles (UAVs), they are mostly bought from Israel and are often not autonomous. It was also pointed out that, contrary to reportage in the media, the defence establishment in India is extremely cautious about the adoption of autonomous weapons systems, and that the autonomous technology being rolled out by the Centre for Artificial Intelligence and Robotics (CAIR) is not yet considered trustworthy enough for deployment.

Discussions further revealed that the few technologies with a relative degree of autonomy are primarily loitering munitions, used to target radar installations for reconnaissance purposes. One participant mentioned that while most militaries are interested in deploying AI, it is primarily from an Intelligence, Surveillance and Reconnaissance (ISR) perspective. The one exception to this generalization is China, where the military ethos and command structure would work better with increased reliance on independent AI systems. One major AI system rolled out by the US is Project Maven, which is primarily an ISR system. The aim of using these systems is to improve decision making and enhance data analysis, particularly since battlefields generate a lot of data that isn’t used anywhere.

Another sector discussed was the securities market, where algorithms were used from an analytical and data-collection perspective. A participant noted that machine learning was being used for processes like credit and trade scoring -- all with humans on the loop. The participant further suggested that while trade scoring was increasingly automated, the overall predictive nature of such technologies remained self-limiting: statistical models, collected data and pattern analysis were used to predict future trends. The participant questioned whether these algorithms could be considered AI in the truest sense of the term, since they primarily performed statistical functions and data analysis.

One participant also recommended applying AI to sectors like agriculture, with the intention of gradually acclimatizing users to the technology itself. Respondents stated that while AI technologies were being used in the agricultural space, it was primarily for data collection and analysis as opposed to predictive methods. It was mentioned that a challenge to the broad adoption of AI in this sector is that the core problems of adopting AI as a methodology – namely information asymmetries, excessive data collection, centralization of control, and the obfuscatory nature of code – would remain unaddressed. Lastly, participants suggested that within the Indian framework not much was being done aside from addressing farmers’ queries and analysing the data from those concerns.

Takeaway Point: The discussion drew attention to the various sectors where AI was currently being used -- such as the military space, agricultural development and the securities market -- as well as potential spaces of application -- such as healthcare and manual scavenging. The key challenges that emerged were information asymmetries with respect to the usage of these technologies as well as limited capacity in terms of technological advancement.

Human Involvement with Automated Decision Making

Large parts of the discussion throughout the Roundtable were concerned with automated decision making and, specifically, the involvement of humans (human on and in the loop) or lack thereof (human out of the loop) in this process. These discussions often distinguished between prescriptive and descriptive uses of AI.

Participants expressed that human involvement is not needed when AI is used for descriptive purposes, such as determining relationships between variables in large data sets. Many agreed on the superior ability of ML and similar AI technologies to describe large and unorganized datasets. It was the prescriptive uses of AI where participants saw the need for human involvement, with many questioning whether the technology should make more important decisions by itself.

The need for human involvement in automated decision making was further justified by references to various instances of algorithmic bias in the American context. One participant, for example, brought up the use of algorithmic decision making by a school board in the United States for human resource practices (hiring, firing, etc.) based on the standardized test scores of students. In this instance, such practices resulted in the termination of teachers primarily from low-income neighbourhoods.[5] The main challenge participants identified with regard to human-on-the-loop automated decision making is the issue of capacity, as significant training would be needed for sectors to have employees actively involved in the automated decision-making workflow.

An example from the healthcare field was brought up by one participant arguing for human-in-the-loop approaches in prescriptive scenarios. The participant suggested that AI technology, when given x-ray or MRI data for example, should be limited to pointing out correlations between diseases and patients’ scans. Analysis of such correlations should be reserved for the medical expertise of doctors, who would then determine whether any instances of causality can be identified from this data and whether it is appropriate for diagnosing patients.

It was emphasized that, despite a preference for human on/in the loop in automated decision making, there is a need to be cognisant of techno-solutionism, given the human tendency to over-rely on technology when making decisions. A need for command-and-control structures and protocols was emphasized for various governance sectors in order to avoid potentially disastrous results through a system of checks and balances. It was noted that the defense sector has already developed such protocols, having established a chain of command through its long history of algorithmic decision making (e.g. the Aegis Combat System used by the US Navy in the 1980s).

One key reason why militaries prefer human in and on the loop systems, as opposed to out of the loop systems, is the protocol associated with human action on the battlefield. International Humanitarian Law has clear indicators of what constitutes a war crime and who is to be held responsible in such a scenario, but developing such a framework for AI systems would be challenging, as it would be difficult to determine which party ought to be held accountable in the case of a transgression or mistake.

Takeaway Point: It was reiterated by many participants that neither AI technology nor India’s regulatory framework is at a point where AI can be trusted to make significant decisions alone -- especially when such decisions evaluate humans directly. It was recommended that human out of the loop decision making be reserved for descriptive practices, whereas human on and in the loop decision making should be used for prescriptive practices. Lastly, it was suggested that appropriate protocols be put in place to direct those involved in the automated decision-making workflow, particularly when the process involves judgements and complex decision making in sectors such as jurisprudence and the military.

The Social and Power Relations Surrounding AI

Some participants emphasized the need to contextualize discussions of AI and governance within larger themes of poverty, global capital and power/social relations. Their concern was that the use of AI technologies would recreate and reinforce existing power structures; AI should instead be utilized towards ameliorating such issues. Manual scavenging, for example, was identified as an area where AI could be used to good effect if coupled with larger socio-political policy changes. There are several hierarchies that could potentially be reinforced through this process, and all these failings need to be examined thoroughly before such a system is adopted and incorporated within the real world.

Furthermore the discussion also revealed that the objectivity attributed to AI and ML tends to gloss over the fact that there are nonetheless implicit biases that exist in the minds of the creators that might work themselves into the code. Fears regarding technology recreating a more exclusionary system were not entirely unfounded as participants pointed out the fact that the knowledge base of the user would determine whether technology was used as a tool of centralization or democratization.

One participant also questioned the concept of governance itself, contrasting the Indian government’s usage of the term in the 1950s (as it appears in the Directive Principles) with that of the World Bank in the 1990s.


Takeaway Point: Discussions of the implementation and deployment of AI within the governance landscape should attempt to take into consideration larger power relations and concepts of equity.

Regulatory Approaches to AI

Many recognized the need for AI-specific regulations across Indian sectors, including governance. These regulations, participants stated, should draw from notions of accountability, algorithmic transparency and efficiency. Furthermore, such regulations should account for variations across the different legs of the governance sector, especially with regard to defence. One participant, pointing to the larger trend towards automation, recommended the establishment of certain fundamental guidelines aimed at directing the applicability of AI in general. The participant drew attention to the need for a robust evaluation system for various sectors (the criminal justice system, the securities market, etc.) as a way of providing checks on algorithmic biases. Another emphasized the need for regulations mandating better-quality data, so as to ensure machine readability and processability for various AI systems.

Another key point that emerged was the importance of examining how specific algorithms performed processes like identification or detection. A participant recommended the need to examine the ways in which machines identify humans and what categories/biases could infiltrate machine-judgement. They reiterated that if a new element was introduced in the system, the pre-existing variables would be impacted as well. The participant further recommended that it would be useful to look at these systems in terms of the couplings that get created in order to determine what kinds of relations are fostered within that system.

The roundtable saw some debate regarding the most appropriate approach to developing such regulations. Some participants argued for a harms-based approach, particularly with regard to determining whether regulations are needed at all for specific sectors (as opposed to guidelines, best practices, etc.). The need to be cognisant of both individual and structural harms was emphasized, mindful of the possibility of algorithmic biases affecting traditionally marginalized groups.

Others saw value in a harms-based approach only insofar as it could help outline the appropriate penalties in the event of regulations being violated, arguing instead for a rights-based approach as it allowed greater room for technological change. An approach that kept in mind emerging AI technologies was reiterated by a number of participants as being crucial to any regulatory framework. The need for a regulatory space that allowed for technological experimentation without the fear of constitutional violation was also communicated.

Takeaway Point: The need for an AI-specific regulatory framework cognisant of differentiations across sectors in India was emphasized. There is some debate about the most appropriate approach for such a framework, with a harms-based approach identified by many as providing the best perspective on regulatory need and penalties. Some identified the rights-based approach as providing the most flexibility for a rapidly evolving technological landscape.

Challenges to Adopting AI

Of all the concerns regarding the adoption of algorithms, ML and AI, the two key points of resistance that emerged centred on issues of accountability and transparency. Participants suggested that within an AI system, predictability would be a key concern, and that in the absence of predictable outcomes, establishing redressal mechanisms would pose key challenges as well.

A discussion was also initiated regarding the problems involved in attributing responsibility within the AI chain as well as the need to demystify the process of using AI in daily life. While reiterating the current landscape, participants spoke about how the usage of AI is currently limited to the automation of certain tasks and processes in certain sectors where algorithmic processing is primarily used as a tool of data collection and analysis as opposed to an independent decision making tool.

One suggestion that emerged during the discussion was whether a gradual, sectoral adoption of AI might be more beneficial, as it would provide breathing room to test the system and establish trust between developers, providers, and consumers. This prompted a debate about the controllers and the consumers of AI and how the gap between the two would need to be negotiated. The debate also brought up larger concerns regarding the mystification of AI as a process itself and the complications of translating code into communicable points of intervention.

Another major issue that emerged was the question of attribution of responsibility in the case of mistakes. In the legal process as it currently exists, human imperfections notwithstanding, it would be possible to attribute the blame for decisions taken to certain actants undertaking the action. Similarly in the defence sector, it would be possible to trace the chain of command and identify key points of failure, but in the case of AI based judgements, it would be difficult to place responsibility or blame. This observation led to a debate regarding accountability in the AI chain. It was inconclusive whether the error should be attributed to the developer, the distributor or the consumer.

A suggestion offered to counter the information asymmetry, as well as to reduce the mystification of computational methods, was to make the algorithm and its processes transparent. This sparked a debate, however: participants stated that while such transparency ought to be aspired to, it would be accompanied by certain threats to the system. A key challenge pointed out was that if the algorithm were made transparent and its details shared, there would be several ways to manipulate, translate and misuse it.

Another question that emerged concerned the distribution of AI technologies and the centralization of the proliferation process, particularly in terms of service provision. One participant suggested that, given the limited nature of research being undertaken and the paucity of resources, a limited number of companies would end up holding the best technology, the best resources and the best people. They further suggested that these technologies might end up being rolled out as a service on a contractual basis, in which case it would be important to track how the service was being controlled and delivered. Models of transference would become central points of negotiation, with alternatives among procurement-based, lease-based, and ownership-based models of service delivery. Participants suggested that this was going to be a key factor in determining how to approach these issues from a legal and policy standpoint.

Takeaway Point: The two key points of resistance that emerged during the course of discussion were accountability and transparency. Participants pointed out the various challenges involved in attributing blame within the AI chain and they also spoke about the complexities of opening up AI code, thereby leaving it vulnerable to manipulation. Certain other challenges that were briefly touched upon were the information asymmetry, excessive data collection, centralization of power in the hands of the controllers and complicated service distribution models.

Conclusion

The Roundtable provided some insight into larger debates regarding the deployment and applications of AI in the governance sector of India. The need for a regulatory framework, as well as globally replicable standards surrounding AI, was emphasized, particularly one mindful of the particular needs of differing fields of the governance sector (especially defence). Furthermore, a need for human on/in the loop practices with regard to automated decision-making was highlighted for prescriptive instances, particularly when such decisions directly evaluate humans. Contextualising AI within its sociopolitical parameters was another key recommendation, as it would help filter out the biases that might work themselves into the code and affect the performance of the algorithm. Further, it is necessary to examine the involvement and influence of the private sector in the deployment of AI for governance, which often translates into the delivery of technological services by private actors to public bodies towards the discharge of public functions. This has clear implications for requirements of transparency and procedural fairness even in private-sector delivery of these services. Defining the meaning and scope of AI while working to demystify algorithms themselves would serve to strengthen regulatory frameworks as well as make AI more accessible for the user/consumer.


[1]. Automated decision making model where final decisions are made by a human operator

[2]. Automated decision making model where decisions can be made without human involvement but a human can override the system.

[3]. A completely autonomous decision making model requiring no human involvement

[4]. https://futureoflife.org/ai-principles/

[5]. The participant was drawing this example from Cathy O’Neil’s Weapons of Math Destruction, (Penguin,2016), at 4-13.

A look at two problematic provisions of the draft Anti-trafficking bill

by Swaraj Paul Barooah last modified Aug 18, 2018 09:21 AM
This post examines two badly drafted provisions of the new Anti-Trafficking bill that have the potential to severely impinge upon the Freedom of Expression, including through a misunderstanding of intermediary liability.

On 28 Feb 2018, the Union Cabinet approved ‘The Trafficking of Persons (Prevention, Protection and Rehabilitation) Bill, 2018’ (‘the bill’) for introduction to the Parliament. This comes after a series of consultations on an earlier 2016 draft bill, which had faced its fair share of criticism. As per the Press Information Bureau announcement, the Ministry of Women and Child Development met with various stakeholders, including 60 NGOs, and has incorporated many of the suggestions put forth. It has also stated that ‘the new law will make India a leader among South Asian countries to combat trafficking.’

However, at first glance, there appear to be several issues with overbroad or vague language used in the drafting of the bill that stretch it into potentially problematic areas. This post will focus on two such provisions that could have a deleterious effect on the Freedom of Expression. As the bill is currently not publicly available, a stakeholder’s copy of the draft is being used to source these provisions. The relevant sections have been reproduced below for convenience. (Emphasis in bold is as provided by the author.)

Section 39: Buying or Selling of any person

39. (1) Whoever buys or sells any person for a consideration, shall be punished with rigorous imprisonment for a term which shall not be less than seven years but may extend to ten years, and shall also be liable to fine which shall not be less than one lakh rupees.

(2) Whoever solicits or publicises electronically, taking or distributing obscene photographs or videos or providing materials or soliciting or guiding tourists or using agents or any other form which may lead to the trafficking of a person shall be punished with rigorous imprisonment for a term which shall not be less than five years but may extend to ten years, and shall also be liable to fine which shall not be less than fifty thousand rupees but which may extend to one lakh rupees.

The grammatical acrobatics of section 39(2) aside, this anti-solicitation provision is severely problematic in that it mandates punishment even for a vaguely defined action or actions that may not actually be connected to the trafficking of a person. In other words, the provision doesn’t require any of the actions to be connected to trafficking in their intent or even outcome, but only in potential connection to the outcome. At the same time, it says these ‘shall’ be punished!

This vagueness, which ignores actual or even probabilistic causation, flies in the face of standard criminal law, which requires mens rea along with actus reus. The excessively wide scope of this badly drafted provision leaves it prone to abuse. For example, the provision currently allows the following interpretation: ‘Whoever publicizes electronically, by providing materials in any form, which may lead to trafficking of a person shall be punished…’. Even the electronic publicizing of an academic study on trafficking could fall under the provision as it currently reads, if it is argued that publishing studies showing the prevalence of trafficking ‘may lead to the trafficking of a person’! It is not hard to imagine that an academic study that shows trafficking numbers at embarrassingly high rates could be threatened with this provision. Similarly, any of our vast number of self-appointed moral guardians could also pull within this provision any artistic work that they may personally find offensive or ‘obscene’. Simply put, without any burden of showing a causal connection, it could be argued that anything ‘may lead’ to the trafficking of a person. Needless to say, this paves the way for a severe chilling effect on free speech, especially on critical speech around trafficking issues.

Section 41: Offences related to media

41. (1) Whoever commits trafficking of a person with the aid of media, including, but not limited to print, internet, digital or electronic media, shall be punished with rigorous imprisonment for a term which shall not be less than seven years but may extend to ten years and shall also be liable to fine which shall not be less than one lakh rupees.

(2) Whoever distributes, or sells or stores, in any form in any electronic or printed form showing incidence of sexual exploitation, sexual assault, or rape for the purpose of exploitation or for coercion of the victim or his family members, or for unlawful gain shall be punished with rigorous imprisonment for a term which shall not be less than three years but may extend to seven years and shall also be liable to fine which shall not be less than one lakh rupees.

The drafters of this bill have perhaps overlooked the fact that, unlike the physical world, the infrastructure of the electronic/digital world requires third-party intermediaries to handle information during most forms of electronic activity, whether transmission, storage or display. As it is not feasible, desirable or even practically possible for intermediaries to verify the legality of every bit of data that gets transferred or stored by them, ‘safe harbours’ are provided in law for intermediaries, protecting them from liability for the information being transmitted through them. These ensure that entities that serve as architectural requirements and intermediary platforms are able to operate smoothly and without fear. If intermediaries are not granted this protection, it puts them in the unenviable position of having to monitor un-monitorable amounts of data, and of facing legal action for the slip-ups that are bound to happen regularly. Furthermore, there are several levels of free speech and privacy issues associated with having multiple gatekeepers on the expression of speech online. A charitable reading of the intent of a provision which does not recognise safe harbours for third-party intermediaries would be that the drafters of the bill have simply not realised that users who upload and initiate transfer of information online are not the same parties who do the actual transmission of the information.

Distribution, selling or storing of information online would require the transmission of information over intermediaries, as well as the temporary storage of such information on intermediary platforms. In India, intermediaries engaging with transmission or temporary storage of information are provided safe harbour[1] by Section 79 of the Information Technology Act, 2000 (‘IT Act’), so long as they:

(i) act as a mere ‘conduit’ and do not initiate the transmission, select the receiver of the transmission, or select or modify the information contained in the transmission.

(ii) exercise due diligence while discharging duties under this Act, and observe other guidelines that the Central Government may prescribe.

The Information Technology (Intermediary Guidelines) Rules, 2011, list out the nature of the due diligence to be followed by intermediaries to claim exemption under Section 79 of the IT Act.

Intermediaries will not be granted safe harbour if they have conspired, abetted, aided or induced commission of the unlawful act, or if they do not remove or disable access to information upon receiving actual knowledge, or notice from the Government, of the information that is transmitted or stored by the intermediary being used for unlawful purposes.

Thus it can be seen that the IT Act already provides an in-depth regime for intermediary liability. Given its non-obstante clause, which states that Section 79 of the IT Act would apply “Notwithstanding anything contained in any law for the time being in force”, as well as the reiteration of the IT Act’s overriding effect via Section 81, which states that the provisions of the Act ‘shall have effect notwithstanding anything inconsistent therewith contained in any other law for the time being in force’ (barring the exercise of copyright or patent rights), it is generally considered the appropriate legal framework for this issue. However, it appears that the drafters of the 2018 Anti-trafficking bill have not considered this aspect at all: they have not referenced the IT Act in this context in the bill, and have added their own non-obstante clause in Section 59 of the bill:

59. The provisions of this Act, shall be in addition to and not in derogation of the provisions of any other law for the time being in force and, in case of any inconsistency, the provisions of this Act shall have overriding effect on the provisions of any such law to the extent of the inconsistency.

So the regime prescribed by the IT Act allows for safe harbours, whereas the regime prescribed by the Anti-Trafficking bill does not, and both say that they would have overriding effect over any conflicting law. This legislative bumble could potentially be resolved by the settled principle that a special Act prevails over general legislation. This is still a little tricky, as they are technically both special Acts. It could be argued that, given the Anti-trafficking bill’s focus on trafficking and the IT Act’s focus on the interface of law and technology, for the purposes of Section 41(2) of the Anti-trafficking bill the IT Act is the special legislation. Section 79 of the IT Act should thus make the relevant portion of Section 41(2) of the Anti-trafficking bill redundant. This reading would require the bill to be modified so as to remove the redundant and conflicting portion of Section 41(2).


[1] In 2016, a division bench of the Delhi High Court held in the case of Myspace Inc vs Super Cassettes Industries Ltd that a safe harbour immunity for intermediaries was necessary, as it was not technically feasible to pre-screen content from third parties, and tasking intermediaries with this responsibility could have a chilling effect on free speech. It held that their responsibility was limited to acting upon receiving ‘actual knowledge’. Earlier, in 2015, in determining what ‘actual knowledge’ refers to, the Supreme Court of India in the landmark case of Shreya Singhal vs Union of India required this to be in the form of a notice via a court or government order. Thus, under our current law, intermediaries are granted a safe harbour from liability so long as they act upon court or government orders notifying them of content that is required to be taken down.

 

Clarification (18th August, 2018): A letter sent to the Ministry of Women and Child Development mentioned the Centre for Internet & Society as institutionally endorsing a critique of the Trafficking of Persons (Prevention, Protection and Rehabilitation) Bill, 2018. We seek to clarify that the Centre for Internet & Society did not endorse the letter to the Ministry.

What’s up with WhatsApp?

by Aayush Rathi and Sunil Abraham — last modified Apr 23, 2018 04:45 PM
In 2016, WhatsApp Inc announced it was rolling out end-to-end encryption, but is the company doing what it claims to be doing?

The article by Aayush Rathi and Sunil Abraham was published in Asia Times on April 20, 2018.


Back in April 2016, when WhatsApp Inc announced it was rolling out end-to-end encryption (E2EE) for its billion-plus strong user base as a default setting, the messaging behemoth signaled to its users it was at the forefront of providing technological solutions to protect privacy.

Emphasized in the security white paper explaining the implementation of the technology is the encryption of both forms of communication – one-to-one and group – and of all types of messages shared within such communications – text as well as media.

Simply put, all communication taking place over WhatsApp would be decipherable only to the sender and recipient – it would be virtual gibberish even to WhatsApp.
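The claim can be pictured with a toy sketch of the end-to-end model: sender and recipient share a key, and any relaying server sees only ciphertext. The throwaway keystream cipher below is purely illustrative and bears no resemblance to WhatsApp's actual machinery, which is built on the far more sophisticated Signal Protocol.

```python
# Toy illustration of end-to-end encryption: the relaying server
# handles only ciphertext and cannot read the message. The keystream
# construction here is a throwaway teaching device, NOT a real cipher.
import hashlib
import secrets

def keystream(key: bytes, length: int) -> bytes:
    """Derive a pseudo-random byte stream of the given length from the key."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    """XOR the plaintext with the keystream."""
    return bytes(p ^ k for p, k in zip(plaintext, keystream(key, len(plaintext))))

decrypt = encrypt  # XOR with the same keystream inverts itself

shared_key = secrets.token_bytes(32)   # known only to sender and recipient
message = b"meet at noon"

ciphertext = encrypt(shared_key, message)          # all the server ever sees
assert decrypt(shared_key, ciphertext) == message  # recipient recovers it
```

Without `shared_key`, the relay holds only opaque bytes, which is the property the white paper asserts for every message type.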

This announcement came against the backdrop of Apple locking horns with the FBI after being asked to provide a backdoor to unlock the San Bernardino mass shooter’s iPhone. It further reinforced WhatsApp Inc’s stand in the ensuing debate on the interplay of privacy and security in the digital age.

Kudos to WhatsApp, for there is growing recognition that encryption and anonymity are central to enabling secure online communication, which in turn is integral to essential human rights such as freedom of opinion and expression.

WhatsApp may have taken encryption to the masses, but here we outline why WhatsApp’s provisioning of privacy and security measures needs a more granular analysis – is the company doing what it claims to be doing? Security issues with WhatsApp’s messaging protocol certainly are not new.

Man-in-the-middle attacks

A study published by a group of German researchers from Ruhr University highlighted issues with WhatsApp’s implementation of its E2EE protocol in group communications. Another paper points out how WhatsApp’s session-establishment strategy itself could be problematic and potentially be targeted by what are called man-in-the-middle (MITM) attacks.

An MITM attack takes the form of a malicious actor, as the term suggests, placing itself between the communicating parties to eavesdrop or impersonate. The Electronic Frontier Foundation also highlighted other security vulnerabilities, or trade-offs, depending upon ideological inclinations, with respect to WhatsApp allowing for storage of unencrypted backups, issues with WhatsApp’s web client and also with its approach to cryptographic key change notifications.
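The mechanics of an MITM attack are easiest to see against an unauthenticated Diffie-Hellman key exchange. The sketch below uses deliberately tiny, insecure parameters and is purely illustrative; real protocols authenticate public keys (the key-change notifications mentioned above exist for exactly this reason) to defeat such an attacker.

```python
# Toy man-in-the-middle attack on an UNAUTHENTICATED Diffie-Hellman
# exchange. The parameters are tiny and insecure on purpose; a real
# exchange uses ~2048-bit primes and authenticated public keys.
import random

P, G = 23, 5  # toy public modulus and generator

def dh_keypair():
    private = random.randrange(2, P - 1)
    public = pow(G, private, P)
    return private, public

# Alice and Bob each generate a keypair...
a_priv, a_pub = dh_keypair()
b_priv, b_pub = dh_keypair()

# ...but Mallory intercepts the exchange and substitutes her own key.
m_priv, m_pub = dh_keypair()

# Alice believes m_pub is Bob's key; Bob believes m_pub is Alice's.
alice_secret = pow(m_pub, a_priv, P)  # actually shared with Mallory
bob_secret = pow(m_pub, b_priv, P)    # actually shared with Mallory

# Mallory derives both secrets and can transparently re-encrypt traffic.
mallory_with_alice = pow(a_pub, m_priv, P)
mallory_with_bob = pow(b_pub, m_priv, P)

assert alice_secret == mallory_with_alice
assert bob_secret == mallory_with_bob
```

Because neither endpoint verifies whose public key it received, Mallory sits on two valid sessions and relays messages between them, reading everything in transit.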

Much has been written questioning WhatsApp’s shifting approach to ensuring privacy too. Quoting straight from WhatsApp’s Privacy Policy: “We joined the Facebook family of companies in 2014. As part of the Facebook family of companies, WhatsApp receives information from, and shares information with, this family of companies.” Speaking of Facebook …

Culling out larger issues with WhatsApp’s privacy policies is not the intention here. What we specifically seek to explore sits right at the nexus of WhatsApp’s security and privacy provisioning clashing with its marketing strategy: the storage of data on WhatsApp’s servers (Facebook’s, rather), or ‘blobs,’ as they are referred to in the technical paper. In WhatsApp’s words: “Once your messages (including your chats, photos, videos, voice messages, files and share location information) are delivered, they are deleted from our servers. Your messages are stored on your own device.”

In fact, this non-storage of data on their ‘blobs’ is emphasized at several other points on the official website. Let us call this the deletion-upon-delivery model.

A simple experiment

While a rigorous proof of concept is rendered near-impossible by WhatsApp’s implementation being closed source, a simple experiment is enough to raise some very pertinent questions about WhatsApp’s outlined deletion-upon-delivery model. It should, however, be mentioned that the Signal Protocol, developed by Open Whisper Systems and pivotal to WhatsApp’s rollout of E2EE, is open source. Here is how the experiment proceeds:

Rick sends Morty an attachment.

Morty then switches off the data on her mobile device.

Rick downloads the attachment, an image.

Subsequently, Rick deletes the image from his mobile device’s internal storage.

Rick then logs into WhatsApp’s web client on his browser. (Prior to this experiment, both Rick and Morty had logged out from all instances of the web client.)

Upon a fresh log-in to the web client and opening the chat with Morty, the option to download the image is available to Rick.

The experiment concludes with bewilderment at WhatsApp’s claim of deletion-upon-delivery as outlined earlier. The only place from which Rick could have downloaded the image is Facebook’s ‘blobs.’ The attachment could not have been retrieved from Morty’s mobile device, as it had no way of sending data, nor from Rick’s mobile device, as the image no longer existed in the device’s storage.

As per the Privacy Policy, the data is stored on the ‘blobs’ for a period of 30 days after transmission of a message only when it can’t be delivered to the recipient. Upon delivery, the deletion-upon-delivery model is supposed to kick in.

Another straightforward experiment that leads to a similar conclusion is observing the difference in the time taken for a large attachment to be forwarded as opposed to being uploaded afresh. Forwarding is palpably quicker than uploading: non-storage of attachments on the ‘blob’ would entail that both take the same amount of time.

The plot thickens. WhatsApp’s Privacy Policy goes on to state: “To improve performance and deliver media messages more efficiently, such as when many people are sharing a popular photo or video, we may retain that content on our servers for a longer period of time.” The technical paper offers no help in understanding how WhatsApp’s systems assess frequently shared encrypted media messages without decrypting them at its end.

A possible explanation could be the usage of metadata by WhatsApp, which it discloses in its Privacy Policy while simultaneously being sufficiently vague about the specifics of it. That WhatsApp may be capable of reading encrypted communication through the inclusion of a backdoor bodes well for law enforcement, but not so much for unsuspecting users.

The weakest link in the chain

Concerns about backdoors in WhatsApp’s product have led the French government to start developing its own encrypted messaging service. This will be built using Matrix – an open protocol designed for real-time communication. Indeed, the Privacy Policy lays out that the company “may collect, use, preserve, and share your information if we have a good-faith belief that it is reasonably necessary to respond pursuant to applicable law or regulations, to legal process, or to government requests.”

The Signal Protocol is the undisputed gold standard of E2EE implementations. It is the integration with the surrounding functionality that WhatsApp offers which leads to vulnerabilities. After all, a chain is only as strong as its weakest link. Assuming that the attachments stored on the ‘blobs’ are in encrypted form, indecipherable to all but the intended recipients, this does not pose a privacy risk for the users from a technological point of view.

However, it is easy to lose sight of the fact that the Privacy Policy is a legally binding document, and it specifically states that messages are not stored on the ‘blobs’ as a matter of routine. As a side note, WhatsApp’s Privacy Policy and Terms of Service are refreshing in their readability and lack of legalese.

As we were putting the final touches to this piece, news broke from WABetaInfo, a well-reputed source of information on WhatsApp features, that newer updates of WhatsApp for Android permit users to re-download media deleted up to three months earlier. WhatsApp cannot possibly achieve this without storing the media in the ‘blobs’, or in other words, without violating its Privacy Policy.

As the aphorism goes: “When the service is free, you are the product.”

Revenge Porn Laws across the World

by Shradha Nigam — last modified Apr 25, 2018 04:58 PM
The following is a compilation of laws dealing with revenge porn, where and in what form they exist, across multiple countries and jurisdictions.

Country-wise legislation on “revenge porn”: click to download the file (PDF, 636 KB)

Europe

United States of America

Australia

Asia and Rest of the World

1. Europe

Each entry below lists, in order: Country; Statute; Year; Contents (definition, classification, punishment, standard of proof); Punishment; Remarks.

United Kingdom

England and Wales

Section 33, Criminal Justice and Courts Act 2015

2015

Makes it an offence in England and Wales to disclose private sexual photographs and films without the consent of the individual depicted and with the intent to cause distress.

There is a maximum sentence of two years’ imprisonment.

A call has been made to cover a wider range of offences through enactment of a new Act.

The law is not applicable retroactively.

Scotland

Part 1, Section 2, Abusive Behaviour and Sexual Harm Act, 2016

2016

A person (“A”) commits an offence if—

(a) A discloses, or threatens to disclose, a photograph or film which shows, or appears to show, another person (“B”) in an intimate situation,

(b) by doing so, A intends to cause B fear, alarm or distress or A is reckless as to whether B will be caused fear, alarm or distress, and

(c) the photograph or film has not previously been disclosed to the public at large, or any section of the public, by B or with B’s consent.

A person who commits such an offence is liable—

(a) on summary conviction, to imprisonment for a term not exceeding 12 months or a fine not exceeding the statutory maximum (or both),

(b) on conviction on indictment, to imprisonment for a term not exceeding 5 years or a fine (or both).

 

Northern Ireland

Part 3, Section 51, Amendment to Justice Act

2016

It is an offence for a person to disclose a private sexual photograph or film if the disclosure is made—

(a) without the consent of an individual who appears in the photograph or film, and

(b) with the intention of causing that individual distress.

A person guilty of an offence under this section is liable—

(a) on conviction on indictment, to imprisonment for a term not exceeding 2 years or a fine (or both), and

(b) on summary conviction, to imprisonment for a term not exceeding 6 months or a fine not exceeding the statutory maximum (or both).

 

Malta

Article 208E, Maltese Criminal Code

2016

It punishes whoever, with an intent to cause distress, emotional harm or harm of any nature, discloses a private sexual photograph or film without the consent of the person or persons displayed or depicted in such photograph or film.

Such a person would, on conviction, be liable to imprisonment for a term of up to two years or to a fine of not less than €3,000 and not more than €5,000, or to both such imprisonment and fine.

 

Germany

General Data Protection Regulation (Regulation (EU) 2016/679 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data); Art Copyright Law

A person also has the right to object to the unauthorised dissemination or public display of his or her photograph (section 22, Art Copyright Law).

If privacy rights are infringed, the individual affected can seek civil law remedies, which include:
  • Cease and desist orders and rectification, and
  • Compensatory damages.

In 2014, the Bundesgerichtshof (BGH) upheld an earlier ruling from a regional court in Koblenz, Germany, which held that a man did not have the right to keep intimate photos of his ex-lover merely because she had consented to their being taken in the first place.

France

Digital Republic Act

2016

Under the new law, persons have a right to oppose the use of their personal data.

Revenge porn may be sanctioned by two years of imprisonment and a €60,000 fine.

 

2. United States of America

Each entry below lists, in order: State; Statute; Year; Constituents of the offence; Punishment; Remarks.

Alabama

SB301. Code of Alabama 1975 Secs 15-20A-4 to 15-20A-43 amended.

2017

Distribution of an intimate, private image, also known as "revenge porn" or "nonconsensual pornography." The law applies when the depicted person has not consented to the transmission and the sender intends to harass or intimidate the depicted person.

A first offense is a Class A misdemeanor, punishable by up to a year in jail. Subsequent offenses are Class C felonies, punishable by up to 10 years in prison.

 

Alaska

Title 11. Criminal Law § 11.61.120

 

Provides that whoever publishes or distributes electronic or printed photographs, pictures, or films that show the genitals, anus, or female breast of another person, or show that person engaged in a sexual act, commits the crime of harassment in the second degree.

 

Harassment in the second degree is a class B misdemeanor.

Class B misdemeanors are less serious crimes, punishable by up to 90 days in jail and a fine of up to $2,000.

 

Arizona

Section 13-1425 of the Arizona Revised Statutes (Unlawful Distribution of Private Images), as amended in 2016

2016

It provides that the distribution of images depicting states of nudity or specific sexual activities of another person is unlawful.

If such disclosure is by electronic means, it is a Class 4 felony.

If the person threatens to disclose but does not disclose, then it is a Class 1 misdemeanor.

Class 4 felonies are punishable by up to 3.75 years in prison.

A Class 1 misdemeanor is the most serious misdemeanor offense and is punishable by up to 6 months in jail, 3 years of probation (5 years maximum probation for DUI offenses) and a $2,500 fine plus surcharges.

The earlier state revenge porn bill was scrapped due to an ACLU lawsuit.

Arkansas

Arkansas Code 5-26-314

July, 2015

It criminalizes the distribution of an image, picture, video, or voice or audio recording of a sexual nature to harass, frighten, intimidate, threaten, or abuse a family or household member or a person in a current or former dating relationship; and for other purposes.

Such an offence is a Class A misdemeanour.

A Class A misdemeanor is the most serious type of misdemeanor in Arkansas; it is punishable by up to one year in jail and a fine of up to $2,500.

Defines a “dating relationship” as a romantic or intimate relationship between two individuals and provides additional factors.

California

Section 647(j)(4) of California Penal Code

2014

Under this provision, an act of revenge porn is committed by someone who “photographs or records by any means the image of the intimate body part or parts of another identifiable person, under circumstances where the parties agree or understand that the image shall remain private, and the person subsequently distributes the image taken, with the intent to cause serious emotional distress, and the depicted person suffers serious emotional distress.”

It is classified as disorderly conduct, a misdemeanor.

 

Colorado

Colorado Revised Statutes 18-7-107 and 18-7-108.

2014

Posting a Private Image for Harassment and Posting a Private Image for Pecuniary Gain is a Class 1 Misdemeanor.

The defendant can be fined up to $10,000.

 

Connecticut

Section 53a-189a, Connecticut General Statutes

October 1, 2015

It provides that whoever engages in the unlawful dissemination of an intimate image is guilty of an offence.

The offence is a class A misdemeanor.

 

Delaware

§ 1335, Title 11 of the Delaware Code

2014

When  a person knowingly reproduces, distributes, exhibits, publishes, transmits, or otherwise disseminates a visual depiction of a person who is nude, or who is engaging in sexual conduct, when the person knows or should have known that the reproduction, distribution, exhibition, publication, transmission, or other dissemination was without the consent of the person depicted and that the visual depiction was created or provided to the person under circumstances in which the person depicted has a reasonable expectation of privacy, such person shall be guilty of violation of privacy.

It is a class A misdemeanor; class G felony.

 

District of Columbia

Criminalization of Non-Consensual Pornography Act

2014

It provides that a person commits an offence if the person knowingly discloses one or more sexual images of another identified or identifiable person when:

(1) The person depicted did not consent to the disclosure of the sexual image;

(2) There was an agreement or understanding between the person depicted and the person disclosing that the sexual image would not be disclosed; and

(3) The person disclosed the sexual image with the intent to harm the person depicted or to receive financial gain.

(b) A person who violates this subsection shall be guilty of a misdemeanour.

Upon conviction, such person shall be fined not more than the amount set forth in section 101 of the Criminal Fine Proportionality Amendment Act of 2012, approved June 11, 2013 (D.C. Law 19-317; D.C. Official Code § 22-3571.01), imprisoned for not more than 180 days, or both.

 

Florida

Florida Statute Section 784.049

2015

“Sexually cyberharass” means to publish a sexually explicit image of a person that contains or conveys the personal identification information of the depicted person to an Internet website without the depicted person’s consent, for no legitimate purpose, with the intent of causing substantial emotional distress to the depicted person.

A person who willfully and maliciously sexually cyberharasses another person commits a misdemeanor of the first degree, punishable as provided in s. 775.082 or s. 775.083.

A person who has one prior conviction for sexual cyberharassment and who commits a second or subsequent sexual cyberharassment commits a felony of the third degree, punishable as provided in s. 775.082, s. 775.083, or s. 775.084.

An aggrieved person can also initiate a civil action to recover damages.

Georgia

Article 3 of Chapter 11 of Title 16 of the Official Code of Georgia

2014

Whoever electronically transmits or posts, or causes such transmission or posting of, a photograph or video depicting nudity or sexually explicit conduct of an adult, when the transmission or post harasses the depicted person or causes financial loss to the depicted person and serves no legitimate purpose.

Such person shall be guilty of a misdemeanor of a high and aggravated nature; provided, however, that upon a second or subsequent violation of this Code section, he or she shall be guilty of a felony and, upon conviction thereof, shall be punished by imprisonment of not less than one nor more than five years, a fine of not more than $100,000.00, or both.

There is a rebuttable presumption that the Internet Service Provider was not aware of the content of such post.

Hawaii

Section 711-1110.9, Hawaii Revised Statutes

2014

A person commits the offense of violation of privacy in the first degree if the person knowingly discloses an image or video of another identifiable person either in the nude, as defined in section 712-1210, or engaging in sexual conduct, as defined in section 712-1210, without the consent of the depicted person, with intent to harm substantially the depicted person with respect to that person’s health, safety, business, calling, career, financial condition, reputation, or personal relationships.

Violation of privacy in the first degree is a class C felony.

In addition to any penalties the court may impose, the court may order the destruction of any recording made in violation of this section

An exception has been carved out for when the person was voluntarily nude in public or voluntarily engaging in sexual conduct in public.

Idaho

Idaho Code 18-6609(2)(b)

2017

Intentionally, or with reckless disregard, disseminating, publishing or selling (or conspiring to do so) any image or images of the intimate areas of another person or persons without their consent, where the person knows or reasonably should have known that one or both parties agreed or understood that the images should remain private.

The punishments are decided on a case-by-case basis but, based on the cases that have emerged, appear to range up to state prison terms of three to five years and/or a fine of up to $5,000.

 

Illinois

Section 11-23.5 of The Illinois Criminal Code of 2012

2015

Criminalises the Non-Consensual Dissemination of Private Sexual Images.

It is a Class 4 Felony.

 

Iowa

Section 708.7 of the Code of 2017

2017

Dissemination, publication, distribution or causing it thereof of photograph or film showing another person in partial or full nudity or engaged in a sex act, without consent, is harassment.

Such an offence is harassment in the first degree and is an aggravated misdemeanor.

 

Kansas

Section 21-6101(a)(8) of Kansas State Act

2016

Breach of privacy is knowingly and without lawful authority: disseminating any videotape, photograph, film or image of another identifiable person 18 years of age or older who is nude or engaged in sexual activity and under circumstances in which such identifiable person had a reasonable expectation of privacy, with the intent to harass, threaten or intimidate such identifiable person, and such identifiable person did not consent to such dissemination

Such an offence is a Severity level 8, person felony

 

Louisiana

R.S. 14:283.2

2015

A person commits the offense of non-consensual disclosure of a private image when all of the following occur:

(1) The person intentionally discloses an image of another person who is seventeen years of age or older, who is identifiable from the image or information displayed in connection with the image, and whose intimate parts are exposed in whole or in part.

(2) The person who discloses the image obtained it under circumstances in which a reasonable person would know or understand that the image was to remain private.

(3) The person who discloses the image knew or should have known that the person in the image did not consent to the disclosure of the image.

(4) The person who discloses the image has the intent to harass or cause emotional distress to the person in the image, and the person who commits the offense knew or should have known that the disclosure could harass or cause emotional distress to the person in the image

Whoever commits the offense of non-consensual disclosure of a private image shall be fined not more than ten thousand dollars, imprisoned with or without hard labour for not more than two years, or both

No liability is imposed on the computer service used for posting such image

Maine

Section 1 17-A MRSA §511-A

2015

A person is guilty of unauthorized dissemination of certain private images if the person, with the intent to harass, torment or threaten the depicted person or another person, knowingly disseminates, displays or publishes a photograph, videotape, film or digital recording of another person in a state of nudity or engaged in a sexual act or engaged in sexual contact in a manner in which there is no public or newsworthy purpose when the person knows or should have known that the depicted person:

(1) Is 18 years of age or older;

(2) Is identifiable from the image itself or information displayed in connection with the image; and

(3) Has not consented to the dissemination, display or publication of the private image.

Unauthorized dissemination of certain private images is a Class D crime.

 

Maryland

§ 3-809, Maryland Code.

2014

A person may not intentionally cause serious emotional distress to another by intentionally placing on the internet a photograph, film, videotape, recording, or any other reproduction of the image of the other person that reveals the identity of the other person with his or her intimate parts exposed or while engaged in an act of sexual contact:

(1) knowing that the other person did not consent to the placement of the image on the internet; and

(2) under circumstances in which the other person had a reasonable expectation that the image would be kept private.

A person who violates this section is guilty of a misdemeanor and on conviction is subject to imprisonment not exceeding 2 years or a fine not exceeding $5,000 or both.

 

Michigan

Sec 145e of MCL

2016

Disseminating any sexually explicit visual material of another person in order to threaten, coerce, or intimidate that person is punishable under section 145f.

Section 145f- first offense punishable by 93 day sentence or fine up to $500.

 

Minnesota

§ 617.261, Minnesota Statutes

2016

A cause of action against a person for the non-consensual dissemination of private sexual images exists when:

(1) a person disseminated an image without the consent of the person depicted in the image;

(2) the image is of an individual depicted in a sexual act or whose intimate parts are exposed in whole or in part;

(3) the person is identifiable:

(i) from the image itself, by the person depicted in the image or by another person; or

(ii) from the personal information displayed in connection with the image; and

(4) the image was obtained or created under circumstances in which the person depicted had a reasonable expectation of privacy.

The fact that the individual depicted in the image consented to the creation of the image or to the voluntary private transmission of the image is not a defense to liability for a person who has disseminated the image without consent.

Conviction for nonconsensual dissemination of private sexual images qualifies as a prior “qualified domestic violence-related offense” that enhances penalties for convictions for domestic assault, 4th & 5th degree assault, stalking, and violation of a harassment restraining order.

Consent to such image being taken is no defense

Nevada

Sections 2-6 of Chapter 200 of NRS

2015

A person commits the crime of unlawful dissemination of an intimate image when, with the intent to harass, harm or terrorize another person, the person electronically disseminates or sells an intimate image which depicts the other person and the other person:

(1) did not give prior consent to the electronic dissemination or sale;

(2) had a reasonable expectation that the intimate image would be kept private and would not be made visible to the public; and

(3) was at least 18 years of age when the intimate image was created

Such person is guilty of a category D felony

 

New Hampshire

§ 644:9-a, N.H. Rev. Stat.

2016

Nonconsensual dissemination of private sexual images with the intent to harass, intimidate, threaten, or coerce the depicted person.

It is a felony.

 

New Jersey

§ 2C:14-9, New Jersey Code

2015

Making a nonconsensual recording that reveals another person’s "intimate parts" or shows the person engaged in a sexual act without consent.

Felony, three to five years in prison, a fine not to exceed $15,000.

 

New Mexico

HB 142, new section added to the New Mexico Criminal Code

2015

Unauthorised distribution of sensitive images without that person’s consent, with the intent to harass, humiliate or intimidate that person or to cause substantial emotional distress.

It is a misdemeanour.

Upon a second or subsequent conviction, the offender is guilty of a fourth degree felony

 

North Carolina

§ 14-190.5A, Article 26 of Chapter 14 of the General Statutes

2015

A person is guilty of disclosure of private images if all of the following apply:

(1) The person knowingly discloses an image of another person with the intent to do either of the following:

a. Coerce, harass, intimidate, demean, humiliate, or cause financial loss to the depicted person.

b. Cause others to coerce, harass, intimidate, demean, humiliate, or cause financial loss to the depicted person.

(2) The depicted person is identifiable from the disclosed image itself or information offered in connection with the image.

(3) The depicted person's intimate parts are exposed or the depicted person is engaged in sexual conduct in the disclosed image.

(4) The person discloses the image without the affirmative consent of the depicted person.

(5) The person discloses the image under circumstances such that the person knew or should have known that the depicted person had a reasonable expectation of privacy.

For an offense by a person who is 18 years of age or older at the time of the offense, the violation is a Class H felony.

 

For a first offense by a person who is under 18 years of age at the time of the offense, the violation is a Class 1 misdemeanor.

 

For a second or subsequent offense by a person who is under the age of 18 at the time of the offense, the violation is a Class H felony

The Court may order destruction of such image.

 

This provision is in addition to civil and criminal remedies.

North Dakota

Section 12.1-17-07.2 of the North Dakota Century Code

2015

A person commits the offense of distribution of intimate images if the person knowingly or intentionally distributes to any third party any intimate image of an individual eighteen years of age or older, if:

(1) The person knows that the depicted individual has not given consent to the person to distribute the intimate image;

(2) The intimate image was created by or provided to the person under circumstances in which the individual has a reasonable expectation of privacy; and

(3) Actual emotional distress or harm is caused to the individual as a result of the distribution under this section.

Distribution of an intimate image is a class A misdemeanor

 

Oklahoma

Section 1040.13b of Title 21, Oklahoma Statutes

2016

A person commits nonconsensual dissemination of private sexual images when he or she:

(1) Intentionally disseminates an image of another person:

a. who is at least eighteen (18) years of age,

b. who is identifiable from the image itself or information displayed in connection with the image, and

c. who is engaged in a sexual act or whose intimate parts are exposed, in whole or in part;

(2) Disseminates the image with the intent to harass, intimidate or coerce the person, or under circumstances in which a reasonable person would know or understand that dissemination of the image would harass, intimidate or coerce the person

(3) Obtains the image under circumstances in which a reasonable person would know or understand that the image was to remain private; and

(4) Knows or a reasonable person should have known that the person in the image has not consented to the dissemination.

Any person who violates the provisions of this section shall be guilty of a misdemeanour punishable by imprisonment in a county jail for not more than one (1) year or by a fine of not more than One Thousand Dollars ($1,000.00), or both such fine and imprisonment.

The court shall have the authority to order the defendant to remove the disseminated image should the court find it is in the power of the defendant to do so.

Oregon

ORS 161.005

2015

(1) A person commits the crime of unlawful dissemination of an intimate image if:

(a) The person, with the intent to harass, humiliate or injure another person, knowingly causes to be disclosed through an Internet website an identifiable image of the other person whose intimate parts are visible or who is engaged in sexual conduct;

(b) The person knows or reasonably should have known that the other person does not consent to the disclosure;

(c) The other person is harassed, humiliated or injured by the disclosure; and

(d) A reasonable person would be harassed, humiliated or injured by the disclosure.

Unlawful dissemination of an intimate image is a Class A misdemeanor.

 

Unlawful dissemination of an intimate image is a Class C felony if the person has a prior conviction under this section at the time of the offense.

 

Pennsylvania

Title 18 Pennsylvania Consolidated Statutes § 3131

2014

A person commits the offense of unlawful dissemination of intimate image if, with intent to harass, annoy or alarm a current or former sexual or intimate partner, the person disseminates a visual depiction of the current or former sexual or intimate partner in a state of nudity or engaged in sexual conduct.

An offense shall be:

(1)  A misdemeanor of the first degree, when the person depicted is a minor.

(2)  A misdemeanor of the second degree, when the person depicted is not a minor.

 

South Dakota

Section 4 of Chapter 22-21 of South Dakota Code of Laws

2015

No person may use or disseminate in any form any visual recording or photographic device to photograph or visually record any other person without clothing or under or through the clothing, or with another person depicted in a sexual manner, for the purpose of viewing the body of, or the undergarments worn by, that other person, without the consent or knowledge of that other person, with the intent to self-gratify, to harass, or embarrass and invade the privacy of that other person, under circumstances in which the other person has a reasonable expectation of privacy.

A violation of this section is a Class 1 misdemeanor.

 

However, a violation of this section is a Class 6 felony if the victim is seventeen years of age or younger and the perpetrator is at least twenty-one years old.

 

Tennessee

Chapter 872 Tenn. Pub. Act

2017

(a) A person commits unlawful exposure who, with the intent to cause emotional distress, distributes an image of the intimate part or parts of another identifiable person if:

(1) The image was photographed or recorded under circumstances where the parties agreed or understood that the image would remain private; and

(2) The person depicted in the image suffers emotional distress.

(b) As used in this section:

(1) "Emotional distress" has the same meaning as defined in § 39-17-315; and

(2) "Intimate part" means any portion of the primary genital area, buttock, or any portion of the female breast below the top of the areola that is either uncovered or visible through less than fully opaque clothing.

A violation of subsection (a) is a Class A misdemeanor.

However, nothing in this section precludes punishment under any other section of law providing for greater punishment.

 

Texas

Chapter 98B, Title 4, Civil Practice and Remedies Code

2015

(a) A defendant is liable, as provided by this chapter, to a person depicted in intimate visual material for damages arising from the disclosure of the material if:

(1) the defendant discloses the intimate visual material without the effective consent of the depicted person;

(2) the intimate visual material was obtained by the defendant or created under circumstances in which the depicted person had a reasonable expectation that the material would remain private;

(3) the disclosure of the intimate visual material causes harm to the depicted person; and

(4) the disclosure of the intimate visual material reveals the identity of the depicted person in any manner, including through:

(A) any accompanying or subsequent information or material related to the intimate visual material; or

(B) information or material provided by a third party in response to the disclosure of the intimate visual material.

(b) A defendant is liable, as provided by this chapter, to a person depicted in intimate visual material for damages arising from the promotion of the material if, knowing the character and content of the material, the defendant promotes intimate visual material described by Subsection (a) on an Internet website or other forum for publication that is owned or operated by the defendant.

An offense under this section is a Class A misdemeanor.

 

If conduct that constitutes an offense under this section also constitutes an offense under another law, the actor may be prosecuted under this section, the other law, or both.

Aggrieved person may recover actual and exemplary damages.

 

The provisions shall be liberally construed by the courts to promote their underlying purpose: to protect persons from, and provide adequate remedies to victims of, the disclosure or promotion of intimate visual material.

Utah

§ 76-5b-203, Utah Code.

2014

An actor commits the offense of distribution of intimate images if the actor, with the intent to cause emotional distress or harm, knowingly or intentionally distributes to any third party any intimate image of an individual who is 18 years of age or older, if:

(a) the actor knows that the depicted individual has not given consent to the actor to distribute the intimate image;

(b) the intimate image was created by or provided to the actor under circumstances in which the individual has a reasonable expectation of privacy; and

(c) actual emotional distress or harm is caused to the person as a result of the distribution under this section.

Distribution of an intimate image is a class A misdemeanour.

 

Vermont

Sec. 2. 13 V.S.A. § 2606

2015

A person violates this section if he or she knowingly discloses a visual image of an identifiable person who is nude or who is engaged in sexual conduct, without his or her consent, with the intent to harm, harass, intimidate, threaten, or coerce the person depicted, and the disclosure would cause a reasonable person to suffer harm. A person may be identifiable from the image itself or information offered in connection with the image. Consent to recording of the visual image does not, by itself, constitute consent for disclosure of the image.

 

A person who violates this provision shall be imprisoned not more than two years or fined not more than $2,000.00, or both.

 

A person who violates this provision with the intent of disclosing the image for financial profit shall be imprisoned not more than five years or fined not more than $10,000.00, or both.

In addition, the Court may order equitable relief, including a temporary restraining order, a preliminary injunction, or a permanent injunction ordering the defendant to cease display or disclosure of the image.

 

The Court may grant injunctive relief maintaining the confidentiality of a plaintiff using a pseudonym.

Virginia

§ 18.2-386.2, Code of Virginia

2014

Any person who, with the intent to coerce, harass, or intimidate, maliciously disseminates or sells any videographic or still image created by any means whatsoever that depicts another person who is totally nude, or in a state of undress so as to expose the genitals, pubic area, buttocks, or female breast, where such person knows or has reason to know that he is not licensed or authorized to disseminate or sell such videographic or still image is guilty.

Such an offense is a Class 1 misdemeanor.

 

Washington

Title 9A RCW

2015

A person commits the crime of disclosing intimate images when the person knowingly discloses an intimate image of another person and the person disclosing the image:

(a) Obtained it under circumstances in which a reasonable person would know or understand that the image was to remain private;

(b) Knows or should have known that the depicted person has not consented to the disclosure; and

(c) Knows or reasonably should know that disclosure would cause harm to the depicted person.

 

The crime of disclosing intimate images:

(a) Is a gross misdemeanor on the first offense; or

(b) Is a class C felony if the defendant has one or more prior convictions for disclosing intimate images.

A person who is under the age of eighteen is not guilty of the crime of disclosing intimate images unless the person:

(a) Intentionally and maliciously disclosed an intimate image of another person;

(b) Obtained it under circumstances in which a reasonable person would know or understand that the image was to remain private; and

(c) Knows or should have known that the depicted person has not consented to the disclosure.

West Virginia

§61-8-28a, Code of West Virginia

 

2017

No person may knowingly and intentionally disclose, cause to be disclosed or threaten to disclose, with the intent to harass, intimidate, threaten, humiliate, embarrass, or coerce, an image of another which shows the intimate parts of the depicted person or shows the depicted person engaged in sexually explicit conduct which was captured under circumstances where the person depicted had a reasonable expectation that the image would not be publicly disclosed.

A person who violates this provision is guilty of a misdemeanor and, upon conviction, shall be confined in jail for not more than one year, fined not less than $1,000 nor more than $5,000, or both confined and fined.

 

Wisconsin

§ 942.09, Code of Wisconsin

2014

The statute penalises posting or publishing a sexually explicit image without consent. Such an offence is a Class A misdemeanor.

A Class A misdemeanor can result in a fine of up to $10,000, imprisonment of up to nine months, or both.

 

3. Australia

Country

Statute

Year

Contents – definition, classification, punishment, standard of proof

Punishment

Remarks

New South Wales

Section 91Q, Crimes Act 1900

2018

A person who intentionally distributes an intimate image of another person:

(a) without the consent of the person, and

(b) knowing the person did not consent to the distribution or being reckless as to whether the person consented to the distribution, is guilty of an offence.

"intimate image" means:

(a) an image of a person's private parts, or of a person engaged in a private act, in circumstances in which a reasonable person would reasonably expect to be afforded privacy, or

(b) an image that has been altered to appear to show a person's private parts, or a person engaged in a private act, in circumstances in which a reasonable person would reasonably expect to be afforded privacy.

Maximum penalty: 100 penalty units or imprisonment for 3 years, or both.

 

South Australia

Summary Offences Act 1953

2018

A person who distributes an invasive image of another person, knowing or having reason to believe that the other person—

(a) does not consent to that particular distribution of the image; or

(b) does not consent to distribution of the image generally, is guilty of an offence.

An image of a person will be taken to be an invasive image of the person if it depicts the person in a place other than a public place—

(a) engaged in a private act; or

(b) in a state of undress such that—

(i) in the case of a female—the bare breasts are visible; or

(ii) in any case—the bare genital or anal region is visible.

However, an image of a person that falls within the standards of morality, decency and propriety generally accepted by reasonable adults in the community will not be taken to be an invasive image of the person.

Maximum penalty:

(a) if the invasive image is of a person under the age of 17 years—$20,000 or imprisonment for 4 years;

(b) in any other case—$10,000 or imprisonment for 2 years.

 

Western Australia

Section 10G/61, Restraining Orders and Related Legislation Amendment (Family Violence) Act 2016

2016

A court may restrain the respondent from doing all or any of the following in the case of a family violence restraining order:

distributing or publishing, or threatening to distribute or publish, intimate personal images of the person seeking to be protected;

2 years imprisonment.

This protection applies only where a family violence restraining order is in place; it is not a general protection.

Victoria

Section 41C, Summary Offences Act 1966

2012

A person who visually captures or has visually captured an image of another person's genital or anal region must not intentionally distribute that image.

2 years imprisonment.

 

4. Asia and Rest of the World

Country

Statute

Year

Contents – definition, classification, punishment, standard of proof

Punishment

Remarks

Canada

Section 162.1, Criminal Code (introduced through Bill C-13, also known as the Cyberbullying Act)

2015

Everyone who knowingly publishes, distributes, transmits, sells, makes available or advertises an intimate image of a person knowing that the person depicted in the image did not give their consent to that conduct, or being reckless as to whether or not that person gave their consent to that conduct, is guilty.

In this section, “intimate image” means a visual recording of a person made by any means including a photographic, film or video recording,

(a) in which the person is nude, is exposing his or her genital organs or anal region or her breasts or is engaged in explicit sexual activity;

(b) in respect of which, at the time of the recording, there were circumstances that gave rise to a reasonable expectation of privacy; and

(c) in respect of which the person depicted retains a reasonable expectation of privacy at the time the offence is committed.

A person who commits the offence is guilty:

(a) of an indictable offence and liable to imprisonment for a term of not more than five years; or

(b) of an offence punishable on summary conviction.

 

Philippines

Anti-Photo and Video Voyeurism Act of 2009

2009

It is hereby prohibited and declared unlawful for any person:

(a) To take photo or video coverage of a person or group of persons performing sexual act or any similar activity or to capture an image of the private area of a person/s such as the naked or undergarment clad genitals, pubic area, buttocks or female breast without the consent of the person/s involved and under circumstances in which the person/s has/have a reasonable expectation of privacy;

(b) To copy or reproduce, or to cause to be copied or reproduced, such photo or video or recording of sexual act or any similar activity with or without consideration;

(c) To sell or distribute, or cause to be sold or distributed, such photo or video or recording of sexual act, whether it be the original copy or reproduction thereof; or

(d) To publish or broadcast, or cause to be published or broadcast, whether in print or broadcast media, or show or exhibit the photo or video coverage or recordings of such sexual act or any similar activity through VCD/DVD, internet, cellular phones and other similar means or device.

The prohibition under paragraphs (b), (c) and (d) shall apply notwithstanding that consent to record or take photo or video coverage of the same was given by such person/s. Any person who violates this provision shall be liable for photo or video voyeurism as defined herein.

The penalty of imprisonment of not less than three (3) years but not more than seven (7) years and a fine of not less than One hundred thousand pesos (P100,000.00) but not more than Five hundred thousand pesos (P500,000.00), or both, at the discretion of the court shall be imposed upon any person found guilty of violating Section 4 of this Act.

If the violator is a juridical person, its license or franchise shall automatically be deemed revoked, and the persons liable shall be the officers thereof, including the editor and reporter in the case of print media, and the station manager, editor and broadcaster in the case of a broadcast media.

If the offender is a public officer or employee, or a professional, he/she shall be administratively liable.

If the offender is an alien, he/she shall be subject to deportation proceedings after serving his/her sentence and payment of fines.

 

Israel

Prevention of Sexual Harassment Law, 5758-1998 amended in 2014

2014

The distribution of still pictures or video recordings of a person’s image that focuses on his/her sexuality, including by editing or incorporation, is unlawful if made:

1. without the person’s consent;

2. in a way that facilitates identification of the person; and

3. under circumstances that may degrade or shame him/her.

 

The distribution of such an image constitutes sexual harassment under section 3(a) of the Prevention of Sexual Harassment Law and intentional harm to a person’s privacy under section 5 of the Protection of Privacy Law.

The crimes are punishable with five years of imprisonment, in addition to subjecting the perpetrator to civil liability and the duty to pay monetary compensation to the victim.

 

Japan

Act on Prevention of Damage by Provision of Private Sexual Image Records

2014

It criminalizes the provision of a private sexual image of another person, without the person’s approval, via a means of telecommunication to an unspecified number of people or to many people.

It allows Internet service providers to delete suspected revenge porn images without the uploader’s consent, in cases where:

1. the victim had notified the provider of the existence of the image;

2. the provider had requested the consent of the uploader to delete the image; and

3. the uploader did not respond or delete the image.

A maximum penalty of a fine of 500,000 yen or three years in jail.

The Act also obligates the national and local governments to ease victims’ embarrassment when they report the crime.

Especially for young potential victims, the Act further obligates the governments to educate people on how to avoid revenge porn.

Comments on the Draft Digital Information Security in Healthcare Act

by Amber Sinha and Shweta Mohandas — last modified May 01, 2018 02:05 AM
The Centre for Internet & Society submitted comments to the Ministry of Health & Family Welfare, Government of India on the draft Digital Information Security in Healthcare Act on April 21, 2018.

This submission presents comments by the Centre for Internet and Society, India (“CIS”) on the Draft Digital Information Security in Healthcare Act, released by Ministry of Health & Family Welfare, Government of India. CIS has conducted research on the issues of privacy, data protection and data security since 2010 and is thankful for the opportunity to put forth its views. This submission was made on April 21, 2018.

Download the full submission here

AI in the Banking and Finance Industry in India

by Saman Goudarzi, Elonnai Hickok and Amber Sinha — last modified Jun 19, 2018 11:48 AM
This is a draft report that seeks to map the present state of use of AI in the banking and financial sector in India.

This draft report was prepared by Saman Goudarzi, Elonnai Hickok and Amber Sinha. It was edited by Shyam Ponappa. Mapping was done by Shweta Mohandas. Pranav M Bidare, Sidharth Ray, and Aayush Rathi provided research assistance in preparing this report.


Executive Summary

In the last couple of years, the finance and banking sectors in India have increasingly deployed and implemented AI technologies. Such technologies are being implemented for front-end and back-end processes – offering solutions for both financial and business management operations. At the moment, the AI landscape appears to be overwhelmingly populated by natural language processing and natural language generation technologies, culminating in numerous chatbot initiatives by various banking and financial actors. Arguably more significant – but less documented – is the use of these technologies for financial decision-making on a variety of issues, including credit-scoring, transactions, wealth and risk management, and fraud detection. These trends are largely facilitated by technology service companies – both large-scale firms and startups – that either work with established banking and financial institutions to deploy AI technologies or develop and offer their own financial services directly to consumers.

This draft report seeks to map the present state of use of AI in the banking and financial sector in India. In doing so, it explores:

  • Uses: What is the present use of AI in banking and finance? What is the narrative and discourse around AI and banking/finance in India?
  • Actors: Who are the key stakeholders involved in the development, implementation and regulation of AI in the banking/finance sector?
  • Impact: What is the potential and existing impact of AI in the banking and finance sectors?
  • Regulation: What are the challenges faced in policy making around AI in the banking and finance sectors?

 

The draft report first offers an overview of the ways in which AI is being used in the sector. This is followed by an examination of existing challenges to the adoption of AI and the significant legal and ethical concerns that need to be considered in light of these trends. Lastly, the draft report draws attention to a number of key government actions and initiatives surrounding AI in the banking and finance industry, discusses challenges to the adoption and implementation of AI, and articulates recommendations towards addressing them.

Download the draft report here

19th June Update: This case study has been modified to remove interview quotes, which are in the process of being confirmed. The link above is the latest draft of the report.

Internet Shutdown Stories

by Ambika Tandon last modified Sep 03, 2019 09:57 AM
The Centre for Internet & Society (CIS) has published a collection of stories of the impact of internet shutdowns on people's lives in the country. This book seeks to give a glimpse into the lives of those directly affected by these internet shutdown experiments. When seen in a larger context, we hope that the stories in this book also demonstrate that access to the internet and freedom of speech is not just about an individual’s rights, but are also required for the collective good. This is a project funded by Facebook and MacArthur Foundation, and the stories were provided by 101 Reporters. Case studies from the states of Jammu & Kashmir, Haryana, Rajasthan, Gujarat, Telangana, West Bengal, Tripura, Manipur, Nagaland, and Uttar Pradesh have been highlighted in this compilation.

Cover image: Aakash Hassan

 

Read the report here: Download (PDF)

The report is shared under Creative Commons Attribution-NoDerivatives 4.0 International license.

Edited by Debasmita Haldar, Ambika Tandon, and Swaraj Barooah

Print Design by Saumyaa Naidu

Advisor: Nikhil Pahwa, Founder and Editor at MediaNama


Foreword

Aside from the waves of innovation that the digital revolution brought with it, the ever increasing pervasiveness of the internet has had a tremendous impact on empowerment and freedoms in society. We are seeing unprecedented levels of access to information, along with a democratization of the means of creation, production and dissemination of information to anyone with an internet connection. This in turn has greatly amplified, and in many cases even created, the ability, particularly for those traditionally left in the margins, to participate more meaningfully in their global as well as local societies. Recognising the significance of the internet to the freedom of expression, as well as to the development and exercise of human rights more broadly, the United Nations Human Rights Council unanimously passed a resolution affirming that internet access is a fundamental human right.

Simultaneously, however, we are seeing Indian states discover and experiment with their power to clamp down on these new modes of communication for a variety of reasons, ranging from the ill-intentioned to the ill-informed. An internet shutdown tracker maintained by the Software Freedom Law Centre shows that the number of shutdowns in India is increasing every year, with 70 shutdowns reported in 2017, and 45 shutdowns already reported between 1 January and 4 May 2018. These shutdowns also come at a significant economic cost: a 2016 Brookings report estimates that India faced a loss of about $968 million due to internet shutdowns. The democratic harms we have been accruing, however, are more difficult to quantify and demonstrate.

This book seeks to give a glimpse into the lives of those directly affected by these internet shutdown experiments. From Jammu and Kashmir to Telangana, from Gujarat to Nagaland, we have collected 30 stories from across the country for an up-close look at how the everyday lives of common citizens have been impacted by internet shutdowns and website blocks. From CRPF personnel posted in Srinagar who use the internet to connect with their families, to students who have been cut off from education resources for competitive exams; from the disruptions in day-to-day life brought about by non-functional bank services in Darjeeling, to stock brokers in Ahmedabad who faced costly slowdowns: the idea of a Digital India is facing severe setbacks with these continuously increasing internet shutdowns.

When seen in a larger context, we hope that the stories in this book also demonstrate that access to the internet and freedom of speech is not just about an individual’s rights, but are also required for the collective good. The diversity of perspectives and activities that a healthy democracy demands is not met by the versioning of dominant narratives, but by allowing for, if not directly encouraging, the voices and activities of the unheard, oppressed and marginalised. We hope that in the telling of these personal stories of the day-to-day of people affected by such internet shutdowns, this book joins in the effort to position the dehumanized internet kill switches more aptly as dangers to democracy.

Sunil Abraham
Executive Director
The Centre for Internet and Society

 

India's Data Protection Framework Will Need to Treat Privacy as a Social and Not Just an Individual Good

by Amber Sinha last modified May 18, 2018 06:22 AM
The idea that technological innovations may compete with privacy of individuals assumes that there is social and/or economic good in allowing unrestricted access to data. However, it must be remembered that data is potentially a toxic asset, if it is not collected, processed, secured and shared in the appropriate way.

Published in Economic & Political Weekly, Volume 53, Issue No. 18, 05 May, 2018. Article can be accessed online here.


In July 2017, the Ministry of Electronics and Information Technology (MeitY) in India set up a committee headed by a former judge, B N Srikrishna, to address the growing clamour for privacy protections at a time when both private collection of data and public projects like Aadhaar are reported to pose major privacy risks (Maheshwari 2017). The Srikrishna Committee is in the process of providing its input, which will go on to inform India’s data-protection law.

While the committee released a white paper with provisional views and sought feedback a few months ago, it may be discussing a data protection framework without due consideration of how data practices have evolved.

In early 2018, a series of stories based on investigative journalism by the Guardian and the Observer revealed that the data of 87 million Facebook users was used for the Trump campaign by a political consulting firm, Cambridge Analytica, without their permission. Aleksandr Kogan, a psychology researcher at the University of Cambridge, created an application called “thisisyourdigitallife” and collected data from 270,000 participants through a personality test using Facebook’s application programming interface (API), which allows developers to integrate with various parts of the Facebook platform (Fruchter et al 2018). This data was collected purportedly for academic research purposes only. Kogan’s application also collected profile data from each of the participants’ friends, roughly 87 million people.

The kinds of practices concerning the sharing and processing of data exhibited in this case are not unique. These are, in fact, common to the data economy in India as well. It can be argued that the Facebook–Cambridge Analytica incident is representative of data practices in the data-driven digital economy. These new practices pose important questions for data protection laws globally, and how these may need to evolve to address data protection, particularly for India, which is in the process of drafting its own data protection law.

Privacy as Control

Most modern data protection laws focus on individual control. In this context, the definition by the late Alan Westin (2015) characterises privacy as:

The claim of individuals, groups, or institutions to determine for themselves when, how, and to what extent information about them is communicated to others.

The idea of “privacy as control” is what finds articulation in data protection policies across jurisdictions, beginning with the Fair Information Practice Principles (FIPPs) from the United States (US) (Dixon 2006). These FIPPs are the building blocks of modern information privacy law (Schwartz 1999) and not only play a significant role in the development of privacy laws in the US, but also inform data protection laws in most privacy regimes internationally (Rotenberg 2001), including the nine “National Privacy Principles” articulated by the Justice A P Shah Committee in India. Much of this approach is also reflected in the white paper released by the committee, led by Justice Srikrishna, towards the creation of data protection laws in India (Srikrishna 2017).

This approach essentially involves the following steps (Cate 2006):

(i) Data controllers are required to tell individuals what data they wish to collect and use and give them a choice to share the data. 
(ii) Upon sharing, the individuals have rights such as being granted access, and data controllers have obligations such as securing the data with appropriate technologies and procedures, and only using it for the purposes identified.

The objective of this approach is to empower the individual and allow them to weigh their own interests in exercising their consent. The allure of this paradigm is that, in one elegant stroke, it seeks to “ensure that consent is informed and free and thereby also (seeks) to implement an acceptable tradeoff between privacy and competing concerns” (Sloan and Warner 2014). This approach is also easy to enforce for both regulators and businesses. Data collectors and processors only need to ensure that they comply with their privacy policies, and can thus reduce their liability while, theoretically, consumers have the information required to exercise choice. In recent years, however, the emergence of big data, the “Internet of Things,” and algorithmic decision-making has significantly compromised the notice and consent model (Solove 2013).

Limitations of Consent

Some cognitive problems, such as long and difficult-to-understand privacy notices, have always existed with regard to informed consent, but lately these problems have become aggravated. Privacy notices often come in the form of long legal documents, much to the detriment of readers’ ability to understand them. These policies are “long, complicated, full of jargon and change frequently” (Cranor 2012).

Kent Walker (2001) lists five problems that privacy notices typically suffer from:

(i) Overkill: Long and repetitive text in small print.
(ii) Irrelevance: Describing situations of little concern to most consumers.
(iii) Opacity: Broad terms that reflect limited truth, and are unhelpful to track and control the information collected and stored.
(iv) Non-comparability: The simplification required to achieve comparability compromises accuracy.
(v) Inflexibility: Failure to keep pace with new business models.

Today, data is collected continuously with every use of online services, making it humanly impossible to exercise meaningful consent. The quantity of data being generated is expanding at an exponential rate. With connected devices, smartphones and appliances transmitting data about our usage, and even smart cities themselves, data now streams constantly from almost every sector and function of daily life, “creating countless new digital puddles, lakes, tributaries and oceans of information” (Bollier 2010).

The infinitely complex nature of the data ecosystem renders consent of little value even in cases where individuals are able to read and comprehend privacy notices. As the uses of data are so diverse, and often not limited to a purpose identified at the beginning, individuals cannot conceptualise how their data will be aggregated and possibly used or reused.

Seemingly innocuous bits of data revealed at different stages could be combined to reveal sensitive information about the individual. While the regulatory framework is designed such that individuals are expected to engage in cost–benefit analysis of trading their data to avail services, this ecosystem makes such individual analysis impossible.

Conflicts Between Big Data and Individual Control

The thrust of big data technologies is that the value of data resides not in its primary purposes, but in its numerous secondary purposes, where data is reused many times over (Mayer-Schönberger and Cukier 2013).

On the other hand, the idea of privacy as control draws from the “data minimisation” principle, which requires organisations to limit the collection of personal data to the minimum extent necessary to achieve their legitimate purpose, and to delete data that is no longer required. Control is exercised and privacy is enhanced by ensuring data minimisation. These two concepts are in direct conflict: modern data-driven businesses want to retain as much data as possible for secondary uses. Since these secondary uses are, by their nature, unanticipated, such practices run counter to the very principle of purpose limitation (Tene and Polonetsky 2012).

It is evident from such data-sharing practices, as demonstrated by the Cambridge Analytica–Facebook story, that platform architectures are designed with a clear view to collect as much data as possible. This is amply demonstrated by the provision of a “friends permission” feature by Facebook on its platform to allow individuals to share information not just about themselves, but also about their friends. For the principle of informed consent to be meaningfully implemented, it is necessary for users to have access to information about intended data practices, purposes and usage, so they consciously share data about themselves.

In reality, however, privacy policies are more likely to serve as liability disclaimers for companies than as any kind of guarantee of privacy for consumers. A case in point is Mark Zuckerberg’s facile claim that there was no “data breach” in the Cambridge Analytica–Facebook incident. Instead of asking each of the 87 million users whether they wanted their data to be collected and shared further, Facebook designed a platform that required consent in any form only from 270,000 users. Not only were users denied the opportunity to give consent, their consent was assumed through a feature which was on by default. This is representative of how privacy trade-offs are conceived by current data-driven business models. Participation in a digital ecosystem is by itself deemed as users’ consent to relinquish control over how their data is collected, who may have access to it, and what purposes it may be used for.

Yet, Zuckerberg would have us believe that the primary privacy issue of concern is not how his platform enabled the collection of users’ data without their explicit consent, but the subsequent unauthorised sharing of the data by Kogan. Zuckerberg’s insistence that collection of people’s data without their consent is not a data breach is reminiscent of the UIDAI’s recent claims in India that publication of Aadhaar numbers and related information by several government websites is not a data breach, so long as its central biometric database is secure (Sharma 2018). In such cases too, the intended architecture ensured the seeding of other databases with Aadhaar numbers, thus creating multiple potential points of failure through disclosure. Similarly, the design flaws in direct benefit transfers enabled Airtel to create payments bank accounts without the customers’ knowledge (Hindu Business Line 2017). Such claims clearly suggest the very limited responsibility data controllers (both public and private) are willing to take for the personal data that they collect, while wilfully facilitating and encouraging data practices which may lead to greater risk to data.

On this note, it is also relevant to point out that the Srikrishna committee white paper begins with identifying informational privacy and data innovation as its two key objectives. It states that “a firm legal framework for data protection is the foundation on which data-driven innovation and entrepreneurship can flourish in India.”

Conversations around privacy and data have become inevitably linked to the idea of technological innovation as a competing interest. Before engaging in such conversations, it is important to acknowledge that the value of innovation as a competing interest itself is questionable. It is not a competing right, nor a legitimate public interest endeavour, nor a proven social good.

The idea that, in policymaking, technological innovations may compete with the privacy of individuals assumes that there is social and/or economic good in allowing unrestricted access to data. The social argument is premised on the promise that mathematical models and computational capacity are capable of identifying key insights from data. In turn, these insights may be useful in public and private decision-making. However, it must be remembered that data is potentially a toxic asset if it is not collected, processed, secured and shared in the appropriate way. Sufficient research suggests that indiscriminate data collection greatly increases the ratio of noise to signal and can lead to erroneous insights. Further, the greater the amount of data collected, the larger the attack surface exposed to cybersecurity risks. Incidents such as Facebook–Cambridge Analytica demonstrate the toxicity of data in various ways and underscore the need for data regulation at every stage of the data lifecycle (Schneier 2016). These are important tempering factors that need to be kept in mind while evaluating data innovation as a key mover of policy or regulation.

Privacy as Social Good

As long as privacy is framed as arising primarily from individual control, data controllers will continue to engage in practices that compromise the ability to exercise choice. There is a need to view privacy as a social good, and policymaking should ensure its preservation and enhancement. Contractual protections and legal sanctions can themselves do little if platform architectures are designed to do the exact opposite.

More importantly, policymaking needs to recognise privacy not merely as an individual right, available for individuals to forego when engaging with data-driven business models, but also as a social good. The recognition of something as a social good deems it desirable by definition, and a legitimate goal of law and policy, rather than something left completely to market forces for its achievement.

The Puttaswamy judgment (K Puttaswamy v Union of India 2017) lends sufficient weight to privacy’s social value by identifying it as fundamental to any individual development through its dependence on solitude, anonymity, and temporary releases from social duties.

Sociological scholarship demonstrates that different types of social relationships, be it Gesellschaft (interest groups and acquaintances) or Gemeinschaft (friendship, love, and marriage), and the nature of these relationships, depend on the ability to conceal certain things (Simmel 1906). Demonstrating this in the context of friendships, it has been noted that such relationships “present a very peculiar synthesis in regard to the question of discretion, of reciprocal revelation and concealment.” Friendships, much like most other social relationships, are very much dependent on our ability to selectively present ourselves to others. Contrast this with Zuckerberg’s stated aim of making the world more “open”, where information about people flows freely and effectively without any individual control. Contrast this also with government projects such as Aadhaar, which intends to act as one universal identity that can provide a 360-degree view of citizens.

Other scholars such as Julie Cohen (2012) and Anita Allen (2011) have demonstrated that data that a person produces or has control over concerns both herself and others. Individuals can be exposed not only because of their own actions and choices, but also made vulnerable merely because others have been careless with their data. This point is amply demonstrated in the Facebook–Cambridge Analytica incident. What this means is that protection of privacy requires not just individual action, but in a sense, group co-ordination. It is my argument that this group interest in privacy as a social good must be the basis of policymaking and regulation of data in the future, in addition to the idea of privacy as an individual right. In the absence of attention to the social good aspect of privacy, individual consumers are left to their own devices to negotiate their privacy trade-offs with large companies and governments, and are significantly compromised as a result.

What this translates into is that regulatory and data protection frameworks should not be value-neutral in their conception of privacy as a facet of individual control. The complete reliance of data regulation on the data subject’s ability to make an informed choice is, in my opinion, an idea that has run its course. If privacy is viewed as a social good, then the data protection framework, including its laws and architecture, must be designed to protect it, rather than leaving it entirely to market forces.

The Way Forward

Data protection laws need to be re-evaluated, and policymakers must recognise Lawrence Lessig’s dictum that “code is law.” Like laws, architecture and norms can play a fundamental role in regulation. Regulatory intervention for technology need not mean only regulation of technology, but also leveraging technology itself for regulation (Lessig 2006; Reidenberg 1998). It is key that the latter is not left solely in the hands of private players.
Zuckerberg, in his testimony (Washington Post 2018) before the United States Senate's Commerce and Judiciary committees, asserted that "AI tools" are central to any strategy for addressing hate speech, fake news, and manipulations that use data ecosystems for targeting.

What is most concerning in his testimony is the complete absence of any mention of standards, public scrutiny and peer-review processes, to which “AI tools” and regulatory technologies need to be subject. Further, data-driven businesses cannot be expected to view privacy as a social good or to be publicly accountable.

As policymakers in India gear up for writing the country’s data protection law, they must acknowledge that their responsibility extends to creating norms and principles that will inform future data-driven platforms and regulatory technologies.

Since issues of privacy and data protection will increasingly have to be addressed at the level of how architectures enable data collection, and more importantly how data is used after collection, policymakers must recognise that being neutral about these practices is no longer enough. They must take normative positions on data collection, processing and sharing practices. These positions cannot be implemented through laws alone, but need to be translated into technological solutions and norms. Unless a multipronged approach comprising laws, architecture and norms is adopted, India’s new data protection regime may end up with limited efficacy.

Indian Intermediary Liability Regime: Compliance with the Manila Principles on Intermediary Liability

by Divij Joshi last modified May 20, 2018 03:14 PM
This report assesses the compliance of the Indian intermediary liability framework with the Manila Principles on Intermediary Liability, and recommends substantive legislative changes to bring the legal framework in line with the Manila Principles.

The report was edited by Elonnai Hickok and Swaraj Barooah.


The report examines Indian law against the background paper to the Manila Principles, using it as the explanatory text on which these recommendations are based; it is not an assessment of the principles themselves. To do this, the report considers the Indian regime in the context of each of the Manila Principles. The explanatory text to the Manila Principles recognizes that diverse national and political scenarios may require different intermediary liability regimes; however, this paper relies only on the best practices prescribed under the Manila Principles.

The report is divided into the following sections:

  • Principle I: Intermediaries should be shielded by law from liability for third-party content
  • Principle II: Content must not be required to be restricted without an order by a judicial authority
  • Principle III: Requests for restrictions of content must be clear, be unambiguous, and follow due process
  • Principle IV: Laws and content restriction orders and practices must comply with the tests of necessity and proportionality
  • Principle V: Laws and content restriction policies and practices must respect due process
  • Principle VI: Transparency and accountability must be built into laws and content restriction policies and practices
  • Conclusion

Download the Full report here

DIDP Request #30 - Employee remuneration structure at ICANN

by Paul Kurian and Akriti Bopanna — last modified Aug 24, 2018 06:57 AM
We have requested ICANN to disclose the employee pay structure at ICANN with specific enquiries about the payment across the institutional hierarchy, gender, and region.

We requested ICANN to disclose information pertaining to the income of each employee on the following grounds. We had hoped this information would increase ICANN's transparency regarding their remuneration policies; however, this was not the case. They either referred to earlier documents that do not contain concrete information, or stated that the relevant documents were not in their possession. Their responses to the respective questions were:

Average salary across designations

ICANN responded by referring to their FY18 Remuneration Practices document which states, “ICANN uses a global compensation expert consulting firm to provide comprehensive benchmarking market data (currently Willis Towers Watson, Mercer and Radford). The market study is conducted before the salary review process. Estimates of potential compensation adjustments typically are made during the budgeting process based on current market data. The budget is then approved as part of ICANN’s overall budget planning process.”

Average salary for female and male employees

ICANN responded by saying “ICANN org’s remuneration philosophy and practice is not based upon gender”, which is why they said that they have “no documentary information in ICANN org’s possession, custody or control that is responsive to this request.” However, the exact average salaries of female and male employees were not provided, nor any information that could give us an idea as to whether the remuneration of their employees was in accordance with the above claim.

Bonuses - frequency at which it is given and upon what basis

ICANN responded by referring to the “Discretionary At-Risk Component” section in their FY18 Remuneration Practices document, which states, “The amount of at-risk pay an individual can earn is based on a combination of both the achievement of goals as well as the behaviors exhibited in achieving those goals… The Board has approved a framework whereby those with ICANN Org are eligible to earn an at-risk payment of up to 20 percent of base compensation as at-risk payment based on role and level in the organization, with certain senior executives eligible for up to 30 percent.” The frequency at which employees are eligible to receive an “at-risk” payment was given as “twice a year”.

Average salary across regions for the same role

ICANN responded by saying, “compensation may vary across the regions based on currency differences, the availability of positions in a given region, market conditions, as well as the type of positions that are available in a given region.” They also added that they have no documentary information in their possession, custody or control that is responsive to this request.


The request filed by Paul Kurian may be found here. ICANN's response can be read here.

Design Concerns in Creating Privacy Notices

by Saumyaa Naidu last modified Jun 06, 2018 01:45 PM
The purpose of privacy notices and choice mechanisms is to notify users of the data practices of a system, so they can make informed privacy decisions.

This blog post was edited by Elonnai Hickok.


The Role of Design in Enabling Informed Consent

Currently, privacy notices and choice mechanisms are largely ineffective. Privacy and security researchers have concluded that privacy notices not only fail to help consumers make informed privacy decisions but are mostly ignored by them. [1] They have been reduced to a mere necessity for companies to ensure legal compliance. The design of privacy systems plays an essential role in determining whether users read and understand the notices. While it is important to assess the data practices of a company, the communication of privacy policies to users is also a key factor in ensuring that users are protected from privacy threats. If they do not read or understand the privacy policy, they are not protected by it at all.

The visual communication of a privacy notice is determined by the User Interface (UI) and User Experience (UX) design of that online platform. User experience design is broadly about creating the logical flow from one step to the next in any digital system, while user interface design ensures that each screen or page that the user interacts with has a consistent visual language and styling. This complements the path created by the user experience designer. [2] UI/UX design still follows the basic principles of visual communication, where information is made understandable, usable and interesting with the use of elements such as colours, typography, scale, and spacing.

In order to facilitate informed consent, design principles should be applied to ensure that the privacy policy is presented clearly, and in the most accessible form. A paper by Batya Friedman, Peyina Lin, and Jessica K. Miller, ‘Informed Consent by Design’, presents a model of informed consent for information systems. [3] It identifies the six components of the model: disclosure, comprehension, voluntariness, competence, agreement, and minimal distraction. The design of a notice should achieve these components to enable informed consent. Disclosure and comprehension lead to the user being ‘informed’, while ‘consent’ encompasses voluntariness, competence, and agreement. Finally, the tasks of being informed and giving consent should happen with minimal distraction, without diverting users from their primary task or overwhelming them with unnecessary noise. [4]

UI/UX design builds upon user behaviour to anticipate users’ interaction with the platform. This has led to practices where the UI/UX design is directed at influencing the user to respond in a way that is desired by the system. For instance, the design of default options prompts users to allow the system to collect their data when the ‘Allow’ button is checked by default. Such practices, where the interface design is used to push users in a particular direction, are called “dark patterns”. [5] These are tricks used in websites and apps that make users buy or sign up for things that they did not intend to. [6] Dark patterns are often followed as UI/UX trends without the consequences for users being questioned. This has had implications for the design of privacy systems as well: privacy notices are currently being designed to be invisible instead of drawing attention towards them.
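As a sketch (all names hypothetical, not any real toolkit), one such dark pattern, a consent dialog whose ‘Allow’ option is pre-selected, can be detected automatically from a description of the dialog:

```python
from dataclasses import dataclass

@dataclass
class ConsentOption:
    label: str
    preselected: bool = False  # a pre-checked option is a choice made for the user

def dark_pattern_defaults(options: list[ConsentOption]) -> list[str]:
    """Return labels of consent options that are pre-selected,
    i.e. 'choices' the user would make without acting at all."""
    return [opt.label for opt in options if opt.preselected]

# A dialog that nudges users toward sharing by pre-checking 'Allow'
dialog = [ConsentOption("Allow", preselected=True), ConsentOption("Don't Allow")]
print(dark_pattern_defaults(dialog))  # ['Allow']
```

A neutral dialog would leave every option unchecked, so the check returns an empty list.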

Moreover, most communication designers believe that privacy notices are beyond their scope of expertise. They do not consider themselves accountable for how a notice comes across to the user. Designers also believe that they have limited agency when it comes to designing privacy notices as most of the decisions have been already taken by the company or the service. They can play a major role in communicating privacy concerns at an interface level, but the issues of privacy are much deeper. Designers tend to find ways of informing the user without compromising the user experience, and in the process choose aesthetic decisions over informed consent.


Issues with Visual Communication of Privacy Notices

The ineffectiveness of privacy notices can be attributed to several broad issues, such as complex language and length, their timing, and their location. In 2015, the Center for Plain Language [7] published a privacy-policy analysis report [8] for TIME.com [9], evaluating internet-based companies’ privacy policies to determine how well they followed plain language guidelines. The report concluded that among the most popular companies, Google and Facebook had the more accessible notices, while Apple, Uber, and Twitter were ranked as less accessible. The timing of notices is also crucial in ensuring that they are read by users. The primary task for the user is to avail the service being offered; the goals of security and privacy are valued, but are only secondary in this process. [10] Notices are presented at a time when they are seen as a barrier between the user and the service. People thus choose to ignore the notices and move on to their primary task. Another concern is disassociated notices, which are presented on a separate website or manual. The added effort of going to an external website also gets in the way, leading users not to read the notice. While most of these issues can be dealt with at the strategic level of designing the notice, there are also specific visual communication design issues that need to be addressed.

Invisible Structure and Organisation of Information

Long stretches of text with no visible structure or content organisation are the lowest form of privacy notice. These are blocks of text where the information is flattened, with no visual markers such as section separators or contrasting colour and typography to distinguish between types of content. In such notices, the headings and subheadings are also not easy to locate and comprehend. To a user, the large block of text appears pointless and irrelevant, and they begin to dismiss or ignore it. Further, the amount of time it would take the user to read and successfully comprehend the entire text is simply impractical, considering the number of websites they visit regularly.


The privacy policy notice by Apple [11], with no use of colours or visuals



The privacy policy notice by Twitter [12], with no visual separators


Visual Contrast Between Front Interface and Privacy Notices

The front-facing interface of an app or website is designed to be far more engaging than the privacy notice pages. There is a visible difference in the UI/UX design of the pages, almost as if the privacy notices were not designed at all. In the case of Uber’s mobile app, the process of adding a destination, selecting the type of cab and confirming a ride has been made simple for any user. This interface has been thought through keeping in mind users’ behaviour and needs, and allows for quick and efficient use of the service. In contrast to the process of buying into the service, the privacy notice on the app is complex and unclear.


Uber mobile app screenshots of the front interface (left) and the policy notice page (right)


Gaining Trust Through the Initial Pitch

A pattern in the privacy notices of most companies is that they attempt to establish credibility and gain confidence by stating that they respect the users’ privacy. This can be seen in the introductory text of the privacy notices of Apple and LinkedIn. The underlying intent seems to be that since the company understands that the users’ privacy is important, the users can rely on them and not read the full notice.


Introduction text to Apple’s privacy policy notice [13]



Introduction text to LinkedIn’s privacy policy notice [14]


Low Navigability

Text-heavy notices need clear content pockets that can be navigated easily using mechanisms such as a menu bar. Navigability of a document allows sections to be located quickly and moved between. Several companies fail to follow this. The Apple and Twitter privacy notices (shown above) have low navigability, as the reader has no prior indication of how many sections the notice contains. The reader could have gained an overview of the content from the section titles, had these been available in a table of contents or a menu. The lack of a navigation system leads to endless scrolling to reach the end of the page.

Facebook’s privacy notice, on the other hand, is an example of good navigability. It uses typography and colour to build a clear structure of information that can be navigated easily using the side menu. The menu doubles as a table of contents for the reader. The side menu, however, does not remain visible while scrolling down the page. This means that while the user is reading a section, they cannot switch to a different section directly from the menu; they need to click the ‘Return to top’ button and then select the section from the menu.


Navigation menu in the Facebook Data Policy page [15]


Lack of Visual Support

Privacy notices can rely heavily on visuals to convey the policies more efficiently. These could be visual summaries or supporting infographics. The data flow on the platform, and how it would affect users, can be clearly visualised using infographics, but most notices fail to adopt them. The LinkedIn privacy notice [16] page shows a video at the beginning of its privacy policy. Although this could have been an opportunity to explain the policy, the video only gives an introduction to the notice and follows it with a pitch to use the platform. The only visuals currently used in notices are icons. Facebook uses icons to identify the different sections so that they can be located easily. But apart from being section identifiers, these icons do not contribute to the communication of the policy, and do not make reading the full policy any easier.


Icon Heavy ‘Visual’ Privacy Notices

The complexity of privacy notices has led to the advent of online tools and generators that create short notices or summaries for apps and websites to supplement the full-text versions of policies. Most of these short notices use icons as a way of visually depicting the categories of data being collected and shared. iubenda [17], an online tool, generates a policy notice summary and full text based on the inputs given by the client. It asks for the services offered by the site or app, and the type of data collection. Icons are used alongside the text headings to make the summary seem more ‘visual’ and hence more easily consumable. This makes the summary more inviting to read, but does not reduce the reading time.

Another icon-based policy summary generator was created by KnowPrivacy. [18] They developed a policy coding methodology by creating icon sets for types of data collected, general data practices, and data sharing. The use of icons in these short notices is more meaningful as they show which type of data is collected or not collected, shared or not shared at a glance without any text. This facilitates comparison between data practices of different apps.


Icon-based short policy notice created for Google by KnowPrivacy [19]


Initiatives to Counter Issues with the Design of Privacy Notices

Several initiatives have called out the issues with privacy notices, and some have even countered them with tools and resources. The TIME.com ranking of internet-based companies’ privacy policies brought attention to the fact that some of the most popular platforms have ineffective policy notices. A user rights initiative called Terms of Service; Didn’t Read [20] rates and labels websites’ terms and privacy policies. There is also the Usable Privacy Policy Project, which develops techniques to semi-automatically analyse privacy policies with crowdsourcing, natural language processing, and machine learning. [21] It uses artificial intelligence to sift through the most popular sites on the Internet, including Facebook, Reddit, and Twitter, and annotate their privacy policies. They recognise that it is not practical for people to read privacy policies; their aim is thus to use technology to extract statements from the notices and match them with things that people care about. However, even AI has not been fully successful in making sense of the dense documents, and has missed some important context. [22]

One of the more provocative initiatives is the Me and My Shadow ‘Lost in Small Print’ [23] project. It shows the text for the privacy notices of companies like LinkedIn, Facebook, WhatsApp, etc. and then ‘reveals’ the data collection and use information that would closely affect the users.

Issues with notices have also been addressed by standardising their format, so people can interpret the information faster. The Platform for Privacy Preferences Project (P3P) [24] was one of the initial efforts in enabling websites to share their privacy practices in a standard format. Similar to KnowPrivacy’s policy coding, there are more design initiatives that are focusing on short privacy notice design. An organisation offering services in Privacy Compliance and Risk Management Solutions called TrustArc, [25] is also in the process of designing an interactive icon-based privacy short notice.


TrustArc’s proposed design [26] for the short notice for a sample site

Most efforts have gone into simplifying notices so as to decode their complex terminology. But there have been very few evaluations of, and initiatives to improve, the design of these notices.


Recommendations

Multilayered Privacy Notices

One of the existing suggestions for increasing the usability of privacy notices is the multilayered privacy notice. [27] Multilayered privacy notices comprise a very short notice designed for use on portable digital devices where space is limited, a condensed notice that contains all the key factors in an easy-to-understand way, and a complete notice with all the legal requirements. [28] Some of the examples above use this in the form of short notices and summaries. The very short notice layer states who is collecting the information, the primary uses of the information, and the contact details of the organisation. [29] The condensed notice layer covers scope (who the notice applies to), personal information collected, uses and sharing, choices, specific legal requirements if any, and contact information. [30] In order to maintain consistency, the sequence of topics in the condensed and the full notice must be the same. Words and phrases should also be consistent in both layers. Although an effective way of simplifying information, multilayered notices must be reconsidered along with the timing of notices. For instance, it could be more suitable to show very short notices at the time of collection or sharing of user data.
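The three layers, together with the consistency rule that the condensed and full layers keep the same topic sequence, can be sketched as a small data model (names and sample text below are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class LayeredNotice:
    very_short: str            # who collects, primary uses, contact details
    condensed: dict[str, str]  # topic -> key-factor summary
    full: dict[str, str]       # topic -> complete legal text

    def consistent(self) -> bool:
        """Condensed and full layers must present the same topics in
        the same sequence (Python dicts preserve insertion order)."""
        return list(self.condensed) == list(self.full)

notice = LayeredNotice(
    very_short="Acme Ltd collects usage data for analytics. Contact: privacy@acme.example",
    condensed={"Scope": "...", "Data collected": "...", "Uses and sharing": "..."},
    full={"Scope": "...", "Data collected": "...", "Uses and sharing": "..."},
)
print(notice.consistent())  # True
```

A layered structure like this also makes the timing question concrete: the `very_short` layer is the natural candidate to surface at the moment data is collected or shared, with the other layers one tap away.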

Supporting Infographics

Based on their visual design, currently available privacy notices can be broadly classified into four categories: (i) text-only notices without a clearly visible structure; (ii) text notices with a contents menu that conveys the structure and aids navigation; (iii) notices with basic use of visual elements, such as icons used only to identify sections or headings; and (iv) multilayered notices, or notices with a short summary before the full text. There is still a lack of visual aid in all these formats. The use of visuals in the form of infographics to depict data flows could be more helpful for users, both in short summaries and in the complete text of policy notices.

Integrating the Privacy Notices with the Rest of the System

The design of privacy notices usually seems disconnected from the rest of the app or website. The UI/UX design of privacy notices requires as much attention as the consumer-facing interface of a system. The contribution of the designer has to be more than creating a clean layout for the text of the notice. The integration of privacy notices with the rest of the system is also related to the early involvement of the designer in the project. The designer needs to understand the information flows and data practices of a system in order to determine whether privacy notices are needed, who should be notified, and about what. This means that decisions such as selecting the categories to be represented in the short or condensed notice, the datasets within these categories, and the ways of representing them would all be part of the design process. The design interventions cannot be purely visual or UI/UX based; they need to be worked out keeping in mind information architecture, content design, and research. By integrating the notices, strategic decisions on the timing and layering of content can be made as well, apart from the aesthetic decisions. Just as the front face of a system’s interface aims to make it easier for the user to avail the service, the policy notice should help the user understand the consequences, by giving them clear notice of unexpected collection or uses of their data.

Practice Based Frameworks on Designing Privacy Notices

There is little guidance available to communication designers for the actual design of privacy notices specific to the requirements and characteristics of a system. [31] The UI/UX practice needs to be expanded to include ethical ways of designing privacy notices online. The paper published by Florian Schaub, Rebecca Balebako, Adam L. Durity, and Lorrie Faith Cranor, ‘A Design Space for Effective Privacy Notice’, in 2015 offers a comprehensive design framework and standardised vocabulary for describing privacy notice options. [32] The objective of the paper is to allow designers to use this framework and vocabulary in creating effective privacy notices. The design space suggested has four key dimensions: ‘timing’, ‘channel’, ‘modality’ and ‘control’. [33] It also provides options for each of these dimensions. For example, the ‘timing’ options are ‘at setup’, ‘just in time’, ‘context-dependent’, ‘periodic’, ‘persistent’, and ‘on demand’. The dimensions and options in the design space can be expanded to accommodate new systems and interaction methods.
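The design space lends itself to encoding as a small validator: a notice design is a choice of one option per dimension. Only the ‘timing’ options below are listed in the text above; the option sets for the other three dimensions are illustrative assumptions based on the paper’s vocabulary, not quotations from this document:

```python
# Four dimensions of the privacy notice design space, each with a
# fixed set of options. Only 'timing' is quoted from the text; the
# rest are assumed for illustration.
DESIGN_SPACE = {
    "timing": {"at setup", "just in time", "context-dependent",
               "periodic", "persistent", "on demand"},
    "channel": {"primary", "secondary", "public"},
    "modality": {"visual", "auditory", "haptic", "machine-readable"},
    "control": {"blocking", "non-blocking", "decoupled"},
}

def valid_notice(config: dict[str, str]) -> bool:
    """A notice design must pick a known option on every dimension."""
    return all(config.get(dim) in opts for dim, opts in DESIGN_SPACE.items())

print(valid_notice({"timing": "just in time", "channel": "primary",
                    "modality": "visual", "control": "blocking"}))  # True
```

New interaction methods would be accommodated exactly as the paper suggests: by adding options (or dimensions) to `DESIGN_SPACE` without changing the validation logic.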

Considering the Diversity of Audiences

Mobile apps and services are used by multiple user groups, so privacy notices are not targeted at a single kind of audience. Diverse audiences have different privacy preferences for the same system. [34] The privacy preferences of these diverse groups of users must be accommodated. In a typical design process for any system, multiple user personas are identified, and the needs and behaviour of each persona are used to determine the design of the interface. Privacy preferences must also be observed as part of these considerations for personas, especially while designing the privacy notices. Different users may need different kinds of notices based on which data practices affect them. [35] Thus, rather than mandating a single mechanism for obtaining informed consent from all users in all situations, designers need to provide users with a range of mechanisms and levels of control. [36]

Ethical Framework for Design Practitioners

An ethical framework is required for design practitioners that can be followed at the level of both deciding the information flow and the experience design. With the prevalence of ‘dark patterns’, the visual design of notices is used to trick users into accepting them. Design ethics can play a huge role in countering such practices. Will Dayable, co-director at Squareweave, [37] a developer of web and mobile apps, suggests that UI/UX designers should “Design Like They’re (Users Are) Drunk”. [38] He asks designers to imagine the user to be in a hurry and still allow them access to all the information necessary for making a decision. He concludes that good privacy UX and UI is about actually trying to communicate with users rather than trying to slip one past them. In principle, an ethical design practice would respect the rights of users and proactively design to facilitate informed consent.


Reconceptualising Privacy Notices

Based on the above recommendations, a guiding sample for multilayered privacy notices has been created. Each system would need its own structure and mechanisms for notices, which are integrated with its data practice, audiences, and medium, but this sample notice provides basic guidelines for creating effective and accessible privacy notices. The aesthetic decisions would also vary based on the interface design of a system.


Sample Fixed Icon for Privacy Notifications

A fixed icon can appear along with all privacy notifications on the system, so that the users can immediately know that the notification is about a privacy concern. This icon should capture attention instantly and suggest a sense of caution. Besides its use as a call to attention, the icon can also lead to a side panel for privacy implications from all actions that the user takes.


Sample Very Short Notice on Desktop and Mobile Platforms

The very short notices can be shown when an action from the user would lead to data collection or sharing. The notice mechanism should be designed to provide notices at different times tailored to a user’s needs in that context. The styling and placement of the ‘Allow’ and ‘Don’t Allow’ buttons should not be biased towards the ‘Allow’ option. The text used in very short and condensed notice layers should be engaging yet honest in its communication.


Sample Summary Notice

The summary or the condensed notice layer should allow the user to gauge at a glance, how the data policy is going to affect them. This can be combined with a menu that lists the topics covered in the full notice. The menu would double up as a navigation mechanism for users. It should be visible to users even as they scroll down to the full notice. The condensed notice can also be supported by an infographic depicting the flow of data in the system.


Sample Navigation Menu

All the images in this section use sample text for the purpose of illustrating structure and layout.

The full notice can be made accessible by creating a clear information hierarchy in the text. The menu which is available on the side while scrolling down the text would facilitate navigation and familiarity with the structure of the notice.


Conclusion

The presentation of privacy notices directly influences the decisions of users online, and ineffective notices make users vulnerable to their data being misused. But currently, there is little conversation about privacy and data protection among designers. Design practice has to become sensitive to privacy and security requirements. Designers need to take accountability for creating accessible notices that benefit the users, rather than the companies issuing them. They must prioritise the well-being of users, even over aesthetics and user experience. The aesthetics of a platform must be directed at achieving transparency in the privacy notice by making it easily readable.

The design community in India has a more urgent task at hand: building a design practice that is informed by privacy. Comparing the privacy notices of Indian and global companies, Indian companies have an even longer way to go in communicating notices effectively. Most Indian companies, such as Swiggy, [39] 99acres, [40] and Paytm, [41] have completely textual privacy policy notices with no clear information hierarchy or navigation. Ola Cabs [42] provides an external link to its privacy notice, which opens as a PDF, making it even more inaccessible. There is thus a complete lack of design input in the layout of these notices.

Designers must engage in conversations with technologists and researchers, and include privacy and other user rights in design education in order to prepare practitioners for creating more valuable digital platforms.


  1. https://www.ftc.gov/system/files/documents/public_comments/2015/10/00038-97832.pdf
  2. https://www.fastcodesign.com/3032719/ui-ux-who-does-what-a-designers-guide-to-the-tech-industry
  3. https://vsdesign.org/publications/pdf/Security_and_Usability_ch24.pdf
  4. https://vsdesign.org/publications/pdf/Security_and_Usability_ch24.pdf
  5. https://fieldguide.gizmodo.com/dark-patterns-how-websites-are-tricking-you-into-givin-1794734134
  6. https://darkpatterns.org/
  7. https://centerforplainlanguage.org/
  8. https://centerforplainlanguage.org/wp-content/uploads/2016/11/TIME-privacy-policy-analysis-report.pdf
  9. http://time.com/3986016/google-facebook-twitter-privacy-policies/
  10. https://www.safaribooksonline.com/library/view/security-and-usability/0596008279/ch04.html
  11. https://www.apple.com/legal/privacy/en-ww/?cid=wwa-us-kwg-features-com
  12. https://twitter.com/privacy?lang=en
  13. https://www.apple.com/legal/privacy/en-ww/?cid=wwa-us-kwg-features-com
  14. https://www.linkedin.com/legal/privacy-policy
  15. https://www.facebook.com/privacy/explanation
  16. https://www.linkedin.com/legal/privacy-policy
  17. http://www.iubenda.com/blog/2013/06/13/privacy-policy-for-android-app/
  18. http://knowprivacy.org/policies_methodology.html
  19. http://knowprivacy.org/profiles/google
  20. https://tosdr.org/
  21. https://explore.usableprivacy.org/
  22. https://motherboard.vice.com/en_us/article/a3yz4p/browser-plugin-to-read-privacy-policy-carnegie-mellon
  23. https://myshadow.org/lost-in-small-print
  24. https://www.w3.org/P3P/
  25. http://www.trustarc.com/blog/2011/02/17/privacy-short-notice-designpart-i-background/
  26. http://www.trustarc.com/blog/?p=1253
  27. https://www.ftc.gov/system/files/documents/public_comments/2015/10/00038-97832.pdf
  28. https://www.informationpolicycentre.com/uploads/5/7/1/0/57104281/ten_steps_to_develop_a_multilayered_privacy_notice__white_paper_march_2007_.pdf
  29. https://www.informationpolicycentre.com/uploads/5/7/1/0/57104281/ten_steps_to_develop_a_multilayered_privacy_notice__white_paper_march_2007_.pdf
  30. https://www.informationpolicycentre.com/uploads/5/7/1/0/57104281/ten_steps_to_develop_a_multilayered_privacy_notice__white_paper_march_2007_.pdf
  31. https://www.ftc.gov/system/files/documents/public_comments/2015/10/00038-97832.pdf
  32. https://www.ftc.gov/system/files/documents/public_comments/2015/10/00038-97832.pdf
  33. https://www.ftc.gov/system/files/documents/public_comments/2015/10/00038-97832.pdf
  34. https://www.safaribooksonline.com/library/view/security-and-usability/0596008279/ch04.html
  35. https://www.ftc.gov/system/files/documents/public_comments/2015/10/00038-97832.pdf
  36. https://vsdesign.org/publications/pdf/Security_and_Usability_ch24.pdf
  37. https://www.squareweave.com.au/
  38. https://iapp.org/news/a/how-ui-and-ux-can-ko-privacy/
  39. https://www.swiggy.com/privacy-policy
  40. https://www.99acres.com/load/Company/privacy
  41. https://pages.paytm.com/privacy.html
  42. https://s3-ap-southeast-1.amazonaws.com/ola-prod-website/privacy_policy.pdf

CIS contributes to ABLI Compendium on Regulation of Cross-Border Transfers of Personal Data in Asia

by Amber Sinha and Elonnai Hickok — last modified Jun 03, 2018 03:10 PM
The Asian Business Law Institute, based in Singapore, published a compendium on “Regulation of cross-border transfer of personal data in Asia”. This was part of an exercise to explore legal convergence in Asia around issues such as data protection, the enforcement of foreign judgments, and principles of restructuring.

The compendium contains 14 detailed reports written by legal practitioners, legal scholars and researchers in their respective jurisdictions, on the regulation of cross-border data transfers in the wider Asian region (Australia, China, Hong Kong SAR, India, Indonesia, Japan, South Korea, Macau SAR, Malaysia, New Zealand, Philippines, Singapore, Thailand, and Vietnam).

The compendium is intended to act as a springboard for the next phase of ABLI's project, which will be devoted to the in-depth study of the differences and commonalities between Asian legal systems on these issues and – where feasible – the drafting of recommendations and/or policy options to achieve convergence in this area of law in Asia.


The chapter titled Jurisdictional Report India was authored by Amber Sinha and Elonnai Hickok. The compendium can be accessed here.

Comments on the Draft National Policy on Official Statistics

by Gurshabad Grover and Sandeep Kumar — last modified Jun 07, 2018 02:54 AM
This submission presents comments by the Centre for Internet & Society, India (“CIS”) on the Draft National Policy on Official Statistics which was released to the public by the Ministry of Statistics and Programme Implementation on 17th May 2018 for comments and views.

Edited by Swaraj Barooah. Download a PDF of the submission here


Preliminary

CIS appreciates the Government’s efforts in realising the importance of the need for high quality statistical information enshrined in the Fundamental Principles of Official Statistics as adopted by the UN General Assembly in January 2014. CIS is grateful for the opportunity to put forth its views on the draft policy. This submission was made on 31st May, 2018.

First, this submission highlights some general defects in the draft policy: there is a lack of principles guiding data dissemination policies; there are virtually no positive mandates set for Government bodies for the secure storage and transmission of data; and while privacy is mentioned as a concern, it has been overlooked in designing the principles for the implementation of surveys. Then, this submission puts forward specific comments suggesting improvements to various sections of the draft policy.

CIS would also like to point out the short timeline between the publication of the draft policy (18th May, 2018) and the deadline set for stakeholders to submit their comments (31st May, 2018). Considering that the policy has widespread implications for all Ministries, citizens, and State legislative rights (the proposed changes include a Constitutional Amendment), it is necessary that such calls for comments are publicised widely, and that enough time is given to the public, so that the Government can receive well-researched comments.

General Comments

Data dissemination

For data dissemination, the draft policy does not set out a general principle or set of principles, and often disregards principles specified in the Fundamental Principles of Official Statistics, the very principles the Government intends to draw its policies on official statistics from. Rather, it relies on context-specific provisions that fail to summarise and articulate a general philosophy for the dissemination of official statistics, and fail to practically embody some stated goals. The first of the Fundamental Principles of Official Statistics, as adopted by the United Nations General Assembly, clearly states that: “[...] official statistics that meet the test of practical utility are to be compiled and made available on an impartial basis by official statistical agencies to honour citizens’ entitlement to public information.”

Let us compare this with Section 5.1.7 (9) of the draft policy, which refers to policies regarding core statistics: it mentions a data “warehouse” to be maintained by the NSO which should be accessible to private and public bodies. While this does point towards an open data policy, such a vision has not been articulated anywhere in the policy.

The draft policy, at the outset, should have general guiding principles of publishing data openly and freely (once it meets the utility test, and it has been ensured that individual privacy will not be violated by the publishing of such statistics). This should serve well to inform further regulations and related policies governing the use and publishing of statistics, like the Statistical Disclosure Control Report.

A general commitment to a well-articulated policy on data dissemination will ensure easy-to-follow principles for the various Ministries that will refer to the document. The additional commitments that open data entails should also be described in the policy document: publishing data in a machine-readable format, making it available in multiple data formats (.txt, .csv, etc.), and including its metadata.
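To make the open-data commitment above concrete, the following is a minimal illustrative sketch (not drawn from the draft policy) of publishing a statistical table as a machine-readable CSV together with a metadata record describing each field. The dataset, field names, and publisher are hypothetical examples.

```python
# Illustrative sketch: machine-readable publication of official statistics.
# All data values, file names, and field names here are hypothetical.
import csv
import json

rows = [
    {"state": "Karnataka", "year": 2017, "literacy_rate_pct": 75.4},
    {"state": "Kerala", "year": 2017, "literacy_rate_pct": 94.0},
]

# 1. Machine-readable data: plain CSV with an explicit header row.
with open("literacy.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["state", "year", "literacy_rate_pct"])
    writer.writeheader()
    writer.writerows(rows)

# 2. Accompanying metadata: field descriptions, units, and provenance
#    travel with the data so downstream users can interpret it correctly.
metadata = {
    "title": "State literacy rates (illustrative sample)",
    "publisher": "National Statistical Office (hypothetical)",
    "fields": {
        "state": "Name of the State",
        "year": "Reference year of the survey",
        "literacy_rate_pct": "Literacy rate, % of population aged 7 and above",
    },
}
with open("literacy.metadata.json", "w") as f:
    json.dump(metadata, f, indent=2)
```

The same table could additionally be exported as .txt or other formats; the essential point is that the structure and meaning of each column are documented in a form that machines, not only humans, can consume.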

Data storage and usage

In the absence of a regime for data protection, it is absolutely necessary that a national policy on statistics provide positive mandates for the encryption of all digitally-stored personal and sensitive information collected through surveys. Even though the current draft of the policy mentions the need to protect confidential information, it sets no mandatory requirements on the Government to ensure the security of such information, especially on digital platforms.

Additionally, all transmission of potentially sensitive information should carry the digital signature of the employee/Department/Ministry authorising the transmission. This will ensure the integrity and authenticity of the information, and provide an auditable trail of the information flowing between entities in the various bodies.
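As a rough sketch of the mechanism recommended above: a real deployment would use asymmetric digital signatures (so the signing Ministry's identity is publicly verifiable by third parties), but the standard-library example below uses an HMAC with a shared key purely to illustrate how a signed record gives integrity checking and an auditable trail. The key, sender names, and record fields are all hypothetical.

```python
# Illustrative sketch only: the draft policy names no mechanism. An HMAC
# stands in for a true digital signature here; production systems would
# use asymmetric keys so that the signer's identity can be verified
# without sharing a secret. All names and values are hypothetical.
import hashlib
import hmac
import json

SHARED_KEY = b"hypothetical-departmental-key"

def sign_record(payload: dict, sender: str) -> dict:
    """Attach the sender's identity and an integrity tag to a record."""
    body = json.dumps(payload, sort_keys=True).encode()
    tag = hmac.new(SHARED_KEY, sender.encode() + body, hashlib.sha256).hexdigest()
    return {"sender": sender, "payload": payload, "signature": tag}

def verify_record(record: dict) -> bool:
    """Recompute the tag on receipt; any tampering changes the digest."""
    body = json.dumps(record["payload"], sort_keys=True).encode()
    expected = hmac.new(SHARED_KEY, record["sender"].encode() + body,
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])

msg = sign_record({"survey_id": 42, "respondents": 1200}, sender="MOSPI")
assert verify_record(msg)            # an intact record verifies

msg["payload"]["respondents"] = 99   # tampering in transit...
assert not verify_record(msg)        # ...is detected on receipt
```

Storing each signed record (sender, payload, signature) as it passes between bodies is what yields the auditable trail the submission calls for.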

Data privacy

It is appreciable that Section 5.7.9 of the draft policy notes, “[a]ll statistical surveys represent a degree of privacy invasion, which is justified by the need for an alternative public good, namely information.” However, all statistical surveys may not be proportionate in their invasiveness, even if they might serve a legitimate public goal in the future.

The draft policy does not address how privacy concerns can be taken into account while designing the survey itself. A necessary outcome of the realisation of the possible privacy violations that may arise due to surveys is that all data collection be “minimally intrusive”, the data be securely stored (see previous comment section, ‘Data storage and usage’), and the surveyed users have control over the data even after they have parted with their information.

Since the policy deals extensively with the implementation of surveys, the following details should be clearly laid out in the policy:

  • The extent to which an individual has control over the data they have provided to the surveying agency.
  • The means of redressal available to an individual who feels that his/her privacy has been violated through the publication of certain statistical information.

Specific Comments

Section 5.1: Dichotomising official statistics as core statistics and other official statistics

Comments

The reasons for dichotomising official statistics have not been appropriately substantiated with evidence, considering the wide implications of the policy proposals that arise from the definition of “core statistics.”

Firstly, the description of what constitutes “core statistics” casts too wide a net by having only a single vague qualitative criterion, i.e. “national importance.” All the other characteristics of “core statistics” are either recommendations or requirements as to how the data will be handled, and thus pose no filter on what can constitute “core statistics.” The wide net is apparent in the fact that even the initially proposed list of “core statistics”, given in Annex-II of the policy, has 120 categories of statistics.

Secondly, the policy does not provide reasons for why the characteristics of “core statistics”, highlighted in Section 5.1.5, should not apply to all official statistics at the various levels of Government. Therefore, the utility of the proposed dichotomy has also not been appropriately substantiated with illustrative examples of how “core statistics” should be considered qualitatively different from all official statistics.

This definition may lead to widespread disagreement between the States and the Centre, because Section 5.2 proposes that “core statistics” be added to the Union List of the Seventh Schedule of the Constitution. How the proposal may affect Centre-State responsibilities and relations pertaining to the collection and dissemination of statistics is elaborated in the next section.

Recommendations

The policy should not make a forced dichotomy between “core” and (ipso facto) non-core statistics. If a distinction is to be made for any reason(s) (such as for the purposes of delineating administrative roles) then such reason must be clearly defined, along with a clear explanation for why such a dichotomy would alleviate the described problem. The definitions should have tangible and unambiguous qualitative criteria.

Section 5.2: Constitutional amendment in respect of core statistics

Comments

The main proposal in the section is that the Seventh Schedule of the Constitution be amended to include “core statistics” in the Union List. This would give the Parliament the legislative competence to regulate the collection, storage, publication and sharing of such statistics, and the Central Government the power to enforce such legislation. Annex-II provides a tentative list of what would constitute “core statistics”; as is apparent, this list is wide-ranging and consists of over 120 items which span the gamut of administrative responsibilities.

The list includes items such as “Landholdings Number, area, tenancy, land utilisation [...]” (S. No. 21), and “Statistics on land records” (S. No. 111) while most responsibilities of land regulation currently lie with the States. Similarly, items in Annex-II venture into statistics related to petroleum, water, agriculture, electricity, and industry; some of which are in the Concurrent or State List.

Statistics are metadata. There is no reason why the administration of a particular subject should lie with the States while the regulation of data about that subject lies solely with the Central Government. It is important to recognise that adding the vaguely defined “core statistics” to the Union List, while enabling the Central Government to plan and execute such statistical exercises, will also prevent the States from enacting any legislation that regulates the management of statistics regarding their own administrative responsibilities.

The regulation of State Government records in general has been a contentious issue, and its place in our federal structure has been debated several times in the Parliament: the enactment of Public Records Act, 1993; the Right to Information Act, 2005; and the Collection of Statistics Act, 2008 are predicated on an assumption of such competence lying with the Parliament. However, it is equally important to recognise the role States have played in advancing transparency of Government records. For example, State-level Acts analogous to the Right to Information Act existed in Tamil Nadu and Karnataka before the Central Government enactment.

Recommendations

We strongly recommend that “statistics” be included in the Concurrent List, so that States are free to enact progressive legislation which advances transparency and accountability, and is not in derogation of Parliamentary legislation.

The Ministry should view this statistical policy document as a venue to set the minimum standards for the collection, handling and publication of statistics regarding its various functions. If the item is added to the Concurrent List, the States, through local legislation, will only have the power to improve on the Central standards, since in cases of conflict, State-level laws will be superseded by Parliamentary ones.

Section 5.3: Mechanism for regulating core statistics including auditing

Comments

The draft policy in Section 5.3.2 says, “[...] The Committee will be assisted by a Search Committee headed by the Vice-Chairperson of the NITI Aayog, in which a few technical experts could be included as Members.” The non-committal nature of the word ‘could’ detracts from the importance of having technical experts on this committee by making their inclusion optional. The policy also does not specify who has the power to include technical experts as Members of the Search Committee. The statement should specify either a minimum or an exact number of members, and not use the non-committal word “could”.

The National Statistical Development Council, as mentioned in Section 5.3.9, is supposed to “handle Centre-State relations in the areas of official statistics, the Council should be represented by Chief Ministers of six States to be nominated by the Centre” (Section 5.3.10). The draft does not elaborate on the rationale for including just six States in the Council, nor does it recommend any mechanism on the basis of which the Centre will nominate States to the Council.

Recommendations

The policy should recommend a minimum number of technical experts who must be included in the search committee, along with a clear process for how such members are to be appointed.

Additionally, the policy appropriately recognises the great diversity of India and the unique challenges faced by each State; each State thus has its own requirements. Since Section 5.3.11 recommends that the Council meet at a low frequency of at least once a year, all States should be represented in the Council.

Section 5.4: Official Machinery to implement directions on core statistics

Comments

The functions of the Statistics Wing in the MOSPI, laid out in Section 5.4.7, include advisory functions which overlap with the functions of the National Statistical Commission (NSC) mentioned in Section 5.3.5. Some regulatory functions of the Statistics Wing, like “conducting quality checks and auditing of statistical surveys/data sets”, overlap with the regulatory functions of the NSC mentioned in Section 5.3.7.

In Section 5.3.1, the draft policy explicitly states that “what is feasible and desirable is that production of official statistics should continue with the Government, whereas the related regulatory and advisory functions could be kept outside the Government”. But the Statistics Wing is a part of the Government, and it also has regulatory and advisory functions; this will adversely affect the standing of the NSC as an autonomous body.

There are inconsistencies in the draft policy regarding the importance and need of a decentralised statistical system. Section 3 (Objectives) emphasises that the Indian Statistical System shall function within a decentralised structure. But in Section 5.4.15, the draft says that a decentralised statistical system poses a variety of problems, and advocates a unified statistical system. Again, in Section 5.15, the draft emphasises the development of sub-national statistical systems. These views are inconsistent and create confusion regarding the nature of the statistical system the policy wants to pursue.

Recommendations

The functions of the NSC should be kept in its exclusive domain. Any such overlapping functions should be allocated to one agency taking into consideration the Fundamental Principles on Official Statistics.

The inconsistencies regarding the decentralisation philosophy of the statistical system should be addressed.

Section 5.5: Identifying statistical products required through committees

Comments

While Section 5.5.2 recognises data confidentiality as a goal for statistical coordination, it does not take into account the violation of privacy that might occur due to the sharing of data. For example, a certain individual might agree to share personal information with a particular Ministry, but have apprehensions about it being shared with other Ministries or private parties.

Recommendations

We recommend that point 4 in Section 5.5.2 be read as, “enabling sharing of data without compromising the privacy of individuals and the confidentiality/security of data.” The value of individual privacy stems both from the recent Supreme Court judgment that affirmed privacy as a Fundamental Right, and from Principle 6 of the Fundamental Principles of Official Statistics. Recognising privacy as a goal in this section will add a dimension of individual control that is already articulated in Section 5.7.9.

Annex-VII: Guidelines on Outsourcing statistical activities

Comments

Section 6 defines “sensitive information” as a closed list and leaves no space for the further inclusion of other information that may be interpreted as sensitive. For example, biometric data has not been listed as “sensitive information”.

Section 9.1 of the draft says, “[t]he identity of the Government agency and the Contractor may be made available to informants at the time of collection of data”. It is imperative that informants have the right to verify the identity of the Government agency and the Contractor before parting with their personal information.

Recommendations

The definition of “sensitive information” should be broad-based with scope for further inclusion of any kind of data that may be deemed “sensitive.”

Section 9.1 must mandate that the identity of the Government agency and the Contractor be made available to informants at the time of collection of data.

Section 9.6 can be redrafted to state that each informant must be informed of the manner in which they can access the data collected from them in a statistical project, as well as of the measures taken to deny others access to that information, except in the cases specified by the policy.

Section 10.2 can be improved to state that if information exists in a physical form that makes the removal of the identity of informants impracticable (e.g. on paper), the information should be recorded in another medium and the original records must be destroyed.

Network Disruptions Report by Global Network Initiative

by Akriti Bopanna last modified Jun 12, 2018 01:31 AM
Around 70% of all known shutdowns in the world took place in India in 2017. The same year, the Telecom Regulatory Authority of India (TRAI) released the “Temporary Suspension of Internet Services” rules, giving State and Central Government officials the power to terminate Internet services as per the guidelines.

The report by Global Network Initiative can be read here.


However, S.144 of the Criminal Procedure Code as well as Section 5 of the Telegraph Act are still used as legal grounds. The former targets unlawful assembly, while the latter gives authorities the right to prevent the transmission of messages, applicable to messages sent over the Internet as well. A case in the Gujarat High Court challenging the validity of using S.144 of the CrPC was dismissed, with the court essentially stating that the Government could use the section to enforce shutdowns to maintain law and order.

The right to the Internet has been accepted as a fundamental right by the United Nations, and one which cannot be disassociated from the exercise of freedom of expression and opinion and the right to peaceful assembly. These are rights guaranteed by the Constitution and affirmed in the Universal Declaration of Human Rights, and thus should be protected both online and offline. Online movements are unpredictable and dynamic, making Governments fearful of their lack of control over content-hosting websites. This fear becomes their de facto perception of online services, resulting in network shutdowns regardless of the reality on the ground.

Given the rising importance of this issue, the Global Network Initiative has published a report on network disruptions by Jan Rydzak. A former Google Policy Fellow and now a PhD candidate at the University of Arizona, Rydzak conducts research on the nexus between technology and protest. The report, which uses India as a case study, calls for more attention to network disruptions, the “new form of digital repression”, and delves into their impact on human rights. Rydzak widens the gamut of affected rights by discussing civil and political rights such as the freedom of assembly, the right to equality, and religious belief. These ramifications have not been widely discussed so far, and the report helps shine a light on the collateral damage incurred through these shutdowns. Through a multitude of interviews with various stakeholders, the author brings to the forefront the human rights implications of network disruptions for different groups of individuals, such as women, immigrants and certain ethnic groups. These dangers are even greater for vulnerable populations, and the report analyses all of the above comprehensively.

NITI Aayog Discussion Paper: An aspirational step towards India’s AI policy

by Sunil Abraham, Elonnai Hickok, Amber Sinha, Swaraj Barooah, Shweta Mohandas, Pranav M Bidare, Swagam Dasgupta, Vishnu Ramachandran and Senthil Kumar — last modified Jun 13, 2018 01:08 PM
The National Strategy for Artificial Intelligence, a discussion paper on India’s path forward in AI, is a welcome step towards a comprehensive document that reflects the government's AI ambitions. The 115-page discussion paper attempts to be an all-encompassing document looking at a host of AI-related issues, including privacy, security, ethics, fairness, transparency and accountability.

Download the Report


The paper identifies five focus areas where AI could have a positive impact in India. It also focuses on reskilling as a response to the potential problem of job losses due to the future large-scale adoption of AI in the job market. This blog is a follow-up to the comments made by CIS on Twitter about the paper, and seeks to reflect on the National Strategy as a well-researched AI roadmap for India. In doing so, it identifies areas that can be strengthened and built upon.

Identified Focus Areas for AI Intervention

The paper identifies five focus areas (Healthcare, Agriculture, Education, Smart Cities and Infrastructure, and Smart Mobility and Transportation) which NITI Aayog believes will benefit most from the use of AI in bringing about social welfare for the people of India. Although these sectors are essential to the development of a nation, the failure to include the manufacturing and services sectors is an oversight. Focussing on manufacturing is fundamental not only in terms of economic development and user base, but also regarding questions of safety and the impact of AI on jobs and economic security. The same holds true for the services sector, particularly since AI products are being made for the use of consumers, not just businesses. The use of AI in the services sector also raises critical questions about user privacy and ethics. Another sector the paper fails to include is defense; this is worrying since India is chairing the Group of Governmental Experts on Lethal Autonomous Weapons Systems (LAWS) in 2018. Across sectors, the report fails to look at how AI could be utilised to ensure accessibility and inclusion for persons with disabilities. This is surprising, as aid for the differently abled and accessibility technology was one of the 10 domains identified in the Task Force Report on AI published earlier this year, and the paper aims to identify applications with maximum social impact and inclusion.

In its vision for the use of AI in smart cities, the paper suggests the adoption of a sophisticated surveillance system as well as the use of social media intelligence platforms to check and monitor people’s movements both online and offline to maintain public safety. This is at variance with constitutional standards of due process and the criminal law principles of reasonable grounds and reasonable suspicion. Further, the use of such methods will pose issues of judicial inscrutability. From a rights perspective, state surveillance can directly interfere with fundamental rights including privacy, freedom of expression, and freedom of assembly. Privacy organisations around the world have raised concerns regarding increased public surveillance through the use of AI. Though the paper recognises the impact such uses would have on privacy, it fails to set out a strong and forward-looking position on the issue, such as advocating that such surveillance must be lawful and in line with international human rights norms.

Harnessing the Power of AI and Accelerating Research

One of the ways suggested for the proliferation of AI in India is to increase research, both core and applied, to bring about innovation that can be commercialised. In order to attain this goal, the paper proposes a two-tier integrated approach: the establishment of COREs (Centres of Research Excellence in Artificial Intelligence) and ICTAIs (International Centres for Transformational Artificial Intelligence). However, the roadmap to increase research in AI fails to acknowledge the principles of publicly funded research, such as free and open source software (FOSS), open standards and open data. The report also blames the current Indian intellectual property regime for being “unattractive” and averse to incentivising research and adoption of AI. Section 3(k) of the Patents Act exempts algorithms from being patented, and the Computer Related Inventions (CRI) Guidelines have faced much controversy over the patentability of mere software without a novel hardware component. The paper provides no concrete answer to the question of whether it should be permissible to patent algorithms, and if so, to what extent. Furthermore, there needs to be a standard, either in the CRI Guidelines or the Patents Act, that distinguishes between AI algorithms and non-AI algorithms. Additionally, given that there is no historical precedent establishing that patent rights are required to incentivise the creation of AI, innovative investment-protection mechanisms with fewer negative externalities, such as compensatory liability regimes, would be more desirable. The report further fails to look at the issue holistically and recognise that facilitating rampant patenting can become a barrier that prevents smaller companies from using or developing AI. This is important to be cognisant of, given the central role of startups in the AI ecosystem in India, and because it can work against the larger goal of inclusion articulated by the report.

Ethics, Privacy, Security and Safety

In a positive step forward, the paper addresses a broader range of ethical issues concerning AI including transparency, fairness, privacy and security and safety in more detail when compared to the earlier report of the Task Force. Yet despite a dedicated section covering these issues, a number of concerns still remain unanswered.

Transparency

The section on transparency and opening the black box has several lacunae. First, AI that is used by the government must, to an acceptable extent, be available in the public domain for audit, if not released as free and open source software (FOSS). This should hold true in particular for uses that impinge on fundamental rights. Second, if AI is utilised in the private sector, there currently exists a right to reverse engineer within the Indian Copyright Act, which is not accounted for in the paper. Furthermore, if an AI is involved either in the commission of a crime or the violation of human rights, or in the investigation of such transgressions, questions regarding the judicial scrutability of the AI remain. In addition to explainability, the source code must be made circumstantially available, since explainable AI alone cannot solve all the problems of transparency. Beyond the availability of source code and explainability, a greater discussion is needed about the trade-off between a complex and potentially more accurate AI system (with more layers and nodes) and a system that is potentially less accurate but able to provide a human-readable explanation. It is notable that transparency within human-AI interaction is absent from the paper: key questions, such as whether an AI should disclose its identity to a human, have not been answered.

Fairness

With regard to fairness, the paper mentions how AI can amplify bias in data and create unfair outcomes. However, it neither suggests detailed or satisfactory solutions nor deals with biased historical data in an Indian context. More specifically, there is no mention of regulatory tools to tackle the problem of fairness, such as:

  • Self-certification
  • Certification by a self-regulatory body
  • Discrimination impact assessments
  • Investigations by the privacy regulator

Such tools will need to proactively ensure inclusion, diversity, and equity in composition and decisions.

Additionally, with reference to correcting bias in AI, it should be noted that the technocratic view that systems will self-correct as they continue to be trained on larger amounts of data does not fully recognize the importance of data quality and data curation, and is inconsistent with fundamental rights. Policy objectives of AI innovation must be technologically nuanced and cannot come at the cost of the denial of rights and services by intermediaries.

Further, the paper does not deal with the existence of multiple definitions and principles of fairness, or with the fact that building definitions into AI systems may often involve choosing one definition over another. For instance, it can be argued that the set of AI ethical principles articulated by Google is more consequentialist in nature, involving a cost-benefit analysis, whereas a human rights approach may be more deontological. In this regard, there is a need for interdisciplinary research involving computer scientists, statisticians, ethicists and lawyers.

Privacy

Though the paper underscores the importance of privacy and the need for privacy legislation in India, it limits the potential privacy concerns arising from AI to collection, inappropriate use of data, personal discrimination, unfair gain from insights derived from consumer data (the proposed solution being to explain to consumers the value they gain from this), and unfair competitive advantage from collecting massive amounts of data (which is not directly related to privacy). In this way the paper fails to discuss the full implications AI might have for privacy, and fails to address the data rights necessary to enable the right to privacy in a society where AI is pervasive. It does not engage with emerging data protection principles, such as the right to explanation and the right to opt out of automated processing, which relate directly to AI. Further, there is no discussion of issues such as data minimisation and purpose limitation, which some big data and AI proponents argue against. To that extent, there is a lack of appreciation of the difficult policy questions concerning privacy and AI. The paper is also completely silent on redress and remedy. It further endorses the seven data protection principles postulated by the Justice Srikrishna Committee; however, CIS has pointed out that these principles are generic and not specific to data protection. Moreover, the law chapter of the IEEE’s ‘Global Initiative on Ethics of Autonomous and Intelligent Systems’ has been ignored in favor of the chapter on ‘Personal Data and Individual Access Control’ in Ethically Aligned Design as the recommended international standard. Ideally, both chapters should be recommended for a holistic approach to the issue of ethics and privacy with respect to AI.

AI Regulation and Sectoral Standards

The discussion paper’s approach towards sectoral regulation advocates collaboration with industry to formulate regulatory frameworks for each sector. However, the paper is silent on the possibility of reviewing existing sectoral regulations to understand whether they require amending. We believe this is an important solution to consider, since amending existing regulations and standards often takes less time than formulating and implementing new regulatory frameworks. Furthermore, although the emphasis on awareness in the paper is welcome, awareness must complement regulation and be driven by all stakeholders, especially given India’s limited regulatory budget. Over-reliance on industry self-regulation, by itself, is not advisable: there is an absence of robust industry governance bodies in India, and self-regulation raises questions about the strength and enforceability of such practices. The privacy debate in India has recognized this, and reports like the Report of the Group of Experts on Privacy recommend a co-regulatory framework, with industry developing binding standards that are in line with the national privacy law and that are approved and enforced by the Privacy Commissioner. That said, the UN Guiding Principles on Business and Human Rights and its “protect, respect, and remedy” framework should guide any self-regulatory action.

Security and Safety of AI Systems

In terms of the security and safety of AI systems, the paper seeks to shift the discussion of accountability from being primarily about liability to being about the explainability of AI. Furthermore, there is no recommendation of immunities or incentives for whistleblowers or researchers to report privacy breaches and vulnerabilities. The report also does not recognize that certain uses of AI, such as those in healthcare and autonomous transportation, are more critical than others because of their potential for harm to humans. A key component of accountability in these sectors will be the evolution of appropriate testing and quality assurance standards. Only then should safe harbours be discussed as an extension of the negligence test for damage caused by AI software. Additionally, the paper fails to recommend kill switches, which should be mandatory for all kinetic AI systems. Finally, there is no mention of a mandatory human-in-the-loop for all systems that pose significant risks to safety and human rights. Autonomous AI is viewed only as an economic boost, and its potential risks have not been explored sufficiently. A welcome recommendation would be for all autonomous AI to go through human rights impact assessments.

Research and Education

Being a government think-tank, the NITI Aayog could have dealt in detail with the AI policies of the government and looked at how different arms of the government are aiming to leverage AI and tackle the problems arising out of its use. Instead of merely tabulating the government’s role in each area, especially research, the report could also have listed the various areas where each department could play a role in the AI ecosystem through regulation, education, funding research, and so on. In terms of the recommendations for introducing AI curricula in schools and colleges, the government could also ensure that ethics and rights are part of the curriculum, especially in technical institutions. A possible course of action could include corporations paying for a pan-Indian AI education campaign. This would also require the government to formulate the required academic curriculum, updated to include rights and ethics.

Data Standards and Data Sharing

Based on the amount of data the Government of India collects through its numerous schemes, it has the potential to be the largest aggregator of data specific to India. However, the paper does not consider the use of this data with enough gravity. For example, the paper recommends corporate data sharing for “social good” and making government datasets from the social sector publicly available, yet this section does not mention privacy-enhancing technologies and standards such as pseudonymisation, anonymisation standards, and differential privacy. Additionally, there should be provisions that allow the government to prevent the formation of monopolies by restraining companies from hoarding user data. The open data standards could also be made applicable to private companies, so that they too can share their data in compliance with the privacy-enhancing technologies mentioned above. The paper also acknowledges that AI marketplaces require monitoring and maintenance of quality. It recognises the need for “continuous scrutiny of products, sellers and buyers”, and proposes that the government enable these regulations in a manner that allows private players to set up the marketplace. This is a welcome suggestion, but the legal and ethical framework of the AI marketplace requires further discussion and clarification.
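One of these techniques, differential privacy, can be illustrated with a minimal sketch: before a count derived from a government dataset is published, calibrated Laplace noise is added so that the presence or absence of any single individual cannot be inferred from the output. The dataset, the predicate, and the epsilon value below are purely illustrative assumptions, not drawn from any actual scheme.

```python
import random


def laplace_noise(scale: float) -> float:
    """Sample from Laplace(0, scale): the difference of two
    i.i.d. exponential variables with rate 1/scale."""
    rate = 1.0 / scale
    return random.expovariate(rate) - random.expovariate(rate)


def private_count(records, predicate, epsilon: float = 0.5) -> float:
    """Differentially private count of records matching predicate.

    A counting query has sensitivity 1 (adding or removing one person
    changes the count by at most 1), so the noise scale is 1/epsilon.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)


# Illustrative dataset: beneficiaries of a hypothetical social-sector scheme
records = [{"district": "A"}] * 120 + [{"district": "B"}] * 80
noisy = private_count(records, lambda r: r["district"] == "A")
```

The published value is close to the true count of 120 but randomised, so an observer comparing releases before and after one person joins the dataset learns almost nothing about that person.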

An AI Garage for Emerging Economies

The discussion paper also describes India as an “ideal test-bed” for trying out AI-related solutions. This is problematic, since questions of AI regulation in India have yet to be legally clarified and defined, and India does not have a comprehensive privacy law. Without a strong ethical and regulatory framework, the use of new and possibly untested technologies in India could lead to unintended and possibly harmful outcomes. The government's ambition to position India as a leader among developing countries on AI-related issues should not be achieved by using Indians as test subjects for technologies whose effects are unknown.

Conclusion

In conclusion, NITI Aayog’s discussion paper represents a welcome step towards a comprehensive AI strategy for India. However, the trend of inconspicuously releasing reports (this one and the AI Task Force report), as well as the lack of a call for public comments, seems the wrong way to foster discussion on emerging technologies that will be as pervasive as AI.

The blanket recommendations were provided without examining their viability in each sector. Furthermore, the discussion paper does not sufficiently explore, or at times completely omits, key areas. It barely touches upon societal, cultural and sectoral challenges to the adoption of AI (research that CIS is currently in the process of undertaking). Future reports on Indian AI strategy should pay more attention to the country’s unique legal context and to possible defence applications, and take the opportunity to establish a forward-looking, human-rights-respecting, and holistic position in global discourse and developments. Reports should also consider infrastructure investment as an important prerequisite for AI development and deployment. Digitised data and connectivity, as well as more basic infrastructure such as rural electricity and well-maintained roads, require more funding if AI is to be leveraged for inclusive economic growth. Although there are important concerns, the discussion paper is an aspirational step toward India’s AI strategy.

Why NPCI and Facebook need urgent regulatory attention

by Sunil Abraham last modified Jun 12, 2018 02:07 AM
The world’s oldest networked infrastructure, money, is increasingly dematerialising and fusing with the world’s latest networked infrastructure, the Internet.

The article was published in the Economic Times on June 10, 2018.


As the network effects compound, disruptive acceleration hurtles us towards financial utopia or dystopia. Our fate depends on what we get right and what we get wrong with the law, code and architecture, and the market.

The Internet, unfortunately, has been completely transformed from how it was first architected: from a federated, generative network based on free software and open standards into a centralised environment with an increasing dependency on proprietary technologies.

In countries like Myanmar, some citizens mistake a single social media website, Facebook, for the internet, according to LirneAsia research. India is another market where Facebook could still get its brand mistaken for access itself by some users coming online. This is why Facebook put so many resources into the battle over Free Basics in the run-up to India’s network neutrality regulation. Facebook is an odd corporation.

On one hand, its business model is what some term surveillance capitalism. On the other hand, by acquiring WhatsApp and by keeping end-to-end (E2E) encryption “on”, it has ensured that one and a half billion users can concretely exercise their right to privacy. At the time of the acquisition, WhatsApp’s founders believed Facebook’s promise that it would never compromise their high standards of privacy and security. But 18 months later, Facebook started harvesting data and diluting E2E.

In April this year, my colleague Ayush Rathi and I wrote in Asia Times that WhatsApp no longer deletes multimedia on download but continues to store it on its servers. Theoretically, using the very same mechanism, Facebook could also be retaining encrypted text messages and comprehensive metadata from WhatsApp users indefinitely without making this obvious.

My friend, Srikanth Lakshmanan, founder of the CashlessConsumer collective, is a keen observer of this space. He says in India, “we are seeing an increasing push towards a bank-led model, thanks to National Payments Corporation of India (NPCI) and its control over Unified Payments Interface (UPI), which is also known as the cashless layer of the India Stack.”

NPCI is best understood as a shape shifter. Arundhati Ramanathan puts it best when she says “depending on the time and context, NPCI is a competitor. It is a platform. It is a regulator. It is an industry association. It is a profitable non-profit. It is a rule maker. It is a judge. It is a bystander.”

This results in UPI becoming, what Lakshmanan calls, a NPCI-club-good rather than a new generation digital public good. He also points out that NPCI has an additional challenge of opacity — “it doesn’t provide any metrics on transaction failures, and being a private body, is not subject to proactive or reactive disclosure requirements under the RTI.”

Technically, he says, UPI increases fragility in our financial ecosystem since it “is a centralised data maximisation network where NPCI will always have the superset of data.” Given that NPCI has opted for a bank-led model in India, it is very unlikely that Facebook will be able to leverage its monopoly in the social media market, or the duopoly it shares with Google in the digital advertising market, to become a digital payments monopoly.

However, NPCI and Facebook share the following traits: one, an insatiable appetite for personal information; two, a fetish for hyper-centralisation; three, a marginal commitment to transparency; and four, a poor track record as custodians of consumer trust. The marriage between these like-minded entities has already had a dubious beginning.

Previously, every financial technology company wanting direct access to the NPCI infrastructure had to have a tie-up with a bank. But for Facebook and Google, as they are large players, it was decided to introduce a multi-bank model. This was definitely the right thing to do from a competition perspective. But, unfortunately, the marriage between the banks and the internet giant was arranged by NPCI in an opaque process, and WhatsApp was exempted from the full NPCI certification process for its beta launch.

Both NPCI and Facebook need urgent regulatory attention. For Facebook, a modern data protection law and a more proactive competition regulator are required. The NPCI will hopefully also be subjected to the upcoming data protection law. But it also requires a range of design, policy and governance fixes: greater privacy and security via data minimisation and decentralisation; greater accountability and transparency to the public; separation of powers for better governance; and open access policies to prevent anti-competitive behaviour.

Comments on the Draft Digital Communications Policy

by Anubha Sinha, Gurshabad Grover and Swaraj Barooah — last modified Jun 14, 2018 12:43 PM
This submission presents comments by the Centre for Internet & Society, India (“CIS”) on the Draft Digital Communications Policy which was released to the public by the Department of Telecommunications of the Ministry of Communications on 1st May 2018 for comments and views.

Preliminary

 

On 1st May 2018, the Department of Telecommunications of the Ministry of Communications released the Draft Digital Communications Policy for comments and feedback. We laud the Government’s attempt to realise the socio-economic potential of India by increasing access to the Internet, and to draft a comprehensive policy while adequately keeping in mind the various security and privacy concerns that arise from online communication. On behalf of the Centre for Internet & Society (CIS), we thank the Department of Telecommunications for the opportunity to submit our comments on the draft policy.

We would like to point out two concerns with the consultation process: (i) a character limit imposed on the comments for each section, due to which this submission has to sacrifice comprehensive references to research; and (ii) issues with signing in to the MyGov platform where this consultation was hosted. We strongly recommend that the consultation process be liberal in accepting content, and allow for multiple types of submissions.

Comments

Connect India: Creating a Robust Digital Communication Infrastructure

 
On 2022 Goals
 
a. Provide Universal broadband coverage at 50 Mbps to every citizen

According to UNICEF’s 2017 report, Children in a Digital World, only 29% of internet users in India are female. It is essential that the policy recognise the wide digital gender gap and other differences in internet access that result from traditional sociocultural barriers. Therefore, we recommend that the goal read: “Provide universal broadband coverage at 50 Mbps to every citizen, with special focus on increasing internet access for women, people with disabilities, and historically marginalised communities.”
 
g. Ensure connectivity to all uncovered areas

The term “connectivity” should be changed to “active internet connectivity”. As per the current norms, a gram panchayat may be considered “connected” if the fibre infrastructure exists, but this does not necessarily mean an active internet connection is being serviced in the area. For example, as of May 20, “of 1.22 lakh gram panchayats with fibre connectivity, 1.09 lakh had active internet.”
 
On Strategies
 
1.1 (a) i. BharatNet – Providing 1 Gbps to Gram Panchayats upgradeable to 10 Gbps

The Central Government, under the “State-led” implementation of the BharatNet initiative, has allowed certain state governments to implement the program in their respective states. This has allowed state governments to take misplaced liberty with the core objective of the program, which originally was to increase access to internet services. For example, after the Telecom Commission’s approval of Andhra Pradesh’s “State-led” implementation of the program, the state government set up a body corporate, Andhra Pradesh State FiberNet Limited. This body then went on to exceed its mandate by venturing into the television broadcasting and distribution business by offering Internet Protocol Television (IPTV) services. This is deeply problematic, as it indicates that central government funds meant for increasing internet access are being used for IPTV services, despite TRAI’s repeated recommendations (since 2012) that state-owned entities should not be allowed to enter broadcasting and distribution activities. Allowing state entities into this business is against fair play and competition, runs contrary to the principle of independent and free media, and has chilling effects on the freedom of expression.
Additionally, this has created a problem for aggregated data availability on the expenditure on the program. While the central government should ideally have all data pertaining to state-wise expenditure of funds for the program, data regarding the states implementing the initiative on their own is generally excluded from the data provided by the Ministry. The goals of the program need to be specifically defined so that funds are directed effectively. The program needs stricter monitoring mechanisms to ensure that the intended objectives are met.
 
1.1 (a) iv. JanWiFi – Establishing 2 Million Wi-Fi Hotspots in rural areas

Under present regulations, the resale of communication data logged by WiFi hotspots is not permitted. However, recent news reports suggest that the DoT may change these norms to permit (virtual network) operators to sell this information onward. While changing such norms may incentivise operators to set up WiFi hotspots, the proliferation of internet access cannot come at the cost of users’ privacy. The data available to the operators of these hotspots includes all browsing data, which is sensitive private information and should therefore be restricted from sale. We strongly recommend that, in keeping with the security and privacy goals for consumers envisioned in the later sections of this draft policy, the DoT ensure that strong privacy measures are in place for public WiFi hotspots made available through programs like JanWiFi.
 
1.1 (f) Enabling Infrastructure Convergence of IT, telecom and broadcasting sectors

The policy proposes a convergence of the infrastructure administration currently performed by three central government departments: IT, Broadcasting and Telecom. As admitted in the draft, this will require amendments to many Acts, including the Telegraph Act. However, the draft policy has not clearly delineated the new proposed responsibilities of each department, and avoids elaborating on the nuance that will be required to address the multiple legal and administrative concerns stemming from the proposed convergence. The document also fails to detail how infrastructure (say, internet access through 4G) will be regulated differently from services (say, IPTV operating on 4G). Further clarity is also required on (i) how department-specific concerns (which are unsuited to a larger body) will be handled; and (ii) the auspices under which the new converged body will operate.
 
1.2 (a) Making adequate spectrum available to be equipped for the new broadband era

TRAI’s consultation paper, Allocation and Pricing of Microwave Access (MWA) and Microwave Backbone (MWB) RF carriers (March 2014), recommends exploring the usage of the E-band (71-76 / 81-86 GHz) and V-band (57-64 GHz), and allocating the same to telecom service providers. We recommend that the Ministry accept TRAI’s recommendations and reflect them in this policy.
While the draft policy aims to decrease regulation of the spectrum, including liberalising the spectrum “sharing, leasing and trading” regime, the policy should additionally clarify the government’s stance on unlicensed spectrum usage. CIS has written earlier (June 2012) about the demonstrable need for unlicensed spectrum to create a path for inexpensive connectivity in rural and remote areas.
 
1.2 (a) v. Optimal Pricing of Spectrum to ensure sustainable and affordable access to Digital Communications

The draft policy should review the existing approach to spectrum pricing in India. The Indian telecom sector is under heavy debt, and if rejuvenating this sector via “optimal pricing of spectrum” is a purported goal of this policy, auctions with a view to revenue maximisation should no longer remain the preferred method of assigning spectrum. The National Telecom Policy, 1999, which adopted a revenue-sharing approach to license fees, showed good results for the sector and translated into huge benefits for consumers. The government should adopt a similar approach to rescue the industry.
 

Propel India: Enabling Next Generation Technologies and Services through Investments, Innovation, Indigenous Manufacturing and IPR Generation

 

On Strategies

2.2 (a) ii. Simplifying licensing and regulatory frameworks whilst ensuring appropriate security frameworks for IoT / M2M / future services and network elements incorporating international best practices

The proposal to “simplify” the licensing and regulatory regime is currently vague, and the intentions remain unclear. Simplifying licences without clear intentions can lead to losing the nuance in licence agreements required to maintain competitive markets. In recent months, the industry has already witnessed a dilution of provisions that were in place to ensure healthy competition in the sector. For example, on May 31st, the DoT announced new norms which allow an operator to hold 35% of the total spectrum, as opposed to the earlier regulation, which allowed a maximum of 25%.

2.3 (d) (iii) Providing financial incentives for the development of Standard Essential Patents (SEPs) in the field of digital communications technologies

This is a welcome step by the government to incentivise the development of SEPs in the country. However, this appreciable step will only yield results in the long term, realistically not before a decade. It is equally necessary to improve the environment for licensing SEPs in the short term. The government should take the initiative to create government-controlled patent pools for SEPs, which will solve licensing issues for SEP holders and also improve the transparency of information relating to SEPs. Specifically, we recommend that the government initiate the formation of a patent pool of critical mobile technologies and apply a five percent compulsory license.

Secure India: Ensuring Digital Sovereignty, Safety and Security of Digital Communications

 

On Strategies

3.1 Harmonising communications law and policy with the evolving legal framework and jurisprudence relating to privacy and data protection in India

We welcome the Ministry’s intention to amend licence agreements to include data protection and privacy provisions. In the same vein, the Ministry should also consider removing provisions from licences that prevent operators from using certain encryption methods in their networks. For example, Clause 2.2 (vii) of the License Agreement between the DoT and ISPs prohibits bulk encryption. Additionally, under the License Agreement, encryption of only up to 40 bits in RSA (or equivalent) is normally permitted. Similarly, Clause 37.1 of the Unified Service License Agreement prohibits bulk encryption. These provisions must be revised to ensure that ISPs and other service providers can employ more cryptographically secure methods.

When regulating encryption, we recommend that the government only set positive minimum mandates for the storage and transmission of data, and not set upper limits on the number of bits or on the quality of the cryptographic method. In pursuance of the same goals, we also recommend adding point ‘iii’ to 3.1 (b): “promoting the use of encryption in private communication by providing positive minimum mandates for strong encryption in (or along with) the data protection framework.”
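A rough calculation shows why a 40-bit ceiling offers essentially no protection today. The sketch below assumes an attacker trying one billion keys per second, a modest rate for commodity hardware; RSA key strength is not directly comparable to exhaustive symmetric keyspace search, but the order-of-magnitude gap makes the point.

```python
def brute_force_seconds(key_bits: int, trials_per_second: float = 1e9) -> float:
    """Worst-case seconds to exhaust a keyspace of key_bits bits
    at the assumed trial rate."""
    return (2 ** key_bits) / trials_per_second


# A 40-bit keyspace falls in under 20 minutes at this rate,
# while 128 bits takes on the order of 10^22 years.
forty_bit = brute_force_seconds(40)
strong = brute_force_seconds(128)
strong_years = strong / (60 * 60 * 24 * 365)
```

On these assumptions, `forty_bit` is about 1,100 seconds, whereas `strong_years` exceeds 10^22 years, which is why minimum mandates rather than upper limits are the sensible regulatory posture.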

3.2 (a) Recognising the need to uphold the core principles of net neutrality

Like other goals of the draft policy, the target for ensuring and enforcing net neutrality principles has been set as 2022. However, this goal is achievable as early as December 2018. We suggest that the Government take the first step towards this goal by accepting the net neutrality principles proposed by TRAI and its recommendations to the government, which have been pending with the Ministry since November 2017. The government may additionally take into consideration CIS’ position on net neutrality.

The vaguely worded “appropriate exclusions and exceptions” carved out of the net neutrality principles in the policy need urgent elaboration. Given the vague boundaries between different control layers in digital communication, it is easy to slip into content regulation, which the government must consciously avoid.

3.3 (f) ii. Facilitating lawful interception agencies with state of the art lawful intercept and analysis systems for implementation of law and order and national security

There is no clarity in the policy on how the government plans to meet the goal of “[f]acilitating lawful interception agencies with state of the art lawful intercept and analysis systems for implementation of law and order and national security.” It has recently been suggested that some legal provisions that enable targeted communication surveillance might violate the privacy guidelines laid out in the recent Supreme Court judgment that affirmed the right to privacy. Additionally, mass surveillance, prima facie, does not meet the “proportionality test”. Therefore, the policy document needs to detail how the Ministry will aid intelligence agencies, and whether these interception details will be made known to ISPs, TSPs and the public via reflection in the various License Agreements.

 

Comments on the Telecom Commercial Communications Customer Preference Regulations

by Sandeep Kumar, Torsha Sarkar, Swaraj Barooah, Gurshabad Grover — last modified Jun 23, 2018 12:44 AM
This submission presents comments by the Centre for Internet & Society, India (“CIS”) on the Telecom Commercial Communications Customer Preference Regulations which was released to the public by the Telecom Regulatory Authority of India (TRAI) on 29th May 2018 for comments and views.

Preliminary

This submission presents comments by the Centre for Internet & Society (“CIS”), India on ‘The Telecom Commercial Communications Customer Preference Regulations, 2018’ which were released on 29th May 2018 for comments and counter-comments.

CIS appreciates the intent and efforts of Telecom Regulatory Authority of India (TRAI) to curb the problem of Unsolicited Commercial Communication (UCC), or spam. Spam messages are constant irritants for telecom subscribers. Acknowledging the same, TRAI has proposed regulations which aim to empower subscribers in effectively dealing with UCC. CIS is grateful for the opportunity to put forth its views and comments on the regulations. This submission was made on 18th June 2018. This text has been slightly edited for readability.

The first part of the submission highlights some general issues with the regulations. While TRAI has offered a technological solution to the menace of UCC, the policy documents have no accompanying technical details. TRAI has not made a compelling case for why Distributed Ledger Technologies (DLTs) should be used for storing data instead of a distributed database. There is no clarity on the technical aspects of the proposed DLTs: who the participating nodes in the network are, how these nodes arrive at a consensus, and whether they are independent of each other are questions that remain unanswered. The draft regulations also mention curbing robocalls, but the technical challenges associated with them have not been discussed. Spam which is non-commercial in nature remains outside the scope of the current regulations.

The second part of this submission puts forth specific comments on various sections of the draft and suggests improvements. While CIS appreciates the extension of the deadline from 11th June to 18th June, we would like to highlight that the draft was released on 29th May, and despite the extension, the time to submit comments remained less than a month. Considering that the draft regulations hold significance for the entire telecom industry and nearly 1.5 billion subscribers, TRAI should have granted at least a month for sound scrutiny by stakeholders.

General Comments

Distributed Ledger Technology (DLT)

The draft greatly emphasizes that data regarding Consent, Complaints, Headers, Preferences, Content Template Registers and Entities are stored on distributed ledgers. The intent is to keep data cryptographically secure with no centralized point of control. However, the regulations do not go into the technical details of the working of these distributed ledgers, leading to several potential pitfalls.

As per the draft, every access provider has to establish distributed ledgers for Complaints, Consent, Content, Preference, Header, Entities and so on. There are specific entities mentioned which will act as nodes in the network, and these nodes are preselected.

Whenever a sender seeks to send commercial communications across a list of subscribers, the list is ‘scrubbed’ against the DL-Consent and DL-Preference, to check whether the subscriber has given consent and registered their preference. The sender can only send the commercial communication to the numbers which are present in the scrubbed list.
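As an illustration, the scrubbing step can be thought of as filtering a target list against two lookups. The data structures and names below are our own sketch of the idea, not taken from the draft regulations.

```python
def scrub(target_numbers, dl_consent, dl_preference, sender_id, category):
    """Return only the numbers the sender may lawfully message.

    dl_consent maps a subscriber number to the set of sender IDs the
    subscriber has consented to receive messages from; dl_preference
    maps a number to the set of communication categories the
    subscriber has blocked.
    """
    scrubbed = []
    for number in target_numbers:
        has_consent = sender_id in dl_consent.get(number, set())
        blocked = category in dl_preference.get(number, set())
        if has_consent and not blocked:
            scrubbed.append(number)
    return scrubbed
```

In this sketch, a sender obtains a scrubbed list containing only subscribers who have both consented to that sender and not blocked that category of communication; whether the underlying registers are a distributed ledger or a distributed database is immaterial to this logic.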

The objective of these regulations is to protect consumers’ rights, but the consumer, i.e., the subscriber, is not a node in the distributed ledger. Since the primary benefits of decentralization are gained when trust is devolved to individual subscribers, and individual users are not specified as participating nodes in the ledger, the justification for a distributed ledger is unclear.

Additionally, the proposed regime requires the subscriber to place her trust in the access provider to register the complaint, and thus offers no tangible benefit over the current regulation. While there are penalties for non-compliant Access Providers (APs), there are no business incentives for APs to expend the extra resources required for the effective implementation of this technology, or to act in the users’ interest. This builds a system where APs’ interests clash with subscribers’, yet APs are nonetheless required to be the guardians of subscribers’ concerns.

Further, the nodes are entities constituted by the access providers (APs), and there is no mechanism to ensure that they behave independently of each other. In such a case, it is entirely possible that all nodes on a distributed ledger are run by the same entity, defeating the purpose of establishing consensus. The proposed regulations do not address this scenario.

One solution would be to add subscribers as nodes to the DLT network. But this would be impractical, as the technical challenges involved, including generating public-private key pairs for each user and the computational complexity of the network, are immense. If this is indeed TRAI's intention, it has not been spelled out clearly in the draft regulations. Additionally, in such a scenario, there would be no need to mandate that every AP maintain its own DLT for customer preference and consent artifacts.

Considering the points mentioned above, we request TRAI to publish the technical specifications of the DLTs, addressing the following issues:

  1. Who can participate in the network other than the entities mentioned in the regulations? Are these participating entities independent of each other? If not, then how will the conflict of interest be resolved?
  2. What is the consensus algorithm used in the DLTs?
  3. Will the code to implement DLTs be open-source?

Our recommendations are three-fold in this regard:

If a distributed ledger is used, then mechanisms should be devised to ensure the integrity of the consensus. For this, the participating nodes in the network must be independent of each other. The aforementioned points regarding the consensus protocol should be taken into consideration as well.

In place of DLTs, we recommend the use of a distributed database with signature-based authentication and encryption of the data to be stored. The immutability and non-repudiation of data can be achieved in this way. Ledgers such as DL-Consent, DL-Preference and DL-Complaints are instances where authentication of the data and the subscriber can be done using simpler means such as OTP verification; such ledgers need not utilize DLTs.
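A minimal sketch of what signature-based authentication of a stored record could look like, assuming a secret key held by the authenticating party. Note that a keyed HMAC, as used here for brevity, only provides tamper-evidence; full non-repudiation would require an asymmetric signature scheme (e.g. Ed25519), which this sketch elides. The record fields are illustrative.

```python
import hashlib
import hmac
import json

def sign_record(record, key):
    """Serialize a record deterministically and attach an integrity tag."""
    payload = json.dumps(record, sort_keys=True).encode()
    tag = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return {"record": record, "tag": tag}

def verify_record(signed, key):
    """Recompute the tag and compare in constant time."""
    payload = json.dumps(signed["record"], sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signed["tag"])
```

Any modification of a stored consent or preference record would invalidate its tag, giving the tamper-evidence the draft seeks from DLTs without the overhead of a consensus protocol.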

The regulations should mandate the open-source publication of the implementation of the DLTs. This will enable interoperability, add transparency to the functioning of the regulations, and enable security audits to ensure accountability of the APs.

Broadening the scope of the Regulations to non-commercial communication

The proposed regulations specifically attempt to curb unsolicited commercial communications as defined in Regulation 2(bt). However, there are other forms of communication which are unsolicited and non-commercial, including political messages and market surveys.

We recommend that the scope of the regulations be broadened to include both commercial and non-commercial communications, and that both be grouped under the category of Institutional Communications. Wherever needed, the regulations dealing with UCC should be adapted to suit the specific requirements of unsolicited non-commercial communications as well. At the same time, the regulations should ensure that individual communications are not brought within their ambit.

Technical challenges in combating Robocalls

Robocalls are defined in Regulation 2(ba) and, in Schedule IV, provision 3, they have been clubbed with other kinds of spam. However, there are specific technical challenges in regulating robocalls. At present, ‘block listing’ is the prevalent model, whereby a number can be identified and then blocked so that it cannot be used further. But with robocalls, spoofing of other numbers is easily achievable, which makes blocking the real identity of the caller difficult. The proposed regulations do not adequately address this challenge.

The Alliance for Telecommunications Industry Solutions (ATIS), together with working groups of the Internet Engineering Task Force (IETF), has been working on a different approach to this problem: standards for all mobile and VoIP calling services that would enable cryptographic digital call signing, “so calls can be validated as originating from a legitimate source, and not a spoofed robocall system. The protocols, known as ‘STIR’ and ‘SHAKEN,’ are in industry testing right now through ATIS's Robocalling Testbed, which has been used by companies like Sprint, AT&T, Google, Comcast, and Verizon so far”.
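The actual STIR/SHAKEN protocols use X.509 certificates and signed JWT ‘PASSporT’ tokens (RFC 8225); the drastically simplified sketch below only illustrates the core idea that a spoofed caller ID cannot carry a valid, fresh signature from the originating carrier. The field names and the use of an HMAC are our own simplifying assumptions, not the real protocol.

```python
import hashlib
import hmac
import json
import time

def attest_call(caller, callee, key, now=None):
    """Originating carrier signs the call's claimed origin and timestamp."""
    claims = {"orig": caller, "dest": callee,
              "iat": now if now is not None else time.time()}
    payload = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "sig": sig}

def verify_call(token, key, max_age=60.0, now=None):
    """Terminating side checks the signature and the token's freshness."""
    payload = json.dumps(token["claims"], sort_keys=True).encode()
    good_sig = hmac.compare_digest(
        hmac.new(key, payload, hashlib.sha256).hexdigest(), token["sig"])
    age = (now if now is not None else time.time()) - token["claims"]["iat"]
    return good_sig and 0 <= age <= max_age
```

A robocaller spoofing a caller ID cannot produce a token that verifies, so the terminating network can flag or drop the call rather than relying on after-the-fact block lists.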

TRAI should take into account these developments and propose a specific regime accordingly. One possible way forward, for now, could be the banning of robocalls unless there is explicit opt-in by subscribers.

Registration of content-template

The draft envisages a distributed ledger system for the registration of content templates, each of which would have a fixed part and a variable part. A content template needs to be registered with the content template registrar, which would be an authorized entity.

Problematically, the content template is defined to include the fixed part as well as the variable part. Further, Schedule I, provision 4(3)(e) mandates that content template registration functions be utilized to extract the fixed and variable portions from actual messages offered for delivery or already delivered. The variable portion of the message contains information specific to a customer, as defined in regulation 2(q)(ii). In addition to the privacy concerns with accessing the variable part, there is no functional reason for variable portions to be extracted from the actual message, as only the fixed portion needs to be verified.

The hash of the fixed portion of the message can be used to identify whether a user has received UCC. We therefore recommend that the variable portion of the message not be made accessible to entities, because it is not required to identify a message as UCC.
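To illustrate why the variable portion need not be extracted, consider a hypothetical template format where variable fields are marked with {placeholders}: a delivered message can be matched against the registered template, and the template identified by a hash of its fixed text alone. This marker convention is our own assumption, not taken from the draft.

```python
import hashlib
import re

PLACEHOLDER = re.compile(r"\{[^}]*\}")

def template_to_pattern(template):
    # Escape the fixed text and let each placeholder match anything, so
    # verification never needs to read the variable content itself.
    parts = PLACEHOLDER.split(template)
    return re.compile("^" + ".*".join(map(re.escape, parts)) + "$")

def fixed_portion_hash(template):
    # Only the fixed text enters the hash that identifies the template.
    fixed = "".join(PLACEHOLDER.split(template))
    return hashlib.sha256(fixed.encode()).hexdigest()
```

Two templates that differ only in their variable fields yield the same fixed-portion hash, which is exactly what is needed to classify a message as registered or unsolicited without ever inspecting customer-specific data.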

‘Safe and Secure Manner’

Throughout the draft, reference is made to the data collected being stored and/or exchanged in a ‘safe and secure manner’, without any clarification as to what this term implies.

We recommend that the term be defined as ‘measures in accordance with reasonable security practices and procedures’ as given in section 43A of the Information Technology Act, 2000, read with rule 8 of the Information Technology (Reasonable security practices and procedures and sensitive personal data or information) Rules, 2011.

Bulk Registration

In the Consultation paper published by TRAI, bulk registration was envisaged as a way to curb UCC wherein one member of the family can register on behalf of the family. Australia has already implemented this mechanism.

In India, evidence suggests that major victims of spam are the elderly and people with limited financial capacities. In such cases, consent and preference registration on behalf of these people by one person may help in the successful control of UCC.

Some telecom service providers argued against this by emphasizing the individual choice of a subscriber. However, in cases where the customer has given authorization, the primary user can register consent on his/her behalf. Similarly, since corporate connections are by definition owned and paid for by corporates, bulk registration in those situations can also be done.

We recommend that, given the situation in India, the provision for bulk registration be incorporated in the regulations for the specific scenarios mentioned above. An authorization template giving the nominee power to register on behalf of a class can be incorporated to this effect. An opt-out option must also be incorporated in case an individual’s choice differs from the choice registered in the bulk registration.

Specific Comments

Inferred Consent [Regulation 2(k)(II)(A)]

Comments
Regulation 2(k)(ii)(a) of the Draft defines consent as “voluntary permission given by the customer to the sender to receive commercial communication”. However, the draft also includes, “inferred consent”, which is defined as consent that can be “reasonably inferred from the customer’s conduct or the business and the relationship between the individual and the sender”.

When consent is derived from the customer’s conduct rather than being given explicitly, it defeats its ‘voluntary’ nature. The provision for consent to be ‘reasonably inferred’ from the customer’s conduct is also vague; the draft gives no indication of what kind of conduct would lead to a reasonable inference of implied consent. The definition can also be interpreted to mean that the customer’s conduct will be subject to monitoring, which raises privacy concerns.

Recommendations 
Consent should not be derived from the customer’s conduct; it should be provided explicitly. We recommend amending the definition of ‘inferred consent’ accordingly.

Three years history to be stored in DL-Complaints [Regulations 24(3) and 24(4)]

Comments

Regulations 24(3) and (4) state that the Distributed Ledger for Complaints (DL-Complaints) shall record ‘three years history’ of both the complainant and the sender, with details of complaints made, date, time and status of the resolution of the complaint. It is not clear from the regulation whether this set of data is exhaustive.

Recommendations 
We recognize that the legislative intent behind Regulations 24(3) and (4) was to curb frivolous or false complaints, which has been a long-standing concern of TRAI. Storing both the complainant’s and the sender’s history may aid in resolving this.

We recommend that the language of the regulations may be amended to “three years history which only includes details of all complaint(s) made by him, with date(s) and time(s) . . .”, thereby giving a limiting qualification to the broad scope of the term.

The responsibility of the APs to ensure that the devices support the requisite permissions [Regulation 34]

Comments
Regulation 34 mandates that APs are to ensure that devices “registered in the network” support the requisite permissions of the Apps under these regulations.

In terms of jurisdiction, regulation of the functioning of electronic devices (which can be phones, tablets or smart watches) is outside the scope of the proposed regulations, and probably out of TRAI's regulatory competence.

Even if TRAI can impose the regulation on end devices, this regulation puts the burden on the APs to ensure that devices support the pertinent app permissions. Considering that TRAI itself has been weighing legal recourse against device manufacturers on similar grounds, it is unclear why TRAI assumes that APs have any legal or technical method to ensure control of a device which has neither been manufactured by them nor is it under their physical or remote control.

In modern smartphones, the end-user has full control over most app installations and permissions. This practice is consistent with a consumer’s autonomy over the device and its functioning. Considering that TRAI has not implemented basic security features in the ‘Do Not Disturb’ app, by legally mandating permissions for the app through the second proviso TRAI is putting the privacy of millions of device owners at risk. The proviso further gives TRAI the power to order APs to derecognize devices from their networks. This regulation is draconian and inimical to the rights of consumers, who risk losing network access and connectivity because of their choice of device, which belongs to a completely different business and market.

Recommendations 
Reporting unsolicited messages or calls is a consumer right, and the regulations are in furtherance of that goal. TRAI should enable this right by giving subscribers the option to report spam; it has no reason to force users to do so through what may amount to legal overreach and privacy invasion. Accordingly, we recommend the removal of Regulation 34.

Additional Suggestions

Customer and subscriber

The usage of the terms ‘customer’ and ‘subscriber’ in Regulation 3(1) implies that the two terms have different meanings. This interpretation, however, clashes with the definitions in Regulations 2(u) and 2(bk), whereby a customer is a subscriber, making the provision internally inconsistent.

Either the definition of a ‘customer’ must be clarified and differentiated from that of a ‘subscriber’ in Regulation 2, or Regulation 3 must be amended to indicate its actual object of regulation: the customer or the subscriber.

Drafting misnumbering

There are a few instances of misnumbered regulations and references to regulations which do not exist.

Regulations 25(5)(b) and (c) refer to Regulation 25(3)(a), which does not exist in the given draft. A bare reading of Regulation 25, however, indicates that the intention was to refer to Regulation 25(5)(a); this misnumbering should be rectified.

Regulation 34 refers to Regulation 7(2), which again does not exist. In that case, either Regulation 34 or Regulation 7(2) must be amended to keep the interpretation consistent.

Ambiguous terms

‘Allocation and assignment principles and policies’ - Provision 4(1)(a) of Schedule I of the regulations indicates that header assignment should be done on the basis of ‘allocation and assignment principles and policies’, without any clarification of what this term means. We recommend that this provision be amended accordingly.

The AI Task Force Report - The first steps towards India’s AI framework

by Elonnai Hickok, Shweta Mohandas and Swaraj Paul Barooah — last modified Jun 27, 2018 02:32 PM
The Task Force on Artificial Intelligence was established by the Ministry of Commerce and Industry to leverage AI for economic benefits, and provide policy recommendations on the deployment of AI for India.

The blog post was edited by Swagam Dasgupta. Download PDF here


The Task Force’s Report, released on March 21st 2018, is a result of the combined expertise of members from different sectors and examines how AI will benefit India. It sheds light on the Task Force’s perception of AI, the sectors in which AI can be leveraged in India, the challenges endemic to India and certain ethical considerations. It concludes with a set of policy recommendations for the government to leverage AI for the next five years. While acknowledging AI as a social and economic problem solver, the Report attempts to answer three policy questions:

  • What are the areas where government should play a role?
  • How can AI improve quality of life and solve problems at scale for Indian citizens?
  • What are the sectors that can generate employment and growth by the use of AI technology?

This blog will look at how the Task Force answered these three policy questions. In doing so, it gives an overview of salient aspects and reflects on the strengths and weaknesses of the Report.

Sectors of Relevance and Challenges

In order to navigate the outlined questions, the Report looks at ten sectors that it refers to as ‘domains of relevance to India’. It examines the use of AI in each sector, along with its major challenges and possible solutions. These sectors are: Manufacturing, FinTech, Agriculture, Healthcare, Technology for the Differently-abled, National Security, Environment, Public Utility Services, Retail and Customer Relationship, and Education. While these ten domains are part of the 16 domains of focus listed on the AITF’s web page, it would have been useful to know the basis on which they were identified. A particular strength of the identified sectors is the consideration of technology for the differently abled, as well as the recognition of the need to develop AI systems for spoken and sign languages in the Indian context.

Some of the problems endemic to India that were recognized include infrastructural barriers, managing scale and innovation, and the collection, validation and distribution of data. The Task Force also noted the lack of consumer awareness, and the inability of technology providers to explain benefits to end users, as further challenges. The Task Force, by putting the onus on the individual, seems to hint that the impediment to the uptake of technology is the inability of individuals to understand its benefits, rather than aspects such as poor design, opacity, or misuse of data and insights. Furthermore, although the Report recognizes the challenges associated with data in India and highlights the importance of the quality and quantity of data, it overlooks the importance of data curation in creating reliable AI systems.

Although the Report examines challenges to AI in each sector, it does not cover all the challenges that need to be addressed. For example, it fails to acknowledge the lack of appropriate certification systems for AI-driven health systems and technologies. In the manufacturing sector, the Report fails to highlight contextual challenges associated with the use of AI, such as the difference between deploying autonomous vehicles and using industrial robots.

On the use of AI in retail, the Report, while examining consumer data and its respective regulatory policies, identified the issues as relating to definition, discrimination, data breaches, digital products, and safety awareness and reporting standards. Here, the Report is limited in its understanding of which categories of data can lead to discrimination, and it restricts mechanisms for transparency and accountability to data breaches. The Report could also have been more forward-looking in its position on security, including security by design and security by default. Furthermore, these issues were noted only in the context of the retail sector and ideally should have been discussed across all sectors.

The challenges for utilizing AI for national security could have been examined beyond cost and capacity to include associated ethical and legal challenges such as the need for legal backing. The use of AI in national security demands clear accountability and oversight as it is a ground for legitimate state interference with fundamental rights such as privacy and freedom of expression. As such, there is a need for human rights impact assessments, as well as a need for such uses to be aligned with international human rights norms. Government initiatives that allow country wide surveillance and AI decisions based on such data should ideally be implemented only after a comprehensive privacy law is in place and India’s surveillance regime has been revisited.

Recognizing the potential of AI for the benefit of the differently abled is one of the key takeaways from this section of the Report, which also brings in the need for AI inclusivity. AI in natural language generation and translation systems has the potential to help the large number of youth who are disabled or deprived; AI could therefore have a large positive impact through inclusive growth and empowerment.

Although the Report examines each of the ten domains in an attempt to provide an insight into the role the government can play, there is a lack of clarity about the role each department will play, and is playing, with respect to AI. Even the section which lays down the relevant ministries for each of the ten domains fails to include key ministries and departments. For example, the Report does not identify the Ministry of Education, nor does it list the Ministry of Law for national security. The Report could also have identified the government departments which would be responsible for regulation and standardization, such as the Medical Council of India (healthcare), CII (manufacturing and retail), and RBI (FinTech). The Report also does not recognize other developments around AI emerging out of the government. For example, the Draft National Digital Communications Policy (published on May 1, 2018) seeks to empower the Department of Telecommunications to provide a roadmap for AI and robotics. Along similar lines, the Department of Defence Production also created a task force earlier this year to study the use of AI to accelerate military technology and economic growth. The government should look at building a cohesive AI government body, or clearly delineating the role of each ministry, in order to ensure harmonization going forward.

Areas in need of Government Intervention

The Report also lists the grand challenges where government intervention is required, including data collection and management and the need for widespread expertise contributing to research, innovation, and response. However, while highlighting the need for AI experts from diverse backgrounds, it fails to bring experts from law and policy into the discussion. And while identifying manufacturing, agriculture, healthcare and public utilities as areas where government intervention is needed, the Report does not examine national security beyond noting it as an important domain for India, nor does it treat it as a sector where government intervention is needed.

Participation in International Forums

Another relevant concern the Report underscores is India’s scarce participation in global discussions around AI, whether as researchers, AI developers or government representatives. The Report states that although Indian universities were making efforts to increase their presence at international AI conferences, they were lagging behind other nations. On participation by the government, it recommends a regular presence in international AI policy forums, emphasising the need for India’s active participation in global conversations around AI and in international rulemaking.

Key Enablers to AI

The Report, while analysing the key enablers for AI deployment in India, states that positive societal attitudes will be the driving force behind the proliferation of AI. However, relying on positive societal attitudes alone will not increase trust in AI; steps such as making public the algorithms used by public bodies and enacting a data protection law will be important in enabling trust, beyond merely highlighting success stories.

Data and Data Marketplaces

While the Report identifies data as a challenge where government intervention is needed, it also points to the Aadhaar ecosystem as an enabler. It states that Aadhaar will help the proliferation of AI in three ways: as a creator of jobs related to the collection and digitization of data, as a collector of reliable data, and as a repository of Indian data. However, since the very constitutionality of Aadhaar is yet to be determined by the Supreme Court, the task force should have used caution in identifying Aadhaar as a definitive solution, especially while making statements that Aadhaar, along with the Supreme Court judgement, has created adequate frameworks to protect consumer data. Additionally, the Task Force should have recognized the various concerns that have been voiced about Aadhaar, particularly in the context of the case before the Supreme Court.

This section also proposes the creation of a Digital Data Marketplace. A data marketplace needs to be framed carefully so as to not create a situation where privacy becomes a right available to only those who can afford it. It is concerning that the discussion on data protection and privacy in the Report is limited to policies and guidelines for businesses and not centered around the individual.

Innovation and Patents

The Report states that Indian startups working in the field of AI must be encouraged, and that industry collaborations and funding must be taken up as a policy measure. One way to achieve this is by encouraging innovation, and one way to do that is by adding a commercial incentive, such as IP rights. Although the Report calls for a stronger IP regime that protects and incentivises innovation, it remains ambiguous as to which aspects of IP rights (patents, trade secrets or copyrights) need significant changes. If the Report is specifically advocating stronger patent rights in order to match those of China and the US, then it shows that the task force fails to understand the finer aspects of Indian patent law and the history behind India’s stance on patenting. This includes the fact that Indian patent law excludes algorithms from being patented. Indian patent law, by providing a higher threshold for patenting computer related inventions (CRIs), ensures that only truly innovative patents are granted. Given the controversies over CRIs that have dotted the Indian patent landscape, the task force would have done well to provide more clarity on the ‘how’ and ‘why’ of patenting in this sector, if that is the intent of this suggestion.

Ethical AI framework

Responsible AI

In terms of establishing an ethical AI framework, the Task Force suggests measures such as making AI explainable, transparent, and auditable for biases. The Report notes that, with the increase in human-AI interaction, new standards need to be set for the deployment of AI, as well as industrial standards for robots. However, the Report does not go into detail on how AI could cause further bias based on identifiers such as gender and caste, nor on the myriad concerns around privacy and security. This is especially a concern given that the Report envisions widespread use of AI in all major sectors. The Report treats data as both a challenge and an enabler, but fails to explain the ethical considerations behind the collection and use of data in the context of privacy, security and surveillance, or to account for unintended consequences. In laying out the ethical considerations associated with AI, the Report does not distinguish between the use of AI by the public sector and the private sector. As the government is responsible for ensuring the rights of citizens and holds more power than the citizenry, the public sector needs to be more accountable in its use of AI, especially where AI is proposed to be used for sovereign functions such as national security.

Privacy and Data

The Report also recognises the significance of the implementation of the Aadhaar Act, the privacy judgement and the proposed data protection law for the development and use of AI in India. Yet it does not seem to recognize the importance of a robust and multi-faceted privacy framework, as it assumes that the Aadhaar Act, the Supreme Court judgement on privacy and a potential privacy law have already created a basis for the safe and secure utilization and sharing of customer data. Although the Report tries to be an expansive examination of various aspects of AI for India, it unfortunately does not look in depth at current issues and debates around AI privacy and ethics, and it makes policy recommendations without appearing to fully reflect on their implementation and potential impact. Like the discussion paper by the Niti Aayog, this Report does not consider emerging principles of data protection, such as the right to explanation and the right to opt out of automated processing, which directly relate to AI. Furthermore, there is a lack of discussion of issues such as data minimisation and purpose limitation, which some big data and AI proponents argue against.

Liability

On the question of liability, the Report only states that specific liability mechanisms need to be worked out for certain categories of machines. It does not address the questions of liability applicable to all AI systems, or of on whom the duty of care lies, not only in the case of robots but also in the case of automated decision-making. Thus, there is a need for further thinking on mechanisms for determining liability and on how these could apply to different types of AI (deep learning models and other machine learning models) and AI systems.

AI and Employment

On the topic of jobs and employment, the Report states that AI will create more jobs than it displaces, as a result of the increase in the number of companies and avenues created by AI technologies. Additionally, the Report provides examples of jobs where AI could replace humans (autonomous vehicles, industrial robots, etc.) but does not go as far as envisioning what jobs could be created directly from this replacement. Though the Report recognizes emerging forms of work, such as crowdsourcing platforms like MTurk, it fails to examine the impact of such models of work on workers and on traditional labour market structures and processes. Going forward, it will be important that the government and the private sector take the steps necessary to ensure that fair, protected, and fulfilling jobs are created alongside the adoption of AI. This will include revisiting national and organizational skilling programmes, labour laws, social benefit schemes, and relevant economic policies, and exploring best practices with respect to the adoption and integration of AI in work.

Education and Re-skilling

The task force emphasised the need for a change in the education curriculum as well as the need to reskill the labour force to ensure an AI ready future. This level of reskilling will be a massive effort, and a thorough review and audit of existing skilling programmes in India is needed before new skilling programmes are established and financed. The Report also clarifies that the statistics used were based on a study on the IT component of the industry, and that a similar study was required to analyse AI’s effect on the automation component. Going forward, there is the need for a comprehensive study of the labour intensive sectors and formal and informal sectors to develop evidence based policy responses.

Policy Recommendations

The Task Force, in its policy recommendations, notes that the successful adoption of AI in India will depend on three factors: people, process and technology. However, it does not explain these three factors any further.

National Artificial Intelligence Mission

The most significant suggestion made in the Report is the establishment of the National Artificial Intelligence Mission (N-AIM), a centralised nodal agency for coordinating and facilitating research and collaboration and providing economic impetus to AI startups. The mission, with a budget allocation of Rs 1,200 crore over five years, aims, among other things, to explore various ways to encourage AI research and deployment. Some of the suggestions include targeting and prototyping AI systems and setting up a generic AI test bed. These suggestions seem to draw inspiration from other countries, such as the US DARPA challenges and Japan’s sandbox for self-driving trucks. The establishment of N-AIM is a welcome step to encourage AI research and development on a national scale, and the availability of public funds will encourage more of both. Additionally, government engagement in AI projects has thus far been fragmented, and a centralised body will presumably bring about better coordination and harmonization. Some of the initiatives, such as the Capture the Flag competition that centres on providing real datasets to catalyse innovation, will need to be implemented with appropriate safeguards in place.

Other recommendations

There are other suggestions that are problematic, particularly that of funding “an inter-disciplinary large data integration center in pilot mode to develop an autonomous AI Machine that can work on multiple data streams in real time and provide relevant information and predictions to public across all domains.” Before such a project is developed and implemented, legal clarity is required on a number of factors, a few being data collection and use, and the accuracy and quality of the AI system. There is also a need to ensure that bias and discrimination have been accounted for, and that fairness, responsibility and liability have been defined, bearing in mind that this will be a government-driven AI system. Additionally, such systems should be transparent by design and should include redress mechanisms for potential harms that may arise, whether through the presence of a human in the loop or the existence of a kill switch. These concerns should be addressed through ethical principles, standards, and regulatory frameworks.

The recommendations propose establishing operational standards for data storage and privacy, communication standards for autonomous systems, and standards to allow for interoperability between AI-based systems. A significant lacuna in this list is the development of safety, accuracy, and quality standards for AI algorithms and systems.

Similarly, although the proposed public private partnership model for research and startups is a good idea, this initiative should be undertaken only after questions such as the implications of liability, ownership of IP and data, and the exclusion of critical sectors are thought through.

Furthermore, the suggestion to ‘fund a national level survey on identification of cluster of clean annotated data necessary for building effective AI systems’ needs to recognise existing initiatives around open data, or use them as a starting point. The Report does not clarify whether this survey would involve identifying data.

Conclusion

The inconspicuous release of the Report, as well as the lack of a call for public comments, means that the Report does not incorporate or reflect the sentiments of the public, nor does it draw upon the expertise that exists in India on the topic of, and policies around, emerging technologies, which will have a pervasive and wide effect on society. The need for multi-stakeholder engagement and input cannot be overstated. Nonetheless, the Report of the Task Force is a welcome step in the movement towards a definitive AI policy. The Task Force has attempted to answer the three policy questions keeping people, process and technology in mind; however, it could have provided greater detail about these factors. The Report, which is meant for a wider audience, would have done well to provide greater detail while also providing clarity on technical terms. On a definitional plane, a list of the technologies that the Task Force perceived as AI for the purposes of this Report could also have helped keep it grounded in possible and plausible five-year recommendations.

Compared to the recent Niti Aayog Discussion Paper, this Report misses out on a detailed explanation of AI and ethics; however, it does spend a considerable amount of time on education and the use of AI for the differently abled. Additionally, the Report's statements on the democratisation of development and equal access, as well as on assigning ownership and framing transparent rules for usage of the infrastructure, are positive steps towards making AI inclusive. Overall, the Report is a progressive step towards laying down India's path forward in the field of Artificial Intelligence. The emphasis on India's involvement in international rule-making gives India an opportunity to be a leader of best practice in international forums by adopting forward-looking and human-rights-respecting practices. Whether India will also become a strong contender in the AI race, with policies favouring the development of socio-economically beneficial and ethically grounded AI-backed industries and services, remains to be seen.

 

The Task Force consists of 18 members in total. Of these, 11 members are from the field of AI technology, spanning both research and industry; three are from the civil services; one is from healthcare research; one has an intellectual property law background; and two are from a finance background. The specialisations of the members are not limited to one area, as the members have experience or education in various areas relevant to AI. https://www.aitf.org.in/ There is a notable lack of members from civil society. It may also be noted that only 2 of the 18 members are women.

The Report on the Artificial Intelligence Task Force, Pg. 1, http://dipp.nic.in/sites/default/files/Report_of_Task_Force_on_ArtificialIntelligence_20March2018_2.pdf

ibid.

The Artificial Intelligence Task Force https://www.aitf.org.in/

The Report on the Artificial Intelligence Task Force, Pg. 8

The Report on the Artificial Intelligence Task Force, Pg. 9,10.

The Report on the Artificial Intelligence Task Force, Pg. 9

ibid.

Artificial Intelligence in the Healthcare Industry in India https://cis-india.org/internet-governance/files/ai-and-healtchare-report

Artificial Intelligence in the Manufacturing and Services Sector https://cis-india.org/internet-governance/files/AIManufacturingandServices_Report_02.pdf

The Report on the Artificial Intelligence Task Force, Pg. 21.

Submission to the Committee of Experts on a Data Protection Framework for India, Centre for Internet and Society https://cis-india.org/internet-governance/files/data-protection-submission

The Report on the Artificial Intelligence Task Force, Pg. 22

Draft National Digital Communications Policy-2018, http://www.dot.gov.in/relatedlinks/draft-national-digital-communications-policy-2018

Task force set up to study AI application in military, https://indianexpress.com/article/technology/tech-news-technology/task-force-set-up-to-study-ai-application-in-military-5049568/

It is not just technical experts that are needed; ethical and legal experts, as well as domain experts, need to be part of the decision-making process.

The Report on the Artificial Intelligence Task Force, Pg. 31

Constitutional validity of Aadhaar: the arguments in Supreme Court so far, http://www.thehindu.com/news/national/constitutional-validity-of-aadhaar-the-arguments-in-supreme-court-so-far/article22752084.ece

ibid.

CIS Submission to TRAI Consultation on Free Data http://trai.gov.in/Comments_FreeData/Companies_n_Organizations/Center_For_Internet_and_Society.pdf

The Report on the Artificial Intelligence Task Force, Pg. 30

Section 3(k) of the Patents Act provides that a mere mathematical or business method, or a computer programme or algorithm, cannot be patented.

Patent Office Reboots CRI Guidelines Yet Again: Removes “novel hardware” Requirement

https://spicyip.com/2017/07/patent-office-reboots-cri-guidelines-yet-again-removes-novel-hardware-requirement.html

The Report on the Artificial Intelligence Task Force, Pg. 37

The Report on the Artificial Intelligence Task Force, Pg. 7

ibid.

The Report on the Artificial Intelligence Task Force, Pg. 8

National Strategy for Artificial Intelligence: http://niti.gov.in/writereaddata/files/document_publication/NationalStrategy-for-AI-Discussion-Paper.pdf

Meaningful information and the right to explanation, Andrew D Selbst and Julia Powles, International Data Privacy Law, Volume 7, Issue 4, 1 November 2017, Pages 233–242

The Principle of Purpose Limitation and Big Data, https://www.researchgate.net/publication/319467399_The_Principle_of_Purpose_Limitation_and_Big_Data

M-Turk https://www.mturk.com/

For example, a lower threshold of minimum wages, no job security, etc. https://blogs.scientificamerican.com/guilty-planet/httpblogsscientificamericancomguilty-planet20110707the-pros-cons-of-amazon-mechanical-turk-for-scientific-surveys/

The Report on the Artificial Intelligence Task Force, Pg. 41

The Report on the Artificial Intelligence Task Force, Pg. 46, 47

ibid.

The DARPA Robotics Challenge https://www.darpa.mil/program/darpa-robotics-challenge

Japan may set regulatory sandboxes to test drones and self driving vehicles http://techwireasia.com/2017/10/japan-may-set-regulatory-sandboxes-test-drones-self-driving-vehicles/

Mariana Mazzucato, in her 2013 book The Entrepreneurial State, argued that it is the government that drives technological innovation. She stated that high-risk discovery and development were made possible by government spending, which private enterprises capitalised on once the difficult work was done.

https://tech.economictimes.indiatimes.com/news/technology/govt-of-karnataka-launches-centre-of-excellence-for-data-science-and-artificial-intelligence/61689977, https://analyticsindiamag.com/amaravati-world-centre-for-ai-data/

The Report on the Artificial Intelligence Task Force, Pg. 47

The Report on the Artificial Intelligence Task Force, Pg. 49

The Report on the Artificial Intelligence Task Force, Pg. 47

The AI task force website has a provision for public comments although it is only for the vision and mission and the domains mentioned in the website.

National Strategy for Artificial Intelligence: http://niti.gov.in/writereaddata/files/document_publication/NationalStrategy-for-AI-Discussion-Paper.pdf

CIS contributes to the Research and Advisory Group of the Global Commission on the Stability of Cyberspace (GCSC)

by Arindrajit Basu — last modified Jul 05, 2018 04:00 PM
The Global Commission on the Stability of Cyberspace (GCSC) is an initiative of the Hague Centre for Strategic Studies and the East West Institute that seeks to promote mutual awareness and understanding among various cyberspace communities. It seeks to develop norms and policies that advance the stability and security of cyberspace.

Chaired by Marina Kaljurand and co-chaired by Michael Chertoff and Latha Reddy, the Commission comprises 26 prominent Commissioners: experts hailing from a wide range of geographic regions and representing multiple communities, including academia, industry, government, the technical community and civil society.

As a part of their efforts, the GCSC sent out a call for proposals for papers that sought to analyze and advance various aspects of the cyber norms debate.

Elonnai Hickok and Arindrajit Basu’s paper ‘Conceptualizing an International Security Architecture for Cyberspace’ was selected by the Commissioners and published as a part of the Briefings of the Research and Advisory Group.

Arindrajit Basu represented CIS at the Cyberstability Hearings held by the GCSC on the sidelines of the GLOBSEC forum in Bratislava, a multilateral conference seeking to advance dialogue on various issues of international peace and security.

The published paper and the PowerPoint presentation may be accessed here.

The agenda for the hearings is reproduced below:

GCSC HEARINGS, 19 MAY 2018

HEARINGS: TOWARDS INTERNATIONAL CYBERSTABILITY

Venue: “Habsburg” room, Grand Hotel River Park

15:00-15:15 Welcome Remarks by Marina Kaljurand, Chair of the Global Commission on the Stability of Cyberspace (GCSC) and former Foreign Minister of Estonia

15:15-16:45 Hearing I: Expert Hearing

This session focuses on the topic Cyberstability and the International Peace and Security Architecture and includes scene settings, food-for-thought presentations on the new GCSC commissioned research, briefings and open statements by government and nongovernmental speakers.

Scene setting: “Cyber Diplomacy in Transition” by Carl Bildt, former Prime Minister of Sweden

“Commissioned Research I: Lessons learned from three historical case studies on establishing international norms” by Arindrajit Basu, Centre for Internet and Society, India

Commissioned Research II: “The ‘pre-normative’ framework and options for cyber diplomacy” by Elana Broitman, New America Foundation

“Some Remarks on current thinking within the United Nations” by Renata Dwan, Director, United Nations Institute for Disarmament Research (UNIDIR)

(Registered Statements by Government Advisors)

(Statements by other experts)

(Open floor discussion)

16:45-17:15 Coffee Break

ICANN Diversity Analysis

by Akriti Bopanna last modified Aug 29, 2018 11:19 AM
The by-laws of the Internet Corporation for Assigned Names and Numbers (ICANN) state that it is a non-profit public-benefit corporation which is responsible, at the overall level, for the coordination of the “global internet's systems of unique identifiers, and in particular to ensure the stable and secure operation of the internet's unique identifier systems”. As key stakeholders of ICANN are spread across the world, much of the communication discussing the work of ICANN takes place over email. This analysis of the diversity of participation in ICANN processes, through a study of their mailing lists, was undertaken by Paul Kurian and Akriti Bopanna.

 

The by-laws of the Internet Corporation for Assigned Names and Numbers (ICANN) state that it is a non-profit public-benefit corporation which is responsible, at the overall level, for the coordination of the “global internet's systems of unique identifiers, and in particular to ensure the stable and secure operation of the internet's unique identifier systems”.[1] Previously, this was overseen by the Internet Assigned Numbers Authority (IANA) under a US Government contract, but in 2016 oversight was handed over to ICANN as a global multi-stakeholder body.[2] Given the significance of the multistakeholder nature of ICANN, it is imperative that stakeholders continue to question and improve the inclusiveness of its processes. The current blog post focuses on the diversity of participation in the ICANN process.

As stakeholders are spread across the world, much of the communication discussing the work of ICANN takes place over email. Various mailing lists inform members of ICANN activities and are used for discussions between them, on matters ranging from policy advice to organisational building. Many of these lists are public: anyone can subscribe to them, and non-members can view them through the archives.

CIS analysed the five most active mailing lists amongst the working group mailing lists from January 2016 to May 2018, namely:

  1. Outreach & Engagement,
  2. Technology,
  3. At-Large Review 2015 - 2019,
  4. IANA Transition & ICANN Accountability, and
  5. Finance & Budget mailing lists.

We looked at the diversity among these active participants by focusing on their gender, stakeholder grouping and region. In order to arrive at the data, we referred to public records such as the Statements of Interest which members have to submit to the Generic Names Supporting Organization (GNSO) Council if they want to participate in its working groups. We also used, where available, ICANN Wiki and the LinkedIn profiles of these participants. Given below are some of the observations we made after surveying the data. We acknowledge that there might be some inadvertent errors in the categorisation of these participants, but we are of the opinion that our inferences from the data would not be drastically affected by a few errors.

The following findings were observed:

  • A total of 218 participants were present on the 5 mailing lists that were looked at.
  • Of these, 92 were determined to be active participants (participants who had sent more than the median number of mails in their working group), of which 75 were non-staff members.

Among the active non-staff participants:

  • Out of the 75 participants, 56 (74.7%) were male and 19 (25.3%) were female.
  • 57.3% were identified as members of the industry and technological community, and 1.3% were identified as government representatives. 8.0% were representatives from academia, 25.3% represented civil society, and the remaining 8.0% were from fields that could not be categorised under the above but were related to law and consultancy.
  • Only 14.7% of the participants were from Asia, while the largest shares belonged to Africa and then North America, with 24% and 22.7% participation respectively.
  • Within Asia, we identified only one active participant from China.
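The median-based classification described above can be sketched in a few lines of Python. The mail counts below are hypothetical stand-ins, not the actual CIS dataset, and the `share` helper is our own naming; this is only an illustration of the method, under the assumption that "active" means strictly more mails than the working group's median.

```python
from statistics import median

# Illustrative mail counts per participant for one working-group list.
# These numbers are hypothetical; the actual CIS dataset is not reproduced here.
mail_counts = {
    "alice": 40, "bob": 12, "carol": 7,
    "dan": 3, "erin": 1, "frank": 1,
}

# "Active" participants are those who sent more than the median
# number of mails in their working group.
med = median(mail_counts.values())
active = [name for name, n in mail_counts.items() if n > med]

# Share of a category among active participants, as a percentage
# rounded to one decimal place (matching the figures in the text).
def share(subset, total):
    return round(100 * len(subset) / len(total), 1)

print(med)            # 5.0
print(sorted(active)) # ['alice', 'bob', 'carol']
```

Applied to the real figures, `share` reproduces the percentages quoted above: 19 female participants out of 75 active non-staff members gives `share(range(19), range(75))` = 25.3.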

Concerns

  • The vast majority of the people participating in, and by extension influencing, ICANN's work are male, constituting three-fourths of the participants.
  • The mailing lists are dominated by individuals from industry. This, coupled with the relatively small presence of other stakeholders, creates an environment where concerns emanating from other sections of society could be overshadowed.
  • Only 14.7% of the participants were from Asia, which is concerning since 48.7% of internet users worldwide belong to Asia.[3]
  • China which has the world’s largest population of internet users (700 million people)[4] had only one active participant on these mailing lists.

As a global multistakeholder organisation, ICANN should ideally have the number of representatives from each region be proportionate to the number of internet users in that region. In addition, the participation of women on these mailing lists needs to increase to ensure inclusive contribution to the functioning of the organisation. We did not come across any indication of participation by individuals of non-binary genders.


[1] https://cis-india.org/telecom/knowledge-repository-on-internet-access/icann

[2] https://www.icann.org/news/announcement-2016-10-01-en

[3] https://www.internetworldstats.com/stats.htm

[4] https://www.internetworldstats.com/stats3.htm

CIS submitted a response to a Notice of Inquiry by the US Government on International Internet Policy Priorities

by Akriti Bopanna and Swagam Dasgupta — last modified Aug 24, 2018 07:05 AM
The Centre for Internet and Society drafted a response to a Notice of Inquiry (NOI) issued by the U.S. Commerce Department's National Telecommunications and Information Administration (NTIA) on "International Internet Policy Priorities."

The notice covered several areas, and we commented on the following three: the Free Flow of Information and Jurisdiction; the Multi-stakeholder Approach to Internet Governance; and Privacy and Security. The submission was made by Swagam Dasgupta and Akriti Bopanna. Read the submission here.


The submission broadly covered the following aspects:

The Free Flow of Information and Jurisdiction

  • What are the challenges to the free flow of information online?
  • Which foreign laws and policies restrict the free flow of information online? What is the impact on U.S. companies and users in general?
  • Have courts in other countries issued internet-related judgments that apply national laws to the global internet? What have the effects been on users?
  • What are the challenges to freedom of expression online?
  • What should be the role of all stakeholders globally—governments, companies, technical experts, civil society and end users — in ensuring free expression online?
  • What role can NTIA play in helping to reduce restrictions on the free flow of information over the internet and ensuring free expression online?
  • In which international organizations or venues might NTIA most effectively advocate for the free flow of information and freedom of expression? What specific actions should NTIA and the U.S. Government take?

Multistakeholder Approach to Internet Governance

  • Does the multistakeholder approach continue to support an environment for the internet to grow and thrive? If so, why? If not, why not?
  • Are there public policy areas in which the multistakeholder approach works best? If yes, what are those areas and why? Are there areas in which the multistakeholder approach does not work effectively? If there are, what are those areas and why?
  • Should the IANA Stewardship Transition be unwound? If yes, why and how? If not, why not?
  • What should be NTIA’s priorities within ICANN and the GAC?
  • Are there barriers to engagement at the IGF? If so, how can we lower these barriers?
  • Are there improvements that can be made to the IGF’s structure?

Privacy and Security

  • In what ways are cybersecurity threats harming international commerce? In what ways are the responses to those threats harming international commerce?

DIDP #31 Diversity of employees at ICANN

by Akash Sriram — last modified Aug 21, 2018 09:26 AM
We have requested ICANN to disclose information pertaining to the diversity of employees based on race and citizenship.

This data is being requested to verify ICANN's claim of being an equal opportunities employer. ICANN's employee handbook states that they “...provide equal opportunities and are committed to the principle of equality regardless of race, colour, ethnic or national origin, religious belief, political opinion or affiliation, sex, marital status, sexual orientation, gender reassignment, age or disability.” Data on the diversity of employees based on race and nationality will show the extent to which ICANN has adhered to its commitment to delivering equal opportunities to current and potential employees.

The request filed by CIS can be accessed here.

The Centre for Internet and Society’s Comments and Recommendations to the: Indian Privacy Code, 2018

by Shweta Mohandas, Elonnai Hickok, Amber Sinha and Shruti Trikanand — last modified Jul 20, 2018 01:55 PM
The debate surrounding privacy has in recent times gained momentum due to the Aadhaar judgement and the growing concerns around the use of personal data by corporations and governments.

Click to download the file here


As India moves towards greater digitization and technology becomes even more pervasive, there is a need to ensure the privacy of the individual as well as to hold the private and public sector accountable for the use of personal data. Towards enabling public discourse and furthering the development of a privacy framework for India, a group of lawyers and policy analysts backed by the Internet Freedom Foundation (IFF) have put together a draft citizens' bill encompassing a citizen-centric privacy code based on seven guiding principles.[1] This draft builds on the Citizens Privacy Bill, 2013, which had been drafted by CIS on the basis of a series of roundtables conducted in India.[2] Privacy is one of the key areas of research at CIS; we welcome this initiative and hope that our comments make the Act a stronger embodiment of the right to privacy.

Section by Section Recommendations

Preamble

Comment: The Preamble specifies that the need for privacy has increased in the digital age, with the emergence of big data analytics.

Recommendation: It could instead be worded as ‘with the emergence of technologies such as big data analytics’, so as to recognize the impact of multiple technologies and processes including big data analytics.

Comment: The Preamble states that it is necessary for good governance that all interceptions of communication and surveillance be conducted in a systematic and transparent manner subservient to the rule of law.

Recommendation: The word ‘systematic’ is out of place, and can be interpreted incorrectly. It could instead be replaced with words such as ‘necessary’, ‘proportionate’, ‘specific’, and ‘narrow’, which would be more appropriate in this context.

Chapter 1

Preliminary

Section 2: This Section defines the terms used in the Act.

Comment: Some of the terms are incomplete and a few of the terms used in the Act have not been included in the list of definitions.

Recommendations:

  • The term “effective consent” needs to be defined. The term is first used in the Proviso to Section 7(2), which states: “Provided that effective consent can only be said to have been obtained where...” It is crucial that the Act defines effective consent, especially with respect to sensitive data.
  • The term “open data” needs to be defined. The term is first used in Section 5 that states the exemptions to the right to privacy. Subsection 1 clause ii states as follows “the collection, storage, processing or dissemination by a natural person of personal data for a strictly non-commercial purposes which may be classified as open data by the Privacy Commission”. Hence the term open data needs to be defined in order to ensure that there is no ambiguity in terms of what open data means.
  • The Act does not define “erasure”, although the term does come under the definition of destroy (Section 2(1)(p)). Some provisions use the word erasure; hence, if erasure and destruction refer to different acts, then the term erasure needs to be defined. Otherwise, in order to maintain uniformity, the sections where erasure is used could substitute the term “destroy” as defined under this Act.
  • The definition of “sensitive personal data” does not include location data and identification numbers. The definition of sensitive data must include location data, as the Act also deals in depth with surveillance. With respect to identification numbers, the Act needs to consider identification numbers (e.g. the Aadhaar number, PAN number, etc.) as sensitive information, as such a number is linked to a person's identity and can reveal sensitive personal data such as name, age, location, biometrics, etc. An example can be taken from Article 4(1) of the GDPR,[3] which lists location data as well as identification numbers among the identifiers of personal data, along with other identifiers such as genetic and biometric data.
  • The Act defines consent as the “unambiguous indication of a data subject’s agreement”; however, the definition does not indicate that consent must be informed. The revised definition could read: “the informed and unambiguous indication of a data subject’s agreement”. It is also unclear how this definition of consent relates to ‘effective consent’; this relationship needs to be clarified.
  • The Act defines ‘data controller’ in Section 2(1)(l) as “any person including appropriate government...”. In order to remove any ambiguity over the term person, the definition could specify that it means any natural or legal person.
  • The Act defines ‘data processor’ in Section 2(1)(m) as “any person including appropriate government”. Here too, in order to remove ambiguity, the definition could specify that the term person means any natural or legal person.

CHAPTER II

Right to Privacy

Section 5: This section provides exemption to the rights to privacy.

Comment: Section 5(1)(ii) states that the collection, storage, processing or dissemination by a natural person of personal data for strictly non-commercial purposes is exempted from the provisions of the right to privacy. This clause also states that such data may be classified as open data by the Privacy Commission. The section hence provides individuals immunity in respect of the collection, storage, processing and dissemination of another person's data. However, the provision fails to state what specific activities qualify as non-commercial use.

Recommendation: This provision could potentially be strengthened by specifying that the use must be in the public interest. The other issue with this subsection is that it fails to define open data. If open data were to be read using its common definition, i.e. “data that can be freely used, modified, and shared by anyone for any purpose”,[4] then this section becomes highly problematic, as a simple interpretation would mean that any personal data collected, stored, processed or disseminated by a natural person could become available to anyone. Beyond this, India has an existing framework governing open data. Ideally, the privacy commissioner could work closely with government departments to ensure that open data practices in India are in compliance with the privacy law.

CHAPTER III

Protection of Personal Data

PART A

Notice by data controller

Section 6: This section specifies the obligations to be followed by data controllers in their communication, to maintain transparency and lays down provisions that all communications by Data Controllers need to be complied with.

Comment: There seems to be an error in the Proviso to this section. The proviso states: “Provided that all communications by the Data Controllers including but not limited to the rights of Data Subjects under this part shall may be refused when the Data Controller is, unable to identify or has a well founded basis for reasonable doubts as to the identity of the Data Subject or are manifestly unfounded, excessive and repetitive, with respect to the information sought by the Data Subject”.

Recommendation: The proviso could read as follows: “Provided that all communications by the Data Controllers including but not limited to the rights of Data Subjects under this part may be refused when the Data Controller is...”. We suggest the use of ‘may’, as this makes the provision less limiting on the rights of the data controller.

Additionally, it is not completely clear what ‘including but not limited to...’ would entail. This could be clarified further.

PART B

CONSENT OF DATA SUBJECTS

Section 10: This section talks about the collection of personal data.

Comment: Section 10(3) lays down the information that a person must provide before collecting the personal data of an individual.

Comment: Section 10(3)(xi) states as follows “the time and manner in which it will be destroyed, or the criteria used to Personal data collected in pursuance of a grant of consent by the data subject to whom it pertains shall, if that consent is subsequently withdrawn for any reason, be destroyed forthwith: determine that time period;”. There seems to be a problem with the sentence construction and the rather complex sentence is difficult to understand.

Recommendation: This section could be reworked in such a way that two conditions are clear: one, the time and manner in which the data will be destroyed; and two, the status of the data once consent is withdrawn.

Comment: Section 10(3)(xiii) states that the identity and contact details of the data controller and data processor must be provided. However it fails to state that the data controller should provide more details with regard to the process for grievance redressal. It does not provide guidance on what type of information needs to go into this notice and the process of redressal. This could lead to very broad disclosures about the existence of redress mechanisms without providing individuals an effective avenue to pursue.

Recommendation: As part of the requirement for providing the procedure for redress, data controllers could specifically be required to provide the details of the Privacy Officers, privacy commissioner, as well as provide more information on the redressal mechanisms and the process necessary to follow.

Section 11:This section lays out the provisions where collection of personal data without prior consent is possible.

Comment: Section 11 states: “Personal data may be collected or received from a third party by a Data Controller the prior consent of the data subject only if it is:..”. However, as the title of the section suggests, the sentence should indicate the situations where it is permissible to collect personal data without the prior consent of the data subject; the word “without” is missing from the sentence. Additionally, the sentence could state that personal data may be collected or received directly from an individual or from a third party, as it is possible to directly collect personal data from an individual without consent.

Recommendation:The sentence could read as “Personal data may be collected or received from an individual or a third party by a Data Controller without the prior consent of the data subject only if it is:..”.

Comment: Section 11(1)(i) permits the collection of personal data without prior consent when it is “necessary for the provision of an emergency medical service or essential services”. However, it does not specify the kind or severity of the medical emergency.

Recommendation: In addition to medical emergencies, another exception could be made for imminent threats to life.

Section 12: This section details the special provisions in respect of data collected prior to the commencement of this Act.

Comment: This section states that all data collected, processed and stored by data controllers and data processors prior to the date on which this Act comes into force shall be destroyed within two years of that date, unless consent is obtained afresh within those two years or the personal data has been anonymised in such a manner as to make re-identification of the data subject absolutely impossible. This process, however, can be highly difficult and impractical: it is time-consuming and expensive, particularly for analog collections of data. It is especially problematic where the controller cannot seek the consent of the data subject owing to a change of address, unavailability or death, and in the case of digitised government records.

Recommendation: We suggest three ways in which data collected prior to the Act can be handled. The first is to distinguish data based on whether the data controller specified the purpose of collection before collecting it. If the purpose was not defined, the data can be deleted or anonymised, so there is no need to obtain fresh consent in every case. The purpose of the data can also be intimated to the data subject at a later stage, and the data subject can choose whether they would like the controller to store or process the data. The second is to seek consent afresh only for sensitive data. Lastly, the data controller could be permitted to retain records of data, but must obtain fresh consent before using them. By avoiding a blanket provision for retrospective data deletion, the Act can address situations where deletion is complicated or potentially harmful, by allowing storage, deletion or anonymisation of data based on its purpose and kind.

Comment: Section 12(1)(i) of the Act states that the data will not be destroyed provided that effective consent is obtained afresh within two years. However, as stated earlier, the Act does not define effective consent.

Recommendation: The term effective consent needs to be defined in order to bring clarity to this provision.

PART C

FURTHER LIMITATIONS ON DATA CONTROLLERS

Section 16: This section deals with the security of personal data and duty of confidentiality.

Comment: Section 16(2) states: “Any person who collects, receives, stores, processes or otherwise handles any personal data shall be subject to a duty of confidentiality and secrecy in respect of it.” Similarly, Section 16(3) states that data controllers and data processors shall be subject to a duty of confidentiality and secrecy in respect of personal data in their possession or control. However, apart from the duty of confidentiality and secrecy, data controllers and processors could also have a duty to maintain the security of the data. Though it is important for confidentiality and secrecy to be maintained, ensuring security requires adequate and effective technical controls to be in place.

Recommendation: This section could also emphasise the duty of data controllers to ensure the security of the data. The breach notification could include details about the data affected by a breach or attack, as well as the technical details of the infrastructure compromised.

Section 17: This section details the conditions for the transfer of personal data outside the territory of India.

Comment: Section 17 allows a transfer of personal data outside the territory of India in three situations: if the Central Government issues a notification deciding that the country/international organisation in question can ensure an adequate level of protection compatible with the privacy principles contained in this Act; if the transfer is pursuant to an agreement that binds the recipient of the data to similar or stronger conditions in relation to handling the data; or if there are appropriate legal instruments and safeguards in place, to the satisfaction of the data controller. However, there is no clarification of what would constitute ‘adequate’ or ‘appropriate’ protection, and the section does not account for situations in which the Government has not yet notified a country/organisation as ensuring adequate protection. In comparison, the GDPR, in Chapter V,[5] contains factors that must be considered when determining adequacy of protection, including relevant legislation and data protection rules, the existence of independent supervisory authorities, and the international commitments or obligations of the country/organisation. The GDPR also allows data transfer even in the absence of such a determination in certain instances, including the use of standard data protection clauses that have been adopted or approved by the Commission, legally binding instruments between public authorities, approved codes of conduct, etc. It further allows derogations from these measures in certain situations: when the data subject expressly agrees despite being informed of the risks; when the transfer is necessary for the conclusion of a contract between the data subject and the controller, or between the controller and a third party in the interest of the data subject; or when the transfer is necessary for reasons of public interest, etc. No such circumstances are accounted for in Section 17.

Recommendation: Data controllers and processors could also be given a transition period to align their policies with the new legislation. Making these provisions operational as soon as the Act commences might render controllers or processors guilty of involuntarily breaching its provisions.

Section 19: This section states the special provisions for sensitive personal data.

Comment: Section 19(2) states that, in addition to the requirements set out under sub-clause (1), the Privacy Commission shall set out additional protections in respect of: (i) sensitive personal data relating to data subjects who are minors; (ii) biometric and deoxyribonucleic acid data; and (iii) financial and credit data. This, however, creates additional categories of sensitive data apart from the ones that have already been created.[6] These additional categories can result in confusion and errors.

Recommendation: Sensitive data should not be further categorised, as this can lead to confusion and errors; all sensitive data could instead be subject to the same level of protection.

Section 20: This section states the special provisions for data impact assessment.

Comment: This section states that all data impact assessment reports will be submitted periodically to the State Privacy Commission. It does not, however, specify the circumstances in which such records may be made public.

Recommendation: The section could also provide for making the records of the impact assessment, or relevant parts of it, public. This would hold data controllers and processors to a standard of accountability and transparency. Additionally, as privacy is linked to human rights, the data impact assessment could also include a human rights impact assessment. The Act could further clarify the process for submission to State Privacy Commissions and potential access by the Central Privacy Commission.

Section 20 requires controllers who use new technology to assess the risks to data protection rights arising from processing. ‘New technology’ is defined to include pre-existing technology that is used anew. The reports are required to be sent to the State Privacy Commission periodically. However, there is no clarification of the situations in which such an assessment becomes necessary, or whether all technology must undergo such an assessment before use. Additionally, the differentiation between data processing activities based on whether the processing is incidental or part of the core functioning needs to be clarified. This differentiation is necessary because some data processors and controllers need the data to function; for instance, an ecommerce site requires your name and address to deliver goods, although it does not process the data to make decisions. This can be compared to a credit rating agency that uses the data to decide who will be given a loan based on their creditworthiness. An example can be taken from the GDPR, which, in Article 35, specifies instances in which a data impact assessment is necessary: where a new technology that is likely to result in a high risk to the rights of persons is used; where personal aspects relating to natural persons are processed automatically, including profiling; where special categories of data (including data revealing ethnic/racial origin, sexual orientation, etc.) or biometric/genetic data are processed; where data relating to criminal convictions is processed; and where data concerns the monitoring of publicly accessible areas. Under the GDPR there is no requirement to publish the report or send it to the supervising authority, but the controller is required to review the processor’s operations to ensure compliance with the assessment report.

Recommendation: The reports could be sent to a central authority, which under this Act is the Privacy Commission, in addition to the State Privacy Commission. There also needs to be a differentiation between incidental and express use of data. Data processors should be given at least one year after the commencement of the Act to present their impact assessment reports; this period is required for processors to align themselves with the provisions of the Act and to conduct capacity-building initiatives.

PART D

RIGHTS OF A DATA SUBJECT

Section 21: This section explains the right of the data subject to access her data. It states that the data subject has the right to obtain from the data controller information as to whether any personal data concerning her is collected or processed. The data controller has to provide access not only to such information but also to the personal data that has been collected or processed.

Comment: This section does not provide the data subject the right to seek information about security breaches.

Recommendation: This section could state that the data subject has the right to seek information about any security breaches that might have compromised her data (through theft, loss, leaks etc.). This could also include steps taken by the data controller to address the immediate breach as well as steps to minimise the occurrence of such breaches in the future.[7]

CHAPTER IV

INTERCEPTION AND SURVEILLANCE

Section 28: This section lists out the special provisions for competent organizations.

Comment: Section 28(1) states that “all provisions of Chapter III shall apply to personal data collected, processed, stored, transferred or disclosed by competent organizations unless when done as per the provisions under this chapter”. This does not make provision for other categories of data, such as sensitive data.

Recommendation: This section needs to include not just personal data but also sensitive data, in order to ensure that all types of data are protected under this Act.

Section 30: This section states the provisions for prior authorisation by the appropriate Surveillance and Interception Review Tribunal.

Comment: Section 30(5) states: “any interception involving the infringement of the privacy of individuals who are not the subject of the intended interception, or where communications relate to medical, journalistic, parliamentary or legally privileged material may be involved, shall satisfy additional conditions including the provision of specific prior justification in writing to the Office for Surveillance Reform of the Privacy Commission as to the necessity for the interception and the safeguards providing for minimizing the material intercepted to the greatest extent possible and the destruction of all such material that is not strictly necessary to the purpose of the interception.” This section needs to state why these categories of communication are more sensitive than others. Additionally, interceptions typically target people, not topics of communication: medical matters may come up in a conversation between two construction workers, and a doctor may communicate about finances.

Recommendation: Instead of singling out “medical, journalistic, parliamentary or legally privileged material”, the section could state that “any interception involving the infringement of the privacy of individuals who are not the subject of the intended interception shall satisfy additional conditions including the provision of specific prior justification in writing to the Office for Surveillance Reform of the Privacy Commission”.

Section 37: This section details the bar against surveillance.

Comment: Section 37(1) states that “no person shall order or carry out, or cause or assist the ordering or carrying out of, any surveillance of another person”. The section also prohibits indiscriminate monitoring, or mass surveillance, unless it is necessary and proportionate to the stated purpose. However, it is unclear whether this prohibits surveillance by a resident of their own residential property, which is allowed in Section 5, as the same could also fall within ‘indiscriminate monitoring/mass surveillance’. For instance, a camera installed on a residential property may face outward and therefore capture footage of the road or other public space.

Recommendation: The Act needs to bring more clarity with regard to surveillance, especially with respect to CCTV cameras that are installed in private places but record public spaces such as roads. The Act could have provisions that clearly define the use of CCTV cameras, in order to ensure that cameras installed in private spaces are not used to carry out mass surveillance. Further, the Act could address the use of emerging techniques and technologies, such as facial recognition, that often rely on publicly available data.

CHAPTER V

THE PRIVACY COMMISSION

Section 53: This section details the powers and functions of the Privacy Commission.

Comment: Section 53(2)(xiv) states that the Privacy Commission shall publish periodic reports “providing description of performance, findings, conclusions or recommendations of any or all of the functions assigned to the Privacy Commission”. However, this section does not require such reports to be published annually, to be made publicly available, or to contain details such as the financial aspects of matters under the Act.

Recommendation: The functions could include a duty to disclose information regarding the functioning and financial aspects of matters contained within the Act. Categories that could be included in such reports are the number of data controllers, the number of data processors, the number of breaches detected and mitigated, etc.

CHAPTER IX

OFFENCES AND PENALTIES

Sections 73 to 80: These sections lay out the different punishments for controlling and processing data in contravention of the provisions of this Act.

Comment: These sections, while laying out different punishments for controlling and processing data in contravention of the provisions of this Act, mete out fines extending up to Rs 10 crore. This is problematic because the penalties are not graded by proportionality: less serious offences are not distinguished from graver ones.

Recommendation: There could be a graded approach to penalties based on the severity of the offence. This could take the form of naming and shaming, warnings, and fines graded by the degree of the offence.
----------------------------------------------------------------------

Additional thoughts: As India moves to a digital future, laws need to be in place to ensure that individuals' rights are not violated. By riding on the push to digitisation and emerging technologies such as AI, a strong, all-encompassing privacy legislation can allow India to leapfrog and use these technologies for the benefit of citizens without violating their privacy. A robust legislation can also ensure a level playing field for data-driven enterprises within a framework of openness, fairness, accountability and transparency.


[1] These seven principles include: Right to Access; Right to Rectification; Right to Erasure and Destruction of Personal Data; Right to Restriction of Processing; Right to Object; Right to Portability of Personal Data; and Right to Seek Exemption from Automated Decision-Making.

[2] Bhairav Acharya, The Privacy (Protection) Bill 2013: A Citizen’s Draft, Centre for Internet & Society, available at https://cis-india.org/internet-governance/blog/privacy-protection-bill-2013-citizens-draft.

[3] General Data Protection Regulation, available at https://gdpr-info.eu/art-4-gdpr/.

[4] Antonio Vetro, Open Data Quality Measurement Framework: Definition and Application to Open Government Data, available at https://www.sciencedirect.com/science/article/pii/S0740624X16300132.

[5] General Data Protection Regulation, available at https://gdpr-info.eu/chapter-5/.

[6] Sensitive personal data under Section 2(bb) includes: biometric data; deoxyribonucleic acid data; sexual preferences and practices; medical history and health information; political affiliation; membership of a political, cultural or social organisation, including but not limited to a trade union as defined under Section 2(h) of the Trade Union Act, 1926; ethnicity, religion, race or caste; and financial and credit information, including financial history and transactions.

[7] Amber Sinha, Submission to the Committee of Experts on a Data Protection Framework for India, Centre for Internet & Society, available at https://cis-india.org/internet-governance/files/data-protection-submission.

The Potential for the Normative Regulation of Cyberspace: Implications for India

by Pranav M B last modified Jul 31, 2018 11:49 PM
Author: Arindrajit Basu. Edited by: Elonnai Hickok, Sunil Abraham and Udbhav Tiwari. Research assistance: Tejas Bharadwaj.

The standards of international law combined with strategic considerations drive a nation's approach to any norms formulation process. CIS has already produced work with the Research and Advisory Group (RAG) of the Global Commission on the Stability of Cyberspace (GCSC), which looks at the negotiation processes and strategies that various players may adopt as they drive the cyber norms agenda.

This report focuses more extensively on the substantive law and principles at play and looks closely at what the global state of the debate means for India.

With the cyber norms formulation efforts in a state of flux, India needs to advocate a coherent position that is in sync with the standards of international law while also furthering India's strategic agenda as a key player in the international arena.

This report seeks to draw on the work of scholars and practitioners in the fields of cybersecurity and international law to articulate a set of coherent positions on the four issues identified in this report. It also attempts to incorporate, where possible, state practice on thorny issues of international law; the amount of state practice that may be cited differs with each state in question.

The report provides a bird’s-eye view of the available literature and applicable international law in each of the briefs and identifies areas for further research that would be useful for the norms process, and in particular for policy-makers in India. Historically, India used the standards of international law to inform its positions on various global regimes, such as UNCLOS, and to legitimise its position as a leader of alliances such as the Non-Aligned Movement and AALCO. Of late, however, India has used international law far less in its approach to international relations. This report therefore explores how various debates in international law may be utilised by policy-makers when framing their positions on various issues. Rather than creating original academic content, the aim of the report is to inform policy-makers and academics of the discourse on cyber norms. To make it easier to follow, each brief is followed by a short summary highlighting the key aspects discussed, allowing the reader to access the portion of the brief most relevant to them. The report does not advocate specific stances but highlights the considerations that should be borne in mind when framing one.

The report focuses on four issues of specific relevance to Indian policy-makers. The first brief focuses on the inherent right of self-defence in cyberspace and its value for crafting a stable cyber deterrence regime. The second looks at the technical limits of attributing cyber-attacks and hints at some of the legal and political solutions to these technical hurdles. The third looks at the non-proliferation of cyber weapons and the existing global governance framework, which India could consider when framing its own strategy. The final brief looks at the legal regime on countermeasures and outlines the various grey zones in legal scholarship in this field. It also maps possible future areas of cooperation with the cyber sector on issues such as Active Cyber Defense and the legal framework that might be required if such cooperation were to become a reality. Each brief covers a broad array of literature and jurisprudence and attempts to explore debates that exist both among international legal academics and the strategic community.

The ongoing global stalemate over cyber norms casts a grim shadow over the future of cyber-security. However, as seen with the emergence of the nuclear non-proliferation regime, it is not impossible for consensus to emerge in times of global tension. For India, in particular, this stalemate presents an opportunity to pick up the pieces and carve a leadership position for itself as a key norm entrepreneur in cyberspace.


Read the full report here

Lining up the data on the Srikrishna Privacy Draft Bill

by Sunil Abraham last modified Jul 31, 2018 02:52 AM
In the run-up to the Justice B N Srikrishna committee report, some stakeholders advocated that consent be eliminated and replaced with stronger accountability obligations. This was rejected, and the committee has released a draft bill that has consent as its bedrock, just like the GDPR. And like the GDPR, there exists a legal basis for non-consensual processing of data for the “functions of the state”. What does this mean for law-abiding persons?

The article was published in Economic Times on July 30, 2018


Non-consensual processing is permitted in the bill as long as it is “necessary for any function of the” Parliament or any state legislature. These functions need not be authorised by law.

Or, alternatively, processing “necessary for any function of the state authorised by law” for the provision of a service or benefit, or the issuance of any certification, licence or permit.
Fortunately, however, the state remains bound by the eight obligations in chapter two: fair and reasonable processing, purpose limitation, collection limitation, lawful processing, notice, data quality, data storage limitation, and accountability. The corresponding ground in the GDPR has two sub-clauses: one requiring that the task pass a public interest test, and another that, like the loophole in the Indian bill, potentially covers all interactions the state has with all persons.

The “necessary” test appears both on the grounds for non-consensual processing, and in the “collection limitation” obligation in chapter two of the bill. For sensitive personal data, the test is raised to “strictly necessary”. But the difference is not clarified and the word “necessary” is used in multiple senses.

Under the “collection limitation” obligation the bill says “necessary for the purposes of processing” which indicates a connection to the “purpose limitation” obligation. The “purpose limitation” obligation, however, only requires the state to have a purpose that is “clear, specific and lawful” and processing limited to the “specific purpose” and “any other incidental purpose that the data principal would reasonably expect the personal data to be used for”. It is perhaps important at this point to note that the phrase “data minimisation” does not appear anywhere in the bill.

Therefore “necessary” could broadly be understood to mean data that Parliament or a state legislature requires to perform some function not authorised by law, and data the citizen might reasonably expect a state authority to consider incidental to the provision of a service or benefit, or the issuance of a certificate, licence or permit.

Or, more conservatively, it could be understood to mean data without which it would be impossible for Parliament and state legislatures to carry out functions mandated by law, and data without which it would be impossible for the state to provide the specific service or benefit or issue certificates, licences and permits. As with the GDPR, it is completely unclear why an additional test of “strictly necessary” is, if you will forgive the redundancy, necessary.

After 10 years of Aadhaar, the average citizen “reasonably expects” the state to ask for biometric data to provide subsidised grain. But it is not impossible to provide subsidised grain in a corruption-free manner without using surveillance technology that can remotely, covertly and non-consensually identify persons. Smart cards, for example, implement privacy by design. A “reasonable expectation” test is therefore inappropriate, since this is not a question of changing social mores.

When it comes to persons that are not law abiding the bill has two exceptions — “security of the state” and “prevention, detection, investigation and prosecution of contraventions of law”. Here the “necessary” test is combined with the “proportionate” test.

The proportionality test further constrains processing. For example, GPS data may be necessary to detect that someone has jumped a traffic signal, but it might not be a proportionate response to a minor violation. Along with the requirement of “procedure established by law”, this is indeed a well carved-out exception if the “necessary” test is interpreted conservatively. The only points of concern here are the infringement of a fundamental right for minor offences, and the “prevention” of offences, which implies processing the personal data of innocent persons.

Ideally, consent should be introduced for law-abiding citizens even if it is merely tokenism, because you cannot revoke consent you have not granted in the first place. Alternatively, a less protective option would be to admit that all e-governance in India will be based on surveillance; “necessary” should therefore be conservatively defined and the “proportionate” test introduced as an additional safeguard.

Spreading unhappiness equally around

by Sunil Abraham last modified Jul 31, 2018 02:49 PM
The section of civil society opposed to Aadhaar is unhappy because the UIDAI, and any other state agency that wishes to, can process data non-consensually.

The article was published in Business Standard on July 31, 2018.


There is a joke in policy-making circles — you know you have reached a good compromise if all the relevant stakeholders are equally unhappy. By that measure, the B N Srikrishna committee has done a commendable job since there are many with complaints.

Some in the private sector are unhappy because their demonisation of the European Union’s General Data Protection Regulation (GDPR) has failed. The committee’s draft data protection Bill is closely modelled upon the GDPR in terms of rights, principles, the design of the regulator and the design of regulatory tools like impact assessments. With 4 per cent of global turnover as the maximum fine, there is a clear signal that privacy infringements by transnational corporations will be reined in by the regulator. Getting a law that has copied many elements of the European regulation is good news for us because the GDPR is recognised by leading human rights organisations as the global gold standard. But the bad news is that the Bill also has unnecessarily broad data localisation mandates for the private sector.

Some in the fintech sector are unhappy because the committee rejected the suggestion that privacy be regulated as a property right. This is a positive from the human rights perspective, especially because this approach has been rejected across the globe, including in the European Union. Property rights are inappropriate because a natural law framing of the enclosure of the commons into private property through labour does not translate to personal data. Also, in comparison to patents, or “intellectual property”, the scale of possible discrete property holdings in personal information is several orders higher, posing unimaginable complexity for regulation and possibly creating a gridlock economy.

The section of civil society opposed to Aadhaar is unhappy because the UIDAI, and any other state agency that wishes to, can process data non-consensually. A similar loophole exists in the GDPR. Remember that the definition of processing includes “operations such as collection, recording, organisation, structuring, storage, adaptation, alteration, retrieval, use, alignment or combination, indexing, disclosure by transmission, dissemination or otherwise making available, restriction, erasure or destruction”. This means the UIDAI can collect data from you without your consent and does not have to establish consent for the data it has collected in the past. There is a “necessary” test that is supposed to constrain data collection. But for the last 10-odd years, the UIDAI has deemed it “necessary” to collect biometrics to give the poor subsidised grain. Will those forms of disproportionate non-consensual data collection continue? Most probably, because the report recommends that the UIDAI continue to play the role of the regulator with heightened powers. Which is like trusting the fox with the henhouse.

Employees should be unhappy because the Bill has an expansive ground under which employers can non-consensually harvest their data. The Bill allows for non-consensual processing of any data “necessary” for “recruitment, termination, providing any benefit or service, verifying the attendance or any other activity related to the assessment of the performance”. This is permitted when consent is not an appropriate basis or would involve disproportionate effort on the part of the employer. This is basically a surveillance provision for employers. Either this ground should be removed, as in the GDPR, or a “proportionate” test should be introduced; otherwise disproportionate mechanisms like spyware on work computers will be installed by employers without notice.

Some free speech activists are unhappy because the law contains a “right to be forgotten” provision. They are concerned that this will be used by the rich and powerful to censor mainstream and alternative media. On the face of it, the “right to be forgotten” in the GDPR is a much more expansive “right to erasure”, whilst the Bill only provides for a more limited “right to restrict or prevent continuing disclosure”. However, the GDPR has a clear exception for “archiving purposes in the public interest, scientific or historical research purposes or statistical purposes”. The Bill, like the GDPR, does identify the two competing human rights imperatives: freedom of expression and the right to information. However, by missing the “public interest” test, it does not sufficiently address social power asymmetries.

Privacy and security researchers are unhappy because re-identification has been made an offence without a public interest or research exception. It is indeed a positive that the committee has made re-identification a criminal offence, because the de-identification standards notified by the regulator will always be catching up with the latest mathematical developments. However, in order to protect the very research that the regulator needs in order to protect the rights of individuals, the Bill should have granted the formal and non-formal academic community immunity from liability and criminal prosecution.
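To make concrete what such research looks like, here is a minimal, entirely hypothetical sketch of a linkage attack, the classic re-identification technique: a "de-identified" dataset is joined with a public auxiliary dataset on quasi-identifiers. All names, fields, and records below are invented for illustration.

```python
# Hypothetical linkage attack: a de-identified health dataset is joined
# with a public auxiliary dataset (e.g. a voter roll) on quasi-identifiers
# such as pin code, birth year, and sex. All data here is invented.

deidentified_health = [
    {"pin": "560001", "birth_year": 1985, "sex": "F", "diagnosis": "diabetes"},
    {"pin": "560034", "birth_year": 1972, "sex": "M", "diagnosis": "hypertension"},
]

public_voter_roll = [
    {"name": "A. Kumar", "pin": "560034", "birth_year": 1972, "sex": "M"},
    {"name": "B. Rao",   "pin": "560001", "birth_year": 1985, "sex": "F"},
    {"name": "C. Singh", "pin": "560001", "birth_year": 1990, "sex": "M"},
]

def reidentify(health_rows, auxiliary_rows):
    """Link each health record to a name when its quasi-identifiers
    match exactly one person in the auxiliary dataset."""
    matches = {}
    for h in health_rows:
        candidates = [a for a in auxiliary_rows
                      if (a["pin"], a["birth_year"], a["sex"]) ==
                         (h["pin"], h["birth_year"], h["sex"])]
        if len(candidates) == 1:  # a unique match means re-identification
            matches[candidates[0]["name"]] = h["diagnosis"]
    return matches

print(reidentify(deidentified_health, public_voter_roll))
# {'B. Rao': 'diabetes', 'A. Kumar': 'hypertension'}
```

The point of the sketch is that no de-identification standard can anticipate every auxiliary dataset an attacker might join against, which is exactly why researchers probing these weaknesses need protection.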

Last but most importantly, human rights activists are unhappy because the committee, like the GDPR, again did not include sufficiently specific surveillance law fixes. The European Union has historically handled this separately in the ePrivacy Regulation. Maybe that is the approach we must also follow, or maybe this was a missed opportunity. Overall, the B N Srikrishna committee must be commended for producing a good data protection Bill. The task before us is to make it great and to have it enacted by Parliament at the earliest.

Anti-trafficking Bill may lead to censorship

by Swaraj Barooah and Gurshabad Grover — last modified Aug 02, 2018 01:59 PM
There are a few problematic provisions in the proposed legislation—it may severely impact freedom of expression.

The article was published in Livemint on July 24, 2018.


The legislative business of the monsoon session of Parliament kicked off on 18 July with the introduction of the Trafficking of Persons (Prevention, Protection and Rehabilitation) Bill, 2018, in the Lok Sabha. The intention of the Union government is to “make India a leader among South Asian countries to combat trafficking” through the passage of this Bill. Good intentions aside, there are a few problematic provisions in the proposed legislation, which may severely impact freedom of expression.

For instance, Section 36 of the Bill, which aims to prescribe punishment for the promotion or facilitation of trafficking, proposes a minimum three-year sentence for producing, publishing, broadcasting or distributing any type of material that promotes trafficking or exploitation. An attentive reading of the provision, however, reveals that it has been worded loosely enough to risk criminalizing many unrelated activities as well.

The phrase “any propaganda material that promotes trafficking of person or exploitation of a trafficked person in any manner” has wide amplitude, and many unconnected or even well-intentioned actions can be construed to come within its ambit as the Bill does not define what constitutes “promotion”. For example, in moralistic eyes, any sexual content online could be seen as promoting prurient interests, and thus also promoting trafficking.

Rather than imposing a rigorous standard of actual and direct nexus with the act of trafficking or exploitation, a vaguer standard which includes potentially unprovable causality, including by actors who may be completely unaware of such activity, is imposed. This opens the doors to using this provision for censorship and imposes a chilling effect on any literary or artistic work which may engage with sensitive topics, such as trafficking of women.

In the past, governments have been keen to restrict access to online escort services and pornography. In June 2016, the Union government banned 240 escort sites for obscenity even though it cannot do that under Section 69A or Section 79 of the Information Technology Act, or Section 8 of the Immoral Traffic (Prevention) Act. In July 2015, the government asked internet service providers (ISPs) to block 857 pornography websites on grounds of outraging “morality” and “decency”, but later rescinded the order after widespread criticism. If the historical record is any indication, Section 36 of the present Bill will legitimize such acts of censorship.

Section 39 proposes an even weaker standard for criminal acts by proposing that any act of publishing or advertising “which may lead to the trafficking of a person shall be punished” (emphasis added) with imprisonment for 5-10 years. In effect, the provision mandates punishment for vaguely defined actions that may not actually be connected to the trafficking of a person at all. This is in stark contrast to most provisions in criminal law, which require mens rea (intention) along with actus reus (guilty act). The excessive scope of this provision is prone to severe abuse, since without any burden of showing a causal connection, it could be argued that anything “may lead” to the trafficking of a person.

Another by-product of passing the proposed legislation would be a dramatic shift in India’s landscape of intermediary liability laws, i.e., rules which determine the liability of platforms such as Facebook and Twitter, and messaging services like WhatsApp and Signal, for hosting or distributing unlawful content.

Provisions in the Bill that criminalize the “publication” and “distribution” of content ignore that, unlike the physical world, modern electronic communication requires third-party intermediaries to store and distribute content. This wording can implicate neutral communication pipelines, such as ISPs, online platforms and mobile messengers, which currently cannot even know of the presence of such material unless they surveil all their users. Under the proposed legislation, the fact that human traffickers used WhatsApp to communicate about their activities could be used to hold the messaging service criminally liable.

In proposing this, the Bill is in direct conflict with the internationally recognized Manila Principles on Intermediary Liability, and in dissonance with existing principles of Indian law, flowing from the Information Technology Act, 2000, that treat online platforms as “safe harbours” as long as they act as mere conduits. From the perspective of intermediaries, monitoring content is infeasible, and sometimes technologically impossible, as in the case of WhatsApp, which facilitates end-to-end encrypted messaging. And as a 2011 study by the Centre for Internet & Society showed, platforms are happy to over-comply in favour of censorship to escape liability rather than verify actual violations. The proposed changes will invariably lead to a chilling effect on speech on online platforms.

Considering these problematic provisions, it would be a wise move to send the Bill to a select committee in Parliament, wherein the relevant stakeholders can engage with the lawmakers to arrive at a revised Bill, hopefully one which prevents human trafficking without threatening the constitutional right to free speech.

The National Health Stack: An Expensive, Temporary Placebo

by Murali Neelakantan, Swaraj Barooah, Swagam Dasgupta, and Torsha Sarkar — last modified Aug 13, 2018 03:13 PM
The year 2002 saw the introduction of a very ambitious National Program for Information Technology in the United Kingdom with the goal to transform the National Health Service — a pre-existing state-sponsored universal healthcare program. This would include a centralised, digital healthcare record for patients and secure access for 30,000 professionals across 300 hospitals.
The article was published by Bloomberg Quint on August 6, 2018.

However, the next ten years would see the scheme meet with constant criticism about its poor management and immense expenditure; and after a gruelling battle for survival, including spending £20 billion and having top experts on board, the NPfIT finally met its end in 2011.

Fast forward eight years — the Indian government’s public policy think tank, NITI Aayog, is proposing an eerily similar idea for the much less developed, and much more populated Indian healthcare sector. On July 6, the NITI Aayog released a consultation paper to discuss “a digital infrastructure built with a deep understanding of the incentive structures prevalent in the Indian healthcare ecosystem”, called the National Health Stack. The paper identifies four challenges that previous government-run healthcare programs ran into and that the current system hopes to solve. These include:

  • low enrollment of entitled beneficiaries of health insurance,
  • low participation by service providers of health insurance,
  • poor fraud detection,
  • lack of reliable and timely data and analytics.

The current article takes a preliminary look at the goals of the NHS and where it falls short. Subsequent articles will break down the proposed scheme with regard to safety, privacy and data security concerns, the feasibility of data analytics and fraud detection, and finally, the role of private players within the entire structure.

The primary aim of any digital health infrastructure should be to complement an existing, efficient healthcare delivery system.

As seen in the U.K., even a very well-functioning healthcare system doesn’t necessarily mean the digitisation efforts will bear fruit.

The NHS is meant to be designed for and beyond the Ayushman Bharat Yojana — the government’s two-pronged healthcare regime that was introduced on Feb. 1. Unfortunately, though, India’s healthcare regime has long been in need of serious repair, and even if the Ayushman Bharat Yojana works optimally, there are no indications that this will miraculously change by the stated target of 2022. Indeed, experts predict it would take at least ten years to successfully implement universal health coverage. A 2013 report by EY-FICCI stated that we must consider a ten-year time frame as well as allocate 3.5-4.7 percent of GDP to health expenditure to achieve universal health coverage.

However, as per the current statistics, the centre’s allocation for health in the 2017-18 budget is Rs 47,353 crore, which is 1.15 percent of India’s GDP.

Patient

Patients wait for treatment in the corridor of the Acharya Tulsi Regional Cancer Treatment & Research Institute in Bikaner, Rajasthan, India. (Photographer: Prashanth Vishwanathan/Bloomberg)

Along with the state costs, India’s current expenditure in the health sector comes to a meagre 1.4 percent of the total GDP, far short of what the target should be. Yet, the government aims to attain universal health coverage by 2022.

In the first of its two-pronged strategy, the Ayushman Bharat Yojana aims to establish 1.5 lakh ‘Health and Wellness Centres’ across the country by 2022, which would provide primary healthcare services free of cost.

However, the total fund allocated for ‘setting up’ these centres is only Rs 1,200 crore, which comes down to a meagre Rs 80,000 per centre.
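The Rs 80,000 figure follows directly from the numbers cited; a quick back-of-the-envelope check, with the Indian units converted as commented:

```python
# Back-of-the-envelope check of the per-centre allocation cited above.
# 1 crore = 10^7 rupees; 1 lakh = 10^5.
CRORE, LAKH = 10**7, 10**5

total_fund_rupees = 1_200 * CRORE   # Rs 1,200 crore total allocation
num_centres = 1.5 * LAKH            # 1.5 lakh health and wellness centres

per_centre = total_fund_rupees / num_centres
print(per_centre)  # 80000.0 -> Rs 80,000 per centre
```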

It is unclear whether the government plans to establish new sub-centres, or improve the existing ones. Either way, a pittance of Rs 80,000 is grossly insufficient. As per reports, among the 1,56,231 current health centres, only 17,204 (11 percent) had met Indian Public Health Standards as of March 31, 2017. Shockingly, basic amenities like water and electricity are scarce, if not absent, in a substantial number of these centres.

At least 6,000 centres do not have a female health worker, and at least 1,00,000 centres do not have a male health worker.

Woma

A woman holds a child in the post-delivery ward of the district hospital in Jind, Haryana, India. (Photographer: Prashanth Vishwanathan/Bloomberg)

Even taking the generous assumption that the existing 17,204 centres are in top condition, the future of the rest of these health and wellness centres continues to be bleak.

In truth, both limbs of the Ayushman Bharat strategy remain oblivious to the reality of the situation. The goals do not take into account the existing problems within access to healthcare, nor the relevant economic and social indicators that depict a contrasting reality.

Therefore, the fundamental question remains: if there is no established, well-functioning healthcare delivery system for it to support, what will the NHS actually help?

Bitter Pill

NHS: What Purpose Does It Serve?

The ambitious scope of the National Health Stack consultation paper aside, the central problem plaguing the Indian healthcare system, i.e., delivery of and access to healthcare, remains unaddressed. The first two problems that the NHS aims to solve focus solely on increasing health insurance coverage. However, very problematically, the document does not explicitly mention how a digital infrastructure would lead to rising enrollment of both beneficiaries and service providers of insurance.

This goal of increasing enrollment without a functioning healthcare system could result in two highly problematic scenarios.

Either health and wellness centres will effectively act as enrollment agencies rather than providers of healthcare, or the government would fall back on its ‘Aadhaar approach’ and employ external enrollment agents.

The former approach runs a very real risk of the health and wellness centres losing focus on their primary purpose even while statistics show them as functioning centres – thus negatively impacting even the working centres. The latter approach is at a higher risk of running into problems akin to the case of Aadhaar enrollment, such as potential data leakages, identity thefts and a market for fake IDs. Even if we somehow overlook this and assume that the NHS would help increase insurance coverage without additional problems, the larger question still stands: should health insurance even be the primary goal of the government, over and above providing access to healthcare? And what effect will this have on the actual delivery of healthcare services to the common citizen?

Lone Patient

A lone patient sleeps in the post operation recovery ward of the district hospital in Jind, Haryana, India. (Photographer: Prashanth Vishwanathan/Bloomberg)

Should Insurance Be A Primary Objective Of The Indian Government?

Simply put, the answer is no, because greater insurance coverage does not necessarily mean better access to healthcare. In recent years, health insurance in India has been rising rapidly due to government-sponsored schemes. In the fiscal year 2016-17, the health insurance market was estimated to be worth Rs 30,392 crore. Even with such large investments in insurance premiums, the insurance market accounts for less than 5 percent of the total health expenditure.

Furthermore, previous experiences with government-sponsored health insurance schemes have proven that there is little merit to such an expensive task.

For instance, the government’s earlier health insurance scheme, Rashtriya Swasthya Bima Yojana, was predicted to be unable to completely provide ‘accessible, affordable, accountable and good quality health care’ if it focussed only on “increasing financial means and freedom of choice in a top-down manner”.

These traditional insurance-based models are characterised by problems of information asymmetry such as ‘moral hazard’ — patients and healthcare providers have no incentive to control their costs and tend to overuse, resulting in an unsustainable insurance system and cost inflation. Any attempt to regulate providers is met with harsh, cost-cutting steps which end up harming patients.

On another note, some diseases which are responsible for the most number of deaths in the country — including ischaemic heart diseases, lower respiratory tract infections, chronic obstructive pulmonary disease, tuberculosis and diarrhoeal diseases — are usually chronic conditions that need outpatient consultation, resulting in out-of-pocket expenses.

Cancer Hospital

Patients wait at the Head and Neck Cancer Out Patient department of Tata Memorial Hospital in Mumbai, India. (Photographer: Prashanth Vishwanathan/Bloomberg News)

Even though the government has added non-communicable diseases to the ambit of the health and wellness centres, reports indicate that the most impoverished still have to cover 80 percent of their expenses out of pocket. This issue will in all probability persist, since the likelihood of these centres succeeding is itself questionable.

It is clear that, in the current scheme of things, this traditional insurance model of healthcare cannot benefit those it is meant for.

If this is the case, why has the NHS built its main objectives around insurance coverage rather than access to healthcare? It is imperative that we question the legitimacy of these goals, especially if they indicate the government's intentions to push health insurance via the NHS above its responsibility of delivering healthcare. The government's thrust for a digital infrastructure shows tremendous foresight, but at what cost? Even the clear goal of healthcare data portability has very little benefit when one understands that this becomes an important goal only when one has given up on ensuring widespread accessible healthcare. Once the focus shifts from using technology needlessly to developing an efficient and universally accessible healthcare delivery system, the need for data portability dramatically reduces. The temptation of digitisation and insurance coverage cannot and should not blind us to the main goal — access to healthcare. The one lesson that we must learn from the case of the U.K. is that even with a well-functioning healthcare delivery system, a digital infrastructure must be introduced very thoughtfully and carefully. In our eagerness to leapfrog with technology, we must not mistake a placebo for a panacea.


Murali Neelakantan is an expert in healthcare laws. Swaraj Barooah is Policy Director at The Centre for Internet and Society. Swagam Dasgupta and Torsha Sarkar are interns at The Centre for Internet and Society.

Future of Work: Report of the ‘Workshop on the IT/IT-eS Sector and the Future of Work in India’

by Ambika Tandon last modified Mar 05, 2020 07:03 PM
This report provides an overview of the proceedings and outcomes of the Workshop on the IT/IT-eS Sector and the Future of Work in India (hereinafter referred to as the “Workshop”), organised at Omidyar Networks’ office in Bangalore, on June 29, 2018.

 

This report was authored by Torsha Sarkar, Ambika Tandon and Aayush Rath. It was edited by Elonnai Hickok. Akash Sriram, Divya Kushwaha and Torsha Sarkar provided transcription and research assistance. A PDF of the report can be accessed here.


Introduction

The Workshop was attended by a diverse group of stakeholders, including industry representatives, academicians and researchers, and civil society. The discussions covered various components of the sector's transition to Industry 4.0, including the impact of Industry 4.0-related technological innovations on work broadly in India, and specifically in the IT/IT-eS sector (hereinafter referred to as the “Sector”). The discussion focused on the reciprocal impact on socio-political dimensions, the structure of employment, and forms of work within workspaces.

The Workshop was divided into three sessions. The first session was themed around the adoption and impact of Industry 4.0 technologies vis-a-vis the organisation of work. Within this the key questions were: the nature of the technologies being adopted, the causes that are driving the uptake of these technologies, and the ‘tasks’ constituting jobs in the Sector.

The second session focussed on the role of skilling and re-skilling measures as mitigators of the projected displacement of jobs. The issues dealt with included shifts in company, educational, and social competency profiles as a result of Industry 4.0; transformations in the predominant pedagogy of education, vocational, and skill development programmes in India; and their success in creating employable workers and filling skill gaps in the industry.

The third session looked at social welfare considerations and public policy interventions that may be necessitated in the wake of potential technological unemployment owing to Industry 4.0. The session was designed with a specific focus on the axes of gender and class, addressing questions of precarity, wages, and job security in the future of work for marginalized groups in the workforce.

Preliminary Comments

The Workshop opened with a brief introduction on the research the Centre for Internet and Society (CIS) is undertaking on the Future of Work (hereinafter referred to as “FoW”) vis-a-vis Industry 4.0.  The conception of Industry 4.0 that CIS is looking at is the technical integration of cyber-physical systems in production and logistics on one hand, and the use of internet of things (IoT) and the connection between everyday objects and services in the industrial processes on the other. The scope of the project, including the impact of automation on the organisation of employment and shifts in the nature and forms of work, including through the gig economy, and microwork, was detailed. The historical lens taken by the project, and the specific focus on questions of inequality across gender, class, language, and skill were highlighted.

It was pointed out that CIS’ research, in this regard, comes from the necessity of localising and re-examining the global narratives around Industry 4.0. While new technologies will be developed and implemented globally, the impact of these technologies in the Indian context would be mediated through local, political and socio-economic structures. For instance, the Third Industrial Revolution, largely associated with the massification of computing, telecommunications and electronics, is still unfolding in India, while attempts are already being made to adapt to Industry 4.0. These issues provided a starting point to the discussion on the impact of Industry 4.0 in India.

Qualifying Technological Change

Contextualising the narrative with historical perspectives

The panel for the first session commenced with a discussion around a historical perspective on job loss brought about by mechanisation. The distinction between Industry 3.0 and 4.0, it was suggested, is largely arbitrary, inasmuch as technological innovation has been a continuous process that has been impacting lives and the way work is perceived. It was argued that the only factor differentiating Industry 4.0 from previous industrial revolutions is ‘intelligent’ technology that is automating routine cognitive tasks. The computer, the programmable logic controller (PLC) and data (called the ‘new oil’) were also part of Industry 3.0, but intelligent technologies are able to provide greater analytical power under Industry 4.0.

The discussion also went over the distinction between the terms ‘job’, ‘task’ and ‘work’. It was argued that the term ‘job’ might be treated as a subset of the term ‘work’, with the latter moving beyond livelihood to encompass questions of dignity and a sense of fulfilment in the worker. With relation to this distinction, it was mentioned that the jobs at the risk of automation would be those that fulfill only the basic level in Maslow’s hierarchy - implying largely routine manual tasks. Additionally, it was explained that although these jobs will continue to use labour through Industry 4.0, it is only the nature of technological enablement that would change to automate more dangerous and hazardous tasks.

Technology as a long-term enabler of job creation

It was argued that technology has historically been associated with job creation. Historical instances cited included that of popular anxiety due to anticipated job loss through the uptake of the spinning machine and the steam engine, whereas the actual reduction in the cost of production led to greater job creation, increased mobility and improved quality of life in the long-term. Such instances were used to further argue that technology has historically not resulted in long-term job reductions.

The platform economy was posited as a model for creating jobs, through the efficient matching of supply and demand through digital platforms. It was indicated that rural to urban migration is aided by such platforms, as labourers voluntarily enrol in skilling initiatives given the certainty of employment through platformization. It was further argued that historically, Indian workers have been educated rather than skilled, and that platformization and automation, coupled with the elasticity of human needs, will provide greater incentives for technically skilled workers by creating desirable jobs.

Factors leading to differential adoption of automation

In relation to the adoption of Industry 4.0 technologies, it was argued that the mere existence of a technology does not guarantee its scalability at an industrial level. Scalability would be possible only when the cost of labour is high relative to the costs entailed in technological adoption. This was supported by data from a McKinsey Report[1] which indicated that countries like the US and Germany would be impacted in the short term by automation, because their cost of labour is higher. Conversely, since the cost of labour in India is relatively cheap, the reality of technological displacement is still far away and the impact would not be immediate.

Similarly, a distinction was made to account for the differential impact of automation across sectors. For instance, it was indicated that the IT/IT-eS sector in India is based on exporting services and business outsourcing. Accordingly, if Germany automates its automobile industry, that would impact India less than if it automates its IT/IT-eS services, as the Indian sector is reliant on exporting its services to developed economies. The IT/IT-eS sector was further broken down into sub-sectors with the intention of highlighting the differential impact of automation and FoW in each of these sub-sectors. It was agreed that the BPO sub-sector would be more adversely impacted than core IT services, given that it is constituted of routine tasks at a higher risk of automation.

Disaggregating India’s Skilling Approach

The discussion around skilling measures was contextualised for India by alluding to data collected from the National Sample Survey Organisation (NSSO) surveys. The data revealed that around 36% of India’s total population is under the age of seventeen and approximately 13% is between 18 and 24. Additional statistics suggested that only around a quarter of the workforce aged 18-24 years had achieved secondary and higher secondary education, and close to 13% of the workforce was illiterate. While these numbers included both male and female workers, it was pointed out that this is an incomplete dataset as it excludes transgender workers. It was suggested that it is this segment of the Indian demographic that should be targeted for significant skilling pushes, which could be catalysed through specific vocational training centres. It was also suggested that there was a need to restructure the role of the National Skill Development Corporation (NSDC) in the Indian skilling framework.

A comprehensive picture was painted by conceptualising the skilling framework in India as 5 distinct pillars. This conceptualisation was used to debunk the narrative of the NSDC being the sole entity pushing for skill development in the country. The NSDC’s function in the skilling framework was posited as providing funding to skilling initiatives with programmes lasting for a period of 3 months. These 3-month programmes were critiqued for being insufficient for effective training, especially given the low skill levels of workers going into the programmes. The NSDC’s placement rate of 12%, as per its own records, was used to support this argument. Further suggestions on making the NSDC more effective were made in a later discussion[2].

Related to this, the second pillar of vocational skilling was said to be the Industrial Training Institutes (ITIs). The third pillar was said to be the school system, which was critiqued for not offering vocational education at secondary and senior secondary levels. The fourth pillar comprised the 16 ministries which govern labour laws in India - none of whose courses were National Skills Qualifications Framework (NSQF) compliant.

The fifth pillar was construed as the industry itself and the enterprise-based training it conducted. However, it was stated that the share of registered Indian companies conducting enterprise-based training was dismal. In 2009, the share of enterprise-based training was 16%, which rose to 36% in 2014. Further, most of these were registered large firms as opposed to small and medium sized enterprises. Unregistered companies, it was suggested, were simply doing informal apprenticeships.

Joint public and private skilling initiatives

In addition to government-sponsored skilling initiatives, attention was directed to skill development partnerships that took the shape of public-private initiatives. As an example, it was said that a big player in the ride-hailing economy had worked with the NSDC and other skilling entities to ensure that soft skills were imparted to its driver partners before they were on-boarded onto the platform.

It was also brought forth that innovative forms of skilling and training were gaining traction in the education sector as well as in the private sector. This was instantiated through examples of the uptake of platforms which apply artificial intelligence, and within that machine learning based techniques, to generate and disseminate easier-to-consume video-based learning.

Driving Job Growth: Solving for Structural Eccentricities of the Indian Labour Market

Catalysing manufacturing-led job growth

The session began with a discussion of the specific dynamics of Indian labour markets in the context of the Indian economy. It was pointed out that the productivity level of the services sector is not as high as that of manufacturing, which is problematic for job creation in a developing economy such as India that is witnessing capital-intensive growth in the manufacturing sector. The underlying argument was that the jobs of the future in the Indian context will have to be created in the manufacturing sector.

Several macroeconomic policy interventions were suggested to reverse the trend of capital-intensive growth in order to make manufacturing the frontier for enhanced job creation. The need for a trade policy in consonance with the industrial policy was stated as imperative. This was substantiated by highlighting that the absence of an inverted duty structure governing the automobile sector has led India to be amongst the biggest manufacturers of automobiles. An inverted duty structure entails finished products having a lower import tariff and a lower customs duty than imported raw materials or intermediates. However, it was highlighted that a dissonant industrial policy failed to acknowledge that at least 50% of India’s manufacturing comes from Micro, Small & Medium Enterprises (MSMEs) and provided no assistance to MSMEs in obtaining credit, market access or technology upgradation. On the other hand, it was asserted that large corporates get 77% of the total bank credit.
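The effect of an inverted duty structure can be illustrated with a toy calculation (all prices and duty rates below are invented for illustration): when imported inputs attract a higher duty than the finished product, locally assembled goods can end up costlier than the finished import, discouraging domestic manufacturing.

```python
# Toy sketch of an inverted duty structure (all figures hypothetical).
# Integer rupees; duty expressed in percent.

def landed_cost(base_price, duty_pct):
    """Price of an import after customs duty."""
    return base_price * (100 + duty_pct) // 100

# Inverted structure: 20% duty on raw materials, only 10% on finished goods.
raw_materials = landed_cost(70, 20)       # imported inputs for local assembly
local_product = raw_materials + 30        # plus domestic value addition
imported_product = landed_cost(100, 10)   # finished import

print(local_product, imported_product)  # 114 110 -> the import undercuts local manufacture
```

Under a non-inverted structure (finished goods taxed at least as heavily as inputs, as in the automobile sector), the comparison flips in favour of domestic assembly.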

Another challenge that was highlighted concerned the Government of India’s severely underfunded manufacturing cluster development programmes under the aegis of the Ministry of Textiles and the Ministry of MSMEs. For sectors that contribute majorly towards India’s manufacturing output, it was asserted that these programmes were astonishingly bereft of any governing policy and suffer from several foundational issues. Moreover, it was observed that these clusters are located around the country in Tier 2, 3 and 4 cities, where the quality of infrastructure is largely lacking. The Atal Mission for Rejuvenation and Urban Transformation (AMRUT) programme devised for the development of these cities is also myopic, as the target cities are not the ones where these manufacturing clusters are located. The rationale behind the suggested approach was that building infrastructure at geographical sites of job creation would lead to an increase in productivity, which would in turn attract greater investment. This would have to necessarily be accompanied by hastening the setting up of industrial corridors - the lackadaisical approach to which was stated as a key reason India is being outpaced by other developing economies in the South East Asian region.

An additional policy intervention suggested was the setting up of skilling centres by the National Skill Development Corporation (NSDC) in proximity to these manufacturing clusters, where job creation is being evidenced, rather than in larger metropolitan cities.

Carving out space for a vocational training paradigm

It was asserted that the focus of skilling needs to be on the manufacturing rather than the services sector, given the centrality of manufacturing to a developing economy undergoing an atypical structural transformation[3], as outlined above. Further compounding the problem of jobless growth, it was stated that, according to the NSS 62nd Round on Employment and Unemployment, 50% of the manufacturing workforce have eight or fewer years of education, and only 5% of the workforce, including those with technical education, are vocationally trained.

A gulf between primary and secondary education on the one hand and vocational training on the other was pointed to as one of the predominant causes behind the much-touted 'skills gap' that the Indian workforce is said to be battling. Buttressing the argument with data, it was said that by 2007, net enrolment in India for primary education had already reached 97%, and that between 2010 and 2015 the secondary education enrolment rate went from 58% to 85%.[4] It was hypothesised that the latter may have since risen to around 90%. The higher education enrolment rate also rose commensurately, from 11% in 2006 to 26-27% in 2017.[5] It was argued that this would have been impossible to achieve without gender parity in higher education. This parity in education was contrasted with the systematic decline in women's labour force participation that India has witnessed over the last 30 years.

Consequently, the 'massification' of higher education in India over the past 10 years was critiqued as ineffectual in comparison to the Chinese model, which focused on engaging students in vocational training in a way the Indian education system had failed to do. The role of the gig economy in creating job opportunities despite this gap between education and vocational training was regarded as important, especially given the lack of growth in traditional job markets.

Accounting for the Margins

In relation to the profiles of workers within sectors, it was indicated that factors such as gender, class, skill, income, and race must be accounted for to determine the 'winners' and 'losers' of automation. Several points were discussed in relation to this disaggregation.

Technology as an equaliser? Gender and skill-biased technological change

First, the idea of technology and development as objective and neutral forces was questioned, with the assertion that human decision-makers, who more often than not tend to be male, allow inherent biases to creep into the outputs, processes, and objectives of automation. Data from the Belong Survey of IT services[6] indicated that women constitute 26% of the core engineering workforce, but 33% of the software testing workforce. Coupled with the expectation that software testing would be automated first, it was argued that jobs held by female workers are at higher immediate risk of automation than those held by male workers.

The 'Leaky Pipe Problem' in STEM industries, i.e. the observation that female workers tend to be concentrated in entry-level jobs while senior management remains largely male-dominated, was also brought to the fore. This bolstered the argument that female workers in the Sector will lose out in the shorter term, when automation adversely impacts lower-level jobs.

A survey conducted by Aspiring Minds[7], which tracked the employability of engineering graduates, was used to further flesh out skill-biased technological change. As per the survey, 40% of graduating students are employable in the BPO sector, while only 3% are employable in software production. With the BPO sector likely to be impacted more adversely than core IT services, it was emphasised that policy considerations should be very specific in their ambit.

Social security and the platform economy

The discussion around the platform economy commenced with a focus on how it had created economic opportunities in the formal sector, by matching demand and supply on the one hand and by reducing inefficiency in the system through technology on the other. It was pointed out that these newer forms of work were creating millions of entrepreneurship opportunities that did not previously exist. These opportunities, it was suggested, were inherently flexible and were contributing to the broader push to expand the numbers of those within the ambit of India's formal economy.

This discussion was countered by suggesting that the shift of the workforce from the informal to the formal sector, which companies in the gig economy claimed to be contributing to, has instead restricted the kinds of lives gig workers have historically led. As an instance, it was pointed out that a farmer who had been working with a completely different set of skills was now being asked to acquire a new set suited to a very specific role and not transposable across occupations; in other words, it would not be meaningful skilling. It was also pointed out that what distinguishes formal work from informal work is whether the worker has a social security net: mere access to banking services or the filing of tax returns is not sufficient to characterise a workforce as formal.

Relatedly, the possibility of social security for the unorganised sector and microworkers was discussed. One possibility raised was state-subsidised maternity, disability, and death cover, along with pensions, for workers below the poverty line. The fiscal burden on the government for such a scheme was anticipated not to exceed 0.4% of GDP. It was suggested that this would move forward the conversation on minimum wages and fair work, which would be of great importance in broader conversations around working conditions in the platform economy.

The interplay of gender and platformisation

It was highlighted that trends in automation are going to change the occupational structure of the digital economy, the effect of which will be felt especially in cognitive routine jobs given their increased propensity for platformisation. A World Economic Forum report[8] was cited, which indicated the disproportionate risk of unemployment faced by women given their concentration in cognitive routine jobs.

The discussion then took a deeper look at the platformisation of work, with a specific focus on freelance microwork and its impact on the female labour force, and drew out certain positives arising from such newer forms of work. It was suggested that industries are more likely to employ female workers in microwork owing to lower rates of attrition and flexible labour. It was reiterated that freelancing in India extends beyond data entry and other routine jobs to include complex work, thereby also catering to skilled workers desirous of flexibility. Platforms designing systems to meet the demand for flexible work were also discussed, such as those geared towards female workers that undertake reskilling measures and counselling for women returning from maternity leave or sabbaticals. Additionally, the difficulty of defining freelancing under existing frameworks of employment, compounded by the lack of legal structures for such work, was outlined.

Systemic challenges within the Indian labour law framework

Static design of legal processes

Labour law was, naturally, acknowledged as a key determinant in the conversation around both the uptake and impact of automative technologies encapsulated within Industry 4.0.

The archaic nature of India's labour law framework was highlighted as a major impediment to ensuring both worker rights and the ease of conducting commerce. It was pointed out that organised labour continues to be governed by the Industrial Disputes Act, which came into effect in 1947 and has undergone minimal amendment since. This was critiqued on the basis that the framework of the law is embedded in its historical context: while the industrial landscape of the country has transformed drastically since the Act's implementation, the legal framework has not evolved. Similarly, the Karnataka Shops and Establishments Act, 1961, which regulates the Sector today, was enacted long before the Sector even opened up in India in the 1990s.

Additionally, it was pointed out that the fragmented extant framework of labour laws in India was being consolidated into four labour codes without any wholesale push to modernise the laws being consolidated. Consequently, it was argued that the government has to drive change through policy alone while the legal framework remains static. Barriers to the implementation of adequate policies were also discussed, such as the political impact of labour policies and the lack of state initiative in dealing with the impact of the future of work, apart from the historic inability of the law to keep up with the state of labour and the economy.

Labour law arbitrage

One of the reasons behind the increasing contractualisation of labour in India was said to be over-regulation. There was consensus that the labour law regime is not conducive to industry in India, leading to greater opportunistic behaviour from industry participants. It was acknowledged that the political clout that many labour contractors enjoy, along with the greater flexibility they afford primary employers to hire and fire at will, has led to the widespread use of contract labour entities.

It was further stated that industry has adopted several other tools of arbitrage so that labour law is no longer a key impediment to the ease of scaling a business. Empirical evidence of labour law arbitrage was cited to drive home the point: according to national surveys, 80-85% of enterprises employ fewer than 100 workers, as the law mandates stricter compliance requirements for enterprises employing 100 or more workers[9]. This was acknowledged as a serious hurdle to scaling businesses.

Instances of other apparently well-intentioned legislation having counterproductive consequences from a public policy lens were also highlighted. Within labour law, the example cited was the recently enacted Maternity Benefit (Amendment) Act, 2017. By enhancing maternity benefits without accounting for other provisioning, such as the inclusion of a paternity benefit, it was anticipated that companies may shy away from hiring women altogether.

Policy Paralysis

The discussion progressed to a high-level examination of the efficacy of law vis-a-vis state policy as a means of creating a system of checks and balances in the context of Industry 4.0. It was highlighted that law, by design, will be outpaced by technological change. The common law system operating in India is premised on a time-tested emphasis on post-facto regulation; in other words, it is reactionary. While policymaking in India suffers from a similar plague of playing catch-up, this is in large part due to a bureaucratic structure premised on generalism, and a pressing need for domain expertise in policymaking was emphasised. Having said that, it was stated that it is the institutional design of policymaking institutions that needs rectification. What was acknowledged was the success, albeit scant, that individual states have had in policymaking catering to specific yet diverse domains. A greater push towards clear, progressive, evidence-based policymaking was stressed, with the anticipation that it would lead to self-regulation by industry itself, be it in terms of the future of employment or the economic direction the industry will take.

Concluding Remarks

The discussions during the course of the Workshop situated the discourse around Industry 4.0 within the contours of the Indian labour realities and the IT sector within that.

As a useful starting point, various broader perspectives around the impact of technological change on the quantum of jobs were brought forth. While the industry perspective was that of technology as an enabler of job creation in the long-run, it was sufficiently tempered by concerns around those impacted adversely in the short to medium-term time frames. These concerns coalesced towards understanding the potential impact of Industry 4.0 on the nature of work, as well as mitigation tools to ease the impact of technological disruption on labour.

Important facets of technological adoption within the Sector were highlighted, such as the potential for scalability as well as the distinct eccentricities of the various sub-sectors the IT sector subsumes. The differential impact across sub-sectors was pegged to the differential composition of automatable (routine, rule-based) tasks within each. However, questions regarding the exact contours of task composition were left unanswered, signalling a potential area for further research. On the other hand, the primary challenge to technological adoption on the labour-supply side was skilling, or the lack thereof. This was contextualised within the larger scheme of structural issues plaguing the skilling machinery operating in the country, which leads to inadequate dispensation of technical and vocational education and training (TVET). In terms of additional structural issues that would potentially shape how Industry 4.0 plays out in the Indian context, attention was directed to the overdue reform of the labour law framework, which has already struggled to incorporate newer forms of working engagement, such as platform and gig work, that are being evidenced as part of Industry 4.0.

An underlying theme across sessions was the need to prevent the further marginalisation, as a consequence of technological disruption, of the already marginalised. Evidence from government datasets, as well as literature around concepts such as skill-biased technological change, the leaky pipe problem, and the U-shaped curve of female labour force participation, was cited to explicate these issues. The merits of different policy measures to address these concerns, such as social security, living wages, and maternity benefits, were also discussed.

While the Workshop touched upon several facets of the discourse around Industry 4.0 in the Sector, it also threw up areas that require further inquiry. Questions around where in the value chain use-cases for Industry 4.0 technologies are emerging need a more comprehensive understanding. Moreover, the impact of Sector Skill Councils (SSCs), a central aspect of the skilling ecosystem in India, was not touched upon. An additional path of inquiry pertained to evolving constructive reforms to legal and economic policy frameworks as top-down interventions within the Sector, which could be anticipated to play a significant role in the uptake and impact of Industry 4.0 technologies.


[1] McKinsey Global Institute, A future that works: Automation, employment, and productivity, https://www.mckinsey.com/~/media/mckinsey/featured%20insights/Digital%20Disruption/Harnessing%20automation%20for%20a%20future%20that%20works/MGI-A-future-that-works-Executive-summary.ashx, (accessed 10 August 2018).

[2] See discussion under ‘Catalysing manufacturing-led job growth‘.

[3] R. Verma, Structural Transformation and Jobless Growth in the Indian Economy, The Oxford Handbook of the Indian Economy, 2012.

[4] S. Mehrotra, ‘The Indian Labour Market: A Fallacy, Two Looming Crises and a Tragedy’, CSE Working Paper, April 2018.

[5] Ibid.

[6] Mohita Nagpal, ‘Women in tech: There are 3 times more male engineers to females’, belong.co, http://blog.belong.co/gender-diversity-indian-tech-companies, (accessed 10 August 2018).

[7] Aspiring Minds, National Programming Skills Report - Engineers 2017, https://www.aspiringminds.com/sites/default/files/National%20Programming%20Skills%20Report%20-%20Engineers%202017%20-%20Report%20Brief.pdf, (accessed 11 August 2018).

[8] World Economic Forum, The Future of Jobs Employment, Skills and Workforce Strategy for the Fourth Industrial Revolution: Global Challenge Insight Report, January 2016.

[9] Ministry of Statistics and Programme Implementation, All India Report of Sixth Economic Census, Government of India, 2014.

India's Contribution to Internet Governance Debates

by Sunil Abraham, Mukta Batra, Geetha Hariharan, Swaraj Barooah and Akriti Bopanna — last modified Aug 16, 2018 03:38 PM
"India's Contribution to Internet Governance Debates", an article by Sunil Abraham, Mukta Batra, Geetha Hariharan, Swaraj Barooah and Akriti Bopanna, was recently published in the NLUD Student Law Journal, an annual peer-reviewed journal published by the National Law University, Delhi.

Abstract

India is the leader that championed 'access to knowledge' and 'access to medicine'. However, India holds seemingly conflicting views on the future of the Internet and how it will be governed. India's stance is evolving and is distinct from that of authoritarian states, which do not care for equal-footing multi-stakeholderism.


Introduction

Despite John Perry Barlow's defiant and idealistic Declaration of Independence of Cyberspace[1] in 1996, debates about governing the Internet have been alive since the late 1990s. The tug-of-war over its governance continues to bubble among states, businesses, techies, civil society and users. These stakeholders have wondered who should govern the Internet or parts of it: Should it be the Internet Corporation for Assigned Names and Numbers (ICANN)? The International Telecommunication Union (ITU)? The offspring of the World Summit on the Information Society (WSIS): the Internet Governance Forum (IGF) or Enhanced Cooperation (EC) under the UN? Underlying this debate has been the role and power of each stakeholder at the decision-making table. States in both the global North and South have taken various positions on this issue.

The question is whether all stakeholders ought to have an equal say in governing the unique structure of the Internet, or whether states have sovereign public policy authority. India has, in the past, subscribed to the latter view. For instance, at WSIS in 2003, through Arun Shourie, then India's Minister for Information Technology, India supported the move 'requesting the Secretary General to set up a Working Group to think through issues concerning Internet Governance,' offering him 'considerable experience in this regard... [and] contribute in whatever way the Secretary General deems appropriate'. The United States (US), United Kingdom (UK) and New Zealand have expressed their support for 'equal footing multi-stakeholderism', while Australia subscribes to the status quo.

India's position has been much followed, discussed and criticised. In this article, we trace and summarise India's participation in the IGF, the UN General Assembly (UNGA), the ITU and the NETmundial conference (April 2014) as a representative sample of Internet governance fora. In these fora, India has been represented by one of three arms of its government: the Department of Electronics and Information Technology (DeitY), the Department of Telecommunications (DoT) and the Ministry of External Affairs (MEA). DeitY was converted into a full-fledged ministry in 2016, known as the Ministry of Electronics and Information Technology (MeitY). DeitY and DoT were part of the Ministry of Communications and Information Technology (MCIT) until 2016, when it was bifurcated into the Ministry of Communications and MeitY.

Though India has been acknowledged globally for championing 'access to knowledge' and 'access to medicine' at the World Intellectual Property Organization (WIPO) and the World Trade Organization (WTO), global civil society and other stakeholders have criticised India's behaviour in Internet governance for reasons such as a lack of continuity and coherence, and for holding policy positions that overlap with those of authoritarian states.

We argue that even though confusion about the Indian position arises from a multiplicity of views held within the Indian government, India's position, in totality, is distinct from those of authoritarian states. Since criticism of the Indian government became more strident in 2011, after India introduced a proposal at the UNGA for a UN Committee on Internet-related Policies (CIRP) comprising states as members, we will begin to trace India's position chronologically from that point onwards.


  • Download the paper published in NLUD Student Law Journal here
  • For a timeline of the events described in the article click here
  • Read the paper published by NLUD Student Law Journal on their website

National Health Stack: Data For Data’s Sake, A Manmade Health Hazard

by Murali Neelakantan, Swaraj Barooah, Swagam Dasgupta and Torsha Sarkar — last modified Sep 16, 2018 05:01 AM
On Oct. 5, 2017, an HIV-positive woman was denied admission to Hyderabad's Osmania General Hospital even though she was entitled to free treatment under India's National AIDS Control Organisation programme. Around the same time, a 24-year-old pregnant woman at the Tikamgarh district hospital in Madhya Pradesh was denied treatment by hospital doctors once she tested positive for HIV. The patient reportedly delivered twins outside the maternity ward after she was turned away, but the newborn girls died soon after.

The op-ed was published in Bloomberg Quint on August 14, 2018.


Apart from facing the severity of their condition, patients afflicted with diseases such as HIV, tuberculosis, and mental illnesses are often subject to social stigma, sometimes even leading to the denial of medical treatment. Given this grim reality, would patients want their full medical history in a database?

The ‘National Health Stack’ as described by the NITI Aayog in its consultation paper, is an ambitious attempt to build a digital infrastructure with a “deep understanding of the incentive structures prevalent in the Indian healthcare ecosystem”. If the government is to create a database of individuals’ health records, then it should appreciate the differential impact that it could have on the patients.

The collection of health data, without sensitisation and accountability, has the potential to deny healthcare to the vulnerable.

We have innumerable instances of denial of services due to Aadhaar and there is a real risk that another database will lead to more denial of access to the most vulnerable.

Earlier, we had outlined some key aspects of the NHS, the ‘world’s largest’ government-funded national healthcare scheme. Here we discuss some of the core technical issues surrounding the question of data collection, updating, quality, and utilisation.

Resting On A Flimsy Foundation: The Unique Health ID

The National Health Stack envisages the creation of a unique ID for registered beneficiaries in the system — a ‘Digital Health ID’. Upon the submission of a ‘national identifier’ and completion of the Know Your Customer process, the patient would be registered in the system, and a unique health ID generated.

This seemingly straightforward process rests on a very flimsy foundation. The base entry in the beneficiary registry would be linked to a 'strong foundational ID'. Extreme care needs to be taken to ensure that this is not limited to an Aadhaar number. Currently, the unavailability of Aadhaar cannot be a ground for denying treatment only on a patient's first visit; to avail of treatment thereafter, the patient must provide an Aadhaar number or an Aadhaar enrolment slip. This suggests that the national healthcare infrastructure will be geared towards increasing Aadhaar enrolment, with the unstated implication that healthcare is a benefit or subsidy, a largesse of government, and not, as the courts have confirmed, a fundamental right.

Not only is this project using government-funded infrastructure to deny its citizens the fundamental right to healthcare, it is using the desperate need of the vulnerable for healthcare to push the ‘Aadhaar’ agenda.

Any pretence that Aadhaar is voluntary is slowly fading with the government mandating it at every step of our lives.

Aadhaar Seva kendra. (Source: Aadhaar Official Account/Facebook) 


Is The Health ID An Effective And Unique Identifier?

Even if we choose to look past the fact that the validity of Aadhaar is still pending the test of legality before the apex court, a foundational ID would require that the data contained within it is unique, accurate, incorruptible, and cannot be misused. These principles, unfortunately, have been compromised by the UIDAI in the Aadhaar project, with its lack of uniqueness of identity (i.e., fake and duplicate IDs), failures to authenticate identity, numerous alleged data leaks ('alleged' because the UIDAI maintains that there have been none), lack of connectivity for authenticating identity, and numerous instances of inaccurate information that cannot be corrected.

Linking something as crucial and basic as healthcare data with such a database is a potential disaster.

There is a real risk that incorrect linking could cause deaths or inappropriate medical care.

The High Risk Of Poor Quality Data

The NITI Aayog paper envisages several expansive databases that are capable of being updated by different entities. It includes enrollment and updating processes but seems to assume that all these extra steps will be taken by all the relevant stakeholders and does not explain the motivation for stakeholders to do so.

In a country where government doctors, hospitals, wellness centres, and the like are overburdened and understaffed, this reliance is simply not credible. For instance, all attributes within the registries are to be digitally signed by an authorised updater, there must be an audit trail for all changes made to the registries, and surveyors will be tasked with visiting providers in person to validate the data. Identifying such precautions as measures to assure accurate data is a welcome step towards building a national health database, but implementing them seems an impossible task.

Who are these actors and what will incentivise them to ensure the accuracy and integrity of data?

In other words, what incentive and accountability structures will ensure that data entry and updating are accurate, and not approached with the 'jugaad', let's-just-get-this-done attitude that permeates much of the country? How will patients have access to the database to be able to check its accuracy? Is it possible for a patient (who will presumably be ill) to gain easy access to an updater to change their data? If so, how? It is worth noting that the patient's 'right' to check their data assumes access to a computer connected to the internet, as well as a good level of digital literacy, neither of which holds for a significant section of India's population. Even data portability loses its potential benefits if the quality of data on these registries is not reliable. In that case, healthcare providers will need to verify their patients' health history using physical records instead, rendering the stack redundant.

Who will be liable to the patient for misdiagnosis based on the database?

A sonographic image is displayed on a monitor as a patient undergoes an ultrasound scan in Bikaner, Rajasthan, India. (Photographer: Prashanth Vishwanathan/Bloomberg)


Leaving the question of accountability vague opens updaters to the possibility of facing dangerous and unnecessarily punitive measures in the future. The NITI Aayog paper fails to address this key issue, which arose recently: despite tuberculosis being a notifiable disease, there are reports that numerous doctors in the private sector failed to notify or update TB cases to the Ministry of Health and Family Welfare, ostensibly on the grounds that they had not received their patients' consent to share information with the government. This was met with a harsh response from the government, which stated that clinical establishments failing to notify tuberculosis patients would face jail time. According to a few doctors, the government's move would coerce patients into going to 'underground clinics' to receive treatment discreetly, and hence would not solve the problem of TB.

The document also offers no specific recommended procedures regarding how inaccurate entries will be corrected or deleted.

It is then perhaps not a stretch to imagine that these scenarios would affect the quality of the data stored; defeating NITI Aayog’s objective of researchers using the stack for high-quality medical data.

The reason the quality and integrity of data is paramount is that all the proposed applications of the NHS (analytics, fraud detection, etc.) assume a high-quality, accurate dataset. At the same time, the enrolment process, the updating process and the disclosed measures to ensure data quality will effectively lead to poor-quality data. If so, applications derived from the NHS dataset should assume an imperfect dataset rather than an accurate one, which should make one wonder whether no data is better than data that is certainly inaccurate.

Lack Of Data Utilisation Guidelines

Issues with data quality are exacerbated depending on how and where the data is used, and by whom. The paper identifies some users as health-sector stakeholders, such as healthcare providers (hospitals, clinics, labs, etc.), beneficiaries, doctors, insurers and accredited social health activists, but fails to lay down utilisation guidelines. The foresight to create a dataset that can be utilised by multiple actors for numerous applications is commendable, but potentially problematic, especially if guidelines on how this data is to be used by stakeholders (especially the private sector) are absent.

To bridge this gap, India has the opportunity to learn from the legal precedent set by foreign institutions. For example, one could examine the Health Information Technology for Economic and Clinical Health (HITECH) Act and the Health Insurance Portability and Accountability Act (HIPAA) in the U.S., which set out strict guidelines for how businesses are to handle sensitive health data in order to maintain individuals' privacy and security. They go a step further to also lay down incentive and accountability structures requiring business associates to report security breaches to their respective covered entities.

If we do not take necessary precautions now, we not only run the risk of poor security and breach of privacy but of inaccurate data that renders the national health data repository a health risk for the whole patient population.

There’s also the lack of clarity on who is meant to benefit from using such a database or whether the benefits are equal to all stakeholders, but more on that in a subsequent piece.

A medical team uses a glucometer to check the blood glucose level of a patient at a mobile clinic in Pancharala, on the outskirts of Bengaluru, India. (Photographer: Dhiraj Singh/Bloomberg)

A medical team uses a glucometer to check the blood glucose level of a patient at a mobile clinic in Pancharala, on the outskirts of Bengaluru, India. (Photographer: Dhiraj Singh/Bloomberg)

It’s Your Recipe, You Try It First!

If the NITI Aayog and the government are sure that there is a need for a national healthcare database, perhaps they can start using the Central Government Health Scheme (which includes all current and retired government employees and their families) as a pilot scheme for this. Once the software, database and the various apps built on it are found to be good value for money and patients benefit from excellent treatment all over the country, it could be expanded to those who use the Employees’ State Insurance system, and then perhaps to the armed forces. After all, these three groups already have a unique identifier and would benefit from the portability of healthcare records since they are likely to be transferred and posted all over the country. If, and only if, it works for these groups and the claimed benefits are observed, then perhaps it can be expanded to the rest of the country’s healthcare systems.

Murali Neelakantan is an expert in healthcare laws. Swaraj Barooah is Policy Director at The Centre for Internet and Society. Swagam Dasgupta and Torsha Sarkar are interns at The Centre for Internet and Society.

Use of Visuals and Nudges in Privacy Notices

by Saumyaa Naidu last modified Aug 22, 2018 01:16 PM
Nudging in privacy notices can be a privacy-enhancing tool. For example, informing users of how many people would have access to their data would help them make a decision. However, nudges can also be used to influence users towards making choices that compromise their privacy. For example, the visual design of default options on digital platforms currently nudge users to share their data. It is critical to ensure that there is mindful use of nudges, and that it is directed at the well being of the users.

 

Edited by Elonnai Hickok and Amber Sinha


Former Supreme Court judge Justice B.N. Srikrishna, who is currently involved in drafting India's new data-privacy laws, was recently quoted by Bloomberg[1]. Acknowledging that the consent forms of tech companies are ineffective, leading to users' data being collected and misused, he asked whether we should have pictograph warnings for consent, much like the warnings given on cigarette packets. His concern is that the average Indian does not realise how much data they generate or how it is being used. He attributed this to access issues with the consent forms presented by companies, which are in English. In the Indian context, Justice Srikrishna pointed out, considerations around literacy and language should be addressed.

The new framework being worked on by Srikrishna and his committee of academics and government officials would make tech companies more accountable for data collection and use, and give users more control over their own data. But in addition to this regulatory step towards privacy and data protection, the way companies communicate their data practices through consent forms or privacy notices is also critical for users. Currently, cryptic notices are a barrier for users, as are services that rely on blanket consent forms taken at the beginning of a service instead of providing incremental information about its use - for example, what data is being shared with how many people, or what data is being collected at what point. Visuals can go a long way in making these notices and services accessible to users.

Although Justice Srikrishna chose the extreme example of warnings on cigarette packets, which visually depict the health risks of smoking using repulsive imagery, the underlying intent seems to be to use visuals as a means of giving an immediate and clear warning about how people's data is being used and by whom. It must be noted that the effectiveness of warnings on cigarette packets is debatable. These warnings are also a way for manufacturers to consider their accountability met, which is a possible danger with privacy notices as well. Most companies consider their accountability limited to giving all the information to users, without ensuring that the information is communicated in a way that helps the user understand the risks. Hence, one has to be cautious about the role of visuals in notices, so that they are used with the primary purpose of meaningful communication and accessibility that can inform further action. A visual summary of a data practice, in terms of how it will affect the user, will also serve as a warning.

The warning images on cigarette packets are an example of the user-influencing design approach called nudging[2]. While nudging techniques are meant to be aimed at users' well-being, they raise the question of who decides what is beneficial for users. Moreover, the harm in cigarette smoking is more obvious, and thus the favourable choice for users is also clearer. In the context of data privacy, the harms are less apparent. It is difficult to demonstrate the harms or benefits of data use, particularly when data is re-purposed or used indirectly. There is also no single choice that can be pushed when it comes to the use and collection of data; different users may have different preferences, or degrees to which they would like to allow the use of their data. This raises deeper questions about the extent to which privacy law and regulation should be paternalistic.

Nudges are considered to follow the soft or libertarian paternalism approach, where the user is not forbidden any options but only given a push to alter their behaviour in a predictable way[3]. It is crucial to differentiate between the strong paternalistic approach that doesn't allow a choice at all, the usability approach, and the soft paternalistic approach of nudging, as Alessandro Acquisti notes in his paper, 'The Behavioral Economics of Personal Information'[4]. In the usability approach, the design of the system makes it intuitive for users to change settings and secure their data. The soft paternalistic approach of nudging goes a step further and presents secure settings as the default. Usability is often prioritised by designers; however, soft paternalism techniques help enhance choice for users and lead to greater welfare[5].

Nudging in privacy notices can be a privacy-enhancing tool. For example, informing users of how many people would have access to their data would help them make a decision[6]. However, nudges can also be used to influence users towards making choices that compromise their privacy. For example, the visual design of default options on digital platforms currently nudge users to share their data. It is critical to ensure that there is mindful use of nudges, and that it is directed at the well being of the users.

The design of privacy notices should be re-conceptualised to ensure that they inform users effectively, keeping certain best practices in mind. For instance, a multilayered privacy notice can be used: a very short notice designed for portable digital devices where space is limited, a condensed notice that contains all the key factors in an easy-to-understand way, and a complete notice with all the legal requirements[7]. Along with the layering of information, the timing of notices should also be designed - at setup, just in time for the user's action, or at periodic intervals. In terms of visuals, infographics can be used to depict data flows in a system. Another best practice is to integrate privacy notices with the rest of the system. Designers need to be involved early in the process so that design decisions are not purely visual but also consider information architecture, content design, and research.
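To make the multilayered structure concrete, it can be sketched as a simple data model. Everything in this snippet - the function name, field names, and sample service - is a hypothetical illustration of the short/condensed/complete layering, not a standard format:

```python
# Illustrative sketch of a three-layer privacy notice: a very short notice
# for small screens, a condensed notice with the key factors, and a link to
# the complete legal text. All names and sample values are hypothetical.

def build_layered_notice(service_name, data_collected, third_parties, full_policy_url):
    """Return the three layers of a privacy notice for a given service."""
    short_layer = (
        f"{service_name} collects {', '.join(data_collected)} "
        f"and shares data with {len(third_parties)} third parties."
    )
    condensed_layer = {
        "what_we_collect": data_collected,
        "who_we_share_with": third_parties,
        "your_choices": ["opt out of sharing", "delete your data"],
    }
    complete_layer = full_policy_url  # the full legal notice lives here
    return {"short": short_layer, "condensed": condensed_layer, "complete": complete_layer}

notice = build_layered_notice(
    "ExampleApp",
    ["location", "contacts"],
    ["AdNetworkA", "AnalyticsB"],
    "https://example.com/privacy",
)
print(notice["short"])
```

The point of modelling the layers explicitly is that the short layer can then be surfaced at the moment of the user's action (just-in-time), while the condensed and complete layers remain one tap away.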

Practice-based frameworks should be developed for communication designers in order to build a standardised vocabulary around creating privacy notices. Additionally, multiple user groups and their varied privacy preferences must be taken into account. Finally, an ethical framework must be put in place for design practitioners to ensure that users' well-being is prioritised and that notices are designed to facilitate informed consent. Further recommendations and concerns regarding the design of privacy notices and the use of visuals can be read here.

Justice Srikrishna's statement is an important step towards creating effective privacy notices with visuals. The conversation on the need to design privacy notices can lead to clearer and more comprehensible notices. Combined with the enforcement of fair collection and use of data by companies, well-designed notices will give users more control and a real choice to opt in or out of a service, and to make informed choices as they engage with it. Justice Srikrishna's analogy seems to recommend using visuals to describe what type of data is being collected, and for what purposes, at the time of taking consent. Though cigarette warnings may not be the most appropriate analogy, this is a good start, and it is important to explore how visuals and design can be used throughout a service - from beginning to end - to convey and promote awareness and informed choices by users. It is also important to extend this conversation beyond privacy into the realm of security, and to understand how visuals and design can inform users' awareness and personal choices around security when using a service.


[1] https://www.bloomberg.com/news/articles/2018-06-10/tech-giants-nervous-as-judge-drafts-first-data-rules-in-india

[2] http://www.ijdesign.org/index.php/IJDesign/article/viewFile/1512/584

[3] https://www.andrew.cmu.edu/user/pgl/psosm2013.pdf

[4] https://www.heinz.cmu.edu/~acquisti/papers/acquisti-privacy-nudging.pdf

[5] https://www.heinz.cmu.edu/~acquisti/papers/acquisti-privacy-nudging.pdf

[6] https://cis-india.org/internet-governance/files/rethinking-privacy-principles

[7] https://www.informationpolicycentre.com/uploads/5/7/1/0/57104281/ten_steps_to_develop_a_multilayered_privacy_notice__white_paper_march_2007_.pdf

ICANN response to DIDP #31 on diversity

by Akriti Bopanna and Akash Sriram — last modified Aug 21, 2018 05:35 PM
This post summarizes the response of ICANN to our inquiry on the diversity of their employees.

The file can be found here


In our 31st DIDP request, we asked ICANN to disclose information pertaining to the diversity of its employees based on their race and citizenship. ICANN states that it is an equal opportunity employer, and we were hoping the information would help ascertain the extent to which people from different backgrounds are represented in its ranks.

However, the response provided to us did not shed any light on this, for two reasons. First, ICANN holds this information for only two countries, the USA and Singapore, where legislation compels employers to record it. In the US, Title VII of the Civil Rights Act of 1964 requires that any organization with 100 or more employees file an Employer Information Report, in which employment data is categorized by race/ethnicity, gender and job category. In Singapore, information on race is gathered from the employee to assess which Self-Help Group fund an employee should contribute to under Singaporean law.

Second, even for these two countries, ICANN refused to divulge the information on the basis of its conditions of nondisclosure. The conditions pertinent here were:

  1. Information provided by or to a government or international organization, or any form of recitation of such information, in the expectation that the information will be kept confidential and/or would or likely would materially prejudice ICANN's relationship with that party.
  2. Personnel, medical, contractual, remuneration, and similar records relating to an individual's personal information, when the disclosure of such information would or likely would constitute an invasion of personal privacy, as well as proceedings of internal appeal mechanisms and investigations.
  3. Drafts of all correspondence, reports, documents, agreements, contracts, emails,  or any other forms of communication


We had only enquired about the percentage of representation of employees at each level by race or citizenship, but ICANN deemed even this too dangerous to disclose. It did not volunteer any more information, such as an anonymized data set, and hence we will now file a DIDP asking for the same.

Given the global and multi-stakeholder nature of the processes at ICANN, it is important that its workforce reflects true diversity as well. Its bylaws mandate diversity in its Board of Directors and some of its constituent bodies, but there is no concrete proof of this being imbibed within its recruitment. ICANN also did not think it was necessary to disclose the requested information in the public interest, on the ground that the public interest does not outweigh the harm that could be caused by the requested disclosure.

DNA ‘Evidence’: Only Opinion, Not Science, And Definitely Not Proof Of Crime!

by Elonnai Hickok and Murali Neelakantan — last modified Aug 22, 2018 12:43 AM
On August 9, 2018, the DNA Technology (Use and Application) Regulation Bill, 2018 was introduced in the Lok Sabha and we commented on some key aspects of it earlier.

The article was published in Bloomberg Quint on August 20, 2018.


Though it takes some steps in the right direction, such as formalising the process for lab accreditation, the Bill ignores many potential cases of 'harm' that may arise out of the collection, databasing, and use of DNA evidence for criminal and civil purposes.

DNA evidence is widely touted as the most accurate forensic tool, but what is not widely publicised is that it is not infallible. From crime scene to database, it is extremely vulnerable to a number of unknown variables and outcomes. These variables are only increasing as the technology becomes more precise - profiles can now be developed from only a few cells, and technology exists that generates a profile in 90 minutes. Primary and secondary transfer, contamination, incomplete samples, mixed samples with too many contributors, and inaccurate or outdated methods of analysis and statistical methodologies are all serious reasons why DNA evidence may paint an innocent person as guilty.

Importantly, DNA itself is not static and predicting how it may have changed over time is virtually impossible.

Innocent, But Charged

In April 2018, WIRED carried a story of Lukis Anderson who was charged with the first-degree murder of Raveesh Kumra, a Silicon Valley investor after investigators found Anderson’s DNA on Kumra’s nails. Long story short – Anderson earlier that day had been intoxicated in public and had been attended by paramedics. The same paramedics handled Kumra’s body and inadvertently transferred Anderson’s DNA to Kumra’s body. The story quotes some sobering facts that research has found about DNA:

  1. Direct contact is not necessary for DNA to be transferred. In an experiment with a group of individuals sharing a bottle of juice, 50 percent had another person's DNA on their hands, and a third of the glasses contained DNA from individuals who had no direct contact with them.
  2. An average person sheds 50 million skin cells a day.
  3. Even when we stand still, our DNA can travel over a yard away, and it can easily be carried for miles on others' clothing or hair - not unlike pollen.
  4. In an experiment that tested public items, it was found that items can contain DNA from a half-dozen people.
  5. A friendly or inadvertent contact can transfer DNA to private regions or clothing.
  6. Different people shed detritus at different levels that contain DNA.
  7. One in five people continuously carries some other person's DNA under their fingernails.
Crime Scene Tape in Alexandria
A police office carries crime scene tape in Alexandria, Virginia, U.S. (Photographer: Andrew Harrer/Bloomberg)

In another case, the police in Idaho, USA, used a public DNA database to run a familial DNA search – a technique used to identify suspects whose DNA is not recorded in a law enforcement database, but whose close relatives have had their genetic profiles cataloged, just as India's DNA Bill seeks to do. The partial match that resulted implicated Michael Usry, the son of the man whose DNA was in the public database. It took 33 days for Michael to be cleared of the crime. That an innocent man only spent 33 days under suspicion could be considered a positive outcome when compared to the case of Josiah Sutton who spent four years convicted of rape in prison due to misinterpretation of DNA samples by the Houston Police Department Crime Laboratory, which is among the largest public forensic centers in Texas. The Atlantic called this out as “The False Promise of DNA Testing – the forensic technique is becoming ever more common and ever less reliable”.

Presently, there is little confidence that such safeguards exist - prosecutors do not share exculpatory evidence with the accused, India does not follow the 'fruit of the poisonous tree' doctrine with respect to the admissibility of evidence, and India has yet to develop a robust jurisprudence for evaluating scientific evidence.

The 2015 Law Commission Report cites four cases that speak to the role of, and reliance on, expert opinion as evidence. Though these cases point to the importance of expert opinion, they differ on the weight that should be given to it. International best practice requires the submission of corroborating evidence, training for law enforcement and court officers, and ensuring that prosecution and defence have equal access to forensic evidence.

Consider India: a population of 1.3 billion people, 70 percent residing in rural areas with lower levels of education, a heavy migrant population in urban centres, an overwhelmed police force in the nascent stages of forensic training, an overburdened judiciary, and no concrete laws to govern the admissibility of forensic techniques.

In such circumstances, the question is not only how many criminals can be convicted but also how many innocents could be convicted.

Handcuffs

A pair of standard issue handcuffs sits on a table. (Photographer: Jerome Favre/Bloomberg)

The DNA Bill seeks to establish DNA databanks at the regional and national level but how this will be operationalised is not quite clear. The Bill enables the DNA Regulatory Board to accredit DNA labs. Will databases be built from scratch? Will they begin by pulling in existing databases?

The question is not if the DNA samples match but how they came to match. The greater power that comes from the use of DNA databases requires greater responsibility in ensuring adequate information, process, training, and laws are in place for everyone – those who give DNA, collect DNA, store DNA, process DNA, present DNA, and eventually decide on the use of the DNA. As India matures in its use of DNA evidence for forensic purposes it is important that it keeps at the forefront what is necessary to ensure and protect the rights of the individual.


Elonnai Hickok is Chief Operating Officer at The Centre for Internet and Society. Murali Neelakantan is an expert in healthcare laws, and the author of 'DNA Testing as Evidence - A Judge's Nightmare' in the Journal of Law and Medicine.

An Analysis of the CLOUD Act and Implications for India

by Elonnai Hickok and Vipul Kharbanda — last modified Aug 22, 2018 02:55 PM
India houses the second largest population in the world, at approximately 1.35 billion individuals. In such a diverse and dense context, law enforcement can be a challenging job.

Introduction

Networked technologies have changed the nature of crime and will continue to do so. Access to data generated by digital technologies and on digital platforms is important in solving both online and offline crimes. Yet a significant amount of such data is stored predominantly under the control of companies in the United States. For access to metadata (location data or subscriber information), Indian law enforcement can send a request directly to the company. For access to content data, however, law enforcement must follow the MLAT process, as a result of requirements under the Electronic Communications Privacy Act (ECPA). ECPA allows service providers to share metadata on request of foreign governments, but requires a judicially issued warrant based on a finding of 'probable cause' before a service provider can share content data.

The challenges associated with accessing data across borders have been an area of concern for India for many years. From data localization requirements to legal decryption mandates and proposed back doors, law enforcement and the government have consistently been trying to find efficient ways to access data across borders.

Towards finding solutions to the challenges in the MLAT process, Peter Swire and Deven Desai, in the article "A Qualified SPOC Approach for India and Mutual Legal Assistance", have noted the importance of resolving the hurdles in the India-US MLAT. They suggest that reforms to the MLAT process in India should not start with law enforcement, and instead propose the establishment of a Single Point of Contact designated to handle and process government-to-government requests, with requests emerging from that office receiving special legal treatment.

Frustrations with cross border sharing of data are not unique to India and the framework has been recognized by many stakeholders for being outdated, slow, and inefficient - giving rise to calls from governments, law enforcement, and companies for solutions. As a note, some research has also highlighted that the identified issues with the MLAT system are broad and more evidence is needed to support each concern and inform policy response.

Towards this, the US and EU have taken clear policy steps to address the tensions in the MLAT system by enabling governments direct access to content data. On April 17, 2018, the European Union published the E-Evidence Directive and a Regulation that allow a law enforcement agency to obtain electronic evidence from service providers within 10 days of a request, or 6 hours for emergency requests, and to request the preservation or production of data. Production orders for content and transactional records can be issued only for certain serious crimes and must be issued by a judge. No judicial authorisation is required for production orders for subscriber information and access data, which can be sought to investigate any criminal offence, not just serious offences. Preservation orders can be issued without judicial authorisation for all four types of data and for the investigation of any crime. Further, requests originating from the European Union must be handled by a designated legal representative.
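The authorisation rules for production orders can be restated as a small lookup table. This is purely an illustrative summary of the description in this article (not a reading of the Regulation's text), and the category names are our own shorthand:

```python
# Illustrative summary of the E-Evidence production-order rules as described
# above: which of the four data categories require judicial authorisation,
# and what offences they cover. Category labels are this article's shorthand.

PRODUCTION_ORDER_RULES = {
    # data category: (judicial authorisation required?, offences covered)
    "subscriber": (False, "any criminal offence"),
    "access": (False, "any criminal offence"),
    "transactional": (True, "certain serious crimes"),
    "content": (True, "certain serious crimes"),
}

DEADLINES = {"standard": "10 days", "emergency": "6 hours"}

def needs_judge(data_category):
    """Return True if a production order for this category needs a judge."""
    return PRODUCTION_ORDER_RULES[data_category][0]

print(needs_judge("content"), needs_judge("subscriber"))
```

Laid out this way, the asymmetry is easy to see: the less intrusive categories (subscriber and access data) face neither a judicial check nor a seriousness threshold, while transactional and content data face both.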

On the US side, in 2016 the Department of Justice (DoJ) put out draft legislation that would create a framework allowing the US to enter into executive agreements with countries evaluated as meeting criteria defined in the law. Our response to the DoJ draft Bill can be found here. In February 2018, the Microsoft Ireland case was argued before the U.S. Supreme Court. The question central to the case was whether a US warrant issued against a company incorporated in the US was valid if the data was stored on servers outside the US. On March 23, 2018, the United States enacted the "Clarifying Lawful Overseas Use of Data Act", also known as the CLOUD Act. The passing of the Act resolves the dilemma at the heart of the Microsoft Ireland case. The CLOUD Act amends Title 18 of the United States Code and allows U.S. law enforcement agencies to access data stored abroad by extending the reach of the U.S. Stored Communications Act, enabling access without requiring the specific cooperation of foreign governments. Under this law, U.S. law enforcement agencies can seek or issue orders that compel companies to provide data regardless of where the data is located, as long as the data is under their "possession, custody or control". It further allows US communication service providers to intercept or provide the content of communications in response to orders from foreign governments, if the foreign government has entered into an executive agreement with the US, approved by the Attorney General with the concurrence of the Secretary of State. The Act also absolves companies from criminal and civil liability when disclosing information in good faith pursuant to an executive agreement between the US and a foreign country. Such access would be reciprocal, with the US government having similar access rights to data stored in the foreign country.

Though the E-Evidence Directive is a significant development, in this article - we focus on the CLOUD Act and its implications for cross border sharing of data between India and the US.


To read more download the PDF

Consumer Care Society: Silver Jubilee Year Celebrations

by Arindrajit Basu — last modified Aug 27, 2018 01:51 PM
Arindrajit Basu delivered a talk at the Silver Jubilee Celebrations of the Consumer Care Society (CCS) on 'Privacy and Security in the Age of the Internet'.

CONSUMER CARE SOCIETY (CCS) is an active, volunteer-based not-for-profit organization involved in consumer activities. Established as a registered society in 1994, CCS has since functioned as the voice of the consumer in many forums. Today CCS is widely recognized as a premier consumer voluntary organization (CVO) in Bangalore and Karnataka. CCS is registered with many governmental agencies and regulators - TRAI, BIS, the Petroleum and Natural Gas Regulatory Board, DOT and ICMR at the Central Government level - and with almost all service providers at the State level, such as BWSSB, BESCOM, BDA and BBMP.

Shreenivas.S. Galgali, ITS, Adviser,  TRAI Regional Office, Bangalore and Aradhana Biradar, User Education and Research Specialist, Google were the other speakers at the event held at CCS.

The Srikrishna Committee Data Protection Bill and Artificial Intelligence in India

by Amber Sinha and Elonnai Hickok — last modified Sep 03, 2018 01:29 PM
Artificial Intelligence in many ways is in direct conflict with traditional data protection principles and requirements including consent, purpose limitation, data minimization, retention and deletion, accountability, and transparency.

Privacy Considerations in AI

Other related privacy concerns in the context of AI centre around re-identification and de-anonymisation, discrimination, unfairness, inaccuracies, bias, opacity, profiling, misuse of data, and embedded power dynamics.[1]

The need for large amounts of data to improve accuracy, the ability to process vast amounts of granular data, and the present tension between the explainability and the results of AI systems[2] have raised many concerns on both sides of the fence. On one hand, there is concern that heavy-handed or inappropriate regulation will stifle innovation: if developers can only use data for a pre-defined purpose, the prospects of AI are limited. On the other hand, individuals are concerned that privacy will be significantly undermined by AI systems that collect and process data in real time and at a personal level not previously possible. Chatbots, home assistants, wearable devices, robot caregivers, facial recognition technology and the like can collect data from a person at an intimate level. At the same time, some have argued that AI can work towards protecting privacy by limiting the access that humans working at the respective companies have to personal data.[3]

India is embracing AI. Two national roadmaps for AI were released in 2018 respectively by the Ministry of Commerce and Industry and Niti Aayog. Both roadmaps emphasized the importance of addressing privacy concerns in the context of AI and ensuring that a robust privacy legislation is enacted. In August 2018, the Srikrishna Committee released a draft Personal Data Protection Bill 2018 and the associated report that outlines and justifies a framework for privacy in India. As the development and use of AI in India continues to grow, it is important that India simultaneously moves forward with a privacy framework that addresses the privacy dimensions of AI.

In this article we attempt to analyse if and how the Srikrishna Committee draft Bill and report have addressed AI, contrast this with developments in the EU and the passing of the GDPR, and identify solutions being explored towards developing AI while upholding and safeguarding privacy.

The GDPR and Artificial Intelligence

The General Data Protection Regulation became enforceable in May 2018 and establishes a framework for the processing of personal data of individuals within the European Union. The GDPR has been described by the IAPP as taking a 'risk-based' approach to data protection that pushes data controllers to engage in risk analysis and adopt 'risk measured responses'.[4] Though the GDPR does not explicitly address artificial intelligence, it has a number of provisions addressing automated decision making and profiling, and others that will impact companies using artificial intelligence in their business activities. These are outlined below:

  1. Data rights: The GDPR grants individuals a number of data rights: the right to be informed, right of access, right to rectification, right to erasure, right to restrict processing, right to data portability, right to object, and rights related to automated decision making, including profiling. The last of these seeks to address concerns arising out of automated decision making by giving the individual the right not to be subject to a decision based solely on automated decision making, including profiling, if the decision would produce legal effects or similarly significantly affect them. There are three exceptions to this right - if the automated decision making is: a. necessary for the performance of a contract, b. authorised by the Union or a Member State, or c. based on explicit consent.[5]
  2. Transparency: Under Article 14, data controllers must enable the right to opt out of automated decision making by notifying individuals of the existence of automated decision making, including profiling, and providing meaningful information about the logic involved as well as the potential consequences of such processing.[6] Importantly, this requirement has the potential of ensuring that companies do not operate complete 'black box' algorithms within their business processes.
  3. Fairness: The principle of fairness found under Article 5(1) will also apply to the processing of personal data by AI. The principle requires that personal data be processed lawfully, fairly, and in a transparent manner in relation to the data subject. Recital 71 further clarifies that this will include implementing appropriate mathematical and statistical measures for profiling, correcting inaccuracies, and ensuring that processing does not result in discriminatory effects.[7]
  4. Purpose Limitation: The principle of purpose limitation (Article 5(1)(b)) requires that personal data be collected for specified, explicit, and legitimate purposes and not be further processed in a manner incompatible with those purposes. Processing for archiving purposes in the public interest, scientific or historical research purposes, or statistical purposes is not considered incompatible with the initial purposes. It has been noted that it is unclear whether research carried out through artificial intelligence would fall under this exception, as the GDPR does not define 'scientific purposes'.[8]
  5. Privacy by Design and Default: Article 25 requires all data controllers to implement technical and organisational measures to meet the requirements of the regulation. This could include techniques like pseudonymisation. Data controllers are also required to implement appropriate technical and organisational measures to ensure that, by default, only personal data necessary for a specific purpose are processed.[9]
  6. Data Protection Impact Assessments: Article 35 requires data controllers to undertake impact assessments if they carry out processing likely to result in a high risk to individuals. This includes where the data controller undertakes systematic and extensive profiling, processes special categories of data or criminal offence data on a large scale, or systematically monitors publicly accessible places on a large scale. In implementation, some jurisdictions like the UK require impact assessments in additional circumstances, including where the data controller: uses new technologies; uses profiling or special category data to decide on access to services; profiles individuals on a large scale; processes biometric or genetic data; matches data or combines datasets from different sources; collects personal data from a source other than the individual without providing them with a privacy notice; tracks individuals’ location or behaviour; profiles children or targets marketing or online services at them; or processes data that might endanger the individual’s physical health or safety in the event of a security breach.[10]
  7. Security: Article 32 requires data controllers to ensure a level of security appropriate to the risk, including employing methods like encryption and pseudonymisation.

Srikrishna Committee Bill and AI

The Draft Data Protection Bill and the associated report by the Srikrishna Committee were published in August 2018 and recommend a privacy framework for India. The Bill contains a number of provisions that will directly impact data fiduciaries using AI and that try to account for the unintended consequences of emerging technologies like AI. These include:

  1. Definition of Harm: The Bill defines harm as including bodily or mental injury; loss, distortion or theft of identity; financial loss or loss of property; loss of reputation or humiliation; loss of employment; any discriminatory treatment; any subjection to blackmail or extortion; any denial or withdrawal of a service, benefit or good resulting from an evaluative decision about the data principal; any restriction placed or suffered directly or indirectly on speech, movement or any other action arising out of a fear of being observed or surveilled; and any observation or surveillance that is not reasonably expected by the data principal. The Bill also allows categories of significant harm to be further defined by the data protection authority.

Many of the above are harms that have been associated with artificial intelligence - specifically loss of employment, discriminatory treatment, and denial of service. Enabling the data protection authority to further define categories of significant harm could allow unexpected harms arising from the use of AI to come under the ambit of the Bill.

  • Data Rights: Like the GDPR, the Bill creates a set of data rights for the individual, including the rights to confirmation and access, correction, data portability, and the right to be forgotten. At the same time, the Bill is intentionally silent on the rights and obligations in the GDPR that address automated decision making: the right to object to processing,[11] the right to opt out of automated decision making,[12] and the obligation on the data controller to inform the individual about the use of automated decision making and to provide basic information regarding the logic and impact of the same.[13] As justification, the Committee noted the following in its report: the right to restrict processing may be unnecessary in India, as it provides only interim remedies around issues such as inaccuracy of data, and the same result can be achieved by a data principal approaching the DPA or the courts for a stay on processing, or by simply withdrawing consent. The objective of protecting against discrimination, bias, and opaque decisions that the right to object to automated processing and to receive information about the processing of data seeks to fulfil would, in the Indian context, be better achieved through an accountability framework requiring specific data fiduciaries making evaluative decisions through automated means to set up processes that ‘weed out’ discrimination. If discrimination has nonetheless taken place, individuals can seek remedy through the courts.

By taking this approach, the Bill creates a framework to address harms arising out of AI, but does not empower the individual to decide how their data is processed and remains silent on the issue of ‘black box’ algorithms.

  • Data Quality: The Bill requires data fiduciaries to ensure that personal data processed is complete, accurate, not misleading, and updated with respect to the purposes for which it is processed. When taking steps to comply, data fiduciaries must take into consideration whether the personal data is likely to be used to make a decision about the data principal, whether it is likely to be disclosed to other individuals, and whether it is kept in a form that distinguishes personal data based on facts from personal data based on opinions or personal assessments.[14]

This principle, while not mandating that data fiduciaries take into account considerations such as biases in datasets, could potentially be interpreted by the data protection authority to include within its scope means of ensuring that data does not contain or result in bias.

  • Principle of Privacy by Design: The Bill requires significant data fiduciaries to have in place a number of policies and measures around several aspects of privacy. These include: (a) measures to ensure that managerial, organisational, business practices and technical systems are designed in a manner to anticipate, identify, and avoid harm to the data principal; (b) that the obligations mentioned in Chapter II are embedded in organisational and business practices; (c) that technology used in the processing of personal data is in accordance with commercially accepted or certified standards; (d) that the legitimate interests of business, including any innovation, are achieved without compromising privacy interests; (e) that privacy is protected throughout processing, from the point of collection to the deletion of personal data; (f) that processing of personal data is carried out in a transparent manner; and (g) that the interest of the data principal is accounted for at every stage of processing of personal data.

A number of these (a, d, e, and g) require that the interest of the data principal be accounted for throughout the processing of personal data. This will be significant for systems driven by artificial intelligence, since a number of the harms that have arisen from the use of AI - discrimination, denial of service, and loss of employment - have been brought under the definition of harm within the Bill. Placing the interest of the data principal first is also important in protecting against unintended consequences or harms that may arise from AI.[15] If the Bill is enacted, it will be important to see what policies and measures emerge in the context of AI to comply with this principle, and what commercially accepted or certified standards companies rely on to comply with (c).

  • Data Protection Impact Assessment: The Bill requires data fiduciaries to undertake a data protection impact assessment when implementing new technologies, undertaking large scale profiling, or using sensitive personal data. Such assessments must include a detailed description of the proposed processing operation, the purpose of the processing and the nature of the personal data being processed, an assessment of the potential harm that may be caused to the data principals whose personal data is proposed to be processed, and measures for managing, minimising, mitigating or removing such risk of harm. If the Authority finds that the processing is likely to cause harm to the data principals, it may direct the data fiduciary to cease the processing or to carry it out only subject to conditions. This requirement applies to all significant data fiduciaries and to any other data fiduciaries as required by the DPA.[16]

This principle will apply to companies implementing AI systems. For AI systems, it will be important to see how much information the DPA will require under the requirement of data fiduciaries providing detailed descriptions of the proposed processing operation and purpose of processing.

  • Classification of data fiduciaries as significant data fiduciaries: The Authority may notify certain categories of data fiduciaries as significant data fiduciaries based on the volume of personal data processed, the sensitivity of personal data processed, the turnover of the data fiduciary, the risk of harm resulting from any processing undertaken by the fiduciary, the use of new technologies for processing, and any other factor relevant to causing harm to any data principal. A data fiduciary that falls under any of these conditions is required to register with the Authority. All significant data fiduciaries must undertake data protection impact assessments, maintain records as per the Bill, undergo data audits, and have in place a data protection officer.

Under this provision, companies deploying artificial intelligence would come under the definition of a significant data fiduciary and be subject to the principles of privacy by design etc. articulated in the chapter, unless the data fiduciary comes under the definition of ‘small entity’ found in section 48.[17]

  • Restrictions on cross border transfer of personal data: The Bill requires all data fiduciaries to store a copy of personal data on a server or data centre located in India, and notified categories of critical personal data to be processed in servers located in India.

It is interesting to note that, in the context of cross border sharing of data, the Bill creates a new category of data that can be further defined, beyond personal and sensitive personal data. For companies implementing artificial intelligence, this provision may prove cumbersome to comply with, as many utilise cloud storage and facilities located outside of India for the processing of large amounts of data.[18]

  • Powers and functions of the Authority: The Bill lays down a number of functions of the Authority one being to monitor technological developments and commercial practices that may affect protection of personal data.

Presumably, this will include monitoring technological developments in the field of artificial intelligence.[19]

  • Fair and reasonable processing: The Bill requires that any person processing personal data owes a duty to the data principal to process such personal data in a fair and reasonable manner that respects the privacy of the data principal. In its report, the Srikrishna Committee explains that the fair and reasonable principle is meant to address: (1) power asymmetries between data principals and data fiduciaries, recognising that data fiduciaries have a responsibility to act in the best interest of the data principal; (2) situations where processing may be legal but not necessarily fair or in the best interest of the data principal; and (3) the development of trust between the data principal and the data fiduciary.[20]

This is in contrast to the GDPR which requires processing to simultaneously meet the three conditions of fairness, lawfulness, and transparency.

  • Purpose Limitation: Personal data can only be processed for the purposes specified or any other purpose that the data principal would reasonably expect.

As a note, the Srikrishna Committee Bill does not include ‘scientific purposes’ as an exception to the principle of purpose limitation, as found in the GDPR,[21] and instead creates an exception for research, archiving, or statistical purposes.[22] The DPA has the responsibility of developing codes defining research purposes under the Act.[23]

  • Security Safeguards: Every data fiduciary must implement appropriate security safeguards, including the use of methods such as de-identification and encryption, steps to protect the integrity of personal data, and steps necessary to prevent misuse, unauthorised access to, modification, disclosure or destruction of personal data.[24]

Unlike the GDPR, which explicitly refers to the technique of pseudonymisation, the Srikrishna Bill uses the term ‘de-identification’. The Srikrishna Report clarifies that this includes techniques like pseudonymisation and masking, and further clarifies that, because of the risk of re-identification, de-identified personal data should still receive the same level of protection as personal data. The Bill also gives the DPA the authority to define appropriate levels of anonymisation.[25]
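To illustrate the kind of technique these terms refer to, the sketch below shows one common form of pseudonymisation: replacing a direct identifier with a keyed hash. This is a minimal, hypothetical example (the field names and key handling are assumptions for illustration, not anything prescribed by the Bill or the GDPR):

```python
import hmac
import hashlib

# Assumption: the key is managed and stored separately from the dataset.
# Unlike a plain hash, re-identifying a keyed hash requires the secret key.
SECRET_KEY = b"keep-this-key-out-of-the-dataset"

def pseudonymise(identifier: str) -> str:
    """Return a stable pseudonym for a direct identifier (e.g. a name or ID number)."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

# Hypothetical record: the direct identifier is replaced, other fields are kept.
record = {"name": "A. Principal", "diagnosis": "hypertension"}
safe_record = {"pid": pseudonymise(record["name"]), "diagnosis": record["diagnosis"]}

# The mapping is deterministic, so records about the same person remain linkable.
assert safe_record["pid"] == pseudonymise("A. Principal")
```

Because the mapping is deterministic, records can still be linked across datasets, which is precisely why, as the Srikrishna Report notes, de-identified data remains at risk of re-identification and should be protected as personal data.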

Technical perspectives of Privacy and AI

There is an emerging body of work looking at solutions to the dilemma of maintaining privacy while employing artificial intelligence, and at ways in which artificial intelligence can support and strengthen privacy. For example, there are AI driven platforms that leverage the technology to help businesses meet regulatory compliance with data protection laws,[26] as well as research into AI privacy enhancing technologies.[27] Standards setting bodies like the IEEE have undertaken work on the ethical considerations in the collection and use of personal data when designing, developing, and/or deploying AI, through the standard ‘Ethically Aligned Design’.[28] In the article ‘Artificial Intelligence and Privacy’, Datatilsynet, the Norwegian Data Protection Authority,[29] breaks such methods into three categories:

  1. Techniques for reducing the need for large amounts of training data: Such techniques include:
    1. Generative adversarial networks (GANs): GANs are used to create synthetic data and can address the need for large volumes of labelled data without relying on real data containing personal data. GANs could potentially be useful from a research and development perspective in sectors like healthcare, where most data would qualify as sensitive personal data.
    2. Federated Learning: Federated learning allows models to be trained and improved on data from a large pool of users without directly collecting user data. This is achieved by running a centralised model on each client unit, where it is improved on local data. The changes from these improvements are shared back with the centralised server, and an average of the changes from multiple client units becomes the basis for improving the centralised model.
    3. Matrix Capsules: Proposed by Google researcher Geoffrey Hinton, matrix capsules improve the accuracy of existing neural networks while requiring less data.[30]
  2. Techniques that uphold data protection without reducing the basic data set
    1. Differential Privacy: Differential privacy intentionally adds ‘noise’ to data when it is accessed. This allows personal data to be analysed without revealing identifying information.
    2. Homomorphic Encryption: Homomorphic encryption allows for the processing of data while it is still encrypted. This addresses the need to access and use large amounts of personal data for multiple purposes.
    3. Transfer Learning: Instead of building a new model, transfer learning builds upon existing models, applying them to new, related purposes or tasks. This has the potential to reduce the amount of training data needed.
    4. RAIRD: Developed by Statistics Norway and the Norwegian Centre for Research Data, RAIRD is a national research infrastructure that allows for access to large amounts of statistical data for research while managing statistical confidentiality. This is achieved by allowing researchers access to metadata. The metadata is used to build analyses which are then run against detailed data without giving access to actual data.[31]
  3. Techniques to move beyond opaque algorithms
    1. Explainable AI (XAI): DARPA, in collaboration with Oregon State University, is researching how to create explainable models and explanation interfaces that maintain a high level of learning performance, in order to enable individuals to interact with, trust, and manage artificial intelligence.[32] DARPA identifies a number of entities working on different models and interfaces for analytics and autonomy AI.[33]
    2. Local Interpretable Model-Agnostic Explanations (LIME): LIME was developed to enable trust between AI models and humans by generating explainers that highlight the key aspects that were important to a model’s decision, thus providing insight into the rationale behind the model.[34]
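As a concrete illustration of the differential privacy technique listed above, the sketch below implements the classic Laplace mechanism for a counting query. It is a minimal, hypothetical example (the dataset and epsilon value are invented for illustration), not a production-ready implementation:

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample from a Laplace(0, scale) distribution via inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(records, predicate, epsilon: float) -> float:
    """Answer a counting query with epsilon-differential privacy.

    Adding or removing one record changes a count by at most 1
    (sensitivity 1), so Laplace(1/epsilon) noise suffices.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical dataset: ages of individuals in a sensitive register.
ages = [23, 35, 41, 29, 52, 67, 31]
# A smaller epsilon means more noise and stronger privacy.
noisy_over_40 = dp_count(ages, lambda a: a >= 40, epsilon=0.5)
```

The noisy answer is still useful in aggregate, but any single individual's presence or absence in the register has only a bounded effect on it, which is what makes the released statistic privacy-preserving.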

Public Sector use of AI and Privacy

The role of AI in public sector decision making has been gradually growing globally, across sectors such as law enforcement, education, transportation, judicial decision making and healthcare. In India too, the use of automated processing - in electronic governance under the Digital India mission, in domestic law enforcement agencies' monitoring of social media content, and in educational schemes - is being discussed and gradually implemented. Much like the potential applications of AI across sub-sectors, the regulatory issues are also diverse.

Aside from the accountability framework discussed in the Srikrishna Committee report, the Puttaswamy judgment also provides a basis for the governance of AI, with respect to its privacy concerns, in limited contexts. The sources of the right to privacy as articulated in the Puttaswamy judgments included ‘personal liberty’ under Article 21 of the Constitution. In order to fully appreciate how constitutional principles could apply to automated processing in India, we need to look closely at the origins of privacy under liberty. In the famous case of AK Gopalan there is a protracted discussion on the contents of the rights under Article 21, and even the majority opinions were divided. While Sastri J. and Mukherjea J. took the restrictive view, limiting the protections to bodily restraint and detention, Kania J. and Das J. took a broader view, holding them to include the right to sleep, to play, and so on. Through RC Cooper[35] and Maneka,[36] the Supreme Court took steps to reverse the majority opinion in Gopalan, and it was established that the freedoms and rights in Part III could be addressed by more than one provision. The expansion of ‘personal liberty’ began in Kharak Singh, where unjustified interference with a person’s right to live in his house was held to be violative of Article 21. The reasoning in Kharak Singh draws heavily from Munn v. Illinois,[37] which held life to be “more than mere animal existence.” Curiously, after taking this position, Kharak Singh failed to recognise a fundamental right to privacy (analogous to the Fourth Amendment protection in the US) under Article 21, even though its approach was to extrapolate to ‘personal liberty’ the same wide interpretation accorded to ‘life’. Maneka, which evolved the test for unenumerated rights within Part III, holds that the claimed right must be an integral part of, or of the same nature as, the named right: the claimed right must be ‘in reality and substance nothing but an instance of the exercise of the named fundamental right’. The clear reading of privacy into ‘personal liberty’ in this judgment is effectively a correction of the inherent inconsistencies in the positions taken by the majority in Kharak Singh.

The other significant change in constitutional interpretation that occurred in Maneka was with respect to the phrase ‘procedure established by law’ in Article 21. In Gopalan, the majority held that the phrase does not mean procedural due process or natural justice. What this meant was that, once a ‘procedure’ was ‘established by law’, Article 21 could not be said to have been infringed. This position was entirely reversed in Maneka, whose ratio held that ‘procedure established by law’ must be fair, just and reasonable, and cannot be arbitrary and fanciful. Therefore, any infringement of the right to privacy must be through a law which follows the principles of natural justice, and is not arbitrary or unfair. It follows that any instance of automated processing for public functions by state actors or others must meet this standard of ‘fair, just and reasonable’.

While there is a lot of focus internationally on what ethical AI must be, it is important, when we consider the use of AI by the state, to pay heed to the existing constitutional principles against which such use must be evaluated. These principles, however, extend only to limited circumstances, for the protections under Article 21 are not horizontal in nature but applicable only against the state. Whether a party is the state or not is a question that has been considered several times by the Supreme Court and must be determined by functional tests. In our submission to the Justice Srikrishna Committee, we clearly recommended that where automated decision making is used for the discharge of public functions, the data protection law must state that such actions are subject to constitutional standards: they must be ‘just, fair and reasonable’ and satisfy the tests for both procedural and substantive due process. To a limited extent, the Committee seems to have picked up the standards of ‘fair’ and ‘reasonable’ and made them applicable to all forms of processing, whether public or private. It is as yet unclear whether fairness and reasonableness as inserted in the Bill would draw from the constitutional standard under Article 21. The report makes a reference to the twin principles of acting in a manner that upholds the best interest of the privacy of the individual and processing within the reasonable expectations of the individual, which do not seem to cover the fullest essence of the legal standard under Article 21.

Conclusion

The Srikrishna Committee Bill attempts to create an accountability framework for the use of emerging technologies, including AI, that is focused on placing the responsibility on companies to prevent harm. Though not as robust as those found in the GDPR, protections have been enabled through requirements such as fair and reasonable processing, ensuring data quality, and implementing the principle of privacy by design. At the same time, the Srikrishna Bill does not include provisions that begin to address the consumer facing ‘black box’ of AI by ensuring that individuals have information about the potential impact of decisions taken by automated means. In contrast, the GDPR has already taken important steps to tackle this by requiring companies to explain the logic and potential impact of decisions taken by automated means.

Most importantly, the Bill gives the Data Protection Authority the necessary tools to hold companies accountable for the use of AI through the requirement of data protection audits. If the Bill is enacted, it remains to be seen how these audits and the principle of privacy by design are implemented and enforced in the context of companies using AI. Though the Bill creates a Data Protection Authority consisting of members with significant experience in data protection, information technology, data management, data science, cyber and internet laws, and related subjects, these requirements could be further strengthened by including someone with a background in ethics and human rights.

One of the responsibilities of the DPA under the Srikrishna Bill will be to monitor technological developments and commercial practices that may affect the protection of personal data, and to promote measures and undertake research for innovation in the field of personal data protection. If the Bill is enacted, we hope that AI, and solutions for enhancing privacy in the context of AI like those described above, will be among the focus areas of the DPA. It will also be important to see how the DPA develops impact assessments related to AI and what tools associated with the principle of privacy by design emerge to address AI.


[1] https://privacyinternational.org/topics/artificial-intelligence

[2] https://www.wired.com/story/our-machines-now-have-knowledge-well-never-understand/

[3] https://iapp.org/news/a/ai-offers-opportunity-to-increase-privacy-for-users/

[4] https://iapp.org/media/pdf/resource_center/GDPR_Study_Maldoff.pdf

[5] https://gdpr-info.eu/art-22-gdpr/

[6] https://gdpr-info.eu/art-14-gdpr/

[7] https://www.datatilsynet.no/globalassets/global/english/ai-and-privacy.pdf

[8] https://www.datatilsynet.no/globalassets/global/english/ai-and-privacy.pdf

[9] https://gdpr-info.eu/art-25-gdpr/

[10] https://ico.org.uk/for-organisations/guide-to-the-general-data-protection-regulation-gdpr/accountability-and-governance/data-protection-impact-assessments/

[11] https://gdpr-info.eu/art-21-gdpr/

[12] https://gdpr-info.eu/art-22-gdpr/

[13] https://gdpr-info.eu/art-14-gdpr/

[14]Draft Data Protection Bill 2018 -  Chapter II section 9

[15] Draft Data Protection Bill 2018 -  Chapter VII section 29

[16] Draft Data Protection Bill 2018 -  Chapter VII section 33

[17] Draft Data Protection Bill 2018 -  Chapter VII section 38

[18] Draft Data Protection Bill 2018 -  Chapter VIII section 40

[19] Draft Data Protection Bill 2018 -  Chapter X section 60

[20] Draft Data Protection Bill 2018 -  Chapter II section 4

[21] Draft Data Protection Bill 2018 - Chapter II section 5

[22] Draft Data Protection Bill 2018 -  Chapter IX Section 45

[23] Draft Data Protection Bill 2018 - Chapter XIV section 97

[24] Draft Data Protection Bill 2018 - Chapter VII section 31

[25] Srikrishna Committee Report on Data Protection pg. 36 and 37. Available at: http://www.prsindia.org/uploads/media/Data%20Protection/Committee%20Report%20on%20Draft%20Personal%20Data%20Protection%20Bill,%202018.pdf

[26] https://www.ciosummits.com/Online_Assets_DocAuthority_Whitepaper_-_Guide_to_Intelligent_GDPR_Compliance.pdf

[27] https://jolt.law.harvard.edu/assets/articlePDFs/v31/31HarvJLTech217.pdf

[28] https://standards.ieee.org/content/dam/ieee-standards/standards/web/documents/other/ead_personal_data_v2.pdf

[29] https://www.datatilsynet.no/globalassets/global/english/ai-and-privacy.pdf

[30] https://www.artificial-intelligence.blog/news/capsule-networks

[31] http://raird.no/about/factsheet.html

[32] https://www.darpa.mil/attachments/XAIProgramUpdate.pdf

[33] https://www.darpa.mil/attachments/XAIProgramUpdate.pdf

[34] https://www.oreilly.com/learning/introduction-to-local-interpretable-model-agnostic-explanations-lime

[35] R C Cooper v. Union of India, 1970 SCR (3) 530.

[36] Maneka Gandhi v. Union of India, 1978 SCR (2) 621.

[37] 94 US 113 (1877).

AI in India: A Policy Agenda

by Amber Sinha, Elonnai Hickok and Arindrajit Basu — last modified Sep 05, 2018 03:39 PM



Background

Over the last few months, the Centre for Internet and Society has been engaged in mapping the use and impact of artificial intelligence in the health, banking, manufacturing, and governance sectors in India through the development of a case study compendium.[1] Alongside this research, we are examining the impact of Industry 4.0 on jobs and employment, and questions related to the future of work in India. We have also been part of several global conversations on artificial intelligence and autonomous systems. The Centre for Internet and Society is part of the Partnership on Artificial Intelligence, a consortium with representation from some of the most important companies and civil society organisations involved in AI development and research. We have contributed to the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, and are also part of the Big Data for Development Global Network, where we are undertaking research towards evolving ethical principles for the use of computational techniques. The following is a set of recommendations arrived at through our research into artificial intelligence, particularly the sectoral case studies focussed on the development and use of artificial intelligence in India.

National AI Strategies: A Brief Global Overview

Artificial intelligence is emerging as a central policy issue in several countries. In October 2016, the Obama White House released a report titled “Preparing for the Future of Artificial Intelligence”,[2] delving into a range of issues including applications for public good, regulation, economic impact, global security and fairness. The White House also released a companion document, the “National Artificial Intelligence Research and Development Strategic Plan”,[3] which laid out a strategic plan for federally funded research and development in AI. These were the first in a series of US policy documents on the role of AI. The United Kingdom announced its 2020 national development strategy and issued a government report to accelerate the application of AI by government agencies, and in 2018 the Department for Business, Energy, and Industrial Strategy released the policy paper “AI Sector Deal”.[4] The Japanese government released its paper on Artificial Intelligence Technology Strategy in 2017.[5] The European Union launched "SPARC," the world’s largest civilian robotics R&D programme, back in 2014.[6]

Over the last year and a half, Canada,[7] China,[8] the UAE,[9] Singapore,[10] South Korea,[11] and France[12] have announced national AI strategy documents, while 24 member states in the EU have committed to developing national AI policies that reflect a “European” approach to AI.[13] Other countries, such as Mexico and Malaysia, are in the process of evolving their national AI strategies. This suggests that AI is quickly emerging as central to national plans around the development of science and technology, as well as economic and national security and development. There is also a focus on investments enabling AI innovation in critical national domains as a means of addressing key challenges facing nations. India has followed this trend, and in 2018 the government published two AI roadmaps - the Report of the Task Force on Artificial Intelligence, constituted by the Ministry of Commerce and Industry,[14] and the National Strategy for Artificial Intelligence by NITI Aayog.[15] Some of the key themes running across national AI strategies globally are spelt out below.

Economic Impact of AI

A common thread running across the different national approaches to AI is the belief in its significant economic impact: that it will likely increase productivity and create wealth. The British government estimated that AI could add $814 billion to the UK economy by 2035. The UAE report states that by 2031, AI will help boost the country’s GDP by 35 per cent and reduce government costs by 50 per cent. Similarly, China estimates that its core AI market will be worth 150 billion RMB ($25bn) by 2020, rising to 400 billion RMB ($65bn) and then one trillion RMB ($160bn) by 2030. The impact of the adoption of AI and automation on labour and employment is also a key theme touched upon across the strategies. For instance, the White House report of October 2016 states that the US workforce is unprepared, and that a serious education programme, through online courses and in-house schemes, will be required.[16]

State Funding

Another key trend exhibited in all national AI strategies has been a commitment by the respective governments to supporting research and development in AI. The French government has stated that it intends to invest €1.5 billion ($1.85 billion) in AI research in the period through to 2022. The British government’s recommendations, in late 2017, were followed swiftly by a promise in the autumn budget of new funds, including at least £75 million for AI. Similarly, the Canadian government put together a $125-million ‘pan-Canadian AI strategy’ last year.

AI for Public Good

The use of AI for public good is a significant focus of most AI policies. The biggest justification for AI innovation as a legitimate objective of public policy is its promised impact on people's lives: helping to solve some of the world's greatest challenges and inefficiencies, and emerging as a transformative technology, much like mobile computing. These public-good uses of AI are emerging across sectors such as transportation, migration, law enforcement and the justice system, education, and agriculture.

National Institutions leading AI research

Another important trend key to the implementation of national AI strategies is the creation or development of well-funded centres of excellence that serve as drivers of research and development and leverage synergies with the private sector. The French Institute for Research in Computer Science and Automation (INRIA) plans to create a national AI research programme with five industrial partners. In the UK, the Alan Turing Institute is likely to emerge as the national institute for data science, and an AI Council would be set up to manage inter-sector initiatives and training. In Canada, the Canadian Institute for Advanced Research (CIFAR) has been tasked with implementing the AI strategy. Japan has a less centralised structure, creating a 'strategic council for AI technology' to promote research and development in the field and to coordinate a number of key agencies, including NEDO, the national ICT agency (NICT), and the science and technology agency (JST). These institutions are key to the successful implementation of national agendas and policies around AI.

AI, Ethics and Regulation

Across the AI strategies, ethical dimensions and the regulation of AI were highlighted as concerns that need to be addressed. Algorithmic transparency and explainability, clarity on liability, accountability and oversight, bias and discrimination, and privacy are among the ethical and regulatory questions that have been raised. Employment and the future of work is another area of focus identified by countries. For example, the US 2016 report examined the use of AI in automated vehicles to reflect on whether existing regulation is adequate to address risk or whether adaptation is needed. In its policy paper, the AI Sector Deal, the UK proposes four grand challenges: the AI and data economy, future mobility, clean growth, and the ageing society. The Pan-Canadian Artificial Intelligence Strategy focuses on developing global thought leadership on the economic, ethical, policy, and legal implications of advances in artificial intelligence.[17]

The above are important factors and trends to take into account, and to different extents they are reflected in the two national roadmaps for AI. Without adequate institutional planning, there is a risk of national strategies being too monolithic in nature. Implementing a national strategy and actualising the potential of AI is difficult without sufficient supporting mechanisms: national institutions to drive AI research and innovation; capacity building and re-skilling of the workforce to adapt to changing technological trends; regulatory capacity to address new and emerging issues that may disrupt traditional forms of regulation; and an environment of monetary support from both the public and private sectors. As stated above, there is also a need to identify key national policy problems that can be addressed by the use of AI, and to create a framework with institutional actors to articulate the appropriate plan of action. Several ongoing global initiatives are in the process of articulating key principles for ethical AI, and these discussions also feature in some of the national strategy documents.

Key considerations for AI policymaking in India

As mentioned above, India has published two national AI strategies, and we have responded to both, here[18] and here.[19] Beyond these two roadmaps, this policy brief reflects on a number of factors that need to come together for India to successfully adopt and leverage AI across sectors, communities, and technologies.

Resources, Infrastructure, Markets, and Funding

Ensure adequate government funding and investment in R&D

As mentioned above, a survey of the major national strategies on AI reveals a significant financial commitment from governments towards research and development in AI. Most strategy documents speak of the need to safeguard national ambitions in the race for AI development. In order to do so, it is imperative to have a national strategy for AI research and development, to identify nodal agencies to enable the process, and to create institutional capacity to carry out cutting-edge research.

Jurisdictions such as Japan, the UK, and China have discussed collaborations between industry and government to ensure greater investment in AI research and development. The European Union has spoken of using existing public-private partnerships, particularly in robotics and big data, to boost investment by over one and a half times.[20] To some extent, this step has been initiated by the Niti Aayog strategy paper, which lists enabling factors for the widespread adoption of AI and maps out specific government agencies and ministries that could promote such growth. In February 2018, the Ministry of Electronics and IT also set up four committees to prepare a roadmap for a national AI programme. The four committees are presently studying AI in the context of citizen-centric services; data platforms; skilling, reskilling, and R&D; and legal, regulatory, and cybersecurity perspectives.[21]

Democratize AI technologies and data

Clean, accurate, and appropriately curated data is essential for training algorithms. Importantly, large quantities of data alone do not translate into better results: accuracy and curation should take precedence over sheer quantity, and frameworks for generating and accessing larger quantities of data should not hinge on models of centralised data stores. The government and the private sector are generally the gatekeepers to vast amounts of data and technologies. Ryan Calo has called this an issue of data parity,[22] where only a few well-established leaders in the field have the ability to acquire data and build datasets. Gaining access to data comes with its own questions of ownership, privacy, security, accuracy, and completeness. There are a number of different approaches and techniques that can be adopted to enable access to data.

Open Government Data

Robust open data sets are one way in which access can be enabled. Open data is particularly important for small start-ups as they build prototypes. Even though India is a data-dense country and has the National Data Sharing and Accessibility Policy in place, it does not yet have robust and comprehensive open data sets across sectors and fields. Our research found that this is an obstacle to innovation in the Indian context, as startups often turn to open datasets from the US and Europe to develop prototypes. This is problematic because the demography represented in those datasets is significantly different, resulting in solutions trained on a specific demographic that then need to be re-trained on Indian data. Although AI techniques are largely domain-agnostic, for many data analysis use cases demographically mismatched training data is not ideal. This is particularly true for categories such as health, employment, and financial data.

The government can play a key role in providing access to datasets that will improve the functioning and performance of AI technologies. The Indian government has already moved towards accessible datasets through the Open Government Data Platform, which provides access to a range of data collected by various ministries. Telangana has developed its own Open Data Policy, which has stood out for its transparency and the quality of data collected, and which helps in building AI-based solutions.

In order to encourage and facilitate innovation, the central and state governments need to actively pursue and implement the National Data Sharing and Accessibility Policy.

Access to Private Sector Data

The private sector is the gatekeeper to large amounts of data. There is a need to explore different models of enabling access to private sector data while ensuring and protecting users' rights and company IP. This data is often considered a company asset and is not shared with other stakeholders. Yet it is essential to enabling innovation in AI.

Amanda Levendowski states that ML practitioners have essentially three options for securing sufficient data: build the databases themselves, buy the data, or use data in the public domain. The first two options are largely available only to big firms or institutions. Smaller firms often end up resorting to the third, but it carries greater risks of bias.

A solution could be federated access, with companies allowing researchers and developers access to encrypted data without sharing the actual data. Another solution that has been proposed is 'watermarking' data sets.

Data sandboxes have been promoted as tools for enabling innovation while protecting privacy and security. Data sandboxes allow companies access to large anonymised data sets under controlled circumstances. A regulatory sandbox, by contrast, is a controlled environment with relaxed regulations that allows a product to be tested thoroughly before it is launched to the public. By providing certification and safe spaces for testing, the government can encourage innovation in this sphere. Such a system has already been adopted in Japan, where AI-specific regulatory sandboxes are used to drive Society 5.0. Data sandboxes can be considered within specific sectors to enable innovation; a sector-wide data sandbox was also contemplated by TRAI.[23] A sector-specific governance structure can establish a system of ethical reviews of the underlying data used to feed an AI technology, along with the data collected, to ensure that this data is complete, accurate, and has integrity. A similar system has been developed by Statistics Norway and the Norwegian Centre for Research Data.[24]
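One property a sandbox operator might check before releasing an anonymised extract is k-anonymity: every record should be indistinguishable from at least k-1 others on its quasi-identifying fields. The sketch below is a minimal illustration, not a production anonymisation tool; the records and field names are hypothetical.

```python
from collections import Counter

def k_anonymity(records, quasi_identifiers):
    """Return the k-anonymity level of a dataset: the size of the smallest
    group of records sharing the same quasi-identifier values."""
    groups = Counter(
        tuple(r[q] for q in quasi_identifiers) for r in records
    )
    return min(groups.values())

# Hypothetical generalised health records prepared for a data sandbox
records = [
    {"age_band": "20-30", "pincode": "560*", "diagnosis": "A"},
    {"age_band": "20-30", "pincode": "560*", "diagnosis": "B"},
    {"age_band": "30-40", "pincode": "110*", "diagnosis": "A"},
    {"age_band": "30-40", "pincode": "110*", "diagnosis": "C"},
]
k = k_anonymity(records, ["age_band", "pincode"])
```

A sandbox policy could then require, for example, that `k` stay above an agreed sectoral threshold before any dataset leaves the controlled environment.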

AI Marketplaces

The National Strategy for Artificial Intelligence by NITI Aayog proposes the creation of a National AI Marketplace comprising a data marketplace, a data annotation marketplace, and a deployable model (or solutions) marketplace.[25] In particular, it is envisioned that the data marketplace would be based on blockchain technology and offer traceability, access controls, compliance with local and international regulations, and a robust price discovery mechanism for data. Other questions that will need to be answered centre around pricing and ensuring equal access. It will also be interesting to see how the government incentivises the provision of data by private sector companies, since most emerging data marketplaces are initiated by the private sector.[26] A government-initiated marketplace has the potential to bring parity to some of the questions raised above, but it should be strictly limited to private sector data so as not to replace open government data.

Open Source Technology

A number of companies now offer open source AI technologies; examples include TensorFlow, Keras, Scikit-learn, Microsoft Cognitive Toolkit, Theano, Caffe, Torch, and Accord.NET.[27] The government should incentivise and promote open source AI technologies to harness and accelerate research in AI.

Re-thinking Intellectual Property Regimes

Going forward, it will be important for the government to develop an intellectual property framework that encourages innovation. AI systems are trained by reading, viewing, and listening to copies of human-created works. These resources, such as books, articles, photographs, films, videos, and audio recordings, are all key subjects of copyright protection. Copyright law grants exclusive rights to copyright owners, including the right to reproduce their works in copies, and one who violates any of those exclusive rights "is an infringer of copyright".[28]

The enterprise of AI is, to this extent, designed to conflict with the tenets of copyright law; after the attempted 'democratisation' of copyrighted content brought about by the Internet, AI poses the latest challenge to copyright law. At the centre of this challenge is the open question of whether a copy made to train an AI is a "copy" under copyright law, and consequently whether such a copy is an infringement.[29] The fractured jurisprudence on copyright is likely to pose interesting legal questions as newer use cases of AI emerge. For instance, Google has developed a technique called federated learning, popularly referred to as on-device ML, in which training data remains localised to the originating mobile device rather than being copied to a centralised server.[30] The key copyright question here is whether decentralised training data stored in random access memory (RAM) would be considered "copies".[31] There are also suggestions that copies made for the purpose of training machine learning systems may be so trivial or de minimis that they do not qualify as infringement.[32] For any industry to flourish there needs to be legal and regulatory clarity, and it is imperative that these copyright questions emerging out of the use of AI be addressed soon.
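The federated learning technique mentioned above can be illustrated by the core of the federated averaging idea: devices train on their own data and share only model parameters, which a coordinating server combines as a weighted average. This is a simplified single-round sketch; the parameter vectors and client sizes are hypothetical, and a real system would iterate over many rounds with on-device training in between.

```python
def federated_average(client_weights, client_sizes):
    """Federated averaging: combine local model parameters into a global
    model, weighting each client by its local dataset size. The raw
    training data never leaves the client devices."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(n_params)
    ]

# Three hypothetical devices, each holding a two-parameter local model
local_models = [[0.2, 1.0], [0.4, 2.0], [0.6, 3.0]]
local_sizes = [100, 100, 200]
global_model = federated_average(local_models, local_sizes)
```

The copyright question flagged above arises because even in this scheme, transient copies of the training data exist in each device's memory during local training.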

As noted in our response to the Niti Aayog national AI strategy: "The report also blames the current Indian Intellectual Property regime for being 'unattractive' and averse to incentivising research and adoption of AI. Section 3(k) of the Patents Act exempts algorithms from being patented, and the Computer Related Inventions (CRI) Guidelines have faced much controversy over the patentability of mere software without a novel hardware component. The paper provides no concrete answers to the question of whether it should be permissible to patent algorithms, and if so, to what extent. Furthermore, there needs to be a standard, either in the CRI Guidelines or the Patents Act, that distinguishes between AI algorithms and non-AI algorithms. Additionally, given that there is no historical precedent establishing that patent rights are required to incentivise the creation of AI, innovative investment protection mechanisms with fewer negative externalities, such as compensatory liability regimes, would be more desirable. The report further failed to look at the issue holistically and to recognise that facilitating rampant patenting can be a barrier to smaller companies using or developing AI. This is important to be cognisant of given the central role of startups in the AI ecosystem in India, and because it can work against the larger goal of inclusion articulated by the report."[33]

National infrastructure to support domestic development

Building robust national artificial intelligence solutions requires establishing adequate indigenous infrastructural capacity for data storage and processing. While this should not necessarily extend to mandating data localisation, as the draft privacy bill has done, capacity should be developed to store the data sets generated by indigenous nodal points.

AI Data Storage

Storage capacity needs to increase as the volume of data to be processed in India grows. This includes ensuring effective storage capacity, adequate IOPS (input/output operations per second), and the ability to process massive amounts of data.

AI Networking Infrastructure

Organizations will need to upgrade their networks to optimize efficiencies of scale. Scalability must be treated as a high priority; it will require high-bandwidth, low-latency, and creatively architected networks, along with appropriate curation of data at the last mile.

Conceptualization and Implementation

Awareness, Education, and Reskilling

Encouraging AI research

This can be achieved by collaborations between the government and large companies to promote accessibility and encourage innovation through greater R&D spending. The Government of Karnataka, for instance, is collaborating with NASSCOM to set up a Centre of Excellence for Data Science and Artificial Intelligence (CoE-DS&AI) on a public-private partnership model to "accelerate the ecosystem in Karnataka by providing the impetus for the development of data science and artificial intelligence across the country." Similar centres could be incubated in hospitals and medical colleges in India. Principles of publicly funded research, such as FOSS, open standards, and open data, should be core to government initiatives to encourage research. The Niti Aayog report proposes a two-tier integrated approach towards accelerating research, but is currently silent on these principles.[34]

Therefore, as suggested by the NITI Aayog report, the government needs to set up 'centres of excellence'. Building upon the stakeholders identified in the report, these centres should involve a wide range of experts, including lawyers, political philosophers, software developers, sociologists, and gender studies scholars, drawn from diverse organisations including government, civil society, the private sector, and research institutions, to ensure the fair and efficient roll-out of the technology.[35] Examples include the Leverhulme Centre for the Future of Intelligence set up by the Leverhulme Foundation at the University of Cambridge[36] and the AI Now Institute at New York University (NYU).[37] These research centres bring together a wide range of experts from all over the globe.[38]

Skill sets to successfully adopt AI

Educational institutions should provide opportunities for students to skill themselves for the adoption of AI, and should also push for academic programmes around AI. It is also important to introduce computing technologies such as AI in medical schools, in order to equip doctors with the technical skill sets and ethics required to integrate AI into their practices. Similarly, IT institutes could include courses on ethics, privacy, accountability, and related topics to equip engineers and developers with an understanding of the questions surrounding the technology and services they are developing.

Societal Awareness Building

Much of the discussion around skilling for AI is in the context of the workplace, but awareness needs to be developed across society for a broader adaptation to AI. The Niti Aayog report takes the first steps towards this, noting the importance of highlighting the benefits of AI to the public. The conversation needs to go beyond this, towards enabling individuals to recognise and adapt to the changes that might be brought about by AI, directly and indirectly, both inside and outside the workplace. This could include catalysing a shift in mindset towards lifelong learning and discussion of the potential implications of human-machine interaction.

Early Childhood Awareness and Education

It is important that awareness around AI begins in early childhood. This is in part because children already interact with AI, and will increasingly do so; awareness is therefore needed of how AI works and how it can be used safely and ethically. It is also important to start building the skills that will be necessary in an AI-driven society from a young age.

Focus on marginalised groups

Awareness, skills, and education should be targeted at marginalised groups, including rural communities, persons with disabilities, and women. Further, there should be a concerted focus on communities that are under-represented in the tech sector, such as women and sexual minorities, to ensure that both the algorithms themselves and the community working on AI-driven solutions are holistic and cohesive. For example, Iridescent focuses on girls, children, and families, helping them adapt to changes like artificial intelligence by promoting the curiosity, creativity, and perseverance needed to become lifelong learners.[39] This will be important in ensuring that AI does not deepen societal and global inequalities, including digital divides. Widespread use of AI will undoubtedly require re-skilling various stakeholders in order to make them aware of the prospects of AI.[40] Artificial intelligence can itself be used as a resource in the re-skilling process, as it is in the education sector, to gauge people's comfort with the technology and plug necessary gaps.

Improved access to and awareness of Internet of Things

The development of smart content or Intelligent Tutoring Systems in education can only be done on a large scale if both teachers and students have access to, and feel comfortable using, basic IoT devices. A UK government report has suggested that a skilled workforce using AI should be a mix of those with a basic understanding who are responsible for implementation at the grassroots level, more informed users, and specialists with advanced development and implementation skills.[41] The same logic applies to the agriculture sector, where the government is looking to develop smart weather-pattern tracking applications. A potential short-term solution may lie in ensuring that key actors have access to an IoT device, so that they can access digital services and then impart the benefits of access to proximate individuals. In the education sector, this would involve ensuring that all teachers have access to, and are competent in using, an IoT device. In the agricultural sector, this may involve equipping each village with a set of IoT devices so that information can be shared among concerned individuals. Such an approach recognises that AI is not the only technology catalysing change; Industry 4.0, for example, is understood as comprising a suite of technologies including, but not limited to, AI.

Public Discourse

AI solutions bring together and process vast amounts of granular data from a variety of public and private sources, whether obtained from third parties or generated by the AI through its interaction with its environment. This means that very granular and non-traditional data points now feed into decision-making processes. Public discussion is needed to understand social and cultural norms and standards, and how these might translate into acceptable-use norms for data in various sectors.

Coordination and collaboration across stakeholders

Development of Contextually Nuanced and Appropriate AI Solutions

Towards ensuring effectiveness and accuracy, it is important that solutions used in India are developed to account for cultural nuance and diversity. Our research suggests this can be done in a number of ways: training AI solutions used in health on data from Indian patients to account for differences in demographics,[42] focusing on natural-language voice recognition to account for the diversity of languages and digital skills in the Indian context,[43] and developing and applying AI in ways that reflect societal norms and understandings.[44]

Continuing, deepening, and expanding partnerships for innovation

Continued innovation, while holistically accounting for the challenges that AI poses, will be key for actors in different sectors to remain competitive. As noted across our case study reports, partnerships are key to facilitating this innovation and filling capacity gaps. These partnerships can cut across sectors, institutions, domains, geographies, and stakeholder groups: for example, finance/telecom, public/private, national/international, ethics/software development/law, and academia/civil society/industry/government. We would emphasise collaboration between actors across different domains and stakeholder groups, as developing holistic AI solutions demands multiple understandings and perspectives.

Coordinated Implementation

Key sectors in India need to begin taking steps towards sector-wide coordination in implementing AI. Potential stresses and system-wide vulnerabilities would need to be considered when undertaking this. Sectoral regulators such as the RBI, TRAI, and the Medical Council of India are ideally placed to lead this coordination.

Develop contextual standard benchmarks to assess quality of algorithms

In part because of the nascency of AI development and implementation, standard benchmarks can help in assessing the quality and appropriateness of algorithms, enabling effective assessments of their impact and informing selection by institutions adopting solutions. It may be most effective to define such benchmarks at a sectoral level (finance, etc.) or by technology and solution (facial recognition, etc.). Ideally, these efforts would be led by the government in collaboration with multiple stakeholders.
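At its simplest, such a benchmark scores competing models against a shared, labelled test set on agreed metrics. The sketch below is only an illustration of the mechanism; the vendor names, predictions, and single accuracy metric are hypothetical, and a real sectoral benchmark would add domain-specific measures such as calibration or fairness.

```python
def accuracy(preds, labels):
    """Fraction of predictions that match the ground-truth labels."""
    return sum(p == l for p, l in zip(preds, labels)) / len(labels)

# A shared held-out test set with ground-truth labels
labels = [1, 0, 1, 1, 0]

# Predictions from two hypothetical vendor models on the same test set
candidates = {
    "vendor_a": [1, 0, 1, 0, 0],
    "vendor_b": [1, 1, 1, 1, 1],
}

# Score every candidate on the common benchmark and pick the best
scores = {name: accuracy(p, labels) for name, p in candidates.items()}
best = max(scores, key=scores.get)
```

Publishing the test set protocol and metric definitions, rather than the raw data itself, lets adopting institutions compare vendors on an equal footing.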

Developing a framework for working with the private sector for use-cases by the government

There are various potential use cases in which the government could use AI as a tool for augmenting public service delivery in India. However, a lack of capacity, both in human resources and in technology, means that entering into partnerships with the private sector may enable a more fruitful harnessing of AI, as has been seen with existing MoUs in the agricultural[45] and healthcare[46] sectors. Such partnerships must, however, be used as a means to build capacity within the various nodes of the system, rather than relying only on the private sector partner to continue delivering sustainable solutions.

In particular, where AI is used for governance, there is a need to evolve clear parameters for impact assessment prior to the deployment of the technology, mapping the estimated impact of the technology against clearly defined objectives, including due process, procedural fairness, and human rights considerations. Under Article 12 of the Indian Constitution, whenever the government exercises a public function it is bound by the entire gamut of fundamental rights articulated in Part III of the Constitution. This is a crucial consideration the government will have to bear in mind whenever it uses AI, regardless of the sector. In all cases of public service delivery, primary accountability for the use of AI should lie with the government itself, which means that a cohesive and uniform framework regulating these partnerships must be conceptualised. This framework should incorporate: (a) uniformity in the wording and content of the contracts that the government signs; (b) the imposition of obligations of transparency and accountability on the developer, to ensure that the solutions developed conform to constitutional standards; and (c) continuous evaluation of private sector developers by the government and experts to ensure that they are complying with their obligations.

Defining Safety Critical AI

The implications of AI differ according to use. Some jurisdictions, such as the EU, are beginning to define sectors where AI should play the role of augmenting jobs as opposed to functioning autonomously. The Global Partnership on AI has termed sectors where AI tools supplement or replace human decision-making, such as health and transportation, 'safety critical AI', and is researching best practices for the application of AI in these areas. India will need to think through whether a threshold needs to be set and more stringent regulation applied. In addition to uses in health and transportation, defence and law enforcement are areas where certain uses would require more stringent regulation.

Appropriate certification mechanisms

Appropriate certification mechanisms will be important in ensuring the quality of AI solutions. A significant barrier to the adoption of AI in some sectors in India is the acceptability of results, which includes both direct results arrived at using AI technologies and opinions provided by practitioners that are influenced or aided by AI technologies. For instance, startups in the healthcare sector often find that they are asked to show proof of a clinical trial when presenting their products to doctors and hospitals, yet clinical trials are expensive and time-consuming, and are inappropriate forms of certification for medical devices and digital health platforms. Startups also face difficulty in conducting clinical trials, as there is no clear regulation to adhere to. They believe that while clinical trials are a necessity for drugs, in the context of AI the process often results in the technology becoming obsolete by the time it is approved. Yet medical practitioners are less trusting of startups that do not have approval from a national or international authority. A possible and partial solution suggested by these startups is to enable doctors to partner with them to conduct clinical trials together. However, such partnerships cannot come at the expense of rigour, and adequate protections need to be built into the enabling regulation.

Serving as a voice for emerging economies in the global debate on AI

While India should utilise artificial intelligence in the economy as a means of occupying a driving role in the global debate around AI, it must be cautious before allowing Indian territory and infrastructure to be used as a test bed for other emerging economies without considering the ramifications that the utilisation of AI may have for Indian citizens. The NITI Aayog report envisions India leveraging AI as a 'garage' for emerging economies.[47] There are positive connotations to this suggestion insofar as it propels India into a leadership position, both technically and normatively, in determining future use cases for AI. However, to ensure that Indian citizens are not used as test subjects in this process, guiding principles could be developed, such as requiring that projects have clear benefits for India.

Frameworks for Regulation

National legislation

Data Protection Law

India is a data-dense country, and the lack of a robust privacy regime allows the public and private sectors easier access to large amounts of data than might be possible in jurisdictions with stringent privacy laws. India also lacks a formal regulatory regime around anonymisation. In our research we found that this gap does not always translate into a gap in practice, as some startups have adopted self-regulatory practices towards protecting privacy, such as anonymising the data they receive before using it further; but it does result in unclear and unharmonised practice.

In order to secure rights and address the emerging challenges posed to them by artificial intelligence, India needs to enact comprehensive privacy legislation applicable to the private and public sectors to regulate the use of data, including its use in artificial intelligence. Such legislation will also have to address more complicated questions, such as the use of publicly available data for training algorithms, how traditional data categories (PI vs. SPDI, metadata vs. content data, etc.) need to be revisited in light of AI, and how privacy legislation can be applied to autonomous decision-making. Similarly, surveillance laws may need to be revisited in light of AI-driven technologies such as facial recognition, UAS, and self-driving cars, as they provide new means of surveillance to the state and have potential implications for other rights, such as the rights to freedom of expression and assembly. Sectoral protections can complement and build upon the baseline protections articulated in a national privacy legislation.[48] In August 2018 the Srikrishna Committee released a draft data protection bill for India, and we have reflected on how the Bill addresses AI. Though the Bill brings companies deploying emerging technologies under its scope and subjects them to the principles of privacy by design and data impact assessments, it is silent on key rights and responsibilities, namely the responsibility of the data controller to explain the logic and impact of automated decision-making, including profiling, to data subjects, and the right to opt out of automated decision-making in defined circumstances.[49] Further, technological solutions that address the tension between AI's need for access to large quantities of data and the demands of privacy should be emphasised.

Discrimination Law

A growing area of research globally is the social consequences of AI with a particular focus on its tendency to replicate or amplify existing and structural inequalities. Problems such as data invisibility of certain excluded groups,[50] the myth of data objectivity and neutrality,[51] and data monopolization[52] contribute to the disparate impacts of big data and AI. So far much of the research on this subject has not moved beyond the exploratory phase as is reflected in the reports released by the White House[53] and Federal Trade Commission[54] in the United States. The biggest challenge in addressing discriminatory and disparate impacts of AI is ascertaining “where value-added personalization and segmentation ends and where harmful discrimination begins.”[55]

Some prominent cases where AI can have a discriminatory impact are the denial of loans based on attributes such as neighbourhood of residence, which can serve as proxies to circumvent anti-discrimination laws preventing adverse determinations on the grounds of race, religion, caste or gender; or adverse findings by predictive policing against persons who are unfavourably represented in the structurally biased datasets used by law enforcement agencies. There is a dire need for disparate impact regulation in sectors which are seeing the emerging use of AI.
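One widely used statistical screen for disparate impact, drawn from US employment-discrimination practice rather than any Indian statute, is the "four-fifths rule": the selection rate for a protected group should be at least 80% of the highest group's rate. The sketch below, with invented loan-decision data and group labels, shows how a regulator or auditor might compute such ratios.

```python
# Hypothetical illustration of a four-fifths-rule check over loan
# decisions. The data, group labels and 0.8 threshold are assumptions
# for the sketch, not a statement of Indian law.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, approved: bool) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_ratios(decisions):
    """Each group's selection rate relative to the best-treated group."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

# Group A: 8 of 10 approved (rate 0.8); group B: 4 of 10 (rate 0.4).
decisions = [("A", True)] * 8 + [("A", False)] * 2 + \
            [("B", True)] * 4 + [("B", False)] * 6
ratios = disparate_impact_ratios(decisions)
# Group B's ratio is 0.4 / 0.8 = 0.5, well below the 0.8 line.
```

A ratio below 0.8 does not itself prove discrimination; it flags a pattern that the proxy-variable concerns above make worth investigating.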

Similar to disparate impact regulation, developments in AI and its utilisation, especially in credit rating or risk assessment processes, could create complex problems that cannot be solved by principle-based regulation alone. Instead, regulation intended specifically to avoid outcomes that regulators consider clearly against the consumer could be an additional tool that increases the fairness and effectiveness of the system.

Competition Law

The conversation on the use of competition or antitrust law to govern AI is still at an early stage. However, the emergence of numerous data-driven mergers and acquisitions such as Yahoo-Verizon, Microsoft-LinkedIn and Facebook-WhatsApp has made it difficult to ignore the potential role of competition law in the governance of data collection and processing practices. It is important to note that the impact of Big Data goes far beyond digital markets: the mergers involving Bayer, Climate Corp and Monsanto show that data-driven business models can lead to the convergence of companies from completely different sectors as well. So far, courts in Europe have looked at questions such as the impact of a combination of databases on competition[56] and have held that, in the context of merger control, data can be a relevant question if an undertaking achieves a dominant position through a merger, making it capable of gaining further market power through increased amounts of customer data. The evaluation of the market advantages of specific datasets has already been done in the past; factors deemed relevant have included whether the dataset could be replicated under reasonable conditions by competitors and whether the use of the dataset was likely to result in a significant competitive advantage.[57] However, there are limited circumstances in which big data meets the four traditional criteria for being a barrier to entry or a source of sustainable competitive advantage: inimitability, rarity, value, and non-substitutability.[58]

Any use of competition law to curb data-exclusionary or data-exploitative practices will first have to meet the threshold of establishing a firm's capacity to derive market power from its ability to sustain datasets unavailable to its competitors. In this context, network effects, multi-homing practices and the dynamism of digital markets are all relevant factors which could have both positive and negative impacts on competition. There is a need for greater discussion on data as a source of market power in both digital and non-digital markets, and on how this legal position can be used to curb data monopolies, especially in light of government-backed monopolies for identity verification and payments in India.

Consumer Protection Law

The Consumer Protection Bill, 2015, tabled in Parliament towards the end of the monsoon session, introduced an expansive definition of the term “unfair trade practices.” The definition as per the Bill includes the disclosure “to any other person any personal information given in confidence by the consumer.” This clause excludes from the scope of unfair trade practices disclosures made under provisions of any law in force or in the public interest. This provision could have a significant impact on personal data protection law in India. Alongside, there is also a need to ensure that principles such as safeguarding consumers' personal information so that it is not used to their detriment are included within the definition of unfair trade practices. This would provide consumers an efficient and relatively speedy forum to contest adverse impacts of data-driven decision-making.

Sectoral Regulation

Our research into sectoral case studies revealed that there are a number of existing sectoral laws and policies applicable to aspects of AI. For example, in the health sector there are the Medical Council Professional Conduct, Etiquette, and Ethics Regulations 2002, the Electronic Health Records Standards 2016, the draft Medical Devices Rules 2017, and the draft Digital Information Security in Healthcare Act. In the finance sector there are the Credit Information Companies (Regulation) Act 2005 and the rules framed under it in 2006, the Securities and Exchange Board of India (Investment Advisers) Regulations, 2013, the Payment and Settlement Systems Act, 2007, the Banking Regulation Act 1949, the SEBI guidelines on robo-advisors, etc. Before new regulations or guidelines are developed, a comprehensive exercise needs to be undertaken at a sectoral level to understand (1) whether sectoral policy adequately addresses the changes being brought about by AI, and (2) if it does not, whether an amendment is possible and, if not, what form of policy would fill the gap.

Principled approach

Transparency

Audits

Internal and external audits can be mechanisms for creating transparency about the processes and results of AI solutions as they are implemented in a specific context. Audits can take place while a solution is still in ‘pilot' mode and on a regular basis during implementation. For example, in the Payment Card Industry (PCI) tool, transparency is achieved through frequent audits, the results of which are simultaneously and instantly transmitted to the regulator and the developer. Ideally, parts of the audit results are also made available to the public, even if the entire results are not shared.

Tiered Levels of Transparency

There are different levels and forms of transparency, as well as different ways of achieving them. The type and form of transparency can be tiered, dependent on factors such as the criticality of the function, the potential direct and indirect harm, the sensitivity of the data involved, and the actor using the solution. The audience can also be tiered, ranging from an individual user to senior-level positions to oversight bodies.

Human Facing Transparency

It will be important for India to define standards around human-machine interaction, including the level of transparency that will be required. Will chatbots need to disclose that they are chatbots? Will a notice need to be posted when facial recognition technology is used in a CCTV camera? Will a company need to disclose in its terms of service and privacy policies that data is processed via an AI-driven solution? Will there be a distinction between the AI taking a decision autonomously and the AI playing an augmenting role? Presently, the Niti Aayog paper is silent on these questions.

Explainability

An explanation is not equivalent to complete transparency. The obligation to provide an explanation does not mean that the developer must necessarily know the flow of bits through the AI system. Instead, the legal requirement of providing an explanation requires an ability to explain how certain parameters may be utilised to arrive at an outcome in a certain situation.

Doshi-Velez and Kortz have highlighted two technical ideas that may enhance a developer's ability to explain the functioning of AI systems:[59]

1) Differentiation and processing: AI systems are designed to have their inputs differentiated and processed through various forms of computation in a reproducible and robust manner. Therefore, developers should be able to explain a particular decision by examining the inputs to determine which of them have the greatest impact on the outcome.

2) Counterfactual faithfulness: The second property, counterfactual faithfulness, enables the developer to consider which factors caused a difference in the outcomes. Both these solutions can be deployed without necessarily knowing the contents of the black box. As Pasquale puts it, “Explainability matters because the process of reason-giving is intrinsic to juridical determinations – not simply one modular characteristic jettisoned as anachronistic once automated prediction is sufficiently advanced.”[60]
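The two ideas above can be sketched on a deliberately tiny toy model. The linear scorer, its weights, feature names and threshold below are all invented for illustration; real systems are far more complex, but the probing logic, which attributes the score to inputs and tests whether changing one input flips the outcome, is the same and requires no access to the model's internals beyond the ability to query it.

```python
# Toy loan scorer used only to demonstrate the two explanation ideas.
# Weights, features and threshold are hypothetical assumptions.
WEIGHTS = {"income": 0.5, "years_employed": 0.3, "neighbourhood_code": -0.4}

def score(applicant: dict) -> float:
    return sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)

def approve(applicant: dict, threshold: float = 1.0) -> bool:
    return score(applicant) >= threshold

def attribution(applicant: dict) -> dict:
    """Differentiation and processing: which input contributes most?"""
    return {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}

def counterfactual(applicant: dict, field: str, new_value) -> bool:
    """Counterfactual faithfulness: does changing one input flip the outcome?"""
    changed = dict(applicant, **{field: new_value})
    return approve(changed) != approve(applicant)

applicant = {"income": 2.0, "years_employed": 1.0, "neighbourhood_code": 1.0}
# score = 1.0 + 0.3 - 0.4 = 0.9, below the threshold, so the loan is denied;
# setting neighbourhood_code to 0 flips the decision, flagging that input
# as outcome-determinative without opening the model's "black box".
```

This kind of query-based probing is what makes an explanation obligation feasible even where full transparency of the system is not.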

Rules-based system applied contextually

Oswald et al. have suggested two proposals that might mitigate algorithmic opacity by designing a broad rules-based system, whose implementation needs to be applied in a context-specific manner that thoroughly evaluates the key enablers and challenges in each specific use case.[61]

  • Experimental proportionality was designed to enable the courts to make proportionality determinations of an algorithm at the experimental stage even before the impacts are fully realised in a manner that would enable them to ensure that appropriate metrics for performance evaluation and cohesive principles of design have been adopted. In such cases they recommend that the courts give the benefit of the doubt to the public sector body subject to another hearing within a stipulated period of time once data on the impacts of the algorithm become more readily available.
  • ‘ALGO-CARE' calls for the design of a rules-based system which ensures that the algorithms[62] are:

(1) Advisory: Algorithms must retain an advisory capacity that augments existing human capability rather than replacing human discretion outright;

(2) Lawful: The algorithm's proposed function, application, individual effect and use of datasets should be considered in symbiosis with the necessity, proportionality and data minimisation principles;

(3) Granularity: Data analysis issues such as the meaning of data, challenges stemming from disparate tracts of data, omitted data and inferences should be key points in the implementation process;

(4) Ownership: Due regard should be given to intellectual property ownership, but in the case of algorithms used for governance, it may be better to have open-source algorithms as the default. Regardless of the sector, the developer must ensure that the algorithm works in a manner that enables a third party to investigate its workings in an adversarial judicial context.

(5) Challengeable: The results of algorithmic analysis should be applied with regard to professional codes and regulations, and be challengeable. In a report evaluating the NITI Aayog Discussion Paper, CIS has argued that AI that is used for governance must be made auditable in the public domain, if not released as Free and Open Source Software (FOSS), particularly in the case of AI that has implications for fundamental rights.[63]

(6) Accuracy: The design of the algorithm should check for accuracy;

(7) Responsible: The algorithm should consider a wider set of ethical and moral principles and the foundations of human rights as a guarantor of human dignity at all levels; and

(8) Explainable: Machine Learning should be interpretable and accountable.

A rules-based system like ALGO-CARE can enable predictability in use frameworks for AI. Predictability complements and strengthens transparency.

Accountability

Conduct Impact Assessment

There is a need to evolve Algorithmic Impact Assessment frameworks for the different sectors in India, which should address issues of bias, unfairness and other harmful impacts of the use of automated decision-making. AI is a nascent field and the impact of the technology on the economy, society, etc. is yet to be fully understood. Impact assessment standards will be important in identifying and addressing potential or existing harms, and could be particularly important in sectors or uses where there is direct human interaction with AI or a power dimension, such as in healthcare or use by the government. A 2018 report by the AI Now Institute lists methods that should be adopted by the government for conducting this holistic assessment[64]: (1) self-assessment by the government department in charge of implementing the technology; (2) development of meaningful interdisciplinary external researcher review mechanisms; (3) notice to the public regarding the self-assessment and external review; (4) soliciting of public comments for clarification or concerns; and (5) special regard for vulnerable communities who may not be able to exercise their voice in public proceedings. An adequate review mechanism which holistically evaluates the impact of AI would ideally include all five of these components in conjunction with each other.

Regulation of Algorithms

Experts have voiced concerns about AI mimicking human prejudices due to the biases present in machine learning algorithms. Researchers have shown that machine learning algorithms can imbibe gender and racial prejudices which are ingrained in language patterns or data collection processes. Since AI and machine learning algorithms are data driven, they arrive at results and solutions based on available and historical data. When this data itself is biased, the solutions presented by the AI will also be biased. While this is inherently discriminatory, scientists have proposed ways to rectify these biases, which can occur at various stages, by introducing a counter-bias at another stage. It has also been suggested that data samples should be shaped in such a manner as to minimise the chances of algorithmic bias. Ideally, the regulation of algorithms would be tailored around explainability, traceability and scrutability. We recommend that the national strategy on AI policy take these factors into account, with a combination of a central agency driving the agenda and sectoral actors framing regulations around specific uses of AI that are problematic and require implementation oversight.
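One published technique of the "shape the data samples" kind is reweighing (Kamiran and Calders, 2012): each training example is assigned a weight so that the protected attribute and the outcome label appear statistically independent, counteracting the bias in the historical data before a model is trained on it. The sketch below uses invented group labels and data.

```python
# Minimal sketch of reweighing, using toy data. Weight for each
# (group, label) pair = expected probability assuming independence,
# divided by the observed probability in the biased dataset.
from collections import Counter

def reweigh(samples):
    """samples: list of (group, label) pairs -> weight per (group, label)."""
    n = len(samples)
    group_n = Counter(g for g, _ in samples)   # marginal counts per group
    label_n = Counter(y for _, y in samples)   # marginal counts per label
    pair_n = Counter(samples)                  # joint counts
    return {
        (g, y): (group_n[g] * label_n[y]) / (n * pair_n[(g, y)])
        for (g, y) in pair_n
    }

# Biased history: group A mostly gets favourable labels, group B mostly not.
samples = [("A", 1)] * 6 + [("A", 0)] * 2 + [("B", 1)] * 2 + [("B", 0)] * 6
weights = reweigh(samples)
# Under-represented pairs like ("B", 1) get weight 2.0 (> 1); over-represented
# pairs like ("A", 1) get weight 2/3 (< 1), so a learner trained with these
# weights sees group and label as independent.
```

This is exactly a counter-bias introduced at the data-preparation stage to offset a bias inherited from history, and it is the kind of intervention that sector-specific regulation could require or audit.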

As the government begins to adopt AI into governance, the extent to which, and the circumstances in which, autonomous decision-making capabilities can be delegated to AI need to be questioned. Whether AI should be autonomous, should always have a human in the loop, or should have a ‘kill-switch' when used in such contexts also needs to be answered. A framework or high-level principles can help guide these determinations. For example:

  • Modelling Human Behaviour: An AI solution trying to model human behaviour, as in the case of judicial decision-making or predictive policing, may need to be more regulated, adhere to stricter standards, and need more oversight than an algorithm trying to predict ‘natural' phenomena such as traffic congestion or weather patterns.
  • Human Impact: An AI solution which could cause greater harm if applied erroneously, such as a robot soldier that mistakenly targets a civilian, requires a different level and framework of regulation than an AI solution designed to create a learning path for a student in the education sector which errs in making an appropriate assessment.
  • Primary User: AI solutions whose primary users are state agents attempting to discharge duties in the public interest, such as police officers, should be approached with more caution than those used by individuals, such as farmers receiving weather alerts.

Fairness

It is possible to incorporate broad definitions of fairness into a wide range of data analysis and classification systems.[65] While there can be no bright-line rules that will necessarily enable the operator or designer of a machine learning system to arrive at an ex ante determination of fairness, from a public policy perspective there must be a set of rules or best practices that explain how notions of fairness should be utilised in real-world applications of AI-driven solutions.[66] While broad parameters should be encoded by the developer to ensure compliance with constitutional standards, it is also crucial that the functioning of the algorithm allows for an ex post determination of fairness by an independent oversight body if the impact of the AI-driven solution is challenged.

Further, while there is no precedent on this anywhere in the world, India could consider establishing a Committee entrusted with the specific task of continuously evaluating the operation of AI-driven algorithms. Questions that the government would need to answer with regard to this body include:

  • What should the composition of the body be?
  • What should be the procedural mechanisms that govern the operation of the body?
  • When should the review committee step in? This is crucial because excessive review may re-entrench the bureaucracy that the AI driven solution was looking to eliminate.
  • What information will be necessary for the review committee to carry out its determination? Will there be conflicts with IP, and if so how will these be resolved?
  • To what degree will the findings of the committee be made public?
  • What powers will the committee have? Beyond making determinations, how will these be enforced?

Market incentives

Standards as a means to address data issues

With the digitisation of legacy records and the ability to capture more granular data digitally, one of the biggest challenges facing Big Data is a lack of standardised data and interoperability frameworks. This is particularly true in the healthcare and medicine sector, where medical records do not follow a clear standard, which poses a challenge to their datafication and analysis. Developed standards for data management and exchange, interoperable distributed application platforms and services, semantic standards for markup, structure, query and semantics, and information access and exchange have been spoken of as essential to addressing the lack of standards in Big Data.[67]

Towards enabling the usability of data, it is important that clear data standards are established. This has been recognised by Niti Aayog in its National Strategy for AI. On one hand, there can be operational issues with allowing each organisation to choose the specific standards it operates under; on the other hand, non-uniform digitisation of data will also cause several practical problems, primarily to do with the interoperability of individual services as well as their usability. For instance, in the healthcare sector, though India has adopted an EHR policy, implementation of this policy is not yet harmonised, leading to different interpretations of ‘digitising records' (i.e., taking snapshots of doctors' notes), of retention methods and periods, and of comprehensive implementation across all hospital data. Similarly, while independent banks and other financial organisations are already following, or in the process of developing, internal practices, there exist no uniform standards for the digitisation of financial data. As the development and application of AI becomes more mainstream in the financial sector, the lack of a fixed standard could create significant problems.

Better Design Principles in Data Collection

An enduring criticism of the existing notice-and-consent framework has been that long, verbose and unintelligible privacy notices are not efficient in informing individuals and helping them make rational choices. While this problem predates Big Data, it has only become more pronounced in recent times, given the ubiquity of data collection and the implicit ways in which data is being collected and harvested. Further, constrained interfaces on mobile devices, wearables, and smart home devices connected in an Internet of Things amplify the usability issues of privacy notices. Some of the issues with privacy notices include notice complexity, lack of real choices, and notices decoupled from the system collecting data. An industry standard for a design approach to privacy notices, covering factors such as the timing of the notice, the channels used for communicating it, its modality (written, audio, machine-readable, visual) and whether it only provides information or also includes choices, would be of great help. Further, privacy-by-design principles can be applied not just at the level of privacy notices but at each step of the information flow, and the architecture of the system can be geared towards more privacy-enhancing choices.


[1] https://cis-india.org/internet-governance/blog/artificial-intelligence-in-india-a-compendium

[2] https://obamawhitehouse.archives.gov/sites/default/files/whitehouse_files/microsites/ostp/NSTC/preparing_for_the_future_of_ai.pdf

[3] https://www.nitrd.gov/PUBS/national_ai_rd_strategic_plan.pdf

[4] https://www.gov.uk/government/publications/artificial-intelligence-sector-deal/ai-sector-deal

[5] http://www.nedo.go.jp/content/100865202.pdf

[6] https://www.eu-robotics.net/sparc/10-success-stories/european-robotics-creating-new-markets.html?changelang=2

[7] https://www.cifar.ca/ai/pan-canadian-artificial-intelligence-strategy

[8] https://www.newamerica.org/cybersecurity-initiative/blog/chinas-plan-lead-ai-purpose-prospects-and-problems/

[9] http://www.uaeai.ae/en/

[10] https://www.aisingapore.org/

[11] https://news.joins.com/article/22625271

[12] https://www.aiforhumanity.fr/pdfs/MissionVillani_Report_ENG-VF.pdf

[13] https://ec.europa.eu/digital-single-market/en/news/communication-artificial-intelligence-europe https://www.euractiv.com/section/digital/news/twenty-four-eu-countries-sign-artificial-intelligence-pact-in-bid-to-compete-with-us-china/

[14] https://www.aitf.org.in/

[15] http://www.niti.gov.in/writereaddata/files/document_publication/NationalStrategy-for-AI-Discussion-Paper.pdf

[16] https://obamawhitehouse.archives.gov/sites/default/files/whitehouse_files/microsites/ostp/NSTC/preparing_for_the_future_of_ai.pdf

[17] https://www.cifar.ca/ai/pan-canadian-artificial-intelligence-strategy

[18] https://cis-india.org/internet-governance/blog/the-ai-task-force-report-the-first-steps-towards-indias-ai-framework

[19] https://cis-india.org/internet-governance/blog/niti-aayog-discussion-paper-an-aspirational-step-towards-india2019s-ai-policy

[20] https://ec.europa.eu/digital-single-market/en/news/communication-artificial-intelligence-europe

[21] http://pib.nic.in/newsite/PrintRelease.aspx?relid=181007

[22] Ryan Calo (2017), Artificial Intelligence Policy: A Primer and Roadmap, U.C. Davis L. Review, Vol. 51, pp. 398-435.

[23] https://trai.gov.in/sites/default/files/CIS_07_11_2017.pdf

[24] https://www.datatilsynet.no/globalassets/global/english/ai-and-privacy.pdf

[25] http://www.niti.gov.in/writereaddata/files/document_publication/NationalStrategy-for-AI-Discussion-Paper.pdf

[26] https://martechtoday.com/bottos-launches-a-marketplace-for-data-to-train-ai-models-214265

[27] https://opensource.com/article/18/5/top-8-open-source-ai-technologies-machine-learning

[28] Amanda Levendowski, How Copyright Law Can Fix Artificial Intelligence’s Implicit Bias Problem, 93 WASH. L. REV. (forthcoming 2018) (manuscript at 23, 27-32), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3024938.

[29] Id.

[30] H. Brendan McMahan, et al., Communication-Efficient Learning of Deep Networks from Decentralized Data, arXiv:1602.05629 (Feb. 17, 2016), https://arxiv.org/abs/1602.05629.

[31] Id.

[32] Pierre N. Leval, Nimmer Lecture: Fair Use Rescued, 44 UCLA L. REV. 1449, 1457 (1997).

[33] https://cis-india.org/internet-governance/blog/niti-aayog-discussion-paper-an-aspirational-step-towards-india2019s-ai-policy

[34] https://cis-india.org/internet-governance/blog/niti-aayog-discussion-paper-an-aspirational-step-towards-india2019s-ai-policy

[35] Discussion Paper on National Strategy for Artificial Intelligence | NITI Aayog | National Institution for Transforming India. (n.d.) p. 54. Retrieved from http://niti.gov.in/content/national-strategy-ai-discussion-paper.

[36] Leverhulme Centre for the Future of Intelligence, http://lcfi.ac.uk/.

[37] AI Now, https://ainowinstitute.org/.

[38] https://cis-india.org/internet-governance/ai-and-governance-case-study-pdf

[39] http://iridescentlearning.org/

[40] https://cis-india.org/internet-governance/ai-and-governance-case-study-pdf

[41] Points, L., & Potton, E. (2017). Artificial intelligence and automation in the UK.

[42] Paul, Y., Hickok, E., Sinha, A. and Tiwari, U., Artificial Intelligence in the Healthcare Industry in India, Centre for Internet and Society. Available at https://cis-india.org/internet-governance/files/ai-and-healtchare-report.

[43] Goudarzi, S., Hickok, E., and Sinha, A., AI in the Banking and Finance Industry in India,  Centre for Internet and Society. Available at https://cis-india.org/internet-governance/blog/ai-in-banking-and-finance.

[44] Paul, Y., Hickok, E., Sinha, A. and Tiwari, U., Artificial Intelligence in the Healthcare Industry in India, Centre for Internet and Society. Available at https://cis-india.org/internet-governance/files/ai-and-healtchare-report.

[45] https://news.microsoft.com/en-in/government-karnataka-inks-mou-microsoft-use-ai-digital-agriculture/

[46] https://news.microsoft.com/en-in/government-telangana-adopts-microsoft-cloud-becomes-first-state-use-artificial-intelligence-eye-care-screening-children/

[47] NITI Aayog. (2018). Discussion Paper on National Strategy for Artificial Intelligence. Retrieved from http://niti.gov.in/content/national-strategy-ai-discussion-paper. 18

[48] https://edps.europa.eu/sites/edp/files/publication/16-10-19_marrakesh_ai_paper_en.pdf

[49] https://cis-india.org/internet-governance/blog/the-srikrishna-committee-data-protection-bill-and-artificial-intelligence-in-india

[50] J. Schradie, The Digital Production Gap: The Digital Divide and Web 2.0 Collide. Elsevier Poetics, 39 (1).

[51] D Lazer, et al., The Parable of Google Flu: Traps in Big Data Analysis. Science. 343 (1).

[52] Danah Boyd and Kate Crawford,  Critical Questions for Big Data. Information, Communication & Society. 15 (5).

[53] John Podesta (2014), Big Data: Seizing Opportunities, Preserving Values, available at http://www.whitehouse.gov/sites/default/files/docs/big_data_privacy_report_may_1_2014.pdf

[54] E. Ramirez, (2014) FTC to Examine Effects of Big Data on Low Income and Underserved Consumers at September Workshop, available at http://www.ftc.gov/news-events/press-releases/2014/04/ftc-examine-effects-big-data-lowincome-underserved-consumers

[55] M. Schrage, Big Data’s Dangerous New Era of Discrimination, available at http://blogs.hbr.org/2014/01/bigdatas-dangerous-new-era-of-discrimination/.

[56] Google/DoubleClick Merger case

[57] French Competition Authority, Opinion n°10-A-13 of 14.06.2010, http://www.autoritedelaconcurrence.fr/pdf/avis/10a13.pdf. That opinion of the Authority aimed at giving general guidance on that subject. It did not focus on any particular market or industry, although it described a possible application of its analysis to the telecom industry.

[58] http://www.analysisgroup.com/is-big-data-a-true-source-of-market-power/#sthash.5ZHmrD1m.dpuf

[59] Doshi-Velez, F., Kortz, M., Budish, R., Bavitz, C., Gershman, S., O'Brien, D., ... & Wood, A. (2017). Accountability of AI under the law: The role of explanation. arXiv preprint arXiv:1711.01134.

[60] Frank A. Pasquale ‘Toward a Fourth Law of Robotics: Preserving Attribution, Responsibility, and Explainability in an Algorithmic Society’ (July 14, 2017). Ohio State Law Journal, Vol. 78, 2017; U of Maryland Legal Studies Research Paper No. 2017-21, 7.

[61] Oswald, M., Grace, J., Urwin, S., & Barnes, G. C. (2018). Algorithmic risk assessment policing models: lessons from the Durham HART model and ‘Experimental’ proportionality. Information & Communications Technology Law, 27(2), 223-250.

[62] Ibid.

[63] Abraham S., Hickok E., Sinha A., Barooah S., Mohandas S., Bidare P. M., Dasgupta S., Ramachandran V., and Kumar S., NITI Aayog Discussion Paper: An aspirational step towards India’s AI policy. Retrieved from https://cis-india.org/internet-governance/files/niti-aayog-discussion-paper.

[64] Reisman D., Schultz J., Crawford K., Whittaker M., (2018, April) Algorithmic Impact Assessments: A Practical Framework For Public Agency Accountability. Retrieved from https://ainowinstitute.org/aiareport2018.pdf.

[65] Sample I., (2017, November 5) Computer says no: why making AIs fair, accountable and transparent is crucial. Retrieved from https://www.theguardian.com/science/2017/nov/05/computer-says-no-why-making-ais-fair-accountable-and-transparent-is-crucial.

[66] Kroll, J. A., Barocas, S., Felten, E. W., Reidenberg, J. R., Robinson, D. G., & Yu, H. (2016). Accountable algorithms. U. Pa. L. Rev., 165, 633.

[67] http://www.iso.org/iso/big_data_report-jtc1.pdf

India’s post-truth society

by Swaraj Paul Barooah last modified Sep 12, 2018 12:16 PM
The proliferation of lies and manipulative content supplies an ever-willing state a pretext to step up surveillance.

The op-ed was published in Hindu Businessline on September 7, 2018.


After a set of rumours spread over WhatsApp triggered a series of lynchings across the country, the government recently took the interesting step of placing the responsibility for this violence on WhatsApp. This is especially noteworthy because the party in power, as well as many other political parties, have taken to campaigning over social media, including using WhatsApp groups in a major way to spread their agenda and propaganda.

After all, a simple tweet or message could be shared thousands of times and make its way across the country several times, before the next day’s newspaper is out. Nonetheless, while the use of social media has led to a lot of misinformation and deliberately polarising ‘news’, it has also helped contribute to remarkable acts of altruism and community, as seen during the recent Kerala floods.

While the government has taken a seemingly techno-determinist view by placing responsibility on WhatsApp, the duality of very visible uses of social media has led to others viewing WhatsApp and other internet platforms more as a tool, at the mercy of the user. However, as historian Melvin Kranzberg noted, “technology is neither good nor bad; nor is it neutral”. And while the role of political and private parties in spreading polarising views should be rigorously investigated, it is also true that these internet platforms are creating new and sometimes damaging structural changes to how our society functions. A few prominent issues are listed below:

Fragmentation of public sphere

Jürgen Habermas, the noted sociologist, conceptualised the public sphere as “a network for communicating information and points of view, where the streams of communication are, in the process, filtered and synthesised in such a way that they coalesce into bundles of topically specified public opinions”.

To a large extent, the traditional gatekeepers of information flow, such as radio, TV and mainstream newspapers, performed functions enabling a public sphere. For example, if a truth-claim about an issue of national relevance was to be made, it would need to get an editor’s approval.

In case there was a counter-claim, that too would have to pass an editorial check. Today, however, nearly anybody can become a publisher of information online, and if it catches the right influencer's attention, it could spread far wider and far quicker than it would have in traditional media. While this has the huge positive of giving space to more diverse viewpoints, it also comes with two significant downsides.

First, it gives a sense of ‘personal space’ to public speech. An ordinary person would think a few times, do some research, and perhaps rehearse before giving a speech to 10,000 people. The same person might think for perhaps five seconds before putting out a tweet on the very same topic, despite now having a potentially global audience.

Second, when messages are sent directly to your hand-held device, rather than being open for anyone to fact-check and counter, there is less transparency and accountability for those who send polarising material and misinformation. How can a mistaken and polarising view be countered if one doesn’t even know it is being made? And if it can’t be countered, how can its spread be contained?

The attention market

Not only is that earlier conception of the public sphere being fragmented, these new networked public spheres are also owned by giant corporations. This means that the public spheres where critical discourse is being shaped and spread are actually governed by advertisement-financed global conglomerates. In a world of information overflow and privately owned, ad-financed public spheres, the new unit of currency is attention.

It is in the direct interest of the Facebooks and Googles of the world to capture user attention for as long as possible, regardless of what type of activity that encourages. It goes without saying that neither the ‘mundane and ordinary’ nor the ‘nuanced and detailed’ captures people’s attention nearly as well as the sensational and exciting.

Nearly as addictive, studies show, are the headlines and viewpoints that confirm people’s biases. Fed by algorithms that understand the human desire to ‘fit in’, people are lowered into echo chambers where like-minded people find each other and continually validate one another. When people with extremist views are guided to each other by these algorithms, they not only gather validation but also use these platforms to confidently air their views — thus normalising what was earlier considered extreme. Needless to say, internet platforms are becoming richer in the process.

Censorship by obfuscation

Censorship in the attention economy no longer requires blocking views or interrupting the transmission of information. Rather, it is sufficient to drown out relevant information in an ocean of other information. Fact-checking news sites face this problem. Regardless of how often they fact-check speeches by politicians, only a minuscule percentage of the original audience comes to know about, much less care about, the corrections.

Additionally, repeated baseless attacks on the credibility of news sources cause confusion about which sources are trustworthy. In her extremely insightful book “Twitter and Tear Gas”, Prof Zeynep Tufekci rightly points out that rather than resorting to traditional censorship, powerful entities today (often states) focus on overwhelming people with information, producing distractions, and deliberately causing confusion, fear and doubt. Facts often don’t matter, since the goal is not to be right but to sow enough confusion and doubt to displace narratives that are problematic to these powers.

Viewpoints from members of groups that have been historically oppressed are especially harangued. And the oppressed tend to have less time, energy and emotional resources to continuously deal with online harassment, especially when their identities are known and the harassment can easily spill over into the physical world.

Conclusion

Habermas saw the ideal public sphere as one free of lies, distortions, manipulations and misinformation. Needless to say, this is a far cry from our reality today, with all of the above available in unhealthy doses. It will take tremendous effort to fix these issues, and it is certainly no longer sufficient for internet platforms to claim they are neutral messengers. Further, whether or not these systemic changes are understood, if they are not addressed they will continue to create and expand fissures in society, giving the state a plausible cause for intervening through backdoors, surveillance and censorship, all actions that states have historically been happy to take.

Artificial Intelligence in the Governance Sector in India

by Arindrajit Basu and Elonnai Hickok — last modified Sep 14, 2018 11:37 AM
The use of Artificial Intelligence has the potential to ameliorate several existing structural inefficiencies in the discharge of governmental functions. Our research indicates that the deployment of this technology across sub-sectors is still on the horizon.

Ecosystem Mapping: Shweta Mohandas and Anamika Kundu
Edited by: Amber Sinha, Pranav MB and Vishnu Ramachandran


Much of the technological capacity and funding for AI in governance in India is coming from the private sector - a trend we expect will continue as the government engages in an increasing number of partnerships with both start-ups and large corporations alike. While there is considerable enthusiasm and desire by the government to develop AI-driven solutions in governance, including the release of two reports identifying the broad contours of India’s AI strategy, this enthusiasm is yet to be underscored by adequate financial, infrastructural, and technological capacity. This gap provides India with a unique opportunity to understand some of the ethical, legal and technological hurdles faced by the West both during and after the implementation of similar technology, and to avoid these challenges when devising its own AI strategy and regulatory policy.

The case study identified five sub-sectors, including law enforcement, education, defence and the discharge of governmental functions, and also considered the implications of AI in judicial decision-making processes as used in the United States. After mapping the uses of AI in these sub-sectors, the report identifies several challenges to the deployment of the technology, including infrastructural and technological capacity (particularly among key actors at the grassroots level), lack of trust in AI-driven solutions, and inadequate funding. We also identified several ethical and legal concerns that policy-makers must grapple with: over-dependence on AI systems; privacy and security; assignment of liability; bias and discrimination in both process and outcome; and transparency and due process. The report can thus serve as a roadmap for the future of AI in India, tracking corresponding and emerging developments in other parts of the world. In the final section of the report, we propose several recommendations for policy-makers and developers that might address some of the challenges and ethical concerns identified. These include benchmarks for the use of AI in the public sector, the development of standards of explanation, a standard framework for engagement with the private sector, leveraging AI to further India’s international strategy, developing adequate standards of data curation, ensuring that the benefits of the technology reach the lowest common denominator, adopting interdisciplinary approaches to the study of Artificial Intelligence, and developing fairness, transparency and due process through the contextual application of a rules-based system.

It is crucial that policy-makers do not adopt a ‘one-size-fits-all’ approach to AI regulation, but instead consider all options within a regulatory spectrum that accounts for the specific impacts of deploying this technology in each sub-sector within governance, with particular attention to public sector use. Given that the governance sector has potential implications for the fundamental rights of all citizens, it is also imperative that the government does not shy away from its obligation to ensure the fair and ethical deployment of this technology, while also ensuring the existence of robust redress mechanisms. To do so, it must chart out a standard rules-based system that creates guidelines and standards for private sector development of AI solutions for the public sector. As with other emerging technologies, the success of Artificial Intelligence depends on whether it is deployed with the intention of placing greater regulatory scrutiny on the daily lives of individuals, or of harnessing individual potential in ways that augment rather than counter the core tenets of constitutionalism and human dignity.

Read the full report here

Cross-Border Data Sharing and India: A study in Processes, Content and Capacity

by Amber Sinha, Elonnai Hickok, Udbhav Tiwari and Arindrajit Basu — last modified Sep 29, 2018 12:37 AM
A majority of criminal investigations in the modern era necessitate law enforcement access to electronic evidence stored extra-territorially. The conventional methods of compelling the presentation of evidence available for investigative agencies often fail when the evidence is not present within the territorial boundaries of the state.

The crux of the issue lies in the age-old international law tenet of territorial sovereignty. Investigating crimes is a sovereign act, and it cannot be exercised in the territory of another country without that country’s consent or through a permissive principle of extra-territorial jurisdiction. Certain countries have explicit statutory provisions which disallow companies incorporated in their territory from disclosing data to foreign jurisdictions. The United States of America, which houses most of the leading technology firms such as Google, Apple, Microsoft, Facebook and WhatsApp, has such a provision.

This necessitates a consent-based international model for cross-border data sharing, as a completely ad-hoc system of requests for each investigation would be ineffective. Towards this, Mutual Legal Assistance Treaties (MLATs) are the most widely used method for cross-border data sharing, with letters rogatory, emergency requests and informal requests being the other methods available to most investigators. While recent gambits towards ring-fencing data within Indian shores might alter the contours of the debate, a sustainable long-term strategy requires a coherent negotiation strategy that enables co-operation with a range of international partners.

This negotiation strategy needs to be underscored by domestic safeguards that ensure human rights guarantees in compliance with international standards, robust identification and augmentation of capacity, and a clear articulation of how India’s strategy lines up with the existing tenets of international law. This report studies the workings of the Mutual Legal Assistance Treaty (MLAT) between the USA and India, identifies hurdles in its existing form, culls out suggestions for improvement, and explores how recent legislative developments, such as the CLOUD Act, might alter the landscape.

The path forward lies in undertaking process-based reforms within India, with an eye on leveraging these developments to articulate a strategically beneficial position when negotiating with external partners. As the nature of policing changes to a model that increasingly relies on electronic evidence, India needs to ensure that its technical strides in accessing this evidence are not held back by the lack of an enabling policy environment. While the data localisation provisions introduced in the draft Personal Data Protection Bill may alter the landscape once it becomes law, this paper retains its relevance in guiding the processes, content and capacity needed to adequately manoeuvre the present conflict-of-laws situation and to access data not belonging to Indians that may be needed for criminal investigations. As a disclaimer, the report and the graphics contained within it have been drafted using publicly available information and may not reflect real-world practices.


Click here to download the report. With research assistance from Sarath Mathew and Navya Alam, and visualisation by Saumyaa Naidu.

A trust deficit between advertisers and publishers is leading to fake news

by Sunil Abraham last modified Oct 02, 2018 06:44 AM
Transparency regulation is the need of the hour, and urgently so for election and political advertising. What do the ads look like? Who paid for them? Who was the target? How many people saw these advertisements? How many times? Transparency around viral content is also required.

The article was published in Hindustan Times on September 24, 2018.


Traditionally, we have depended on the private censorship that intermediaries conduct on their platforms. They enforce, with some degree of success, their own community guidelines and terms of service (TOS). These guidelines and TOS have traditionally been drafted keeping in mind US laws, since historically most intermediaries, including non-profits like the Wikimedia Foundation, were founded in the US.

Across the world, this private censorship regime was accepted by governments when they enacted intermediary liability laws (in India we have Section 79 of the IT Act). These laws gave intermediaries immunity from liability arising from third-party content of which they have no “actual knowledge”, unless they were informed through takedown notices. Intermediaries set up offices in countries like India, complied with some lawful interception requests, and also conducted geo-blocking to comply with local speech regulation.

For years, the Indian government has been frustrated because policy reforms that it has pursued with the US have yielded little fruit. American policymakers keep citing shortcomings in the Indian justice system to avoid expediting the MLAT (Mutual Legal Assistance Treaty) process and the signing of an executive agreement under the US CLOUD Act. Such an agreement would compel intermediaries to comply with lawful interception and data requests from Indian law enforcement agencies no matter where the data is located.

The data localisation requirement in the draft national data protection law is a result of that frustration. As with the US, a quickly enacted data localisation policy is absolutely non-negotiable when it comes to Indian military, intelligence, law enforcement and e-governance data. For India, it also makes sense in the cases of health and financial data, with exceptions under certain circumstances. However, it does not make sense for social media platforms since they, by definition, host international networks of people. Recently an inter-ministerial committee recommended that “criminal proceedings against Indian heads of social media giants” also be considered. However, raiding Google’s local servers when a lawful interception request is turned down, or arresting Facebook executives, will result in retaliatory trade actions from the US.

While the consequences of online recruitment, disinformation in elections and fake news that undermines public order are indeed serious, are there alternatives to such extreme measures for Indian policy makers? Updating intermediary liability law is one place to begin. These social media companies increasingly exercise editorial control, albeit indirectly, via algorithms, even as they claim to have no “actual knowledge”.

But they are no longer mere conduits or dumb pipes; they are now publishers who collect payments to promote content. Germany passed a law called NetzDG in 2017 which requires expedited compliance with government takedown orders. Unfortunately, that law does not have sufficient safeguards to prevent overzealous private censorship. India should not repeat this mistake, especially given what the Supreme Court said in the Shreya Singhal judgment.

Transparency regulations are imperative. And they are needed urgently for election and political advertising. What do the ads look like? Who paid for them? Who was the target? How many people saw these advertisements? How many times? Transparency around viral content is also required. Anyone should be able to see all public content that has been shared with more than a certain percentage of the population over a historical timeline for any geographic area. This will prevent algorithmic filter bubbles and echo chambers, and also help public and civil society monitor unconstitutional and hate speech that violates terms of service of these platforms. So far the intermediaries have benefitted from surveillance — watching from above. It is time to subject them to sousveillance — watched by the citizens from below.

Data portability and interoperability mandates will allow competition to enter these monopoly markets. Artificial intelligence regulations for algorithms that significantly impact the global networked public sphere could require, one, a right to an explanation and, two, a right to influence the automated decision-making that shapes the consumer’s experience on the platform.

The real solution lies elsewhere. Google and Facebook are primarily advertising networks. They have successfully managed to destroy the business model for real news and replace it with a business model for fake news by taking away most of the advertising revenues from traditional and new news media companies. They were able to do this because there was a trust deficit between advertisers and publishers. Perhaps this trust deficit could be solved by a commons-based solution built on free software, open standards and collective action by all Indian new media companies.

Why Data Localisation Might Lead To Unchecked Surveillance

by Pranesh Prakash last modified Oct 16, 2018 02:08 PM
In recent times, there has been a rash of policies and regulations that propose that the data that Indian entities handle be physically stored on servers in India, in some cases exclusively. In other cases, only a copy needs to be stored.

The article was published in Bloomberg Quint on October 15, 2018 and also mirrored in the Quint.


In April 2018, the Reserve Bank of India put out a circular requiring that all “data relating to payment systems operated by them are stored in a system only in India” within six months. Lesser requirements have been in place for longer: since 2014, back-ups of all Indian companies’ books of account and other books stored electronically must be kept in India; the broadcasting sector, under the Foreign Direct Investment policy, must locally store subscriber information; and the telecom sector, under the Unified Access licence, may not transfer subscriber data outside India.

The draft e-commerce policy has a wide-ranging requirement of exclusive local storage for “community data collected by Internet of Things devices in public space” and “data generated by users in India from various sources including e-commerce platforms, social media, search engines, etc.”, as does the draft e-pharmacy regulations, which stipulate that “the data generated” by e-pharmacy portals be stored only locally.

While companies such as Airtel, Reliance, PhonePe (majority-owned by Walmart) and Alibaba have spoken up in support of the government’s data localisation efforts, others like Facebook, Amazon, Microsoft and Mastercard have led the way in opposing it.

Just this week, two U.S. Senators wrote to the Prime Minister’s office arguing that the RBI’s data localisation regulations along with the proposals in the draft e-commerce and cloud computing policies are “key trade barriers”. In her dissenting note to the Srikrishna Committee's report, Rama Vedashree of the Data Security Council of India notes that, “mandating localisation may potentially become a trade barrier and the key markets for the industry could mandate similar barriers on data flow to India, which could disrupt the IT-BPM (information technology-business process management) industry.”

Justification For Data Localisation

What are the reasons for these moves towards data localisation?

Given the opacity of policymaking in India, many of the policies and regulations provide no justification at all. Even the ones that do, don’t provide cogent reasoning.

The RBI says it needs “unfettered supervisory access” and hence needs data to be stored in India. However, it fails to state why such unfettered access is not possible for data stored outside of India.

As long as an entity can be compelled by Indian laws to engage in local data storage, that same entity can also be compelled by that same law to provide access to their non-local data, which would be just as effective.

What if they don’t provide such access? Would they be blacklisted from operating in India, just as they would be if they didn’t engage in local data storage? Is there any investigatory benefit to storing data in India? As any data forensics expert would note, chain of custody and data integrity are the most important components of data handling in fraud investigation, not physical access to hard drives. It would be difficult for the government to say that it will block all Google services if the company doesn’t provide all the data that Indian law enforcement agencies request from it. However, it would be easy for the RBI to bar Google Pay from operating in India if Google doesn’t provide it “unfettered supervisory access” to data.

The most exhaustive justification of data localisation in any official Indian policy document is that contained in the Srikrishna Committee’s report on data protection. The report argues that there are several benefits to data localisation:

  1. Effective enforcement,
  2. Avoiding reliance on undersea cables,
  3. Avoiding foreign surveillance on data stored outside India,
  4. Building an “Artificial Intelligence ecosystem”

Of these, the last three reasons are risible.

Not A Barrier To Surveillance

Requiring mirroring of personal data on Indian servers will not magically give rise to experts skilled in statistics, machine learning, or artificial intelligence, nor will it somehow lead to the development of the infrastructure needed for AI.

The United States and China are both global leaders in AI, yet no one would argue that China’s data localisation policies have helped it or that America’s lack of data localisation policies has hampered it.

On the question of foreign surveillance, data mirroring will not have any impact, since the Srikrishna Committee’s recommendation would not prevent companies from storing most personal data outside of India.

Even for “sensitive personal data” and for “critical personal data”, which may be required to be stored in India alone, such measures are unlikely to prevent agencies like the U.S. National Security Agency or the United Kingdom’s Government Communications Headquarters from being able to indulge in extraterritorial surveillance.

In 2013, slides from an NSA presentation leaked by Edward Snowden showed that the NSA’s “BOUNDLESSINFORMANT” programme collected 12.6 billion instances of telephony and Internet metadata (for instance, which websites you visited and whom you called) from India in just one month, making India one of its top five targets.

This shows that technically, surveillance in India is not a challenge for the NSA.

So, forcing data mirroring enhances Indian domestic intelligence agencies’ abilities to engage in surveillance, without doing much to diminish the abilities of skilled foreign intelligence agencies.

As I have noted in the past, the technological solution to reducing mass surveillance is to use decentralised and federated services with built-in encryption, using open standards and open source software.

Reducing reliance on undersea cables is, just like reducing foreign surveillance on Indians’ data, a laudable goal. However, a mandate of mirroring personal data in India, which is what the draft Data Protection Bill proposes for all non-sensitive personal data, will not help. Data will stay within India if the processing happens within India. However, if the processing happens outside of India, as is often the case, then undersea cables will still need to be relied upon.

The better way to keep data within India is to incentivise the creation of data centres and to work towards reducing the cost of internet interconnection by encouraging more peering among internet connectivity providers.

While data mirroring will not help in improving the enforcement of any data protection or privacy law, it will aid Indian law enforcement agencies in gaining easier access to personal data.

The MLAT Route

Currently, many forms of law enforcement agency requests for data have to go through onerous channels called ‘mutual legal assistance treaties’. These MLAT requests take time and are ill-suited to the needs of modern criminal investigations. Recognising this, the U.S. passed a law called the CLOUD Act in March 2018. While the CLOUD Act compels companies like Google and Amazon, which have data stored in Indian data centres, to provide that data upon receiving legal requests from U.S. law enforcement agencies, it also enables easier access for foreign law enforcement agencies to data stored in the U.S., as long as they fulfil certain procedural and rule-of-law checks.

While the Srikrishna Committee does acknowledge the CLOUD Act in a footnote, it doesn’t analyse its impact or suggest how India could avail of it, and only outlines the negative consequences of MLATs.

Further, it is inconceivable that the millions of foreign services that Indians access and provide their personal data to will suddenly find a data centre in India and will start keeping such personal data in India. Instead, a much likelier outcome, one which the Srikrishna Committee doesn’t even examine, is that many smaller web services may find such requirements too onerous and opt to block users from India, similar to the way that Indiatimes and the Los Angeles Times opted to block all readers from the European Union due to the coming into force of the new data protection law.

The government could be spending its political will on finding solutions to the law enforcement agency data access question, and on negotiating solutions at the international level, especially with the U.S. government. However, it is not doing so.

Given this, the recent spate of data localisation policies and regulation can only be seen as part of an attempt to increase the scope and ease of the Indian government’s surveillance activities, while India’s privacy laws still remain very weak and offer inadequate legal protection against privacy-violating surveillance. Because of this, we should be wary of such requirements, as well as of the companies that are vocal in embracing data localisation.

377 Bites the Dust: Unpacking the long and winding road to the judicial decriminalization of homosexuality in India

by Agnidipto Tarafder and Arindrajit Basu — last modified Oct 18, 2018 12:39 AM
An informal case comment tracing the journey and assessing the societal implications of the recent Section 377 judgment (Navtej Johar v Union of India).

The article was published in Socio-Legal Review, a magazine published by National Law School of India University on October 11, 2018.


Introduction

After a prolonged illness due to AIDS-related complications, the gregarious Queen frontman Farrokh Bulsara (known to the world as Freddie Mercury) breathed his last at his home in Kensington, London in 1991. Despite being a symbol of gay masculinity for over a decade, Mercury never explicitly confirmed his sexual orientation, for reasons that remain unknown but could stem from prevailing social stigma. Occluded from public discourse and shrouded in irrational fears, the legitimate problems of the LGBT+ community, including the serial killer that was HIV/AIDS, were still relegated to avoidable debauchery as opposed to genuine illness. Concerted activism throughout the 90s, depicted on the big screen through masterpieces such as Philadelphia, alerted the Western public to this debacle, which led to a hard-fought array of rights and a reduction in social ostracisation for the LGBT+ community across Western countries at the turn of the century. This includes over two dozen countries that have allowed same-sex marriage and a host of others that recognise civil unions between same-sex partners in some form.[1]

On 6th September 2018, Section 377 of the Indian Penal Code – a colonial-era law that criminalised “carnal intercourse against the order of nature” – bit the dust in New Delhi, at the hands of five judges of the Supreme Court of India (Navtej Johar v Union of India).[2] Large parts of the country celebrated the restoration of the ideals of the Indian Constitution. It was freedom, not just for a community long suppressed, but for the ethos of our foundation, which for a century suffered this incessant incongruity. The celebrations were tempered, perhaps, by a recognition of how long this fight had taken, the unnecessary hurdles – both judicial and otherwise – that were erected along the way, and a realisation of the continued suffering this community might have to tolerate till they truly earn the acceptance they deserve. While the judgment will serve as a document that signifies the sanctity of our constitutional ethos, in the grander scheme of things it is still but a small step, with the potential to catalyse a giant leap forward. For our common future, it is imperative that the LGBT+ community does not undertake this leap alone but is accompanied by the rest of the nation, a nation that recognises the travails of this long march to freedom.

Long March to Freedom

Modelled on the UK’s Buggery Act of 1533, Section 377 was introduced into the Indian Penal Code by Thomas Macaulay, a representative of the British Raj. While our colonial masters progressed in 1967, the hangover enmeshed in our penal laws lingered on. Public discourse on this legal incongruity first emerged with the publication of a report titled Less than Gay: A Citizens’ Report on the Status of Homosexuality in India, spearheaded by activist Siddhartha Gautam on behalf of the AIDS Bhedbhav Virodhi Andolan (ABVA), which sought to decriminalise homosexuality and thereby move towards removing its associated stigma.[3] The ABVA went on to file a petition for this decriminalisation in 1994. The judicial skirmish continued in 2001, with the Naz Foundation, a Delhi-based NGO that works on HIV/AIDS and sexual health, filing a petition by way of Public Interest Litigation asking for a reading down of the Section. The Delhi High Court initially dismissed this petition, stating that the foundation had no locus standi.[4] The Naz Foundation appealed before the Supreme Court, which overturned the dismissal on technical grounds and ordered the High Court to decide the case on merits.

The two-judge bench of the Delhi High Court held that Section 377 violated privacy, autonomy and liberty, ideals grafted into the ecosystem of fundamental rights guaranteed by Part III of the Indian Constitution.[5] It stated that the Constitution was built around the core tenet of inclusiveness, which was denigrated by the sustained suppression of the LGBT+ community. It was an impressive judgment, not only because of the bold and progressive claim it made in a bid to reverse a century and a half of oppression, but also because of the quality of the judgment itself: it tied in principles of international law, along with both Indian and foreign judgments, in addition to citing literature on sexuality as a form of identity. For a brief while, faith in the ‘system’ seemed justified.

Hope, however, is a fickle friend. Four years from that day, an astrologer by the name of Suresh Kumar Koushal challenged the Delhi High Court’s verdict.[6] Some of the reasons behind this challenge would defy any standard sense of rationality. These included national security concerns – as soldiers who stay away from their families[7] may enter into consensual relationships with each other, leading to distractions that might end in military defeats. Confoundingly, the Supreme Court’s verdict lent judicial legitimacy to Koushal’s thought process, as it overturned the Naz Foundation judgment and affirmed the constitutional validity of Section 377 on some truly bizarre grounds.[8] Indian constitutional tradition permits discrimination by the state only if the classification is based on an intelligible differentia distinguishing the group being discriminated against from the rest of the populace, and bears a rational nexus with a constitutionally valid objective. To satisfy this threshold, the Supreme Court stated, without any evidence, that there are two classes of people – those who engage in sexual intercourse in the ‘ordinary course’ and those who do not – thereby satisfying the intelligible differentia threshold.[9] As pointed out by constitutional law scholar Gautam Bhatia, this differentia makes little sense – an extrapolation of this idea could indicate that intercourse with a blue-eyed person was potentially not ‘ordinary’, since the probability of this occurring is rare.[10] The second justification was based on numbers.
The Court argued that statistics showed only 200 people had been arrested under this law, which suggested that it was largely dormant and hence that discrimination was not established per se.[11] In other words, a plain reading of the judgement might lead one to conclude that the random arrest of a small number of citizens would be constitutionally protected, so long as it does not overshoot an arbitrarily determined de minimis threshold! The judgment seemed to drag Indian society ceaselessly into the past. This backward shift at home was accompanied by international posturing, as India opposed the recent wave of UN resolutions advocating LGBT+ rights.[12]

Thankfully, there remained a way to correct such Supreme Court-induced travesties: a curative petition, a concept introduced by the Court itself through one of its earlier judgements.[13] Needless to say, such a petition was duly filed before the Court.[14] While this curative petition was under consideration, in August 2017 a 9-judge bench of the Court spun some magic through a landmark judgment in Just. (Retd.) K S Puttaswamy v Union of India,[15] which held that the 'right to privacy' was a fundamental right recognised by the Indian Constitution. The judgment in Koushal was singled out and criticised by Justice Chandrachud, who asserted that an entire community could not be deprived of the dignity of privacy in their sexual relations.

Strategically, this was a masterclass. While the right to privacy cannot alone justify allowing individuals to choose their sexual orientation, in several common law nations, including the UK[16] and the USA,[17] privacy has served as the initial spark for legitimizing same-sex relations. A year before the privacy judgment was delivered, a group of individuals had filed a separate petition arguing that Section 377 violated their constitutional rights. This petition was intrinsically different[18] from the Naz Foundation's: the Foundation had filed a 'public interest litigation' in a representative capacity, whereas these petitioners were affected in their personal capacity, making the nature of the claim in each case different.

The cold case file of this petition, which crystallised into the iconic judgment delivered last week, was brought to the fore and listed for hearing in January 2018.[19] Justice Chandrachud's judgement in Puttaswamy, which tore apart the Koushal verdict, had no small role to play in the unfolding of this saga.[20]

And so the hearings began. The government chose not to oppose the petition and allowed the court to decide the fate of Section 377.[21] This was a convenient manoeuvre by the government, effectively shifting the ball into the judiciary's court and shielding itself from potential pushback from its conservative voter base. However, as public support for decriminalisation poured in from various quarters, leaders of religious groups were quick to make their opposition known, leaving the five judges on the bench to decide the fate of a community long suppressed through the clutches of an illegitimate law.

"I am what I am": The judgement, redemption and beyond

"The mis-application of this provision denied them the Fundamental Right to equality guaranteed by Article 14. It infringed the Fundamental Right to non-discrimination under Article 15, and the Fundamental Right to live a life of dignity and privacy guaranteed by Article 21. The LGBT persons deserve to live a life unshackled from the shadow of being 'unapprehended felons'."[22]

Justice Indu Malhotra summed up her short judgement with this momentous pronouncement, adding that 'history owes an apology'[23] to the members of the LGBT+ community for the injustices they faced during these centuries of hatred and apathy. It seems fair to suggest that this idea of righting the wrongs of the past became the underlying theme of the Supreme Court's landmark verdict on the constitutionality of Section 377. Five judges, through four concurring but separate opinions, extracted the essence of the claim against this law: protecting the virtue of personal liberty and dignity. In doing so, the Court exculpated itself from the travesty of Suresh Koushal, emancipated the 'miniscule minority' from their bondage before the law and took yet another step towards restoring faith in the 'system', of which the judiciary is currently positioning itself as the sole conscientious wing. Perhaps the only people shamed by this verdict were our parliamentarians, who on two separate occasions in the recent past had thwarted any chance of change when they opposed, insulted and ridiculed Dr. Shashi Tharoor as he attempted to introduce a Bill decriminalizing homosexuality on the floor of the House.[24]

Earlier in the day, the Chief Justice, authoring the lead opinion for himself and Justice Khanwilkar, began with the ominous pronouncement that 'denying self-expression (to the individual) was an invitation to death',[25] emphasizing through his long judgement the importance of promoting individuality in all its varied facets: in matters of choice, privacy, speech and expression.[26] Arguing strongly in support of the 'progressive realization of rights',[27] which he identified as the soul of constitutional morality, the Chief Justice outlawed the 'artificial distinction' drawn between heterosexuals and homosexuals through the application of the 'equality' doctrine embedded in Articles 14 and 15.[28] Noting that the recent criminal law amendment recognizes the absence of consent as the basis for sexual offences, he pointed out the lack of a similar consent-based framework in the context of non-peno-vaginal sex, effectively decriminalizing 'voluntary sexual acts by consenting adults' as envisaged within the impugned law.[29] The Chief Justice went on to elaborate that the rights to equality, liberty and privacy are inherent in all individuals, and that no discrimination on grounds of sex would survive the scrutiny of the law.[30]

Justice Nariman, in his separate opinion, charted the legislative history behind the adoption of the Indian Penal Code. In his inimitable manner, he travelled effortlessly across time and space, sourcing historical material and legislation, judicial decisions and literary critique from various jurisdictions to bolster the claim that the discrimination faced by homosexuals had no basis in law or fact.[31] For instance, referring to the Wolfenden Committee Report in the UK on the decriminalisation of homosexuality, which urged legislators to distinguish between 'sin and crime', the judge lamented the lives lost to mere social perception, including those of Oscar Wilde and Alan Turing.[32] Repelling the popular myth that homosexuality is a 'disease', he quoted from the Mental Healthcare Act, 2017, the US Supreme Court's seminal judgment in Lawrence v Texas[33] and several other studies on the intersection of homosexuality and public health, dismissing this contention entirely. Justice Nariman invoked the doctrine of 'manifest arbitrariness'[34] to dispel the notion that homosexuals could lawfully be treated as 'different'; since it was based on sexual identity and orientation, such a law was a gross abuse of the equal protection guaranteed by the Constitution.

Justice Chandrachud, having already built a formidable reputation as the foremost liberal voice on the bench, launched a scathing, almost visceral attack on the idea of an 'unnatural sexual offence' insofar as it applied to homosexuality.[35] Mirroring the concern first espoused by Justice Nariman about the chilling effect of majoritarianism, he wondered aloud what societal harm a provision like Section 377 sought to prevent. In fact, his separate opinion is categorical in its negation of any 'intelligible differentia' between 'natural' and 'non-natural' sex, sardonically stating that the perpetuation of heteronormativity cannot be the object of a law.[36]

As an interesting aside, his judgement in Puttaswamy famously introduced a section called 'discordant notes',[37] which led an introspective Court to disown and overturn disturbing precedent from the past, most notably the Court's opinion in ADM Jabalpur,[38] which had held that the right to seek redressal for violation of Fundamental Rights remained suspended as a consequence of the National Emergency.

In a similar act of constitutional course-correction, he delved into a critique of the Apex Court's judgement in the Nergesh Meerza[39] case, a decision which upheld Air India's discriminatory practice of treating men and women as different classes of employees, denying women employees certain benefits ordinarily available to men. The Court in Nergesh Meerza read the non-discrimination guarantee in Article 15 narrowly, holding that only discrimination based on 'sex alone' would be struck down; since the sexes differed in their modes of recruitment, promotion and conditions of service, the practice did not amount to 'merely sex based' categorization and was an acceptable form of classification. In his missionary zeal to exorcise the Court of past blemishes, Dr. Chandrachud observed that interpreting constitutional provisions through such narrow tests as 'sex alone' would denude the freedoms guaranteed within the text. Though not the operative part of the judgement, one hopes his exposition of the facets of the equality doctrine and the fallacies of reasoning in Nergesh Meerza will pave the way for just jurisprudence in sex discrimination cases in the future.[40]

Reverting to the original issue, the judge addressed several key concerns voiced by the LGBT+ community through their years of struggle. He spoke of bridging the public-private divide by ensuring the protection of sexual minorities in the public sphere as well, where they are most vulnerable. Alluding to his opinion in Puttaswamy, he declared that all people have an inalienable right to privacy, which is a fundamental aspect of their liberty and the 'soulmate of dignity', ascribing the right to a dignified life as a constitutional guarantee for one and all. Denouncing the facial neutrality[41] of Section 377, insofar as it targets certain 'acts and not classes of people', his broad and liberal reading of non-discrimination goes beyond the semantics of neutrality and braves the original challenge: fashioning a justice system with real equality at its core.

Shall History Absolve Us?

Where to from here then? Can the 500 pages of this iconic judgment magically change the social norms that define the existence of LGBT+ communities in modern Indian society? If the reception of this judgement by the conservative factions within society is anything to go by, the answer is clear enough.  Yet, the role of this judgment – in an ecosystem of other enablers – might just be a crucial first step. As noted by Harvard Law School professor Lawrence Lessig, law can create, displace or change the collective expectations of society by channelling societal behaviour in a manner that conforms with its contents.[42] An assessment of the impact of Brown v Board of Education on African-Americans offers an interesting theoretical analogy.[43]

The unanimous decision of the US Supreme Court in Brown marked a watershed moment in American history, striking down the 'separate but equal' doctrine which served as the basis for segregation between communities of colour and the dominant White majority in American public schools. While the ruling initially faced massive resistance, it laid the foundation for progressive legislation such as the Civil Rights Act and the Voting Rights Act a decade later.[44] While its true impact on evolving standards of acceptable social behaviour remains disputed, with valid arguments on all sides, Brown kick-started a counter-culture in the 1950s that sought to wipe out the toxic norms of the Jim Crow era. Along with subsequent decisions by the US Supreme Court, it acted as the catalyst that morphed the boundaries between 'us' and 'them'. Republican Senator Barry Goldwater attempted to stifle this counter-culture in 1964 through a sustained campaign against the dictum in Brown, framed not as opposition to African-Americans but as opposition to an overly intrusive federal government eroding cultural traditions and values, particularly those of the South.[45] In the past few years, cultural apathy seems to have taken a more sinister turn, as recent incidents of police violence and the rebirth of white supremacist movements indicate.

Lessons from a different context in another society can never be transposed without substantial alteration. Discrimination is intersectional, and a celebration of identity is a recognition of intersectionality. Therefore, the path ahead for the LGBT+ community lies in crafting a strategy that works for them, one that can draw from lessons learned in other contexts. Last week's judgment could morph into a point of reference for a counter-cultural movement that works to remove the stains of oppression. The key challenge is carrying this message to swathes of the populace who, goaded by leading public figures, continue to treat homosexuality as an unnatural phenomenon.[46]

In a majority-Hindu nation, one possible medium of communication could be reference to ancient Hindu scriptures that do not ostracize individuals based on their sexual orientation but treat them as fellow sojourners on the path to Nirvana, the idea of spiritual emancipation that is a central tenet of Hindu belief.[47] Strategically, dangling this framework as a carrot before religious conservatives may be a potential conversation starter, but it comes riddled with potholes, as the same scriptures could be interpreted to justify the subjugation of women, for example. A more holistic approach might be reading these scriptures into the overarching foundation stone of society, the Indian Constitution, which is not a rigid, static document stuck in the time of its inception but a dynamic one that responds to and triggers the Indian social and political journey. The burden of a constitution, as reiterated by Chief Justice Misra and Dr. Chandrachud, is to 'draw a curtain' on the past of social injustice and prejudice and embrace constitutional morality, a cornerstone of which is the principle of inclusiveness. Inclusiveness driven by rhetoric in political speeches and storylines on the big screen. Inclusiveness that fosters symbiosis between the teachings of religious scriptures and those of constitutional law professors: an inclusiveness that begets the idea of India, which is a fair deal for all Indians.

…And Justice for all?

In the aftermath of this decision come further legal challenges. While the 'right to love' has been vindicated, the right to formalise this union through societal recognition remains to be established. This judgement paves the way for the acceptance of homosexual relationships, but not necessarily for a homosexual couple's right to marry. There are passages within Justice Chandrachud's visionary analysis which directly address this concern and advocate extending the 'full protection' of the law to the LGBT+ populace. They will certainly be instructive for future courts, and one remains hopeful that the long march to freedom for the LGBT+ community and its supporters will not come to a screeching halt through judicial intervention or State action. If anything, in view of this verdict, the other wings of government should bolster these efforts.

That said, social acceptance seldom waits on the sanction of the law.

The outpouring of public support which was witnessed through public demonstrations, social media advocacy and concerted efforts from so many quarters to bring down this draconian law needs to continue and consolidate. There are evils yet, and the path to genuine inclusiveness in this country (as in most others) is littered with thorns. And even greater resistance is likely to emerge when tackling some of these issues, which tend to hit closer home than others.

While this judgement entered into detailed discussion of the issue of consent, it remained disquietingly silent on a most contentious subject, perhaps because it was perceived to be beyond the terms of reference. The marital rape exception carved out in the Indian Penal Code, which keeps married relationships outside the purview of rape laws, remains a curse and a reminder that gender equality in this nation will only come at tremendous human cost. The institution of family, that sacrosanct space which even the most liberal courtrooms in India have sought to protect, stands threatened. Malignant patriarchy will raise its head and claim its pound of flesh before the dust settles, and in the interest of freedom, it shall be up to the Apex Court to ensure that it settles on the right side of history. Else, all our progress, howsoever incremental, may be undone by this one stain on our collective conscience.

 

*Agnidipto Tarafder is an Assistant Professor of Law at the National University of Juridical Sciences, Kolkata, where he teaches courses in Constitutional Law, Labour Law and Privacy.

*Arindrajit Basu recently finished his LLM (Public International Law) at the University of Cambridge and is a Policy Officer at the Centre for Internet & Society, Bangalore.

_________________________________________________________________________________________

[1] Gay Marriage Around the World, Pew Research Centre (Aug 8, 2017) available at http://www.pewforum.org/2017/08/08/gay-marriage-around-the-world-2013/.

[2] W. P. (Crl.) No. 76 of 2016 (Supreme Court of India).

[3] Aids Bhedbav Virodhi Andolan, Less than Gay: A Citizen’s Report on the Status of Homosexuality in India (Nov-Dec, 1991) available at https://s3.amazonaws.com/s3.documentcloud.org/documents/1585664/less-than-gay-a-citizens-report-on-the-status-of.pdf.

[4] P.P Singh, 377 battle at journey’s end (September 6, 2018) available at https://indianexpress.com/article/explained/section-377-verdict-supreme-court-decriminalisation-gay-sex-lgbtq-5342008/.

[5] (2009) 160 DLT 277; W.P. (C) No.7455/2001 of 2009 (Delhi HC).

[6] Sangeeta Barooah Pisharoty, It is like reversing the motion of the earth, The Hindu (December 20, 2013) available at https://www.thehindu.com/features/metroplus/society/it-is-like-reversing-the-motion-of-the-earth/article5483306.ece.

[7] Id.

[8] (2014) 1 SCC 1 (Supreme Court of India).

[9] Ibid, at para 42.

[10] Gautam Bhatia, The unbearable wrongness of Koushal v Naz Foundation, Ind Con Law Phil (December 11, 2013)

[11] supra note 8, at para 43.

[12] Manjunath, India’s UN Vote: A Reflection of Our Deep Seated Anti-Gay Sentiments, Amnesty International (Apr 20, 2015) available at https://amnesty.org.in/indias-un-vote-reflection-societys-deep-seated-anti-gay-prejudice/.

[13] The concept of curative petitions was laid down in Rupa Ashok Hurra v. Ashok Hurra, (2002) 4 SCC 388 (Supreme Court of India).

[14] Ajay Kumar, All you need to know about the SC’s decision to reopen the Section 377 debate, FIRSTPOST (February 3, 2016) available at https://www.firstpost.com/india/all-you-need-to-know-about-the-scs-decision-to-reopen-the-section-377-debate-2610680.html.

[15] 2017 (10) SCC 1 (Supreme Court of India).

[16] The Wolfenden Report, Brit. J. Vener. Dis. (1957) 33, 205 available at https://sti.bmj.com/content/sextrans/33/4/205.full.pdf.

[17] Griswold v Connecticut, 381 US 479.

[18] Gautam Bhatia, Indian Supreme Court reserves judgment on the de-criminalisation of Homosexuality, OHRH Blog (August 15, 2018) available at http://ohrh.law.ox.ac.uk/the-indian-supreme-court-reserves-judgment-on-the-de-criminalisation-of-homosexuality/.

[19] Krishnadas Rajagopal, Supreme Court refers plea to decriminalize homosexuality under Section 377 to larger bench, The Hindu (January 8, 2018) available at https://www.thehindu.com/news/national/supreme-court-refers-377-plea-to-larger-bench/article22396250.ece.

[20] Puttaswamy, paras 124-28.

[21] Aditi Singh, Government leaves decision on Section 377 to the wisdom of Supreme Court, LIVEMINT (July 11, 2018) available at https://www.livemint.com/Politics/fMReaXRcldOWyY20ELJ0GK/Centre-leaves-it-to-Supreme-Court-to-decide-on-Section-377.html.

[22] supra note 2, at para 20.

[23] Ibid.

[24] Express News Service, Lok Sabha votes against Shashi Tharoor’s bill to decriminalize homosexuality again, Indian Express (March 12, 2016) available at https://indianexpress.com/article/india/india-news-india/decriminalising-homosexuality-lok-sabha-votes-against-shashi-tharoors-bill-again/.

[25] Navtej Johar v. Union of India, W. P. (Crl.) No. 76 of 2016 (Supreme Court of India) at para 1.

[26] Ibid, at  para 2.

[27] Ibid, at para 82.

[28]Ibid, at para 224.

[29] Ibid, at para 253.

[30] Ibid.

[31] Separate Opinion, RF Nariman, paras 1-20.

[32] Ibid, at paras 28-9.

[33] Ibid. Lawrence v Texas, 539 US 558 (2003), discussed in paras 108-09.

[34] Ibid, at para 82.

[35] Separate Opinion, DY Chandrachud, at para 28.

[36] Ibid, at para 56-7, 61.

[37] Supra note 20, at para 118-9.

[38] ADM Jabalpur v Shiv Kant Shukla (1976) 2 SCC 521. (Supreme Court of India)

[39] Air India v Nergesh Meerza (1981) 4 SCC 335. (Supreme Court of India)

[40] Supra note 25, at paras 36-41.

[41] Ibid, at paras 42-43, 56.

[42] Lawrence Lessig, The Regulation of Social Meaning, 62 University of Chicago Law Review 943, 947 (1995).

[43] Brown v. Board of Education of Topeka, 347 U.S. 483.

[44] David Smith, Little Rock Nine: The day young students shattered racial segregation, The Guardian (September 24, 2017) available at https://www.theguardian.com/world/2017/sep/24/little-rock-arkansas-school-segregation-racism.

[45]Michael Combs and Gwendolyn Combs, Revisiting Brown v. Board of Education: A Cultural, Historical-Legal, and Political Perspective (2005).

[46] Poulomi Saha, RSS on 377: Gay sex not a crime but is unnatural, India Today (September 6, 2018) available at https://www.indiatoday.in/india/story/rss-on-section-377-verdict-gay-sex-not-a-crime-but-is-unnatural-1333414-2018-09-06.

[47] S Venkataraman and H Varuganti, A Hindu approach to LGBT Rights, Swarajya (July 4, 2015) available at https://swarajyamag.com/culture/a-hindu-approach-to-lgbt-rights.

Discrimination in the Age of Artificial Intelligence

by Arindrajit Basu — last modified Oct 26, 2018 02:47 PM
The dawn of Artificial Intelligence (AI) has been celebrated by both government and industry across the globe. AI offers the potential to augment many existing bureaucratic processes and improve human capacity, if implemented in accordance with principles of the rule of law and international human rights norms. Unfortunately, AI-powered solutions have often been implemented in ways that have resulted in the automation, rather than mitigation, of existing societal inequalities.

This was originally published by Oxford Human Rights Hub on October 23, 2018


Artificial Intelligence

Image Credit: Sarla Catt via Flickr, used under a Creative Commons license available at https://creativecommons.org/licenses/by/2.0/

In the international human rights law context, AI solutions pose a threat to norms prohibiting discrimination. International human rights law recognizes that discrimination may take place in two ways: directly or indirectly. Direct discrimination occurs when an individual is treated less favourably than someone else similarly situated, on one of the grounds prohibited in international law, which, as per the Human Rights Committee, include race, colour, sex, language, religion, political or other opinion, national or social origin, property, birth or other status. Indirect discrimination occurs when a policy, rule or requirement is 'outwardly neutral' but has a disproportionate impact on certain groups that are meant to be protected by one of the prohibited grounds of discrimination. A clear example of indirect discrimination recognized by the European Court of Human Rights arose in DH & Ors v Czech Republic. The ECtHR struck down an apparently neutral set of statutory rules, which implemented tests designed to evaluate the intellectual capability of children but resulted in an excessively high proportion of minority Roma children scoring poorly and consequently being sent to special schools, possibly because the tests were blind to cultural and linguistic differences. This case acts as a useful analogy for the potential disparate impacts of AI and should serve as useful precedent for future litigation against AI-driven solutions.

Indirect discrimination by AI may occur at two stages. The first is the use of incomplete or inaccurate training data, which results in the algorithm processing data that may not accurately reflect reality. Cathy O'Neil explains this with a simple example. There are two types of crimes: those that are 'reported' and those that are only 'found' if a policeman is patrolling the area. The first category includes serious crimes such as murder or rape, while the second includes petty crimes such as vandalism or possession of small quantities of illicit drugs. Increased police surveillance in areas of US cities where Black or Hispanic people reside leads to more crimes being 'found' there. The data is thus likely to suggest that these communities commit a higher proportion of crimes than they actually do: indirect discrimination that has been empirically shown through research published by ProPublica.

Discrimination may also occur at the stage of data processing, which happens inside a metaphorical 'black box' that accepts inputs and generates outputs without revealing to the human developer how the data was processed. This conundrum is compounded by the fact that algorithms are often used to solve an amorphous problem, attempting to break a complex question down into a simple answer. An example is the development of 'risk profiles' of individuals for the determination of insurance premiums. Data might show that accidents are more likely to take place in inner cities due to their more densely packed populations. Racial and ethnic minorities tend to reside disproportionately in these areas, which means that algorithms could learn that minorities are more likely to get into accidents, thereby generating an outcome (a 'risk profile') that indirectly discriminates on grounds of race or ethnicity.
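This proxy effect can be sketched with a toy simulation. All of the numbers below are hypothetical and chosen only for illustration, not drawn from any study cited here: a pricing rule that never sees race, only neighbourhood, still ends up charging minorities more on average once neighbourhood and race are correlated.

```python
import random

random.seed(0)

# Hypothetical illustration: a premium rule that uses only neighbourhood,
# never race, can still produce racially skewed outcomes when the two correlate.
def make_person():
    minority = random.random() < 0.3
    # Assumed correlation: minorities are more likely to live in the inner city.
    inner_city = random.random() < (0.8 if minority else 0.2)
    # Accident risk here depends only on where one drives, never on race.
    accident = random.random() < (0.3 if inner_city else 0.1)
    return minority, inner_city, accident

people = [make_person() for _ in range(100_000)]

def accident_rate(rows):
    return sum(accident for _, _, accident in rows) / len(rows)

# The "model": each person's premium is the observed accident rate of their area.
premium = {
    True: accident_rate([p for p in people if p[1]]),       # inner city
    False: accident_rate([p for p in people if not p[1]]),  # elsewhere
}

def average_premium(group):
    return sum(premium[inner] for _, inner, _ in group) / len(group)

minorities = [p for p in people if p[0]]
majority = [p for p in people if not p[0]]
print(f"minority avg premium: {average_premium(minorities):.3f}")
print(f"majority avg premium: {average_premium(majority):.3f}")
```

Even though race never enters the premium rule, the minority group pays systematically more, because residence acts as a proxy for it. This is the structure of the indirect discrimination described above.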

It would be wrong to ignore discrimination, both direct and indirect, that occurs as a result of human prejudice. The key difference between that and discrimination by AI lies in the ability of other individuals to compel a human decision-maker to explain the factors that led to the outcome in question and to test its validity against principles of human rights. The increasing amounts of discretion, and consequently power, being delegated to autonomous systems mean that accountability principles which audit and check indirect discrimination need to be built into the design of these systems. In their absence, we risk surrendering core tenets of human rights law to the whims of an algorithmically crafted reality.

Conceptualizing an International Security Regime for Cyberspace

by Elonnai Hickok and Arindrajit Basu — last modified Oct 26, 2018 03:09 PM
This paper was published as part of the Briefings from the Research and Advisory Group (RAG) of the Global Commission on the Stability of Cyberspace (GCSC) for the Full Commission Meeting held at Bratislava in 2018.

Policy-makers often use past analogous situations to reframe questions and resolve dilemmas in current issues. However, without sufficient analysis of both the present situation and the historical precedent being considered, the effectiveness of the analogy is limited. This applies across contexts, including cyberspace. For example, there exists a body of literature, including the Tallinn Manual, which applies key aspects (structure, process and techniques) of the various international legal regimes regulating the global commons (air, sea, space and the environment) towards developing global norms for the governance of cyberspace.

Given the recent deadlock at the Group of Governmental Experts (GGE), owing to a clear ideological split among participating states, it is clear that consensus on the applicability of traditional international law norms drawn from other regimes will not emerge if talks continue without a major overhaul of the present format of negotiations. The Achilles' heel of the GGE thus far has been a deracinated approach to the norms formulation process: excessive focus on the content and language of the applicable norm rather than the procedure underscoring its evolution, limited state and non-state participation, and a lack of consideration for the social, cultural, economic and strategic contexts through which norms emerge at the global level. Even if the GGE process became more inclusive and included all United Nations members, the strategies preceding the negotiation process must be designed to facilitate consensus.

There exists, to date, no scholarship tracing the negotiation processes that led to the forging of successful analogous universal regimes, or investigating the nature of the normative contestation that enabled the evolution of the core norms shaping those regimes. To develop an effective global regime governing cyberspace, we must consider if and how existing international law or norms for other global commons might apply to 'cyberspace', but also transcend this frame into more nuanced thinking about the techniques and frameworks that have been successful in consensus building. This paper focuses on the latter, and embarks on an assessment of how universal regimes maximized functional utility through global interactions and shaped legal and normative frameworks that resulted, for some time at least, in broad consensus.


Click to read more

Lessons from US response to cyber attacks

by Arindrajit Basu — last modified Nov 01, 2018 05:53 AM
Publicly attributing the attacks to a state or non-state actor is vital for building a credible cyber deterrence strategy.

The article was published in Hindu Businessline on October 30, 2018. The article was edited by Elonnai Hickok.


In September, amidst the brewing of a newfound cross-continental romance between Kim Jong-un and Donald Trump, the US Department of Justice filed a criminal complaint indicting North Korean hacker Park Jin Hyok for playing a role in at least three massive cyber operations against the US: the Sony data breach of 2014, the Bangladesh Bank heist of 2016 and the WannaCry ransomware attack of 2017. This indictment was followed on October 4 by another, of seven officers of the GRU, Russia's military intelligence agency, for "persistent and sophisticated computer intrusions." Evidence adduced in support included forensic cyber evidence, such as similarities in lines of code and analysis of malware, as well as factual details regarding the relationship between the employers of the indicted individuals and the state in question.

While it is unlikely that prosecutions will ensue, indicting individuals responsible for cyber attacks offers an attractive option for states looking to develop a credible cyber deterrence strategy.

Attributing cyber attacks

Technical uncertainty in attributing attacks to a specific actor has long deterred states from adopting defensive or offensive measures in response to an attack and from garnering support in multilateral fora. Cyber attacks are multi-stage, multi-step and multi-jurisdictional, which complicates the attribution process and distances the attacker from the infected networks.

Experts at the RAND Corporation have argued that technical challenges to attribution should not detract from international efforts to adopt a robust, integrated and multi-disciplinary approach to attribution, which should be seen as a political process operating in symbiosis with technical efforts. A victim state must communicate its findings and supporting evidence to the attacking state in a bid to apply political pressure.

Clear publication of the attribution process becomes crucial, as it furthers public confidence in the investigating authorities, enables information exchange among security researchers and fosters deterrence of the adversary and potential adversaries.

Although public attributions need not take the form of a formal indictment and are often conducted through statements by foreign ministries, a criminal indictment is more legitimate as it needs to comply with the rigorous legal and evidentiary standards required by the country’s legal system. Further, an indictment allows for the attack to be conceptualised as a violation of the rule of law in addition to being a geopolitical threat vector.

Lessons for India

India is yet to publicly attribute a cyber attack to any state or non-state actor. This is surprising given that an overwhelming percentage of attacks on Indian websites are perpetrated by foreign states or non-state actors, with 35 per cent of attacks emanating from China, as per a report by the Indian Computer Emergency Response Team (CERT-IN), the national nodal agency under the Ministry of Electronics and Information Technology (MEITY) which deals with cyber threats.

Along with other bodies, such as the National Critical Information Infrastructure Protection Centre (NCIIPC), which is the nodal central agency for the protection of critical information infrastructure, CERT-IN forms part of an ecosystem of nodal agencies designed to guarantee national cyber security.

There are three key lessons that policy makers involved in this ecosystem can take away from the WannaCry attribution process and the Park indictment. First, there is a need for multi-stakeholder collaboration through sharing of research, joint investigations and combined vulnerability identification among the various actors employed by the government, law enforcement authorities and private cyber security firms.

The affidavit suggested that the FBI had used information from various law enforcement personnel, computer scientists at the FBI, Mandiant (a cyber security firm retained by the US Attorney’s Office) and publicly available materials produced by cyber security companies. Second, the standards of attribution need to demonstrate compliance both with the evidentiary requirements of Indian criminal law and with the requirements of the international law on state responsibility. The latter requires an attribution to demonstrate that a state had ‘effective control’ over the non-state actor.

Finally, the attribution must be communicated to the adversary in a manner that does not risk military escalation. Despite the delicate timing of the indictment, Park’s prosecution by the FBI did not dampen the temporary thaw in relations between the US and North Korea.

While building capacity to improve resilience, detect attacks and sharpen attribution capabilities should be a priority, we need to remember that regardless of breakthroughs in both human and infrastructural capacities, attributing cyber attacks will never be an exercise in certainty.

India will need to marry its improved capacity with strategic geopolitical posturing. Lengthy indictments may not deter all potential adversaries but may be a tool in fostering a culture of accountability in cyberspace.

Clarification on the Information Security Practices of Aadhaar Report

by Amber Sinha and Srinivas Kodali — last modified Nov 05, 2018 12:08 PM
We are issuing a second clarificatory statement on our report titled “Information Security Practices of Aadhaar (or lack thereof): A documentation of public availability of Aadhaar numbers with sensitive personal financial information” published on May 1, 2017.

 

The report concerned can be accessed here, and the first clarificatory statement (dated May 16, 2017) can be accessed here.


This clarificatory statement is being issued in response to reports that misrepresent our research. In light of repeated questions we have received, which seem to emanate from a misunderstanding of our report, we would like to make the following clarifications.

  1. Our research involved documenting public webpages on the four government websites listed in our report and taking illustrative screenshots of them (included in the report). These screenshots were taken to demonstrate that the vulnerability existed.

  2. The figure of 130-135 million Aadhaar numbers quoted in our Report is, as clearly stated, derived directly by adding the aggregate numbers (of beneficiaries/individuals whose data were listed on the three government websites concerned) as published by the portals themselves in the MIS reports publicly available on the portals. The numbers are as follows:

    • 10,97,60,343 from NREGA,

    • 63,95,317 from NSAP, and

    • 2,05,60,896 from Chandranna Bima (screenshots included in the report).

    We did not arrive at this number by downloading data ourselves but by adding the figures published on the government websites. To our knowledge, no harm, financial or otherwise, has been caused to anyone due to this public availability. Further, it must be noted that we published the report only after ascertaining that the websites in question had masked or removed the data. Therefore, our report only points to the possibility that harm could have been caused by malicious actors before the data was taken down. However, we are not aware of any such cases of exploitation, nor do we suggest so anywhere in our report.

We sincerely hope that this clarification helps with a clearer comprehension of the argument and implications of the said report. We urge those who are using our report in their research to reach out to us so as to prevent future misinterpretation of the report.

— Amber Sinha and Srinivas Kodali

 

DIDP #32 On ICANN's Fellowship Program

by Akriti Bopanna last modified Nov 12, 2018 03:58 PM
In furtherance of its capacity building functions, ICANN selects Fellows for every public meeting. These are individuals from underserved and underrepresented communities who are trained to become active participants in the ICANN community.

These fellows are assigned a mentor and receive training on ICANN's various areas of engagement. They are also given travel assistance to attend the meeting. While the process and selection criteria are detailed on its website, CIS had some questions about how these are executed.

Our DIDP questioned the following aspects:

  1. Has any individual received the ICANN Fellowship more than the stated maximum limit of 3 times?
  2. If so, who took the decision, and what justification was given for awarding it a fourth time and any other times after that?
  3. What countries did any such individuals belong to?
  4. How many times has the limit of 3 been breached while giving fellowships?
  5. What recording mechanisms are used to ensure that the awarding of these fellowships is tracked, stored and updated? Are these records made available publicly or privately anywhere?

Budapest Convention and the Information Technology Act

by Vipul Kharbanda last modified Nov 20, 2018 04:18 PM
The Convention on Cybercrime adopted in Budapest (“Convention”) is the first and one of the most important multilateral treaties addressing the issue of internet and computer crimes.

Introduction
It was drafted by the Council of Europe along with Canada, Japan, South Africa and the United States of America.[1] The importance of the Convention is also indicated by the fact that adherence to it (whether by outright adoption or by otherwise making domestic laws in compliance with it) is one of the conditions mentioned in the Clarifying Lawful Overseas Use of Data Act passed in the USA (CLOUD Act), whereby a process has been established to enable security agencies of India and the United States to directly access data stored in each other’s territories. Our analysis of the CLOUD Act vis-à-vis India can be found here. It is in continuation of that analysis that we have undertaken here a detailed comparison of the Information Technology Act, 2000 (“IT Act”) and how it stacks up against the provisions of Chapter I and Chapter II of the Convention.[2]

Before we get into a comparison of the Convention with the IT Act, we must point out the distinction between the two legal instruments, for the benefit of readers from a non-legal background. An international instrument such as the Convention on Cybercrime is (generally speaking) essentially a promise made by the States which are party to that instrument that they will change or modify their local laws to bring them in line with the requirements or principles laid out in the instrument. If a signatory State does not make such amendments to its local laws, the citizens of that State usually cannot enforce any rights that they may have been granted under the international instrument. The situation is the same with the Convention on Cybercrime: unless a signatory State amends its local laws to bring them in line with the provisions of the Convention, there cannot be any enforcement of the provisions of the Convention within that State.[3] This, however, is not the case for India and the IT Act, since India is not a signatory to the Convention on Cybercrime and is therefore not obligated to amend its local laws to bring them in line with the Convention.

Although India and the Council of Europe cooperated to amend the IT Act through the major amendments brought about vide the Information Technology (Amendment) Act, 2008, India has still not become a signatory to the Convention on Cybercrime. The reasons for this remain unclear; it has been suggested that they range from the fact that India was not involved in the original drafting to concerns of sovereignty regarding the provisions for international cooperation and extradition.[4]

 

Convention on Cybercrime

Information Technology Act, 2000

Article 2 – Illegal access

Each Party shall adopt such legislative and other measures as may be necessary to establish as criminal offences under its domestic law, when committed intentionally, the access to the whole or any part of a computer system without right. A Party may require that the offence be committed by infringing security measures, with the intent of obtaining computer data or other dishonest intent, or in relation to a computer system that is connected to another computer system.

Section 43

If any person without permission of the owner or any other person who is incharge of a computer, computer system or computer network -

(a) accesses or secures access to such computer, computer system or computer network or computer resource

 

Section 66

If any person, dishonestly or fraudulently, does any act referred to in section 43, he shall be punishable with imprisonment for a term which may extend to three years or with fine which may extend to five lakh rupees or with both.

 

The Convention gives States the right to further qualify the offence of “illegal access” or “hacking” by adding elements such as infringing security measures, special intent to obtain computer data, other dishonest intent that justifies criminal culpability, or the requirement that the offence is committed in relation to a computer system that is connected remotely to another computer system.[5] However, Indian law deals with the distinction by making the act of unauthorised access without dishonest or fraudulent intent a civil wrong, for which the offender is liable to pay compensation. If the same act is done with dishonest or fraudulent intent, it is treated as a criminal offence punishable with fine and imprisonment which may extend to three years.

It must be noted that this provision was included in the Act only through the Amendment of 2008 and was not present in the Information Technology Act, 2000 in its original iteration.

Convention on Cybercrime

Information Technology Act, 2000

Article 3 – Illegal Interception

Each Party shall adopt such legislative and other measures as may be necessary to establish as criminal offences under its domestic law, when committed intentionally, the interception without right, made by technical means, of non-public transmissions of computer data to, from or within a computer system, including electromagnetic emissions from a computer system carrying such computer data. A Party may require that the offence be committed with dishonest intent, or in relation to a computer system that is connected to another computer system.

 

NA

Although the Information Technology Act, 2000 does not specifically criminalise the interception of communications by a private person, it is possible that the act of accessing a “computer network” under section 43(a) could be interpreted as including unauthorised interception within its ambit.

The other way in which interception may be considered illegal is through a combined reading of sections 69 (interception) and 45 (residuary penalty) with Rule 3 of the Information Technology (Procedure and Safeguards for Interception, Monitoring and Decryption of Information) Rules, 2009, which prohibits interception, monitoring and decryption of information under section 69(2) of the IT Act except in the manner provided by the Rules. However, it must be noted that section 69(2) only deals with interception by the government, and Rule 3 only provides procedural safeguards for such interception. It could therefore be argued that the prohibition under Rule 3 applies only to the government and not to private individuals, since section 69, the provision under which Rule 3 has been issued, is itself not applicable to private individuals.

 

Convention on Cybercrime

Information Technology Act, 2000

 

Article 4 – Data interference

1 Each Party shall adopt such legislative and other measures as may be necessary to establish as criminal offences under its domestic law, when committed intentionally, the damaging, deletion, deterioration, alteration or suppression of computer data without right.

2 A Party may reserve the right to require that the conduct described in paragraph 1 result in serious harm.

Section 43

If any person without permission of the owner or any other person who is incharge of a computer, computer system or computer network -

(d) damages or causes to be damaged any computer, computer system or computer network, data, computer data base or any other programmes residing in such computer, computer system or computer network;

(i) destroys, deletes or alters any information residing in a computer resource or diminishes its value or utility or affects it injuriously by any means;

(j) Steals, conceals, destroys or alters or causes any person to steal, conceal, destroy or alter any computer source code used for a computer resource with an intention to cause damage,

he shall be liable to pay damages by way of compensation not exceeding one crore rupees to the person so affected. (change vide ITAA 2008)

Section 66

If any person, dishonestly or fraudulently, does any act referred to in section 43, he shall be punishable with imprisonment for a term which may extend to three years or with fine which may extend to five lakh rupees or with both.

 

Damage, deletion, diminution in value and alteration of data are crimes under section 66 read with section 43 of the IT Act if done with fraudulent or dishonest intention. While the Convention only requires such acts to be committed intentionally, the IT Act additionally requires that the intention be dishonest or fraudulent for the act to amount to a criminal offence; otherwise it only incurs civil consequences, requiring the perpetrator to pay damages by way of compensation.

It must be noted that the optional requirement that such an act cause serious harm has not been adopted by Indian law; the act of damage, deletion, etc., is by itself enough to constitute the offence.

As per the Explanatory Report to the Convention on Cybercrime, “Suppressing of computer data means any action that prevents or terminates the availability of the data to the person who has access to the computer or the data carrier on which it was stored.” Strictly speaking the act of suppression of data in another system is not covered by the language of section 43, but looking at the tenor of the section it is likely that if a court is faced with a situation of intentional/malicious denial of access to data, the court could expand the scope of the term “damage” as contained in sub-section (d) to include such malicious acts.

 

Convention on Cybercrime

Information Technology Act, 2000

 

Article 5 – System interference

Each Party shall adopt such legislative and other measures as may be necessary to establish as criminal offences under its domestic law, when committed intentionally, the serious hindering without right of the functioning of a computer system by inputting, transmitting, damaging, deleting, deteriorating, altering or suppressing computer data.

Section 43

If any person without permission of the owner or any other person who is incharge of a computer, computer system or computer network -

(e) disrupts or causes disruption of any computer, computer system or computer network;

Explanation - for the purposes of this section -

(i) "Computer Contaminant" means any set of computer instructions that are designed -

(a) to modify, destroy, record, transmit data or programme residing within a computer, computer system or computer network; or

(b) by any means to usurp the normal operation of the computer, computer system, or computer network;

(iii) "Computer Virus" means any computer instruction, information, data or programme that destroys, damages, degrades or adversely affects the performance of a computer resource or attaches itself to another computer resource and operates when a programme, data or instruction is executed or some other event takes place in that computer resource;

 

Section 66

If any person, dishonestly or fraudulently, does any act referred to in section 43, he shall be punishable with imprisonment for a term which may extend to three years or with fine which may extend to five lakh rupees or with both.

 

Causing hindrance to the functioning of a computer system with fraudulent or dishonest intention is an offence under the IT Act. While the Convention only requires such acts to be committed intentionally, the IT Act additionally requires that the intention be dishonest or fraudulent for the act to amount to a criminal offence; otherwise it only incurs civil consequences, requiring the perpetrator to pay damages by way of compensation.

The IT Act does not require such disruption to be caused in any particular manner as is required under the Convention, although the acts of introducing computer viruses as well as damaging or deleting data themselves have been classified as offences under the IT Act.

Convention on Cybercrime

Information Technology Act, 2000

 

Article 6 – Misuse of devices

1 Each Party shall adopt such legislative and other measures as may be necessary to establish as criminal offences under its domestic law, when committed intentionally and without right:

a the production, sale, procurement for use, import, distribution or otherwise making available of:

i a device, including a computer program, designed or adapted primarily for the purpose of committing any of the offences established in accordance with Articles 2 through 5;

ii a computer password, access code, or similar data by which the whole or any part of a computer system is capable of being accessed, with intent that it be used for the purpose of committing any of the offences established in Articles 2 through 5; and

b the possession of an item referred to in paragraphs a.i or ii above, with intent that it be used for the purpose of committing any of the offences established in Articles 2 through 5. A Party may require by law that a number of such items be possessed before criminal liability attaches.

2 This article shall not be interpreted as imposing criminal liability where the production, sale, procurement for use, import, distribution or otherwise making available or possession referred to in paragraph 1 of this article is not for the purpose of committing an offence established in accordance with Articles 2 through 5 of this Convention, such as for the authorised testing or protection of a computer system.

3 Each Party may reserve the right not to apply paragraph 1 of this article, provided that the reservation does not concern the sale, distribution or otherwise making available of the items referred to in paragraph 1 a.ii of this article.

NA

 

This provision establishes as a separate and independent criminal offence the intentional commission of specific illegal acts regarding certain devices or access data to be misused for the purpose of committing offences against the confidentiality, integrity and availability of computer systems or data. While the IT Act does not itself criminalise the production, sale, procurement for use, import or distribution of devices designed or adapted for such purposes, sub-section (g) of section 43, along with section 120A of the Indian Penal Code, 1860, which deals with “criminal conspiracy”, could perhaps be used to bring such acts within the scope of the penal statutes.

Convention on Cybercrime

Information Technology Act, 2000

 

Article 7 – Computer related forgery

Each Party shall adopt such legislative and other measures as may be necessary to establish as criminal offences under its domestic law, when committed intentionally and without right, the input, alteration, deletion, or suppression of computer data, resulting in inauthentic data with the intent that it be considered or acted upon for legal purposes as if it were authentic, regardless whether or not the data is directly readable and intelligible. A Party may require an intent to defraud, or similar dishonest intent, before criminal liability attaches.

NA

 

While the acts of deletion, alteration and suppression of data are themselves offences, as discussed above, there is no specific offence of committing such acts for the purpose of forgery. This does not mean, however, that the crime of online forgery is not punishable in India at all; such crimes would be dealt with under the relevant provisions of the Indian Penal Code, 1860 (Chapter XVIII) read with section 4 of the IT Act.

Convention on Cybercrime

Information Technology Act, 2000

 

Article 8 – Computer-related fraud

Each Party shall adopt such legislative and other measures as may be necessary to establish as criminal offences under its domestic law, when committed intentionally and without right, the causing of a loss of property to another person by:

a any input, alteration, deletion or suppression of computer data,

b any interference with the functioning of a computer system,

with fraudulent or dishonest intent of procuring, without right, an economic benefit for oneself or for another person.

NA

Just as in the case of forgery, there is no specific provision in the IT Act making online fraud per se a crime. However, specific acts such as charging the services availed of by one person to the account of another (section 43(h)), identity theft (section 66C) and cheating by personation (section 66D) have been listed as criminal offences. Further, as with forgery, fraudulent acts to procure economic benefits would also be covered by the provisions of the Indian Penal Code that deal with cheating.

Convention on Cybercrime

Information Technology Act, 2000

 

Article 9 – Offences related to child pornography

1 Each Party shall adopt such legislative and other measures as may be necessary to establish as criminal offences under its domestic law, when committed intentionally and without right, the following conduct:

a producing child pornography for the purpose of its distribution through a computer system;

b offering or making available child pornography through a computer system;

c distributing or transmitting child pornography through a computer system;

d procuring child pornography through a computer system for oneself or for another person;

e possessing child pornography in a computer system or on a computer-data storage medium.

2 For the purpose of paragraph 1 above, the term "child pornography" shall include pornographic material that visually depicts:

a a minor engaged in sexually explicit conduct;

b a person appearing to be a minor engaged in sexually explicit conduct;

c realistic images representing a minor engaged in sexually explicit conduct.

3 For the purpose of paragraph 2 above, the term "minor" shall include all persons under 18 years of age. A Party may, however, require a lower age-limit, which shall be not less than 16 years.

4 Each Party may reserve the right not to apply, in whole or in part, paragraphs 1, subparagraphs d and e, and 2, sub-paragraphs b and c.

67 B Punishment for publishing or transmitting of material depicting children in sexually explicit act, etc. in electronic form.

Whoever,-

(a) publishes or transmits or causes to be published or transmitted material in any electronic form which depicts children engaged in sexually explicit act or conduct or

(b) creates text or digital images, collects, seeks, browses, downloads, advertises, promotes, exchanges or distributes material in any electronic form depicting children in obscene or indecent or sexually explicit manner or

(c) cultivates, entices or induces children to online relationship with one or more children for and on sexually explicit act or in a manner that may offend a reasonable adult on the computer resource or

(d) facilitates abusing children online or

(e) records in any electronic form own abuse or that of others pertaining to sexually explicit act with children,

shall be punished on first conviction with imprisonment of either description for a term which may extend to five years and with a fine which may extend to ten lakh rupees and in the event of second or subsequent conviction with imprisonment of either description for a term which may extend to seven years and also with fine which may extend to ten lakh rupees:

Provided that the provisions of section 67, section 67A and this section does not extend to any book, pamphlet, paper, writing, drawing, painting, representation or figure in electronic form-

(i) The publication of which is proved to be justified as being for the public good on the ground that such book, pamphlet, paper writing, drawing, painting, representation or figure is in the interest of science, literature, art or learning or other objects of general concern; or

(ii) which is kept or used for bonafide heritage or religious purposes

Explanation: For the purposes of this section, "children" means a person who has not completed the age of 18 years.

The publishing, transmission, creation, collection, seeking, browsing, etc. of child pornography is an offence under Indian law punishable with imprisonment of up to five years for a first offence and up to seven years for a subsequent offence, along with fine.

It is important to note that bona fide depictions for the public good, such as publication in pamphlets or in reading or educational material, are specifically excluded from the rigours of the section; similarly, material kept for heritage or religious purposes is also exempted. Such exceptions are in line with the intent of the Convention, since the Explanatory Report itself states that “The term ‘pornographic material’ in paragraph 2 is governed by national standards pertaining to the classification of materials as obscene, inconsistent with public morals or similarly corrupt. Therefore, material having an artistic, medical, scientific or similar merit may be considered not to be pornographic.”

Convention on Cybercrime

Information Technology Act, 2000

 

Article 10 – Offences related to infringements of copyright and related rights

1 Each Party shall adopt such legislative and other measures as may be necessary to establish as criminal offences under its domestic law the infringement of copyright, as defined under the law of that Party, pursuant to the obligations it has undertaken under the Paris Act of 24 July 1971 revising the Berne Convention for the Protection of Literary and Artistic Works, the Agreement on Trade-Related Aspects of Intellectual Property Rights and the WIPO Copyright Treaty, with the exception of any moral rights conferred by such conventions, where such acts are committed wilfully, on a commercial scale and by means of a computer system.

2 Each Party shall adopt such legislative and other measures as may be necessary to establish as criminal offences under its domestic law the infringement of related rights, as defined under the law of that Party, pursuant to the obligations it has undertaken under the International Convention for the Protection of Performers, Producers of Phonograms and Broadcasting Organisations (Rome Convention), the Agreement on Trade-Related Aspects of Intellectual Property Rights and the WIPO Performances and Phonograms Treaty, with the exception of any moral rights conferred by such conventions, where such acts are committed wilfully, on a commercial scale and by means of a computer system.

3 A Party may reserve the right not to impose criminal liability under paragraphs 1 and 2 of this article in limited circumstances, provided that other effective remedies are available and that such reservation does not derogate from the Party’s international obligations set forth in the international instruments referred to in paragraphs 1 and 2 of this article.

81 Act to have Overriding effect

The provisions of this Act shall have effect notwithstanding anything inconsistent therewith contained in any other law for the time being in force.

Provided that nothing contained in this Act shall restrict any person from exercising any right conferred under the Copyright Act, 1957 or the Patents Act, 1970

The use of the term "pursuant to the obligations it has undertaken" in both paragraphs makes it clear that a Contracting Party to the Convention is not bound to apply agreements cited (TRIPS, WIPO, etc.) to which it is not a Party; moreover, if a Party has made a reservation or declaration permitted under one of the agreements, that reservation may limit the extent of its obligation under the present Convention.

The IT Act does not try to intervene in the existing copyright regime of India and creates a special exemption for the Copyright Act and the Patents Act in the clause which provides this Act overriding effect. India’s obligations under the various treaties and conventions on intellectual property rights are enshrined in these legislations.[6]

Convention on Cybercrime

Information Technology Act, 2000

 

Article 11 – Attempt and aiding or abetting

1 Each Party shall adopt such legislative and other measures as may be necessary to establish as criminal offences under its domestic law, when committed intentionally, aiding or abetting the commission of any of the offences established in accordance with Articles 2 through 10 of the present Convention with intent that such offence be committed.

2 Each Party shall adopt such legislative and other measures as may be necessary to establish as criminal offences under its domestic law, when committed intentionally, an attempt to commit any of the offences established in accordance with Articles 3 through 5, 7, 8, and 9.1.a and c of this Convention.

3 Each Party may reserve the right not to apply, in whole or in part, paragraph 2 of this article.

84 B Punishment for abetment of offences

Whoever abets any offence shall, if the act abetted is committed in consequence of the abetment, and no express provision is made by this Act for the punishment of such abetment, be punished with the punishment provided for the offence under this Act.

Explanation: An act or offence is said to be committed in consequence of abetment, when it is committed in consequence of the instigation, or in pursuance of the conspiracy, or with the aid which constitutes the abetment.

 

84 C Punishment for attempt to commit offences

Whoever attempts to commit an offence punishable by this Act or causes such an offence to be committed, and in such an attempt does any act towards the commission of the offence, shall, where no express provision is made for the punishment of such attempt, be punished with imprisonment of any description provided for the offence, for a term which may extend to one-half of the longest term of imprisonment provided for that offence, or with such fine as is provided for the offence or with both.

As can be seen, both the attempt to commit an offence and the abetment of an offence have been criminalised under the IT Act.

Convention on Cybercrime

Information Technology Act, 2000

 

Article 12 – Corporate liability

1 Each Party shall adopt such legislative and other measures as may be necessary to ensure that legal persons can be held liable for a criminal offence established in accordance with this Convention, committed for their benefit by any natural person, acting either individually or as part of an organ of the legal person, who has a leading position within it, based on:

a a power of representation of the legal person;

b an authority to take decisions on behalf of the legal person;

c an authority to exercise control within the legal person.

2 In addition to the cases already provided for in paragraph 1 of this article, each Party shall take the measures necessary to ensure that a legal person can be held liable where the lack of supervision or control by a natural person referred to in paragraph 1 has made possible the commission of a criminal offence established in accordance with this Convention for the benefit of that legal person by a natural person acting under its authority.

3 Subject to the legal principles of the Party, the liability of a legal person may be criminal, civil or administrative.

4 Such liability shall be without prejudice to the criminal liability of the natural persons who have committed the offence.

85 Offences by Companies.

(1) Where a person committing a contravention of any of the provisions of this Act or of any rule, direction or order made there under is a Company, every person who, at the time the contravention was committed, was in charge of, and was responsible to, the company for the conduct of business of the company as well as the company, shall be guilty of the contravention and shall be liable to be proceeded against and punished accordingly:

Provided that nothing contained in this sub-section shall render any such person liable to punishment if he proves that the contravention took place without his knowledge or that he exercised all due diligence to prevent such contravention.

(2) Notwithstanding anything contained in sub-section (1), where a contravention of any of the provisions of this Act or of any rule, direction or order made there under has been committed by a company and it is proved that the contravention has taken place with the consent or connivance of, or is attributable to any neglect on the part of, any director, manager, secretary or other officer of the company, such director, manager, secretary or other officer shall also be deemed to be guilty of the contravention and shall be liable to be proceeded against and punished accordingly.

Explanation-

For the purposes of this section

(i) "Company" means any Body Corporate and includes a Firm or other Association of individuals; and

(ii) "Director", in relation to a firm, means a partner in the firm.

The liability of a company or other body corporate has been laid out in the IT Act in a manner similar to the Budapest Convention. While the test to determine the relationship between the legal entity and the natural person who has committed the act on its behalf is a little more detailed[7] in the Convention, the substance of the test is laid out in the IT Act as a person who “was in charge of, and was responsible to, the company”.

Convention on Cybercrime

Information Technology Act, 2000

 

Article 14

1 Each Party shall adopt such legislative and other measures as may be necessary to establish the powers and procedures provided for in this section for the purpose of specific criminal investigations or proceedings.

2 Except as specifically provided otherwise in Article 21, each Party shall apply the powers and procedures referred to in paragraph 1 of this article to:

a the criminal offences established in accordance with Articles 2 through 11 of this Convention;

b other criminal offences committed by means of a computer system; and

c the collection of evidence in electronic form of a criminal offence.

3 a Each Party may reserve the right to apply the measures referred to in Article 20 only to offences or categories of offences specified in the reservation, provided that the range of such offences or categories of offences is not more restricted than the range of offences to which it applies the measures referred to in Article 21. Each Party shall consider restricting such a reservation to enable the broadest application of the measure referred to in Article 20.

b Where a Party, due to limitations in its legislation in force at the time of the adoption of the present Convention, is not able to apply the measures referred to in Articles 20 and 21 to communications being transmitted within a computer system of a service provider, which system:

i is being operated for the benefit of a closed group of users, and

ii does not employ public communications networks and is not connected with another computer system, whether public or private, that Party may reserve the right not to apply these measures to such communications.

Each Party shall consider restricting such a reservation to enable the broadest application of the measures referred to in Articles 20 and 21.

NA

This is a provision of a general nature that need not have an equivalent in domestic law. The provision clarifies that all the powers and procedures provided for in this section (Articles 14 to 21) are for the purpose of “specific criminal investigations or proceedings”.

Convention on Cybercrime

Information Technology Act, 2000

 

Article 15 – Conditions and safeguards

1 Each Party shall ensure that the establishment, implementation and application of the powers and procedures provided for in this Section are subject to conditions and safeguards provided for under its domestic law, which shall provide for the adequate protection of human rights and liberties, including rights arising pursuant to obligations it has undertaken under the 1950 Council of Europe Convention for the Protection of Human Rights and Fundamental Freedoms, the 1966 United Nations International Covenant on Civil and Political Rights, and other applicable international human rights instruments, and which shall incorporate the principle of proportionality.

2 Such conditions and safeguards shall, as appropriate in view of the nature of the procedure or power concerned, inter alia, include judicial or other independent supervision, grounds justifying application, and limitation of the scope and the duration of such power or procedure.

3 To the extent that it is consistent with the public interest, in particular the sound administration of justice, each Party shall consider the impact of the powers and procedures in this section upon the rights, responsibilities and legitimate interests of third parties.

NA

This again is a provision of a general nature which need not have a corresponding clause in domestic law. India is a party to a number of international human rights conventions and treaties: it has acceded to the International Covenant on Civil and Political Rights (ICCPR), 1966 and the International Covenant on Economic, Social and Cultural Rights (ICESCR), 1966; ratified the International Convention on the Elimination of All Forms of Racial Discrimination (ICERD), 1965, with certain reservations; signed the Convention on the Elimination of All Forms of Discrimination against Women (CEDAW), 1979, with certain reservations, and the Convention on the Rights of the Child (CRC), 1989; and signed the Convention against Torture and Other Cruel, Inhuman or Degrading Treatment or Punishment (CAT), 1984. Further, the right to life guaranteed under Article 21 of the Constitution takes within its fold a number of human rights, such as the right to privacy. Freedom of expression, the right to a fair trial, freedom of assembly and the right against arbitrary arrest and detention are all fundamental rights guaranteed under the Constitution of India, 1950.[8]

In addition, India has enacted the Protection of Human Rights Act, 1993, providing for the constitution of a National Human Rights Commission, State Human Rights Commissions in the States and Human Rights Courts for the better protection of “human rights” and for matters connected therewith or incidental thereto. Thus, there does exist a statutory mechanism for the enforcement of human rights[9] under Indian law. It must be noted that the statutory definition of human rights also incorporates rights embodied in the International Covenants which are enforceable by courts in India.

Convention on Cybercrime

Information Technology Act, 2000

 

Article 16 – Expedited preservation of stored computer data

1 Each Party shall adopt such legislative and other measures as may be necessary to enable its competent authorities to order or similarly obtain the expeditious preservation of specified computer data, including traffic data, that has been stored by means of a computer system, in particular where there are grounds to believe that the computer data is particularly vulnerable to loss or modification.

2 Where a Party gives effect to paragraph 1 above by means of an order to a person to preserve specified stored computer data in the person’s possession or control, the Party shall adopt such legislative and other measures as may be necessary to oblige that person to preserve and maintain the integrity of that computer data for a period of time as long as necessary, up to a maximum of ninety days, to enable the competent authorities to seek its disclosure. A Party may provide for such an order to be subsequently renewed.

3 Each Party shall adopt such legislative and other measures as may be necessary to oblige the custodian or other person who is to preserve the computer data to keep confidential the undertaking of such procedures for the period of time provided for by its domestic law.

4 The powers and procedures referred to in this article shall be subject to Articles 14 and 15.

Article 17 – Expedited preservation and partial disclosure of traffic data

1 Each Party shall adopt, in respect of traffic data that is to be preserved under Article 16, such legislative and other measures as may be necessary to:

a ensure that such expeditious preservation of traffic data is available regardless of whether one or more service providers were involved in the transmission of that communication; and

b ensure the expeditious disclosure to the Party’s competent authority, or a person designated by that authority, of a sufficient amount of traffic data to enable the Party to identify the service providers and the path through which the communication was transmitted.

2 The powers and procedures referred to in this article shall be subject to Articles 14 and 15.

29 Access to computers and data.

(1) Without prejudice to the provisions of sub-section (1) of section 69, the Controller or any person authorized by him shall, if he has reasonable cause to suspect that any contravention of the provisions of this chapter made there under has been committed, have access to any computer system, any apparatus, data or any other material connected with such system, for the purpose of searching or causing a search to be made for obtaining any information or data contained in or available to such computer system. (Amended vide ITAA 2008)

 

(2) For the purposes of sub-section (1), the Controller or any person authorized by him may, by order, direct any person in charge of, or otherwise concerned with the operation of the computer system, data apparatus or material, to provide him with such reasonable technical and other assistance as he may consider necessary.

 

67 C Preservation and Retention of information by intermediaries

(1) Intermediary shall preserve and retain such information as may be specified for such duration and in such manner and format as the Central Government may prescribe.

 

Rule 3(7) of the Information Technology (Intermediary Guidelines) Rules, 2011

3(7) - When required by lawful order, the intermediary shall provide information or any such assistance to Government Agencies who are lawfully authorised for investigative, protective, cyber security activity. The information or any such assistance shall be provided for the purpose of verification of identity, or for prevention, detection, investigation, prosecution, cyber security incidents and punishment of offences under any law for the time being in force, on a request in writing stating clearly the purpose of seeking such information or any such assistance.

 

It must be noted that Article 16 and Article 17 refer only to data preservation and not data retention. “Data preservation” means to keep data, which already exists in a stored form, protected from anything that would cause its current quality or condition to change or deteriorate. Data retention means to keep data, which is currently being generated, in one’s possession into the future.[10] In short, the article provides only for preservation of existing stored data, pending subsequent disclosure of the data, in relation to specific criminal investigations or proceedings.

The Convention uses the phrase "order or similarly obtain", which is intended to allow the use of legal methods of achieving preservation other than a judicial or administrative order or directive (e.g. from police or prosecutor). In some States, preservation orders do not exist in the procedural law, and data can only be preserved and obtained through search and seizure or a production order. The phrase "or similarly obtain" was therefore chosen to permit the implementation of this article by the use of these means.

While Indian law does not have a specific provision for issuing an order for the preservation of data, the provisions of section 29 of the IT Act as well as sections 99 to 101 of the Code of Criminal Procedure, 1973 may be utilized to achieve the result intended by Articles 16 and 17. Although section 67C of the IT Act uses the phrase “preserve and retain such information”, this provision is intended primarily for data retention, not data preservation.

Another provision which may conceivably be used for issuing preservation orders is Rule 3(7) of the Information Technology (Intermediary Guidelines) Rules, 2011 which requires intermediaries to provide “any such assistance” to Government Agencies who are lawfully authorised for investigative, protective, cyber security activity. However, in the absence of a power of preservation in the main statute (IT Act) it remains to be seen whether such an order would be enforced if challenged in a court of law.

Convention on Cybercrime

Information Technology Act, 2000

 

Article 18 – Production order

1 Each Party shall adopt such legislative and other measures as may be necessary to empower its competent authorities to order:

a. a person in its territory to submit specified computer data in that person’s possession or control, which is stored in a computer system or a computer-data storage medium; and

b. a service provider offering its services in the territory of the Party to submit subscriber information relating to such services in that service provider’s possession or control.

2 The powers and procedures referred to in this article shall be subject to Articles 14 and 15.

3 For the purpose of this article, the term “subscriber information” means any information contained in the form of computer data or any other form that is held by a service provider, relating to subscribers of its services other than traffic or content data and by which can be established:

a the type of communication service used, the technical provisions taken thereto and the period of service;

b the subscriber’s identity, postal or geographic address, telephone and other access number, billing and payment information, available on the basis of the service agreement or arrangement;

c any other information on the site of the installation of communication equipment, available on the basis of the service agreement or arrangement.

 

Section 28(2)

(2) The Controller or any officer authorized by him in this behalf shall exercise the like powers which are conferred on Income-tax authorities under Chapter XIII of the Income-Tax Act, 1961 and shall exercise such powers, subject to such limitations laid down under that Act.

Section 58(2)

(2) The Cyber Appellate Tribunal shall have, for the purposes of discharging their functions under this Act, the same powers as are vested in a civil court under the Code of Civil Procedure, 1908, while trying a suit, in respect of the following matters, namely -

(b) requiring the discovery and production of documents or other electronic records;

 

While the Cyber Appellate Tribunal and the Controller of Certifying Authorities both have the power to call for information under the IT Act, these powers can be exercised only for limited purposes, since the jurisdiction of both authorities is confined to the procedural provisions of the IT Act and does not extend to investigating its penal provisions. In practice, the penal provisions of the IT Act are investigated by the regular law enforcement apparatus of India, which uses the statutory provisions for production orders applicable in the offline world for computer systems as well. It is very common for law enforcement authorities to issue orders under section 91 of the Code of Criminal Procedure, 1973 or the relevant provisions of the Income Tax Act, 1961 to compel the production of information contained in a computer system. The power to order production of a “document or other thing” under section 91 of the Code is wide enough to cover all types of information residing in a computer system and can even extend to the computer system itself.

Convention on Cybercrime

Information Technology Act, 2000

 

Article 19 – Search and seizure of stored computer data

1 Each Party shall adopt such legislative and other measures as may be necessary to empower its competent authorities to search or similarly access:

a a computer system or part of it and computer data stored therein; and

b a computer-data storage medium in which computer data may be stored in its territory.

2 Each Party shall adopt such legislative and other measures as may be necessary to ensure that where its authorities search or similarly access a specific computer system or part of it, pursuant to paragraph 1.a, and have grounds to believe that the data sought is stored in another computer system or part of it in its territory, and such data is lawfully accessible from or available to the initial system, the authorities shall be able to expeditiously extend the search or similar accessing to the other system.

3 Each Party shall adopt such legislative and other measures as may be necessary to empower its competent authorities to seize or similarly secure computer data accessed according to paragraphs 1 or 2. These measures shall include the power to:

a seize or similarly secure a computer system or part of it or a computer-data storage

medium;

b make and retain a copy of those computer data;

c maintain the integrity of the relevant stored computer data;

d render inaccessible or remove those computer data in the accessed computer system.

4 Each Party shall adopt such legislative and other measures as may be necessary to empower its competent authorities to order any person who has knowledge about the functioning of the computer system or measures applied to protect the computer data therein to provide, as is reasonable, the necessary information, to enable the undertaking of the measures referred to in paragraphs 1 and 2.

5 The powers and procedures referred to in this article shall be subject to Articles 14 and15.

76 Confiscation

Any computer, computer system, floppies, compact disks, tape drives or any other accessories related thereto, in respect of which any provision of this Act, rules, orders or regulations made thereunder has been or is being contravened, shall be liable to confiscation:

Provided that where it is established to the satisfaction of the court adjudicating the confiscation that the person in whose possession, power or control of any such computer, computer system, floppies, compact disks, tape drives or any other accessories relating thereto is found is not responsible for the contravention of the provisions of this Act, rules, orders or regulations made there under, the court may, instead of making an order for confiscation of such computer, computer system, floppies, compact disks, tape drives or any other accessories related thereto, make such other order authorized by this Act against the person contravening of the provisions of this Act, rules, orders or regulations made there under as it may think fit.

 

 

While Article 19 provides for the power to search and seize computer systems in the investigation of criminal offences of any kind, section 76 of the IT Act is limited only to contraventions of the provisions of the Act, rules, orders or regulations made thereunder. However, this does not mean that Indian law enforcement authorities do not have the power to search and seize a computer system for crimes other than those contained in the IT Act; just as in the case of Article 18, the authorities in India are free to use the provisions contained in the Criminal Procedure Code and other sectoral legislations which allow for the seizure of property to seize computer systems when investigating criminal offences.

Convention on Cybercrime

Information Technology Act, 2000

 

Article 20 – Real-time collection of traffic data

1 Each Party shall adopt such legislative and other measures as may be necessary to empower its competent authorities to:

a collect or record through the application of technical means on the territory of that Party, and

b compel a service provider, within its existing technical capability:

i to collect or record through the application of technical means on the territory of that Party; or

ii to co-operate and assist the competent authorities in the collection or recording of,

 

traffic data, in real-time, associated with specified communications in its territory transmitted by means of a computer system.

2 Where a Party, due to the established principles of its domestic legal system, cannot adopt the measures referred to in paragraph 1.a, it may instead adopt legislative and other measures as may be necessary to ensure the real-time collection or recording of traffic data associated with specified communications transmitted in its territory, through the application of technical means on that territory.

3 Each Party shall adopt such legislative and other measures as may be necessary to oblige a service provider to keep confidential the fact of the execution of any power provided for in this article and any information relating to it.

4 The powers and procedures referred to in this article shall be subject to Articles 14 and 15.

69B Power to authorize to monitor and collect traffic data or information through any computer resource for Cyber Security

(1) The Central Government may, to enhance Cyber Security and for identification, analysis and prevention of any intrusion or spread of computer contaminant in the country, by notification in the official Gazette, authorize any agency of the Government to monitor and collect traffic data or information generated, transmitted, received or stored in any computer resource.

(2) The Intermediary or any person in-charge of the Computer resource shall when called upon by the agency which has been authorized under sub-section (1), provide technical assistance and extend all facilities to such agency to enable online access or to secure and provide online access to the computer resource generating , transmitting, receiving or storing such traffic data or information.

(3) The procedure and safeguards for monitoring and collecting traffic data or information, shall be such as may be prescribed.

(4) Any intermediary who intentionally or knowingly contravenes the provisions of sub-section (2) shall be punished with an imprisonment for a term which may extend to three years and shall also be liable to fine.

Explanation: For the purposes of this section, (i) "Computer Contaminant" shall have the meaning assigned to it in section 43.

(ii) "traffic data" means any data identifying or purporting to identify any person, computer system or computer network or location to or from which the communication is or may be transmitted and includes communications origin, destination, route, time, date, size, duration or type of underlying service or any other information.

 

Section 69B of the IT Act enables the government to authorise the monitoring and collection of traffic data through any computer resource. Under the Convention, orders for the collection and recording of traffic data can be given only for the purposes mentioned in Articles 14 and 15. On the other hand, as per the Information Technology (Procedure and Safeguards for Monitoring and Collecting Traffic Data or Information) Rules, 2009, an order for monitoring may be issued for any of the following purposes relating to cyber security:

(a) forecasting of imminent cyber incidents;

(b) monitoring network application with traffic data or information on computer resource;

(c) identification and determination of viruses or computer contaminant;

(d) tracking cyber security breaches or cyber security incidents;

(e) tracking computer resource breaching cyber security or spreading virus or computer contaminants;

(f) identifying or tracking of any person who has breached, or is suspected of having breached or being likely to breach cyber security;

(g) undertaking forensic of the concerned computer resource as a part of investigation or internal audit of information security practices in the computer resources;

(h) accessing a stored information for enforcement of any provisions of the laws relating to cyber security for the time being in force;

(i) any other matter relating to cyber security.

As can be seen from the above, the grounds on which an order for monitoring traffic data can be issued are extremely wide; this is in stark contrast to the grounds on which an order for interception of content data may be issued under section 69. The Rules also provide that the intermediary shall not disclose the existence of a monitoring order to any third party and shall take all steps necessary to maintain extreme secrecy in the matter of monitoring of traffic data.

Convention on Cybercrime

Information Technology Act, 2000

 

Article 21 – Interception of content data

1 Each Party shall adopt such legislative and other measures as may be necessary, in relation to a range of serious offences to be determined by domestic law, to empower its competent authorities to:

a collect or record through the application of technical means on the territory of that Party, and

b compel a service provider, within its existing technical capability:

i to collect or record through the application of technical means on the territory of that Party, or

ii to co-operate and assist the competent authorities in the collection or recording of,

content data, in real-time, of specified communications in its territory transmitted by means of a computer system.

2 Where a Party, due to the established principles of its domestic legal system, cannot adopt the measures referred to in paragraph 1.a, it may instead adopt legislative and other measures as may be necessary to ensure the real-time collection or recording of content data on specified communications in its territory through the application of technical means on that territory.

3 Each Party shall adopt such legislative and other measures as may be necessary to oblige a service provider to keep confidential the fact of the execution of any power provided for in this article and any information relating to it.

4 The powers and procedures referred to in this article shall be subject to Articles 14 and 15.

69 Powers to issue directions for interception or monitoring or decryption of any information through any computer resource

(1) Where the Central Government or a State Government or any of its officers specially authorized by the Central Government or the State Government, as the case may be, in this behalf may, if satisfied that it is necessary or expedient so to do, in the interest of the sovereignty or integrity of India, defence of India, security of the State, friendly relations with foreign States or public order or for preventing incitement to the commission of any cognizable offence relating to above or for investigation of any offence, it may, subject to the provisions of sub-section (2), for reasons to be recorded in writing, by order, direct any agency of the appropriate Government to intercept, monitor or decrypt or cause to be intercepted or monitored or decrypted any information transmitted, received or stored through any computer resource.

(2) The Procedure and safeguards subject to which such interception or monitoring or decryption may be carried out, shall be such as may be prescribed

(3) The subscriber or intermediary or any person in charge of the computer resource shall, when called upon by any agency which has been directed under sub section (1), extend all facilities and technical assistance to -

(a) provide access to or secure access to the computer resource containing such information; generating, transmitting, receiving or storing such information; or

(b) intercept or monitor or decrypt the information, as the case may be; or

(c) provide information stored in computer resource.

(4) The subscriber or intermediary or any person who fails to assist the agency referred to in sub-section (3) shall be punished with an imprisonment for a term which may extend to seven years and shall also be liable to fine.

There has been a great deal of academic research and debate around the exercise of powers under section 69 of the IT Act, but the present piece is not the place for a standalone critique of section 69.[11] The analysis here is limited to a comparison of the provisions of Article 21 of the Convention vis-à-vis section 69 of the IT Act.

Against that background, it needs to be pointed out that two important requirements mentioned in Article 21 of the Convention are not specifically mentioned in section 69, viz. (i) that the order should relate only to specified communications, and (ii) that the intermediary should keep such an order confidential; these requirements are covered by Rules 9 and 20 of the Information Technology (Procedure and Safeguards for Interception, Monitoring and Decryption of Information) Rules, 2009, respectively.

Convention on Cybercrime

Information Technology Act, 2000

 

Article 22 – Jurisdiction

1 Each Party shall adopt such legislative and other measures as may be necessary to establish jurisdiction over any offence established in accordance with Articles 2 through 11 of this Convention, when the offence is committed:

a in its territory; or

b on board a ship flying the flag of that Party; or

c on board an aircraft registered under the laws of that Party; or

d by one of its nationals, if the offence is punishable under criminal law where it was committed or if the offence is committed outside the territorial jurisdiction of any State.

2 Each Party may reserve the right not to apply or to apply only in specific cases or conditions the jurisdiction rules laid down in paragraphs 1.b through 1.d of this article or any part thereof.

3 Each Party shall adopt such measures as may be necessary to establish jurisdiction over the offences referred to in Article 24, paragraph 1, of this Convention, in cases where an alleged offender is present in its territory and it does not extradite him or her to another Party, solely on the basis of his or her nationality, after a request for extradition.

4 This Convention does not exclude any criminal jurisdiction exercised by a Party in accordance with its domestic law.

5 When more than one Party claims jurisdiction over an alleged offence established in accordance with this Convention, the Parties involved shall, where appropriate, consult with a view to determining the most appropriate jurisdiction for prosecution.

1. Short Title, Extent, Commencement and Application

(2) It shall extend to the whole of India and, save as otherwise provided in this Act, it applies also to any offence or contravention hereunder committed outside India by any person.

75. Act to apply for offence or contravention committed outside India

(1) Subject to the provisions of sub-section (2), the provisions of this Act shall apply also to any offence or contravention committed outside India by any person irrespective of his nationality.

(2) For the purposes of sub-section (1), this Act shall apply to an offence or contravention committed outside India by any person if the act or conduct constituting the offence or contravention involves a computer, computer system or computer network located in India.

The Convention provides for extraterritorial jurisdiction only over crimes committed outside a State by nationals of that State. The IT Act, however, applies even to offences under the Act committed by foreign nationals outside India, as long as the act involves a computer, computer system or computer network located in India.

Unlike para 3 of Article 22 of the Convention, the IT Act does not touch upon the issue of extradition. Cases involving extradition would therefore be dealt with under the general law of the land on extradition requests, contained in the Extradition Act, 1962. The Convention requires that where a State refuses to extradite an alleged offender, it should establish jurisdiction over the offences referred to in Article 24(1) so that it can proceed against that offender itself. In this regard, it must be pointed out that Section 34A of the Extradition Act, 1962 provides that "Where the Central Government is of the opinion that a fugitive criminal cannot be surrendered or returned pursuant to a request for extradition from a foreign State, it may, as it thinks fit, take steps to prosecute such fugitive criminal in India." Thus the Extradition Act gives the Indian government the power to prosecute an individual in the event that such individual cannot be extradited.

International Cooperation

Chapter III of the Convention deals specifically with international cooperation between the signatory parties. Such cooperation is to be carried out both "in accordance with the provisions of this Chapter" and "through application of relevant international agreements on international cooperation in criminal matters, arrangements agreed to on the basis of uniform or reciprocal legislation, and domestic laws." The latter clause establishes the general principle that the provisions of Chapter III do not supersede the provisions of international agreements on mutual legal assistance and extradition or the relevant provisions of domestic law pertaining to international cooperation.[12] Although the Convention grants primacy to mutual treaties and agreements between member States, in certain specific circumstances it also provides for an alternative where such treaties do not exist between the member States (Articles 27 and 28). The Convention also provides for international cooperation on certain issues which may not have been specifically provided for in mutual assistance treaties entered into between the parties and which need to be spelt out due to the unique challenges posed by cybercrime, such as expedited preservation of stored computer data (Article 29) and expedited disclosure of preserved traffic data (Article 30). Contentious issues such as access to stored computer data, real-time collection of traffic data and interception of content data have been specifically left by the Convention to be dealt with as per existing international instruments or arrangements between the parties.

Conclusion

The broad language and wide terminology used in the IT Act seem to cover a number of the cybercrimes mentioned in the Budapest Convention, even though India has not signed or ratified it. Penal provisions such as illegal access (Article 2), data interference (Article 4), system interference (Article 5), offences related to child pornography (Article 9), attempt and aiding or abetting (Article 11), and corporate liability (Article 12) are substantially covered and reflected in the IT Act in a manner very similar to the requirements of the Convention. Similarly, procedural provisions such as search and seizure of stored computer data (Article 19), real-time collection of traffic data (Article 20), interception of content data (Article 21) and jurisdiction (Article 22) are also substantially reflected in the IT Act.

However, certain penal provisions mentioned in the Convention, such as computer-related forgery (Article 7) and computer-related fraud (Article 8), are not provided for specifically in the IT Act, but such offences are covered when provisions of the Indian Penal Code, 1860 are read in conjunction with provisions of the IT Act. Similarly, procedural provisions such as expedited preservation of stored computer data (Article 16) and production orders (Article 18) are not specifically provided for in the IT Act but are covered under Indian law through the provisions of the Code of Criminal Procedure, 1973.

Apart from the above two categories, there are certain provisions, such as misuse of devices (Article 6) and illegal interception (Article 3), which may not be specifically covered at all under Indian law, but which may conceivably be said to be covered through an expansive reading of the provisions of the Indian Penal Code and the IT Act. It may therefore be said that even though India has not signed or ratified the Budapest Convention, the legal regime in India is substantially in compliance with the provisions and requirements contained therein.

Thus, the Convention on Cybercrime is perhaps the most important international multi-state instrument that may be used to combat cybercrime, not merely because its provisions may be used as a model to bolster national or local laws by any State, signatory or not (as in the case of India), but also because of the mechanism it lays down for international cooperation against cybercrime. In an increasingly interconnected world, where more and more information about individuals is finding its way to the cloud or other networked infrastructure, the international community is making great efforts to generate norms for increased international cooperation to combat cybercrime and cyber terrorism. While the Convention is one such multilateral effort, States are also proposing to use bilateral treaties to enable them to better fight cybercrime, the United States' CLOUD Act being one such effort. Against the backdrop of these novel efforts, the role to be played by older instruments such as the Convention on Cybercrime, as well as by important States such as India, is extremely crucial.


[1] Explanatory Report to the Convention on Cybercrime, Para 304, https://rm.coe.int/16800cce5b.

[2] The analysis here has been limited to only Chapter I and Chapter II of the Convention, as it is only adherence to these two chapters that is required under the CLOUD Act.

[3] The only enforcement possible with regard to the Convention on Cybercrime is that the Council of Europe may put pressure on a signatory State to amend its local laws (if it is refusing to do so), as it would otherwise be in violation of its obligations as a party to the Convention.

[4] Alexander Seger, “India and the Budapest Convention: Why Not?”, https://www.orfonline.org/expert-speak/india-and-the-budapest-convention-why-not/

[5] Explanatory Report to the Convention on Cybercrime, Para 50, https://rm.coe.int/16800cce5b.

[6] India is a party to the Berne Convention on Literary and Artistic Works, the Agreement on Trade-Related Aspects of Intellectual Property Rights and the Rome Convention. India has also recently (July 4, 2018) announced that it will accede to the WIPO Copyright Treaty as well as the WIPO Performances and Phonograms Treaty.

[7] The test under the Convention is that the relevant person would be the one who has a leading position within the company, based on:

  • a power of representation of the legal person;
  • an authority to take decisions on behalf of the legal person;
  • an authority to exercise control within the legal person.

[8] Vipul Kharbanda and Elonnai Hickok, "MLATs and the proposed Amendments to the US Electronic Communications Privacy Act", https://cis-india.org/internet-governance/blog/mlats-and-the-proposed-amendments-to-the-us-electronic-communications-privacy-act.

[9] The term “human rights” has been defined in the Act as “rights relating to life, liberty, equality and dignity of the individual guaranteed by the Constitution or embodied in the International Covenants and enforceable by courts in India”.

[10] Explanatory Report to the Convention on Cybercrime, Para 151, https://rm.coe.int/16800cce5b.

[11] A similar power of interception is available under section 5 of the Telegraph Act, 1885, but that extends only to interception of telegraphic communication and does not extend to communications exchanged through computer networks.

[12] Explanatory Report to the Convention on Cybercrime, Para 244, https://rm.coe.int/16800cce5b.

ICANN Workstream 2 Recommendations on Accountability

by Akriti Bopanna last modified Nov 23, 2018 02:56 PM
One of the most significant initiatives to improve the accountability of the Internet Corporation for Assigned Names and Numbers (ICANN) commenced in 2014, when the Cross Community Working Group on Accountability was created. Its role was to develop a set of proposed enhancements to ICANN's accountability to the global Internet community. This resulted in the first Work Stream (WS1) recommendations, which were eventually approved and incorporated into the bylaws of ICANN in 2016. These included a provision expressing the need for a second Work Stream, since the first one, done on a tight deadline, did not cover all the requisite issues. Instead, WS1 focused only on the issues that needed to be resolved to complete the Internet Assigned Numbers Authority (IANA) transition.

At the ICANN meeting in March 2017 in Finland, the second Work Stream (WS2) was launched. The Cross Community Working Group submitted its final report at the end of June 2018, and the purpose of this blog is to look at the main recommendations given and the steps ahead for their implementation.

The new Work Stream was structured into the following eight independent sub-groups, as per the topics laid down in the WS1 final report, each headed by a Rapporteur:

1. Diversity

2. Guidelines for Standards of Conduct Presumed to be in Good Faith Associated with Exercising Removal of Individual ICANN Board Directors. (Guidelines for Good Faith)

3. Human Rights Framework of Interpretation (HR-FOI)

4. Jurisdiction

5. Office of the Ombuds

6. Supporting Organization/ Advisory Committee Accountability

7. Staff Accountability

8. ICANN Transparency

 

1. DIVERSITY Recommendations

The sub-group on Diversity suggested ways by which ICANN can define, measure, report on, support and promote diversity. It proposed seven key factors to guide all diversity considerations: language, gender, age, physical disability, diverse skills, geographical representation and stakeholder group. Each chartering organization within ICANN is asked to publish its diversity obligations on its website, for each level of employment including leadership, whether arising under its own charter or the ICANN Bylaws. This should be followed by a diversity assessment of its existing structures, which is then used to formulate diversity objectives and criteria, the steps to achieve them, and a timeline for doing so. These diversity assessments should ideally be conducted annually and, at the very least, every three years. ICANN staff has been tasked with developing a mechanism for dealing with complaints arising out of diversity and related issues. Eventually, it is envisioned that ICANN will create a Diversity section on its website where an Annual Diversity Report will be published. All information regarding diversity should also be published in its Annual Report.

The recommendations leave much up to the organization without establishing specific recruitment policies for equal opportunities. Among the seven parameters, race was left out as a criterion for diversity. The criterion of 'diverse skills' is also ambiguous; and within stakeholder groups, it would have been more useful to highlight the priority of diversity of opinions within the same stakeholder group: for example, having two civil society organizations (CSOs) advocating contrasting stances, as opposed to many CSOs supporting one stance. Still, these steps should be a good starting point for improving the diversity of an organization which, in our earlier research, we have found to be neither global nor multistakeholder. In fact, our recent diversity analysis has raised concerns such as that the vast majority of the end users participating in, and by extension influencing, ICANN's work are male. The mailing lists where the majority of discussions take place are dominated by individuals from industry bodies. This, coupled with the relative minority presence of the other stakeholders, especially geographically (14.7% participation from Asian countries), creates an environment where concerns emanating from other sections of society could be overshadowed. Moreover, when we questioned ICANN's existing diversity of employees based on race and citizenship, it did not give us the figures, citing either lack of information or confidentiality.

2. HUMAN RIGHTS FRAMEWORK OF INTERPRETATION (HR-FOI)

A Framework of Interpretation (FOI) was developed by WS2 for the ICANN Bylaws relating to human rights, which clarified that human rights are not a Commitment for the organization but a Core Value. The former is an obligation, while the latter is "not necessarily intended to apply consistently and comprehensively to ICANN's activities".

To summarize the FOI: if the applicable law, i.e. the law of the jurisdiction where ICANN is operating, does not mandate certain human rights, then those rights do not raise issues under the core value. As such, there can be no enforcement of human rights obligations by ICANN, or by any party against any other party. Thus, contingent on the seat of operations, the applicable law can vary, though by and large ICANN recognizes and can be guided by significant internationally respected human rights instruments such as the Universal Declaration of Human Rights. The United Nations Guiding Principles on Business and Human Rights were recognized as useful in the process of applying the core value in operations, since they discuss the corporate responsibility to respect human rights. Building on this, Human Rights Impact Assessments (HRIAs) with respect to ICANN policy development processes are currently being formulated by the Cross Community Working Group on Human Rights. Complementing this, ICANN is also undertaking an internal HRIA of the organization's operations. It is important to remember that the international human rights instruments relevant here are those required by the applicable law.

Apart from ICANN's legal responsibility to uphold the human rights laws of the jurisdictions in which it operates, the framework is worded negatively, in that it says ICANN should in general avoid violating human rights. It also says that human rights should be taken into account when making policies, but this falls short of saying that human rights considerations should be given prominent weightage; and since there are many core values, at any point one of the others can be used to sidestep human rights. One core value in particular says that ICANN should duly consider the public policy advice of governments and other authorities when arriving at a decision. Thus, if governments want to promote a decision that furthers national interests at the expense of citizens' human rights, that would be very much possible within this FOI.

3. JURISDICTION

A highly contentious issue in WS2 was that of jurisdiction, and the recommendations formed to tackle it were quite disappointing. Despite initial discussion by the group on ICANN's location, the report did not address the elephant in the room. Even after the transition, ICANN's new bylaws state that it is subject to California law, since it was incorporated there. This is partly the fault of the first Work Stream, because when enumerating the issues for WS2 with respect to jurisdiction, it left them ambiguous by stating:

"At this point in the CCWG Accountability's work, the main issues that need to be addressed within Work Stream 2 relate to the influence that ICANN's existing jurisdiction may have on the actual operation of policies and accountability mechanisms. This refers primarily to the process for the settlement of disputes within ICANN, involving the choice of jurisdiction and of the applicable laws, but not necessarily the location where ICANN is incorporated."

Jurisdiction can play a significant role in the laws that ICANN has to abide by in terms of financial reporting, consumer protection, competition and labour laws, legal challenges to ICANN's actions and, finally, the resolution of contractual disputes. In its present state, the operations of ICANN could, if such a situation arises, see interference from US authorities by way of the legislature, tribunals, enforcement agencies and regulatory bodies.

CIS has, in the past, discussed the concept of “jurisdictional resilience”, which calls for:

  • Legal immunity for core technical operators of Internet functions (as opposed to policymaking venues) from legal sanctions or orders from the state in which they are legally situated.
  • Division of core Internet operators among multiple jurisdictions
  • Jurisdictional division of policymaking functions from technical implementation functions

Proposing to change the seat of ICANN's headquarters, or at the very least suggesting ways for ICANN to gain partial immunity under US law for its policy development processes, would have gone a long way in making ICANN a truly global body. It would also have ensured that, as an organization, ICANN is equally accountable to all its stakeholders, as opposed to now, where by virtue of its incorporation it has higher legal, and possibly political, obligations to the United States. This was expressed by Brazil, which dissented from the majority conclusions of the sub-group and drafted its own minority report, supported by countries such as Russia. They were unhappy that all countries are still not on an equal footing in the management of Internet resources, which goes against the fundamentals of the multi-stakeholder approach.

Recommendations:

The recommendations passed were in two categories:

  1. Office of Foreign Assets Control (OFAC)

OFAC is an office of the US Treasury that administers and enforces economic and trade sanctions based on American foreign policy and national security objectives. It is pertinent because, for ICANN to enter into a Registrar Accreditation Agreement (RAA) with an applicant from a sanctioned country, it needs an OFAC license. At present, ICANN is under no obligation to request this license, and in any case OFAC can refuse to grant it. The sub-group recommended that the terms of the RAA be modified so that ICANN is required to apply for the license, and to make its best efforts to secure it, if the applicant is qualified to be a registrar and is not individually subject to sanctions. While the licensing process is underway, ICANN should also be helpful and transparent, and maintain ongoing communication with the applicant. The same recommendation was made for applicants from sanctioned countries to the new gTLD program. Other general licenses are needed from OFAC for certain ICANN transactions, and hence it was proposed that ICANN pursue these as well.

2. Choice of law and Choice of Venue Provisions in ICANN Agreements

In ICANN's Registry Agreements (RA) and Registrar Accreditation Agreements (RAA), the absence of a choice of law provision means that the governing law of these contracts is undetermined until later decided by a judge or arbitrator, or by an agreement between the parties. It was collectively seen that increased freedom of choice for the parties could help in customizing the agreements and make it easier for registries and registrars to contractually engage with ICANN. Out of various options, the group decided that a menu approach would be best, whereby a host of options (decided by ICANN) is provided and the party concerned chooses the most appropriate among them, such as the jurisdiction of its incorporation. In RAs, the choice of venue was predetermined as Los Angeles, California, but the group recommended that instead of imposing this choice on the party, it would be better to offer a list of possible venues for arbitration. The registry can then choose among these options when entering into the contract. Other issues discussed, such as immunity of ICANN from US jurisdiction, did not reach fruition due to lack of unanimity.

4. OFFICE OF THE OMBUDS

Subsequent to the external evaluation of the ICANN Office of the Ombuds (IOO), there were several recommendations to strengthen the office. These included procedural steps that the office should take to improve its complaints mechanism, such as differentiating between categories of complaints and explaining how each type would be handled. The issues that would not invoke action from the IOO should also be clearly established, along with whether and where these could be transferred to another channel. The response from all the relevant parts of ICANN to a formal request or report from the IOO should take place within 90 days, or 120 at the maximum if an explanation can be provided. The office should define an internal timeline for handling complaints and report on these every quarter or annually. A further recommendation was for the IOO to be formally trained in mediation and to have such experience within its ranks. Reiterating the importance of diversity, this sub-group too emphasized that the IOO should comprise a diverse group in terms of gender and other parameters; this ensures that a complainant has a choice in whom to approach in the office, making them more comfortable. To enhance the independence of the Ombuds, their employment contract should have a five-year fixed term, with at most one extension of up to three years. An Ombuds Advisory Panel is to be constituted by ICANN, comprising five members to act as advisers, supporters and counsel for the IOO, with at least two members having Ombudsman experience and the remaining possessing extensive ICANN experience. They would be responsible, among other things, for selecting the new Ombuds and conducting the IOO's evaluation every five years. Lastly, the IOO should proactively document its work by publishing activity reports, and by collecting and publicizing statistics and user satisfaction information, as well as any improvements to the process.

These proposals still do not address the opacity of how the Office of the Ombuds resolves cases, since they do not call for: (a) a compilation of all the cases that have been decided by the office in the history of the organization; (b) the details of the parties involved, where the parties have allowed that to be revealed, and if not, at the very least non-sensitive data such as their nationality and stakeholder affiliation; and (c) a description of the proceedings of each case and who prevailed. When CIS asked for the above in 2015, the information was denied on grounds of confidentiality. Yet it is vital to know these details, since the Ombuds hears complaints against the Board, staff and other constituent bodies, and by not reporting on this, ICANN is rendering the process much less accountable and transparent. This conflict resolution process and its efficacy are even more essential in a multi-stakeholder environment, so as to give parties the faith to engage in the process, knowing that the redressal mechanisms are strong. It is also problematic that sexual harassment complaints are dealt with by the Ombuds and that ICANN does not have a specific anti-sexual harassment committee. Such a committee should be neutral and approachable, and while it is useful for the Office of the Ombuds to be trained in sexual harassment cases, that is by no means a comprehensive and ideal approach to complaints of this nature. Despite ICANN facing a sexual harassment claim in 2016, the recommendations do not specifically address the approach the Ombuds should take in tackling sexual harassment.

 

5. SUPPORTING ORGANIZATION/ ADVISORY COMMITTEE ACCOUNTABILITY

The sub-group presented its outcomes under the main heads of accountability, transparency, participation, outreach and updates to policies and procedures. It suggested these as good practices that the organizations can follow, and did not recommend that their implementation be required. On accountability, it suggested better documentation of procedures and decision-making. Proposals to list members of such organizations publicly, to open their meetings to public observation (including minutes and transcripts), and to disclose their correspondence with ICANN were aimed at making these entities more transparent. In the same vein, rules of membership and eligibility criteria, the process of application and a process of appeal should be well defined. Newsletters should be published by the SOs/ACs to help non-members understand the benefits and the process of becoming a member. Policies should be reviewed at regular intervals, and these internal reviews should not extend beyond a year.

6. STAFF ACCOUNTABILITY

Improving the accountability of ICANN's staff was the job of a different group, which assessed it at the service delivery, departmental or organizational level, not at an individual or personnel level. It did this by analysing the roles and responsibilities of the Board, staff and community members and the nexus between them. Its observations culminated in the understanding that ICANN needs to take steps such as making visible its performance management system and process, its vision for departmental goals, and how these tie in to the organization's strategic goals and objectives. The group notes that several new mechanisms have already been established but have not yet been used enough to ascertain their efficacy, and it thus proposes a regular information acquisition mechanism. Most importantly, it has asked ICANN to standardize and publish guidelines on suitable timeframes for acknowledging and responding to requests from the community.

 

7. ICANN TRANSPARENCY

The last group of the WS2 was one specifically looking at the transparency of the organization.

a. The Documentary Information Disclosure Policy (DIDP)

Currently, the DIDP process applies only to ICANN's "operational activities"; it was recommended that this caveat be deleted to cover a wider breadth of the organization's activities. As CIS has experienced, requests for information are often met with the answer that such information is not documented. To remedy this, a documentation policy was proposed: if significant elements of a decision-making process take place orally, the participants will be required to document the substance of the conversation. Many a time, DIDP requests are refused because one aspect of the information sought is subject to confidentiality. Thus, one of the changes is to introduce a severability clause so that in such cases information can still be disclosed with the sensitive aspect redacted or severed. In scenarios of redaction, the rationale should be provided, citing one of the given DIDP exceptions, along with the process for appeal. ICANN's contracts should be under the purview of the DIDP except when subject to a non-disclosure agreement, and further, the burden is on the other party to convince ICANN that it has a legitimate commercial reason for requesting the NDA. Information pertaining to the security and stability of the Internet would no longer be categorically outside the ambit of the DIDP, but only where its disclosure would be harmful to that security and stability. Finally, ICANN should review the DIDP every five years to see how it can be improved.

b. Documenting and Reporting on ICANN’s Interactions with the Government

In a prominent step towards being more transparent about its expenditure and lobbying, the group recommended that ICANN begin disclosing publicly, on at least an annual basis, sums of $20,000 per year devoted to "political activities", both in the US and abroad. All such expenditures should be itemized by ICANN for both outside contractors and internal personnel, along with the identities of the persons engaging in such activities and the type of engagement used for them, among other details.

c. Transparency of Board Deliberations

It was recommended that the bylaws be revised so that material may be removed from the minutes of the Board if subject to a DIDP exception. The exception for deliberative processes should not apply to any factual information, technical report, or report on the performance or effectiveness of a particular body or strategy. When any information is removed from the minutes of a Board meeting, it should be disclosed after a particular period of time, as and when the window of harm has passed.

d. ICANN’s Anonymous Hotline (Whistle-blower Protection)

To begin with, it was recommended that ICANN devise a way such that when anyone searches its website for the term "whistle-blower", they are redirected to the Hotline policy, since people are unlikely to be aware that in ICANN parlance it is referred to as the Hotline policy. Instead of only the "serious crimes" that are currently reportable, all issues and concerns that violate local laws should be. Complaints should not be classified as 'urgent' and 'non-urgent'; all reports should be a priority and receive a formal acknowledgment within 48 hours at the maximum. ICANN should make it clear that any retaliation against the reporter will be taken and investigated as seriously as the original alleged wrongdoing. Employees should be provided with data about the use of the Hotline, including the types of incidents reported. A few members of this group came out with a Minority Statement expressing their disapproval of one particular aspect of the recommendations that they felt was not developed enough: the one pertaining to ICANN's attorney-client privilege. The recommendation did not delve into specifics but merely stated that ICANN should expand transparency in its legal processes, including clarifying how attorney-client privilege is invoked. The dissenters thought ICANN should go further and enumerate principles under which the privilege would be waived in the interests of transparency, and account for voluntary disclosure as well.

The transparency recommendations did not address ICANN’s financial reporting, where we have found ambiguities before. For example, the Registries and Registrars are the main sources of revenue, but the classifications ICANN provides are ambiguous, such as the difference between RYG and RYN. The mode of sponsors’ contributions is not clear either, so we do not know whether they were made through travel, money, media partnerships or something else. Several entities have been listed under different heads in different years, sometimes depending on the role they played, such as whether they acted as a sponsor or a registry. Moreover, the Regional Internet Registries are clubbed under one heading, so it is not possible to determine individual RIR contributions, such as how much APNIC paid for the Asia-Pacific region. There is thus considerable scope for ICANN to be transparent beyond the proposals in the report.

 

It is worth noting that whereas the mandate of WS1 included the implementation of its recommendations, this is not the case for WS2; by producing the report itself, the group’s mission is concluded. This difference can be attributed to the fact that during WS1 there was a need to see the process through, since the IANA transition would not have happened otherwise. The change in circumstances, and the corresponding lack of urgency, render the process less powerful the second time round. The final recommendations are now being discussed in the relevant chartering organizations within ICANN, such as the Governmental Advisory Committee (GAC); subsequent to their approval, they will be sent to the Board, which will decide whether or not to adopt them. If adopted, ICANN and its sub-organizations will have to work out how to implement the recommendations. The co-chairs of the group will be the point of reference for the chartering organizations, and an implementation oversight team has been formed, consisting of the rapporteurs of the sub-teams and the co-chairs. A Feasibility Assessment Report describing the resources implementation would require will be made public in due course. Since implementing the above would be a huge undertaking for ICANN, the compliance process is expected to take a few years.

 

The link to the report can be found here.

Regulating the Internet: The Government of India & Standards Development at the IETF

by Aayush Rathi, Gurshabad Grover and Sunil Abraham — last modified Jan 22, 2019 07:29 AM
The institution of open standards has been described as a formidable regulatory regime governing the Internet. Given the regulatory and domestic policy implications that technical standards can have, there is a need for Indian governmental agencies to focus adequate resources geared towards achieving favourable outcomes at standards development fora.

This brief was authored by Aayush Rathi, Gurshabad Grover and Sunil Abraham. Click here to download the policy brief.


Executive Summary

 

The institution of open standards has been described as a formidable regulatory regime governing the Internet. As the Internet has moved to facilitate commerce and communication, governments and corporations find greater incentives to participate and influence the decisions of independent standards development organisations.

While most such bodies have attempted to systematise fair and transparent processes, this brief highlights how they may still be susceptible to compromise. Documented instances of large private companies like Microsoft, and governmental instrumentalities like the US National Security Agency (NSA) exerting disproportionate influence over certain technical standards further the case for increased Indian participation.

The debate around Transport Layer Security (TLS) 1.3 at the Internet Engineering Task Force (IETF) forms an important case for studying how a standards body responded to political developments, and how the Government of India participated in the ensuing discussions. Lasting four years, the debate ended in favour of greater communications security. One of the security improvements in TLS 1.3 over its predecessor is that it makes less information available to networking middleboxes. Considering that Indian intelligence agencies and government departments have expressed fears of foreign-manufactured networking equipment being used by foreign intelligence to eavesdrop on Indian networks, the development is potentially favourable for the security of Indian communication in general, and the security of military and intelligence systems in particular. India has historically procured most networking equipment from foreign manufacturers. While there have been calls for indigenised production of such equipment, achieving these objectives will necessarily be a gradual process. Participating in technical standards can, then, be an effective interim method for intelligence agencies, defence wings and law enforcement to establish trust in critical networking infrastructure sourced from foreign enterprises.
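The middlebox point can be made concrete in code. As a minimal sketch (a client-side illustration, not drawn from the brief itself), Python's standard `ssl` module lets an application refuse any handshake below TLS 1.3, so that only the stricter protocol, with its reduced handshake metadata, is ever negotiated:

```python
import ssl

# Minimal sketch: build a client context that will only negotiate
# TLS 1.3. Under TLS 1.3 more of the handshake (e.g. the server
# certificate) is encrypted, leaving less for middleboxes to inspect.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_3

# The secure defaults (certificate verification, hostname checking)
# of create_default_context() are left in place.
assert context.verify_mode == ssl.CERT_REQUIRED
```

Wrapping a socket with this context via `context.wrap_socket(sock, server_hostname=...)` would then fail against any server that cannot speak TLS 1.3; the hostname and server are whatever the application supplies.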

Outlining some of the existing measures the Indian government has put in place to build capacity for and participate in standard setting, this brief highlights that while these are useful starting points, they need to be harmonised and strengthened to be more fruitful. Given the regulatory and domestic policy implications that technical standards can have, there is a need for Indian governmental agencies to focus adequate resources geared towards achieving favourable outcomes at standards development fora.


Click here to download the policy brief.

Note: The recommendations in the brief were updated on 17 December 2018 to reflect the relevance of technical standard-setting in the recent discussions around Indian intelligence concerns about foreign-manufactured networking equipment.

Cyberspace and External Affairs: A Memorandum for India (Summary)

by Arindrajit Basu and Elonnai Hickok — last modified Dec 01, 2018 04:10 AM
This memorandum seeks to summarise the state of the global debate in cyberspace, outline how India can craft its global strategic vision, and finally provide a set of recommendations for the MEA as it crafts its cyber diplomacy strategy.

It limits itself to advocating certain procedural steps that the Ministry of External Affairs should take towards propelling India forward as a leading voice in the global cyber norms space and explains why occupying this leadership position should be a vital foreign policy priority. It does not delve into content-based recommendations at this stage. Further, this memorandum is not meant to serve as exhaustive academic research on the subject but builds on previous research by the Centre for Internet & Society in this area to highlight key policy windows that can be driven by India.

This memorandum provides a background to global norms formation, focussing on key global developments over the past month; traces the opportunities for India to play a lead role in the global norms formulation debate; and then charts out process-related recommendations on next steps for India to take this forward.


Click here to read more

A Critical Look at the Visual Representation of Cybersecurity

by Paromita Bathija, Padmini Ray Murray, and Saumyaa Naidu — last modified Aug 21, 2019 08:00 AM
The Centre for Internet and Society and design collective Design Beku came together on the 15th of November for a workshop on Illustrations and Visual Representations of Cybersecurity. Images in the public sphere, such as visuals in the media, Wikimedia Commons, and stock images, play a vital role in the public’s perception of cybercrime and cybersecurity.
  • Edited by Karan Saini / Illustrations by Paul Anthony George and Roshan Shakeel

The existing imagery comprises largely stereotypical images of silhouettes of men in hoodies, binary code, locks, and shields, all in dark tones of blue and green. The workshop aimed to identify the concerns with these existing images and to ideate on visuals that capture the nuanced concepts within cybersecurity, as well as to contextualise them for the Global South. It began with a discussion on the various concepts within cybersecurity, including disinformation, surveillance in the name of security, security researchers, regulation of big technology companies, gender and cybersecurity, etc. This was followed by a mapping of different visual elements in the existing cybersecurity imagery to infer the biases in them. Further, an ideation session was conducted to create alternate visualisations that counter these biases. A detailed report of the workshop can be read here.

The participants began by discussing the concerning impacts of present visualisations: there is a lack of representation of, and context from, the Global South. Misrepresentation of cybersecurity leaves people susceptible to disinformation, treats cybercrime as an abstract concept without direct impact, and oversimplifies the problem and its solutions. The ecosystem in which this imagery exists also presents a larger issue. A majority of the images are created as clickbait alongside media articles. Media houses thus benefit from the oversimplification and mystification of cybersecurity in such images.

Through the mapping of existing images present online, several concerns were identified. Vague elements and unclear representation add to the mystification of cybersecurity as a concept. In present depictions, the use of technological devices and objects leads to the lack of a human element, distancing the threat from any real impact on the people using these devices. The metaphor of a physical threat is often used to depict cybersecurity, using elements such as a lock and key. The recurring use of these elements gives a false idea of what is being secured or breached and how. Representations rely on tropes regarding the identity of hackers, and fail to capture the vulnerability of the system. The imagery gives the impression that systems which are breached are immensely secure to begin with and are compromised only as a result of sophisticated attacks carried out by malicious actors. The identity of hackers is commonly associated with cyber attacks and breaches, and the existing imagery reinforces this. Visuals showing a masked man or the silhouette of a man against a dark background are the usual markers of a malicious hacker in conventional cybersecurity imagery. While there is a lack of representation of women in stock cybersecurity images, another trope found was that of a cheerful woman coder. There were also images of faceless women with laptops[1]. The reductive nature of these images points to deeper concerns around gender representation in cybersecurity.

The participants examined the implications of such visual representation, and why there is a need to change the imagery. How can visual depictions be more representative? Can they avoid subscribing to a homogenised idea of an Indian context, remaining specific without being reductive? Can better depiction broaden understanding of cybercrime and emphasise the proximity of those threats? With technology, concepts are often understood through metaphors: how data is explained shapes how people perceive it. Visual imagery, done well, can play a critical role in demystifying concepts; illustrations can change the discourse. They must begin to incorporate intersecting aspects of gender, privacy, the susceptibility of vulnerable populations, and generational and cultural gaps, as well as manifestations of the described crimes, to make technological laypersons more aware of the threat.

Potential new imagery would need to address aspects such as disinformation and the importance of privacy and who has a right to it; change the representation of hackers; depict the cybersecurity community; explain specific concepts both to the general user and to people who are part of cybersecurity efforts in the country; convey the implications of cybercrime for vulnerable populations; and more, in an attempt to deconstruct and disseminate what cybersecurity looks like today.

The ideation session involved rethinking specific concepts such as disinformation, and ethical hacking to create alternate imagery. For instance, disinformation was visually imagined as a distortion of an already distorted message being perceived by the viewer. In order to bring attention to the impact of devices, a phone was thought of as a central object to which different concepts of cybersecurity can be connected.


‘Fake News Cascade’ by Paul Anthony George


‘Fake News’ by Paul Anthony George


‘Disinformation/ Fake News’  by Roshan Shakeel; The sketch is about questioning the validity of what we see online, and that every message we see is constructed in some form or the other by someone else.


‘Disinformation/ Fake News’  by Roshan Shakeel; The sketch visualizes how the source of information ('the original') gets distorted after a certain point.

For ethical hacking, a visualisation depicting a day in the life of an ethical hacker was thought of to normalize hacking and to focus on their contribution in security research.


‘A Day in the Life of an Indian Hacker’ by Paul Anthony George


'Surveillance in the Name of Security' by Roshan Shakeel

Resources on ethical hacking (HackerOne)[2] and hacker culture (2600.com)[3] were also consulted as part of the exercise to gather references on the work done by hackers. This allowed a deeper understanding of how the hacker community depicts itself. Check Point Research[4] and Kerala Police Cyberdome[5] were also examined for further insight into cybersecurity. With regard to gender representation, sources that use visual techniques to communicate concerns and advocacy campaigns were also referred to. The Gendering Surveillance[6] initiative by the Internet Democracy Project[7], which looks at how surveillance harms and restricts women, also offered insights on the use of illustrations supporting the case studies. Another reference was the "Visualising Women's Rights in the Arab World"[8] project by the Tactical Technology Collective[9]. The project aims to “strengthen the use of visual techniques by women's rights advocates in the Arab world, and to build a network of women with these skills”.[10]

More visual explainers and animations[11] from the Tactical Technology Collective were noted for their broader engagement with digital security and privacy. A video by the Internet Democracy Project that explains the Internet through rangoli[12] was observed specifically for setting the concept in an Indian context through the use of aesthetics.

The workshop concluded with a discussion of potential visual iterations: imagery of cybersecurity that is not technology-oriented but focussed on the behavioural implications of access to such technology, and illustrated public service announcements raising the profile of cybersecurity researchers or the everyday hacker. The impact of the discussion itself indicates the relevance of such an effort. Artists and designers can be encouraged to create a body of imagery that shifts discourse and perception; to begin visualising for advocacy; to demystify cybercrime and stop the abstraction that can lead to a false sense of security; to incorporate aspects of the debate unique to the Indian context; and to generate new dialogue and understanding of cybersecurity. A potential step forward from this workshop would be to engage the design community at large, along with domain experts, to create more effective imagery for cybersecurity.


[1] https://www.independent.co.uk/life-style/gadgets-and-tech/features/women-in-tech-its-time-to-drop-the-old-stereotypes-7608794.html

[2] https://www.hackerone.com/

[3] https://2600.com/

[4] https://research.checkpoint.com/about-us/

[5] http://www.cyberdome.kerala.gov.in/

[6] https://genderingsurveillance.internetdemocracy.in/

[7] https://internetdemocracy.in/

[8] https://visualrights.tacticaltech.org/index.html

[9] https://tacticaltech.org/

[10] https://visualrights.tacticaltech.org/content/about-website.html

[11] https://tacticaltech.org/projects/survival-in-the-digital-age-ono-robot-2012/

[12] https://internetdemocracy.in/2018/08/dots-and-connections/

Event Report on Intermediary Liability and Gender Based Violence

by Akriti Bopanna last modified Dec 21, 2018 07:16 AM
This report is a summary of the proceedings of the Roundtable Conference organized by the Centre for Internet and Society (CIS) at the Digital Citizen Summit, an annual summit organized by the Digital Empowerment Foundation. It was conducted at the India International Centre in New Delhi on November 1, 2018 from 11.30 a.m. to 12.30 p.m.

With inputs from, and editing by, Ambika Tandon. Click here to download the PDF


Introduction

Background

The topic of discussion was intermediary liability and Gender Based Violence (GBV). Over the past few years, the debate on GBV, globally and in India, has evolved to include myriad forms of violence in online spaces. These range from violence native to the digital, such as identity theft, to extensions of traditional forms of violence, such as online harassment, cyberbullying, and cyberstalking[1]. Given the extent of personal data available online, cyber attacks have led to a variety of financial and personal harms.[2] Studies have explored the extent of psychological and even physical harm to victims, which has been found to be similar in effect to violence in the physical world[3]. Despite this, technologically-facilitated violence is often ignored or trivialised. Where present, redressal mechanisms are often inadequate, further exacerbating the effects of violence on victims.

The Roundtable explored how intermediaries can help tackle gender based violence and discussed attempts at making the Internet a safer place for women, which can ultimately help make it a gender-equal environment. It also analyzed the key concerns of privacy and security, leading the conversation to how we can demand more from platforms for our protection and how best to regulate them.

The roundtable had four female participants and one male participant from various civil society organisations working on rights in the digital space.

Roundtable Discussion

Online Abuse

The discussion commenced with the acknowledgement that it is well documented that women and sexual minorities face a disproportionate level of violence in the digital space, as an extension and reproduction of the physical space. GBV exists on a continuum from the physical to the verbal to the technologically enabled, either partially or fully, with overflowing boundaries and deep interconnections between different kinds of violence. Some forms of traditional violence, such as harassment, stalking, bullying, and sex trafficking, extend themselves into the digital realm, while other forms are uniquely tech-enabled, like doxxing and the morphing of imagery. Due to this, considerations of anonymity, privacy, and consent need to be re-thought in the context of tech-enabled GBV. These come into play in a situation where the technological realm has largely been corporatised and functions under the imperative of treating the user and their data as the final product.

It was noted early on that ‘GBV online’ can be a misnomer, because such violence cuts across a number of spaces, and the participants concentrated on laying down the specific contours of tech-mediated or tech-enabled violence. One of the discussants stated that the term GBV is not a useful one, since it does not encompass everything that is talked about when referring to online abuse. The phenomenon that gets the most traction is trolling or abuse on social media. This is partly because it is the most visible people who are affected by it, and also because it is often the most difficult to treat under law. In a 2012 study by the Internet Democracy Project focusing on online verbal abuse on social media, every woman interviewed started by asserting that she was not a victim. The challenge with using the GBV framework is that it positions the woman as a victim. Other incidents on social media, such as verbal abuse involving rape threats or death threats, especially when there is an indication that the perpetrator is aware of the physical location of the victim, need to be treated differently from, say, online trolling.

Further, certain forms of violence, such as occurrences of ‘revenge porn’ or the non-consensual sharing of intimate images, including rape videos, are easier to fit within the description of GBV. It is important to make these distinctions because the remedies should then be commensurate with the perceived harm. It is not appropriate to club all of these together, since the criminal threshold for each act is different. Whereas being called a “slut” or a “bitch” would not be enough for someone to be arrested, if a woman is called that repeatedly by a large number of people, the commensurate harm could be quite significant. Thus, using GBV as a broad term for all forms of violence ends up invisibilising certain forms of violence and prevents a more nuanced treatment of the discussion.

In response to this, a participant highlighted the normalisation of gendered hate speech, to the extent that it is not even recognised as a form of hate speech. This lacuna in our law stems from the fact that we inherited our hate speech laws from a colonial era, when they were based on the ground of incitement to violence, particularly physical violence. As a result, we do not adopt the International Covenant on Civil and Political Rights (ICCPR) standard of incitement to discrimination. If the law were based on an incitement-to-discriminate standard, then acts of trolling could come under hate speech. Even in the United Kingdom, where there is higher sentencing for gender-based crime compared to crimes based on other markers of identity such as race, gender does not fall under the parameters of hate speech. This can also be attributed to the threshold at which criminalisation kicks in for such acts.

A significant aspect of online verbal abuse, pointed out by a participant, was that it does not affect all women equally. In one study, the Twitter accounts of 12 publicly visible women across the political spectrum were examined for two weeks in early December 2017; the tweets were filtered against keywords and analyzed for abusive content. One Muslim woman in the study faced extremely high levels of abuse, being consistently addressed as “Jihad man, Jihad didi or Jihad biwi”. According to the participant, she is also the least likely to get justice through the criminal system for such vitriol; this disparity in the likelihood of facing online abuse and of accessing official redressal mechanisms should be recognized. Another discussant reaffirmed the importance of distinguishing between online abuse against someone and gender based violence online, where the threat itself is gendered.

In a small ethnographic study with the Bangalore police undertaken by one of the participants, the police were asked for their opinion on the following situation: a woman voluntarily provides photos of herself while in a relationship and, once the relationship is over, the man distributes them. Is there a cause for redressal?

The policemen responded that since she gave the photos voluntarily in the first instance, the burden of the consequences now falls on her. So even within a feminist framework of consent and agency, where we have laws against voyeurism and the publishing of photos of private parts, that consent and agency are not being recognized by institutional response mechanisms.

Intermediary Liability

Private communications-based intermediaries can be understood to be of two types: those that enable the carriage/transmission of communications and provide access to the internet, and those that host third party content. The latter have emerged as platforms that are central to the exercise of voice, the exchange of information and knowledge, and even the mobilisation of social movements. The norms and regulations around what constitutes gender based violence in this realm are then shaped not only by state regulations, but also by the content moderation standards of these intermediaries. Further, the kinds of preventive and redressal tools available are controlled by these platforms. More than before, we are looking deeply into the role of these companies, which function as intermediaries and control access to third party content without performing editorial functions.

In the intermediary liability framework formulated in the United States in the 1990s, the intermediaries that were envisioned were not the intermediaries we have now. Today's intermediary is able to access and possess your data while urging a certain kind of behaviour from you. There is then an intermediary design duty which is not currently accounted for by the law. Moreover, the law practices a one-size-fits-all regime, whereas what could be more suitable is having approaches tailored to the offence. So for child pornography, a ‘removal when uploaded’ action using artificial intelligence or machine learning is appropriate, but a notice-and-takedown approach is better for other kinds of content takedown.

Globally, another facet is that of safe harbour provisions for platforms. When intermediaries such as Google and Facebook were established, they were thought of as neutral pipes, since they were not creating content but only facilitating access to it. However, as they have scaled and as their role in the ecosystem has grown, they have become key intervention points for governments as gatekeepers of free speech. One needs to be careful in asking for an expansion of the role and responsibilities of platforms, because complementary to that, the frameworks regulating them will also need to be revisited. Additionally, would a similar standard be applicable to larger and smaller intermediaries, or do we need layers of distinction between their responsibilities? Internet platforms such as GAFA (Google, Apple, Facebook and Amazon) wield exceptional power to dictate what discourse takes place, and this translates into the online and offline divide disappearing. Do we then hold these four intermediaries to a separate and higher standard? If not, then all small players will be held to stringent rules, disadvantaging their functioning and ultimately stifling innovation. Thus, regulation is definitely needed, but instead of a uniform regime, one that is layered and tailor-made to different situations and levels of platform visibility could be more useful.

Some participants shared the opinion that because these intermediaries are based in foreign countries and have their primary legal obligations there, this insulation plays out to the citizen's benefit. It lends a layer of freedom of speech and expression that is not present in India's substantive law, rule of law framework, or institutional culture.

Child pornography is an area where platforms are taking on a lot of responsibility. Google has spoken about using machine learning algorithms to block 40% of such content, and Microsoft is working on a similar process. If we argue for more intervention from platforms, we simultaneously need to scrutinise their machine learning algorithms. Concerns about how these algorithms are deployed, and how they are incorporated into the framework of controlling child pornography, are relevant since there is not much accountability or transparency regarding them.
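The 'removal when uploaded' approach described here is, at its core, a matching problem. As an illustrative sketch only (real systems such as Microsoft's PhotoDNA use perceptual hashes that survive re-encoding, not the exact hashing shown here, and the byte strings below are invented placeholders), an upload can be checked against a blocklist of known-bad file hashes before it is published:

```python
import hashlib

# Hypothetical blocklist of SHA-256 digests of known-bad files.
# Production systems use perceptual hashing so that resized or
# re-encoded copies still match; exact hashing is the simplest case.
BLOCKLIST = {hashlib.sha256(b"known-abusive-file-bytes").hexdigest()}

def should_block(file_bytes: bytes) -> bool:
    """Return True if the upload's hash matches a blocklist entry."""
    return hashlib.sha256(file_bytes).hexdigest() in BLOCKLIST

assert should_block(b"known-abusive-file-bytes") is True
assert should_block(b"harmless-holiday-photo") is False
```

The transparency concern in the text maps directly onto this sketch: who curates `BLOCKLIST`, and how matches are audited, is invisible to users of the platform.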

Another divide that has emerged from recent events is that between traditional media and new media. Taking the example of rape victims and sexual harassment claims, there are strict rules regarding the kinds of details that can be disclosed and the manner in which this is to be done. In the Kathua rape case, for instance, the Delhi High Court sent a notice to Twitter and Facebook for revealing details, because such norms exist even though they have not been made applicable to platforms. Hence, certain regulations that apply to old media are missing from the frameworks applicable to new media, and at some level that gap needs to be bridged.

Role of Law

One of the participants brought up the question: what is the proper role of the law, and does it come first or last? If the latter, the burden falls upon the kind of standard-setting that we do as a society. The role of platforms as entities mediating the online environment was discussed, given the concerns that have been highlighted about this environment, especially for women. The third thing to be considered is whether we run the risk of enforcing patriarchal behaviour by doubling down on either of the two aforementioned factors. If legal standards are made too harsh, they may end up reinforcing a power structure that is dominated by upper-caste men, who comprise a majority of staff within law enforcement and the judiciary. Even though the subordinate judiciary does have mahila courts now, the application of the law seems to reify the position of the woman as victim. This also brings up the question of who can become a victim within such frameworks, where selective biases, such as notions of chastity, come into play as courts function.

An assessment of the way criminal law in India is used to stifle free speech was carried out in 2013 and repeated in 2018, illustrating how censorship law is used to stifle the voices of minorities and people critical of the political establishment. Even though it is perhaps time to revisit the earlier conceptualisation of intermediaries as neutral pipes, the court cases regarding safe harbour in India are concerning. Many of them are pursued with the ostensible objective of protecting women's rights. In Kamlesh Vaswani v. Union of India, the petition claims that porn is a threat to Indian women and culture, ignoring the reality that many women watch porn as well. Pornhub releases viewership figures every year, and one third of its Indian subscribers are women; this is not taken into account in such petitions. In Prajwala v. Union of India, an NGO sent the Supreme Court a letter raising concerns about videos of sexual violence being distributed on the internet. The letter sought to bring attention to the existence of such videos, as well as their rampant circulation on online platforms. At some point in the proceedings, the Court wanted the intermediaries to use keywords to take down content; keeping aside poor implementation, the rationale behind such a move is problematic in itself. For instance, if 'sex' is chosen as one of those keywords, then all sexual education will disappear from the Internet. There are many problems with court-encouraged filtering systems, such as one where a system automatically flags a rape video when it is uploaded: how would it distinguish between a consensually made video depicting sexual activity and a rape video? The narrow-minded responses in the Sabu Mathew and Prajwala cases originate in the conservative culture regarding sexual activity prevalent in India.
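The over-blocking worry about court-encouraged keyword filtering is easy to demonstrate in code. The sketch below is hypothetical (the keyword list and sample texts are invented for illustration, not taken from any filing): a naive substring filter cannot separate abusive material from sex education, or even from an innocent word that happens to contain a blocked keyword:

```python
# Hypothetical blocked-keyword list, as a court-ordered filter might use.
BLOCKED_KEYWORDS = ["sex", "rape"]

def naive_filter(text: str) -> bool:
    """Flag any text containing a blocked keyword (case-insensitive)."""
    lowered = text.lower()
    return any(word in lowered for word in BLOCKED_KEYWORDS)

# A public-health text is flagged exactly like abusive content:
assert naive_filter("Comprehensive sex education curriculum") is True
# Even 'grape' trips the filter, since it contains 'rape':
assert naive_filter("grape harvest news") is True
assert naive_filter("weather report") is False
```

The filter has no notion of context or consent, which is precisely the objection raised against keyword-based takedown orders.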

In a research project undertaken by one of the participants in the course of their work with women's rights activists and civil society organisations, they suggested including gender, sexuality and disability as grounds for hate speech. The suggestion was not well received: the activists vehemently opposed more regulation. In their opinion, the laws India has in place are not being upheld, and creating new laws will not change anything if the implementation of legislation remains flawed. For instance, even though the Supreme Court struck down S.66A, the Internet Freedom Foundation has provided instances of its continued use by police officers to file complaints.[4] Hate speech laws can be used to both ends; unlike in the US, Indian law does not determine whose speech it wants to protect. Consequently, in the US a white supremacist gets as much protection as a Black Lives Matter activist, but in India that is not the case. The latest Law Commission report on hate speech in India tries to make progress by incorporating the ICCPR view of incitement to discrimination and including dignity among the harms. It specifically speaks about hate speech against women, saying that it does not always end in violence but does harm their dignity and standing in society. Protectionist forms of speech regulation, such as hate speech laws, often end up hurting the people they aim to protect by reinforcing stereotypes.

Point of View undertook a study of the use of S.67 of the Information Technology (IT) Act, which criminalizes obscene speech over any medium covered by the Act, and found that the section was being used to criminalize political speech. In many censorship cases, such provisions benefit those in power.[5] The obscenity provision in S.67, for instance, does not protect women's rights; it protects the morality of society. Although such prosecutions are conducted in the name of protecting women, when a woman herself decides to publish a revealing picture of herself online, the law disallows it. That kind of control over sexuality is part of a larger patriarchal framework which neither supports women's rights nor recognises their sexuality. Indian law does, however, contain some robust provisions against image-based abuse, with some recognition that women in particular are vulnerable to it. S.66E of the IT Act makes it a criminal offence to share images of someone's private parts without their consent, which also encompasses instances of 'revenge porn'. That provision has been in place in India since 2008, in contrast to the US, where half the states still have no equivalent. Since certain kinds of vulnerability are already adequately recognised in the law, one should be wary of calls for censorship and for lowering the standards for criminalizing speech.

Non-legal interventions

This section centres on redressal mechanisms that do not emanate from the law but can address some forms of online violence. All of the participants emphasized the importance of creating safe spaces through non-legal interventions. It was debated whether the law must always be approached, or whether forms of online violence can be categorized according to the gravity of the violation, with community solutions used first and the law treated as a last resort. For instance, there was support for community tools such as 'feminist trollback', where humour is used to troll the trolls. Trolls feed on the fear of being trolled, so the harm can be mitigated by community initiatives in which the target responds to trolls with the help of other people in the community. It was reiterated that interventions beyond the technical and the legal are needed, not only because of power relations within these spaces but also because of unequal access to the spaces in the first place. Accordingly, the government should work on initiatives that get more women online and on policies that make smartphones and data services more accessible. This would also improve women's safety through strength in numbers.

In cases of the non-consensual sharing of intimate images, the law can be the primary forum; but in cases of trolling and other social media abuse, the question was raised: should we enhance the role of the intermediary platforms? As the first point of intervention, their responsibility should be greater than it currently is. However, this would require them to act in the nature of police or judiciary, and would necessitate an examination of their algorithms. A large proportion of the designers of such algorithms are white men, which increases the possibility that their biases, against women of colour for instance, feed into the algorithms and reinforce a power structure that lacks accountability.

Participants questioned the lack of privacy by design, with the example in mind being how registrars do not make domain owner details private by default. Users have to pay an additional fee to keep their details out of public view, and the notion of having to pay for privacy is unsettling. Nor is any information about the privacy feature provided when the domain name is purchased. It was acknowledged that for audit and law enforcement purposes it is imperative to have the details of a domain name's owner, for instance in cases of websites selling fake medicines or arms, or hosting child pornography. The question thus boils down to what kind of information is necessary for law enforcement. Global domain name rules also affect privacy at the national level. The process of ascertaining the suitability and necessity of different kinds of information excludes ordinary citizens, since the consultations take place between the regulatory authority and the state. This makes it difficult for citizens to participate and contribute to this space without government approval.

Issues were also flagged with community standards: the violence women experience occurs partly because the harms are not equal for all. Some users are targeted specifically because of the community they come from or the views they hold, and often because they represent a 'type' of woman that does not adhere to the 'ideal' of a woman held by the perpetrator. Unfortunately, community standards do not recognise these differential harms towards certain communities, in India or globally. Twitter, for example, regularly engages in shadow banning, targeting people who do not conform to the moral views prevalent in the society where the platform is censoring. We learn of these instances only when members of our communities notice and notify us. The community has already put labour into flagging such violations to the intermediary, and that labour also needs recognition; in this situation, Twitter handles the two parties in question disproportionately. Without additional protections for certain disadvantaged communities, community standards could thus become a double-edged sword.

Conclusion

Currently, intermediaries are considered neutral pipes through which content flows, and hence bear no liability as long as they do not perform editorial functions. This has been useful in ensuring that freedom of speech is not harmed. However, given their potential ability to remedy the problem, as well as the fact that intermediaries sometimes benefit financially from such activities, it is important to examine the intermediaries' responsibility in addressing these instances of violence. Governments across the world have taken different approaches to this question.[6] Models such as that of the US, where intermediaries alone have been responsible for instituting redressal mechanisms, have proven ineffectual. On the other hand, in Thailand, where intermediaries are held primarily liable for content, the monitoring of content has led to several free speech harms.

People are increasingly looking to other forms of social intervention to combat online abuse, since technological and legal interventions do not fully address and resolve the myriad issues gathered under this umbrella term. There is also a need to make the law gender-sensitive and to improve its execution on the ground, possibly through sensitisation of law enforcement authorities. Gender-based violence as a catch-all phrase does not do justice to the full spectrum of experiences that victims, especially women and sexual minorities, face. Often these experiences do not attract criminal punishment under the restricted framework of the current law, and need to be seen through the prism of hate speech in order to strengthen these provisions.

Some actions within GBV receive more attention than others, and as a consequence these are the ones platforms and governments are most concerned with regulating. Considerations of free speech and censorship, and of the role of intermediaries as flag-bearers of either, have translated into growing calls for these players to take greater responsibility. The roundtable raised key concerns about revisiting intermediary liability in the context of the platforms' scale, their content moderation policies, and their machine learning algorithms.


[1] See Khalil Goga, “How to tackle gender-based violence online”, World Economic Forum, 18 February 2015, <https://www.weforum.org/agenda/2015/02/how-to-tackle-gender-based-violence-online/>. See also Shiromi Pinto, “What is online violence and abuse against women?”, Amnesty International, 20 November 2017, <https://www.amnesty.org/en/latest/campaigns/2017/11/what-is-online-violence-and-abuse-against-women/>.

[2] Nidhi Tandon, et. al., “Cyber Violence Against Women and Girls: A worldwide wake up call”, UN Broadband Commission for Digital Development Working Group on Broadband and Gender, <http://www.unesco.org/new/fileadmin/MULTIMEDIA/HQ/CI/CI/images/wsis/GenderReport2015FINAL.pdf>

[3] See Azmina Dhrodia, “Unsocial Media: The Real Toll of Online Abuse against Women”, Amnesty Global Insights Blog, <https://medium.com/amnesty-insights/unsocial-media-the-real-toll-of-online-abuse-against-women-37134ddab3f4>

[4] See Abhinav Sekhri and Apar Gupta, “Section 66A and other legal zombies”, Internet Freedom Foundation Blog, <https://internetfreedom.in/66a-zombie/>.

[5] See Bishakha Datta, “Guavas and Genitals”, Point of View, <https://itforchange.net/e-vaw/wp-content/uploads/2018/01/Smita_Vanniyar.pdf>.

[6] ‘Examining Technology-Mediated Violence Against Women Through a Feminist Framework: Towards appropriate legal-institutional responses in India’, Gurumurthy et al., January 2018.

Feminist Methodology in Technology Research: A Literature Review

by Ambika Tandon last modified Dec 25, 2018 03:18 PM
This literature review has been authored by Ambika Tandon, with contributions from Mukta Joshi. Research assistance was provided by Kumarjeet Ray and Navya Sharma. The publication has been designed by Saumyaa Naidu.

Abstract

Feminist research methodology is a vast body of knowledge, spanning across multiple disciplines including sociology, media studies, and critical legal studies. This literature review aims to understand key aspects of feminist methodology across these disciplines, with a particular focus on research on technology and its interaction with society. Stemming from the argument that the ontological notion of objectivity effaces power relations in the process of knowledge production, feminist research is critical of the subjects, producers, and nature of knowledge. Section I of the literature review explores this argument along with a range of theoretical concepts, such as standpoint theory and historical materialism, as well as principles of feminist research derived from these, such as intersectionality and reflexivity.

Given its critique of the "god's eye view" (Madhok and Evans, 2014) of objectivist research, feminist scholars have largely developed qualitative methods that are more conducive to acknowledgement of power hierarchies. Additionally, some scholars have recognised the political value in quantification of inequalities such as the wage gap, and have developed intersectional quantitative methods that aim at narrowing down measurable inequalities. Both sets of methods are explored in Section II of the literature review, interspersed with examples from research focused on technology.

Introduction

According to authoritative accounts on the subject, while research focused on gender or women predates it, the field of ‘feminist methodology’ explores questions of the epistemology and ontology of research and knowledge. Initiated in scholarship arising out of the second wave of North American feminism, it anchors itself theoretically in the post-modernist and post-structuralist traditions, and additionally critiques positivism as a project furthering patriarchal oppression. North American feminist scholars critique traditional methods within the social sciences from an epistemological perspective for producing acontextual and ahistorical knowledge, replicating the tendency of positivist science to enumerate and measure subjective social phenomena. This, according to them, invisibilises the web of power relations within which the ‘known’ and the ‘knower’ in knowledge production are placed. The critique is then used to devise methods, underlying principles, and ethics for conducting more egalitarian research, aimed at achieving goals of social justice.

The second wave feminist movement was itself critiqued by Black feminists and feminists from the global South for excluding non-white and non-heterosexual identities. Given its origins in the global North, scholars from the South have interrogated the meaning of feminism and feminist research in their own contexts. Some African scholars even describe the difficulty of publicly disclosing a project as feminist, owing to popular resistance to the term: feminism is rejected by certain social groups as an alien social movement antithetical to their “African cultural values.” Their own critique of “White feminism” targets its essentialisation of womanhood and the resulting negation of the (neo)colonial and racialised histories of African women. This has led scholars from the global South to critically interrogate feminism and feminist methods, acknowledge the multiplicity of feminisms, and initiate creative inquiries into different forms of feminist methodology. Feminist researchers who work in contexts of political violence, instability, repression, scarcity of resources, poor infrastructure, and/or lack of social security have pointed out that traditional research methods assume conditions largely absent from their realities, leading them to experiment with feminist research.

Feminist research across this variety of contexts raises ontological and epistemological concerns about traditional research methods and their underlying assumptions about what can be known, who can know, and the nature of knowledge itself. It argues that knowledge production has historically created epistemic hierarchies in which certain actors are designated ‘knowers’ and others the ‘known’. Such hierarchies wreak epistemic violence upon marginalised subjects by denying them the agency to produce knowledge, and by delegitimising forms of knowledge that are not normative. Acknowledging the role of power in knowledge production has the radical implication that the subjectivities of the researcher and the researched inevitably find their way into research and, more broadly, into knowledge production. This challenges the objectivity and “god’s eye view” of traditional humanistic knowledge and its processes of production. Feminist research eschews scientifically orthodox notions of how “valid knowledge will look”, creates novel resources for understanding epistemic marginalisation of various kinds, and provides a myriad of tools to disrupt structural hierarchies through and within knowledge production and dissemination.

Feminist research, given its evolution from living movements and theoretical debates, remains a contested domain. It has reformulated a range of qualitative and quantitative research methods, and has also surfaced methods of its own, such as experimental and action-based research. What these have in common are theoretical dispositions to identify, critique, and ultimately dismantle power relations within and through research projects. Feminist research is thus “critical, political, and praxis oriented”. Several disciplines within the social sciences, such as feminist technology studies, cyberfeminism, and cultural anthropology, have built feminist approaches to the study of technology and technologically mediated social relations. However, this remains a minor strand of research on technology.

This literature review aims to address that gap through scoping of such methods and their application in technological research. Feminist methodology provides a critical lens that allows us to explore questions and areas in technology-based research that are inaccessible by traditional methods. This paper draws on examples from technology-focused research, covering key interdisciplinary feminist methods across fields such as gender studies, sociology, development, and ICT for development. In doing so, it actively constructs a history of feminist methodology through authoritative sources of knowledge.


Read the full paper here

European E-Evidence Proposal and Indian Law

by Vipul Kharbanda last modified Dec 23, 2018 04:45 PM
In April 2018, the European Union proposed a new regime for the cross-border sharing of data and information by issuing two draft instruments, an E-evidence Regulation (“Regulation”) and an E-evidence Directive (“Directive”) (together, the “E-evidence Proposal”). The Regulation sets out the legal machinery of the regime itself, while the Directive requires member states to enact laws obliging service providers to designate legal representatives so that they can comply with the proposed regime.

The main feature of the E-evidence Proposal is twofold: (i) establishment of a legal regime whereunder competent authorities can issue European Production Orders (EPOs) and European Preservation Orders (EPROs) to entities in any other EU member country (together the “Data Orders”); and (ii) an obligation on service providers offering services in any of the EU member countries to designate legal representatives who will be responsible for receiving the Data Orders, irrespective of whether such entity has an actual physical establishment in any EU member country.

In this article we will briefly discuss the framework proposed under the two instruments, and then discuss how service providers based in India whose services are also available in Europe would be affected by these proposals. The author would like to make it clear that this article is not intended as an analysis of the E-evidence Proposal, and shall not attempt to bring out the shortcomings of the proposed European regime, except insofar as such shortcomings may affect the India-based service providers discussed in the second part of the article.

Part I - E-evidence Directive and Regulation

The E-evidence Proposal introduces the concept of binding EPOs and EPROs. Both Data Orders must be issued or validated by a judicial authority in the issuing EU member country. A Data Order can be issued to seek the preservation or production of data that is stored by a service provider located in another jurisdiction and that is necessary as evidence in a criminal investigation or proceeding. Such Data Orders may only be issued if a similar measure is available for the same criminal offence in a comparable domestic situation in the issuing country. Both Data Orders can be served on entities offering services such as electronic communication services, social networks, online marketplaces, other hosting service providers, and providers of internet infrastructure such as IP address and domain name registries. Thus companies such as Big Rock (a domain name registry), Ferns n Petals (an online marketplace providing services in Europe), and Hike (social networking and chat), or any subscription-based website that allows access to subscribers in Europe, would potentially be covered by the E-evidence Proposal. The EPRO, like the EPO, is addressed to a legal representative outside the issuing country's jurisdiction to preserve data in view of a subsequent request to produce it; such a request may be issued through MLA channels in the case of third countries, or via a European Investigation Order (EIO) between EU member countries. Unlike surveillance measures or statutory data retention obligations, which this proposal does not provide for, the EPRO is an order issued or validated by a judicial authority in a concrete criminal proceeding, after an individual evaluation of proportionality and necessity in every single case.[1] Like the EPO, it refers to specific known or unknown perpetrators of a criminal offence that has already taken place. The EPRO only allows the preservation of data already stored at the time the order is received, not access to data at a future point in time.

While EPOs to produce subscriber data[2] and access data[3] can be issued for any criminal offence, an EPO for content data[4] and transactional data[5] may only be issued by a judge, a court, or an investigating judge competent in the case. If such an EPO is issued by any other authority (one competent to issue such an order in the issuing country), it must be validated by a judge, a court, or an investigating judge. An EPO for subscriber data and access data may also be validated by a prosecutor in the issuing country.

To reduce obstacles to the enforcement of the EPOs, the Directive makes it mandatory for service providers to designate a legal representative in the European Union to receive, comply with and enforce Data Orders. The obligation of designating a legal representative for all service providers that are operating in the European Union would ensure that there is always a clear addressee of orders aiming at gathering evidence in criminal proceedings. This would in turn make it easier for service providers to comply with those orders, as the legal representative would be responsible for receiving, complying with and enforcing those orders on behalf of the service provider.

Grounds on which EPOs can be issued

The grounds on which Data Orders may be issued are contained in Articles 5 and 6 of the Regulation, which make it very clear that a Data Order may only be issued in a case if it is necessary and proportionate for the purposes of a criminal proceedi