Blog
The 5G Factor: A Primer
Introduction
The future of the 5G industry merits discussion here because of the unique leverage, in addition to its control of supply chains, it would give China if it emerges as the ultimate victor in a market projected to be worth $2 trillion by 2030. Amidst the pandemic, China has exploited its monopoly position in the manufacture of medical equipment through what has been termed "mask diplomacy", taking advantage of vulnerable economies and strained public health systems. With states struggling to limit deaths and prevent hospitals from being overwhelmed, Xi Jinping strategically weaponised China's control of the supply chain for essential medical equipment, including half the world's N95 masks, to further Huawei's 5G gains.
The central question is why 5G infrastructure is so crucial from a geo-economic as well as a political standpoint. 5G goes beyond a mere generational jump and promises to be the harbinger of the future, with the Internet of Things, M2M communication, self-driving cars and more built on top of it. The rollout of 5G is thus not only an important economic interest for China and its telecom companies, but also a national strategic priority in its competition with the United States for control of global telecom and IT infrastructure. The present push for 5G dominance goes beyond economic interest – it was one of the only areas where production and projected numbers conspicuously did not fall during the pandemic, and China has kept pushing for expansion through its BRI and Digital Silk Road initiatives. It represents a larger geopolitical battle for technological control and the shape of global governance in the future, which explains why the US has been making every attempt possible to prevent China from dominating 5G.
The telecommunication standard-setting bodies, most importantly the 3GPP, represent a crucial aspect of the US-China 5G battle. The standard-setting process is an important element of the geopolitical contest, and one where the US's allies have not followed suit in isolating China. Domination of the standard-setting process has further allowed Huawei to amass a large portfolio of patents and dominate global supply chains in 5G technology - providing the full stack of equipment required for 5G functionality.
After last year's sanctions on Huawei, the 3GPP had warned the US of a possible breakdown of, and divergence in, standard-setting mechanisms, given the dominant role Huawei and China have played in setting 5G standards. The US's role in the process has been constrained because US tech companies, including Intel, Qualcomm and Amazon, refrained from active participation in standard-setting bodies where Huawei is a member. This was due to uncertainty about what information or technology they could legally share under the restrictions on working with Huawei.
Beyond delays in the release of the latest standards, the pandemic has significantly strained US-China relations, which in turn is likely to affect the standard-setting process. The COVID-19 outbreak worsened relations and brought hostility from the United States that could further harm its own prospects in the 5G battle. The fresh set of sanctions on Huawei certainly suggested that this hostility might cause further disengagement from the standard-setting process, as seen over the past year. However, the US Department of Commerce has carved out an exception to its Huawei ban to permit US companies to work with Huawei in the standard-setting process. This could enable more balanced engagement, with US companies present in standard-setting bodies alongside Huawei. The amended rule seeks to ensure full participation of US companies in voluntary standard-setting bodies in order to curb Huawei's position of strength.
The US has also released a "national strategy to secure 5G" to encourage greater government involvement in the standard-setting process. These measures indicate US recognition of the importance of international standard setting and its resolve not to let Huawei dictate 5G standards. While these actions are ostensibly aimed at diluting Huawei's influence in the standard-setting bodies - one of the most important arenas in the 5G battle - their tangible impact on the dynamics of these bodies remains to be seen. In bodies where Huawei has consistently been the largest contributor to 5G standards, US companies face an uphill battle to quickly recover lost ground and gain sufficient influence.
Beyond this brief overview, the sections below analyse in detail the factors working in favour of China's dominance in the 5G industry, the potential threats to it, and the resultant geo-economic and political impacts.
Factors in favour of China’s 5G dominance
As the coronavirus outbreak worsened in China, it seemed that the country's position as a leader in 5G technology was threatened by the virus. Initial reports in February pointed towards a slowdown in China's 5G rollout owing to quarantine measures and the complete lockdown of important regions such as Hubei. However, the state-owned enterprises (SOEs) - China Mobile, China Unicom and China Telecom - have continued to push for a faster 5G rollout in China and for laying 5G infrastructural foundations abroad. Notably, as per a GSMA report in March, China remained on track with setting up 5G base stations even at the peak of the outbreak.
The crisis has made it far more complicated for the US to lobby against the use of Chinese 5G infrastructure in Europe and in developing economies. There are three important reasons for this: a) the massive financial burden of recovering from the pandemic; b) the increased need for digital connectivity in the post-COVID world; and c) China's strategic leveraging of its monopoly on medical equipment to obtain acceptance of Huawei 5G.
On the international front, the United States' best attempts to persuade its European allies not to adopt Huawei 5G infrastructure have been repeatedly thwarted. Its objections have been based on alleged security concerns arising from "backdoors" present in the network. The US has warned the world that Huawei effectively functions as a wing of the Chinese government, and that conflicting state interests might threaten the stability of core communication systems and critical infrastructure, including power grids and water supply.
Europe, however, has taken a balanced approach, adopting risk mitigation plans that restrict 'high-risk' vendors (a category in which Huawei has been included), particularly from "critical or sensitive infrastructure networks." Further, European operators already have Huawei infrastructure integrated into their 4G networks, and a ban would be financially imprudent given the costs of replacing existing networks and expanding 5G infrastructure from alternative sources. This is even more pronounced given Europe's current recessionary outlook and the massive expenditure undertaken to support public healthcare and boost the economy. The likelihood of adopting China's 5G infrastructure to cut costs has never been greater.
In developing economies, there is a heightened need for digital connectivity to weather the current pandemic and prevent future crises. Beyond the conventional innovations associated with 5G networks, the technology has been used to facilitate online schooling and public healthcare. Digital health technologies saw several innovations, including the use of artificial intelligence, big data and cloud computing to improve case tracking and the efficiency of diagnosis and treatment (telemedicine), as well as thermal imaging systems and health apps deployed during China's months-long lockdown. The fallout of the pandemic, and China's seemingly effective response to it, have furthered the narrative that 5G-enabled digital connectivity and the technologies it enables - cloud computing, AI and the like - are a certain benefit for any country's public healthcare and crisis-response mechanisms. This may further bolster China's efforts to establish its 5G infrastructure in several developing economies in furtherance of its Digital Silk Road. In fact, despite strong anti-China sentiment, Huawei has recently undertaken strategic events with India's telecom association to boost its position in India, showcasing its 5G-enabled health technologies used to tackle the virus.
On mask diplomacy, there have been reports that Xi Jinping made the supply of masks and other essential medical equipment contingent on acceptance of Huawei 5G infrastructure. Using economic coercion to broaden Chinese influence and global power is a recurring phenomenon, with its patented 'debt trap' implemented successfully under the BRI. The response from countries accepting Chinese aid has been either gracious and grateful (as in the case of Italy, Eastern Europe, Spain and the Netherlands) or one of grudging acceptance with no real alternative at hand, as seen with most major Western powers. Even Huawei, ZTE, Alibaba and other Chinese Big Tech companies stepped up to provide medical aid to improve their image after recent security-related controversies. (For a more detailed look into Europe-China relations during COVID-19, see here.)
Claims that China hid information about the coronavirus outbreak from the rest of the world and the WHO contributed strongly to its negative image. As a crucial part of rehabilitating this perception, China mounted a massive propaganda campaign in Western media. China's 'information warfare' aimed to further its narrative of having acted decisively in managing the outbreak locally and of leading the global fight against the coronavirus through its medical and humanitarian aid. These efforts have also been viewed as China's attempt to upend global governance norms and replace a US that has been unable to respond adequately during the crisis.
However, the confused and sometimes aggressive messaging from Beijing has backfired with some countries, and the soft power gains are likely to be limited. China's medical aid and propaganda push, aimed at arresting the decline in its image, has seen limited success in specific geographic areas, but it has achieved the crucial task of securing loyalty from most of China's allies, including the South-East Asian nations, Italy, Eastern Europe and several others.
The crucial gain for China lies in states' recognition of the possible repercussions of acting against Chinese interests, i.e., Huawei's interests - the overt threat of economic, political and diplomatic retaliation. This highlights the point that China's soft power gains, though limited, are underpinned by its use of economic coercion to achieve its ends.
Threats to Huawei and Chinese Dominance
Beyond the factors contributing to Huawei's dominance, recent developments have created threats to its growing clout in the 5G industry. These include: a) the fresh set of US sanctions targeting Huawei; b) the UK's security review of Huawei, the subsequent ban and its proposed 5G alliance; and c) India's recent ban on Chinese apps amid border tensions with China, alongside Reliance Jio, a major Indian telecom operator, announcing independent home-grown 5G network solutions. The likely impacts and effectiveness of these threats are evaluated here.
Nearly a year after its first, largely ineffective sanctions against Huawei, the US has imposed fresh sanctions on the company. Despite last year's sanctions, which barred the export of some American technology, including access to Google services on its mobile devices, the Chinese tech giant had a remarkably successful 2019, with a 19% rise in revenue. This prompted Washington to adopt a more direct approach by banning the use of any American tools for making the crucial semiconductor chips in Huawei's products. The official sanctions, issued by the US Department of Commerce under the Export Administration Regulations (EAR), bar chips designed or customised to Huawei's specifications from being manufactured even by foreign companies using American equipment, except under a license.
Superficially this appears to be a master-stroke, since all major manufacturers, including Taiwan Semiconductor Manufacturing Company (TSMC) - Huawei's main supplier - and China's Semiconductor Manufacturing International Corporation (SMIC), use some American tools in their chip manufacturing process. However, the increasingly globalised nature of the chip-making industry, with only 27% of American manufacturers' plants based in the US, had curbed the effectiveness of the earlier sanctions, exposing the limits of what US domestic law could achieve globally. Targeting chip-making tools, for which a majority of the larger companies' assets are held in the US, may appear to allow more effective control.
Initial responses have focused on the possibility that these sanctions threaten Huawei's survival; however, this is not the first time Huawei has spoken of its 'survival' being threatened by US sanctions. In keeping with the pattern of US efforts to sanction Huawei, this move may backfire as well, for a number of reasons. Legal experts have identified loopholes that would permit companies such as TSMC and others to continue supplying chips to Huawei. The scope of the words 'designed by Huawei' in the EAR remains vague, which may permit Huawei to purchase uncustomised chips. Further, Huawei's model involves paying contract manufacturers to assemble its 5G base stations and mobile devices, with chips shipped directly to them. Lawyers analysing the EAR have argued that chips sent to such third parties would remain permissible for use, giving Huawei's devices indirect access to the chips.
The sanctions have also prompted numerous responses, including the Chinese government's announcement of $1.4 trillion in funding by 2025 to develop independent capacity in its 5G technology industry, including chip-making. To counter the fallout of an escalating trade war and protect its core business, Huawei has been building a stockpile of its most important chips since 2018, ensuring supply for the next two years. SMIC, China's biggest chip-maker, has simultaneously announced a $2.2 billion investment from Chinese state investors to expand its production capacity in China nearly six-fold. Samsung Electronics also has a chip-making factory in Xi'an, China, and intends to invest nearly $115 billion over the coming decade. All of these measures suggest that Huawei will not be an entity easily taken down by US sanctions, not with the CCP behind it.
In January 2020, the UK was in favour of permitting Huawei access to 35% of its network. However, the implications of US sanctions and pressure from a group of MPs in the ruling party prompted a security review of Huawei's involvement by the National Cyber Security Centre (NCSC). The review was conducted against the backdrop of Chinese threats of retaliation if the UK decided to exclude Huawei. Following the review, the UK has banned Huawei and hopes to phase out its equipment from UK networks by 2027. The alternatives the UK must now explore include Nokia and Ericsson; its 5G rollout stands to be delayed by two to three years, with the additional cost of replacement placed at nearly £2.5 billion. Beijing has responded with contempt, with the Chinese foreign ministry and the Ambassador to the UK warning that the Huawei ban will draw stiff retaliation from the Chinese administration and Chinese businesses.
On the UK's future 5G plans, as mentioned in Part 1, the UK had proposed a 5G alliance of democratic nations comprising the G7 nations, South Korea, India and Australia. The main objective of the alliance is to finance existing telecommunication companies to create 5G networks independent of Huawei, given the alleged national security concerns. However, beyond the US and Australia, the remaining countries remain ambivalent about such an alliance.
The 5G alliance would face two challenges: garnering support from key members, including the G7 nations of Italy, France and Germany, and achieving the required technological capacity amidst economic turmoil. Italy continues to hold strong relations with China after signing on to the BRI in 2019 and receiving Chinese medical aid when it had been abandoned by its European neighbours during the pandemic. France has stated it has no intention of discriminating against telecom operators, including Huawei, but will build safeguards into its networks. Germany, though undecided on its Huawei policy, has had important telecom players express the need for Huawei's technological expertise, which ties into the second challenge.
Building the technological competence to beat Huawei's edge, even with all the funding their bruised economies can muster, presents an additional obstacle. If Huawei, the largest telecom technology provider in Europe, is removed, the alternative is sole reliance on Ericsson and Nokia, with the attendant danger of a duopoly over the entire continent. Further, Huawei's presence on the continent is more than substantial: it has spent decades conducting research and fine-tuning networks to industry requirements (for specific details see here). Europe's biggest telecom operator, Deutsche Telekom, has recently acknowledged that it "needs Huawei involvement" for the construction of 5G base stations in Germany, and that Huawei's elimination could delay 5G rollouts in Europe by two years. Switching away from Huawei imposes a large economic cost not only on governments but also on industries reliant on Huawei's efficient network and communication systems.
The hopes for a democratic 5G alliance, while important given legitimate national security concerns with Huawei, are at a nascent stage and come with a unique set of cooperative challenges, given the populist and isolationist policies adopted in a number of countries as a consequence of the pandemic. Europe was largely resistant to US warnings about the national security threats posed by Huawei even before the pandemic. Without the key European nations, the alliance would lack crucial technological expertise and, more importantly, the funding it would need to beat a China that keeps pumping billions into its 5G future.
India's recent ban on Chinese apps also merits consideration at this juncture. The Indian government's interim order banning 59 Chinese apps came in the backdrop of clashes along the India-China border. The ostensible reasons for the move are the alleged unauthorised sharing of Indian users' data with servers outside India, affecting national security and data privacy, as well as China's data-sharing laws. The ban's enforceability remains uncertain, with the Indian administration asking the banned apps to submit their data-sharing practices ahead of hearings at which the legality of the ban will be determined.
The possibility of further escalation from India (i.e., in 5G) must be understood in the context of its trade relations with China and its strong reliance on China in several sectors beyond technology. There is speculation that the ban may threaten China's rise as a global tech power, but several reports suggest it could inflict greater harm on India than on China. This is because India's trade deficit with China stands at nearly $50 billion, despite a fall in Chinese imports over the past two years. Thus, as emphasised in Part 1, decoupling from China, particularly at an economically vulnerable moment, will carry harsh consequences for India's struggling economy and its citizens, with prices of non-Chinese goods likely to be higher.
Despite this reality, there have been reports suggesting that a ban on Huawei's and ZTE's 5G equipment may be under consideration by Indian ministers. The news of a possible ban on Chinese telecom equipment has been met with opposition from the Cellular Operators Association of India (COAI). Its argument rests on separating geopolitical factors from commercial decisions, failing which higher costs (25-30%) would invariably be incurred on network gear and passed on to customers.
Conversely, India has recently witnessed Jio's announcement of a foray into the 5G technology space, in line with the 'Atma-nirbhar' (self-reliant) narrative in India. The current political climate may suggest India's intention to limit Huawei's access to 5G network infrastructure in the country. With its enviable list of strategic investors (Google, Qualcomm, Facebook, Intel) and emerging technological solutions (OpenRAN and cloud), Jio seemingly possesses the capability to produce wireless telecom equipment for 5G networks and associated 5G solutions. However, to make its mark, Jio must play catch-up with Huawei's technological capability. That is a tall order given the long timelines - nearly three years - and the challenges of using OpenRAN to develop competent software and hardware solutions that integrate with 4G networks and comply with 3GPP standards. The success or failure of its purported full stack of "home-grown 5G solutions" also remains contingent on government policy and the allocation of spectrum, slated for a year later. With no real testing and no proven use cases for its 5G solutions, the venture remains a prospective success and does not realistically imply Huawei's complete exclusion or Jio becoming an immediate competitor in global telecommunications equipment.
Path ahead for 5G
The developments of the past few months indicate that China faces several challenges to its dominant position in the global 5G race. However, a majority of these factors may not deliver the impact that they promise.
Despite widespread claims that the US clampdown on the supply of semiconductor chips to Huawei may hit the company hard, precedent suggests otherwise. The company has not shied away from drastic measures, including 24-hour workdays and other innovations, to overcome supply issues. Testament to this was Huawei's aforementioned 19% growth in 2019 despite sanctions, achieved by using loopholes to obtain essential components from American facilities abroad. Huawei enjoys the advantages of state subsidies and the full support of the CCP, which enabled its rise as a global telecom leader. China's unique economic and diplomatic position, including its control of integral supply chains, provides it with enough ammunition to counter growing anti-China sentiment and alliances, which remain largely restricted to a few countries of the developed world. Huawei's situation certainly remains precarious, but as the company has shown in the past - for example, with last year's sanctions - its resolve to take drastic measures, coupled with state support, has produced solutions to problems that seemed insurmountable.
In addition to these factors, Huawei is capable of providing the full stack of 5G technology, from receiver base stations to smartphones. Huawei continues to hold the largest number of commercial 5G contracts - 91 worldwide, more than half of them (47) in Europe. Further, its dominant position in standard-setting has ensured that it holds the largest number of patents, promising economic benefits despite the US restrictions. The economic benefits of 5G domination include the payouts for setting up base stations and network infrastructure, and the payments due to Huawei when other companies use its patented technology.
Even assuming that the steadily mounting challenges spell the end for Huawei, there remain reasons why China's 5G ambitions cannot be dismissed just yet. China's tech surge extends beyond Huawei, and the company's possible failure through bans and restrictions, though unlikely, would not prevent China from emerging as a technology superpower in the coming decades (also see here). With a population of 1.4 billion, and a market where key US companies such as Apple earn nearly a fifth of their sales, losing access to China would be more damaging for the rest of the world than the loss China would suffer from losing the US market. Chinese investments in digital infrastructure clearly outpace its competitors', and its recently concluded Two Sessions meeting produced an economic stimulus package of $506 billion focused on the development of 5G and digital infrastructure (in contrast to massive US stimulus spending not focused on infrastructure).
Beyond the question of US sanctions, India represents a major digital market in China's quest for global technological dominance. As the second most populous nation, with a relatively unsaturated market, a closed India could dampen Chinese ambitions. The Indian ban on Chinese apps has further bolstered the US's anti-China stance, with several statements highlighting an intent to ban TikTok. Combined with the UK's ban on Huawei, it may also serve as a precedent for other countries to take a stronger stance against Chinese tech companies. The caveat to any measures against China, however, is the inevitable pushback. The threat of Chinese retaliation - economic sanctions and trade retaliation (as threatened against the UK by Chinese officials), cyber attacks (as seen against Australia recently) and action at multilateral fora such as the WTO (against India) - will remain an important concern for any country contemplating such a step amidst the pandemic.
Additionally, China has become the first major economy to recover, posting 3.2% GDP growth in the second quarter of this year and already beating predicted growth figures after the dramatic fall in the first quarter. Even with growing anti-China rhetoric across countries, certain practical realities will shape policy decisions for most of them. As the world's second largest economy, a crucial link in a majority of global supply chains and a technological behemoth, China cannot realistically be isolated without plunging the global economy into worse suffering in the short run.
Thus most of Europe, recognising its own interests and unwilling to be pushed around by the US, has not ruled Huawei out. Disengaging from Huawei comes at heavy trade costs, which the US and UK may be willing to bear but which most countries are likely to weigh carefully before upsetting Beijing. With access to 5G becoming crucial, the developing world will likely rely on Huawei even if a handful of US allies ban the Chinese entity. That could trigger a split in the world's tech industry spilling over into all trade conducted between the West and China - something neither side would prefer.
Despite the seemingly unending challenges, acting against Huawei and China presents even more complex problems, and writing Huawei out of the future of 5G may be a premature call.
Comments on NITI Aayog Working Document: Towards Responsible #AIforAll
Fundamental Right to Privacy — Three Years of the Puttaswamy Judgment
Today marks three years since the Supreme Court of India recognised the fundamental right to privacy, but the ideals laid down in the Puttaswamy Judgment are far from being completely realised. Through our research, we invite you to better understand the judgment and its implications, and take stock of recent issues pertaining to privacy.
-
Amber Sinha dissects the Puttaswamy Judgment through an analysis of the sources, scope and structure of the right, and its possible limitations. [link]
-
Through a visual guide to the fundamental right to privacy, Amber Sinha and Pooja Saxena trace how courts in India have viewed the right to privacy since Independence, explain how key legal questions were resolved in the Puttaswamy Judgement, and provide an account of the four dimensions of privacy — space, body, information and choice — recognized by the Supreme Court. [link]
-
Based on publicly available submissions, press statements, and other media reports, Arindrajit Basu and Amber Sinha track the political evolution of the data protection ecosystem in India, on EPW Engage. They discuss how this has, and will continue to impact legislative and policy developments. [link]
-
For the AI Policy Exchange, Arindrajit Basu and Siddharth Sonkar examine the Automated Facial Recognition Systems (AFRS), and define the key legal and policy questions related to privacy concerns around the adoption of AFRS by governments around the world. [link]
-
Over the past decade, reproductive health programmes in India have been digitising extensive data about pregnant women. In partnership with Privacy International, we studied the Mother and Child Tracking system (MCTS), and Ambika Tandon presents the impact on the privacy of mothers and children in the country. [link]
-
While the right to privacy can be used to protect oneself from state surveillance, Mira Swaminathan and Shubhika Saluja write about the equally crucial problem of lateral surveillance — surveillance that happens between individuals, and within neighbourhoods, and communities — with a focus on this issue during the COVID-19 crisis. [link]
-
Finally, take a dive into the archives of the Centre for Internet and Society to read our work, which was cited in the Puttaswamy judgment — essays by Ashna Ashesh, Vidushi Marda and Bhairav Acharya that displaced the notion that privacy is inherently a Western concept, by attempting to locate the constructs of privacy in Classical Hindu [link], and Islamic Laws [link]; and Acharya’s article in the Economic and Political Weekly, which highlighted the need for privacy jurisprudence to reflect theoretical clarity, and be sensitive to unique Indian contexts [link].
Regulatory Road for Cryptocurrencies: Comments on the Report of the Inter-ministerial Committee on Virtual Currencies
Read full text here
Recommendations for EU cyber diplomacy
1. Key issues for EU cyber diplomacy
There are two key issues that the EU should take the lead on. Extra-territorial surveillance by several countries, in partnership with private actors, continues with aplomb. In Schrems II, the Court of Justice of the European Union has already dealt a decisive victory to civil society actors campaigning against US surveillance law and policy, and protected the rights of EU citizens in doing so. Channelising the rich human rights jurisprudence of the European Convention on Human Rights, the court was able to highlight how existing US law and policy do not comply with the ECHR's principle of proportionality. While the courts are an important avenue of resistance, other countries targeted by illegal and illegitimate surveillance often do not have judicial recourse or the clout to effectively counter surveillance practices. In line with accepted principles of international law, the EU must engage in diplomatic posturing calling for the reining in of extra-territorial surveillance, which includes surveillance-enhancing technologies, mass dragnet surveillance, and surveillance by private actors.
The second key issue is that of 'data sovereignty' - a recognition that, notwithstanding the significance of cross-border data flows, the ultimate responsibility for guaranteeing citizens' rights in the digital sphere lies with the state enforcing laws in that jurisdiction. Undoubtedly, this responsibility must be discharged in conjunction with the principles of international law, but the policy space itself should be sovereign, and should not be dictated by other states or private actors. This sovereign space includes the right to regulate private actors such as technology companies through taxation and anti-trust laws, and to impose on them key human rights obligations. It also includes an obligation to protect citizens' interests against foreign adversaries. Sovereignty must not be conflated with brazen technology nationalism that restricts foreign technology or investment in ways that harm the economic welfare or civil liberties of a state's own citizens.
Several jurisdictions, including the EU, are grappling with the precise contours of 'data sovereignty' and what it means in today's increasingly fractured geopolitical climate. However, having set the ball rolling with privacy-enhancing diplomacy across the world, the EU has an opportunity to work with several key partners, including emerging economies such as India, Brazil and South Africa, to ensure that these debates culminate in digital ecosystems that preserve the rule of law while also increasing digital accessibility and reducing inequality.
2. Multi-stakeholder coalitions
The EU has signed up for multilateral coalitions such as the Global Partnership on Artificial Intelligence, and EU countries have signed onto multi-stakeholder digital agreements such as the Paris Call for Trust and Security in Cyberspace. While such coalitions have been dismissed (incorrectly, I believe) as talking shops, efficient coalitions can attain key goals and promote core democratic values. Through these coalitions, the EU should look to attract as wide an array of stakeholders as possible - both states and private actors. However, that should happen only once the coalition has charted out its key principles, objectives and mechanisms of engagement. Attracting too many stakeholders before these are clearly charted out allows the agenda to be hijacked or diluted.
3. Engagement with civil society abroad
The EU has, to some extent, successfully engaged civil society actors from various parts of the world. The Closing the Gap conference held by EU Cyber Direct in July showcased quality scholarship from around the world and enabled a form of dialogue between participants that we do not often see. The dialogue we are having today is a critical form of engagement. The EU should also consider supporting and providing resources for transnational movements such as the #KeepItOn coalition, which advocates against internet shutdowns around the world, and other civil society consortiums upholding values the EU also believes in. Further, it is clear that European policy innovations - be it the GDPR or the European Data Strategy - deeply impact the future of global digital spaces. Therefore, robust consultative mechanisms should be deployed to ensure that academics and civil society participants from all over the world have a meaningful opportunity to shape these policies, keeping in mind the resources available to organisations, especially those in the global south, to do so.
17th September 2020.
(Remarks delivered via video-conferencing)
(Note: This write-up is not meant to be an exhaustive representation of all recommendations for EU cyber diplomacy but captures the statement made by Arindrajit at the Civil Society Forum)
Government’s COVID-19 Responses in the Context of Privacy: Part I
Introduction
The ongoing COVID-19 pandemic is one of the biggest health emergencies to hit the world in a long time. The health measures recommended by experts for the prevention and containment of the spread of COVID-19 include regular washing of hands, wearing masks, maintaining physical distance, isolation of suspected cases, etc. At the community level, case isolation and contact tracing have emerged as key elements of the comprehensive strategy to control the spread and transmission of COVID-19. To this end, the government of India launched a contact tracing app known as Aarogya Setu and encouraged (in certain cases with a tinge of intimidation) people to install and use it, in order to bolster its contact tracing measures.
Although a lot of attention has been given to the privacy issues related to the Aarogya Setu app, there has been comparatively less focus on the other measures taken by the Central and State governments for containment of COVID-19. Some of these measures include – stamping suspected cases with "Home Quarantine" using indelible ink (Maharashtra, Delhi, Karnataka), pasting notices outside the houses of individuals advised home quarantine (Delhi, Mumbai), establishing containment zones around the residences of COVID-19 positive patients, releasing the names and addresses of COVID-19 positive patients, etc. It is obvious that all these measures involve some degree of intrusion into the right to privacy of the individuals concerned. However (as mentioned above), there has been little public discussion of the privacy rights affected by these measures, especially when compared to the media attention garnered by the privacy issues related to the Aarogya Setu app. It is not easy to find the reasons behind most of the measures mentioned above in official government guidelines, as the guidelines themselves are often not publicly present or readily available online. Wherever such guidelines are available, such as the Central Government guidelines regarding containment zones, they do not contain any background as to why the government feels that such measures are needed.
While it is obvious enough that there are privacy issues involved in the government measures listed above, that does not necessarily mean that these measures violate the right to privacy of an individual. This is because, like any other legal right, the right to privacy is not absolute, and in certain cases it has to give way to other considerations. We shall therefore discuss the privacy implications of the different government actions in this series of posts, each of which shall analyze one specific type of government response to determine whether it complies with the principles protecting the right to privacy. In this particular piece we shall examine whether releasing the names of COVID positive patients violates their right to privacy.
The Law on Privacy
The right to privacy was not always recognized under Indian law; in fact, early Supreme Court decisions such as M.P. Sharma v. Satish Chandra, [AIR 1954 SC 30] and Kharak Singh v. State of U.P., [AIR 1963 SC 1295] specifically denied the existence of a right to privacy. The first semblance of judicial recognition of the right to privacy was the minority opinion in Kharak Singh, which was later adopted as the majority view in Gobind v. State of M.P., [(1975) 2 SCC 148] to uphold the existence of a right to privacy in India. However, because Gobind and other decisions recognizing the right to privacy, such as R. Rajagopal v. State of Tamil Nadu, [(1994) 6 SCC 632] and People's Union for Civil Liberties v. Union of India, [(1997) 1 SCC 301], were delivered by smaller Benches of the Supreme Court, a Nine Judge Bench was constituted in K.S. Puttaswamy v. Union of India, [(2017) 10 SCC 1] to authoritatively decide the existence and scope of the right to privacy. The Supreme Court in Puttaswamy not only categorically recognized the right to privacy, but also discussed in detail its origins and scope as well as the circumstances under which the right may be limited.
While a detailed analysis of the judgment and the law of privacy itself is beyond the scope of this paper, it is useful to recount the essence of the Puttaswamy judgment here. Since six different opinions were delivered in this case, for the sake of avoiding confusion we will discuss only the judgment delivered by Justice D.Y. Chandrachud, since it was delivered on behalf of four Judges (while the other opinions were delivered on behalf of individual Judges) and would therefore carry the most weight as a precedent. In this judgment it was held that privacy is a constitutionally protected right emerging not only from the right to life and personal liberty guaranteed under Article 21 of the Constitution, but also, in varying contexts, from the other facets of freedom and dignity recognized and guaranteed by other fundamental rights. It was further held that not only does privacy include at its core the preservation of human intimacies, it also connotes a right to be left alone. While the legitimate expectation of privacy may vary from the private to the public arena, it is not lost or surrendered merely because the individual is in a public place, since privacy is an essential facet of the dignity of the human being. Informational privacy has also been specifically recognized as a facet of the right to privacy.
Most importantly the Court held that like other fundamental rights the right to privacy is also not an absolute right. However an invasion of privacy has to be justified on the basis of a law which stipulates a procedure which is just, fair and reasonable. The Court specified a three-fold test for any action that violates the right to privacy: (i) legality, which postulates the existence of law; (ii) need, defined in terms of a legitimate state aim; and (iii) proportionality which ensures a rational nexus between the objects and the means adopted to achieve them. We will examine the various State actions in light of the principles above and analyse whether each of these actions satisfy the three-fold test laid down in Puttaswamy.
Analysis of Government Action
In a number of states, such as Uttar Pradesh, Orissa, etc., a daily list of the names, ages, addresses, etc. of individuals reported to be COVID positive is released and widely circulated. Although this practice may now have ceased in some places, previously such lists were released either through the conscious actions or the negligence of the state health authorities. The release of such information in the public domain clearly has privacy repercussions for the individuals concerned. Besides, these actions also seem to be at odds with the Guidelines issued by the Central Government titled "Addressing Social Stigma Associated with COVID-19", which categorically ask the public not to spread the names or identities of those affected or under quarantine, or their locality, on social media. The Madras High Court in a recent decision (K. Narayanan v. Chief Secretary, Government of Tamil Nadu) relied on the above Guidelines to reject a petition requiring the state government to publish the names of all COVID positive patients on a website. The prayer was made in order to warn the public to stay away from such COVID positive patients and thereby prevent the spread of the disease. However, this argument was categorically rejected by the Court on the ground that publishing the names may lead to law and order problems as well as social stigma for the patients and their families.
The Indian Medical Council (Professional Conduct, Etiquette and Ethics) Regulations, 2002 also require physicians to keep patient information confidential except in situations where such information needs to be used to prevent a healthy person from being exposed to a communicable disease (Indian Medical Council (Professional conduct, Etiquette and Ethics) Regulations, 2002, Chapter II, section 2.2). Even the World Health Organization’s Guidance on contact tracing requires that “ethics of public health information, data protection, and data privacy must be considered at all levels of contact tracing activities” and that safeguards must be put in place to guarantee privacy and data protection in accordance with the legal frameworks of the countries. The Supreme Court in the case of Mr. “X” vs. Hospital “Z”, [AIR 1999 SC 495] has also upheld the right to privacy of a patient regarding his medical records, except insofar as it may be necessary to disclose such information in order to protect third parties from harm.
As far as the legal framework of the right to privacy is concerned, the Supreme Court in Puttaswamy clearly states that the sphere of privacy stretches to those matters where there is a reasonable expectation of privacy and then specifically recognizes medical information as a category of data where such an expectation of privacy would exist. The specific example of medical records is used by the Supreme Court to illustrate the point on balancing a legitimate state interest in the private information of its citizens vis-à-vis the individual’s right to privacy. It was recognized that although medical records are generally protected by the right to privacy, the state may assert a legitimate interest in analyzing medical records to understand and deal with a public health epidemic to prevent a serious impact on the population. However the Court put a very important caveat saying that such information may be used by the state if it preserves the anonymity of the individual. Thus the state may assert a legitimate interest in acquiring and using health records of individuals to deal with an epidemic provided it preserves the anonymity of the individual.
The above illustration from the judgement seems to suggest that the State only has the power to retain the health records of individuals if their anonymity is preserved and does not have the power to make such records public at all. However this interpretation was implicitly rejected by the Orissa High Court in Ananga Kumar Otta v. Union of India and others, (Writ Petition (PIL) No.12430 of 2020, Order dated 16-07-2020). A PIL (Public Interest Litigation) was filed by an advocate asking the Court to issue directions to the state authorities to take action against those persons whose actions or negligence led to the disclosure of the names of COVID positive patients and also to ensure that such events do not happen in the future. The State of Odisha claimed that it had passed the Odisha COVID-19 Regulations, 2020 which provided that the name, exact address and phone number of persons under treatment should not be disclosed, except in special circumstances affecting public health and safety and with the approval of the State Government. Discussing (and implicitly upholding) the Regulations the Court refused to pass a blanket order preventing the disclosure of the names of COVID patients as the Regulations provided that there would be no indiscriminate disclosure, rather any disclosure of identity would only be in exceptional circumstances. The Court however clarified that any action of disclosure under these exceptional circumstances as per the Regulations would also have to satisfy the triple test laid down in Puttaswamy.
Conclusion
The legal position that emerges from the above analysis is that the names and addresses of COVID positive patients cannot ordinarily be released by the state authorities, as this would be violative of the right to privacy. However, since the right to privacy is not absolute and is subject to exceptions, there can be no absolute ban on releasing the names of COVID positive patients, and such an act may be allowed under exceptional circumstances, although no such circumstance has been considered or illustrated by any Court till date. The only scenario in which such disclosure was allowed was when the Odisha government wanted to release the names of deceased COVID warriors (government employees engaged in COVID containment activities) so as to bestow them with appropriate state honours during their funerals. However, even this was done only with the prior consent of the family members of the deceased.
Thus while the law leaves scope for situations where the names of COVID positive patients may be released by the state authorities, no specific examples of such situations have been listed out by the Courts. The only guidance given by Courts in this regard is that any such disclosure would have to satisfy the established exceptions to the right to privacy, more specifically the three-fold test laid down by the Supreme Court in Puttaswamy of legality, proportionality and legitimate state interest.
CIS digital policy organisation tracker
Introduction:
India's burgeoning enthusiasm, both in the adoption of emerging technologies and in the evolution of technology policy, has brought with it a range of non-state actors from academia, the private sector, and civil society who shape technology policy discourse in multiple ways. This diverse set of organisations can holistically improve, and plug gaps in, the framing of technology policy by the government, while also giving themselves an opportunity to influence the evolution of several policies for their strategic benefit. Any researcher, policy-maker, or industrialist therefore needs to map and understand this landscape.
This tracker was compiled only using publicly available information from the organisations’ websites. This exercise does not seek in any form to analyse or evaluate the work of any of the organisations and therefore does not include external commentary of any form on them. Any information on the tracker is self-declared information obtained from the organisation’s website.
Due to the lack of publicly available self-declared documentation, it is not possible to carry out a full-fledged study of the design and contours of technology policy lobbying in India. However, it is possible, through publicly available information, to compile a list of organisations that operate in this space and map the work they do.
This tracker is a first step. The intended audience is a wide range of individuals. It could be useful for graduates looking to work in this space, for government employees looking to understand the organisations that respond to public calls for consultations, and for the civil society space to understand the work of other organisations to foster collaborations and joint advocacy efforts.
The tracker is designed to be an evolving document that will be updated as and when changes occur in the Indian technology policy landscape. The objective of this tracker is not to analyze trends or make recommendations but to place all the information in an easily comprehensible and accessible fashion.
Given that the policy space in India is evolving and dynamic this list will never be an exhaustive one, and we are always looking for inputs. We apologise in advance for any errors in this tracker.
Credits:
Conceptualisation: Arindrajit Basu, Aman Nair, Sapni GK
Compilation of tracker: Sapni GK, Aman Nair, Elizabeth Dominic, Mitali Bhasin
Introduction written by: Arindrajit Basu
Review by: Aman Nair, Shweta Reddy, Akash Tahenguira
Design by: Akash Sheshadri, Pranav MB
Acknowledgment:
Thanks to Amba Kak for an initial conversation that spurred this. Thank you also to Udbhav Tiwari for providing initial feedback.
Mapping Web Censorship & Net Neutrality Violations
For over a year, researchers at the Centre for Internet and Society have been studying website blocking by internet service providers (ISPs) in India. We have learned that major ISPs don’t always block the same websites, and also use different blocking techniques. To take this study further, and map net neutrality violations by ISPs, we need your help. We have developed CensorWatch, a research tool to collect empirical evidence about what websites are blocked by Indian ISPs, and which blocking methods are being used to do so. Read more about this project (link), download CensorWatch (link), and help determine if ISPs are complying with India’s net neutrality regulations.
- Using information from court orders, user reports, and government orders, and running network tests from six ISPs, Kushagra Singh, Gurshabad Grover and Varun Bansal presented the largest study of web blocking in India. Through their work, they demonstrated that major ISPs in India use different techniques to block websites, and that they don’t block the same websites (link).
- Gurshabad Grover and Kushagra Singh collaborated with Simone Basso of the Open Observatory of Network Interference (OONI) to study HTTPS traffic blocking in India by running experiments on the networks of three popular Indian ISPs: ACT Fibernet, Bharti Airtel, and Reliance Jio (link).
- For The Leaflet, Torsha Sarkar and Gurshabad Grover wrote about the legal framework of blocking in India — Section 69A of the IT Act and its rules. They considered commentator opinions questioning the constitutionality of the regime, whether originators of content are entitled to a hearing, and whether Rule 16, which mandates confidentiality of content takedown requests received by intermediaries from the Government, continues to be operative (link).
- In the Hindustan Times, Gurshabad Grover critically analysed the confidentiality requirement embedded within Section 69A of the IT Act and argued how this leads to internet users in India experiencing arbitrary censorship (link).
- Torsha Sarkar, along with Sarvjeet Singh of the Centre for Communication Governance (CCG), spoke to Medianama delineating the procedural aspects of section 69A of the IT Act (link).
- Arindrajit Basu spoke to the Times of India about the geopolitical and regulatory implications of the Indian government’s move to ban fifty-nine Chinese applications from India (link).
Comments to National Digital Health Mission: Health Data Management Policy
Read the full set of comments here.
Investigating Encrypted DNS Blocking in India
This report was edited and reviewed by Gurshabad Grover and Simone Basso.
The Domain Name System (DNS) translates human-readable web addresses, like ‘cis-india.org’, into machine-readable IP addresses, such as ‘172.67.211.18’, that the routers that comprise the internet can understand and direct traffic to. This basic function of the web has historically operated unencrypted — allowing intermediaries that facilitate access to the internet, like coffee shop Wi-Fi operators and internet service providers (ISPs), to view what websites we visit. This gap in privacy is being exploited by both public and private entities to censor access to the web and surveil our browsing habits.
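To make this concrete, the short Python sketch below performs an ordinary, unencrypted lookup; the hostname is only an example. On most systems the query is sent in cleartext to the resolver configured by the ISP or network operator, so any on-path intermediary can see which domain is being looked up.

import socket

# Resolve a hostname the ordinary way. On most systems this query travels to the
# configured resolver as plain, unencrypted DNS, so on-path intermediaries (such
# as the ISP) can observe which domain is being looked up.
addresses = {info[4][0] for info in socket.getaddrinfo("cis-india.org", 443)}
print(addresses)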
New internet protocols are being deployed that attempt to encrypt connections to DNS providers. Through the use of these methods, the contents of DNS queries are hidden from network intermediaries and eavesdroppers and are only visible to the DNS provider chosen by an individual or a default one assigned to them by their ISP or web browser. While there are other ways of censoring web traffic, encrypted DNS protocols prevent censors from using their older DNS-based methods. In response to these new protocols, states like Iran are trying to block them entirely, to maintain the status quo.
In this report, we investigate and find that encrypted DNS protocols, specifically the DNS over HTTPS (DoH) and DNS over TLS (DoT) standards, are accessible through major Indian ISPs, and describe the technical details of our testing methodology.
Test Setup
We compiled a list of publicly accessible DNS resolvers that support the encrypted DoH and DoT protocols and tested access to them from four popular Indian ISPs, namely Airtel, Atria Convergence Technologies (ACT), Reliance Jio, and Vodafone. Together, these cover a large majority (roughly 95%, as reported by TRAI) of the Indian internet subscriber base.
To test connectivity, we used the Open Observatory of Network Interference (OONI) probe engine (version 0.18.0), specifically the 'miniooni' command-line interface tool bundled with it. Instructions on how to install it can be found here.
Test methodology
To test whether DNS providers are reachable over encrypted communication protocols, the tool performs a DNS query using the specified protocol (either DoH or DoT). If the connection is successful and we receive a response from the DNS server, we conclude that the protocol is not blocked. Failing to query a specific DNS server over DoT or DoH does not necessarily mean that it has been censored. To understand whether a failure could be censorship rather than a transient error, we would correlate measurements from many users within the same ISP and country, and use an alternate network, such as a VPN, to access the possibly blocked service from another country.
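As a minimal illustration of this kind of check (and not the OONI probe engine's own implementation), the Python sketch below issues a single DoH query against Cloudflare's public JSON endpoint - an arbitrary example resolver - and reports success or failure. As noted above, a failure on its own is not evidence of blocking.

import requests

DOH_ENDPOINT = "https://cloudflare-dns.com/dns-query"  # example resolver

def doh_query(name: str) -> bool:
    """Return True if the DoH resolver answers an A-record query for `name`."""
    try:
        resp = requests.get(
            DOH_ENDPOINT,
            params={"name": name, "type": "A"},
            headers={"accept": "application/dns-json"},
            timeout=10,
        )
        resp.raise_for_status()
        print("DoH answer:", [a.get("data") for a in resp.json().get("Answer", [])])
        return True
    except (requests.RequestException, ValueError) as exc:
        # A failure is not proof of censorship; it must be correlated with
        # measurements from other users, networks and countries.
        print("DoH query failed:", exc)
        return False

if __name__ == "__main__":
    doh_query("cis-india.org")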
In Iran, where DNS over TLS is reported to be blocked, it was found that censorship occurs by interfering with the TLS handshake. Traffic corresponding to DNS over TLS is easier to identify and block as it communicates over a unique port and a distinctive ALPN, while DNS over HTTPS traffic is harder to block effectively as the HTTPS standard is widely used on the web and interference would lead to collateral censorship.
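The attributes that make DoT traffic easy to single out are visible as soon as such a connection is opened. The sketch below (an illustration, not part of our test tooling; the resolver is an example) connects to a public DoT server on TCP port 853 and offers the registered "dot" ALPN token - both of which an on-path observer can see before any DNS data is exchanged.

import socket
import ssl

# DNS over TLS (RFC 7858) runs on TCP port 853 and typically negotiates the
# ALPN token "dot". The destination port and the cleartext ClientHello are both
# visible to on-path observers, which makes DoT traffic straightforward to identify.
context = ssl.create_default_context()
context.set_alpn_protocols(["dot"])

with socket.create_connection(("1.1.1.1", 853), timeout=10) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname="cloudflare-dns.com") as tls:
        print("TLS established:", tls.version(), "ALPN:", tls.selected_alpn_protocol())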
Results
The tests were run on each ISP in early October 2020 using the following command:
$ ./miniooni --file=./resolvers.txt dnscheck
The raw results in the OONI data format can be found here. A summary of the observations is as follows:
- All DNS resolvers tested were accessible over both DoH and DoT protocols from all ISPs tested.
- IPv6 addresses were not reachable through ACT broadband. This limitation was independently confirmed using the Test-IPv6 tool and has also been discussed on Reddit.
Limitations
As previous research by the Centre for Internet and Society indicates, censorship practices vary across ISPs. While we find no evidence of encrypted DNS protocols being blocked on these four major ISPs, other ISPs may be implementing such blocking.
The second limitation is that these tests were run on a handful of connections from a couple of locations (Delhi and Bangalore). Web censorship mechanisms may vary by location within the country.
Finally, the results only indicate the accessibility of encrypted DNS resolvers at a particular point in time. We have not put in place any continuous monitoring of the censorship of encrypted DNS protocols.
Conclusion
Broadly, the legal framework of web censorship in India allows the Government and courts to ask ISPs to block access to online resources. The precise technical details of how to implement the censorship are left to the ISPs.
Because of net neutrality obligations, ISPs are not supposed to arbitrarily block resources. Coupled with the fact that the use of encrypted DNS protocols is not related to any particular content/website deemed unlawful, it might be expected that ISPs are not blocking encrypted DNS protocols. However, previous evidence of arbitrary blocking by ISPs motivated us to study whether any major ISP was blocking the use of these protocols or preventing access to any third-party DNS server.
As part of this exercise, we also contributed code to the OONI probe engine, making it easier for other researchers to test connectivity to multiple DNS providers.
CIS Report on Legal and Policy Implications of Autonomous Weapons Systems
Link to full report: https://cis-india.org/internet-governance/legal-and-policy-implications-of-autonomous-weapons-systems
Wars have been a part of human existence from the very beginning, but the evolution of civilization has led to the evolution of wars. As a society, our discourse is now centred on how this new generation of wars is best fought, rather than on whether to fight them at all. This perceived inevitability of war has led countries to develop new means and methods of warfare, for the inevitability of war is only acceptable when it is accompanied by the inevitability of victory. Autonomous Weapon Systems (AWS), or Lethal Autonomous Weapons Systems (LAWS), have in recent times sparked a global debate regarding what is being called the future of technology: artificial intelligence. Against the backdrop of this revolution in warfare, AWS are being developed by certain countries to gain an edge over others, forcing the rest to participate in the arms race of the 21st century in order to prevent the asymmetric development of warfare. The international community must now contemplate the legal, moral and ethical implications of further developing existing automated weapons and giving them more autonomy than ever before.
It is to allay such concerns that a Group of Governmental Experts (GGE) was convened under the United Nations Convention on Certain Conventional Weapons (UN CCW) in December 2016, clearly demonstrating the global interest in the issue at hand. The Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects, or the UN CCW, was established with the aim of restricting weapons considered to cause unnecessary suffering and to impact civilians disproportionately and indiscriminately.
The paper is divided into four chapters.
Chapter I, authored by Anoushka Soni, defines and differentiates between certain key terms imperative for a better understanding of autonomous weapon systems in all their technicalities. The chapter also provides a broad overview of differences in existing state practice by reviewing the lack of a universal definition of autonomous weapons.
Chapter II, also authored by Anoushka Soni, analyses autonomous weapons from the perspective of international humanitarian law. It first considers the prima facie illegality of autonomous weapons, then focuses on their lawful use with regard to the principles of distinction, proportionality and military necessity, and concludes with a normative look at the way forward.
Chapter III, authored by Elizabeth Dominic, takes up the question of accountability and redress, and evaluates models of criminal and civil liability for cases where autonomous weapons systems go wrong.
Chapter IV, authored by Elizabeth Dominic, evaluates the role of the private sector in the development, trade and policy frameworks on autonomous weapons systems around the world.
Reclaiming AI Futures: Call for Contributions and Provocations
The Wolf in Sheep's Clothing: Demanding your Data
This piece was originally published in The Economic Times Telecom, on 8 September, 2020.
The increasing digitalization of the economy and the ubiquity of the Internet, coupled with developments in Artificial Intelligence (AI) and Machine Learning (ML), have given rise to transformational business models across several sectors. These developments have changed the very structure of existing sectors, with a few dominant firms straddling many of them. The position of these firms is entrenched by the large amounts of data they hold, the sophisticated algorithms they use to deliver highly targeted services and content, and their global reach.
Such data-based network businesses are generally multi-sided platforms, subject to network effects and winner-takes-all dynamics, which often makes traditional competition regulation inadequate. In addition, there has been concern that such companies hurt competition, as they own large amounts of data collected globally, the very basis on which new services are predicated. Also, since users are reluctant to share their data across multiple platforms, new companies find it very challenging to emerge. Several of the large companies are of US origin. Regions and countries such as the EU, the UK and India are concerned that while these companies benefit from the data of their citizens and their devices, SMEs and other companies in their own countries find it increasingly difficult to remain viable or achieve scale. With the objective of supporting enterprises, including SMEs, in their own countries, Europe, the UK and India are at different stages of data regulation initiatives.
In India, the Personal Data Protection (PDP) Bill, 2019 deals with the framework for collecting, managing and transferring the personal data of Indian citizens, including mandating the sharing of anonymized data of individuals and of non-personal data for better targeting of services or policy making. In addition, the Report by the Committee of Experts (CoE) on Non-Personal Data (NPD) came up with a framework for regulating NPD. Since the NPD Report is the more recent development, this article analyzes some aspects of it.
According to the CoE, non-personal data can be of two types: first, data or information that was never about an individual (e.g. weather data); second, data or information that was once related to an individual (e.g. a mobile number) but has ceased to be identifiable because certain identifiers were removed through the process of ‘anonymisation’. However, it may be possible to recover personal data from such anonymized data, and the distinction between personal and non-personal data is therefore not clean. In any case, the PDP Bill, 2019 deals with personal data. If the CoE felt that some aspects of personal data (including anonymized data) were not adequately dealt with, it should have worked to strengthen that framework. The current approach of the CoE is bound to create confusion and overlapping jurisdiction. And since anonymized data is required to be shared, there are disincentives to anonymization, creating greater risk to individual privacy.
The report defines a new class of business based on a “horizontal classification cutting across different industry sectors”. This refers to any business that derives “new or additional economic value from data, by collecting, storing, processing, and managing data” above a certain threshold of data collected or processed, to be defined by the regulatory authority outlined in the report. The CoE also recommends that “Data Businesses will provide, within India, open access to meta-data and regulated access to the underlying data” without any remuneration. Further, “By looking at the meta-data, potential users may identify opportunities for combining data from multiple Data Businesses and/or governments to develop innovative solutions, products and services. Subsequently, data requests may be made for the detailed underlying data”.
With increasing digitalization, almost every business today is a data business. The problem with such categorization lies in the definition of thresholds. Even a small video-sharing or AR/VR app is likely to collect, store, process and transmit more data, by volume, than, say, a mid-sized bank. Further, with the increasing embedding of IoT in various aspects of our lives and businesses (smart manufacturing, logistics, banking, etc.), the amount of data captured by even small entities can be huge.
The private sector, driven by profitability, identifies innovative business models, risks capital, and finds unique ways of capturing and melding different data sets. Such innovation is necessary to sustain economic growth. The private sector would also like legal protection over these aspects of its businesses, including the unique IPR that may be embedded in its processing of data or its business processes. Mandating such onerous sharing requirements, as the CoE does, is going to kill any private initiative. Any regulatory regime must balance the need to provide a secure environment that protects incumbents’ data with the need to make data available to SMEs and other businesses.
Metadata provides insight into a company’s databases and processes, which are a source of competitive advantage for any company. Metadata is not without context. Under the proposal, the basis for demanding such disclosure is a purpose that the proposed NPD Regulator would evaluate. In practice, purposes are open to interpretation, and the structure of the appeal mechanism is going to stall any such sharing. Would such sharing mandates not interfere with existing Intellectual Property Rights, or with the freedom to contract? Any innovation could easily be made available to a competitor that front-ends itself with a start-up; mandating that such data be made available would not be fair. Further, how would the NPD Regulator even ensure that such data is used for the purpose (which the proposed regulator is supposed to evaluate) for which it is sought? In Europe, where such data-sharing mandates are being considered, the focus is on public data. For private entities, sharing is largely based on voluntary contributions. Compulsory sharing is mandated only in restricted situations where market failure is not addressed through the Competition Act, and provided the legitimate interests of the data holder and existing legal provisions are taken into account.
Further, the compliance requirements for such Data Businesses are very onerous and make a mockery of the government’s own “minimum government” framework. The CoE recommends that all Data Businesses, whether government, NGO, or private, are “to disclose data elements collected, stored and processed, and data-based services offered”. As if this were not enough, the CoE further recommends that “Every Data Business must declare what they do and what data they collect, process and use, in which manner, and for what purposes (like disclosure of data elements collected, where data is stored, standards adopted to store and secure data, nature of data processing and data services provided). This is similar to disclosures required by pharma industry and in food products”. Such disclosures are necessary in those industries because the companies deal with critical aspects of human life. But are such requirements necessary for all activities and businesses? As long as organizations collect and process data legally, within sectoral regulation, why should such information have to be “reported”? Such bureaucratic processes and reporting requirements will only burden existing legitimate businesses and give rise to a thriving regulatory licence raj.
Further questions arise: how is any compliance agency going to make sure that all the underlying metadata is made available in a timely manner? As companies respond to a dynamic environment, their analysis and analytical tools change, and so does the metadata. This inherent aspect of business raises the question: at what point in time should companies make their metadata available? How will compliance be monitored?
Conclusion: The CoE needs to create an enabling and facilitating environment for data sharing. The incentives for different types of entities to participate and contribute must be recognized. Adequate provisions for the risks and liabilities arising out of data sharing need to be thought through. National initiatives on data sharing should not create an onerous reporting regime, as envisaged by the CoE, even if digital.
DISCLAIMER: The views expressed are solely of the author and ETTelecom.com does not necessarily subscribe to it. ETTelecom.com shall not be responsible for any damage caused to any person/organisation directly or indirectly.
CIS Comments on Draft ODR Report
This submission is a response by the researchers at CIS to the report “Designing the Future of Dispute Resolution: The ODR Policy Plan for India” prepared by the NITI Aayog Expert Committee on ODR.
We have put forward the following comments based on our analysis of the draft report.
- Structural considerations with ODR itself:
  - The report classifies ODR as a singular entity rather than a group of technologies that require different approaches.
  - Currently, ODR still has a number of functional limitations, such as difficulty in accounting for nuance, the limitations of algorithms, and the vulnerability of the systems.
  - The report also fails to address how the psychological limitations involved with ODR, such as those relating to the communication, perception and preferences of parties, will be addressed when ODR is implemented at the national level.
- Socio-economic considerations when transitioning to nationwide ODR:
  - A lack of access to digital infrastructure currently limits ODR’s effectiveness.
  - The projections made in the report rely disproportionately on market forces while suggesting a lack of mandated standards.
- Privacy and security concerns with moving to ODR.
- Need for greater clarity on the oversight and regulation of ODR platforms:
  - An independent sectoral regulator is a necessity.
- Other comments:
  - The opt-out model proposed must be changed to allow for the option of ADR as well.
The PDP Bill 2019 Through the Lens of Privacy by Design
Background
The Personal Data Protection (PDP) Bill, 2019 was introduced in the Lok Sabha on December 11, 2019 by the Minister of Electronics and Information Technology. The Bill aims to provide for the protection of the personal data of individuals, and establishes a Data Protection Authority for the same [1]. The PDP Bill, 2019 contains several clauses that have implications for the visual design of digital products. These include the specific requirements for communication of notice and consent at various stages of the product. The Bill also introduces the Privacy by Design policy. Privacy by Design (PbD), as a concept, was proposed by Ann Cavoukian in the 1990s, with the purpose of approaching privacy from a design-thinking perspective [2]. She describes this perspective as holistic, interdisciplinary, integrative, and innovative. The approach suggests that privacy must be incorporated into networked data systems and technologies by default [3]. It challenges the practice of enhancing privacy as an afterthought. It expects privacy to be a default setting, and a proactive (not reactive) measure that would be embedded into a design in its initial stage and throughout the life cycle of the product [4]. While PbD is a conceptual framework, its application can change the way digital platforms are created and the way in which people interact with them. From devising a business model to making technological decisions, PbD principles can make privacy integral to the processes and standards of a digital platform.
The PDP Bill states that data fiduciaries are required to prepare a Privacy by Design policy and have it certified by the Data Protection Authority. According to the Bill, the policy would contain the managerial, organisational and business practices and the technical systems designed to anticipate, identify and avoid harm to the data principal [5]. It would mention whether the technology used in the processing of personal data is in accordance with the certified standards. It would also set out the ways in which privacy is protected throughout the stages of processing of personal data, and how the interest of the individual is accounted for in each of these stages. Once certified by the Data Protection Authority, the data fiduciaries are also required to publish this policy on their website [6]. This forces data fiduciaries to envision privacy as a fundamental requirement and not an afterthought. Such a policy would have a huge impact on the way digital platforms are conceptualised, both from the technological and the design point of view. The adoption of this policy by digital platforms would enable people to know whether their privacy is protected by the companies, and what steps are being taken for this purpose. Besides the explicit Privacy by Design policy, the PDP Bill, 2019 also lays down provisions for data minimisation, the establishment of the Data Protection Authority (DPA), and the development of a consent framework. These steps are also part of the Privacy by Design approach.
This paper evaluates the PDP Bill based on the Privacy by Design approach. The Bill’s scope includes both the conceptual and technological aspects of a digital platform, as well as the interface aspect that the individual using the platform faces. The paper will hence analyse how the PbD approach is reflected in both these aspects. At the conceptual level, it will look at the data ecosystem that the Bill unwittingly creates, and at the interface level, it will critically analyse the Bill’s implications for the notice and consent communication in digital products. This includes the several points of communication, or touchpoints, between a company and an individual using their service, as dictated by the Bill, and how they would translate into visual design. Visual design forms an integral part of digital platforms. It is the way in which the platforms interact with individuals. The choices made by individuals are largely driven by the visual structuring and presentation of information on these platforms. Presently, the interface design of several platforms is being used to perpetuate unethical data practices in the form of dark patterns. Dark patterns are deceptive user interface interactions, designed to mislead or trick users into doing something they don’t want to do [7]. The design of the notice and consent touchpoints can significantly influence the enforcement of this Bill, and how it benefits individuals. Moreover, digital platforms may technically follow the regulations but can still be manipulative through their interface design. Thus, the role and accountability of design becomes crucial in the interpretation of the data protection regulations.
The full paper can be read here.
[1] https://prsindia.org/billtrack/personal-data-protection-bill-2019
[2] https://iab.org/wp-content/IAB-uploads/2011/03/fred_carter.pdf
[3] https://iab.org/wp-content/IAB-uploads/2011/03/fred_carter.pdf
[4] https://www.smashingmagazine.com/2019/04/privacy-ux-aware-design-framework/
[5] http://164.100.47.4/BillsTexts/LSBillTexts/Asintroduced/373_2019_LS_Eng.pdf
[6] https://sflc.in/key-changes-personal-data-protection-bill-2019-srikrishna-committee-draft
[7] https://uxdesign.cc/dark-patterns-in-ux-design-7009a83b233c
Intermediary liability and Safe Harbour: On due diligence and automated filtering
This blogpost was authored by Gurshabad Grover and Anna Liz Thomas. It was first published at Law and Other Things.
Introduction
India’s intermediary liability regime flows from section 79 of the Information Technology Act, 2000 (the “Act”), a provision that exempts intermediaries from liability for third party content on their service, as long as certain conditions are fulfilled. Under Section 79(2)(c) of the Act, one of the conditions for an intermediary to claim safe harbour (immunity from liability for third party content) is that it:
“observes due diligence while discharging his duties under this Act and also observes such other guidelines as the Central Government may prescribe in this behalf.” (emphasis is authors’)
This post discusses this ‘due diligence’ obligation with a focus on its scope and its relationship with the intermediary guidelines issued under the Act. We primarily analyse the arguments made by T. Prashant Reddy in Back to the Drawing Board: What should be the new direction of the intermediary liability law?, (“the paper”) which was published last year in the NLUD Journal of Legal Studies.
While the paper aims to broadly engage with the question of how India’s intermediary liability regime should be reformed, this post only focuses on two of the arguments that form the basis of the paper. First, the paper suggests that ‘due diligence’ should be interpreted as a separate requirement from the intermediary guidelines (“the 2011 rules”) issued under the law. The second argument builds on this and argues that this due diligence requirement could be understood to mean that intermediaries should engage in proactive identification and filtering of unlawful content.
We explore the two questions in the same order, and then finally explore alternative interpretations of the due diligence requirement. We argue that (1) there are multiple ways to interpret the provision, but there may be merit in considering the ‘due diligence’ requirement as distinct from the guidelines; and that (2) even if it is a separate requirement, proactive filtering of content by intermediaries is unconstitutional, and thus cannot be the sort of ‘due diligence’ the law expects from intermediaries.
Is ‘due diligence’ a separate requirement?
Section 79 of the IT Act has long been criticised for its vague and poor drafting, including on whether the entire clause requiring ‘due diligence’ was mandatory at all. The paper only suggests that ‘due diligence’ is a separate requirement from the guidelines, with the interpretation being supported by two facts.
First, the paper points to the ‘and’ in Section 79(2)(c) that separates the obligation to conduct due diligence, and the obligation to observe the guidelines prescribed by the Central Government. This would indicate that the two obligations are to be separately fulfilled. We should point out that reading the statute in such a way does mean that the two obligations are distinct, but it could also imply that both ‘due diligence’ and ‘other guidelines’ can be notified by the Government. In fact, we think that evidence of the claim that ‘due diligence’ is a separate self-contained obligation is actually found in the word ‘also’ that succeeds ‘and’. If we interpret the provision in a way that the due diligence is only what is notified in the rules, the term ‘also’ ends up having no real significance. The rule of surplusage in interpretation states that “every word and every provision is to be given effect”, and that “none should be ignored.” Thus, the term ‘also’ can be understood as intentionally demarcating the ‘due diligence’ obligation and the one that obligates intermediaries to comply with the rules notified under the provision.
The paper further argues that the second fact supporting this interpretation is in the legislative history of section 79 of the Act. Section 79, as it presently exists, was the result of the amendments to the Act passed in 2008. The phrase ‘due diligence’ was retained in the text of the provision on the insistence of the Standing Committee which submitted a report on the Bill. The Committee had contextualized the due diligence requirement in relation to the need for an explicit provision requiring the blocking and elimination of objectionable content through technical mechanisms.
However, the paper does not consider the fact that the Committee had also specified that the reason it wanted ‘due diligence’ in the provision was because in their opinion, “removing an enabling provision which already exists in the principal Act and leaving it to be taken care of by the possible guidelines makes no sense”. From the perspective of the Standing Committee, the due diligence provision was an enabling one, i.e., primarily meant to allow the government to make guidelines in that regard. In an enabling provision like this one, retaining the term ‘due diligence’ and adding that intermediaries have an obligation to observe ‘such other’ guidelines curbed the possibility of excessive delegation, by ensuring that any guidelines prescribed specifically concern due diligence obligations.
Note that the judgement of the Andhra Pradesh High Court in Google India Private Limited vs M/S Visaka Industries Limited in November 2016 may support the paper’s argument that the ‘due diligence’ obligation is distinct from the guidelines. In the absence of any explicit definition of ‘due diligence’ in the IT Act, the Court cited precedent that relied on dictionary meanings of due diligence and concluded that, in order to meet the requirement, an intermediary would have to prove that it “had acted as an ordinary reasonable prudent man”, which would be a “question of fact.” Perhaps the Delhi High Court was clearest in the matter in Christian Louboutin v Nakul Bajaj when it stated that “the ‘due diligence’ provided in the Act, has to be construed as being broad and not restricted merely to the guidelines themselves.”
On the other hand, like the paper notes, there are judgments like MySpace Inc. vs Super Cassettes Industries Ltd. by the Delhi High Court, which have not considered the specific question, but concluded Rule 3 of the 2011 rules to completely encapsulate ‘due diligence’. This is, of course, because of the language in the rules. While Section 79(2)(c) of the IT Act might suffer from some vagueness, Rule 3 of the 2011 rules is unequivocal in that it seeks to define the “due diligence to be observed by the intermediary.” As Chinmayi Arun notes, the notification of the rules is seen as serving to clarify the meaning of the requirement. It is no surprise that Rule 3 has become the traditionally-understood standard for fulfilling the ‘due diligence’ requirement under the law.
Overall, despite the lack of a crystal-clear answer, we agree with the paper that there is enough merit in seriously considering the ‘due diligence’ as distinct from the guidelines. The paper has rightly brought up an interpretation that needs more attention in literature and cases on intermediary liability in India.
Interpreting ‘Due diligence’
It thus becomes important to question what this ‘due diligence’ will entail for intermediaries if (and/or when) it is entirely distinct from the rules. The paper points to how the Committee had contextualized the due diligence requirement as a need for certain intermediaries to block and eliminate objectionable content through technical mechanisms. Using this frame of reference, Reddy suggests that this ‘due diligence’ requirement may mean that intermediaries are obligated to proactively filter objectionable content.
However, it is pertinent to note that the Standing Committee had originally intended that the ‘due diligence’ requirement be reinstated as a prerequisite for giving immunity to a specific kind of intermediary: online marketplaces and online auction sites. Their suggestions for automated tools for filtering content should also be understood then as targeted at these specific intermediaries. Therefore, there is nothing in the legislative history of Section 79(2)(c) that suggests that measures such as automated content filtration were even considered as obligations for all categories of intermediaries.
More importantly, as many have pointed out in the context of the proposed amendments to the intermediary guidelines, proactive filtering of content would be unreasonable and its application definitely an unconstitutional restriction on speech.
First, such a requirement would suffer from vagueness and overbreadth. There are lots of “automated tools” that can be used to filter content (keyword detection, hash-based content detection, machine learning, etc.), each with their merits and demerits. Even if delegated legislation were to provide clarity to the term, such a broad interpretation of ‘due diligence’ would not be consistent with the ‘case-by-case’ evaluation that is the usual understanding of the term. Apart from the fact that all forms of automated filtering have their inherent limitations, it would be impossible for certain kinds of intermediaries, like those that deal with end-to-end encrypted communications to implement such a requirement.
The determination of whether certain acts are illegal is a public function, left to the government and the courts. A broad proactive filtering obligation on intermediaries is state censorship by proxy, and worse yet, a form of privatized law enforcement. As a matter of principle, what the state cannot do directly, it cannot do indirectly. For such forms of censorship, Prof. Seth Kreimer has elucidated in detail the great dangers of “collateral damage” that come from placing restrictions on intermediaries (if not the speaker). On its face, it appears less egregious than a “frontal attack” on expression by the state, but it can have the same effects.
To understand the impact of such obligations in the context of intermediary liability, consider the even lower bar of requiring intermediaries to entertain third-party takedown notices. There is evidence from multiple jurisdictions to suggest that even third-party notice-and-takedown systems make intermediaries over-censor in order to avoid liability. When such a system existed in India, before the Supreme Court’s judgment in Shreya Singhal v. Union of India, a study by Rishabh Dara found that a majority of the intermediaries that were sent notices were over-censoring by complying with clearly frivolous takedown notices. The requirement of proactive filtering will undoubtedly cause a much amplified, unjust and disproportionate harm to the exercise of the right to freedom of expression. Furthermore, Shreya Singhal has confirmed that the ‘knowledge’ of content to be taken down must only be construed as being brought to the intermediary through the medium of a court order. It therefore becomes difficult to reconcile Shreya Singhal with automatic filtration being mandated by law, since this would suggest that such ‘knowledge’ may be brought to the intermediary by way of an algorithm (whether or not in conjunction with human inspection), rather than a court order.
Rather than meeting T. Prashant Reddy’s aim, such a reading would also concentrate more powers in the hands of private companies like Facebook and Google that already exert an undue influence in the moderation of the online public sphere.
Instead of a draconian form of ‘due diligence’, it is important to consider the range of possibilities that could inform the obligation. For instance, the UN Guiding Principles on Business and Human Rights require business enterprises to carry out human rights due diligence on a regular basis, to identify, prevent, mitigate and account for how they address their impacts on human rights. Businesses, under these principles, have differentiated responsibilities based on the size of the business, the risk of severe human rights impacts, and the nature and context of their operations. Once again, in this case, each intermediary’s performance of its due diligence obligation would be assessed on a case-to-case basis. Another interpretation could be the incorporation of safeguards in the takedown process, as Article 19 has suggested. This would ensure that companies are transparent in their decision-making, and that users are able to challenge takedown decisions made by companies.
Conclusion
For the long-term reform of governance of online platforms, it is important to keep in mind that this is one of the many problems in section 79 of the IT Act. As the paper points out, the provision has been long criticised for having a “one-size-fits-all” approach to regulation, where internet service providers and social media companies are treated similarly when it comes to their conditions for exemption from liability. The conditions for exemption from liability in the provision contribute to confusion around their application to good faith content moderation and curation of newsfeeds.
There is also little in the law that advocates for transparency and fairness in the moderation of online content, which is the area where large and closed intermediaries act most as ‘gatekeepers’ and influence the public sphere. Unfortunately, while the paper recognises these issues, it goes on to advocate for proactive and automated content filtering, which is likely to concentrate even more power in the hands of big tech companies.
There are a host of problems that contribute to the misgovernance of online platforms, including an ineffective competition law framework, the lack of consumer protection standards applicable to most ‘free’ online services, and the opacity with which community standards are applied. A step towards addressing these issues would be a clearer and comprehensive intermediary liability legislation that recognises the role of intermediaries in facilitating the right to freedom of expression, holds them accountable to users, and dismantles unfair concentration of power in commercial interests.
The authors would like to thank Torsha Sarkar and the Editorial Board at Law and Other Things for their comments and suggestions.
Disclosure: CIS has been a recipient of research grants from Facebook and Google.
Comments on Data Empowerment and Protection Architecture
Read the full set of comments here
Government COVID-19 Responses in the Context of Privacy: Part II
This is the second part in a two part series of posts analysing the privacy implications of the state’s responses to COVID-19. In the previous post we discussed the privacy implications of releasing the names of COVID positive patients by the governments of certain states. In this piece we shall discuss the privacy implications of the following three government actions:
(i) putting stamps on the hands of individuals who are required to quarantine themselves (Maharashtra, Delhi, Karnataka);
(ii) putting up notices/posters outside the houses of individuals who were required to quarantine themselves (Delhi, Maharashtra); and
(iii) putting up barricades around the houses of COVID positive patients who prefer to not check themselves in to a hospital.
APPROACH OF THE PAPER
The principles regarding the scope and limitations of the right to privacy have been enunciated in various decisions of the Supreme Court after detailed discussions of previous case law and esoteric discussions of jurisprudential principles. An analysis based on these discussions therefore risks descending into complicated legalese. To avoid this problem, and in the interest of making this article more accessible to those without a legal background, the author has tried to reduce these legal principles to a series of simple questions. This approach, while having the advantage of being more accessible, carries the risk of losing some legal detail. The author has tried his best to ensure that this risk is kept to a minimum and does not affect the accuracy of the analysis.
In the course of researching this paper, it came to light that a number of the above-mentioned state actions were being taken pursuant to orders passed by State governments. However, these orders are not readily available online for various reasons: they may not be properly indexed, may be in vernacular languages or, in certain cases, may not have been uploaded at all. The author has therefore been unable to analyse the specific orders and has limited his analysis to a statement of the relevant legal principles.
PRIVACY IMPLICATIONS
The right to privacy is not an absolute right and the state is allowed to restrict it under certain circumstances which have been established by judicial decisions. Therefore in order to analyse the privacy implications of these state actions we must try to answer two basic questions, viz.
(i) do these actions violate the right to privacy of an individual, and
(ii) in case they do violate the right to privacy, can the state justify these actions as falling within the legally accepted exceptions to the said right.
The right to be let alone
Do actions such as putting a poster or a barricade outside the house or stamping the hand of a person violate the individual’s right to privacy? In R. Rajagopal v. State of Tamil Nadu, [(1994) 6 SCC 632] after a discussion of various cases on the right to privacy, the Supreme Court summarised the broad principles relating to the said right, holding that “The right to privacy is implicit in the right to life and liberty guaranteed to the citizens of this country by Article 21. It is a "right to be let alone"”. Later in the seminal case of K.S. Puttaswamy v. Union of India, [(2017) 10 SCC 1] (Puttaswamy I or Right to Privacy Judgment) it was held that:
“Privacy postulates the reservation of a private space for the individual, described as the right to be let alone. The concept is founded on the autonomy of the individual……….
The autonomy of the individual is associated over matters which can be kept private. These are concerns over which there is a legitimate expectation of privacy. The body and the mind are inseparable elements of the human personality. The integrity of the body and the sanctity of the mind can exist on the foundation that each individual possesses an inalienable ability and right to preserve a private space in which the human personality can develop……
Privacy at a subjective level is a reflection of those areas where an individual desires to be left alone. On an objective plane, privacy is defined by those constitutional values which shape the content of the protected zone where the individual ought to be left alone…….
Privacy includes at its core the preservation of personal intimacies, the sanctity of family life, marriage, procreation, the home and sexual orientation.”
Thus the right to be let alone is an integral facet of the fundamental right to privacy. This right to be let alone has been further explained by Justice S.A. Bobde in his opinion in Puttaswamy I (quoting from Warren and Brandeis’ 1890 Article in the Harvard Law Review) as “the condition or state of being free from public attention to intrusion into or interference with one’s acts or decisions”. He further clarified that what appears to be essential for privacy is the power to seclude oneself from intrusions by others and these intrusions “may be physical or visual, and may take any of several forms including peeping over one’s shoulder…..”
Thus it can be said that any intrusion into this state of seclusion would violate the right to be let alone and therefore the fundamental right to privacy. Since these intrusions may be “physical or visual”, it is clearly possible to hold that a poster put up outside the house of a person (without consent) would violate this right to be let alone. Similarly, putting up barricades outside the house of a person, marking out the house of the individual, would also be considered an intrusion into the right to be let alone and therefore the right to privacy. Further, since the integrity of the body has been held to be part of the right to privacy, any visual intrusion such as a stamp put on the body of a person without his or her consent would violate the individual’s right to privacy.
Thus the first question has to be answered in the affirmative for all three state actions being analysed in this article. However as pointed out before, there are certain exceptions under which the state is allowed to restrict this right and we shall now try to determine if these actions can be justified as falling within those exceptions.
The Three Step Test
The brief outline of the right to privacy, as well as its scope and limitations, has already been discussed in Part I of this series. As mentioned above, the right to privacy is not absolute, and the Supreme Court in Puttaswamy I laid down a three step test that any state action affecting privacy has to satisfy in order to be constitutionally valid. This three step test, set out in the opinion of Justice Chandrachud, speaking for himself and three other Judges, emanates from the procedural and content-based mandate of Article 21, viz.:
(i) legality, which postulates the existence of law
This is an express requirement of Article 21 which provides that no person can be deprived of his life or personal liberty except in accordance with the procedure established by law. Thus the existence of a law is an essential requirement to satisfy this part of the test.
(ii) need, defined in terms of a legitimate state aim
This ensures that the nature and content of the law imposing the restriction is within the zone of reasonableness mandated by Article 14, which is a guarantee against arbitrary state action. The pursuit of a legitimate state aim ensures that the law does not suffer from manifest arbitrariness. The Supreme Court in E.P. Royappa v. State of Tamil Nadu, [1974 AIR 555] has held that State action must be based on valid and relevant principles and must not be guided by any extraneous or irrelevant considerations. Where the operative reason for State action is not legitimate and relevant but is extraneous and outside the area of permissible considerations, it would amount to mala fide exercise of power. In Natural Resources Allocation, In re, [(2012) 10 SCC 1] the Supreme Court explained “manifest arbitrariness” in the following terms:
“when it is not fair, not reasonable, discriminatory, not transparent, capricious, biased, with favoritism or nepotism and not in pursuit of promotion of healthy competition and equitable treatment. Positively speaking, it should conform to norms which are rational, informed with reason and guided by public interest, etc.”
(iii) proportionality which ensures a rational nexus between the objects and the means adopted to achieve them
This ensures that the means adopted in the state action are proportional to the object and needs sought to be fulfilled by it. Proportionality ensures that the nature and quality of the encroachment on the right to privacy is not disproportionate to the purpose of the state action. This doctrine has primarily been used and applied under Article 19 of the Constitution. The first case in which the Supreme Court categorically acknowledged and applied the doctrine of proportionality was Om Kumar v. Union of India, [AIR 2000 SC 3689], where the Court found that administrative action in India affecting fundamental freedoms (Article 19 and Article 21) has always been tested on the anvil of proportionality, even though it was not expressly stated that the principle being applied was the proportionality principle. In Modern Dental College and Research Centre v. State of M.P., [(2016) 7 SCC 353], a Constitution Bench of the Supreme Court laid down the four tests to determine proportionality, viz. “(i) that the measure is designated for a proper purpose (ii) that the measures are rationally connected to the fulfillment of the purpose, (iii) that there are no alternative less invasive measures, and (iv) that there is a proper relation between the importance of achieving the aim and the importance of limiting the right.”
In K.S. Puttaswamy v. Union of India, [(2019) 10 SCC 1] (Puttaswamy II or the Aadhaar Judgment), Justice Chandrachud elaborated on this third step of the three step test formulated by him in Puttaswamy I, saying that the essential role of the proportionality test is to enable the court to determine whether the act in question is disproportionate in its interference with the fundamental right. To determine this, the court will have regard to whether a less intrusive measure consistent with the objective of the law could have been adopted, and whether the impact of the encroachment on a fundamental right is disproportionate to the benefit which is likely to ensue. More recently, the Supreme Court summarized the requirements of the doctrine of proportionality in Anuradha Bhasin v. Union of India, [(2020) 3 SCC 637] (decided on January 10, 2020), while discussing the suspension of internet services in the State of Jammu and Kashmir, in the following words:
“In the first stage itself, the possible goal of such a measure intended at imposing restrictions must be determined. It ought to be noted that such goal must be legitimate. However, before settling on the aforesaid measure, the authorities must assess the existence of any alternative mechanism in furtherance of the aforesaid goal. The appropriateness of such a measure depends on its implication upon the fundamental rights and the necessity of such measure. It is undeniable from the aforesaid holding that only the least restrictive measure can be resorted to by the State, taking into consideration the facts and circumstances. Lastly, since the order has serious implications on the fundamental rights of the affected parties, the same should be supported by sufficient material and should be amenable to judicial review.”
The last prong of the requirements quoted above, that the state action should be backed by sufficient material was also applied by the Supreme Court in State of Maharashtra v. Indian Hotel and Restaurants Association, [(2013) 8 SCC 519] when discussing the ban on dance bars in the state of Maharashtra. The same principle, viz. that there must have been at least some empirical data to back the state action encroaching upon fundamental rights was also applied when striking down the RBI’s ban on cryptocurrencies in the case of Internet and Mobile Association of India v. Union of India, [Writ Petition (Civil) No.528 of 2018, decided on March 4, 2020].
To summarize, the test to determine proportionality can be expressed in simple words in the following manner:
(i) whether the measure has a legitimate goal,
(ii) did the authorities assess alternative mechanisms,
(iii) did the state choose the least intrusive measure under the circumstances, and
(iv) is the action of the state backed by sufficient material or empirical data.
Three Step Test in Three Questions
Apart from the opinion of four Judges which elucidated the three-fold test in Puttaswamy I, both Justice Bobde [Para 46 of Justice Bobde’s Judgment in Puttaswamy I] and Justice Nariman [Para 86 of Justice Nariman’s Judgment in Puttaswamy I], in their separate judgments, opined that since the right to privacy can be traced to various fundamental rights, the test to be satisfied would be the one under whichever fundamental right the privacy right being violated is traced to in that particular case. This view is similar to the classic test for Article 21, which incorporates within it the test for Article 14, i.e. the manifest arbitrariness test, as well as Article 19, i.e. the proportionality test. The above position is set out in very simple terms by Justice Sikri in Puttaswamy II, Para 89, viz.
“Therefore, in the first instance, any intrusion into the privacy of a person has to be backed by a law. Further, such a law, to be valid, has to pass the test of legitimate aim which it should serve and also proportionality i.e. proportionate to the need for such interference. Not only this, the law in question must also provide procedural guarantees against abuse of such interference.”
Reduced to its simplest terms, the test requires us to ask and answer three questions to determine whether each of the state actions analysed here is in fact violative of the right to privacy, or whether it can be said to fall within the exceptions to the said right. These questions are:
- Is the state action backed by a valid law?
- Does the law satisfy a legitimate state aim or is it arbitrary?
- Is the law proportionate to the object being sought to be achieved?
ANALYSIS
We shall now apply the principles discussed above to the state actions to determine whether each of these acts of the authorities satisfies the three step test. More specifically we shall try to answer the three questions that we have formulated from the three step test for each of the state actions.
Ques. 1. Is the State action backed by a valid law?
Ans. As mentioned in the research methodology, due to the lack of availability of official materials from different states, the author is not in any position to comment on the legal validity of the orders issued by each state regarding their decisions to either stamp individuals, or put up posters or barricades outside their houses. In absence of such material the only thing that can be said is that if the relevant orders have been issued without jurisdiction or by an authority which does not have the power to issue such directions then the orders would be considered invalid. In the absence of any material which is indicative of such impropriety, one has to assume that the orders have been validly issued, in which case the actions of the state in stamping individuals or putting up posters or barricades outside the houses of individuals would be considered valid.
Despite the limitations regarding analysis of each order as specified above, it may be useful to briefly discuss whether a broad legal framework exists for the kinds of steps that are being analysed here. The Disaster Management Act, 2005 provides for the establishment of the Disaster Management Authorities at the national, state as well as district levels. The powers of the National Disaster Management Authority are extremely wide and under section 6(1) as well as 6(2)(i) it has broad powers to “take such other measures for the prevention of disaster, or the mitigation, or preparedness and capacity building for dealing with the threatening disaster situation or disaster as it may consider necessary”. Section 20 obligates the State Government to constitute a State Executive Committee to assist the State Disaster Management Authority. Section 22(b) empowers the State Executive Committee to “examine the vulnerability of different parts of the State to different forms of disasters and specify measures to be taken for their prevention or mitigation”. Under section 22(h) it also has the power to give directions to any Department or other authority of the State regarding actions to be taken in response to any threatening disaster situation or disaster. Under section 24(b) the State Executive Committee also has the power to “control and restrict the entry of any person into, his movement within and departure from, a vulnerable or affected area”. Section 24(h) also gives the State Executive Committee the power to procure exclusive or preferential use of any amenities from any authority or person. Similar provisions giving wide powers to the State Government exist in section 2 of the Epidemic Diseases Act, 1897 which empowers the State Government to take such measures and prescribe such temporary regulations to be observed by the public as it may deem necessary.
A perusal of the above provisions would show that the State Executive Committee has the power to restrict a person’s movement and hence the barricading of a COVID positive patient’s home may be justified through the provisions of the Disaster Management Act, 2005. As far as pasting notices and posters on the gate or outer wall of the house of a citizen is concerned, this power could be read into either section 24(h) or the blanket provisions of section 22(b) of the Disaster Management Act. Although this issue has not been substantively decided as yet by any Courts of law, it appears that the Courts are taking a negative view of the practice of pasting notices and posters outside the homes of COVID positive patients, with the Delhi High Court as well as the Supreme Court deprecating this practice by the State authorities in ongoing proceedings.
Regarding the stamping of incoming passengers, the State may try to justify these actions by relying on the blanket provisions contained in section 22(b) of the Disaster Management Act, 2005 or section 2 of the Epidemic Diseases Act, 1897. However, it must be noted that these provisions do not specifically give the State government the power to violate the bodily privacy of an individual by forcibly stamping their hands. Therefore, in the absence of any authoritative legal precedent, this question cannot be answered either way with any certainty.
Ques. 2. Does the law satisfy a legitimate State aim or is it arbitrary?
Ans. As with the first question, the primary documents, which might contain the reasons behind the various state actions, are not freely available; we therefore have to rely upon statements in the media and news reports to determine the intention behind these moves. From media reports (Maharashtra, Delhi, Karnataka), it appears that the reason for stamping individuals required to quarantine themselves was that some passengers were not following the instructions properly, and the governments of different states therefore adopted the strategy of putting a stamp (usually on the back of the palm) declaring that the person has been advised to be quarantined. This appears to have been done to make it easier for people to identify individuals who break quarantine; in fact, the Police Commissioner of Bengaluru even stated publicly that such individuals would be arrested and sent to government quarantine centers. Similarly, in the case of putting up posters outside the houses of people, the reason seems to be to ensure strict compliance with home quarantine requirements and easier monitoring at the village level. A petition was filed in the Delhi High Court challenging this action by the Delhi government, after which the Delhi Disaster Management Committee took a decision to stop putting up such posters, as the practice was making people hesitant to get themselves tested for fear of stigmatization. Putting up barricades outside the homes of COVID positive patients was another method used by the authorities to ensure that the patients stay inside their houses and outsiders are unable to enter, thereby stopping the spread of the disease by isolating positive patients from healthy individuals.
The author is not aware of any other reasons due to which the States had undertaken actions such as stamping people’s hands or putting up posters or barricades outside their houses. We therefore will assume that the reason given in the media reports for these actions i.e. preventing the spread of COVID-19, is genuine. In that case, the objective behind this move, being a public health measure, would be a legitimate state aim. Since there is no information about any other extraneous, capricious or discriminatory reasons behind these actions, it can be said that the second prong of the three step test is satisfied.
Ques. 3. Is the law proportionate to the object being sought to be achieved?
Ans. This prong of the test refers to the doctrine of proportionality, which can be simplified into the following questions: (i) whether the measure has a legitimate goal, (ii) did the authorities assess alternative mechanisms, (iii) did the state choose the least intrusive measure under the circumstances, and (iv) is the action of the state backed by sufficient material or empirical data. We shall now try to answer these questions for each of the different types of state actions:
(i) Legitimate goal: The first question will have the same answer as for Ques. 2 above, i.e. the legitimate goal of each of these measures is to prevent the spread of COVID-19.
(ii) Assessment of alternative methods: Since information about the process adopted by the authorities before deciding to implement each of these actions is not available, the second question cannot be answered with any authority. However, if challenged in court, the authorities will have to prove that in the process of deciding to implement any of these measures, they assessed all the possible alternative actions that could have achieved the objects being sought. Thus they would have to show that they considered alternative methods, other than stamping, to stop travelers from breaking the requirements of their quarantine, e.g. whether the objective could have been achieved through physical monitoring by way of surprise checks. Similarly, the state may have to show that it considered alternative methods to prevent the spread of the disease other than putting up posters or barricades outside the houses of COVID patients, e.g. physical monitoring through surprise checks, installation of CCTV cameras, etc.
(iii) Least intrusive measure: It is not enough for the authorities to merely show that alternative methods were considered; they would also have to prove that, out of all the alternatives assessed, the one they selected was the least intrusive method to achieve their objective. Thus the authorities would have to show that stamping travelers with indelible ink was the least invasive of all the methods considered to ensure that they did not break the rules of their quarantine, or that putting up posters outside the houses of people advised home quarantine was the least invasive measure to ensure strict compliance with their quarantine requirements under the circumstances. Similarly, they would have to show that putting up barricades outside the houses of COVID-19 positive patients was the least invasive method to ensure that patients do not interact with other individuals. An assessment of the negative impacts of such measures, such as reduced testing due to the fear of stigmatization, would also be a consideration in the decision making process.
(iv) Based on sufficient material or empirical data: Since the COVID-19 situation is unprecedented, it is unlikely that the authorities would have any reliable empirical data indicating that the measures implemented by them would achieve their objectives. In such a situation one may have to turn to expert advice for guidance, which in this case would be the literature on epidemiology and community health as well as academicians and experts in the field. In order to justify these actions the State will have to show that it relied upon sufficient material and applied its mind to conclude that, of all the methods available, the methods adopted were the least invasive to achieve its goals based on expert advice and available data. The State would have to show that there was some valid material or data which indicated that stamping incoming travelers would be the best way to ensure that they did not violate their quarantine requirements, or that putting up posters and barricades were the best means to ensure that suspected or positive patients do not interact with other individuals.
CONCLUSION
The actions of the various state governments of stamping the hands of incoming travelers or putting up posters or barricades outside the houses of people would definitely be an interference with or restriction on their right to privacy, but whether it is a legal and valid restriction is very much a matter of debate. There exists a legal framework where such actions could possibly be considered valid under specific circumstances upon satisfaction of certain conditions. However, it is far from clear whether these conditions have been satisfied in the case of every such state action.
It is clear from the discussion above that it cannot be authoritatively said that the three step test laid down by the Supreme Court has been satisfied in the case of every state action. We can say with some certainty that the second step of the test would be satisfied in most cases; however, whether the first and third steps, relating to the existence of a valid law and to proportionality, are satisfied is far from certain. If these conditions cannot be said to have been satisfied by the State, then the actions discussed above would certainly be a violation of the fundamental right to privacy of the citizens.
The post has been authored by Vipul Kharbanda and reviewed by Shweta Reddy.
Data driven election campaigning and India's proposed data protection framework
Close engagement with the electorate is a necessary tactic to win elections. The last decade has seen a shift in the strategies for election campaigning, from mere doorstep canvassing to using data analytics tools to understand voter sentiment. Online browsing patterns are combined with publicly available electoral databases to create data points on age, caste, religion, political beliefs etc., which are used to develop better advertising and marketing techniques to provide targeted information to voters. The entry of third parties such as data brokers and data analysts has made the practice of election campaigning rather opaque. Apart from concerns around micro targeting and free will, political manipulation and its impact on democracy, these activities raise serious questions around the legality of personal data processing for such micro targeting activities. The Cambridge Analytica incident and the accompanying questions around the misuse of personal data disclosed through social media have resulted in several data protection authorities scrambling to issue guidance on the use of personal data for election activities. Even though studies on the privacy harms of data driven elections are limited, the existing research is heavily set in the context of the United States and the harms resulting from its lax data protection measures. Research on data driven elections in Germany has highlighted the impact of strong data protection provisions on the rights of the citizen. This essay will examine whether the proposed data protection framework of India is equipped to deal with the shift towards data driven elections, focusing mainly on the automated nature of decision making and the accompanying profiling for targeted communication.
How does data-driven election campaigning work?
The shift towards data driven elections assumes that comprehensive knowledge of the voter’s political preferences and beliefs will aid in developing an effective communications strategy. For example, a particular voter’s view on immigration policy can be analysed based on the data gathered. If the voter has indicated negative attitudes towards the policy, targeted communication related to the candidate’s intentions of curbing undocumented immigration can be sent to that particular voter. Hence, the campaign depends on finding new or different sources of data that will allow political parties to analyze the preferences of voters. Large amounts of data about individuals are gathered. Basic details regarding the voters, such as names, addresses and age, are usually available in most countries as part of their election laws. Details regarding individuals’ opinions, preferences, concerns etc. can be gathered from data that is considered “publicly available” through social media websites or from datasets obtained from data brokers. Data can also be gathered using cookies, social plugins, and other tracking technologies. This is subsequently used to profile voters and then predict their preferences for internal strategic campaign discussions, or to send targeted political advertisements based on their preferences. Studies have shown how personality traits, political views and other characteristics can be inferred from Facebook likes, as the sketch below illustrates. However, the range of such micro-targeting can differ and may not be as simple as the one highlighted in the above-mentioned example.
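To make the kind of inference described in those studies concrete, the following is a minimal, hypothetical sketch of how an analyst might train a simple model to predict a voter’s stance on an issue from page “likes”. All page names, voter records and labels are invented for illustration, and this is not the method used by any particular firm; it only shows the basic mechanics of inferring a preference from behavioural data.

```python
# Hypothetical sketch: predicting an issue stance from page "likes".
# All data, page names and labels are invented for illustration only.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

# Each record: which pages a (fictional) voter has liked (1 = liked).
likes = [
    {"page_border_watch": 1, "page_city_football": 1},
    {"page_open_borders_now": 1, "page_indie_films": 1},
    {"page_border_watch": 1, "page_gun_club": 1},
    {"page_open_borders_now": 1, "page_city_football": 1},
]
# Survey-derived label: 1 = against the immigration policy, 0 = in favour.
stance = [1, 0, 1, 0]

vec = DictVectorizer()
X = vec.fit_transform(likes)
model = LogisticRegression().fit(X, stance)

# Score a new voter profile before deciding whether to send targeted messaging.
new_voter = vec.transform([{"page_border_watch": 1, "page_indie_films": 1}])
print(model.predict_proba(new_voter)[0, 1])  # estimated probability of "against"
```

In practice such models are trained on far larger datasets and many more behavioural signals, but the basic pipeline of turning disclosed or tracked behaviour into a predicted political attribute is as simple as this.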
The process of personal data processing
Personal data processing begins with clearly identifying the main objective/purpose of processing. The purpose of processing has to be specific and cannot be vague or ambiguous, to ensure that the data points collected are not excessive in comparison to the main purpose. Then the personal data and sensitive personal data categories that are required to achieve those purposes are identified. It is essential that each data point collected is directly related to the purpose of processing. Based on the personal data categories that need to be collected, the lawful ground of processing under the applicable legislation has to be identified. Once the lawful ground and the main purpose of processing are identified, the retention period and subsequent destruction methods for the personal data collected have to be determined.
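One way to make these steps concrete is to think of each processing activity as a structured record that has to be filled in before any data is collected. The sketch below mirrors the sequence described above: purpose first, then the data categories justified by that purpose, then the lawful ground and retention details. The field names and example values are illustrative only and are not drawn from any statute.

```python
# Illustrative "record of processing" capturing the steps described above.
# Field names and example values are hypothetical, not statutory terms.
from dataclasses import dataclass
from typing import List

@dataclass
class ProcessingRecord:
    purpose: str                     # specific, unambiguous purpose of processing
    data_categories: List[str]       # each category must be necessary for the purpose
    sensitive_categories: List[str]  # e.g. political opinion, only if strictly required
    lawful_ground: str               # consent / legal obligation / reasonable purpose ...
    retention_days: int              # how long the data is retained
    destruction_method: str          # how it is destroyed after the retention period

voter_outreach = ProcessingRecord(
    purpose="Send campaign newsletters to subscribers who opted in",
    data_categories=["name", "email address"],
    sensitive_categories=[],  # none needed for this narrow purpose
    lawful_ground="consent",
    retention_days=180,
    destruction_method="secure deletion from the mailing platform",
)
print(voter_outreach)
```

Documenting processing in this form also makes it easier to spot data points that are excessive relative to the declared purpose before collection begins.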
Data driven election campaigning and key considerations for data protection legislations
The legality of personal data processing in these data driven elections is dependent on the data protection laws of every country. In Canada and Australia, political parties are exempted from the application of the data protection law. However, private entities that provide services to political parties will need to comply with the overarching privacy framework. Neither the General Data Protection Regulation of the EU nor India’s proposed data protection framework make a distinction between political parties and private entities providing services to such political parties. In short, in India the application of the proposed legislation extends to political parties as well as to the private entities that might be involved in the process. In addition to mere application of data protection laws, some of the key provisions that need to be analyzed within the context of targeted communication for election campaigning are the legal grounds for processing personal data, notice requirements, approach towards publicly available personal data, data principal rights (specifically, rights against automated decision making and right to object) and oversight over the data processing.
Privacy notice
A privacy notice is supposed to be provided to the data principal prior to data collection so that the data principal understands the details of the processing that will be undertaken after they disclose their data. In India (as in most countries), electoral rolls of constituencies are public documents. Political parties can gain access to these lists in accordance with the Registration of Electors Rules, 1960. The information on the electoral rolls provides analysts with access to the individual’s name, their father’s name, voter ID, location, and age. Details about their socio-economic status can be obtained through land records, BPL lists etc. Additional details can be obtained through third parties such as low level mobile operators (people who sell SIM cards), banks and other data brokers. Both the GDPR and the proposed data protection framework of India require notice to be provided to the individual when personal data is not collected directly from the individual.
In the context of election campaigning, compliance with the legal requirement to provide such a notice will have to be examined within the larger context of the secrecy around campaign strategies. Since the notice will require specific details regarding the processing, chances are it could reveal the campaign strategy. In such a case, parties may either simply omit compliance with the privacy notice or not provide sufficient details in the notice – both of which would be a violation of the requirement. Regardless, since the primary purpose is profiling for targeted communication, the transparency fallacy of providing adequate explanations of automated decision-making systems can extend to the initial privacy notice itself. These concerns are exacerbated in the Indian context due to the absence of a requirement to provide data principals with details regarding automated decision-making systems. In the absence of knowledge of such operations, data principals will not be able to exercise their corresponding rights. It is important to note that, if such data can be used to send targeted communication, similar datasets of voters can also be abused to discriminate against them based on their political affiliations and other preferences. Hence, clear and enforceable guidelines on the drafting of the privacy notice are highly encouraged.
Lawful grounds for processing
Before initiating data collection, entities are expected to identify the lawful ground for such collection and processing based on the applicable legislation. As a result of the investigation initiated by the ICO and their code of practice on personal data usage in political campaigns, there is considerably more guidance around the appropriate lawful ground of processing from the jurisdiction of the EU. Such guidance will be useful for our analysis of the Indian law as we seem (or claim?) to have borrowed heavily from GDPR. Under GDPR, the most relevant grounds for processing personal data in the electoral context are the consent of the individual, compliance with a legal obligation or performance of a task carried out in the legitimate interest of one of the actors. India’s proposed data protection framework does include the lawful grounds of consent and processing based on legislation. However, in place of processing based on legitimate interest, the framework includes processing based on reasonable purposes. The feasibility of relying on the legal grounds of consent and processing based on reasonable purposes shall be examined below.
Consent
The standard for valid consent under the proposed framework remains the same as that of GDPR, i.e., it has to be free, informed, specific and capable of being withdrawn. Theoretically, of all the lawful grounds identified in data protection legislations, informed consent is by far the most appealing. It provides the individual with the best context-driven control over the disclosure and use of personal data. However, it might be necessary to re-examine the viability of complying with the high standard of consent for complex data processing operations such as the ones that enable data driven elections.
Free Consent
For consent to be “free”, the individual must have had a real choice in providing their assent, without fear of negative consequences in the absence of such assent. An important metric to determine this is the relationship between the entity seeking consent and the individual providing it. For example, the power imbalance between governments and citizens makes the former’s reliance on consent as a lawful ground of processing questionable. In the case of processing for election campaigning, since most of the indirect data collection is undertaken through social media, a case of power imbalance may be difficult to establish. “Free” will have to be examined in the context of the choices available in the relevant market and whether the services offered by other organisations in a similar market are deemed equivalent. The absence of real choice in the market, coupled with the fact that in most instances individuals do not have a chance to negotiate the terms of processing with social media companies, can result in unfair tying of services in the absence of transparent enforcement.
Informed and specific
Another important element of valid consent, indicating that the individual has autonomy over their personal data, is their ability to comprehend a privacy policy and then give their assent to the processing. Even though the onus is on the individual to comprehend the policy, entities are required to provide details regarding the processing in a clear and concise manner. In the electoral context, since micro targeting relies heavily on automated decision-making systems, the privacy notice should ideally explain the operations of such a system in a comprehensible manner for the consent to be considered “informed”.
GDPR’s notice requirements mandate data controllers to provide “meaningful information about the logic involved as well as the significance and the envisaged consequences of such processing for the data subject” in the event of profiling. A determination of whether a right to explanation can be provided has to be made on a case by case basis, along with a realistic assessment of the feasibility of simplifying the details of complex operations. Due to the complex nature of these operations, there are questions around what constitutes meaningful information for the data subject to be considered informed for the purposes of valid consent. In India, the proposed data protection framework doesn’t require data fiduciaries to notify the existence of an automated decision-making system, let alone provide meaningful information regarding the logic involved and its significance for the data principal. In the absence of such a requirement, it is highly unlikely that profiling in the electoral context will satisfy the conditions of valid consent. Relying on consent in the absence of real choice, stemming from the lack of details around the processing, will make consent meaningless and will just be an excuse to extract personal data from unsuspecting data principals.
Legitimate interest and publicly available personal data
Under GDPR, the lawful ground of legitimate interest can be exercised only if the entity’s interests are not overridden by the interests or the fundamental rights and freedoms of the data subjects. According to the Article 29 working party, for the balancing test to be carried out, the interest must be clearly articulated and a restrictive approach should be taken while substantively analyzing the balancing test. Even in cases where organisations have a legitimate interest in knowing their customers’ preferences in order to target them with better advertisements and personalize their offers, it doesn’t mean that the balancing test will naturally fall in their favour. Since customers’ preferences can be used to create complex profiles that can reveal highly sensitive personal data, the controller’s interest may be overridden by the interests and rights of the data subject. India’s proposed data protection framework uses the term “reasonable purposes” in place of “legitimate interests”. The balancing test for exercising the lawful ground of legitimate interests under GDPR and the lawful ground of reasonable purposes in India is similar. However, the most vital difference, specifically in the context of processing for election campaigning, is the explicit inclusion of the processing of publicly available personal data as one of the purposes under processing for reasonable purposes in the Indian framework.
In the context of election campaigning, if the lawful ground of legitimate interests is to be exercised under GDPR, the fact that personal data is publicly available is considered one of the many factors in conducting the balancing test. Under India’s proposed framework, processing publicly available personal data may be included as one of the stand-alone purposes under the lawful ground of processing for reasonable purposes. This can mean that entities are allowed to process personal data by virtue of it merely being public without regard to the overall objective of data processing. GDPR requires notice to be provided to the individual in case the data that is being processed is publicly available and has been collected from a source that is not directly the individual. The proposed framework, in its current form, may not require entities to provide individuals a privacy notice prior to initiating the processing of such publicly available personal data.
GDPR’s prohibition around processing special categories of personal data doesn’t extend to personal data that has been manifestly made public by the data subject. However, the determination of what is manifestly made public is relatively restrictive. The Article 29 working party, in its guidance for the law enforcement directive, explains the phrase ‘manifestly made public’ as data which the individual is aware will be available to everyone, including the authorities. Hence, using similar logic in the context of social media, for information to be considered manifestly public the individual should have anticipated the availability and use of their data for receiving targeted communications in the course of election campaigning. In its current form, the proposed framework doesn’t require entities to examine the context and purpose of the initial disclosure before classifying personal data as “publicly available personal data” for the purposes of the legislation. There are no additional safeguards for the processing of publicly available personal data that reveals any other detail that could be considered sensitive personal data. The lawful ground of processing personal data for reasonable purposes in the proposed data protection framework, in its current form, may therefore be relied on for data driven election campaigning. However, further deliberation on the impact of its current treatment of publicly available personal data on data principals is highly encouraged before the enactment of the Bill. Allowing a free-for-all use of publicly available personal data without taking into consideration the context behind such disclosure is counter-intuitive to protecting personal data altogether.
Data principal rights
Apart from the privacy notice, exercising data principal rights is another method through which individuals can exercise control over their data. Some of the key rights available to a data principal in India under the proposed data protection framework are the right to access and confirmation, correction, erasure, data portability and the right to be forgotten. The existence of these rights is far better than the current framework under the Information Technology (Reasonable security practices and procedures and sensitive personal data or information) Rules, 2011. However, within the context of targeted communication for election campaigning, we seem to be missing two of the key rights included in GDPR – the right against automated decision making and the right to object.
Right against automated decision making
Since much of the micro targeting activity consists of solely automated decisions, an analysis of the data principal rights that apply directly to the situation is important. Article 22 of GDPR restricts solely automated decisions that have a legal or similarly significant impact on the data subjects. The Regulation permits such automated decisions only if additional safeguards are implemented, such as human intervention, an explanation of the decision to the data subject, and an option for the data subject to challenge the decision. It is difficult to establish the existence of significant effects of such targeting, as it is challenging to establish the cause and effect of targeted communication, i.e., the actions of the individual could have been influenced by many reasons other than just the targeted communication. However, the inclusion of such a right, and the accompanying analysis by the entity prior to the initiation of data processing, would give the data principal an added layer of protection that is currently absent in the Indian law.
Right to object
Under GDPR, if the entity relies on the lawful ground of processing for the legitimate interests of the controller, the data subject also has the right to object to the processing of personal data. In the case of the specific purpose of direct marketing, controllers have to cease the processing operations once this right has been exercised by the data subject. Since most data driven election campaigning is based on direct marketing strategies, the existence of this right is important. The Srikrishna committee justifies the absence of this right by relying on the data principal’s right to withdraw consent. However, it is important to note that the data principal can withdraw consent only if the lawful ground for processing personal data is consent. The data principal will not be able to withdraw consent if the lawful ground of processing personal data for reasonable purposes is relied on.
Oversight
As with any other legislation, the efficacy of its application is determined by the nature of oversight that is provided. GDPR, along with setting out strict data protection requirements, also requires member states to set up independent data protection authorities. These authorities are empowered with strong investigative, corrective and guidance powers that enable them to enforce the obligations prescribed by the law. Since election campaigning is, by its very nature, a political topic, it is essential that the entity providing oversight over such activities is free from external political influence. The proposed data protection framework in India sets up the Data Protection Authority of India, whose investigatory, corrective and advisory powers are similar to those of its European counterparts. However, the procedure for appointment of the members of the Authority and the composition of the selection committee raise questions around its perceived independence. The central government has been tasked with the power to appoint the members of the Authority on the recommendation of the selection committee, which is composed of individuals only from the Executive branch of the government. Since election strategies directly influence the Executive, it is reasonable to be apprehensive about potential external influence.
Conclusion
Even though the proposed data protection framework references most of the internationally accepted privacy principles, the obligations stemming from those principles have not been modified to address the changing landscape of personal data processing. The absence of key data principal rights and the relaxed protection provided to publicly available personal data reflect a lack of deliberation around the complexities surrounding contextual disclosures, further processing of personal data, profiling etc. It is admitted that international guidance around election campaigning and data protection has stemmed from the advisory powers of the data protection authorities. However, the absence of codification of crucial data protection obligations may run afoul of the spirit of safeguarding the privacy of individuals that was enshrined in the Puttaswamy judgement. It is imperative that the Joint Parliamentary Committee currently deliberating the provisions of the proposed framework introduce some of these obligations in the primary legislation itself.
Reviewed by Arindrajit Basu and Pallavi Bedi.
Would banning Chinese telecom companies make 5G secure in India?
Speaking on the status of 5G in India at a virtual summit, Niti Aayog CEO Amitabh Kant noted the country is set to go all out in its adoption, but that there exist security concerns with the technology. He also pointed out that India is yet to make a decision on the participation of Chinese telecom companies in its 5G infrastructure. In many ways, this has been the story of 5G adoption globally. Governments see the potential of 5G to usher in a new era of prosperity and development but are wary of the risks it poses. Central to these risks is the fear of espionage conducted by Chinese corporations like Huawei and ZTE that are the major suppliers of the components required for 5G networks. These concerns have resulted in a multitude of sanctions being levied against Chinese telecom corporations by Western nations, whether through the United States citing cybersecurity concerns while issuing an executive order effectively banning Huawei and ZTE from participating in its 5G telecom networks, or through UK Prime Minister Boris Johnson laying out a 2027 deadline for the removal of all Huawei equipment from UK networks.
Closer to home, 2020 has already seen a severe deterioration in Sino-Indian relations following the cross-border clashes in the Galwan valley. The Indian government has deployed a number of cyber related sanctions against China in retaliation for the military clashes between the States, such as the banning of a number of Chinese apps, including TikTok. Despite these sanctions being levied against China, one area where no action has yet been taken is Chinese companies’ participation in India’s 5G infrastructure. As of this writing, Chinese telecom companies are still permitted to undertake testing of 5G within India. However, in light of the strained relationship between the two countries, as well as the security concerns identified by other States, a scenario where Chinese companies are banned from India’s telecom networks in the future is plausible, if not highly likely.
The possibility of such a scenario raises a number of questions. How would such a ban on Chinese participation in 5G affect India domestically? Would banning Chinese telecom companies be enough to solve India’s 5G security concerns? And if not, how can India develop a strategy to ensure that consumers have fair access to secure 5G networks?
Why have Chinese vendors been banned in other countries?
The primary concern from the West relates to Huawei’s proximity to the Chinese Government. Chinese legislation requiring Chinese companies to assist the State in matters of national intelligence is seen as an obvious threat by the US and its allies in a situation wherein trust is hard to come by. While Huawei has stated that it would not cooperate with China in any form of geopolitical espionage, this has done little to quell suspicion.
What does banning Chinese companies mean for Indian consumers?
As of right now, not much really. 5G is at an incredibly nascent stage and its adoption in India is estimated to be a few years away at the earliest, with no clear timeline on when the 5G spectrum will be auctioned off in India. Moreover, Chinese companies are as of now permitted to undertake 5G testing in the country.
However, in a hypothetical situation where these companies are banned, the effects will be seen in a few years’ time. The most obvious impact is that a ban on Chinese providers will result in a less competitive market consisting of fewer actors. Developing 5G in India is incredibly expensive for three reasons. First, there is the cost associated with upgrading infrastructure to that which is compatible with and optimized for 5G. Second, India has the highest cost (reserve price) for purchasing spectrum in the world. Third, there is the existing debt among telecom companies. The costs involved in developing 5G therefore far outweigh the benefits for these companies. This problem will only be compounded by banning Chinese companies in the space, which are seen to operate more cheaply than their European counterparts. Such a ban could therefore further delay 5G’s adoption in India by a significant amount of time.
Moreover, given the security concerns, the government could proceed with favouring only Indian companies within the sector. With Reliance now claiming to have developed its own 5G technology within India, this could result in a situation wherein it becomes the primary, or even sole, provider of 5G infrastructure in India. Any such corporate monopoly over such critical infrastructure would undoubtedly harm domestic consumers.
Does banning Chinese companies solve India’s security concerns relating to 5G?
Despite all of the potential negative repercussions, the argument to exclude a hostile nation from potentially having access to Indian infrastructure is a persuasive one. Citizens’ data privacy and national security have to be prioritised over any marginal economic gains that may result from allowing Chinese corporations to be involved in 5G infrastructure. And it’s feasible that the negative side effects regarding the rise of a domestic monopoly can be handled by effective State regulation. But this leaves us with the question: is banning Chinese companies all that the government has to do to ensure 5G is implemented securely?
Not really. Limiting the involvement of Chinese companies will undoubtedly remove certain threats of espionage, but this is far from the only concern with 5G. While 5G has made certain improvements in security when compared to 4G, it is far from unbreakable. Firstly, initial rollouts of 5G are expected to be done on top of existing cellular networks so as to avoid new infrastructure costs. This interoperation of 5G with existing 4G (and in some cases 3G) networks will result in early 5G being subject to the same exploits that 4G is.
Secondly, 5G presents a risk due to the additional avenues through which it can be attacked. 5G’s software-based routing system and its connection to a vast number of traffic points through the Internet of Things (IoT) create more points of potential vulnerability that can be exploited. Finally, the globally accepted 5G standards themselves allow companies the discretion to implement them in a more lenient manner. This includes making optional the use of certain cipher algorithms that protect user integrity. So it would come as no surprise if companies motivated by profit were in the future to cut these corners, making the network less secure.
All of this comes together to mean one thing: no matter how many Chinese companies India excludes from its 5G infrastructure, it will never be absolutely secure. Moreover, formalised access through a company has never been a prerequisite for a State to target another through a cyber attack. Cyber attacks perpetrated by external actors outside of companies or States have existed in the past and will continue to exist in the future. That isn’t to say that the government should include Chinese corporations in 5G - the concerns over espionage make it clear that they shouldn’t be involved. What it does say, however, is that this has to be one step in a larger 5G strategy that looks to ensure implementation while maintaining security.
India’s 5G strategy
In order for 5G’s implementation in India to be successful it has to fulfil two criteria - it must be secure and it must generally be in the economic interest of the consumer. Both of these criteria can be met with a mixture of legislative and strategic decisions.
On the side of security, the most obvious step that can be taken would be to prevent the participation of those companies that are either primarily based in a hostile State or that can be significantly compromised through foreign legislation - such as Huawei and ZTE. In terms of legislative action, the government must aim to address the security concerns regarding 5G as part of a dedicated cybersecurity law. Such a law must ensure that telecom companies are tasked with a duty of care to ensure the cybersecurity and privacy of user data. This would compel companies working on 5G to meet the highest threshold of security standards when implementing their networks. Such a law can also lay down strict requirements and standards for data encryption that can serve to minimise damage in cases wherein a 5G system is compromised.
On the economic side, the government must view 5G as a form of critical infrastructure. If we are to believe the vision of a future wherein 5G is a necessity, then the State must take steps to ensure its widespread availability to all sections of society by limiting its cost. Private participation in this sector must therefore be appropriately regulated keeping this goal in mind. Given the reduction in market actors for security reasons, there must be strict enforcement of competition law to prevent domestic telecom providers from forming monopolies or cartels and setting exorbitant prices. One other way to reduce costs would be for the State to ensure that gaps in 5G supply chains are properly dealt with, so as to reduce dependence on foreign States for components. Beyond these measures, consumers must also be educated so as to be able to make better informed decisions regarding their 5G access, and must have recourse to efficient grievance redressal mechanisms.
Ultimately, if India is to ensure that 5G is implemented in a manner that is both safe and fair, it must look to balance security and consumer benefit. And while banning Chinese corporations would make the system more secure, such an action would mean little without a series of additional steps to handle the other security concerns with 5G while ensuring that Indian consumers don’t miss out.
India Digital Freedom Series: Internet Shutdowns, Censorship and Surveillance
Read the reports
Background
Amidst global trends towards authoritarianism and a closing space for civil society, India’s rapidly changing landscape calls for ongoing attention. In the last year alone, upheaval around the Citizenship Amendment Act protests, sectarian violence and communal riots in Delhi and elsewhere, the emergence of Covid-19, and issues of statelessness and discrimination have raised questions about the state of civic freedoms in India. At the same time, efforts to mold and restrict civil society, through funding limitations and a narrative against activism and ‘foreign agents,’ continue to reverberate across the non-profit sector. Technology has played a major role in all of these developments, with expression and democratic debate increasingly carried into the digital sphere, and privacy, data, and surveillance taking center stage, particularly amidst a global pandemic. India additionally has the notorious distinction of being the democracy with the world’s longest-running internet shutdown. Other examples of how digital rights are being impacted in India abound: possible government-sanctioned surveillance of activists and journalists, various forms of censorship, and denial of access to information.
Documentation and consideration of such phenomena is critical, given the role digital developments will play in shaping Indian society in the 21st century. Technology can be a great enabler of constitutional values and welfare, and can act as a facilitator of public discourse. It can also be used by the state to fetter the realization of constitutional rights and restrict the growth of civil society activism and public discourse. To date, there exists little comprehensive coverage of the overall universe of policies and laws affecting digital rights, and of how their implementation is impacting Indian civil society actors, including non-profits, activists, media, minority groups, and others.
India’s constitutional ethos provides for a wide array of fundamental rights designed to protect and empower the most vulnerable. It views the state as a key actor in breaking existing barriers of structural inequality - something technology can play a role in - if designed and implemented reasonably, with the widest possible consultation. Given India’s status as the world’s most populous democracy, along with its considerable heft in the Information and Communications Technology (ICT) sector globally, how these issues play out will be critical for the future of digital civic space, in South Asia, Asia, and beyond.
This report undertakes an examination of key topics related to digital rights and civic space in India. It focuses on four areas of particular concern, where restrictive policies threaten to violate fundamental freedoms and restrict civil society and public participation. The topics covered include: 1) Internet Shutdowns, 2) Online Censorship, 3) Platform Governance, and 4) Surveillance. Each chapter begins with a factual overview identifying the scope of the problem across the country. It proceeds to evaluate relevant Indian laws and regulations affecting the enjoyment of fundamental human rights of members of civil society online, including the rights to free association, assembly, expression, privacy, access to information and public participation. The chapter then summarizes relevant international law and standards, many of which are obligatory on the Indian government and constitute binding international commitments, and concludes with some reflections and recommendations.
Ultimately, the report emphasizes the importance of a free, fair, and democratic digital civic space in line with international law and best practices. It evaluates ongoing Indian policies in the four topic areas in light of these standards, and provides suggestions for paths to reform that Indian policymakers can undertake to enable the use of technology in consonance with India’s rich constitutional ethos.
Methodology
This report was researched and written by the Centre for Internet and Society (CIS), with support from the International Center for Not-for-Profit Law (ICNL). Researchers at CIS with specialized knowledge in digital rights undertook an expansive review of a wide range of sources related to this topic, including academic scholarship and legal literature, news articles, government documents, laws, and other publications. In addition to desk research, two teams of CIS researchers travelled across five cities - Jodhpur and Jaipur (state of Rajasthan), Ahmedabad (state of Gujarat), Siliguri (state of West Bengal), and Guwahati (state of Assam). Each of these states has a vibrant civic space, and has seen a number of individuals and organizations engaging with key issues in the digital space over the past months. Researchers interviewed a diverse array of stakeholders, including student activists, public interest lawyers, government officials, party workers, and journalists. While the report refrains from undertaking quantitative or empirical analysis of the fieldwork findings, the qualitative insights and data gathered from these interviews were instrumental in shaping this report.
This report uses the World Bank’s definition of “civil society,” namely: “a wide array of organizations: community groups, non-governmental organizations [NGOs], labour unions, indigenous groups, charitable organizations, faith-based organizations, professional associations, and foundations.” However, to truly understand public participation in a democracy, the report looks beyond organised groups and their workings, and examines how various individuals participate in public processes - including through protests, writing, and engagement through social media. Thus, when considering the impact of digital rights, this report did not limit its investigation only to organised civil society but considered a larger scope to engage with a broader notion of public participation.
Widening the Horizons of Surveillance - Lateral Surveillance Mechanisms
The pandemic has brought to light several fissures in existing patterns of governance, focussing attention on a governmentality that snatches autonomy from the citizen and enmeshes it within existing power structures. Datafication through the phenomenon of lateral surveillance has been pushed across the globe as a way of combating human challenges in the 21st century, including those brought about by the pandemic.
Lateral surveillance is the act of peers ‘watching over’ one another. It differs from typical surveillance in that the power dynamic between the one watching and the one being watched is not structural or hierarchical but more decentralized and balanced. The surveillance takes place between individuals themselves, without the involvement of any organizational entity such as the government. Looking back, the initiatives which encouraged lateral surveillance originated in the form of neighbourhood watch schemes and community policing initiatives in the United States and later spread across the world. These neighbourhood watch schemes enabled individuals to become the ‘eyes and ears’ of law enforcement agencies. With advancements in technology, these neighbourhood watch and community building initiatives have transformed into easily accessible mobile applications, operated by law enforcement agencies or private entities, which mobilize citizens to monitor their surroundings or provide them with information sharing platforms to enable peer to peer or citizen communication. Though they aim to help reduce crime rates, improve quality of life, and build community pride and unity, they also have many negative effects on people. This paper seeks to analyze the societal and legal implications of such technologies and provides recommendations to governments so that citizens’ rights are treated as a bare minimum threshold and not an option in a checklist.
While the essay released in May 2020 focused on the impact of lateral surveillance during COVID-19, this paper focuses largely on the history and evolution of lateral surveillance and its technologisation. This paper also sheds light on the effects of lateral surveillance on society and the challenges it poses to certain fundamental rights guaranteed under the Constitution.
Read the full paper here.
The research was submitted for review in May 2020 and accepted for publication in June 2020.
PDP Bill is coming: WhatsApp Privacy Policy analysis
On January 4, 2021, WhatsApp announced a revised privacy policy. The announcement was made through an in-app notification. Users were asked to agree to the policy by February 8, or else lose access to their accounts. The announcement triggered a backlash, globally and in India, and led to millions of users in India migrating to other messaging platforms. In light of the backlash, WhatsApp announced on January 15 that it would delay rolling out the new policy until May 15, 2021.
It is important to note that many users have also commented that the new explicit terms of mandatory data sharing with Facebook and the extent of metadata collection haven’t changed drastically from WhatsApp’s existing operations. In 2016, WhatsApp had revised its privacy policy to enable data sharing with Facebook. Users were provided 30 days to opt out of such data sharing. However, the option to opt out was not provided to users who joined the service after September 25, 2016 or who failed to exercise the opt-out option. The changes in the policy were challenged in the Delhi High Court. The High Court (i) directed WhatsApp to delete the complete information of users who exercised the option to opt out before September 25, 2016; and (ii) with respect to users who did not exercise the opt-out option, WhatsApp was directed to not share the information of users collected until September 25, 2016 with Facebook. The matter is currently pending before the Supreme Court.
The change in people’s reactions to the data processing since 2016 can partly be attributed to the change in users’ perception of privacy and personal data protection. Conversations around privacy, data protection, and the harms arising out of unauthorized data collection are much more prevalent. What has also irked a large number of users is the difference between the privacy policy applicable to the European Region and the policy applicable to the rest of the world. There is a disparity in the two policies regarding the rights of users in relation to sharing of data with Facebook Companies (Facebook Payments Inc., Facebook Payments International Limited, Onavo, Facebook Technologies LLC, Facebook Technologies Ireland Limited, WhatsApp Inc., WhatsApp Ireland Limited and CrowdTangle) due to the application of the General Data Protection Regulation.
Currently, Indian users have a fundamental right to privacy, and an overarching data protection framework is set to be tabled in Parliament soon. The Personal Data Protection Bill, 2019, being deliberated by the Joint Parliamentary Committee, is expected to provide comprehensive requirements for the authorized collection and management of personal data. The proposed Bill, despite several shortcomings, does offer significantly more protection than the current framework consisting of Section 43A of the Information Technology Act, 2000 and the Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011. This blogpost will examine the viability of WhatsApp’s revised privacy policy if the proposed Bill is enacted in its currently available public version. In subsequent posts we will analyse the effect of the revised privacy policy on the pending litigation.
Privacy notice
Section 7 of the proposed Bill puts an obligation on the data fiduciary to provide a privacy notice, i.e. a document containing granular details of the processing of personal data, to the data principals. The details must be provided in a manner that is clear, concise and easily comprehensible to a reasonable person. The notice should also be provided in multiple languages where necessary and practicable. The importance of a clear and concise policy has been highlighted in the Justice Srikrishna Report on Data Protection; however, there is no guidance from the Indian authorities on what this constitutes. Guidance from the Article 29 working party in the EU suggests that the policy must be presented in a manner that avoids information fatigue. In the digital context, it has been recommended that presenting a policy in a layered format enhances readability. The guidance also suggests that the policy should avoid reliance on complex sentences and abstract terms to convey the details of the processing operations. The revised privacy policy of WhatsApp cannot be termed a clear and concise policy. The purely text-based policy, containing around 3800 words, is not presented in a layered format, resulting in shockingly low readability for the amount and type of personal data collection the policy is attempting to convey. In addition to improper design and structure, the policy contains vague language that gives an average user only a hazy understanding of the extent of data processing and leaves room for different interpretations. The earlier version of the policy also uses similar language and structure to convey details regarding the processing and doesn’t provide transparent details regarding its data sharing with Facebook. Relying on the same format as its earlier versions, without revising it based on global discussions around best practices, seems to be a lost opportunity to remedy the privacy policy. The structure, form and language of the policy will have to be revised if the Bill is enacted in its current form, and the policy will also have to be provided in multiple languages.
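As a rough illustration of the readability point, one can run a standard readability measure such as the Flesch Reading Ease over a policy text. The sketch below implements the published formula with a crude syllable-counting heuristic, so the exact score is only indicative; the sample excerpt is invented, and in practice one would load the full policy text instead.

```python
# Rough readability check using the Flesch Reading Ease formula:
#   206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)
# The syllable counter is a crude vowel-group heuristic, so scores are indicative only.
import re

def count_syllables(word: str) -> int:
    # Count groups of consecutive vowels as a rough proxy for syllables.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (n_words / sentences) - 84.6 * (syllables / n_words)

# Invented policy-style excerpt; in practice the full ~3800-word policy text would be loaded here.
policy_excerpt = (
    "We may share the information we receive from you with our affiliated companies "
    "to operate, provide, improve, understand, customize, support, and market our services."
)
print("words:", len(re.findall(r"[A-Za-z']+", policy_excerpt)))
print("Flesch Reading Ease:", round(flesch_reading_ease(policy_excerpt), 1))  # lower = harder to read
```

Long, unlayered legalistic sentences of this kind score poorly on such measures, which is one simple way of substantiating the "clear and concise" requirement rather than leaving it to impression.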
Bundled consent
According to its policy, WhatsApp relies on the consent of the user for the purpose of providing messaging and communication services, sharing information with third party service providers that help WhatsApp “operate, provide, improve, understand, customize, support, and market” their Services, and sharing information with other Facebook companies for “providing integrations with Facebook Company products” to name a few. It is important to verify if the consent being obtained is valid according to the standard set by the proposed framework.
For consent to be valid under the proposed framework (Section 11(4)), the provision and quality of services should not be made conditional on consenting to the processing of personal data that is not directly necessary for that purpose. In WhatsApp’s case, the primary purpose of processing is to provide messaging and communication services on that particular platform. Neither sharing personal data with third party service providers for better marketing of their services on other platforms nor sharing it with Facebook Companies for better integration of services is incidental to the primary purpose of processing. The bundling of consent forces individuals to either accept the processing of personal data for all of the purposes outlined or lose the services altogether, resulting in invalid consent. An explicit opt-in mechanism for all those processing operations that are not compatible with the primary purpose of processing will have to be provided to Indian users if the Bill is enacted in its current form and consent is relied on as the lawful ground of processing.
Data sharing with Facebook
WhatsApp’s policy on sharing of information with Facebook has garnered a significant amount of attention and has also raised privacy concerns amongst WhatsApp users in non-European countries. This is because the policy applicable to non-European countries no longer provides the user the option to opt out of sharing the information if the user wants to continue using WhatsApp. The policy, under the heading ‘How we work with other Facebook Companies’, states that “As part of the Facebook Companies, WhatsApp receives information from, and shares information (see here) with, the other Facebook Companies. We may use the information we receive from them, and they may use the information we share with them, to help operate, provide, improve, understand, customize, support, and market our Services and their offerings, including the Facebook Company Products.” The information that may be shared by WhatsApp with Facebook Companies includes: (i) the user’s phone number; (ii) transaction data; (iii) service-related information; (iv) information on how the user interacts with others (including businesses); (v) mobile device information; (vi) the user’s IP address; and (vii) any other data covered by the privacy policy. All this information/data will fall within the ambit of personal data in terms of the current version of the Bill, and therefore WhatsApp would have to comply with the obligations placed on it under the Bill in order to be able to share personal data with other data fiduciaries, including Facebook Companies.
As noted earlier, it is pertinent to note that the privacy policy is not the same globally. As per the privacy policy applicable to Europe, WhatsApp states that any information that it shares with Facebook Companies is to be used on WhatsApp’s behalf and in accordance with its instructions, and that any such information cannot be used for the Facebook Companies’ own purposes. This statement is not reflected in the privacy policy applicable to non-European countries. Facebook has stated that “For the avoidance of any doubt, it is still the case that WhatsApp does not share European region WhatsApp user data with Facebook for the purpose of Facebook using this data to improve its products or advertisements”.
Data sharing with other third party service providers
It is also important to note that the sharing of information is not limited to Facebook Companies, but also extends to other third party service providers. However, apart from a vaguely drafted statement stating that WhatsApp works with third party service providers as well as other Facebook Companies to help it “operate, provide, improve, understand, customize, support, and market our Services”, the privacy policy is silent and does not provide any insight or clear information on (a) the nature of these third party entities; or (b) the extent of information shared with such third party entities. Further, even though the policy provides a link to the other Facebook Companies (Facebook Payments Inc., Facebook International Limited, Onavo, CrowdTangle) that it works with, there is again no clarity as to the specific services provided by these companies.
One of the rights provided to a data principal under Section 17(3) and Section 7(1)(g) of the current version of the Bill is the right to be informed about, and to have consent obtained regarding, the individuals or entities with whom personal data may be shared. The data principal also has the right to be informed about, and given access to, the categories of personal data shared with other data fiduciaries. However, the policy as it stands today is silent about both the details of the third party service providers and the categories of personal data that could be shared with them.
Metadata collection and data minimisation
The details on usage and log information in the previous version of the policy were rather vague, as a result of which the extent of data collection was difficult to ascertain. The revised version indicates that WhatsApp’s metadata collection goes further than that of most other popular messaging applications and that the data being collected is linked back to the user and device identity. The principle of data minimisation (Section 6 of the proposed framework) limits the collection of personal data to that which is necessary for the purpose of processing. The compelling reasons that would justify the metadata collection for the primary purpose of messaging and communication are so far unclear. The metadata collection section is similar in the privacy policy for the EU region and, on the face of it, doesn’t look GDPR compliant either. The collection of those categories of personal data that are not necessary for the primary purpose of processing will need to be discontinued if the Bill is enacted in its current form.
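To make the data minimisation principle concrete, the sketch below shows one simple way a service could enforce it at collection time: each declared purpose is mapped to the fields it strictly needs, and anything outside that list is dropped before storage. The purposes and field names here are invented for illustration and do not reflect how WhatsApp or any other service actually structures its data collection.

```python
# Illustrative data-minimisation filter: only fields necessary for a declared
# purpose are retained at collection time. Purposes and field names are invented.
NECESSARY_FIELDS = {
    "messaging": {"phone_number", "device_push_token"},
    "payments": {"phone_number", "transaction_id", "payment_instrument"},
}

def minimise(collected: dict, purpose: str) -> dict:
    # Keep only the fields the declared purpose actually requires.
    allowed = NECESSARY_FIELDS.get(purpose, set())
    return {k: v for k, v in collected.items() if k in allowed}

raw = {
    "phone_number": "+91XXXXXXXXXX",
    "device_push_token": "abc123",
    "ip_address": "203.0.113.7",        # not needed for messaging
    "contact_list_hashes": ["..."],     # not needed for messaging
}
print(minimise(raw, "messaging"))  # keeps only phone_number and device_push_token
```

Under a provision like Section 6, the burden would be on the data fiduciary to justify every field outside the "necessary" set, rather than on the user to object to its collection.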
Data Principal rights
The difference between the protection afforded to Indian resident users and European resident users is highlighted in the rights accorded to the data principal under the two privacy policies. The European privacy policy has a section dedicated to how users can exercise their rights and specifies that users have the right to access, rectify, port, and erase their information, as well as the right to restrict and object to certain processing of their information. These rights are a reflection of the protection afforded to data principals under the GDPR. As per the current version of the Bill, the data principal will have the right to (i) confirmation and access (Section 17); (ii) correction and erasure (Section 18); and (iii) data portability (Section 19). If the current version of the Bill is enacted, WhatsApp will be required to amend its privacy policy as applicable to India and incorporate the rights accorded to the data principal.
Grievance redressal
The European Region privacy policy specifies the entity within WhatsApp responsible for addressing the complaints of users, and also informs users that they have the right to approach the Irish Data Protection Commission or any other competent data protection supervisory authority. None of these provisions are specified in the non-European Region privacy policy. The current version of the PDP Bill places an obligation on the data fiduciary to establish an effective grievance redressal mechanism (Section 32(1)) and to inform the data principal about their right to approach the Data Protection Authority (which is proposed to be established under the PDP Bill) (Section 7(k)). Additional details regarding the same will have to be provided if the Bill is enacted in its current form.
Clarifications from WhatsApp
On January 13, 2021, WhatsApp published a blog stating that the changes to the privacy policy will not affect users who use the platform to message friends and family; the changes will only apply to users who use the platform to communicate with business accounts. As per WhatsApp, messages to business accounts can be shared with third-party service providers, which may include Facebook itself. As per the blog, “But whether you communicate with a business by phone, email, or WhatsApp, it can see what you’re saying and may use that information for its own marketing purposes, which may include advertising on Facebook.” We recognise that the content of messages and calls remains encrypted; the concern, however, arises from the collection and use of ‘metadata.’
WhatsApp’s repeated assurances and clarifications asserting its commitment to data privacy fall short. Its insistence that chats are still end-to-end encrypted and that only interactions with WhatsApp Business will be shared with Facebook indicates ignorance of the different contours of informational privacy. The expectations of privacy that individuals have over their personal data are linked to the extent of control they have over disclosure of such data. The mandatory metadata collection and the lack of opt-out clauses for data sharing for marketing purposes result in a mere illusion of control, created through a façade of a consent-collecting process.
For the most part, the proposed framework should provide the same level of protection offered to EU users of WhatsApp with regard to some of the key contentions highlighted above. However, additional data principal rights, such as the right to object and the right to restrict processing, would give further protection to the data principal in cases of data processing for marketing purposes. The uproar over WhatsApp’s data collection practices has cemented the immediate need for an effective data protection legislation in the country. The final draft of the Bill, with 89 new amendments, is expected to be released soon. Considering the renewed apprehensions regarding unwarranted processing of personal data, we can only hope that the amendments have taken into consideration the feedback and comments provided by relevant stakeholders.
(This post was edited and reviewed by Amber Sinha, Arindrajit Basu and Aman Nair)
Response to Mozilla DNS over HTTPS (DoH) and Trusted Recursive Resolver (TRR) Comment Period
This submission presents a response by the Centre for Internet & Society (CIS) to Mozilla’s DNS over HTTPS (DoH) and Trusted Recursive Resolver (TRR) Comment Period (hereinafter, the “Consultation”) released on November 18, 2020. CIS appreciates Mozilla’s consultations, and is grateful for the opportunity to put forth its views and comments.
Read the response here.
TikTok: It’s time for Biden to make a decision on his digital policy with China
While on the campaign trail, now US president-elect Joe Biden made it clear to voters that he viewed TikTok as “a matter of genuine concern.” The statement came amidst a growing environment of hostility within the American government against the application. At the helm of this hostility was (now former) president Donald Trump’s executive order banning TikTok in the country and his attempts at forcing its parent company ByteDance to restructure the app under American ownership. Now, as the presidency passes hands, it is worth examining how the government got here, just how concerned the Biden administration should be with TikTok, and how its strategy with the app could set the tone for digital relations with China going forward.
The Road so far: The ban and forced sale of TikTok
America’s motivation to ban and force the sale of the application can be explained by two contrasting factors: the cybersecurity risks that TikTok poses, and the country’s ongoing trade war with China. On the security side, TikTok has faced immense scrutiny from governments around the world over the amount of data that the application collects from its users as well as the potential links between ByteDance and the Chinese government. Furthermore, there is a belief that, given the Chinese legislation that compels companies to assist the state on matters of national intelligence, there is little TikTok could do should the Chinese state decide to use it as an instrument of data collection. On the trade side, the TikTok ban represents one of the more landmark blows dealt by the Trump government in its trade war with China. Since the start of Trump’s presidency, the US has levied tariffs on specific Chinese commodities totalling more than $550 billion. China has in response levied its own tariffs on certain American goods, estimated at $185 billion. Beyond these tariffs, the move to ban TikTok extends the trade war by creating clear hurdles for Chinese corporations operating in the US market, and firmly extends Trump’s protectionist trade policies into the digital sphere.
As such, on 6 August 2020, Trump released an executive order banning TikTok (as well as the Chinese messaging and social media app WeChat). The ban has, however, since been indefinitely suspended as part of ongoing litigation on the matter at the federal level.
Shortly after the ban came the attempts at forcing through the sale. While the deal has generally been referred to as ‘the TikTok sale’, it is not actually an outright purchase of the social media platform by an American company (Microsoft attempted such a purchase but was rejected by ByteDance). Rather, the deal would see the establishment of a new US-based subsidiary called TikTok Global that would be partly owned (20%) by Oracle and Walmart, with Oracle becoming a trusted technology provider in order to ensure that US users’ data remains within the country. The agreement stipulates that four out of five seats on the board of this new entity would be held by US citizens, and that the company would go public as well. The current agreement would still see ByteDance retain ownership of the algorithms used by TikTok, which is in line with restrictions from the Chinese government preventing the sale of the algorithm to a foreign owner without a state-granted license.
How should the Biden administration handle this situation?
Dealing with the TikTok question must be one of the Biden administration’s topmost priorities. The most obvious question it faces is whether or not to reverse the ban and continue to push through the sale between ByteDance and Oracle.
The case for enforcing the ban until a sale to American owners seems straightforward enough. The cybersecurity concerns surrounding ByteDance’s proximity to the Chinese state and the influence of Chinese legislation are reasonable, and any data gained from the application in the hands of a hostile state could be potentially harmful. This threat could be reduced depending on the role played by Oracle as a trusted technology partner. However, without details of what exactly the functions of a ‘trusted technology partner’ entail, it is impossible to say this with any great certainty. At the same time, there is a slight sense of irony in a China-based digital company protesting against another country’s protectionist stance towards the internet.
Nonetheless, these benefits are greatly exaggerated, and allowing TikTok to return without requiring a sale could prove more beneficial in the long term. Not only would the app’s return be welcomed by its immense audience (an estimated 100 million US users), it would also be a clear demonstration of America’s commitment to a less fragmented internet and a more open digital economy. Furthermore, revoking the ban would also provide the opportunity to reassess and reformulate the US’s economic and political strategy with regard to Chinese technology.
On the economic side, a retraction of the ban could signal the beginning of the end of the US-China trade war. Chinese investors are sure to see the shift from a radical Republican president to a centrist Democrat as the perfect opportunity to increase foreign investment, which had been steadily declining. Such investment could prove significantly more valuable to the United States in a post-COVID-19 world than it would have been even in 2019. It is not unimaginable that Biden would look to maximise this opportunity to boost the economy.
On the political side, the government has to evaluate the success of sanctions levied against Chinese technology and whether that approach of blanket bans will translate effectively to the digital sphere. Not only have the US’s sanctions against certain Chinese technologies proved unsuccessful, tools such as VPNs that can negate a ban make this strategy even less effective in the digital space.
The largest hurdle to revoking the ban would be the genuine cybersecurity concerns with a Chinese corporation having access to American citizens’ data. However, dealing with these concerns through a simple ban of the application would only address this one instance of excessive surveillance and data collection by a foreign app. Rather, any solution must look to fix the issue at its root: the need for a more cohesive, detailed, and overarching national data protection and cybersecurity policy. Such a policy could place clear limitations on data collection, stipulate data localisation requirements for sensitive information, and outline numerous other means of reducing the threat involved in allowing applications from states such as China to operate in the US.
Ultimately, Biden will be confronted with the reality of this situation the moment he enters office. The decision he makes on TikTok will set the tone for his term and for his government’s relationship with China. Whatever he decides to do, he needs to do it as soon as possible. The clock is ticking.
The Boss Will See You Now - The Growth of Workplace Surveillance in India, is Data Protection Legislation the Answer?
The use of pervasive technologies to monitor employees was already picking up pace in India; the pandemic has accelerated it. The pandemic has changed the way we work, from permanent work-from-home mandates for those who can work remotely to heightened social distancing norms for office-goers. A recent survey of 12,000 employees across the US, Germany, and India revealed that as of June 2020, some companies were forced to move up to 40 percent of their employees to remote working. Companies big and small now need to look at ways to restore trust in the product and ensure the safety of the employee, while also ensuring that productivity picks up pace post-lockdown. The safety standards mandated by the government include adequate social distancing, regular temperature checks, mandatory use of masks, and collection of information for tracing. Some private offices, as well as most government offices, have also mandated the compulsory downloading and verification of the status of the employee on the Aarogya Setu mobile application. All these measures and more need to be carried out daily and with the least human intervention. This is where technologies such as facial recognition, increased use of CCTVs, and thermal screening come into play. In addition, for employees who are working remotely, there are a number of software tools and technologies being used to track them during, and perhaps even after, working hours.
Employee Monitoring Technology in India
When companies collect data from consumers, they are mandated to reveal whether they share this data with third parties or government agencies. The consumer also has the right and the option not to choose a particular company, or to withdraw their consent. In the case of employees, however, the data collected is more continuous, can be identified back to them, and can have an immediate and direct impact on their life, such as hiring, firing, or promotions. In this light, the option to withdraw consent leaves employees with only two choices: either consent to surveillance or lose their jobs.
The use of employee monitoring technologies such as facial recognition is not new in India. While there are a number of reports on how factories are being made safe, the people who bear the brunt of these measures are not consulted. In 2018, Tech Mahindra announced the rollout of facial recognition technology to record not just the attendance of their employees but also the “mood of the workforce”. In an interview regarding the implementation of such measures, Tech Mahindra’s spokesperson stated that the employee has the choice to consent to the use of such a system. However, in a similar interview, the Tech Mahindra group also stated that soon recording attendance through facial recognition would be mandatory.
Madurai Corporation has also introduced facial detection to record the attendance of its sanitation workers. For some, the surveillance is much worse and is not limited to the confines of the workplace: a report revealed that Panchkula’s Municipal Corporation made its employees wear devices called “Human Efficiency Tracker” to monitor their location as well as to see and hear the sanitation workers. The report also stated that similar employee surveillance systems were being used in Mysore, Lucknow, Indore, Thane, Navi Mumbai, Nagpur, and Chandigarh. Closer home, the building security app Mygate allows residents of an apartment complex to rate and review their domestic help, and can even prevent their access to the building once they are fired. However, the ratings do not work both ways: the domestic help cannot rate the employer, nor do they have a chance to question the actions and decisions taken about them.
The monitoring, as we can see, is not limited to the confines of the physical workspace. A number of remote employee monitoring tools have been in use for a while. These include software to monitor the online activity of employees: email and social media screeners, cameras that can record the amount of time spent on a webpage, laptops that take timed photos of the employee, and even technology that records keystrokes on the keyboard. A simple online search will reveal the number of companies that provide employee monitoring services. For example, XNSPY allows the employer to monitor every activity of the employee on their official devices, from call records to emails, contacts, photos and videos, location, and even WhatsApp messages. According to the website, this software, once installed, runs invisibly in the background, meaning that the employee might not even be aware of its presence. Similarly, Bangalore-based EmpMonitor takes screenshots from the employee’s laptop at intervals determined by the employer, along with the provision to obtain the browsing history or the top apps used by the employee. EmpMonitor also states in its FAQ that the employer can capture all keystrokes by the employee, including passwords. Like XNSPY, EmpMonitor also claims that it runs invisibly in the background, and that “They also couldn’t stop being monitored” (sic).
As the sudden requirement to work from home has resulted in employees working on their personal devices, a mandatory requirement to download monitoring software can create grave privacy issues. Another important issue, highlighted in the report on Panchkula Municipal Corporation’s sanitation workers, was the fear that supervisors would listen to their private conversations when they took the device home at the end of the day for charging. A study of women working in garment factories revealed that they were given no notice or explanation for the CCTV cameras being installed in their factories. These measures are also likely to stay even when the pandemic is over.
These are just a few examples of the growing interest in using new technologies to know more about the employee, not just what they do in the office but also outside working hours. The few examples mentioned above expose how employees in “blue-collar jobs” - domestic help, delivery personnel, factory workers, sanitation workers - face greater and more pervasive surveillance, without so much as an intimation. Employers that are already using pervasive technologies to monitor employees often justify them with claims about employee satisfaction. However, in a system that is based on a power imbalance, in addition to the looming fear of loss of income and unemployment, there is very little that an employee can do to push back.
COVID and New Office Procedures: Here to Stay?
The coronavirus has now added extra dimensions to the existing features of employee monitoring, including ways to check the temperature of a person in a crowd as well as to recognise people even through masks. Systems that combine facial recognition, temperature screening, and mask enforcement have seen growing demand, especially in factories and large offices.
Mygate has also started providing temperature checks and mask compliance checks. In pursuance of this, employers are frequently notified about employees’ body temperature as well as whether the worker has worn a mask or not. In June 2020, the Ministry of Health and Family Welfare released a new set of guidelines for resuming offices. The Standard Operating Procedure (SOP) made it mandatory for people working in public services who were also classified as essential workers to use the Aarogya Setu application. Government offices in several places across India, such as Srinagar and Puducherry, were also mandated to install and use the app. The use of the app was not limited to the public sector. Around April 2020, online food delivery companies such as Grofers, Swiggy, and Zomato mandated their delivery agents to use the app. Their apps also displayed the temperature readings of the agents, in addition to those of the people involved in preparing the food.
Although the mandatory nature of the app has been removed and most companies no longer require their employees to download it, new instances of the enforcement of the app in the public sector continue to emerge. For example, in January 2021, the Indian Railways resumed its e-catering service “RailRestro” while imposing the mandatory use of the Aarogya Setu app. The guidelines of the e-catering service in the Indian Railways also require mandatory thermal scanning of delivery agents and restaurant staff. It is anticipated that the use of the app might come back to prominence during the vaccination drive as well.
The Defence Research and Development Organisation (DRDO) is also looking at ways to record the attendance of employees by developing “artificial intelligence-based face recognition systems”, which it plans to commercialise. Similarly, mobility apps such as Uber, in the process of resuming operations and as part of their safety measures, require drivers to submit selfies to Uber’s Real-Time ID Check system to verify that they are wearing masks; only then can the ride proceed.
The pushback against these invasive apps is now slowly gaining speed. For example, the Indian Federation of App-Based Transport Workers (hereinafter “IFAT”), in a press statement, highlighted the issues with the use of the Aarogya Setu app. In its press note, the Federation highlighted the concerns with the use of the app, most importantly the possibility of misuse of the data and continued surveillance through the app. The statement also emphasises the absence of a personal data protection bill, and the fear that the data collected through the app could be retained and processed in the future.
The Privacy Harms of surveillance of employees
The note by the IFAT on the use of Aarogya Setu best emphasises the uneasiness that comes with employee surveillance and the collection and processing of employee data. The note also sheds light on the issues that could arise from the use of monitoring apps (in this case, Aarogya Setu) on employees, which include decisions about retaining or removing an employee based on the health data in the app, decisions based on the app to remove insurance cover, and the possibility of the app being consulted to make decisions on payment and compensation. These concerns and more can be attributed to the plethora of employee monitoring apps and technologies.
When we look at employee surveillance and the different forms it can take, it becomes clear that the issue is one of privacy as well as of data protection. Considering its effects on privacy, or the right to be let alone, a constant fear of being watched and recorded can have a detrimental impact on a person, as well as create a feeling that they are not trusted. As seen in the study of garment manufacturers - and this is the case with most companies - employees are not made aware that they are being monitored, something which the monitoring companies sometimes advertise as a feature. The decisions made on the basis of these technologies are also not shared with the employees. As a result, they are often unaware of what the technology records and what decisions are made based on the time they come to work or the number of breaks they take.
Apart from the privacy harms and the feeling of being watched, the data collected by employers poses a data protection issue. The collection of an employee’s data begins at the time of job application, when CVs are vetted. However, there is no clarity on where the data collected through the application process is stored, or if and when it is deleted. The terms of employment and contracts such as non-disclosure agreements are necessary, but they can also restrict the rights of employees over their data.
Existing Frameworks for Protection
Although employee surveillance cannot be entirely avoided, there is a need to ensure that employees are not subjected to increased surveillance in the guise of increased productivity. Additionally, just as the existing data protection provisions in India allow companies to rely on vague terms and an unclear notice-and-choice-based framework to process consumer data, the absence of clear provisions for the processing of employee data puts employees at an even greater disadvantage.
Indian labour laws do not contain provisions that deal with employee monitoring and surveillance. Hence, the provisions to be consulted on data protection and privacy are the Information Technology (Amendment) Act, 2008 (hereinafter, “IT Act”) and the Information Technology (Reasonable security practices and procedures and sensitive personal data or information) Rules, 2011 (hereinafter, “IT Rules”). Section 72A of the IT Act protects personal information from unlawful disclosure in breach of contract. In addition, Section 43A of the IT Act empowers the Central Government to stipulate the IT Rules, which seek to provide individuals certain rights with regard to their information. This section also provides for the protection of sensitive personal data or information (hereinafter, “SPDI”).
The IT Rules seek to distinguish between personal information and SPDI. According to Rule 2(1)(i), personal information is defined as information that directly or indirectly relates to a person and, “in combination with other information available or likely to be available with a body corporate, is capable of identifying such person”. In comparison, Rule 3 fleshes out the composition of SPDI; examples of sensitive information include passwords, medical history, biometric information, sexual orientation, bank account details, physiological or mental health conditions, etc.
Rule 5 of the IT Rules states that while collecting SPDI, the data collector should seek consent in writing and must ensure that the collection is based on the principles of legality and necessity. Rule 5 also states that the individual whose data is being collected should be made aware of the reason behind the collection of information and who will have access to it. If an agency is involved in collecting and retaining the information pertaining to individuals, details of such agencies also need to be disclosed. The data collector must also practise purpose limitation, as stipulated under Rule 5, and is hence precluded from retaining the information indefinitely.
It is imperative to note that Rule 8, read with Section 43A of the IT Act, places civil liability on corporations in the event of mishandling of SPDI. This liability involves compensating the individuals whose data has been mishandled. The aggrieved employee can approach an adjudicating officer appointed under the IT Act where the compensation claimed is up to INR 5 crore. However, if the compensation claimed exceeds INR 5 crore, the appropriate civil courts can be approached.
Although the IT Act and the SPDI Rules provide checks on the body corporate and means of recourse for non-compliance, several lacunae still exist. Firstly, the notice and consent provisions do not require companies to ensure that the terms are laid out in such a manner that the person consenting can fully understand them. Additionally, the absence of a requirement for renewed consent means that the original consent can be used to justify further data collection and processing, at times with the use of new devices. For example, the consent given for CCTV surveillance could be construed as consent for setting up facial or gait recognition in the future.
Light at the end of the tunnel? - The Personal Data Protection Bill
With regard to the current version of the draft Personal Data Protection Bill, 2019 (hereinafter, “Bill”), Section 13 provides the employer with leeway to process employee data other than sensitive personal data without consent on two grounds: when consent is not appropriate, or when obtaining consent would involve disproportionate effort on the part of the employer. Furthermore, personal data can only be collected without consent for four purposes, namely, recruitment or termination, provision of any service or benefit, attendance, and assessing performance. These purposes comprehensively cover almost all activities that workers may potentially undertake, or be subjected to, as part of their work life. Nonetheless, in this respect the current version of the Bill is better than the 2018 version, which did not exclude sensitive personal data from non-consensual processing.
The Bill labels employees as “data principals” and provides them with a plethora of rights. These include the right to confirmation and access, portability of data, and withdrawal of consent. However, the present and earlier versions of the Bill fail to define “employee”, “employer”, or “employment” with respect to the provisions of the Bill. This, in turn, creates ambiguity as to whom these provisions address. There is no uniform labour law in India, and every legislation, be it the Industrial Employment (Standing Orders) Act or the Employee’s Compensation Act, provides different conditions to qualify as an employee, and sometimes only addresses workers or “workmen”. Hence, the lack of a clear indication as to whom this provision applies creates an added layer of ambiguity, the effects of which would be borne by the employee.
However, the framing of employers as “data fiduciaries” means that they must ensure that the collection and processing of data is in line with the principles of collection limitation and purpose limitation, that the data is accurate and stored securely, and that it is retained only for the period needed. Furthermore, the employer is required to provide notice to employees about their rights to confirmation, access, correction, and portability with respect to their data. The consent exception only extends to the collection of personal data and does not extend to the collection of sensitive personal data by employers. It is important to note that much of the data collected by employers, especially through new technologies, is sensitive personal data - including financial data and, most importantly, health data and biometrics. According to the Bill, sensitive personal data requires additional safeguards such as explicit consent.
The Bill also adds another category of data fiduciaries - significant data fiduciaries - based on factors such as the volume of data processed, the sensitivity of that data, the risk of harm, and the use of new technologies. The Bill also requires that where these data fiduciaries undertake processing involving new technologies, or use sensitive data such as genetic or biometric data, such processing should only be done after a data protection impact assessment. However, until the PDP Bill becomes law, all these provisions and safeguards cannot be used against the current and rapid adoption of surveillance technologies in the workplace.
Conclusion
While we do not know what the provisions relating to employee data will be in the final version of the PDP Bill, policies are already under way to make it easier to share employee data. The Ministry of Skill Development and Entrepreneurship, in its report on Adopting e-Credentialing in the Skilling Ecosystem, describes how digital skill credentials could be used to allow employers to verify the credentials of applicants. The policy itself states that the anonymised data from these credentials could be used for data analytics and to identify the most sought-after skills. Interestingly, a study conducted by Rocher et al. revealed that even datasets that have gone through the de-identification process, or anonymised datasets, could in fact be re-identified with 99.98% accuracy. Although the PDP Bill in its current version provides some rights to employees over their data, it is yet to be made into an Act.
In the current situation, one can only hope that the steps taken towards more and more data collection and surveillance of employees during the pandemic are not continued after the pandemic ends. While the fear of mission creep and function creep by the government through contact tracing apps looms, the situation is even more dire in workplaces, where employees are already vulnerable due to the erosion of labour laws, pay cuts, and the looming threat of unemployment.
The push towards new ways of data collection should ideally happen only when there is a means for the individual to question or seek clarification, and where they have a degree of choice and autonomy. Hence, it is imperative that these pervasive technologies are implemented keeping a “rights-friendly” approach in mind, as observed in other countries. Employers and workplaces should look at ways to ensure the safety of employees and to build trust in them, instead of using technology as a placebo. For example, instead of monitoring whether employees turn up to work sick or with a fever (through measures such as temperature checks and health monitoring), would it not be easier to let the person rest and recover at home? Or, if employees are not complying with the mask policy, to provide them with washable masks and educate them about the health concerns involved, instead of resorting to facial recognition?
____________________________________________________________
Edited by Arindrajit Basu
With inputs from Shweta Reddy, Sumandro Chattapadhyay, and Shruti Trikanad
The Government needs to make sure our emails don't destroy the environment
Ask people to name the first things they think of when you say climate change and you can expect a few standard answers. Polar bears on shrinking ice caps, cities suffocated by car exhaust fumes, and mass deforestation are all sure to be somewhere on the list of responses. What you probably won’t find, however, is people discussing their social media. Or their email. Or any piece of the immeasurable amount of data that we produce on the internet on a daily basis. Yet all of this data is far from green, and is substantially increasing our carbon footprint. So the question arises: how is our data contributing to climate change, and what can policymakers do about it?
There is a tendency to focus on the turnover of hardware when discussing the climate impact of digital technology. And while this is an important element of the sector’s impact, it is essential that policymakers also recognise the impact of the intangible elements of the digital ecosystem - such as data. Every piece of data that is created or transmitted across the internet has an environmental cost: the energy required (and, by extension, the fossil fuels used) to operate the technology that hosts and transports the data.
Admittedly, the environmental impact and cost of one person checking their Instagram or even reading this article is quite low. But aggregated across the estimated number of internet users in the world, digital technologies are estimated to be responsible for 1.7 billion tonnes of greenhouse gases - about 4% of global greenhouse gas production, and roughly as much as is produced by the global airline industry.
Another key element of data’s environmental impact is the establishment and operation of data centres. Data centres are facilities that house computing and ICT equipment. These centres are critical infrastructure for the functioning of the internet and are used to store an immense volume of data. As the number of data centres has exploded over the last decade, they have come to account for 1% of all global greenhouse gas production on their own, and are expected to contribute 14% of all emissions by 2040.
India’s growing data centre problem
As the number of Internet users in India grows at an exponential rate, it is imperative that the government take a proactive approach to creating sustainable infrastructure that can meet the ICT demands of the population.
Recently, the Ministry of Electronics and Information Technology released its draft policy on data centres. The policy outlines the government’s aim of establishing a large number of domestic data centres that will be used to store all data created within the country. The government’s policy envisions India as one of the world leaders in data centre establishment and operation - on a par with countries such as Singapore, which currently hold that mantle.
However, despite presenting this grand vision, the policy provides no specifics on how it plans to cope with the environmental stress that these new centres would bring. The policy states that ensuring uninterrupted power to these centres will be a key priority of the government - a burden that would be far beyond the capacity of the country’s current renewable energy sources. To take the example of Singapore, almost 7% of all electricity consumption in the country comes from data centres. Proportionate consumption by Indian data centres would realistically only be possible through an expanded use of fossil fuel generated electricity.
To give the policy some credit, it does mention ‘encouraging’ the use of renewable energy for data centres but fails to mention any specific schemes or measures to ensure renewable energy investment and growth is enough to keep up with growing data centre energy demands.
What can policy makers do?
The question arises, how can policy makers make data centres more sustainable? Is there any way of reducing the energy consumption of these data centres?
In short, not really, at least right now. It has been estimated that 40% of the total energy consumption of data centres is used for cooling. And while there is the possibility that building these data centres in cooler environments would reduce these costs, converting Shimla, Coorg, Ooty and other cool-weathered hill stations into monuments of data centre infrastructure does not seem particularly practical. So, short of investing heavily in research and development for the future and conforming to global standards of data centre operation, there is not much the government can do now beyond focusing on the source of the energy used by these centres.
Keeping this in mind, the first step in evolving India’s data infrastructure has to be investing in and developing clear schemes for promoting renewable energy in the country. While India has seen positive growth in renewable energy infrastructure, it would require substantial private and public investment in order to meet its target of 450 GW of renewable energy by 2030. Widespread development of data centres would only further stress India’s energy needs and would therefore require a commensurate increase in the amount of renewable energy available. As such, it is imperative that the state not stick to vague statements of ‘encouraging renewable energy’ or ‘collaborating between ministries’, and instead adopt a revised policy for developing renewable energy for digital infrastructure.
Such a step would ensure the sustainability of the country’s digital infrastructure, and ensure that every Indian has access to both clean air and their email.
Notes From a Foreign Field: The European Court of Human Rights on Russia’s Website Blocking
This blogpost was authored by Gurshabad Grover and Anna Liz Thomas. It was first published at the Indian Constitutional Law and Philosophy Blog on February 5, 2021, and has been reproduced here with permission.
From PUBG to TikTok, online services are regularly blocked in India under an opaque censorship regime flowing from section 69A of the Information Technology (IT) Act. Russia happens to have a very similar online content blocking regime, parts and processes of which were recently challenged in the European Court of Human Rights (‘the Court’). This blogpost summarises the human rights principles applied by the Court to website blocking, and discusses how they can be instructive to petitions in the Delhi High Court that challenge arbitrary censorship in India.
Challenges to Russia’s Website Blocking Practices
On 23 June 2020, the Court delivered four judgements on the implementation of Russia’s Information Act, under which content on the internet can be deemed illegal and taken down or blocked. Under some of these provisions, a court order is not required, and the government can send a blocking request directly to Roskomnadzor, Russia’s telecom service regulator. Roskomnadzor, in turn, requests internet service providers (ISPs) to block access to the webpage or websites. Roskomnadzor also notifies the website owner within 24 hours. Under the law, once the website owner notifies the Roskomnadzor that the illegal content has been removed from the website, the Roskomnadzor verifies the same and informs ISPs that access to the website may be restored for users.
In the case of Vladimir Kharitonov, the complainant’s website had been blocked as a result of a blocking order against another website, which shared the same IP address as that of the complainant. In Engels, the applicant’s website had been ordered by a court to be blocked for having provided information about online censorship circumvention tools, despite the fact that such information was not unlawful under any Russian law. OOO Flavius concerned three online media outlets that had their entire websites blocked on the grounds that some of their webpages may have featured unlawful content. Similarly, in the case of Bulgakov, the implementation of a blocking order targeting extremist content (one particular pamphlet) had the effect of blocking access to the applicant’s entire website. In both the cases of Engels and Bulgakov, where court proceedings had taken place, the proceedings had been concluded inter se the Prosecutor General and server providers, without the involvement of the website owner. In all four cases, appeals to higher Russian courts had been summarily dismissed. Even in those cases where website owners had taken down the offending content, their websites had not been restored.
The Court assessed the law and its application on the basis of a three-part test on whether the censorship is (a) prescribed by law (including foreseeability and accessibility aspects of the law), (b) necessary (and proportionate) in a democratic society, and (c) pursuing a legitimate aim.
Based on the application of these tests, the Court ruled against the Russian authorities in all four cases. The Court also held that the wholesale blocking of entire websites was an extreme measure tantamount to banning a newspaper or a television station, which has the collateral effect of interfering with lawful content. According to the Court, blocking entire websites can thus amount to prior restraint, which is only justified in exceptional circumstances.
The Court further held that procedural safeguards were required under domestic law in the context of online content blocking, such as the government authorities: (a) conducting an impact assessment prior to the implementation of blocking measures; (b) providing advance notice to website owners, and their involvement in blocking proceedings; (c) providing interested parties with the opportunity to remove illegal content or apply for judicial review; and (d) requiring public authorities to justify the necessity and proportionality of blocking, provide reasons as to why less intrusive means could not be employed and communicate the blocking request to the owner of the targeted website.
The Court also referenced an earlier judgment it had issued in the case of Ahmet Yildirim vs. Turkey, acknowledging that content creators are not the only ones affected; website blocking interferes with the public’s right to receive information.
The Court also held that the participation of the ISP as a designated defendant was not enough in the case of court proceedings concerning blocking requests, because the ISP has no vested interest in the proceedings. Therefore, in the absence of a targeted website’s owner, blocking proceedings in court would lose their adversarial nature, and would not provide a forum for interested parties to be heard.
Implications for India
The online censorship regime in India is similar to Russia’s in terms of legal procedure, but perhaps worse when it comes to the architecture of the law’s implementation. Note that for this discussion, we will restrict ourselves to government-directed blocking and not consider court orders for content takedown (the latter may also cover intellectual property infringement and defamatory content).
Section 69A of the Information Technology (IT) Act permits the Central Government to order intermediaries, including ISPs, to block online content on several grounds when it thinks it is “necessary or expedient” to do so. Amongst others, these grounds include national security, public order and prevention of cognisable offences.
In 2009, the Information Technology (Procedure and Safeguards for Blocking for Access of Information by Public) Rules, 2009 (‘blocking rules’) were issued under the Act. They lay out an entirely executive-driven process: a committee (consisting entirely of secretaries from various Ministries) examines blocking requests from various government departments, and finally orders intermediaries to block such content.
As per Rule 8, the chairperson of this committee is required to “make all reasonable efforts to identify the person or intermediary who has hosted the information” (emphasis ours) and send them a notice and give them an opportunity for a hearing. A plain reading suggests that the content creator may then not be involved in the blocking proceedings at all. Even this safeguard can be circumvented in “emergency” situations as described in Rule 9, under which blocking orders can be issued immediately. The rules ask for such orders to be examined by the committee within the next two days, when it can decide to continue or rescind the block.
The rules also task a separate committee, appointed under the Telegraph Act, to meet every two months to review all blocking orders. Pertinently, only ministerial secretaries comprise that committee as well.
These are the limited safeguards prescribed in the rules. Public accountability in the law is further severely limited by a requirement of strict confidentiality (Rule 16) of blocking orders. With no judicial, parliamentary or public oversight, it is easy to see how online censorship in India operates in complete secrecy, making it susceptible to wide abuse.
When the constitutionality of the provision and the blocking rules was challenged in Shreya Singhal v. Union of India, the Supreme Court was satisfied with these minimal safeguards. However, it saved the rules only for two reasons. First, it noted that an opportunity of a hearing is given “to the originator and intermediary” (emphasis ours: notice how this is different from the ‘or’ in the blocking rules). Second, it specifically noted that the law required reasoned orders that could be challenged through writ petitions.
On this blog, Gautam Bhatia has earlier argued that the judgment should then be read as obligating the government to mandatorily notify the content creator before issuing blocking orders. Unfortunately, the reality of the implementation of the law has not lived up to this optimism. While intermediaries (ISPs, when it comes to website blocking) may be getting a chance to respond, content creators are almost never given a hearing. As we saw in the European Court’s judgment, ISPs do not have any incentive to challenge the government’s directions.
Additionally, although the law states that “reasons [for blocking content are] to be recorded in writing”, leaked blocking orders suggest that even ISPs are not given this information. Apart from the opacity around the rationale for blocking, RTI requests to uncover even the list of blocked websites have been repeatedly rejected (for comparison, Roskomnadzor at least maintains a public registry of websites blocked in Russia). This lack of transparency and fair proceedings also means that entire websites may be getting blocked when there are only specific web pages on that website that serve content related to unlawful acts.
When it comes to the technical methods of blocking, the rules are silent, leaving this decision to the ISPs. While a recent study by the Centre for Internet and Society showed that popular ISPs are using methods that target specific websites, there are some recent reports that suggest ISPs may be blocking IP addresses too. The latter can have the effect of blocking access to other websites that are hosted on the same address.
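To make the collateral-damage point concrete, the short Python sketch below resolves a handful of hostnames and groups them by IP address; any sites that turn out to share an address would all become unreachable if an ISP blocklisted that single IP. This is only an illustrative sketch: the domain names are placeholders, and the output depends entirely on how those sites happen to be hosted when the script is run.

```python
# Minimal sketch: shows how several unrelated websites can resolve to the
# same IP address, which is why blocking by IP can cause collateral damage.
# The domain names below are placeholders; results depend entirely on how
# these sites happen to be hosted at the time the script is run.
import socket
from collections import defaultdict

domains = [
    "example.com",
    "example.net",
    "example.org",
]

ip_to_domains = defaultdict(list)
for domain in domains:
    try:
        # gethostbyname_ex returns (hostname, aliases, list of IP addresses)
        _, _, addresses = socket.gethostbyname_ex(domain)
        for ip in addresses:
            ip_to_domains[ip].append(domain)
    except socket.gaierror:
        print(f"could not resolve {domain}")

for ip, hosted in ip_to_domains.items():
    if len(hosted) > 1:
        # Blocking this single IP would make all of these sites unreachable.
        print(f"{ip} is shared by: {', '.join(hosted)}")
```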
There are two challenges to the rules in the Delhi High Court, serving as opportunities for reform of website blocking and content takedown in India. The first was filed in December 2019 by Tanul Thakur, whose website DowryCalculator.com (a satirical take on the practice of dowry) was blocked without any notice or hearing. Thakur was never contacted by the committee responsible for passing blocking orders, despite the fact that he has publicly claimed ownership of the website multiple times and has been interviewed by the media about it. When Thakur filed an RTI asking why DowryCalculator.com was blocked, the Ministry of Electronics cited the confidentiality rule to refuse sharing such information!
This month, an American company providing mobile notification services, One Signal Inc., has alleged that ISPs are blocking its IP address, and has petitioned the court to set aside any government order to that effect because it did not receive a hearing. Interestingly, the IP address belongs to a popular hosting service provider, which serves multiple websites. Considering this fact and the lack of transparency in blocking orders, one may question whether One Signal was the intended target at all! The European Court’s judgment in Vladimir Kharitonov is quite relevant here: ISPs should not be blocking IP addresses that are shared amongst multiple websites, because such a measure can cause collateral damage and make other legitimate expression inaccessible.
Given the broad similarities between the Indian and Russian website blocking regimes, the four judgements by the European Court of Human Rights will be instructive to the Delhi High Court. Note that section 69A is used for content takedown in general (i.e. censoring posts on Twitter, not just blocking websites): the right to a hearing must extend to all such content creators. The principles applied by the European Court can thus provide a more rights-respecting foundation for content blocking in India, for the judiciary to uphold or for the legislature to amend.
CIS Comments on the National Strategy on Blockchain
This submission is a response by researchers at CIS to the report “National Strategy on Blockchain” prepared by the Ministry of Electronics and Information Technology (MEITY), Government of India.
We have put forward the following comments based on our analysis of the report.
General Comments on the National Strategy
- There are currently a number of reports and policies on blockchain use across departments, ministries and even states. The absence of a harmonised blockchain policy across all departments and institutions of government must be fixed.
- There are inherent dangers with viewing blockchain as a silver bullet solution.
- Informational concerns with blockchain are existent and policies must be designed to reflect these concerns and minimise their occurrences.

Section Specific Comments
- Section 6.1 - There is a need for greater decentralisation and a shift away from a solely government operated blockchain
- Section 6.2:
  - The legality of blockchain also faces the hurdle of smart contracts
  - The RBI decision to halt the use of cryptocurrencies was struck down by the Supreme Court
  - The right to be forgotten exists as an extension of the right to privacy as well
- Section 7 - There is a need for greater detail and granularity in the report’s analysis and in the suggestions and recommendations that it makes.
CIS Comments on the Revised Non-Personal Data Governance Framework Report
This submission presents a response by researchers at the Centre for Internet and Society, India (CIS) to the second version of the Report on Non-Personal Data Governance Framework prepared by the Committee of Experts (hereafter “Report”). CIS had also provided inputs to the draft version of the Report published in July 2020.
Executive Summary
It is beyond doubt that there must exist a regulatory framework that governs the rights accorded to individuals, businesses and the state in the context of the use of non-personal data. However, based on the recommendations in the Report, we have found that the following areas require greater clarity and deliberation before being enacted.
General Comments
1. Examining the economic considerations underpinning the non-personal data governance framework
a. Open Data access is not enough to offset network effects and existing power imbalances in key digital sectors
b. Increased Data collection leads to Data Appropriation
2. Addressing the societal concerns that arise with sharing Non-Personal Data
a. De-anonymization and harm linked with sharing Non Personal Data
b. Sharing non-personal data could result in a culture of data maximisation
Section Specific Comments
The full version of the submission can be found at: http://www.cis-india.org/internet-governance/cis-comments-revised-npd-report
New intermediary guidelines: The good and the bad
This article originally appeared in the Down to Earth magazine. Reposted with permission.
-------
The Government of India has notified the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. These rules operate in supersession of the existing intermediary liability rules under the Information Technology (IT) Act, made back in 2011.
These IL rules would have a significant impact on our relationships with internet ‘intermediaries’, i.e. gatekeepers and gateways to the internet, including social media platforms and communication and messaging channels.
The rules also make a bid to include entities that have not traditionally been considered ‘intermediaries’ within the law, including curated-content platforms such as Netflix and Amazon Prime as well as digital news publications.
These rules are a significant step up from the draft version of the amendments floated by the Union government two years ago; in this period, the relationship between governments around the world and major intermediaries has changed significantly.
The insistence of these entities in the past that they are not ‘arbiters of truth’, for instance, has not always held water in their own decision-making.
Both Twitter and Facebook, for instance, have locked former United States president Donald Trump out of their platforms. Twitter has also resisted fully complying with government censorship requests in India, spilling into an interesting policy tussle between the two entities. It is in the context of these changes, therefore, that we must consider the new rules.
What changed for the good?
One of the immediate standouts of these rules is the more granular way in which they aim to approach the problem of intermediary regulation. The previous draft — and in general the entirety of the law — had continued to treat ‘intermediaries’ as a monolithic entity, entirely definable by section 2(w) of the IT Act, which in turn derived much of its legal language from the EU E-commerce Directive of 2000.
Intermediaries in the directive were treated more like ‘simple conduits’ or dumb, passive carriers who did not play any active role in the content. While that might have been the truth of the internet when these laws and rules were first enacted, the internet today looks much different.
Not only is there a diversification of services offered by these intermediaries, there is also a significant issue of scale, wielded by a few select players, either through centralisation or through the sheer size of their user bases. A broad, general mandate would, therefore, miss out on many of these nuances, leading to imperfect regulatory outcomes.
The new rules, therefore, envisage three types of entities:
- There are the ‘intermediaries’ within the traditional, section 2(w) meaning of the IT Act. This would be the broad umbrella term for all entities that would fall within the ambit of the rules.
- There are the ‘social media intermediaries’ (SMI), as entities, which enable online interaction between two or more users.
- The rules identify ‘significant social media intermediaries’ (SSMI), meaning SMIs whose user numbers exceed thresholds notified by the Central Government.
The levels of obligations vary based on this hierarchy of classification. For instance, an SSMI would be held to a much higher standard of transparency and accountability towards its users. It would have to fulfil this by publishing six-monthly transparency reports, outlining how it dealt with requests for content removal, how it deployed automated tools to filter content, and so on.
I have previously argued how transparency reports, when done well, are an excellent way of understanding the breadth of government and social media censorships. Legally mandating this is then perhaps a step in the right direction.
Some other requirements under this transparency principle include giving notice to users whose content has been disabled, allowing them to contest such removal, etc.
One of the other rules from the older draft that had raised a significant amount of concern was the proactive filtering mandate, under which intermediaries were expected to filter out all unlawful content. This was problematic on two counts:
- Developments in machine learning technologies are simply not mature enough to make this a possibility, which means there would always be a chance that legitimate and legal content gets censored, leading to a general chilling effect on digital expression.
- The technical and financial burden this would impose on intermediaries would have harmed competition in the market.
The new rules seem to have lessened this burden: first, by reducing it from being mandatory to a best-endeavour basis; and second, by narrowing the ambit of ‘unlawful content’ to only include content depicting sexual abuse, child sexual abuse material (CSAM) and duplicates of content that has already been disabled or removed.
This specificity would be useful for better deployment of such technologies, since previous research has shown that it is considerably easier to train a machine learning tool on a corpus of CSAM or abuse than on more contextual, subjective matters such as hate speech.
What should go?
That being said, it is concerning that the new rules choose to bring online curated content platforms (OCCPs) within the ambit of the law, through a proposed three-tiered self-regulatory structure and schedules laying down guidelines on the rating system these entities should deploy.
In the last two years, several attempts have been made by the Internet and Mobile Association of India (IAMAI), an industry body consisting of representatives of these OCCPs, to bring about a self-regulatory code that fills in the supposed regulatory gap in the Indian law.
It is not known if these stakeholders were consulted before the enactment of these provisions. Some of this framework would also apply to publishers of digital news portals.
Notably, this entire chapter was also missing from the old draft, and introducing it in the final form of the law without due public consultation is problematic.
Part III and onwards of the rules, which broadly deal with the regulation of these entities, therefore, should be put on hold and opened up for a period of public and stakeholder consultations to adhere to the true spirit of democratic participation.
The author would like to thank Gurshabad Grover for his editorial suggestions.
Regulating Sexist Online Harassment: A Model of Online Harassment as a Form of Censorship
Read the full paper here.
The Competition Law Case Against WhatsApp’s 2021 Privacy Policy Alteration
Executive Summary
On January 4, 2021, WhatsApp announced a revised privacy policy through an in-app notification. It highlighted that the new policy would impact user interactions with business accounts, including those which may be using Facebook's hosting services. The updated policy presented users with the option of either accepting greater data sharing between WhatsApp and Facebook or being unable to use the platform after May 15, 2021. The updated policy resulted in temporarily slowed growth for WhatsApp and increased growth for other messaging apps like Signal and Telegram. While WhatsApp has chosen to delay the implementation of this policy due to consumer outrage, it is important for us to unpack and understand what this (and similar policies) mean for the digital economy, and its associated competition law concerns. Competition law is one of the sharpest tools available to policy-makers to fairly regulate and constrain the unbridled power of large technology companies.
While it is evident that the Indian competition landscape will benefit from revisiting the existing law and policy framework to rein in big technology companies, we argue that the change in WhatsApp’s privacy policy in 2021 can be held anti-competitive under legal provisions as they presently stand. Therefore, in this issue brief, we largely limit ourselves to evaluating the legality of WhatsApp’s privacy policy within the confines of the present legal system.
First, we dive into an articulation of the present abuse of dominance framework in Indian competition law. Second, we analyze whether there was an abuse of dominance, bearing in mind an economic analysis of WhatsApp’s role in the relevant market and using tests laid out in previous rulings of the Competition Commission of India (CCI).
The framework for determining abuse of dominance under the Competition Act is based on three factors:
1. Determination of relevant market
2. Determination of dominant position
3. Abuse of the dominant position
In two previous orders, in 2016 and 2020, the CCI held that WhatsApp is dominant in its relevant market based on several factors, which we explore. These include:
- Advantage in user base, usage and reach,
- Barriers to entry for other competitors
- Power of acquisition over competitors.
However, in both orders, the CCI held that WhatsApp did not abuse its dominance, reasoning that the practices in question allowed for user choice. We critique these judgments for not reflecting the market structures and exploitative practices of large technology companies. We also argue that even if we use the test of user choice laid down by the CCI in its previous orders concerning WhatsApp and Facebook, the changes made to the privacy policy in 2021 amounted to an abuse of dominance and should be held to violate competition law standards.
Our analysis revolves around examining the explicit and implicit standards of user choice laid out by the CCI in its 2016 and 2020 judgements as the standard for evaluating fairness in an abuse of dominance claim. We demonstrate how the 2021 changes failed to meet these standards.
Finally, we conclude by noting that the present case offers a crucial opportunity for India to take a giant step forward in its regulation of big tech companies and harmonise its rulings with regulatory developments around the world.
The full issue brief can be found here.
Recommendations for the Covid Vaccine Intelligence Network (Co-Win) platform
The first confirmed case of Covid-19 was recorded in India on January 30, 2020, and India’s vaccination drive started 12 months later, on January 16, 2021, amidst anxiety and the hope that this would signal the end of the pandemic. The first phase of the vaccination drive identified healthcare professionals and other frontline workers as beneficiaries. The second phase, which has been rolled out from March 1, covers specified sections of the general population: those above 60 years of age and those between 45 and 60 years with specific comorbid conditions. The first phase also saw the deployment of the Covid Vaccine Intelligence Network (Co-Win) platform to roll out and streamline the Covid-19 vaccination process. For the purpose of this blog post, the term Co-Win platform refers to both the Co-Win app and the Co-Win web portal.
During the first phase, it was mandatory for the identified beneficiaries to be registered on the Co-Win app prior to receiving the vaccine. The Central Government had earlier indicated that it would be mandatory for all future beneficiaries to register on the Co-Win app; however, hours before the roll-out of the second phase, the Health Ministry tweeted that beneficiaries should use the Co-Win web portal (not the Co-Win app) to register themselves for the vaccine. The app which is currently available on the Play Store is only for administrators; it will not be available to the general public. Beneficiaries can now access vaccination by: (i) registering on the Co-Win website; (ii) walking in to vaccination sites that offer on-site registration, appointment, verification and vaccination on the same day; or (iii) registering and getting an appointment for vaccination through the Aarogya Setu app.
The scale and extent of the global pandemic and the Covid-19 vaccination programme differ significantly from the vaccination and immunisation programmes conducted by India previously, and therefore, the means adopted for conducting the vaccination programme will have to be modified accordingly. However, as several newspaper reports have indicated, the roll-out of the Co-Win platform has not been smooth. There have been several glitches, from user data being incorrectly registered to beneficiaries not receiving the one-time password required to schedule an appointment.
Neither an entirely offline nor an entirely online method of registering for the vaccine is feasible (internet penetration is at 40%), and a hybrid model combining offline and online registration should be considered. However, the chosen platform should take into account the concerns currently emanating from the use of Co-Win and make the required modifications.
Privacy Concerns
When the beneficiary uses the Co-Win website to register, she is required to provide certain demographic details such as name, gender, date of birth, photo identity and mobile number. Though Aadhaar has been identified as one of the documents that can be uploaded as photo identity, the Health Ministry, in response to an RTI filed by the Internet Freedom Foundation (IFF), clarified that Aadhaar is not mandatory for registration either through the Co-Win website or through Aarogya Setu. While the Government has clarified that the app cannot be used by the general public to register for vaccination, this still leaves open the question of the status of the personal data of the beneficiaries identified in the first phase of the process, who were registered on the app and whose personal details were pre-populated on it. In fact, in certain instances, Aadhaar details were uploaded on the app as identity proof without the knowledge of the beneficiary.
These concerns are exacerbated in the absence of a robust data protection law and with the knowledge that the Co-Win platform (app and website) does not have a dedicated, independent privacy policy. While the Co-Win web portal does not provide any privacy policy, the privacy policy hyperlinked on the app directs the user to the National Health Data Management Policy, 2020 (‘the Health Policy’). The Central Government approved the Health Policy on December 14, 2020. It is an umbrella document for all entities operating under the digital health ecosystem.
An analysis of the Health Policy against the key internationally recognised privacy principles represented in most data protection frameworks in the world, including the Personal Data Protection Bill, 2019, highlights that the Health Policy does not provide any information on data retention, data sharing or the grievance redressal mechanism. It is important to note that the Health Policy has also been framed in the absence of a robust data protection law; the Personal Data Protection Bill is still pending before Parliament.
The Co-Win website does not provide any separate information on how long the data will be retained, whether the data will be shared, and how many ministries or departments have access to the data.
A national health policy cannot and should not be used as a substitute for specific, independent privacy policies for the different apps that the Government may design to collect and process users’ health data. Health data is recognised as sensitive personal data under the proposed Personal Data Protection Bill and should be accorded the highest level of protection. This was also reiterated by the Karnataka High Court in its recent interim order on Aarogya Setu. It held that medical information or data is a category of data to which there is a reasonable expectation of privacy, and “the sharing of health data of a citizen without his/her consent will necessarily infringe his/her fundamental right of privacy under Article 21 of the Constitution of India.”
Link with Aarogya Setu
A beneficiary registered on the Co-Win platform can use the Aarogya Setu app to download their vaccination certificate. Beneficiaries have now also been provided an option to register for vaccination through Aarogya Setu. However, the rationale for linking the two separate platforms is not clear, especially as Aarogya Setu has primarily been deployed as a contact tracing application.
There is no information on whether, and to what extent, the data stored in the Co-Win platform will be shared with Aarogya Setu. It is also not clear whether the consent of the beneficiary registered on the Co-Win platform will be obtained again prior to sharing the data, or whether registration on the Co-Win platform will be regarded as general consent for sharing the data with Aarogya Setu. This is contrary to the principle of informed consent (i.e. consent has to be unambiguous, specific, informed and voluntary), which a data fiduciary has to comply with prior to obtaining personal data from the data principal. The privacy policy of Aarogya Setu has also not been amended to reflect this change in the purpose of the app.
Co-Win registration as an entry to develop health IDs?
One of the objectives of the Health Data Management Policy is to develop a unique digital health ID for all citizens. The policy states that participation in the National Health Data Ecosystem is voluntary, and that participants will, at any time, have the right to exit from the ecosystem. Currently, the policy has been rolled out on a pilot basis in six union territories: Chandigarh, Dadra & Nagar Haveli, Daman & Diu, Puducherry, Ladakh and Lakshadweep. As health is a State subject under the Indian Constitution, Chhattisgarh has raised concerns about the viability and necessity of the policy, especially in the absence of a robust data protection legislation.
Mr. R.S. Sharma, the Chairperson of the ‘Empowered Group on Technology and Data Management to combat Covid-19’, had stated in an interview to India Today: “Not just for vaccinations, but the platform will be instrumental in becoming a digital health database for India”. This indicates that the platform is an initial step towards generating health IDs for all beneficiaries. Doing so would also violate the principle of purpose limitation: data collected for one purpose (the vaccine) cannot be reused for another (the creation of the digital health ID system) without an individual’s explicit consent and an option to opt out.
Conclusion
Given India’s experience and reasonable success with childhood immunisation, there is reasonable confidence that the country has the ability to scale up vaccination. However, the vaccination drive should not be used as a means to set aside the legitimate concerns of citizens with regard to the mechanism deployed to get people to register for the vaccination drive. As a first step, it is essential that Co-Win has a separate, dedicated privacy policy which conforms to the internationally accepted privacy principles enumerated in the Personal Data Protection Bill. It is also essential that Co-Win or any other app or digital platform not be used as a backdoor for the government to create unique digital health IDs for citizens, especially without their consent and in the absence of a robust data protection law.
Comments and recommendations to the Guidelines for “Influencer Advertising on Digital Media”
The authors would like to thank Merrin Muhammed for research assistance, and Pranav MB for editorial assistance.
Introduction
The Centre for Internet and Society (CIS) is a non-profit research organisation that works extensively on policy issues relating to privacy, freedom of expression, accessibility for persons with diverse abilities, access to knowledge, intellectual property rights and openness. In the past, CIS has also engaged with and contributed to an extensive body of work in India, concerning intermediary liability, regulation of social media and platform governance. The research at CIS seeks to understand the reconfiguration of social processes and structures through the internet and digital media technologies, and vice versa.
Please find below our recommendations for the Guidelines for "Influencer advertising on digital media" [“the Guidelines”]. The first section summarizes a few of our specific comments and concerns with the Guidelines, while the second section brings up a few other general observations that the ASCI ought to take into account. CIS is grateful for the opportunity to submit its views.
High-level comments
Operation of these Guidelines vis-a-vis the Consumer Protection Act, 2019
The Consumer Protection Act, 2019 [“the Act”], makes provisions for regulating ‘advertisements’ and ‘endorsements.’ For instance, section 2(1) of the Act defines advertisements as:
“[...] any audio or visual publicity, representation, endorsement or pronouncement made by means of light, sound, smoke, gas, print, electronic media, internet or website and includes any notice, circular, label, wrapper, invoice or such other documents;”
Further, section 2(18) of the Act defines endorsement, in relation to an advertisement as:
“[...] (i) any message, verbal statement, demonstration; or
(ii) depiction of the name, signature, likeness or other identifiable personal characteristics of an individual; or
(iii) depiction of the name or seal of any institution or organisation,
which makes the consumer to believe that it reflects the opinion, finding or experience of the person making such endorsement.”
Additionally the Central Consumer Protection Authority (CCPA) is vested with the power to conduct investigations in instances of false or misleading advertisements, order discontinuation or modification of advertisements, and impose penalties.
We believe these provisions are expansive enough to cover those aspects of influencer advertising that the ASCI is intending to regulate. In light of this, it is important for the ASCI to clarify how the Complaints Procedure set up in the original ‘The Code for Self Regulation’ would operate vis-a-vis the power of the CCPA.
Proposed Guidelines
Definition
More specific definitions for Digital Media
While it is commendable that the Guidelines identify a multitude of entities and services within the definition of ‘Digital Media’, we must highlight that these definitions are currently ambiguous. For instance, the Guidelines do not make it clear what Near Video on Demand, Subscription Video on Demand, Pay Per View, etc. are. These are pertinent details that would help consumers identify the nature of the viewed content, as well as allow influencers and brands to make clearer advertisement decisions.
Additionally, in light of the notification of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 [“the 2021 rules”], which encompass online curated content providers (OCCPs), it is important for the Guidelines to clarify the relationship between the Digital Media entities they identify and the OCCPs under the relevant law. While we recognize that the obligations for the different entities under the Guidelines and the 2021 rules are distinct, the lack of clarification might lead to a confusing ecosystem of regulatory obligations for entities, which can be avoided at this stage.
Influencer
The Guidelines define an “Influencer” as “someone who has access to an audience and the power to affect their audience's purchasing decisions or opinions about a product, service, brand or experience, because of the influencer's authority, knowledge, position, or relationship with their audience. An influencer can intervene in an editorial context or in collaboration with a brand to publish content.” Although this definition is all-encompassing, it could lead to confusion among users of social media as to whether or not they are Influencers, since the Guidelines do not mention any specific audience thresholds that serve as a prerequisite for qualifying under the Guidelines. The confusion also extends to the existing definition of “Celebrities” under the ASCI Guidelines For Celebrities In Advertising.
The Guidelines For Celebrities In Advertising state that:
“Celebrities” are defined as famous and well-known people who are from the field of Entertainment and Sports and would also include other famous and well-known personalities like Doctors, Authors, Activists, Educationists, etc. who get compensated for appearing in advertising.
The definition is substantiated by an endnote which states that a celebrity is one who is
“*Compensated Rs. 20 lakhs or above as per current limit for appearing in a single advertisement or a campaign or per year, whichever is more AND / OR is listed in top 100 celebrities as per any one of the current and immediate past list of Forbes or the Times or Celebrity track report of Hansa Research or any such list which is intended to be indicative and not exhaustive.”
We believe that a clearer definition of “Influencers”, similar to the definition of “Celebrities” in the Guidelines, with markers such as verification, number of followers, income from posts per year, etc., could be used to highlight who these Guidelines apply to. This will benefit the Influencer, the user, and the complaint handling authority.
Details of specific media channels
In the chapter ‘Ready reckoner for specific media channels,’ the Guidelines mention a catalogue of places and instances where such disclosure ought to be made, for specific media channels. While the Guidelines mention the exact details for Facebook, and Instagram (including reels, stories, etc.), these details are missing for some of the other media channels mentioned, including Twitter, Pinterest, and Snapchat.
For Twitter, the Guidelines state: “Include the disclosure label or tag at the beginning of the body of the message as a tag.” Similar directions are given for promotions via Pinterest and Snapchat, where the disclosure ought to be in the ‘message.’ However, the primary mode of communication on these platforms is via other formats, not ‘messages.’ Since this direction does not clarify where the disclosure ought to be, it has the potential to create confusion for both influencers and brands on how best to comply with the Guidelines. Hence, we believe that the Guidelines should be updated to reflect the exact specifications of the media channels, and the places where the disclosures ought to be made.
Other Comments
The need for some guidelines on advertisements directed at children
It is estimated that as of February 2021, 10.6 per cent of Instagram users in India were in the 13-17 age group. Hence, there is a need to look at responsible advertising as well as to consider the products that influencers advertise. Additionally, a large number of influencers’ posts are targeted at children and teenagers, which increases their responsibility in connection with advertisements. The draft Personal Data Protection Bill, 2019 prohibits guardian data fiduciaries, i.e. data fiduciaries who operate commercial websites or online services directed at children (or process large volumes of personal data of children), from profiling, tracking, or behavioural monitoring of, or targeted advertising directed at, children, and from undertaking any other processing of personal data that can cause significant harm to the child. Though this is a good move, the obligation to not target advertisements at children is not extended to all data fiduciaries. While we do understand that it is difficult to gauge which posts are being viewed by children, the Guidelines could recommend that influencers who are aware that their main demographic is children or teenagers must take more care in the products they endorse and make greater efforts to make children aware that the post they are sharing is an advertisement.
Additionally, we suggest that since brands exercise control over content and decision-making, and choose the influencers they want to engage with, brands could also ensure that their product reaches the correct audience. Hence, along with the influencer, the brand should also take care to ascertain who the influencer’s main demographic is and assess whether the product is suited for that age group.
A PDF version of this response can be accessed here.
Rethinking Data Exchange & Delivery Models
Executive Summary
In 2020, reports of the government's proposal to create a social registry to update the Socio Economic Caste Census 2011 data started surfacing. Based on the limited information around these proposals in the public domain, it is imperative that adequate consideration be given to developing such systems in a manner that protects individuals’ informational privacy. Currently, the proposed Personal Data Protection Bill, 2019 is being deliberated by the Joint Parliamentary Committee and is expected to be tabled in the Monsoon Session of Parliament. The proposed data protection framework is a marked improvement over its predecessor, Section 43A of the Information Technology Act, 2000 and the Information Technology (Reasonable security practices and procedures and sensitive personal data or information) Rules, 2011. One substantial change in the context of welfare delivery is that the scope of the proposed framework extends to personal data processing by the government and its agencies.
The objective of the white paper is to examine the application of the proposed data protection provisions to such a welfare delivery model (a data exchange and delivery model) and suggest ways to operationalise key provisions. The scope of this white paper is limited to examining the personal data implications of the model and the effective governance of such platforms in India. The paper relies on publicly available details of the digital infrastructure, proposals, schemes and legal frameworks relating to welfare delivery in India and in selected other countries (Indonesia, Brazil, China, Malawi, Kenya, Estonia). International best practices around implementation of the principles of privacy and openness are analysed to suggest methods to operationalise these requirements in the context of data exchange and delivery models and the proposed data protection framework of the country.
Based on the global experience of implementing data exchange and delivery models and the best practices for implementation of data protection provisions, the following are some of the key recommendations (in addition to discussing ways to operationalise the data protection provisions) for such a platform in the Indian context:
- Application of Data Protection Legislation: Due to the sensitive processing of personal data accompanied with harms arising from unlawful surveillance, such a data exchange and delivery model should not be deployed without an overarching data protection legislation. It is vital that the application of the legislation extends to the model. The Data Protection Authority of India should be able to exercise its investigative, corrective and advisory powers over the functioning and management of the model.
- Independent Regulator: Oversight over the functioning of the platform should not be vested with the agency that is responsible for the maintenance of the platform, to address potential conflict of interest issues. Additional sub-committees based on subject matter expertise for each individual scheme can be set up to assist the regulator, if required. The independent regulator should have strong investigative, corrective and advisory powers for effective oversight over the activities of the platform. Enforcement actions of the regulator should be transparent.
- Governance: The data fiduciary responsible for the management and operation of the data exchange and delivery platform should be clearly identified. The platform should have valid legislative backing. In case of involvement of private actors, additional safeguards related to the privacy and confidentiality of the data in the platform should be implemented.
- Data Protection Authority of India and Platform: There should be clear channels of communication between the Data Protection Authority of India and the data fiduciaries managing and accessing the platform for guidance on data protection issues.
- Grievance Redressal Mechanism: An accessible grievance redressal mechanism should be set up at different points of the service delivery, and its existence should be publicised through different mediums. As the platform can act as a single point of failure for multiple schemes, an integration of the redressal mechanisms across multiple schemes should be considered based on existing institutional structures. Multiple channels for receiving complaints must be set up for the citizen’s convenience.
Regulating Sexist Online Harassment as a Form of Censorship
Introduction
The proliferation of internet use was expected to facilitate greater online participation of women and other marginalised groups. However, over the past few years, as more and more people have come online, it is evident that social power in online spaces mirrors offline hierarchies. While identity and security thefts may be universal experiences, women and the LGBTQ+ community continue to face barriers to safety that men often do not, aside from structural barriers to access. Sexist harassment pervades the online experience of women, be it on dating sites, online forums, or social media.
In her book, Twitter and Tear Gas: The Power and Fragility of Networked Protest, Zeynep Tufekci argues that the nature and impact of censorship on social media are very different from what came before. Earlier, censorship was enacted by restricting speech. Now, it also works in the form of organised harassment campaigns, which use the qualities of viral outrage to impose a disproportionate cost on the very act of speaking out. Therefore, censorship plays out not merely in the form of the removal of speech but through disinformation and hate speech campaigns.
In most cases, this censorious speech does not necessarily meet the threshold of hate speech, and free speech advocates have traditionally argued for counter speech as the most effective response to such speech acts. However, the structural and organised nature of harassment and extreme speech often renders counter speech ineffective. This paper will explore the nature of online sexist hate and extreme speech as a mode of censorship. Online sexualised harassment takes various forms including doxxing, cyberbullying, stalking, identity theft, incitement to violence, etc. While there are some regulatory mechanisms, either in law or in the form of community guidelines, that address them, this paper argues for the need to evolve a composite framework that looks at the impact of such censorious acts on online speech and regulatory strategies to address them.
Click on to read the full text [PDF; 495 Kb]
Beyond Public Squares, Dumb Conduits, and Gatekeepers: The Need for a New Legal Metaphor for Social Media
There is a dire need to think of regulatory strategies that look beyond the ‘dumb conduit’ metaphors that justify safe harbor protection to social networking sites. Alongside, it is also important to critically analyze the outcomes of regulatory steps such that they do not adversely impact free speech and privacy. By surveying the potential analogies of company towns, common carriers, and editorial functions, this essay provides a blueprint for how we may envision differentiated intermediary liability rules to govern social networking sites in a responsive manner.
Introduction
Only months after Donald Trump’s 2016 election victory — a feat mired in controversy over alleged Russian interference using social media, specifically Facebook — Mark Zuckerberg remarked that his company had grown to serve a role more akin to a government than a corporation. Zuckerberg argued that Facebook was responsible for creating guidelines and rules that governed the exchange of ideas of over two billion people online. Another way to look at the same argument is to acknowledge that, today, a quarter of the world’s population (and of India’s) is subject to the laws of Facebook’s terms and conditions and privacy policies, and public discourse around the globe is shaped within the constraints and conditions they create. Social media platforms, like Facebook, wield hitherto unimaginable power to catalyze public opinion, causing a particular narrative to gather steam — that Big Tech can pose an existential threat to democracy.
This, of course, is in absolute contrast to the early utopian visions which imagined that the internet would create a more informed public, facilitate citizen-led engagement, and democratize media. Instead, what we see now is the growing association of social media platforms with political polarization and the entrenchment of racism, homophobia, and xenophobia. The regulation of social networking sites has emerged as one of the most important and complex policy problems of this time. In this essay, I will explore the inefficacy of the existing regulatory framework, and provide a blueprint for how to think of appropriate regulatory metaphors to revisit it.
- Click on to read the article published by IT for Change
- Download the PDF (34,328 Kb) to read the full article, pages 126 - 138.
CIS Comments on Phase III of the E-Courts draft policy
Executive Summary
This submission is a response by researchers at CIS to the Supreme Court e-Committee’s draft vision document for Phase III of the E-Courts project.
We have put forward the following comments and recommendations based on our analysis of the draft report:
A. General Comments
- The report must place greater emphasis on and take into consideration the digital divide between the urban and rural population, as well as the gender divide that exists amongst Indian citizens.
- There is a lack of clarity on how the data will be collected and shared between the different systems within the ICJS, and for how long the data will be retained.
- There is a lack of clarity on the rules and regulations surrounding storage of data collected under this project.
- There are a number of key limitations of the proposed technologies (automated courts, virtual courtrooms and online dispute resolution mechanisms) that will limit their effectiveness.
- Increased technological integration would require dedicated efforts to foster public trust in the judicial process.
B. International Comparison
We have comparatively analysed the integration of digital technology into the judiciary in Asia and Africa. Having examined implementation in both these regions, we have identified the following trends:
- While countries like the Philippines and Thailand are constantly developing strong systems to handle most cases online and transitioning to an e-court system, countries like Vietnam and Indonesia have introduced limited systems for the exchange of documents before hearings.
- The issues reported with the functioning of the e-court system in these nations include insufficient infrastructure and equipment, inadequate training of court personnel, limited IT support, and staff shortages that constrain data encoding and the updating of court records.
- Countries like China and Singapore undertook a deliberately slow uptake process, applying e-courts and technology to judicial hearings sectorally in the beginning to test their effectiveness. Thereafter, large-scale implementation of virtual or digital courts and new technologies, such as data analytics for caseload prediction in Singapore and China’s blockchain-based e-evidence platform, has proved to serve the intended purpose of an efficient and effective judicial process aided by digital technologies.
- African countries such as Kenya and Libya have transitioned to virtual court systems and e-filings along with other e-services for justice delivery. However, challenges with implementation persist, mainly relating to:
  - Low internet penetration rates, creating a digital divide mainly between the urban and rural areas of Africa.
  - Power outages, mainly in rural areas, creating an impediment to access to justice with respect to virtual hearings in areas without electricity backup.
  - Lack of skills for operating digital judicial systems, requiring effective and continuous user training to operate technologies like Kenya’s Electronic Case Management System (ECMS).
  - Challenges with complicated digital systems, where continuous platform development is required to simplify processes for accessing and using systems like online filing or judicial websites, so as to make them easy to use for all stakeholders involved.
  - Need for a singular legislative and regulatory framework prior to adoption, as different rules on similar cases in different virtual courts across states cause inter-state judicial splits, an impediment to access to justice.
C. Recommendations:
1. Dedicated programs must be identified and supported to ensure that citizen-focused digitisation takes place, so as to not leave any people outside the scope of the judiciary.
2. A dedicated regulatory and administrative framework must be published as soon as possible that takes into consideration questions of data storage, data protection and purpose limitation, among other considerations. Such a framework must also explicitly call out the limited use cases of technologies like virtual courts.
3. The MHA should codify and specify the regulations with regard to the processing of data through the systems under the ICJS, and clear directives on the nature and scope of integration of judicial infrastructure with the ICJS must be provided.
4. Studies should be conducted to identify the challenges that may arise when implementing proposals such as virtual or automated courts, virtual courtrooms that use audio-visual software, and online dispute resolution mechanisms. Such studies would allow for policies to be effectively identified prior to widespread implementation and would significantly reduce the possibility of unintended harms.
5. Measures should be identified to improve public trust in the integration of technology within the judiciary, through judicial education schemes, etc.
6. Due to varying precedents from High Courts and the Supreme Court of the country, there is a requirement for uniform and clear guidelines/directives with respect to the process of electronic evidence management and preservation in India.
A Comparative Analysis of Cryptocurrency Reporting in Financial Statements
The Ministry of Corporate Affairs (MCA) on March 24, 2021, came out with a notification inter alia mandating disclosures of cryptocurrency holdings by companies in their balance sheets. These changes have been effectuated by making requisite amendments to Schedule III of the Companies Act, 2013. The notification specified that companies are now required to report the profit or loss accrued due to trade or investment in any type of cryptocurrency or virtual currency, the amount of cryptocurrency that the company holds on the reporting date, and the deposits or advances from any person that have been made for the purposes of trading or investing in cryptocurrencies or virtual currencies.
The decision on new disclosure requirements comes amidst parliamentary discussions on cryptocurrency and speculation of another attempt at prohibition. Meanwhile, this step has been welcomed by the cryptocurrency industry in India as it signals a more positive approach being taken by the government with regard to corporate cryptocurrency transactions in India. Moreover, while it opens up new possibilities for scrutiny of such transactions, this measure will also be beneficial in identifying key policy gaps in cryptocurrency regulation in India when we look at corresponding requirements in foreign jurisdictions.
In this Issue Brief, the policy landscape in the United States of America (USA), United Kingdom (UK), and Japan is discussed and particular emphasis is placed upon definition, accounting practices, and taxation, with respect to cryptocurrencies. It is thus identified that such jurisdictions have taken concrete steps in this regard by providing clear guidance (such as through HMRC’s Cryptoassets Manual and ASBJ’s advisory notification on accounting for cryptocurrencies).
Then, the regulations in India are looked into comprehensively and specific policy recommendations are made, as it is ascertained that no clear steps have been taken in the aspects that have been mentioned above. Although the March MCA Notification is a positive step on corporate cryptocurrency transactions, the following steps are needed further: firstly, a clear and comprehensive definition of cryptocurrency and cryptoassets must be laid down, preferably through a central legislation; secondly, a separate category for cryptocurrencies under the Indian Accounting Standards (Ind AS) should be created; and thirdly, complete guidance on applicable taxes on cryptocurrency transactions, by individuals and corporates, must be provided.
It is thus concluded that while the government is willing to engage with various stakeholders, with positive intent, comprehensive and definitive steps are the need of the hour. This is essential to safeguard the large number of cryptocurrency investors in India, and to quell the uncertainty that is created by speculative measures such as banks declining services for cryptocurrency transactions.
The full issue brief can be read here
Does Google’s bid to replace third party cookies with FLOCs protect user privacy?
Introduction
The revenue model of major corporations like Google and Facebook is advertising. In 2020, Google's annual ad revenue amounted to 146.92 billion US dollars. Companies like Google collect data through their own services, such as Gmail, YouTube and Google Search, and through third-party cookies on other websites. Accordingly, Google provides targeted ads based on the vast amount of data it collects on an individual. Google AdSense is the company's advertisement service, which improves the marketing efforts of its customers. It allows advertisers to promote ads, list products, and offer services to web users through Google's vast ad network (properties, affiliate pages, and apps). Till now, third-party cookies, which enable companies to track users’ browsing habits, have been an enabling force for targeted advertisements. However, fears about data collection without consent via cookies have prompted information privacy laws such as the GDPR in Europe. One should note that web browsers like Apple’s Safari and Mozilla Firefox deprecated third-party cookies in March 2020 and September 2019 respectively. In January 2020, Google also decided to phase out third-party cookies in Chrome.
In its effort to deprecate third-party cookies, Google announced an alternative plan in August 2019 with its new Privacy Sandbox platform. This plan promises to preserve anonymity when serving tailored advertising. While unveiling the system, Google explained that even though advertising is necessary to keep the web available to everyone, the web ecosystem is at risk if privacy practices do not keep pace with evolving expectations. Accordingly, Google has proposed a dynamic and evolving range of bird-themed targeted advertising and measurement approaches, which it hopes will uproot third-party cookies. Federated Learning of Cohorts (FLoC) and TURTLEDOVE are among the most popular. Google envisions these becoming the industry norm for serving advertisements on the internet. Google is rolling out FLoC in the new version of its web browser, Chrome, and FLoC seems bound eventually to replace third-party cookies.
This article explains how FLoC works and demonstrates how it is different from conventional third-party cookies for online targeted advertisements. It goes on to evaluate Google’s claims that it protects user privacy. Finally, it assesses whether FLoC will allow Google to entrench its position in the digital market further.
How does FLoC operate?
FLoC is only one component of Google's Privacy Sandbox program, consisting of a series of measures and updates. It aims to transform the existing ad tech ecosystem and adopt a privacy-first approach to the internet. FLoC aims to deliver personalized advertising to large groups of users with common interests.
Chrome puts users into ‘cohorts’ with the help of on-device machine learning based on their browsing behavior. These clusters of large groups of people showing similar interests on the web make the individual user indistinguishable from other people in the cohort. In this manner, an individual is put in multiple cohorts like the ‘car cohort’ or the ‘cooking cohort.' FLoC infers this similarity in interests by observing the pattern of websites and pages a user visits. Advertisers then identify the groups (FLoCs) and show ads to those FLoCs. These changes, nevertheless, will take time, with an anticipated period of two years for the elimination of third-party cookies. Google claims that FLoC enhances consumer privacy while allowing personalized ads by targeting user groups instead of individuals.
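To make the clustering idea concrete, the sketch below shows one way a browser could derive a cohort identifier locally from the domains a user has visited, so that similar browsing histories land in the same group. It is a minimal illustration under assumptions, not Google's actual algorithm: the domain list, the hash function and the 16-bit cohort ID are invented for exposition (early FLoC trials reportedly used a SimHash-style scheme followed by server-side anonymity checks).

```typescript
import { createHash } from "crypto";

// Hash a domain to a fixed-width bit vector (64 bits here, an arbitrary choice).
function hashToBits(domain: string, width = 64): number[] {
  const digest = createHash("sha256").update(domain).digest();
  const bits: number[] = [];
  for (let i = 0; i < width; i++) {
    bits.push((digest[i >> 3] >> (i & 7)) & 1);
  }
  return bits;
}

// SimHash-style aggregation: histories with overlapping domains produce similar
// sign patterns, so users with similar interests tend to get the same cohort ID.
function cohortId(visitedDomains: string[], cohortBits = 16): number {
  const width = 64;
  const counts: number[] = new Array(width).fill(0);
  for (const domain of visitedDomains) {
    const bits = hashToBits(domain, width);
    for (let i = 0; i < width; i++) {
      counts[i] += bits[i] === 1 ? 1 : -1;
    }
  }
  // Keep only the first `cohortBits` sign bits as a coarse cohort identifier.
  let id = 0;
  for (let i = 0; i < cohortBits; i++) {
    id = (id << 1) | (counts[i] > 0 ? 1 : 0);
  }
  return id;
}

// Hypothetical browsing history; in a real browser this would never leave the device.
console.log(cohortId(["cars.example", "autoparts.example", "racing.example"]));
```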
How is FLoC different from a Cookie?
Before drawing out the difference between a cookie and FLoC, let us first discuss what a cookie is and how a third-party cookie works in advertising. A cookie is a small piece of text sent to the browser by a website the user visits. It helps the site remember the visitor's information, making it easier to revisit the site and making the site more useful. Third-party cookies provide similar functionality, but enable tracking by domains other than the one the user is currently on.
The structure of the advertising cookie is designed primarily to collect information about users. Advertisers can only place these cookies on a website with the consent of the website owner. The information that cookies gather on users acts as a digital footprint for marketers and businesses to create an integrated network. It serves as a profile with thorough information about a user's tastes, shopping habits and other preferences. These cookies are generally third-party or persistent cookies. Google uses AdWords and AdSense for advertising, and ad banners are best used for retargeting purposes. Google, not the website maker, deposits these cookies, hence the term “third-party cookies.” In this way, a company pays Google to show its specific visual ad to all the people who visited its website as they browse other websites in the Google AdSense network.
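The mechanics can be illustrated with a toy ad server. The sketch below is a hypothetical Node/TypeScript example (the domain names and cookie name are invented for illustration); it shows how a cookie set with `SameSite=None; Secure` by an ad domain is returned on every subsequent request from any page that embeds that domain's content, letting the ad server link a user's visits across publisher sites.

```typescript
import { createServer } from "http";
import { randomUUID } from "crypto";

// Toy "ads.example" server: assigns a persistent ID on first contact and logs
// which embedding page (Referer) each later request came from.
createServer((req, res) => {
  const cookies = Object.fromEntries(
    (req.headers.cookie ?? "")
      .split("; ")
      .filter(Boolean)
      .map((c) => c.split("=") as [string, string])
  );

  const trackId = cookies["track_id"] ?? randomUUID();
  if (!cookies["track_id"]) {
    // SameSite=None; Secure is what allows this cookie to travel in third-party
    // contexts, i.e. when ads.example content is embedded on other sites.
    res.setHeader(
      "Set-Cookie",
      `track_id=${trackId}; SameSite=None; Secure; Max-Age=31536000`
    );
  }

  // Each publisher page that embeds the ad reveals itself via the Referer header,
  // so the ad server accumulates a cross-site browsing profile keyed by track_id.
  console.log(`user ${trackId} seen on ${req.headers.referer ?? "(unknown page)"}`);
  res.end("ad payload");
}).listen(8080);
```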
Google FLoC differs from third-party cookies in several ways. FLoC takes the user’s browsing history in Chrome and analyzes it to assign the user to a category or “cohort.” Most importantly, it does not give a unique identifier to individual users. Instead, they exist only as part of a larger cohort, with at least a thousand users, to sustain anonymity. In FLoC-enabled Chrome, the user’s web browser shares a "cohort ID" with websites and marketers. Consequently, advertisers now have to target users depending on the category to which they belong.
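For a sense of what this sharing looks like in practice, the snippet below sketches how a page or ad script would have read the cohort ID during Chrome's FLoC origin trial, assuming the experimental `document.interestCohort()` shape described in the FLoC explainer; the exact API was experimental and could differ from what is shown here.

```typescript
// Sketch of reading the cohort during Chrome's FLoC origin trial, assuming the
// document.interestCohort() shape from the explainer. Feature-detect before use.
async function readCohort(): Promise<void> {
  if (!("interestCohort" in document)) {
    console.log("FLoC not available in this browser");
    return;
  }
  try {
    // Expected to resolve to something like { id: "14159", version: "chrome.2.1" }.
    const cohort = await (document as any).interestCohort();
    // An advertiser would send this cohort ID to its ad server to select an ad
    // for the whole group, rather than for the individual user.
    console.log(`cohort id: ${cohort.id}, version: ${cohort.version}`);
  } catch {
    // The promise rejects when the user or site has opted out of cohort computation.
    console.log("cohort unavailable (opted out or disabled)");
  }
}

readCohort();
```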
Furthermore, FLoC keeps the underlying user information on the user’s device rather than circulating it across the internet. Google explains that the idea is to prevent the reconstruction of cross-site browsing history. The information obtained locally from the browser stays on the device; only the cohorts are shared. The concept behind this interest-based targeting strategy is to conceal people "in the crowd," keeping their browsing history confidential and protecting user identity by restricting individual profiling.
It is still unclear whether FLoC will replace third-party cookies completely, since FLoC currently requires third-party cookies to work. The FLoC GitHub documentation states that the user must not have blocked third-party cookies on the device for cohort data to be logged and synced. On the surface, the new technology promises to shuffle less data across the web and enhance user privacy, but there is more to it.
Potential privacy issues with Google FLoC
Despite Google's plan to replace third-party cookies and not create alternative identifiers, the company can still access the user’s search history; the new rules, in effect, do not apply to Google itself. This change does not mean that Google will stop tracking users. It still tracks them when they use Google websites, Google Docs, Gmail, YouTube, and Google Search. This tracking is in the form of a first-party cookie that Google deposits on its own services, including applications using the AdSense network.
Google FLoC, based on the browsing behavior of an individual, would put users into ever-changing categories or ‘cohorts’ on a weekly basis.
Along with these developments comes a hidden risk: the ability of automated systems to perpetuate bias. FLoC’s clustering algorithm may replicate the potentially illegal discriminatory behavior that results from algorithmic behavioral targeting. Similar concerns surround FLoC because the clustering algorithm may group people by sensitive attributes such as race, sexual orientation, or disability. For example, in 2019, the US Department of Housing and Urban Development charged Facebook over ads that discriminated against people based on their race, sex, and disability. It alleged that the platform allowed house sellers and landlords to discriminate among users. Besides, researchers claim that the company’s advertising algorithm exacerbates gender bias. University of Southern California researchers found that men were more likely to see Domino’s pizza delivery job ads on Facebook, while women were more likely to see Instacart shopper ads. Even though Google acknowledges the risk of algorithmic bias, it fails to articulate safeguards robust enough to mitigate it.
The Google FLoC documentation states that it monitors cohorts through auditing. It checks for the usage of sensitive data like race, religion, gender, age, health, and financial status, and plans to analyze the correlation between the resulting cohorts and “sensitive” categories. If it finds that too many users belonging to a cohort visit a specific type of “sensitive” website, Google will either block the cohort or change the cohort-forming algorithm. Google has also said that it is against its ad policies to serve personalized ads based on sensitive categories. However, by collating people’s general behaviors and interests, the system may still infer sensitive information. Therefore, Google, through its services and the cohort ID, will have access to more personal data. The technology thus works against the very objective it seeks to achieve, i.e., putting an end to individual profiling and the revelation of sensitive attributes. Moreover, the accusation that the company allows advertisers to discriminate against users renders it more sinister.
The vital question centers on whether FLoC data constitutes “personal data” and complies with privacy laws. The European Union's General Data Protection Regulation, 2016 (GDPR), one of the strictest privacy laws, clarifies what "personal data" is: data is "personal" when an individual is identifiable, directly or indirectly, using online identifiers such as their name, identification number, IP address, or location data. The FLoC proposal itself highlights this concern. It notes that websites that know a person’s PII (personally identifiable information), e.g., when a person signs in using their email address, can record and reveal their cohort. This is because, by collating people’s general behaviors and interests, the system may infer sensitive information.
Therefore, FLoC can erode anonymity and privacy online if FLoC data is combined with information like site sign-ins to trace an individual. In this manner, it can reveal sensitive information that allows advertisers to misuse it and discriminate against users. With this change in the advertising ecosystem, the browser generates FLoCs, with advertisers merely at the receiving end.
The Electronic Frontier Foundation (EFF) compares FLoC to a "behavioral credit score," calling it a "terrible idea." It poses new privacy threats, as websites can uniquely fingerprint FLoC users and access more sensitive information than is needed to serve relevant advertising. Suppose the user visits a retail website. That retail website should not know what other websites the user has visited earlier, nor should it know about the user's political inclinations or whether they are being treated for depression. Yet the Chrome browser observes the browsing pattern and categorizes users into the "type of person" and the "group" they belong to. Through FLoC, Google will therefore share the user's online behavioral patterns with every website the user visits.
Thus, Google FLoC undermines privacy by design, sharing information with advertisers and websites that they would not have had access to using third-party cookies. Moreover, FLoC would make Chrome reveal browsing-derived information to sites, something none of the existing browsers do. FLoC is intended to provide the right amount of data to advertisers without revealing too much about any individual. However, it raises more privacy concerns by sharing more user information than is required, and it changes the approach from a contextual to a behavioral one. Hence, it does not protect user privacy. Moreover, with its first-party analytics and advertising cookies, Google has access to much more data than it would with third-party cookies alone. The proposal does not mention how it processes the data and is silent on procedural transparency.
One of the most pressing questions that remains concerns FLoC’s effective functioning. Google's ad team has validated this privacy-first solution by developing simulations based on the concepts described in Chrome's FLoC proposal. Findings indicate that FLoC has the potential to replace third-party cookies in creating interest-based datasets effectively. Google claims that “tests of FLoC to reach in-market and affinity Google Audiences show that advertisers can expect to see at least 95% of the conversions per dollar spent when compared to cookie-based advertising.” The outcome of FLoC's cohort-forming algorithm and the target audience will determine its power. Google has not published any hard statistics on "how private" FLoCs are, or anything about privacy measurements in general.
Another legal issue that comes to light is accountability. Apart from the accountability of publishers, who request user data and process it for targeted advertising, what would be the accountability of the browser that actually processes the FLoC data (browsing history)? Google's standard should specifically address the browser's accountability, as the browser is the sole controller of FLoC data, and the onus is on it to ensure legitimate processing. For similar reasons, Google announced that it would not proceed with FLoC testing in Europe and in countries that fall within the GDPR and the ePrivacy Directive, citing the lack of clarity regarding which entities serve as the data controller and processor respectively when creating cohorts.
Looking at user data collection, taking consent seems to be the last resort and serves as the legal basis for the lawful processing of personal data. It is unlawful for browsers to process the browsing history without consent. Even if the company claims not to share any profiles, they are bound to ask for specific, informed consent. Under the GDPR, a necessary condition for personal data processing must be within specified lawful grounds, such as the subject’s consent to the processing for a particular purpose. It further strengthens the consent requirement as a legal basis. In India, similar to the GDPR, the Personal Data Protection Bill, 2019 (PDP), has been laid on the bedrock of consent. Under Clause 11 of the Bill, consent is qualified by “free,” “specific,” and “informed.” and requires to be clear in scope and capable of being withdrawn. Therefore, data processing should be allowed only when the individual permits it. The PDP Bill further requires that Data fiduciaries offer adequate information to Data Principles about data processing for keeping it transparent and accountable in the eventa data breach.
Therefore, consent is vital for transparency in processing; in its absence, the data cannot be collected or shared. Introducing FLoC thus falls foul of privacy laws such as the GDPR and the Indian PDP Bill. Owing to the lack of consent and these privacy concerns, privacy-centric, Chromium-based browsers like Brave and Vivaldi have already disabled Google FLoC.
Google's gambit to reorient the adtech ecosystem under the garb of privacy ends up undermining it. Urgent regulation and advocacy in all jurisdictions are needed to ensure that risks are mitigated and Google does not end up unduly benefiting from this ecosystem at the expense of online individuals and communities.
Acknowledgements: The author would like to thank Ali Jawed, Arindrajit Basu and Gurshabad Grover for their feedback and editorial suggestions.
Vipul Kharbanada and Pallavi Bedi served as blind peer-reviewers for this piece.
The author graduated from the Faculty of Law, Aligarh Muslim University, in 2019 and holds an LL.M (Constitutional and Administrative Law) from Symbiosis Law School, Pune. She has a keen interest in Digital Rights & Tech Policy.
email: [email protected]
(Disclosure: The Centre for Internet & Society has received funds from Google)
On the legality and constitutionality of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021
On 25 February 2021, the Ministry of Electronics and Information Technology (Meity) notified the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (hereinafter, ‘the rules’). In this note, we examine whether the rules meet the tests of constitutionality under Indian jurisprudence, whether they are consistent with the parent Act, and discuss potential benefits and harms that may arise from the rules as they are currently framed. Further, we make some recommendations to amend the rules so that they stay in constitutional bounds, and are consistent with a human rights based approach to content regulation. Please note that we cover some of the issues that CIS has already highlighted in comments on previous versions of the rules.
The note can be downloaded here.
Pandemic Technology takes its Toll on Data Privacy
The article by Aman Nair and Pallavi Bedi was published in the Deccan Herald on June 13, 2021.
People show Arogya Setu App installed in their phones while travelling by special New Delhi-Bilaspur train from New Delhi Railway Station. Credit: PTI File Photo
Jabalpur: A beneficiary shows his certificate on his mobile phone after receiving COVID-19 vaccine dose, at Gyan Ganga College in Jabalpur, Saturday, May 15, 2021. (PTI Photo)
At a time when technology is spawning smart solutions to combat Covid-19 worldwide, India’s digital response to the pandemic has stoked concerns that surveillance could pose threats to the privacy of the personal data collected. Be it apps or drones, there is widespread criticism that digital tools are being misused to share information without knowledge or consent. At the other end of the spectrum, the great urban-rural digital divide is hampering the already sluggish vaccination drive, exposing vulnerable populations to a fast-mutating virus.
Last year, the Centre, states and municipal corporations launched more than 70 apps relating to Covid-19, demonstrating the country’s digital-driven approach to handling the pandemic. Chief among these was the central government’s contact tracing app Aarogya Setu. Launched under the Digital India programme, the app quickly came under scrutiny over data privacy.
As per its privacy policy, Aarogya Setu collects personal details such as name, age, sex, profession and location. As there is no underlying legislation forming its basis, and in the absence of a personal data protection bill, serious privacy concerns regarding the collection, storage and use of personal data have been raised.
The government has attempted to mitigate these concerns with reassurances that the data will be used solely in tracing the spread of the virus. However, recent reports from the Kulgam district of Jammu and Kashmir point to the sharing of application data with police. This demonstrates how easy it is to use personal data for purposes other than which it was collected, and presents a serious threat to citizen privacy.
Though Aarogya Setu was initially launched as ‘consensual’ and ‘voluntary’, it soon became mandatory for individuals to download the app for various purposes such as air and rail travel (this order was subsequently withdrawn) and for government officials. Initially it was also mandatory for the private sector, but this was later watered down to state that employers should, on a ‘best effort basis', ensure that the app is downloaded by all employees having compatible phones. However, the ‘best effort basis’ soon translated into mandatory imposition for certain individuals, especially those working in the ‘gig economy’.
Several states had also launched apps for various purposes, ranging from contact tracing of suspected Covid patients to monitoring the movement of quarantined patients. As a report by the Centre for Internet and Society observed, given the attention on Aarogya Setu, most of the apps launched by the state governments escaped scrutiny and public attention. Most of these apps either did not have a privacy policy, or the policy was vague and often did not provide important details such as who was collecting the data, the period for which the data would be retained, and whether personal data could be shared with other departments, most notably law enforcement.

Apart from contact tracing apps, the pandemic also ushered in a wave of other apps and digital tools deployed by the government. These include drones to check whether people are following Covid-19 norms and facial recognition cameras that report to the police whether someone has broken quarantine. Like Aarogya Setu, these tools have largely been rolled out in the absence of a legal and regulatory framework.
The absence of any legal framework has meant these tools are now being used for purposes beyond managing the pandemic.
The government is now planning to use facial recognition technology along with Aadhaar to authenticate people before giving them vaccine shots.
Aarogya Setu is now linked with the vaccination process. Beneficiaries have been provided an option to register through Aarogya Setu. The pandemic has also provided a means for the government to bring in changes to health policies and introduce the National Health Data Management Policy for the creation of a Unique Health Identity Number for citizens.
Vaccination and digital platforms
The use of digital technology has extended to the vaccination process through the deployment of the Covid Vaccine Intelligence Network (Co-WIN) platform. During the first phase of inoculation, beneficiaries were required to register on the Co-WIN app, while in the subsequent phases, registration was to be done on the Co-WIN website. The beneficiary is required to upload a photo identity proof.
While Aadhaar has been identified as one of the seven documents that can be uploaded for this, the Health Ministry has clarified that Aadhaar is not mandatory for registration either through Co-WIN or through Aarogya Setu. However, as per media reports, certain vaccination centres still seem to insist on Aadhaar identity even though beneficiaries may have used another identity proof to register on the Co-WIN website.
It is also pertinent to note that the website did not have a privacy policy till the Delhi High Court issued directions on June 2, 2021. The privacy policy hyperlinked on the Co-WIN app directed the user to the Health Data Policy of the National Health Data Management Policy, 2020.
The vaccination drive has been used as a means to push the health identity project forward, as beneficiaries who opted to provide Aadhaar as identity proof have also been issued a health identity number on their vaccination certificate. It is interesting to note that Co-WIN's privacy policy now states that a beneficiary who uses Aadhaar as identity proof can 'opt' to get a Unique Health ID. However, as a recent report revealed, health identity numbers have already been generated for certain beneficiaries without obtaining their consent for the purpose.
Have the apps been successful?
One could argue that privacy concerns are a worthwhile tradeoff in order to contain the spread of the pandemic. But it is worth examining how successful these technologies have been. In reality, the use of digital technology at every stage of combating the pandemic has clearly highlighted the extent of our digital divide. As per data from TRAI, there are around 750 million internet subscribers in India, which is only a little more than half of India's estimated 1.3 billion citizens, and this gap has a significant impact on the efficacy of the government's strategies. Aarogya Setu has fallen far short of its goal of near-universal adoption and has seen limited uptake in much of the country, which has severely limited its efficacy in tracing the spread of the virus. Research from Maulana Azad Medical College has cited socio-economic inequalities, educational barriers and the lack of smartphone penetration as the key causes behind the app's limited success, pointing back to the digital divide. Moreover, the app has brought with it a host of associated problems, including lateral surveillance and function creep caused by the addition of new features. All of this, along with the previously mentioned privacy concerns, has served to hamper public trust and adoption.
A similar situation is seen in the case of vaccination and the Centre's Co-WIN web portal. The need for registration, first on the Co-WIN app and later on the Co-WIN web portal, has disproportionately affected those who have no or limited digital access. Many of them belong to vulnerable groups such as migrant and informal sector workers (mainly from disadvantaged castes), LGBTQIA+ individuals, sex workers and both the urban and rural poor. These issues have also been acknowledged by the Supreme Court, which raised serious concerns about the government being able to achieve its stated object of universal vaccination.
As the inoculation exercise opened up for the 18-45 age group, it increasingly favoured the urban population, who possessed the technological and digital literacy to create or access a host of tools. One need only look at the wave of automated Co-WIN bots that arose as soon as the vaccination process was expanded to see how these dynamics manifested.
Ultimately, the digital-driven approach that governments have adopted has resulted in a number of issues — most notably, data privacy and exclusion. Going forward, government strategies must actively account for these factors and ensure that citizens' rights are adequately protected.
Submission to the Facebook Oversight Board in Case 2021-008-FB-FBR: Brazil, Health Misinformation and Lockdowns
Background
The Oversight Board is an expert body created to exercise oversight over Facebook’s content moderation decisions and enforcement of community guidelines. It is entirely independent from Facebook in its funding and administration and provides decisions on questions of policy as well as individual cases. It can also make recommendations on Facebook’s content policies. Its decisions are binding on Facebook, unless implementing them could violate the law. Accordingly, Facebook implements these decisions across identical content with parallel context, when it is technically and operationally possible to do so.
In June 2021, the Board made an announcement soliciting public comments on case 2021-008-FB-FBR, concerning a Brazilian state level medical council’s post questioning the effectiveness of lockdowns during the COVID-19 pandemic. Specifically, the post noted that lockdowns (i) are ineffective; (ii) lead to an increase in mental disorders, alcohol abuse, drug abuse, economic damage etc.; (iii) are against fundamental rights under the Brazilian Constitution; and (iv) are condemned by the World Health Organisation (“WHO”). These assertions were backed up by two statements (i) an alleged quote by Dr. Nabarro (WHO) stating that “the lockdown does not save lives and makes poor people much poorer”; and (ii) an example of how the Brazilian state of Amazonas had an increase in deaths and hospital admissions after lockdown. Ultimately, the post concluded that effective COVID-19 preventive measures include education campaigns about hygiene measures, use of masks, social distancing, vaccination and extensive monitoring by the government — but never the decision to adopt lockdowns. The post was viewed around 32,000 times and shared over 270 times. It was not reported by anyone.
Facebook did not take any action against the post, since in its view the post did not violate its community standards. Moreover, the WHO has not advised Facebook to remove claims against lockdowns. In this scenario, Facebook referred the case to the Oversight Board, citing its public importance.
In its announcement, the Board sought answers on the following points:
- Whether Facebook's decision to take no action against the content was consistent with its Community Standards and other policies, including the Misinformation and Harm policy (which sits within the rules on Violence and Incitement).
- Whether Facebook's decision to take no action is consistent with the company's stated values and human rights commitments.
- Whether, in this case, Facebook should have considered alternative enforcement measures to removing the content (e.g., the False News Community Standard places an emphasis on "reduce" and "inform," including: labelling, downranking, providing additional context etc.), and what principles should inform the application of these measures.
- How Facebook should treat content posted by the official accounts of national or sub-national level public health authorities, including where it may diverge from official guidance from international health organizations.
- Insights on the post's claims and their potential impact in the context of Brazil, including on national efforts to prevent the spread of COVID-19.
- Whether Facebook should create a new Community Standard on health misinformation, as recommended by the Oversight Board in case decision 2020-006-FB-FBR.
Submission to the Board
Facebook’s decision to take no action against the post is consistent with its (i) Violence and Incitement community standard read with the COVID-19 Policy Updates and Protections; and (ii) False News community standard. Facebook’s website as well as all of the Board’s past decisions refer to the International Covenant on Civil and Political Rights’ (ICCPR) jurisprudence based three-pronged test of legality, legitimate aim, and necessity and proportionality in determining violations of Facebook’s community standards. Facebook must apply the same principles to guide the use of its enforcement actions too, keeping in mind the context, intent, tone and impact of the speech.
First, none of Facebook's aforementioned rules contains an explicit prohibition on content questioning lockdown effectiveness. There is nothing to indicate that "misinformation", which is undefined, includes within its scope information about the effectiveness of lockdowns. The World Health Organisation has also not advised against such posts. Applying the principle of legality, no person could reasonably foresee that such content is prohibited. Accordingly, Facebook's community standards have not been violated.
Second, the post does not meet the threshold of causing "imminent" harm stipulated in the community standards. Case decision 2020-006-FB-FBR notes that an assessment of "imminence" is made with reference to factors like context, speaker credibility, language etc. Here, the post's language and tone, including its quoting of experts and case studies, indicate that its intent is to encourage informed, scientific debate on lockdown effectiveness.
Third, Facebook's False News community standard does not contain any explicit prohibitions, so there is no question of its violation. Any decision to the contrary may go against the standard's stated policy rationale of not stifling public discourse, and create a chilling effect on posts questioning lockdown efficacy. This would set a problematic precedent that Facebook would be mandated to implement.
Presently, Facebook cannot remove the post since no community standards have been violated. Facebook must not reduce the post’s circulation since this may stifle public discussion around lockdown effectiveness. Further, its removal would have resulted in violation of the user’s right to freedom of opinion and expression, as guaranteed by the Universal Declaration of Human Rights (UDHR) and the ICCPR, which are in turn part of Facebook’s Corporate Human Rights Policy.
Instead, Facebook can provide additional context along with the post through its "related articles" feature, by showing fact-checked articles discussing the benefits of lockdowns. This approach is the most beneficial since (i) it is less restrictive than reducing circulation of the post; and (ii) it balances interests better than taking no action at all, by allowing people to be informed about both sides of the debate on lockdowns so that they can make an informed assessment.
Further, Facebook's treatment of content posted by official accounts of national or sub-national health authorities should be circumscribed by its updated Newsworthy Content Policy, and the Board's decision in case 2021-001-FB-FBR, which adopted the Rabat Plan of Action to determine whether a restriction on freedom of expression is required to prevent incitement. The Rabat Plan of Action proposes a six-prong test that considers: a) the social and political context, b) the status of the speaker, c) intent to incite the audience against a target group, d) the content and form of the speech, e) the extent of its dissemination and f) the likelihood of harm, including imminence. Apart from taking these factors into consideration, Facebook must perform a balancing test to determine whether the public interest in the information in the post outweighs the risks of harm.
In its decision in case 2020-006-FB-FBR, the Board recommended that Facebook: a) set out a clear and accessible Community Standard on health misinformation; b) consolidate and clarify existing rules in one place (including defining key terms such as misinformation); and c) provide "detailed hypotheticals that illustrate the nuances of interpretation and application of [these] rules" to give users further clarity. Facebook has since notified its implementation measures, stating that it has fully implemented these recommendations, thereby bringing itself into compliance.
Finally, Brazil is one of the countries worst affected by the pandemic, and it has also been struggling to combat the spread of fake news during this period. President Bolsonaro has been criticised for curbing free speech by using a dictatorship-era national security law, and has been questioned over his handling of the pandemic, including his own controversial statements questioning lockdown effectiveness. In such a scenario, the post may be perceived through a political lens rather than as an attempt at scientific discussion. However, it is unlikely that the post will lead to any knee-jerk reactions, since people are already familiar with the lockdown debate, on which much has already been said and done. A post that merely reiterates one side of an ongoing debate is not likely to cause people to act in violation of lockdowns.
For detailed explanation on these questions, please see here.
At the Heart of Crypto Investing, There is Tether. But Will its Promise Pan Out?
A man walks past an electric board showing exchange rates of various cryptocurrencies at Bithumb cryptocurrencies exchange in Seoul, South Korea, January 11, 2018. Photo: Reuters/Kim Hong-Ji
The article was published in the Wire on June 30, 2021.
Cryptocurrencies have become the centerpiece of the global digital zeitgeist in 2021. Anyone remotely familiar with them would probably be able to name a few of the famous ones like Bitcoin and Ethereum.
However, there exists a lesser-known cryptocurrency at the heart of this $3 trillion market: Tether. Issued by the company Tether.ltd, Tether forms the foundation of modern-day crypto trading and could potentially be at the centre of one of the biggest schemes in financial history.
Tether is a special type of cryptocurrency known as a stablecoin. Unlike coins such as Bitcoin and Ethereum, Tether's monetary value is not a function of the forces of the crypto market but is instead pegged to the US dollar. What this means is that 1 Tether is intended to always be worth exactly 1 USD. This fixed value has allowed it to occupy a unique position within the crypto ecosphere, becoming the de facto standard of liquidity within these markets by acting as a widely accepted substitute for the US dollar.
At present, buying cryptocurrency using traditional fiat money (like dollars or rupees) comes with certain challenges. Purchasing with traditional currencies requires the use of banking services that come with a host of fees and time delays. At the same time, purchasing one type of crypto coin like Bitcoin with another coin like Ethereum can prove difficult due to the constantly shifting values of both coins. This is where Tether comes in. Acting as a bridge between the traditional financial world and the crypto market, it has become a sort of digital dollar — one that makes cryptocurrency trading significantly easier.
The problem with Tether
On the surface, Tether seems like a perfectly reasonable innovation that looks to fill in the gaps that exist within the market. Dig a little deeper than the surface and the discrepancies start to appear.
The premise of Tether’s appeal comes from its value being pegged to the US dollar. The company initially claimed to have achieved this by ensuring that their currency was “fully backed” by cash reserves.
The process looked something like this: You gave the company 1 US dollar and they gave you 1 Tether that you could use to make other crypto purchases. If you returned your Tether, you would get your dollar back and the Tether you returned would be ‘burned’ (removed from circulation). This meant that for every Tether that existed the company would have 1 corresponding dollar in reserve in the bank, ensuring that the currency was backed.
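In other words, a fully backed stablecoin is essentially a ledger that preserves one invariant: tokens in circulation never exceed dollars held in reserve. The toy Python sketch below illustrates that invariant under the issuer's own stated model; the class and method names are hypothetical, and this is an illustration of the claim being made, not a description of Tether's actual systems.

```python
class ToyStablecoinIssuer:
    """Toy model of a fully reserved stablecoin (1 token redeemable for 1 USD).

    The promised invariant: tokens_in_circulation <= usd_reserves.
    The allegation against Tether is that tokens were issued without a
    matching dollar deposit, silently breaking this invariant.
    """

    def __init__(self):
        self.usd_reserves = 0.0
        self.tokens_in_circulation = 0.0

    def mint(self, usd_deposited):
        # Issue tokens only against an equal dollar deposit.
        self.usd_reserves += usd_deposited
        self.tokens_in_circulation += usd_deposited
        return usd_deposited  # tokens handed to the depositor

    def burn(self, tokens_returned):
        # Redeem tokens for dollars and remove ("burn") them from circulation.
        assert tokens_returned <= self.tokens_in_circulation
        self.tokens_in_circulation -= tokens_returned
        self.usd_reserves -= tokens_returned
        return tokens_returned  # dollars paid back out

    def fully_backed(self):
        return self.usd_reserves >= self.tokens_in_circulation


issuer = ToyStablecoinIssuer()
issuer.mint(100.0)            # 100 USD in, 100 tokens out
issuer.burn(40.0)             # 40 tokens burned, 40 USD paid out
print(issuer.fully_backed())  # True, as long as every mint has a matching deposit
```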
An illustrated image shows US dollars, cryptocurrency and NFT written on a phone. Photo: Marco Verch/Flickr CC BY 2.0
However, there was an enormous flaw in this system. Since Tether.ltd was the sole creator of the coin, it could create as many of them as it wanted while falsely claiming that these new Tethers were also fully backed by cash reserves. And this is exactly what is alleged to have happened in a case brought against Tether.ltd by the New York Attorney General's office. The filings made by the attorney general noted that in its investigation it found that not only did the company have inadequate reserves to back the number of Tethers in circulation, but that there were significant periods during which the company did not have any bank accounts or any access to banking at all — thereby exposing Tether's claims of being backed as demonstrably false.
The scam was alleged to have worked as follows. First, the company would issue new coins that were not actually backed by any corresponding dollars. These new Tethers were then transferred to Bitfinex – a cryptocurrency exchange that was owned by Tether.ltd. These unbacked Tethers would then be used to buy bitcoin, with the momentum from this increased demand causing the price of bitcoin to rise. They would then exchange their newly appreciated bitcoins for actual US dollars — thereby essentially creating real money where none had previously existed. While there is no conclusive evidence for this being true, research has pointed to increased tether supply causing a boom in bitcoin prices in 2017.
The company has since altered its claim from being backed by cash reserves, to now being backed by a number of assets (which it refers to as its ‘reserves’) – of which cash only formed a small subset. It maintains that the cumulative value of their assets does equal the number of Tethers in circulation, though it is worth noting that the veracity of these claims has been consistently challenged.
How does this affect the rest of the crypto market?
Tether’s problems are unfortunately not limited to itself, but rather affect the entire crypto marketplace. If the New York Attorney General’s filings are true, then it would mean that a significant amount of the demand in the crypto market could potentially not be backed by any actual purchasing power and that the price of cryptocurrencies like bitcoin have been artificially inflated.
If Tether were ever found (either by a regulatory body or through leaks) to have been creating unbacked units of its currency, then a significant amount of buying pressure would disappear from the crypto market. And since Tether isn't just any other cryptocurrency but a medium of exchange in the crypto world, its downfall would have severe knock-on effects that could cause a serious crash across the entire crypto market.
Quantifying such knock-on effects would be extremely difficult. However, as previously mentioned, research has pointed to a significant relationship between Tether's supply and increases in bitcoin prices. This suggests that the reverse would likely also hold: a rapid decrease in Tethers would cause a significant decrease in the price of bitcoin and other cryptocurrencies.
Ultimately, no one knows for sure whether Tether is a scheme or not. However, mounting evidence from a number of independent sources has pointed to discrepancies in the company's functioning. What is clear is that, if the allegations are in fact true, Tether poses a serious risk to the entire crypto marketplace and its investors.
Right to Exclusion, Government Spaces, and Speech
This article first appeared on the Indian Journal of Law and Technology (IJLT) blog, and can be accessed here. Cross-posted with permission.
---
Introduction
On April 8, the Supreme Court of the United States (SCOTUS) vacated the judgment of the US Court of Appeals for the Second Circuit in Knight First Amendment Institute v Trump. In that case, the Court of Appeals had precluded Donald Trump, then-POTUS, from blocking critics from his Twitter account, on the ground that such action amounted to an erosion of their constitutional rights. The Court of Appeals had held that his use of @realDonaldTrump in his official capacity had transformed the nature of the account from private to public, and that blocking users he disagreed with therefore amounted to viewpoint discrimination, something incompatible with the First Amendment.
The SCOTUS ordered the case to be dismissed as moot, on account of Trump no longer being in office. Justice Clarence Thomas issued a ten-page concurrence that went into additional depth regarding the nature of social media platforms and user rights. It must be noted that the concurrence does not hold any direct precedential weightage, since Justice Thomas was not joined by any of his colleagues on the bench. However, given that similar questions of public import are currently being deliberated in the ongoing Sanjay Hegde litigation in the Delhi High Court, Justice Thomas' concurrence might hold some persuasive weightage in India. While the facts of these litigations are starkly different, both are nevertheless characterized by important questions about applying constitutional doctrines to private parties like Twitter and about the supposedly 'public' nature of social media platforms.
In this essay, we consider the legal questions raised in the opinion as possible learnings for India. In the first part, we analyze the key points raised by Justice Thomas, vis-a-vis the American legal position on intermediary liability and freedom of speech. In the second part, we apply these deliberations to the Sanjay Hegde litigation, as a case-study and a roadmap for future legal jurisprudence to be developed.
A flawed analogy
At the outset, let us briefly refresh the timeline of Trump's tryst with Twitter and the history of this litigation: the Court of Appeals decision was issued in 2019, when Trump was still in office. After the November 2020 presidential election, in which he was voted out, his supporters broke into Capitol Hill. Much of the blame for the attack was pinned on Trump's use of social media channels (including Twitter) to instigate the violence, and following this, Twitter suspended his account permanently.
It is this final fact that Justice Thomas' reasoning seized upon. He noted that the power of a private party like Twitter to do away with Trump's account altogether was at odds with the Court of Appeals' earlier finding about the public nature of the account. He deployed a hotel analogy to justify this: government officials renting a hotel room for a public hearing on regulation could not kick out a dissenter, but if the same officials gathered informally in the hotel lounge, they would be within their rights to ask the hotel to remove a heckler. The difference between the two situations is that "the government controls the space in the first scenario, the hotel, in the latter." He noted that Twitter's conduct was similar to the second situation, where it "control(s) the avenues for speech". Accordingly, he dismissed the idea that the original respondents (the users whose accounts were blocked) had any First Amendment claims against Trump's initial blocking action, since ultimate control of the 'avenue' lay with Twitter, not Trump.
On the facts of the case, however, this analogy was not justified. The Court of Appeals had not concerned itself with the question of private 'control' of entire social media spaces, and given the timeline of the litigation, it could not have pre-empted such considerations within the judgment. In fact, the only takeaway from the original decision had been that an elected representative's use of his social media account for official purposes transformed only that particular space into a public forum where constitutional rights would find applicability. In delving into questions of 'control' and 'avenues of speech', issues that had previously been unexplored, Justice Thomas conflated a rather specific point with a much bigger, general conundrum. Further deliberations in the concurrence are accordingly built upon this flawed premise.
Right to exclusion (and must carry claims)
From here, Justice Thomas identified the problem to be “private, concentrated control over online content and platforms available to the public”, and brought forth two alternate regulatory systems — common carrier and public accommodation — to argue for ‘equal access’ over social media space. He posited that successful application of either of the two analogies would effectively restrict a social media platform’s right to exclude its users, and “an answer may arise for dissatisfied platform users who would appreciate not being blocked”. Essentially, this would mean that platforms would be obligated to carry all forms of (presumably) legal speech, and users would be entitled to sue platforms in case they feel their content has been unfairly taken down, a phenomenon Daphne Keller describes as ‘must carry claims’.
Again, this is a strange direction for the argument to take, since the original facts of the case were not about 'dissatisfied platform users', but about an elected representative's account being used in the dissemination of official information. Beyond the initial deliberation on 'private' control, Justice Thomas did not seem interested in exploring this original legal position, and instead emphasized analogizing social media platforms in order to enforce 'equal access', finally arriving at a position that would be legally untenable in the USA.
The American law on intermediary liability, as embodied in Section 230 of the Communications Decency Act (CDA), has two key components: first, intermediaries are protected against the contents posted by its users, under a legal model termed as ‘broad immunity’, and second, an intermediary does not stand to lose its immunity if it chooses to moderate and remove speech it finds objectionable, popularly known as the Good Samaritan protection. It is the effect of these two components, combined, that allows platforms to take calls on what to remove and what to keep, translating into a ‘right to exclusion’. Legally compelling them to carry speech, under the garb of ‘access’ would therefore, strike at the heart of the protection granted by the CDA.
Learnings for India
In his petition to the Delhi High Court, Senior Supreme Court Advocate, Sanjay Hegde had contested that the suspension of his Twitter account, on the grounds of him sharing anti-authoritarian imagery, was arbitrary and that:
- Twitter was carrying out a public function and would be therefore amenable to writ jurisdiction under Article 226 of the Indian Constitution; and
- The suspension of his account had amounted to a violation of his right to freedom of speech and expression under Article 19(1)(a) and his rights to assembly and association under Article 19(1)(b) and 19(1)(c); and
- The government has a positive obligation to ensure that any censorship on social media platforms is done in accordance with Article 19(2).
The first two prongs of the original petition are perhaps easily disputed: as previous commentary has pointed out, existing Indian constitutional jurisprudence on 'public function' does not implicate Twitter, and accordingly, it would be difficult to make out a case that account suspensions, no matter how arbitrary, amount to a violation of the user's fundamental rights. It is the third contention that requires some additional insight in the context of our previous discussion.
Does the Indian legal system support a right to exclusion?
Suing Twitter to reinstate a suspended account, on the ground that such suspension was arbitrary and illegal, is in essence a request to limit Twitter's right to exclude its users. The petition serves as an example of a must-carry claim in the Indian context and vindicates Justice Thomas' (misplaced) defence of 'dissatisfied platform users'. Legally, such claims perhaps have a better chance of succeeding here, since the expansive protection granted to intermediaries via Section 230 of the CDA is noticeably absent in India. Instead, intermediaries are bound by conditional immunity, where availment of a 'safe harbour', i.e., exemption from liability, is contingent on fulfilment of statutory conditions under section 79 of the Information Technology (IT) Act and the rules made thereunder. Interestingly, in his opinion, Justice Thomas briefly considered a situation where the immunity under Section 230 is made conditional: to gain Good Samaritan protection, platforms might be required to satisfy specific conditions, including 'nondiscrimination'. This is controversial (and, as commentators have noted, wrong), since it has the potential to whittle down the US' 'broad immunity' model of intermediary liability into a system that would resemble the Indian one.
It is worth noting that in the newly issued Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, proviso to Rule 3(1)(d) allows for “the removal or disabling of access to any information, data or communication link [...] under clause (b) on a voluntary basis, or on the basis of grievances received under sub-rule (2) [...]” without dilution of statutory immunity. This does provide intermediaries a right to exclude, albeit limited, since its scope is restricted to content removed under the operation of specific sub-clauses within the rules, as opposed to Section 230, which is couched in more general terms. Of course, none of this precludes the government from further prescribing obligations similar to those prayed in the petition.
On the other hand, it is a difficult proposition to support that Twitter's right to exclusion should be circumscribed by the Constitution, as prayed. In the petition, this argument is built on the judgment in Shreya Singhal v Union of India, where it was held that takedowns under section 79 are to be carried out only on receipt of a court order or a government notification, and that the scope of such an order would be restricted to Article 19(2). This, in the petitioner's submission, meant that "any suo-motu takedown of material by intermediaries must conform to Article 19(2)".
To understand why this argument does not work, it is important to consider the context in which the Shreya Singhal judgment was issued. Previously, intermediary liability was governed by the Information Technology (Intermediaries Guidelines) Rules, 2011 issued under section 79 of the IT Act. Rule 3(4) made provisions for sending takedown orders to the intermediary, and the prerogative to send such orders was on ‘an affected person’. On receipt of these orders, the intermediary was bound to remove content and neither the intermediary nor the user whose content was being censored, had the opportunity to dispute the takedown.
As a result, the potential for misuse was wide-open. Rishabh Dara’s research provided empirical evidence for this; intermediaries were found to act on flawed takedown orders, on the apprehension of being sanctioned under the law, essentially chilling free expression online. The Shreya Singhal judgment, in essence, reined in this misuse by stating that an intermediary is legally obliged to act only when a takedown order is sent by the government or the court. The intent of this was, in the court’s words: “it would be very difficult for intermediaries [...] to act when millions of requests are made and the intermediary is then to judge as to which of such requests are legitimate and which are not.”
In light of this, if Hegde's petition succeeds, intermediaries would be obligated to subsume the entirety of Article 19(2) jurisprudence in their decision-making, interpret and apply it perfectly, and be open to petitions from users when they fail to do so. This would be a startling undoing of the court's original intent in Shreya Singhal. Such a reading also means limiting an intermediary's prerogative to remove speech that may not necessarily fall within the scope of Article 19(2) but is still systematically problematic, including unsolicited commercial communications. Further, most platforms today are dealing with an unprecedented spread and consumption of harmful, misleading information; by limiting their right to exclude speech in this manner, we might exacerbate this problem.
Government-controlled spaces on social media platforms
On the other hand, the original finding of the Court of Appeals, regarding the public nature of an elected representative's social media account and the First Amendment rights of the people to access such an account, might yet prove instructive for India. While the SCOTUS order erases the precedential weight of the original case, similar judgments have been issued by other courts in the USA, including by the Fourth Circuit and as a result of a lawsuit against a Texas Attorney General.
A similar situation can be envisaged in India as well. The Supreme Court has repeatedly held that Article 19(1)(a) encompasses not just the right to disseminate information, but also the right to receive information, including receiving information on matters of public concern. Additionally, in Secretary, Ministry of Information and Broadcasting v Cricket Association of Bengal, the Court had held that the right of dissemination included the right of communication through any media: print, electronic or audio-visual. Then, if we assume that government-controlled spaces on social media platforms, used in dissemination of official functions, are ‘public spaces’, then the government’s denial of public access to such spaces can be construed to be a violation of Article 19(1)(a).
Conclusion
As indicated earlier, despite the facts of the two litigations being different, the legal questions embodied within them converge strikingly, inasmuch as both are examples of the growing discontent around the power wielded by social media platforms, and of the flawed attempts at fixing it.
While the above discussion might throw some light on the relationship between an individual, the state and social media platforms, many questions continue to remain unanswered. For instance, once we establish that users have a fundamental right to access certain spaces within a social media platform, does the platform have a right to remove that space altogether? If it does so, can a constitutional remedy be sought against the platform? Initial commentary on the Court of Appeals' decision had contended that the takeaway from that judgment was that constitutional norms have primacy over the platform's own norms of governance. In such light, would the platform be constitutionally obligated not to suspend a government account, even if the content on such an account continues to be harmful and in violation of its own moderation standards?
This is an incredibly tricky dimension of the law, made trickier still by the dynamic nature of the platforms, the intense political interests permeating the need for governance, and the impacts on users in the instance of a flawed solution. Continuous engagement, scholarship and emphasis on having a human rights-respecting framework underpinning the regulatory system, are the only ways forward.
---
The author would like to thank Gurshabad Grover and Arindrajit Basu for reviewing this piece.
Community Data and Decisional Autonomy: Dissecting an Indian Legal Innovation for Emerging Economies
Key Takeaways
- Concerned with the power asymmetries between big tech companies and Indian citizens in terms of data sharing and processing practices, the Indian government has put in place a number of policies seeking to unlock the developmental potential of data for Indian citizens.
- While several policy instruments are still works in progress, and need improvement to be in line with India's constitutional framework, international human rights law, and economic welfare, they have advanced some important conceptual innovations. One such innovation is "community data," which attempts to delineate the rights and interests a community would have in its data.
- However, the existing framework does not satisfactorily define community, and does not sufficiently balance the privacy and decisional autonomy of individuals with the interests of the community and the nation in economic and social empowerment.
- The gap can be addressed by looking at Indian jurisprudence on privacy and decisional autonomy, and analysing how existing case law (T Sareetha v T Venkata Subbaiah and KS Puttaswamy I v Union of India) can be applied to the digital era. As Europe grapples with debates around "technological sovereignty," the framing of community data in line with Indian privacy jurisprudence may be valuable.
Policy Recommendation 1:
By studying unique Indian case law on privacy that deals with the question of individual and group rights, we find that decisional autonomy is the fulcrum of privacy jurisprudence, and thus should be the edifice for any policy framework. In a case of conflict between individual and group rights, individual rights must prevail.
Policy Recommendation 2:
Providing communities with adequate rights and interests while also prioritising individual rights is very much in line with human rights principles espoused by Europe, and endorsed in the GDPR, and Europe should consider how an improved version of India’s community data approach may be used to further its digital sovereignty vision without compromising on European human rights ethos.
You can read the full paper on Medium here
Acknowledgments
We are indebted to Pooja Saxena for thoroughly editing, reviewing and greatly improving the piece. We would also like to thank Agnidipto Tarafder, Katharina Naumann, and the anonymous peer-reviewer for insights. All errors remain our own.
This case study is part of an edited volume of case studies ‘Digital Asia: highlighting digitization trends in Asia’ co-published by Konrad-Adenauer-Stiftung Regional Programme Political Dialogue, Singapore, and Digital Asia Hub.
Comments on the Cinematograph (Amendment) Bill, 2021
This submission presents comments by CIS on the Cinematograph (Amendment) Bill, 2021 ("the Bill"), which was released on 18 June 2021 for public comments. These comments examine whether the proposed amendments are compatible with established constitutional principles, precedents, previous policy positions and existing law. While we appreciate the opportunity to submit comments, we note that the time allotted for doing so was less than a month (the deadline for submission was 2 July 2021). Given the immense public import of the proposed changes, and the number of stakeholders involved, we highlight that the Ministry of Information and Broadcasting (MIB) should have provided more time for the submission of comments.
Read our full submission here.
State of Consumer Digital Security in India
Since 2006, successive Union governments in India have shown an increased focus on digital governance. The National e-Governance Plan was launched by the UPA government in 2006, and several state-led digital projects, such as digitisation of the filing of taxes, the appointment process for passports, corporate governance, and the Aadhaar programme (India's unique digital identity system that utilises biometric and demographic data), arose under it in the form of mission mode projects (projects that are part of the broader National e-Governance initiative, each focusing on specific e-governance aspects, like banking, land records, or commercial taxes). In 2014, when the NDA government came to power, the National e-Governance Plan was subsumed under the government's flagship Digital India project, and several mission mode projects were added. In the meantime, internet connectivity, first in the form of wired connections and later in the form of mobile connectivity, has increased greatly. In the same period, the use of digital services, first through services native to the internet such as email, social networking and instant messaging, and later through the platformisation and disruption of traditional business models in transportation, healthcare, finance and virtually every other sector, has led to a deluge of private digital service providers in India.
Currently, India has 500 million internet users — over a third of its total population — making it the country with the second largest number of Internet users after China. The uptake of these technological services has also been accompanied by several kinds of digital threats that an average digital consumer in India must regularly contend with. This report is a mapping of consumer-facing digital threats in India and is intended to aid stakeholders in identifying and addressing digital security problems. The first part of the report categorises digital threats into four kinds, Personal Data Threats, Online Content Related Threats, Financial Threats, and Online Sexual Harassment Threats. Threats under each category are then defined, with detailed consumer-facing consequences, and past instances where harm has been caused because of these threats.
Read the full report here.
Interoperability and Portability as a Lever to Enhance User Choice and Privacy in Messaging Platforms
Since last year, digital platforms have been actively making headlines in various countries over their acquisitions, raising questions about the anti-competitive nature of their behaviour. In the US, about 46 states, along with the Federal Trade Commission, filed an antitrust case against Facebook in December 2020, accusing it of buying out rivals such as WhatsApp and Instagram[1]. Recently, a US federal court dismissed the states' case as tardy and the FTC's complaint as "legally insufficient"[2]. However, one of the solutions proposed for this problem by various experts and politicians is to break up Facebook[3].
Influential people in India, such as Vijay Shekhar Sharma (CEO, Paytm), argued similarly when WhatsApp updated its privacy policy to share data with Facebook, suggesting that a movement of users towards Signal could break Facebook's monopoly[4]. While it is conceivable that breaking up a platform, or users seeking an alternative to it, will bring an end to its monopoly, is that really so in practice? This post tries to answer that question. In section 1, I discuss the importance of interoperability and portability amongst messaging platforms for tackling monopoly, which, in turn, helps enhance user outcomes such as user choice and privacy. Section 2 discusses the enablers, legislative reimagining, and structural changes required in terms of technology to enable interoperability and portability amongst messaging platforms. In section 3, I discuss the cost structure and profitability of a proposed message gateway entity, followed by the conclusion.
1. Introduction
In the case of the platform economy, the formation of a monopoly is inevitable, especially among messaging platforms, because of (a) network effects and (b) the lack of interoperability and portability between messaging platforms[5]. As the network effect strengthens, more users get locked into a single messaging platform, leading to a lack of user choice (in terms of switching platforms) and to privacy concerns (as messaging platforms grow larger, they pose a higher risk in terms of data breaches, third-party data sharing, etc.). For instance, as a WhatsApp user, it is difficult for me to switch to any other messaging platform as my friends, family and business/work contacts still operate on WhatsApp. Messaging platforms also use the network effect in their favour (a) by increasing the switching cost and (b) by creating a high barrier to entry within the market[6].
If there were interoperability between messaging platforms, I could choose between them freely, thereby negating some of the aforementioned limitations. Therefore, to create a competitive environment amongst messaging platforms and enhance user choice and privacy, it is crucial to have an interoperability and portability framework. To deploy interoperability and portability, it is imperative to have coordination among platforms even while they compete for individual market share[7]. Interoperability and portability will also bring in healthy competition, as platforms will be nudged to explore alternative value propositions to remain competitive in the market[8]. One outcome of this could be better consumer protection through innovation in privacy safeguards, etc. In addition, interoperability and portability could lower the barrier to entry (by breaking the network effect), which could, in turn, increase online messaging penetration in untapped geographies as more messaging platforms emerge in the market.
There are two kinds of interoperability: vertical interoperability, i.e., interoperability of services across complementary platforms, and horizontal interoperability, i.e., interoperability of services between competing platforms. While vertical interoperability exists in the form of cloud systems, multiple-system login, etc., horizontal interoperability is yet to be experimented with at the market level. Nonetheless, recognising the competition concerns in the digital platforms market, the European Union (European Electronic Communications Code[9], Digital Services Act, etc.[10]), the US (Stigler Committee Report[11]) and the UK Competition and Markets Authority[12] are mulling a move towards interoperability amongst digital platforms. Furthermore, Facebook has already commissioned its own efforts towards horizontal interoperability[13] amongst its messaging platforms, i.e., Messenger, WhatsApp and Instagram direct messages. This again adds to the competition concerns, as one platform uses interoperability in its own favour.
Besides, one of the bottlenecks to enabling horizontal interoperability is the lack of technical interoperability, i.e., the ability to accept or transfer data, perform a task, etc., across platforms. In the case of messaging platforms, the lack of technical interoperability arises because different kinds of messaging platforms operate with different technical procedures. Therefore, to have effective horizontal interoperability and portability, it is crucial to streamline technical procedures and have guidelines that enable technical interoperability. In the following section, I discuss the enablers, legislative reimagining, and structural changes required in terms of technology to enable interoperability and portability amongst messaging platforms.
2. Message Gateway Entity
2.1. Formation of Message Gateway Entity to Enable Interoperability
To drive efficacious interoperability, it is imperative to form message gateway entities as for-profit bodies regulated by a regulator (either an existing one such as TRAI or a newly established one). The three key functions of message gateway entities should be to: (a) maintain the standard format for messaging prescribed by a standard-setting council; (b) provide a responsive user message delivery system to messaging platforms; and (c) deliver messages from one messaging platform to another seamlessly and in real time. There should be multiple message gateway entities to enable competition, which will bring more innovation, penetration, and effectiveness. It is also prudent to have private players as message gateway entities: government-led gateway entities are unlikely to be fruitful, since questions of efficacy will arise, and such an arrangement might bring in a tender-style business in which the government has a say in how, and to whom, the service is provided (gatekeeping). The government should set up such an entity itself only if it is a public good (a missing market), which might not be the case for message gateway entities.
Messaging platforms should be mandated, through legislation or executive order, to be a member of at least one message gateway entity in order to provide interoperability benefits to their users. At the same time, messaging platforms can continue to handle internal message delivery (User A to User B within the same platform) themselves.
While message gateway entities will enable interoperability between messaging platforms, it is also crucial for the gateways to be interoperable among themselves even as they compete in the market. For instance, a user on a messaging platform under gateway A should be able to send messages to a user on a messaging platform under gateway B. As competition amongst message gateway entities grows, the enrollment price should also become commensurate and affordable for small and new messaging platforms. In addition, to increase the uptake of interoperability, message gateway entities should develop various awareness programmes at the user level.
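To make this architecture concrete, the sketch below models in Python how a message gateway entity might route a standard-format envelope either to a locally enrolled platform or to a peer gateway. All names, fields and the routing logic are hypothetical illustrations of the functions described above, not a proposed technical specification; real gateways would also need delivery receipts, retries and abuse controls.

```python
from dataclasses import dataclass

@dataclass
class Envelope:
    # Hypothetical standard message format agreed by a standard-setting council.
    sender_uii: str       # uniform identification information (UII) of the sender
    recipient_uii: str    # UII of the recipient
    ciphertext: bytes     # end-to-end encrypted payload; gateways never see plaintext


class MessageGateway:
    """Minimal sketch of a message gateway entity routing between platforms."""

    def __init__(self, name):
        self.name = name
        self.platforms = {}   # platform name -> delivery callback
        self.directory = {}   # recipient UII -> platform name enrolled here
        self.peers = []       # other gateway entities, for cross-gateway routing

    def register_platform(self, platform_name, deliver_callback, member_uiis):
        self.platforms[platform_name] = deliver_callback
        for uii in member_uiis:
            self.directory[uii] = platform_name

    def route(self, envelope):
        # Deliver locally if the recipient's platform is enrolled with this gateway...
        platform = self.directory.get(envelope.recipient_uii)
        if platform is not None:
            self.platforms[platform](envelope)
            return True
        # ...otherwise hand the envelope to a peer gateway that knows the recipient.
        for peer in self.peers:
            if envelope.recipient_uii in peer.directory:
                return peer.route(envelope)
        return False


# Example: a user on "ChatA" (enrolled with gateway A) messages a user on "ChatB"
# (enrolled with gateway B); gateway A forwards the envelope to gateway B.
gateway_a, gateway_b = MessageGateway("A"), MessageGateway("B")
gateway_a.peers.append(gateway_b)
gateway_a.register_platform("ChatA", lambda e: print("ChatA delivers", e), ["+91-111"])
gateway_b.register_platform("ChatB", lambda e: print("ChatB delivers", e), ["+91-222"])
gateway_a.route(Envelope("+91-111", "+91-222", b"...encrypted..."))
```

The design choice worth noting is that the gateway only ever handles an opaque ciphertext and a recipient identifier, which is what would allow the end-to-end encryption and data minimisation requirements discussed below to be met.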
Further, the regulatory guidelines for message gateway entities (overseen by the regulator) must be uniform, with leeway for gateways to innovate technologically in order to attract messaging platforms. Borrowing facets from various existing legislations, the aspects suggested below should inform these uniform guidelines:
-
End-to-end encryption: As part of the uniform guidelines, message gateway entities should be mandated to enable end-end encryption for message delivery. In contrast, the recent Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules 2021[14] tries to break the end-end encryption by mandating significant social media intermediaries to identify the first originator of a particular message (part II section 4 rule 2) sought through an order. As this mandate impinges upon user privacy and free speech, the Indian government should revise this rule to keep end-to-end encryption intact. Besides, WhatsApp (a significant social media intermediary) has moved to Delhi High Court to block the implementation of the rules, which came into force on May 27th, 2021[15]. Also, Rule 4(2) of IT Rules 2021 contradicts provisions of the PDP Bill 2019 such as privacy by design[16] (Section 22) and the right to be forgotten (Section 20).
- Neutrality: The guidelines should have a strict rule enforcing non-discrimination (similar to the Indian Government's 2018 net neutrality principles[17]) in the delivery of messages by message gateway entities. Discrimination against both messaging platforms and other message gateway entities has to be scrutinised. In addition, to hold message gateway entities accountable, the guidelines should mandate monthly public disclosure of information on message deliveries and failures (at the messaging platform level, with information on which gateway entity they are routed through) in a prescribed user-friendly format.
- Standard Format Setting: As various messaging platforms follow different formats for providing messaging services, seamless interoperability requires message gateway entities to adhere to a standard format that is compatible with the formats followed within the market. This standard format has to keep up with technological evolution in this space and should be formulated by an independent standard-setting council (through stakeholder consultation) commissioned by the regulator. Maintaining the standard format falls within the ambit of the message gateway entities and should be governed by the regulator.
- Uniform identification information: Users of messaging platforms identify other users through various means – on WhatsApp a telephone number, on Instagram a profile name. The unique identification information (UII) of a user (which can be something existing, like a phone number, or a new dedicated identification number) therefore has to be standardised. Message gateway entities should facilitate messaging platforms in this process, and the generation of UII should be seamless for the user. A user's UII should also serve as an additional way to search for other users within a messaging platform, and would be crucial for messaging across platforms.
- Consumer choice: While interoperability should be the default for all users, there has to be a user-friendly opt-out for users who wish to compartmentalise different kinds of messages depending on the platform used. The unique identification information (in the case of a new dedicated number) of a user who has opted out must be deactivated to avoid misuse. One of the major reasons users may opt out of interoperability is to keep their different digital public spheres (personal, leisure, professional, etc.) separate. To address this, messaging platforms should enable options such as (a) an optional notification for cross-platform messages with a snooze option, so that the user can decide whether a cross-platform message should reach the enrolled messaging platform at a given time, and (b) an "opt-out from messaging platform" setting that lets users disable messages from a list of platforms. Users might also opt out due to lack of trust; this has to be tackled both by message gateway entities, by creating awareness amongst users about their rights, and by messaging platforms, by providing a user-friendly privacy policy.
- Data Protection: As the emergence of message gateway entities creates a new data flow, this flow has to take a data minimisation approach. Message gateway entities should be recognised as data processors (those who process data for a data fiduciary, i.e., the messaging platforms). They should adhere to the upcoming Personal Data Protection regime[18] to protect data principals' personal data and collect personal data as per the proportionality principle. Message gateway entities should not collect any non-personal data or process any form of data to infer the behavioural traits of data principals or messaging platforms. In addition, the name of the message gateway entity enrolled by the messaging platform, and the data collected and processed by that entity, should be disclosed to data principals through the messaging platform's privacy policy.
- Licensing: There should be a certain level of restriction on licensing to create a level playing field. Applicants seeking to become message gateway entities should not have an economic interest in any messaging platform or social media intermediary. Applicants also have to ensure that message delivery failure is kept low, in the range of 1–2%. To verify low delivery failure, data protection compliance and other requirements, message gateway entities should go through technical and regulatory sandbox testing before a licence is issued.
- Consumer Protection: Users should be given a choice to block another user (using unique identification information) for various reasons – personal, non-personal, phishing, etc. After a stipulated number of blocks by multiple users, the suspected user should be denied access (temporarily or permanently, depending on the reasons) to message gateway entities. Before denying access, the message gateway entities should direct the messaging platforms to notify the user. There has to be a robust grievance redressal mechanism for users and messaging platforms to raise complaints regarding blocking, data protection, phishing, etc. Besides, unique identification information has to be leveraged to prevent bot accounts and impostors. In addition, message gateway entities should be compatible with measures taken by messaging platforms to prevent the spread of disinformation and misinformation (such as restrictions on the number of recipients for forwarded messages).
The figure below showcases the use case of the message exchange with the introduction of message gateway entities.
Source: Author’s own illustration of the process of interoperability
2.2. Portability Feature to Complement Interoperability
In the case of messaging platforms, when we talk about portability, it is essential to distinguish two kinds: (a) portability of the user's unique identification information from one platform to another, seamlessly; and (b) portability of the user's data from one platform to another, following the portability of unique identification information. As the generation of unique identification information is facilitated by message gateway entities, its portability has to be handled by the respective message gateway entity. Adopting some of the processes and protocols from Mobile Number Portability[19] mandated by the Telecom Regulatory Authority of India, the standard-setting council for message gateway entities (discussed above) should streamline the process of porting unique identification information across message gateway entities.
Following the porting of the unique identification information, the message gateway entity should trigger a notification to the messaging platform (on behalf of the user) to transfer the user's data to the requested platform. As provided in Chapter V, Section 19(1)(b) of The Personal Data Protection Bill, 2019, messaging platforms should transfer the user data to the platform notified by the message gateway entity in the suggested or a compatible format.
Globally, since the emergence of the General Data Protection Regulation (GDPR) and other legislation mandating data portability, platforms launched the Data Transfer Project (DTP)[20] in 2018 to create a uniform format for porting data. The DTP has three components, of which two are crucial here: Data Models and Company-Specific Adapters. A Data Model is a set of common formats established to enable portability; in the case of messaging platforms, the standard-setting council can formulate the Data Model.
Under Company-Specific Adapters, there are Data Adapters and Authentication Adapters. The Data Adapter converts the exporting platform's data format into the Data Model, and then into the importing platform's data format. The Authentication Adapter enables users to provide consent for the data transfer. While Company-Specific Adapters under the DTP are meant for digital platforms broadly, message gateway entities can adopt the same framework and act as both Data Adapter and Authentication Adapter to enable user data portability amongst messaging platforms. Message gateway entities can help enrolled messaging platforms with format conversion for data portability and support the users' authentication process using the unique identification information. Besides, as message gateway entities are already uniform and interoperable, transfers across message gateway entities can also be made possible.
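As a rough illustration of how a gateway could play the Data Adapter and Authentication Adapter roles described above, the sketch below converts a chat export between a platform-specific format and a common Data Model, and stubs out a consent check. The field names, record layout and consent-token mechanism are hypothetical; the actual DTP interfaces and any council-prescribed Data Model would differ.

```python
# Illustrative only: export_to_model / import_from_model play the "Data Adapter"
# role, authorise_transfer plays the "Authentication Adapter" role.

def export_to_model(platform_record: dict) -> dict:
    """Export side: map a platform-specific chat record into the common Data Model."""
    return {
        "participants": platform_record["members"],
        "messages": [
            {"from": m["sender"], "sent_at": m["timestamp"], "body": m["text"]}
            for m in platform_record["chat_log"]
        ],
    }

def import_from_model(model_record: dict) -> dict:
    """Import side: map the common Data Model into the destination platform's format."""
    return {
        "members": model_record["participants"],
        "chat_log": [
            {"sender": m["from"], "timestamp": m["sent_at"], "text": m["body"]}
            for m in model_record["messages"]
        ],
    }

def authorise_transfer(user_uii: str, consent_token: str) -> bool:
    """Confirm that the user identified by their UII has consented to the transfer.
    A real implementation would verify the token with the exporting platform
    (for example via an OAuth-style flow); here it is only a stub."""
    return bool(user_uii) and bool(consent_token)
```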
3. Profitability of Message Gateway Entities
As message gateway entities would operate as for-profits, they may charge messaging platforms a one-time enrolment fee for membership, through which the member (messaging platform) can avail itself of the interoperability and portability services. The enrolment fee should be a capital cost that compensates the message gateway entities for enabling technical interoperability. In addition, message gateway entities may levy a minimal yearly fee to maintain the system, customer (messaging platform) service, and the grievance portal (for both users and messaging platforms). Besides, when the system is updated (as per new standards) or upgraded, message gateway entities may charge an additional fee to the member messaging platforms.
Messaging platforms, on the other hand, do not charge[21] a monetary fee for their service because the marginal cost of providing it is near zero; they incur only fixed costs. Besides, nothing is free in the platform economy: we pay messaging platforms in the form of our personal and non-personal (behavioural) data, which they sell to advertisers[22].
Therefore, messaging platforms should treat the fee paid to message gateway entities as part of their fixed cost, so that they can continue not to charge users (monetarily) for the service, since the cost per user would still be very low. Messaging platforms also have an economic incentive to provide interoperability, as it could reduce multi-homing (i.e., users joining or using multiple platforms simultaneously).
4. Conclusion
While breaking up Facebook and other big social media or messaging platforms could create a level playing field, that process would consume a large amount of resources and time. Irrespective of a break-up, in the absence of interoperability and portability, network effects will keep favouring a few platforms due to high switching costs, which lead to high entry barriers.
When we text users using Short Message Service (SMS), we don't think about which carrier the recipient uses. Likewise, messaging across messaging platforms should be platform-neutral by adopting interoperability and portability features. Besides, interoperability and portability will also bring healthy competition, which would act as a lever to enhance user choice and privacy.
This also opens up questions for future research on the demand-side. We need to explore the causal effect of interoperability and portability on users to understand whether they will switch platforms when provided with port and interoperate options.
This article has been edited by Arindrajit Basu, Pallavi Bedi, Vipul Kharbanda and Aman Nair.
The author is a tech policy enthusiast. He is currently pursuing PGP in Public Policy from the Takshashila Institution. Views are personal and do not represent any organisations. The author can be reached at [email protected]
Footnotes
[1] Rodrigo, C. M., & Klar, R. (2020). 46 states and FTC file antitrust lawsuits against Facebook. Retrieved from The Hill: https://thehill.com/policy/technology/529504-state-ags-ftc-sue-facebook-alleging-anti-competitive-practices
[2] Is Facebook a monopolist? (2021). Retrieved from The Economist: https://www.economist.com/business/2021/07/03/is-facebook-a-monopolist
[3] Hughes, C. (2019). It’s Time to Break Up Facebook. Retrieved from The New York Times: https://www.nytimes.com/2019/05/09/opinion/sunday/chris-hughes-facebook-zuckerberg.html
[4] Shekar, K. (2021). An Elephant in the Room – Recent Case of WhatsApp Fallout Amongst Indian Users. Retrieved from Takshashila Institution: https://takshashila.org.in/an-elephant-in-the-room-recent-case-of-whatsapp-fallout-amongst-indian-users/
[5] Manur, A. (2018). How to Regulate Internet Platforms Without Breaking them. Retrieved from AsiaGlobal Online: https://www.asiaglobalonline.hku.hk/regulate-internet-platforms-antitrust-competition/
[6] Ibid
[7] Nègre, A. (2021). How Can Funders Promote Interoperable Payments? Retrieved from CGAP Blog: https://www.cgap.org/blog/how-can-funders-promote-interoperable-payments;
Cook, W. (2017). Rules of the Road: Interoperability and Governance. Retrieved from CGAP Blog: https://www.cgap.org/blog/rules-road-interoperability-and-governance
[8] Punjabi, A., & Ojha, S. (n.d.). PPI Interoperability: A roadmap to seamless payments infrastructure. Retrieved from PWC: https://www.pwc.in/consulting/financial-services/fintech/payments/ppi-interoperability.html
[9] Proposal for a REGULATION OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL on a Single Market For Digital Services (Digital Services Act) . (n.d.). Retrieved from European Union: https://eur-lex.europa.eu/legal-content/en/TXT/?qid=1608117147218&uri=COM%3A2020%3A825%3AFIN
[10] European Electronic Communications Code (EECC). (n.d.). Retrieved from https://www.gov.ie/en/publication/339a9-european-electronic-communications-code-eecc/
[11] Stigler Center News Stigler Committee on Digital Platforms: Final Report. (n.d.). Retrieved from Chicago Booth: https://www.chicagobooth.edu/research/stigler/news-and-media/committee-on-digital-platforms-final-report
[12] Brown, I. (n.d.). Interoperability as a tool for competition regulation. CyberBRICS.
[13] Facebook is hard at work to merge its family of messaging apps: Zuckerberg. (2020). Retrieved from Business Standard: https://www.business-standard.com/article/companies/facebook-is-hard-at-work-to-merge-its-family-of-messaging-apps-zuckerberg-120103000470_1.html
[14] Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules 2021. (n.d.). Retrieved from: https://www.meity.gov.in/writereaddata/files/Intermediary_Guidelines_and_Digital_Media_Ethics_Code_Rules-2021.pdf
[15] Menn, Joseph. 2021. "WhatsApp sues Indian government over new privacy rules - sources." Reuters. Retrieved from: https://www.reuters.com/world/india/exclusive-whatsapp-sues-india-govt-says-new-media-rules-mean-end-privacy-sources-2021-05-26/
[16] Raghavan, M. (2021). India’s New Intermediary & Digital Media Rules: Expanding the Boundaries of Executive Power in Digital Regulation. Retrieved from Future of Privacy Forum: https://fpf.org/blog/indias-new-intermediary-digital-media-rules-expanding-the-boundaries-of-executive-power-in-digital-regulation/
[17] Net Neutrality. (n.d.). Retrieved from Department of Telecommunications: https://dot.gov.in/net-neutrality;
Parsheera, S. (n.d.). Net Neutrality In India: From Rules To Enforcement. Retrieved from Medianama: https://www.medianama.com/2020/05/223-net-neutrality-india-rules-enforcement/
[18] The Personal Data Protection Bill, 2019. (n.d.). Retrieved from: http://164.100.47.4/BillsTexts/LSBillTexts/Asintroduced/373_2019_LS_Eng.pdf
[19] Consultation Paper on Review of Interconnection Usage Charges, 2019. TRAI.
Mobile Number Portability. (n.d.). Retrieved from TRAI: https://www.trai.gov.in/faqcategory/mobile-number-portability
[20] Data Transfer Project. (2018). Retrieved from https://datatransferproject.dev
[21] Aulakh, G. (n.d.). How messaging apps like WhatsApp, WeChat can make money while offering free texting and calling. Retrieved from Economic Times: https://economictimes.indiatimes.com/tech/software/how-messaging-apps-like-whatsapp-wechat-can-make-money-while-offering-free-texting-and-calling/articleshow/62666227.cms
[22] (2019). Report of the Competition Law Review Committee. Ministry of Corporate Affairs.
Bibliography
- Master Direction on Issuance and Operation of Prepaid Payment Instruments. (n.d.). Retrieved from Reserve Bank of India: https://www.rbi.org.in/Scripts/BS_ViewMasDirections.aspx?id=11142
- Privacy Without Monopoly: Data Protection and Interoperability. (2021). Retrieved from Electronic Frontier Foundation: https://www.eff.org/wp/interoperability-and-privacy
- Sullivan, M. (2021). How interoperability could end Facebook’s death grip on social media. Retrieved from Fast Company: https://www.fastcompany.com/90609208/social-networking-interoperability-facebook-antitrust
- Tinworth, A. (n.d.). Why Messenger Interoperability is a digital canary in the coal mine. Retrieved from NEXT: https://nextconf.eu/2019/06/why-messenger-interoperability-is-a-digital-canary-in-the-coal-mine/#gref
The Ministry And The Trace: Subverting End-To-End Encryption
The paper was published in the NUJS Law Review Volume 14 Issue 2 (2021).
Abstract
End-to-end encrypted messaging allows individuals to hold confidential conversations free from the interference of states and private corporations. To aid surveillance and prosecution of crimes, the Indian Government has mandated online messaging providers to enable identification of originators of messages that traverse their platforms. This paper establishes how the different ways in which this ‘traceability’ mandate can be implemented (dropping end-to-end encryption, hashing messages, and attaching originator information to messages) come with serious costs to usability, security and privacy. Through a legal and constitutional analysis, we contend that traceability exceeds the scope of delegated legislation under the Information Technology Act, and is at odds with the fundamental right to privacy.
Click here to read the full paper.
Media Market Risk Ratings: India
Introduction
The harms of disinformation are proliferating around the globe—threatening our elections, our health, and our shared sense of facts.
The infodemic laid bare by COVID-19 conspiracy theories clearly shows that disinformation costs people’s lives. Websites masquerading as news outlets are driving the situation and profiting financially from it.
The goal of the Global Disinformation Index (GDI) is to cut off the revenue streams that incentivise and sustain the spread of disinformation. Using both artificial and human intelligence, the GDI has created an assessment framework to rate the disinformation risk of news domains.
The GDI risk rating provides advertisers, ad tech companies and platforms with greater information about a range of disinformation flags related to a site’s content (i.e. reliability of content), operations (i.e. operational and editorial integrity) and context (i.e. perceptions of brand trust). The findings in this report are based on the human review of these three pillars: Content, Operations, and Context.
A site’s disinformation risk level is based on that site’s aggregated score across all of the reviewed pillars and indicators. A site’s overall score ranges from zero (maximum risk level) to 100 (minimum risk level). Each indicator that is included in the framework is scored from zero to 100. The output of the index is therefore the site’s overall disinformation risk level, rather than the truthfulness or journalistic quality of the site.
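For readers who want a concrete sense of what an aggregated score might look like, the sketch below averages indicator scores within each pillar and combines the pillars with weights to produce an overall 0–100 score. The indicators, weights and example numbers are invented purely for illustration; they are not the GDI's published methodology.

```python
# Hypothetical illustration of pillar-and-indicator aggregation; not GDI methodology.

def pillar_score(indicator_scores):
    """Average the 0-100 indicator scores within a pillar."""
    return sum(indicator_scores) / len(indicator_scores)

def site_score(pillars, weights):
    """Weighted aggregate across pillars; higher means lower disinformation risk."""
    total_weight = sum(weights.values())
    return sum(pillars[name] * w for name, w in weights.items()) / total_weight

content = pillar_score([70, 55, 80])   # e.g. reliability-of-content indicators
operations = pillar_score([40, 30])    # e.g. ownership transparency, corrections policy
context = pillar_score([65])           # e.g. perceptions of brand trust

overall = site_score(
    {"content": content, "operations": operations, "context": context},
    weights={"content": 0.4, "operations": 0.4, "context": 0.2},
)
print(round(overall))  # 0 = maximum risk, 100 = minimum risk
```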
Key Findings
In reviewing the media landscape for India, the assessment found that:
Nearly a third of the sites in our sample had a high risk of disinforming their online users.
- Eighteen sites were found to have a high disinformation risk rating. This group includes sites published in all three languages in our scope: English, Hindi and Bengali.
- Around half of the websites in our sample had a ‘medium’ risk rating. No site performed exceptionally on all fronts, resulting in no sites having a minimum risk rating. On the other hand, no site performed so poorly as to earn a maximum risk rating.
Only a limited number of Indian sites present low levels of disinformation risks.
- No website was rated as having a ‘minimum’ disinformation risk.
- Eight sites were rated with a ‘low’ level of disinformation risk. Seven of these websites served content primarily in English, and one in Hindi.
The media sites assessed in India tend to perform very poorly on publishing transparent operational checks and balances.
- Over one-third of the sites in our sample published little information about their ownership structure, and also failed to be transparent about their revenue sources.
- Only ten of the sites in our sample published any information about their policies on how they correct errors in their reporting.
Association with traditional media did not play a significant factor in determining risk of disinformation.
- On average, websites associated with TV or print did not perform any differently when compared to websites that solely serve digital content.
The findings show that on the whole, Indian websites can substantially increase their trustworthiness by taking measures to address these shortfalls in their operational checks and balances. For example, they could increase transparency on the structure of their businesses and have clear policies on how they address errors in their reporting. Both of these measures are in line with universal standards of good journalistic practices, as agreed by the Journalism Trust Initiative.
Click to download the full report here. To read the report in Hindi, click here. The authors extend their thanks to Anna Liz Thomas, Sanah Javed, Sagnik Chatterjee, and Raghav Ahooja for their assistance.
Health IDs: Voluntary or Mandatory?
In January 2021, the Health Ministry officially allowed Aadhaar-based authentication when creating a UHID for identification and authentication of beneficiaries for various health IT applications promoted by the Ministry. This enabled the Co-Win portal, which is used to book COVID-19 vaccination appointments, to accept Aadhaar for authentication. As per Clause 2a of Co-Win’s privacy policy, “If you choose to use Aadhaar for vaccination, you may also choose to get a Unique Health ID (UHID) created for yourself.” The privacy policy stresses the voluntary nature of this process by stating that “This feature is purely optional.”
However, multiple media reports have mentioned that beneficiaries who enrolled in the COVID-19 vaccination programme using their Aadhaar number have had their UHIDs created without either their specific consent being obtained or the option to opt out being offered. This is concerning because it has been done based on the data entered by citizens and is linked to their Aadhaar, despite clarifications from the Government that Aadhaar is not mandatory for getting a UHID. It is also pertinent to note that the Co-Win website did not have a privacy policy until it was directed to publish one by the Delhi High Court on 2 June 2021 — almost three months after registration on Co-Win was made mandatory.
As per the NDHM, UHIDs have been rolled out on a pilot basis in the six union territories of India. They will be rolled out across the country in subsequent phases. However, as per newspaper reports, several people who had registered for the COVID-19 vaccine on the Co-Win website using their Aadhaar numbers received a UHID number on their COVID-19 vaccine certificates. This is not limited to the six union territories – UHID numbers have been generated for beneficiaries who had registered using their Aadhaar numbers across the country, without citizens having any choice in opting into the project. It appears that the UHID pilot project has been silently expanded across the country without any official announcement being made in this regard.
As per the Health Data Policy, UHIDs are to be generated on a voluntary basis after obtaining the consent of the beneficiary. However, at the time of registering on the Co-Win portal or at vaccination centres, no separate forms were shared with beneficiaries to obtain their consent to generate UHIDs. This is contrary to the provisions of the Health Data Policy, which clearly states that the consent of the user must be obtained for the processing of personal data. Clause 9.2 of the Health Data Policy states that consent of the “data principal will be considered valid only if it is (c) specific, where the data principal can give consent for the processing of personal data for a particular purpose; (d) clearly given; and (e) capable of being withdrawn.” Beneficiaries are also not informed of their right to de-activate the UHID and reactivate it later if required, as provided under Clause 15.8 of the Health Data Policy.
Interestingly, if a person in any of the six union territories tries to self-register for a UHID, they are directed to a page seeking their consent. The consent form states,
“I understand that my Health ID can be used and shared for purposes as may be notified by NDHM from time to time including provision of healthcare services. Further, I am aware that my personal identifiable information (Name, Address, Age, Date of Birth, Gender and Photograph) may be made available to the entities working in the National Digital Health Ecosystem (NDHE) … I am aware that my personal identifiable information can be used and shared for purposes as mentioned above. I reserve the right to revoke the given consent at any point of time.”
However, this information/consent form is not shared with beneficiaries who receive UHIDs when they register on Co-Win using their Aadhaar number. As per newspaper reports, several of these people are also completely unaware of the purposes of a UHID.
Absence of a data protection law and governance structure contemplated under the Health Data Policy
The entire digital health ecosystem is currently operating in the absence of any data protection law and the governance structure proposed under the Health Data Policy.
The Supreme Court of India, in Justice K. S. Puttaswamy (Retd) vs Union of India, held that confidentiality and privacy of medical data is a fundamental right under Article 21 of the Constitution. Any action that negates the fundamental right to privacy will need to satisfy three conditions, namely (i) existence of a law; (ii) legitimate state aim; and (iii) proportionality.
The first condition is that the action should be permissible under a law passed by Parliament. This was also recognised by the Supreme Court in the 2018 Aadhaar judgement, where the court, while deciding on the validity of Aadhaar, noted that “A valid law in this case would mean a law passed by Parliament, which is just, fair and reasonable. Any encroachment upon the fundamental right cannot be sustained by an executive notification.”
The Health Data Policy fails this condition: it is a policy, not a law, and a policy is not a substitute for a law. For the collection of personal data, it is imperative that a data protection law be enacted at the earliest. Alternatively, or in addition, a comprehensive separate legislation should be enacted to regulate the digital health ecosystem.
It is also pertinent to note that the Health Data Policy provides for the creation of a data protection officer as well as a grievance redressal officer. Neither of these roles has been instituted so far. In other words, UHIDs are being issued without the governance structure prescribed by the Health Data Policy being in place.
Conclusion
The need for strong data protection legislation to protect users’ health data has been recognised across different jurisdictions and has also been emphasised by various international organisations. In 2006, the World Health Organization recommended that governments enact a robust data protection legislation before digitising the health sector.
The health identity project has been launched and UHIDs are being issued as part of the COVID-19 vaccination process in different parts of India, even though initial steps such as enacting data protection legislation and creating a robust digital ecosystem have either not been concluded or not yet been undertaken. Hasty implementation without adequate safeguards and preparation not only risks the privacy and security of medical data, it may also undermine general trust in the system, leading to low uptake.
CIS Seminar Series: Information Disorder
The CIS seminar series will be a venue for researchers to share works-in-progress, exchange ideas, identify avenues for collaboration, and curate research. We also seek to mitigate the impact of Covid-19 on research exchange, and foster collaborations among researchers and academics from diverse geographies. Every quarter we will be hosting a remote seminar with presentations, discussions and debate on a thematic area.
Seminar format
We are happy to welcome abstracts for one of two tracks:
Working paper presentation
A working paper presentation would ideally involve a working draft that is presented for about 15 minutes followed by feedback from workshop participants. Abstracts for this track should be 600-800 words in length with clear research questions, methodology, and questions for discussion at the seminar. Ideally, for this track, authors should be able to submit a draft paper two weeks before the conference for circulation to participants.
Coffee-shop conversations
In contrast to the formal paper presentation format, the point of the coffee-shop conversations is to enable an informal space for presentation and discussion of ideas. Simply put, it is an opportunity for researchers to “think out loud” and get feedback on future research agendas. Provocations for this should be 100-150 words containing a short description of the idea you want to discuss.
We will try to accommodate as many abstracts as possible given time constraints. We welcome submissions from students and early career researchers, especially those from under-represented communities.
All discussions will be private and conducted under the Chatham House Rule. Drafts will only be circulated among registered participants.
Please send all abstracts to [email protected].
Theme for the first seminar (to be held on an online platform)
The first seminar will be centered around the theme of ‘Information Disorder: Mis-, Dis- and Malinformation.’ While the issue of information disorder, colloquially termed ‘fake news’, has been at the political forefront for the last five years, the flawed attempts at countering the ‘infodemic’ brought about by the pandemic prove that there are still substantial gaps in the body of knowledge on this issue. This includes research that proposes empirical, replicable methods of understanding the types, forms or nature of information disorder, as well as research that attempts to understand regulatory approaches, the layers of production, and the roles played by different agents in the spread of ‘fake news’.
Accordingly, we invite submissions that address these gaps in knowledge, including those that examine the relationship between digital technology and information disorder across a spectrum of fields and disciplines. Areas of interest include but are not limited to:
- Information disorders during COVID-19
- Effects of coordinated campaigns on marginalised communities
- Journalism, the State, and the trust in media
- Platform responsibility in information disorder
- Information disorder in international law/constitutional/human rights law
- Information disorder as a geopolitical tool
- Sociopolitical and cultural factors in user engagement
Timeline
- Abstract Submission Deadline: August 25th
- Results of Abstract review: September 8th
- Full submissions (of draft papers): September 30th
- Seminar date: Tentatively October 7th
Contact details
For any queries please contact us at [email protected].
Comments on proposed amendments to the Consumer Protection (E-Commerce) Rules, 2020
The Consumer Protection (E-commerce) Rules, 2020 were first introduced in an attempt to ensure that consumers were granted adequate protections and to prevent the adoption of unfair trade practices by e-commerce entities. The proposed amendments contain several rules that will protect the consumer, including a restriction on misleading advertisements and the appointment of grievance officers based in India. However, the proposed rules also create hurdles for e-commerce operations, reducing the ease of doing business and increasing the costs of operations, especially for smaller players; these costs could eventually be passed on to consumers.
In our submission to the Ministry of Consumer Affairs, we focussed our analysis on eight points: Definitions and Registration, Compliance, Data Protection and Surveillance, Flash Sales, Unfair Trade Practices, Jurisdictional Issues with Competition Law, Compliance with International Trade Law and Liabilities of Marketplace E-commerce Entities.
A snapshot of our recommendations and analysis is listed out below. To read our full submission, please click here.
Definitions and Registrations
The registration of entities with the DPIIT must be made as smooth as possible especially considering the wide definition of E-commerce entities in the rules, which may include smaller businesses as well. In particular, we suggested doing away with physical office visits.
Compliance
As a general observation, compliance obligations should be differentiated based on the size of the entity and the volume of transactions rather than adopting a ‘one size fits all’ approach which may harm smaller businesses, especially those that are just starting up. Before these rules come into force, further consultations with small and medium-sized business enterprises would be vital in ensuring that the regulation is in line with their needs and does not hamper their growth. Excessive compliance requirements may end up playing into the hands of the largest players as they would have larger financial coffers and institutional mechanisms to comply with these obligations.
There is some confusion in the law as to whether the Chief Compliance Officer mentioned in the amended rules is the same as the “nodal person of contact or an alternate senior designated functionary who is resident of India” under Rule 5(1).
The safe harbour should therefore refer to due diligence by the CCO and not the e-commerce entity itself. The requirement for the compliance officer to be an Indian citizen who is a resident and a senior officer or managerial employee may place an undue burden on small E-commerce players not located in India.
Data Protection and Surveillance
In the absence of a Personal Data Protection Bill, these rules do not adequately protect consumers’ personal data, nor do they limit the powers given to the Central Government to access data or conduct surveillance.
Flash Sales
Conventional flash sales should be defined. Clear distinction must be made between conventional flash sales and fraudulent flash sales. The definition should not be limited to interception of business “using technological means”, which limits the scope of the fraudulent flash sales. Further parameters must be provided for when a flash sale will be considered a fraudulent flash sale.
Unfair Trade Practices
The rules place restrictions on marketplace E-commerce entities from selling their own goods or services or from listing related enterprises as sellers on their platforms. No such restriction applies to brick and mortar stores, and this blanket ban must be rethought.
Jurisdictional Issues with Competition Law
This rule brings the issue of ‘abuse of dominant position’ under the fora of the Consumer Protection Authority or the Consumer Disputes Redressal Commissions. Overlapping jurisdiction of this nature could introduce regulatory delays into the dispute resolution process and can be a source of tension for the parties and regulatory authorities. While the intention behind importing a competition law concept such as “abuse of dominant position” into consumer protection regulations may be understandable, and such a step might be effective in jurisdictions which have a common regulatory authority for both competition law and consumer protection issues (such as Australia, Finland, Ireland and the Netherlands), in a country such as India, which has completely separate regulatory mechanisms for competition and consumer law issues, such a provision may lead to logistical difficulties.
Compliance with International Trade Law
A robust framework on ranking with transparent disclosure of parameters for the same would also go a long way towards addressing concerns with discrimination and national treatment under WTO law. Further, the obligation to provide domestic alternatives should be clarified and amended to ensure that it does not cause uncertainty and open India up to a national treatment challenge at the WTO.
Liabilities of Marketplace E-commerce Entities
Fallback liability is an essential component of consumer protection in the e-commerce space. However, as currently envisioned, there is a lack of clarity surrounding the extent to which fallback liability is applicable to e-commerce entities, as well as the exemptions to this liability. We have recommended alternate approaches adopted in other jurisdictions, which include:
- Liability through negligence
- Liability as an exemption to safe harbour
Do We Really Need an App for That? Examining the Utility and Privacy Implications of India’s Digital Vaccine Certificates
This blogpost was edited by Gurshabad Grover, Yesha Tshering Paul, and Amber Sinha.
It was originally published on Digital Identities: Design and Uses and is cross-posted here.
In an experiment to streamline its COVID-19 immunisation drive, India has adopted a centralised vaccine administration system called CoWIN (or COVID Vaccine Intelligence Network). In addition to facilitating registration for both online and walk-in vaccine appointments, the system also allows for the digital verification of vaccine certificates, which it issues to people who have received a dose. This development aligns with a global trend, as many countries have adopted or are in the process of adopting “vaccine passports” to facilitate safe movement of people while resuming commercial activity.
Some places, such as the EU, have constrained the scope of use of their vaccine certificates to international travel. The Indian government, however, has so far skirted important questions around where and when this technology should be used. By allowing anyone to use the online CoWIN portal to scan and verify certificates, and even providing a way for the private-sector to incorporate this functionality into their applications, the government has opened up the possibility of these digital certificates being used, and even mandated, for domestic everyday use such as going to a grocery shop, a crowded venue, or a workplace.
In this blog post, we examine the purported benefits of digital vaccine certificates over regular paper-based ones, analyse the privacy implications of their use, and present recommendations to make them more privacy respecting. We hope that such an analysis can help inform policy on appropriate use of this technology and improve its privacy properties in cases where its use is warranted.
We also note that while this post only examines the merits of a technological solution put out by the government, it is more important to consider the effects that placing restrictions on the movement of unvaccinated people has on their civil liberties in the face of a vaccine rollout that is inequitable along many lines, including gender, caste-class, and access to technology.
How do digital vaccine certificates work?
Every vaccine recipient in the country is required to be registered on the CoWIN platform using one of seven existing identity documents. [1] Once a vaccine is administered, CoWIN generates a vaccine certificate which the recipient can access on the CoWIN website. The certificate is a single page document that contains the recipient’s personal information — their name, age, gender, identity document details, unique health ID, a reference ID — and some details about the vaccine given. [2] It also includes a “secure QR code” and a link to CoWIN’s verification portal.
The verification portal allows for the verification of a certificate by scanning the attached QR code. Upon completion, the portal displays a success message along with some of the information printed on the certificate.
Verification is done using a cryptographic mechanism known as digital signatures, which are encoded into the QR code attached to a vaccine certificate. This mechanism allows “offline verification”, which means that the CoWIN verification portal or any private sector app attempting to verify a certificate does not need to contact the CoWIN servers to establish its authenticity. It instead uses a “public key” issued by CoWIN beforehand to verify the digital signature attached to the certificate.
The benefit of this convoluted design is that it protects user privacy. Performing verification offline and not contacting the CoWIN servers precludes CoWIN from gleaning sensitive metadata about the usage of the vaccine certificate. This means that CoWIN does not learn where and when an individual uses their vaccine certificate, or who is verifying it. This closes off a potential avenue for mass surveillance. [3] However, given how certificate revocation checks are being implemented (detailed in the privacy implications section below), CoWIN ends up learning this information anyway.
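For illustration, the sketch below shows what offline verification of a digitally signed payload generally looks like: the verifier holds the issuer's public key in advance and checks the signature locally, with no call to the issuer's servers. This is a generic example using RSA signatures via the Python `cryptography` library; CoWIN's actual certificate format, key type and signature scheme may differ.

```python
# Generic offline-verification sketch; not CoWIN's actual data format.
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding
from cryptography.exceptions import InvalidSignature

def verify_certificate(payload: bytes, signature: bytes, issuer_public_key_pem: bytes) -> bool:
    """Return True if payload was signed by the issuer's private key.

    The public key is distributed to verifiers beforehand, so no network
    call to the issuer is needed at verification time."""
    public_key = serialization.load_pem_public_key(issuer_public_key_pem)
    try:
        public_key.verify(
            signature,
            payload,
            padding.PKCS1v15(),   # padding/hash choices here are illustrative
            hashes.SHA256(),
        )
        return True
    except InvalidSignature:
        return False
```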
Where is digital verification useful?
The primary argument for the adoption of digital verification of vaccine certificates over visual examination of regular paper-based ones is security. In the face of vaccine hesitancy, there are concerns that people may forge vaccine certificates to get around any restrictions that may be put in place on the movement of unvaccinated people. The use of digital signatures serves to allay these fears.
In its current form, however, digital verification of vaccine certificates is no more secure than visually inspecting paper-based ones. While the “secure QR code” attached to digital certificates can be used to verify the authenticity of the certificate itself, the CoWIN verification portal does not provide any mechanism nor does it instruct verifiers to authenticate the identity of the person presenting the certificate. This means that unless an accompanying identity document is also checked, an individual can simply present someone else’s certificate.
There are no simple solutions to this limitation; adding a requirement to inspect identity documents in addition to digital verification of the vaccine certificate would not be a strong enough security measure to prevent the use of duplicate vaccine certificates. People who are motivated enough to forge a vaccine certificate can also duplicate one of the seven ID documents which can be used to register on CoWIN, some of which are simple paper-based documents. [4] Requiring even stronger identity checks, such as the use of Aadhaar-based biometrics, would make digital verification of vaccine certificates more secure. However, this would be a wildly disproportionate incursion on user privacy — allowing for the mass collection of metadata like when and where a certificate is used — something that digital vaccine certificates were explicitly designed to prevent. Additionally, in Russia, people were found issuing fake certificates by discarding real vaccine doses instead of administering them. No technological solution can prevent such fraud.
As such, the utility of digital certificates is limited to uses such as international travel, where border control agencies already have strong identity checks in place for travellers. Any everyday usage of the digital verification functionality on vaccine certificates would not present any benefit over visually examining a piece of paper or a screen.
Privacy implications of digital certificates
In addition to providing little security utility over manual inspection of certificates, digital certificates also present privacy issues. These are listed below, along with recommendations to mitigate them:
(i) The verification portal leaks sensitive metadata to CoWIN’s servers: An analysis of network requests made by the CoWIN verification portal reveals that it conducts a ‘revocation check’ each time a certificate is verified. This check was also found in the source code, which is made openly available. [5]
Revocation checks are an important security consideration while using digital signatures. They allow the issuing authority (CoWIN, in this case) to revoke a certificate in case the account associated with it is lost or stolen, or if a certificate requires correction. However, the way they have been implemented here presents a significant privacy issue. Sending certificate details to the server on every verification attempt allows it to learn about where and when an individual is using their vaccine certificate.
We note that the revocation check performed by the CoWIN portal does not necessarily mean that it is storing this information. Nevertheless, sending certificate information to the server directly contradicts claims of an “offline verification” process, which is the basis of the design of these digital certificates.
Recommendations: Implementing privacy-respecting revocation checks such as Certificate Revocation Lists, [6] or Range Queries [7] would mitigate this issue. However, these solutions are either complex or present bandwidth and storage tradeoffs for the verifier.
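As a rough sketch of the range-query approach mentioned above, the snippet below asks the server for revoked identifiers in a window that contains the certificate in question, without revealing which identifier the verifier actually cares about. The function names, the numeric certificate identifiers and the window size are hypothetical.

```python
# Illustrative range-query revocation check; endpoint and parameters are hypothetical.
import secrets

def check_revocation(certificate_id: int, query_server, range_width: int = 10_000) -> bool:
    """Return True if the certificate is revoked.

    The certificate is placed at a random offset inside the queried range,
    so the server cannot tell which ID within the range is being verified."""
    offset = secrets.randbelow(range_width)
    start = max(0, certificate_id - offset)
    end = start + range_width
    revoked_in_range = query_server(start, end)   # server returns revoked IDs in [start, end)
    return certificate_id in revoked_in_range
```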
(ii) Oversharing of personally identifiable information: CoWIN’s vaccine certificates include more personally identifiable information (name, age, gender, identity document details and unique health ID) than is required for the purpose of verifying the certificate. An examination of the vaccine certificates available to us revealed that while the Aadhaar number is appropriately masked, other personal identifiers such as the passport number and unique health ID were not. Additionally, the inclusion of demographic details such as age and gender provides little security benefit (it only marginally limits the pool of duplicate certificates that can be used) and is not required in light of the security analysis above.
Recommendation: Personal identifiers (such as passport number and unique health ID) should be appropriately masked and demographic details (age, gender) can be removed.
The minimal set of data required for identity-linked usage in digital verification, as described above, is a full name and masked ID document details. All other personally identifying information can be removed. In the case of paper-based certificates, which are suggested for domestic usage, only the details about vaccine validity would suffice and no personal information is required.
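A minimal sketch of the masking recommendation, assuming a simple keep-the-last-few-characters mask (the exact mask format would be a policy choice, not something fixed here):

```python
# Illustrative masking helper: keep only the last few characters of an identifier.
def mask_identifier(value: str, visible: int = 4) -> str:
    if len(value) <= visible:
        return "X" * len(value)
    return "X" * (len(value) - visible) + value[-visible:]

print(mask_identifier("M1234567"))        # XXXX4567 (e.g. a passport number)
print(mask_identifier("12345678901234"))  # XXXXXXXXXX1234 (e.g. a health ID)
```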
(iii) Making information available digitally increases the likelihood of collection: All of the personal information printed on the certificate is also encoded into the QR code. This is necessary because the digital signature verification process also verifies the integrity of this information (i.e. it wasn’t modified). A side effect of this is that the personal information is made readily available in digital form to verifiers when it is scanned, making it easy for them to store. This is especially likely in private sector apps who may be interested in collecting demographic information and personal identifiers to track customer behaviour.
Recommendation: Removing extraneous information from the certificate, as suggested above, mitigates this risk as well.
Conclusion
Our analysis reveals that without incorporating strong, privacy-invasive identity checks, digital verification of vaccine certificates does not provide any security benefit over manually inspecting a piece of paper. The utility of digital verification is limited to purposes that already conduct strong identity checks.
In addition to their limited applicability, in their current form, these digital certificates also generate a trail of data and metadata, giving both government and industry an opportunity to infringe upon the privacy of the individuals using them.
Keeping this in mind, the adoption of this technology should be discouraged for everyday use.
References
[1] Exceptions exist for people without state-issued identity documents.
[2] This information was gathered by inspecting three vaccine certificates linked to the author’s CoWIN account, which they were authorised to view, and may not be fully accurate.
[3] This design is similar to Aadhaar’s “offline KYC” process.
[4] “Aadhaar Card: UIDAI says downloaded versions on ordinary paper, mAadhaar perfectly valid”, Zee Business, April 29 2019, https://www.zeebiz.com/india/news-aadhaar-card-uidai-says-downloaded-versions-on-ordinary-paper-maadhaar-perfectly-valid-96790.
[5] This check was also verified to be present in the reference code made available for private-sector applications incorporating this functionality, suggesting that private sector apps will also be affected by this.
[6] Certificate Revocation Lists allow the server to provide a list of revoked certificates to the verifier, instead of the verifier querying the server each time. This, however, can place heavy bandwidth and storage requirements on the verifying app as this list can potentially grow long.
[7] Range Queries are described in this paper. In this method, the verifier requests revocation status from the server by specifying a range of certificate identifiers within which the certificate being verified lies. If there are any revoked certificates within this range, the server will send their identifiers to the verifier, who can then check if the certificate in question is on the list. For this to work, the range selected must be sufficiently large to include enough potential candidates to keep the server from guessing which one is in use.
Finding Needles in Haystacks - Discussing the Role of Automated Filtering in the New Indian Intermediary Liability Rules
This article first appeared on KU Leuven's Centre for IT and IP (CiTiP) blog. Cross-posted with permission.
----
Matthew Sag, in his 2018 paper on internet safe harbours, discussed how the internet resulted in a shift from the traditional gatekeepers of knowledge (publishing houses) that used to decide what knowledge could be showcased, to a system where everybody who has access to the internet can showcase their work. A “content creator” today ranges from legacy media companies to any person who has access to a smartphone and an internet connection. In a similar trajectory, with the increase in websites and mobile apps and the functions that they serve, the scope of what is an internet intermediary has widened all over the world.
Who is an Intermediary?
In India the definition of “intermediary” is found under Section 2(w) of the Information Technology (IT) Act 2000, which defines an Intermediary as “with respect to any particular electronic records, means any person who on behalf of another person receives, stores or transmits that record or provides any service with respect to that record and includes telecoms service providers, network service providers, internet service providers, web-hosting service providers, search engines, online payment sites, online-auction sites, online-marketplaces and cyber cafes”. The all-encompassing nature of the definition has allowed the dynamic nature of intermediaries to be included under the definition of the Act, and the Guidelines that have been published periodically (2011, 2018 and 2021). With more websites and social media companies, and even more content creators online today, there is a need to look at ways in which intermediaries can remove illegal content or content that goes against their community guidelines.
Along with the definition of an intermediary, the IT Act, under Section 79, provides exemptions which grant safe harbour to internet intermediaries from liability for third-party content, and further empowers the central government to make Rules that act as guidelines for intermediaries to follow. The Intermediary Liability Rules hence seek to regulate content and lay down safe harbour provisions for intermediaries and internet service providers. To keep up with the changing nature of the internet and internet intermediaries, India relies on the Intermediary Liability Rules to regulate and provide a conducive environment for intermediaries. In view of this provision, India has so far published three versions of the Intermediary Liability (IL) Rules: the first Rules came out in 2011, followed by the introduction of draft amendments in 2018, and finally the latest 2021 version, which supersedes the earlier Rules of 2011.
The Growing Use of Automated Content Moderation
With each version of the Rules, there were changes meant to keep them abreast of the changing face of the internet and the changing nature of both content and content creators. The 2018 version of the Rules thus showcases a push towards automated content filtering. The text of Rule 3(9) reads as follows: “The Intermediary shall deploy technology based automated tools or appropriate mechanisms, with appropriate controls, for proactively identifying and removing or disabling public access to unlawful information or content”.
Under Rule 3(9), intermediaries were required to deploy automated tools or appropriate mechanisms to proactively identify, remove or disable public access to unlawful content. However, neither the 2018 IL Rules, nor the parent Act (the IT Act) specified which content can be deemed unlawful. The 2018 Rules also failed to establish the specific responsibilities of the intermediaries, instead relying on vague terms like “appropriate mechanisms” and with “appropriate controls”. Hence it can be seen that though the Rules mandated the use of automated tools, neither them nor the IT Act provided clear guidelines on what could be removed.
The lack of clear guidelines, and of a list of content that can be removed, left it to the intermediaries to decide which content, if not actively removed, could cost them their immunity. It has previously been documented that the lack of clear guidelines in the 2011 version of the Rules led to intermediaries over-complying with take-down notices, often taking down content that did not warrant it. The existing tendency to over-comply, combined with automated filtering, could have resulted in a number of unwarranted take-downs.
While the 2018 Rules mandated the deployment of automated tools, the year 2020 (possibly due to pandemic-induced work-from-home safety protocols and global lockdowns) saw major social media companies announce a move towards fully automated systems of content moderation. Though the use of automated content removal seems like the right step considering the trauma that human moderators have to go through, the algorithms now being used to remove content rely on the parameters, practices and data from earlier removals made by human moderators. More recently in India, with the emergence of the second wave of COVID-19, the Ministry of Electronics and Information Technology asked social media platforms to remove “unrelated, old and out of the context images or visuals, communally sensitive posts and misinformation about COVID19 protocols”.
The New IL Rules - A ray of hope?
The 2021 version of the IL Rules provides a more nuanced approach to the use of automated content filtering compared to the earlier version. Rule 4(4) now requires only “significant social media intermediaries” to use automated tools to identify and take down content pertaining to “child sexual abuse material”, or “depicting rape”, or any information which is identical to content that has already been removed through a take-down notice. The Rules define a social media intermediary as an “intermediary which primarily or solely enables interaction between two or more users and allows them to create, upload, share, disseminate, modify or access information using its services”. The Rules also go a step further and create another type of intermediary, the significant social media intermediary, defined as one “having a number of registered users in India above such threshold as notified by the Central Government”. Hence, which social media intermediaries qualify as significant ones could change at any time.
Along with adding a new threshold (qualifying as a significant social media intermediary), the Rules, in contrast to the 2018 version, also emphasise that such removal must be proportionate to the interests of freedom of speech and expression and the privacy of users. The Rules also call for “appropriate human oversight” as well as a periodic review of the tools used for content moderation. By using the term “shall endeavour”, the Rules reduce the pressure on the intermediary to set up these mechanisms: the requirement is now on a best-effort basis, as opposed to the word “shall” in the 2018 version of the Rules, which made it mandatory.
Although the Rules now narrow down the instances where automated content removal can take place, the concerns around over-compliance and censorship still loom. One reason for concern is that the Rules still do not require intermediaries to set up a mechanism for redress or appeal against such removal. Additionally, the provision allowing automated systems to remove content that has previously been taken down is worrying, since the propensity of intermediaries to over-comply and take down content has already been documented. This brings us back to the earlier issue of social media companies’ automated systems removing legitimate news sources. Though the 2021 Rules try to clarify certain provisions related to automated filtering, such as the addition of safeguards, they also suffer from vague provisions that could cause compliance issues. The use of terms such as “proportionate”, “having regard to free speech”, etc. fails to lay down definitive directions for the intermediaries (in this case, SSMIs) to comply with. Additionally, as stated earlier, whether an intermediary qualifies as an SSMI can change at any time, based either on a change in its number of users or on a change in the user threshold notified by the government. The absence of human intervention during removal, vague guidelines, and the fear of losing safe harbour protection add to the already increasing trend of censorship on social media. Because proactive filtering through automated means removes content almost immediately, some content creators might not even be able to post their content online. Given India’s current influx of new internet users, some of these creators would also be first-time users of the internet.
Conclusion
The need for automated removal of content is understandable, based not only on the sheer volume of content but also on the well-documented toll that moderation takes on human content moderators, who otherwise have to go through hours of disturbing content. Though the Indian Intermediary Liability Guidelines have improved on earlier versions by moving away from mandating proactive filtering, more thought is still needed on how these technologies are used, and the law should recognise the shift in who counts as a content creator. There need to be avenues of recourse against unfair removal of content, and a means to obtain an explanation of why content was removed, via notices to the user. In the case of India, these notices should also be in Indian languages, so that people are able to understand them.
In the absence of clearer guidelines, the peril of over-censorship by intermediaries seeking to stay out of trouble could further stifle not just freedom of speech but also access to information. In addition, the fear of content being taken down, or even of potential prosecution, could lead people to self-censor, preventing them from exercising their fundamental rights to freedom of speech and expression as guaranteed by the Indian Constitution. We hope that the next version of the Rules takes a more nuanced approach to automated content removal and provides adequate and specific safeguards to ensure a conducive environment for both intermediaries and content creators.
Techno-solutionist Responses to COVID-19
The article by Amber Sinha, Pallavi Bedi, and Aman Nair was published in the Economic & Political Weekly, Vol. 56, Issue No. 29, 17 Jul, 2021.
Over the last two decades, slowly but steadily, the governance agenda of the Indian state has moved to the digital realm. In 2006, the National e-Governance Plan (NeGP) was approved by the Indian state wherein a massive infrastructure was developed to reach the remotest corners and facilitate easy access of government services efficiently at affordable costs. The first set of NeGP projects focused on digitalising governance schemes that dealt with taxation, regulation of corporate entities, issuance of passports, and pensions. Over a period of time, they have come to include most interactions between the state and citizens from healthcare to education, transportation to employment, and policing to housing. Upon the launch of the Digital India Mission by the union government, the NeGP was subsumed under the e-Gov and e-Kranti components of the project. The original press release by the central government reporting the approval by the cabinet of ministers of the Digital India programme speaks of “cradle to grave” digital identity as one of its vision areas. This identity was always intended to be “unique, lifelong, online and authenticable.”
Since the inception of the Digital India campaign by the current government, various concerns have been raised about the privacy issues posed by this project. The initiative includes over 50 “mission mode projects” in various stages of implementation. All of these projects entail the collection of vast quantities of personally identifiable information of citizens, yet most of them do not have clearly laid down privacy policies. There is also a lack of properly articulated access control mechanisms, and doubts persist over important issues such as data ownership, since most projects involve public–private partnerships in which a private organisation collects, processes and retains large amounts of data. Most importantly, these projects have continued to exist and prosper in a regulatory vacuum, with no data protection legislation to govern them. Further, the state of the digital divide and digital literacy in India should automatically underscore the need to not rely solely on digital solutions.
Click to read the full article here
Facial Recognition Technology in India
Executive Summary
Over the past two decades there has been a sustained effort at digitising India’s governance structure in order to foster development and innovation. The field of law enforcement and safety has seen significant change in that direction, with technological tools such as Closed Circuit Television (CCTV) and Facial Recognition Technology (FRT) increasingly being deployed by the government.
Yet for all its increased use, there is still no coherent legal and regulatory framework governing FRT in India. Towards informing such a framework, this paper seeks to document present uses of FRT in India, specifically by law enforcement agencies and central and state governments, understand the applicability of existing legal frameworks to the use of FRT, and define key areas that need to be addressed when using the technology in India. We also briefly look at how the coverage of FRT has expanded beyond law enforcement: it now extends to educational institutions and employment purposes, and it is being used in the provision of Covid-19 vaccines.
We begin by examining use cases of FRT systems by various divisions of central and state governments. In doing so, it becomes apparent that there is a lack of uniform standards or guidelines at either the state or central level - leading to different FRT systems having differing standards of applicability and scope of use. And while the use of such systems seems to be growing at a rapid rate, questions around their legality persist.
It is unclear whether the use of FRT is compliant with the fundamental right to privacy as affirmed by the Supreme Court in 2017 in Puttaswamy. While the right to privacy is not an absolute right, any restriction of it by the state must comply with a three-fold requirement, the first being the need for an explicit legislative mandate in instances where the government seeks to curtail the right. However, the FRT systems we have analysed do not have such a mandate and are often the result of administrative or executive decisions with no legislative blessing or judicial oversight.
We further locate the use of FRT technology within the country’s wider legislative, judicial and constitutional frameworks governing surveillance. We also briefly articulate comparative perspectives on the use of FRT in other jurisdictions. We further analyse the impact of the proposed Personal Data Protection Bill on the deployment of FRT. Finally, we propose a set of recommendations to develop a path forward for the technology’s use which include the need for a comprehensive legal and regulatory framework that governs the use of FRT. Such a framework must take into consideration the necessity of use, proportionality, consent, security, retention, redressal mechanisms, purpose limitation, and other such principles. Since the use of FRT in India is also at a nascent stage, it is imperative that there is greater public research and dialogue into its development and use to ensure that any harms that may arise in the field are mitigated.
Click to download the entire research paper here
A Guide to Drafting Privacy Policy under the Personal Data Protection Bill, 2019
The Bill in its current form does not have explicit transitory provisions, i.e., a defined timeline for the enforcement of its provisions once it is notified as an enforceable legislation. Since the necessary subject matter expertise may be limited on short notice and out of budget for certain companies, we intend to release a series of guidance documents that will attempt to simplify the operational requirements of the legislation.
Certain news reports had earlier suggested that the Joint Parliamentary Committee reviewing the Bill has proposed 89 new amendments and a new clause. The nature and content of these amendments so far remain unclear. However, we intend to start the series by addressing some frequently asked questions around meeting the requirements of publishing a privacy notice and shall make the relevant changes post notification of the new Bill. The solutions provided in this guidance document are mostly based on international best practices and any changes in the solutions based on Indian guidelines and the revised PDP Bill will be redlined in the future.
The frequently asked questions and other specific examples on complying with the requirements of publishing a privacy policy have been compiled based on informal discussions with stakeholders, unsolicited queries from smaller organizations, and publicly available details from conferences on the impact of the Bill. We intend to conduct extensive empirical analysis of additional queries or difficulties faced by smaller organizations in achieving compliance after the notification of the new Bill. Regardless, any smaller organizations (NGOs, start-ups, etc.) interested in discussing compliance-related queries can get in touch with us.
Click to download the full report here. The report was reviewed by Pallavi Bedi and Amber Sinha.
The Geopolitics of Cyberspace: A Compendium of CIS Research
With a rapidly digitizing economy and clear interests in shaping global rules that favour its strategic interests, India stands at a crucial juncture on various facets of this debate. How India governs and harnesses technology, coupled with how India translates these values and negotiates its interests globally, will surely have an impact on how similarly placed emerging economies devise their own strategies. The challenge here is to ensure that domestic technology governance as well as global engagements genuinely uphold and further India’s democratic fibre and constitutional vision.
Since 2018, researchers at the Centre for Internet and Society have produced a body of research, including academic writing, at the intersection of geopolitics and technology. This work covers global governance regimes on trade and cybersecurity, including their attendant international law concerns, and the digital factor in bilateral relationships (with a focus on the Indo-US and Sino-Indian relationships). We have paid close attention to the role of emerging technologies in this debate, including AI and 5G, as well as to how private actors in the technology domain, operating across national jurisdictions, are challenging and upending traditionally accepted norms of international law, global governance, and geopolitics.
The global fissures in this space matter fundamentally for individuals who increasingly use digital spaces to carry out day-to-day activities: the rules of cyber governance, and the politics underlying them, shape everything from being an unwitting victim of state surveillance to harnessing social media for causes of empowerment to falling prey to state-sponsored cyber attacks. Yet these rules are set by a limited set of public officials and technology lawyers within restricted corridors of power. Better global governance needs to be more participatory and accessible. CIS’s research and writing has been cognizant of this, and has attempted to merge questions of global governance with constitutional and technical questions that put individuals and communities centre-stage.
Research and writing produced by CIS researchers and external collaborators from 2018 onward is detailed in the appended compendium.
Compendium
Global cybersecurity governance and cyber norms
Two decades since a treaty governing state behaviour in cyberspace was first mooted by Russia, global governance processes have meandered along. The security debate has often been polarised along “Cold War” lines, but the recent amplification of cyberspace governance as a developmental, social and economic issue has added several new vectors to this debate. This past year, two parallel processes at the United Nations General Assembly’s First Committee on Disarmament and International Security, the UN Group of Governmental Experts (UN-GGE) and the UN Open Ended Working Group, managed to produce consensus reports, but several questions on international law, norms and geopolitical co-operation remain. India has been a participant in these crucial governance debates. Both the substance of its contributions and their implications remain a key focus area for our research.
Edited Volumes
- Karthik Nachiappan and Arindrajit Basu India and Digital World-Making, Seminar 731, 1 July 2020 (featuring contributions from Manoj Kewalramani, Gunjan Chawla, Torsha Sarkar, Trisha Ray, Sameer Patil, Arun Vishwanathan, Vidushi Marda, Divij Joshi, Asoke Mukerji, Pallavi Raghavan, Karishma Mehrotra, Malavika Raghavan, Constantino Xavier, Rajen Harshe' and Suman Bery)
Long-Form Articles
- Arindrajit Basu and Elonnai Hickok, Cyberspace and External Affairs: A Memorandum for India (Memorandum, Centre for Internet and Society, 30 Nov 2018)
- The Potential for the Normative Regulation of Cyberspace (White Paper, Centre for Internet and Society, 30 July 2018)
- Arindrajit Basu and Elonnai Hickok Conceptualizing an International Security Architecture for cyberspace (Briefings of the Global Commission on the Stability of Cyberspace, Bratislava, Slovakia, May 2018)
- Sunil Abraham, Mukta Batra, Geetha Hariharan, Swaraj Barooah, and Akriti Bopanna, India's contribution to internet governance debates (NLUD Student Law Journal, 2018)
Blog Posts and Op-eds
- Arindrajit Basu, Irene Poetranto, and Justin Lau, The UN struggles to make progress in cyberspace, Carnegie Endowment for International Peace, May 19th, 2021
- Andre’ Barrinha and Arindrajit Basu, Could cyber diplomacy learn from outer space, EU Cyber Direct, 20th April 2021
- Arindrajit Basu and Pranesh Prakash, Patching the gaps in India’s cybersecurity, The Hindu, 6th March 2021
- Arindrajit Basu and Karthik Nachiappan, Will India negotiate in cyberspace?, Leiden Security and Global Affairs blog, December 16, 2020
- Elizabeth Dominic, The debate over internet governance and cybercrimes: West vs the rest?, Centre for Internet and Society, June 08, 2020
- Arindrajit Basu, India’s role in Global Cyber Policy Formulation, Lawfare, Nov 7, 2019
- Pukhraj Singh, Before cyber norms, let's talk about disanalogy and disintermediation, Centre for Internet and Society, Nov 15th, 2019
- Arindrajit Basu and Karan Saini, Setting International Norms of Cyber Conflict is Hard, But that Doesn’t Mean that We Should Stop Trying, Modern War Institute, 30th Sept, 2019
- Arindrajit Basu, Politics by other means: Fostering positive contestation and charting red lines through global governance in cyberspace (Digital Debates, Volume 6, 2019)
- Arindrajit Basu, Will the WTO Finally Tackle the ‘Trump’ Card of National Security? (The Wire, 8th May 2019)
Policy Submissions
- Arindrajit Basu, CIS Submission to OEWG (Centre for Internet and Society, Policy Submission, 2020)
- Aayush Rathi, Ambika Tandon, Elonnai Hickok, and Arindrajit Basu. “CIS Submission to UN High-Level Panel on Digital Cooperation.” Policy submission. Centre for Internet and Society, January 2019.
- Arindrajit Basu, Gurshabad Grover, and Elonnai Hickok. “Response to GCSC on Request for Consultation: Norm Package Singapore.” Centre for Internet and Society, January 17, 2019.
- Arindrajit Basu and Elonnai Hickok. Submission of Comments to the GCSC Definition of ‘Stability of Cyberspace’ (Centre for Internet and Society, September 6, 2019)
Digital Trade and India's Political Economy
The modern trading regime and its institutions were born largely into a world bereft of the internet and its implications for cross-border flows and commerce. Regulatory ambitions at the WTO have therefore played catch-up with the technological innovation that underpins the modern global digital economy. Driven by tech giants, the “developed” world has sought to restrict the policy space available to the emerging world to impose mandates regarding data localisation, source code disclosure, and taxation, among other initiatives central to development. At the same time, emerging economies have pushed back, making for a tussle that continues to this day. Our research has focussed both on issues of domestic political economy and data governance, and on the implications these domestic issues have for how India and other emerging economies negotiate on the world stage.
Long-Form articles and essays
- Arindrajit Basu, Elonnai Hickok and Aditya Chawla, The Localisation Gambit: Unpacking policy moves for the sovereign control of data in India (Centre for Internet and Society, March 19, 2019)
- Arindrajit Basu, Sovereignty in a datafied world: A framework for Indian diplomacy in Navdeep Suri and Malancha Chakrabarty (eds) A 2030 Vision for India’s Economic Diplomacy (Observer Research Foundation, 2021)
- Amber Sinha, Elonnai Hickok, Udbhav Tiwari and Arindrajit Basu, Cross Border Data-Sharing and India (Centre for Internet and Society, 2018)
Blog posts and op-eds
- Arindrajit Basu, Can the WTO build consensus on digital trade, Hinrich Foundation, October 05, 2021
- Amber Sinha, The power politics behind Twitter versus Government of India, The Wire, June 03, 2021
- Karthik Nachiappan and Arindrajit Basu, Shaping the Digital World, The Hindu, 30th July 2020
- Arindrajit Basu and Karthik Nachiappan, India and the global battle for data governance, Seminar 731, 1st July 2020
- Amber Sinha and Arindrajit Basu, Reliance Jio-Facebook deal highlights India’s need to revisit competition regulations, Scroll, 30th April 2020
- Arindrajit Basu and Amber Sinha, The realpolitik of the Reliance-Jio Facebook deal, The Diplomat, 29th April 2020
- Arindrajit Basu, The Retreat of the Data Localization Brigade: India, Indonesia, Vietnam, The Diplomat, Jan 10, 2020
- Amber Sinha and Arindrajit Basu, The Politics of India’s Data Protection Ecosystem, EPW Engage, 27 Dec 2019
- Arindrajit Basu and Justin Sherman, Key Global Takeaways from India’s Revised Personal Data Protection Bill, Lawfare, Jan 23, 2020
- Nikhil Dave, “Geo-Economic Impacts of the Coronavirus: Global Supply Chains.” Centre for Internet and Society, June 16, 2020.
International Law and Human Rights
International law and human rights are ostensibly technology-neutral and should lay the edifice for digital governance and cybersecurity today. Our research on international human rights has focussed on global surveillance practices and other internet restrictions employed by a variety of nations, and the implications these have for citizens and communities in India and similarly placed emerging economies. CIS researchers have also contributed to, and commented on, World Intellectual Property Organization negotiations at the intersection of international Intellectual Property (IP) rules and human rights.
Long-form articles
- Arindrajit Basu, Extra Territorial Surveillance and the incapacitation of international human rights law, 12 NUJS LAW REVIEW 2 (2019)
- Gurshabad Grover and Arindrajit Basu, “Internet Blockage” (Scenario contribution to NATO CCDCOE Cyber Law Toolkit, 2021)
- Arindrajit Basu and Elonnai Hickok, Conceptualizing an international framework for active private cyber defence (Indian Journal of Law and Technology, 2020)
- Arindrajit Basu, Challenging the dogmatic inevitability of extraterritorial state surveillance in Trisha Ray and Rajeswari Pillai Rajagopalan (eds) Digital Debates: CyFy Journal 2021 (New Delhi: ORF and Global Policy Journal, 2021)
Blog Posts and op-eds
- Arindrajit Basu, “Unpacking US Law And Practice On Extraterritorial Mass Surveillance In Light Of Schrems II”, Medianama, 24th August 2020
- Anubha Sinha, “World Intellectual Property Organisation: Notes from the Standing Committee on Copyright Negotiations (Day 1, Day 2, Day 3 and 4)”, July 2021
- Raghav Ahooja and Torsha Sarkar, How (not) to regulate the internet: Lessons from the Indian Subcontinent, Lawfare, September 23, 2021
Bilateral Relationships
Technology has become a crucial factor in shaping bilateral and plurilateral co-operation and competition. Given the geopolitical fissures and opportunities since 2020, our research has focussed on how technology governance and cybersecurity could impact the larger ecosystem of Indo-China and India-US relations. Going forward, we hope to undertake more research on technology in plurilateral arrangements, including the Quadrilateral Security Dialogue.
- Arindrajit Basu and Justin Sherman, The Huawei Factor in US-India Relations, The Diplomat, 22 March 2021
- Aman Nair, “TikTok: It’s Time for Biden to Make a Decision on His Digital Policy with China,” Centre for Internet and Society, January 22, 2021
- Arindrajit Basu and Gurshabad Grover, India Needs a Digital Lawfare Strategy to Counter China, The Diplomat, 8th October 2020
- Anam Ajmal, The app ban will have an impact on the holding companies...global power projection begins at home, Times of India, July 7th, 2020 (Interview with Arindrajit Basu)
- Justin Sherman and Arindrajit Basu, Trump and Modi embrace, but remain digitally divided, The Diplomat, March 05th, 2020
Emerging Technologies
Governance needs to keep pace with the challenges posed by emerging technologies, including 5G and AI. To do so, an interdisciplinary approach that evaluates these scientific advances alongside the regimes that govern them is of utmost importance. While each country will need to regulate technology through the lens of its own strategic interests and public policy priorities, it is clear that geopolitical tensions over standard-setting and governance models compel a more global outlook.
Long-Form reports
- Anoushka Soni and Elizabeth Dominic, Legal and Policy implications of Autonomous weapons systems (Centre for Internet and Society, 2020)
- Aayush Rathi, Gurshabad Grover, and Sunil Abraham, Regulating the internet: The Government of India & Standards Development at the IETF (Centre for Internet and Society, 2018)
Blog posts and op-eds
- Aman Nair, Would banning Chinese telecom companies make 5G secure in India?, Centre for Internet and Society, 22nd December 2020
- Arindrajit Basu and Justin Sherman, Two New Democratic Coalitions on 5G and AI Technologies, Lawfare, 6th August 2020
- Nikhil Dave, The 5G Factor: A Primer, Centre for Internet and Society, July 20, 2020.
- Gurshabad Grover, The Huawei bogey Indian Express, May 30th, 2019
- Arindrajit Basu and Pranav MB, What is the problem with 'Ethical AI'? An Indian perspective, Centre for Internet and Society, July 21, 2019
(This compendium was drafted by Arindrajit Basu with contributions from Anubha Sinha. Aman Nair, Gurshabad Grover, and Pranav MB reviewed the draft and provided vital insight towards its conceptualization and compilation. Dishani Mondal and Anand Badola provided important inputs at earlier stages of the process towards creating this compendium)
International Cyber Law Toolkit scenario: Internet blockage
Arindrajit Basu and Gurshabad Grover contribute a scenario and a legal analysis to the International Cyber Law in Practice Toolkit.
As per its website:
The Cyber Law Toolkit is a dynamic interactive web-based resource for legal professionals who work with matters at the intersection of international law and cyber operations. The Toolkit may be explored and utilized in a number of different ways. At its core, it presently consists of 24 hypothetical scenarios. Each scenario contains a description of cyber incidents inspired by real-world examples, accompanied by detailed legal analysis. The aim of the analysis is to examine the applicability of international law to the scenarios and the issues they raise.
A summary of the contribution:
In response to widespread protests, a State takes measures to isolate its domestic internet networks from connecting with the global internet. These actions also lead to a massive internet outage in the neighbouring State, whose internet access was contingent on interconnection with a large network in the former State. The analysis considers whether the first State’s actions amount to violations of international law, in particular with respect to the principle of sovereignty, international human rights law, international telecommunication law and the responsibility to prevent transboundary harm.
You can read the full scenario and analysis here.
The press release by NATO CCDCOE announcing the September 2021 update may be accessed here.
Beyond the PDP Bill: Governance Choices for the DPA
The Personal Data Protection Bill, 2019, was introduced in the Lok Sabha on 11 December 2019. It lays down an overarching framework for personal data protection in India. Once revised and approved by Parliament, it is likely to establish the first comprehensive data protection framework for India. However, the provisions of the Bill are only one component of the forthcoming data protection framework. The Bill further proposes setting up the Data Protection Authority (DPA) to oversee enforcement, supervision, and standard-setting. The Bill consciously chooses to vest the responsibility of administering the framework with a regulator instead of a government department. As an independent agency, the DPA is expected to be autonomous from the legislature and the Central Government and capable of making expert-driven regulatory decisions in enforcing the framework.
Furthermore, the DPA is not merely an implementing authority; it is also expected to develop privacy regulations for India by setting standards. As such, it will set the day-to-day obligations of regulated entities under its supervision. Thus, the effectiveness with which it carries out its functions will be the primary determinant of the impact of this Bill (or a revised version thereof) and the data protection framework set out under it.
The final version of the PDP Bill may or may not provide the DPA with clear guidance regarding its functions. In this article, we emphasise the need to look beyond the Bill and instead examine the specific governance choices the DPA must deliberate on vis-à-vis its standard-setting function, which are distinct from those it will encounter as part of its enforcement and supervision functions.
A brief timeline of the genesis of a distinct privacy regulator for India
The vision of an independent regulator for data protection in India emerged over the course of several intervening processes that set out to revise India’s data protection laws. In fact, the need for a dedicated data protection regulation for India, with enforceable obligations and rights, was debated years before the Aadhaar, Cambridge Analytica, and Pegasus revelations captured the public imagination and mainstreamed conversations on privacy.
The Right to Privacy Bill, 2011, which never took off, recognised the right to privacy in line with Article 21 of the Constitution of India, which pertains to the right to life and personal liberty. The Bill laid down express conditions for collecting and processing data and the rights of data subjects. It also proposed setting up a Data Protection Authority (DPA) to supervise and enforce the law and advise the government in policy matters. Upon review by the Cabinet, it was suggested that the Authority be revised to an Advisory Council, given its role under the Bill was limited.
Subsequently, in 2012, the AP Shah Committee Report recommended a principle-based data protection law, focusing on set standards while refraining from providing granular rules, to be enforced through a co-regulatory structure. This structure would consist of central and regional-level privacy commissioners, self-regulatory bodies, and data protection officers appointed by data controllers. There were also a few private members’ bills introduced between 2011 and 2019.
None of these efforts materialised, and the regulatory regime for data protection and privacy remained embedded within the Information Technology Act, 2000, and the Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011 (SPDI Rules). Though the SPDI Rules require body corporates to secure personal data, their enforcement is limited to cases of negligence in abiding by a limited set of obligations, which pertain to sensitive personal information only and which must have caused wrongful loss or gain – a high threshold for aggrieved individuals to prove. Otherwise, the Intermediary Guidelines, 2011 require all intermediaries to generally follow these Rules under Rule 3(8). The enforcement of these obligations is entrusted to adjudicating officers (AO) appointed by the central government, who are typically bureaucrats appointed as AOs in an ex-officio capacity.
By 2017, the Aadhaar litigations had provided additional traction to the calls for a dedicated and enforceable data protection framework in India. In its judgement, the Supreme Court recognised the right to privacy as a fundamental right in India and stressed the need for a dedicated data protection law. Around the same time, the Ministry of Electronics and Information Technology (MeitY) constituted a committee of experts under the chairmanship of Justice BN Srikrishna. The Srikrishna Committee undertook public consultations on a 2017 white paper, which culminated in the nearly comprehensive Personal Data Protection Bill, 2018, and an accompanying report. This 2018 Bill outlined a regulatory framework of personal data processing for India and defined data processing entities as fiduciaries, which owe a duty of care to individuals to whom personal data relates. The Bill provided for the setting up of an independent regulator that would, among other things, specify further standards for data protection and administer and enforce the provisions of the Bill.
MeitY invited public comments on this Bill and tabled a revised version, the Personal Data Protection Bill, 2019 (PDP Bill), in the Lok Sabha in December 2019. Following public pressure calling for detailed discussions on the Bill before its passing, it was referred to a Joint Parliamentary Committee (JPC) constituted for this purpose. It currently remains under review; the JPC is reportedly expected to table its report in the 2021 Winter Session of Parliament. Though the Bill is likely to undergo another round of revisions following the JPC’s review, this is the closest India has come to realising its aspirations of establishing a dedicated and enforceable data protection framework.
This Bill carries forward the choice of a distinct regulatory body, though questions remain on the degree of its independence, given the direct control granted to the central government in appointing its members and funding the DPA.
Conceptualising an Independent DPA
The Srikrishna Committee’s 2017 white paper and its 2018 report on the PDP Bill discuss the need for a regulator in the context of enforcement of its provisions. However, the DPA under the PDP Bill is tasked with extensive powers to frame detailed regulations and codes of conduct to inform the day-to-day obligations of data fiduciaries and processors. To be clear, the standard-setting function of a regulator entails laying down the standards based on which regulated entities (i.e. the data fiduciaries) will be held accountable, and the manner in which they may conduct themselves while undertaking the regulated activity (i.e. personal data processing). This is in addition to its administrative, enforcement, and quasi-judicial functions, as outlined below:
Functions of the DPA under the PDP Bill 2019
At this stage, it is important to note that the choice of regulation via a regulator is distinct from the administration of the Bill by the central or state governments. Creating a distinct regulatory body allows government procedures to be replaced with expert-driven decision-making to ensure sound economic regulation of the sector. At the same time, the independence of the regulatory authority insulates it from political processes. The third advantage of independent regulatory authorities is the scope for ‘operational flexibility’, which is embodied in the relative autonomy of its employees and its decision-making from government scrutiny.
This is also the rationale provided by the Srikrishna Committee in stating their choice to entrust the administration of the data protection law to an independent DPA. The 2017 white paper that preceded the 2018 Srikrishna Committee Report proposed a distinct regulator to provide expert-driven enforcement of laws for the highly specialised data protection sphere. Secondly, the regulator would serve as a single point of contact for entities seeking guidance and will ensure consistency by issuing rules, standards, and guidelines. The Srikrishna Committee Report concretised this idea and proposed a sector-agnostic regulator that is expected to undertake expertise-driven standard-setting, enforcement, and adjudication under the Bill. The PDP Bill carries forward this conception of a DPA, which is distinct from the central government.
Conceptualised as such, the DPA has a completely new set of questions to contend with. Specifically, regulatory bodies require additional safeguards to overcome the legitimacy and accountability questions that arise when law-making is carried out not by elected members of the legislature, but via the unelected executive. The DPA would need to incorporate democratic decision-making processes to overcome the deficit of public participation in an expert-driven body. Thus, the meta-objective of ensuring autonomous, expertise-driven, and legitimate regulation of personal data processing necessitates that the regulator has sufficient independence from political interference, is populated with subject matter experts and competent decision-makers, and further has democratic decision-making procedures.
Further, the standard-setting role of the regulator does not receive sufficient attention in terms of providing distinct procedural or substantive safeguards either in the legislation or public policy guidance.
Reconnaissance under the PDP Bill: How well does it guide the DPA?
At this time, the PDP Bill is the primary guidance document that defines the DPA and its overall structure. India also lacks an overarching statute or binding framework that lays down granular guidance on regulation-making by regulatory agencies.
The PDP Bill, in its current iteration, sets out skeletal provisions to guide the DPA in achieving its objectives. Specifically, the Bill provides guidance limited to the following:
- Parliamentary scrutiny of regulations: The DPA must table all its regulations before the Parliament. This is meant to accord legislative scrutiny to binding legal standards promulgated by unelected officials.
- Consistency with the Act: All regulations should be consistent with the Act and the rules framed under it. This integrates a standard of administrative law to a limited extent within the regulation-making process.
However, India’s past track record indicates that regulations, once tabled before the Parliament, are rarely questioned or scrutinised. Judicial review is typically based on ‘thin’ procedural considerations such as whether the regulation is unconstitutional, arbitrary, ultra vires, or goes beyond the statutory obligations or jurisdiction of the regulator. In any event, judicial review is possible only when an instrument is challenged by a litigant, and, therefore, it may not always be a robust ex-ante check on the exercise of this power. A third challenge arises where instruments other than regulations are issued by the regulator. These could be circulars, directions, guidelines, and even FAQs, which are rarely bound by even the minimal procedural mandate of being tabled before the Parliament. To be sure, older regulators including the Reserve Bank of India (RBI) and the Securities and Exchange Board of India (SEBI) also face similar issues, which they have attempted to address through various methods including voluntary public consultations, stakeholder meetings, and publication of minutes of meetings. These are useful tools for the DPA to consider as well.
Apart from these, specific guidance is provided with respect to issuing and approving codes of practice and issuing directions as follows:
- Codes of practice: The DPA is required to (i) ensure transparency,[1] (ii) consult with other sectoral regulators and stakeholders, and (iii) follow a procedure to be prescribed by the central government prior to the notification of codes of practice under the Bill.[2]
- Directions: The DPA may issue directions to individual regulated entities or classes of such entities from time to time, provided these entities have been given the opportunity to be heard by the DPA before such directions are issued.[3]
However, the meaning of transparency and the process for engaging with sectoral regulators remains unspecified under the Bill. Furthermore, the central government has been provided vast discretion to formulate these procedures, as the Bill does not specify the principles or outcomes sought to be achieved via these procedures. The Bill also does not specify instances where such directions may be issued and in which form.
Thus, as per its last publicly available iteration, the Bill remains silent on the following:
- The principles that may guide the DPA in its functioning.
- The procedure to be followed for issuing regulations and other subordinate legislation under the Bill.
- The relevant regulatory instruments, other than regulations and codes of practice – such as circulars, guidelines, FAQs, etc. – that may be issued by the DPA.
- The specifics regarding the members and employees within the DPA who are empowered to make these regulations.
It is unclear whether the JPC will revise the DPA’s structure or recommend statutory guidance for the DPA in executing any of its functions. This is unlikely, given that parent statutes for other regulators typically omit such guidance. As a result, the DPA may be required to make intentional and proactive choices on these matters, much like their regulatory counterparts in India. These are discussed in the section below.
Envisaging a Proactive Role for the DPA
As the primary regulatory body in charge of the enforcement of the forthcoming data protection framework, what should be the role of the DPA in setting standards for data protection?
The complexity of the subject matter, and the DPA’s role as the frontline body to define day-to-day operational standards for data protection for the entire digital economy, necessitates that it develop transparent guiding principles and procedures. Furthermore, given that the DPA’s autonomy and capacity are currently unclear, the DPA will need to make deliberate choices regarding how it conducts itself. In this regard, the skeletal nature of the PDP Bill also allows the DPA to determine its own procedures to carry out its tasks effectively.
This is not uncommon in India: various regulators have devised frameworks to create benchmarks for themselves. The Airports Economic Regulatory Authority (AERA) is obligated to follow a dedicated consultation process as per an explicit transparency mandate under the parent statute. However, the Insolvency and Bankruptcy Board of India (IBBI) has, on its own initiative, formulated regulations to guide its regulation-making functions. In other cases, consultation processes have been integrated into the respective framework through judicial intervention: the Telecom Regulatory Authority of India (TRAI) has been mandated to undertake consultations through judicial interpretation of the requirement for transparency under the Telecom Regulatory Authority of India Act, 1997 (TRAI Act).
In this regard, we develop a list of considerations that the DPA should look to address while carrying out its standard-setting functions. We also draw on best practices by Indian regulators and abroad, which can help identify feasible solutions for an effective DPA for India.
The choice of regulatory instruments
The DPA is empowered to issue regulations, codes of practice, and directions under the Bill. At the same time, regulators in India routinely issue other regulatory instruments to assign obligations and clarify them. Some commonly used regulatory instruments are outlined below. The terms used for instruments are not standard across regulators, and the list and description set out below outline the main concepts and not fixed labels for the instruments.
Overview of regulatory instruments
| | Circulars and Master Circulars | Guidelines | FAQs | Directions |
| --- | --- | --- | --- | --- |
| Content | Circulars are used to prescribe detailed obligations and prohibitions for regulated entities and can mimic regulations. Master circulars consolidate circulars on a particular topic periodically. | These may be administrative or substantive, depending on the practice of the regulator in question. | Issued in the public interest by regulators to clarify the regulatory framework administered by them. They cannot prescribe new standards or create obligations. | Issued to provide focused instructions to individual entities or classes of entities, in response to an adjudicatory action or in view of a current challenge. |
| Binding character | Generally binding in the same manner as regulations and rules. However, if they go beyond the parent Act or existing rules and regulations, they may be struck down on judicial review. | May or may not be binding, depending upon the language employed or the regulator’s practice. | Unclear whether these are binding and to what extent. However, crucial clarifications on important concepts sometimes emerge from FAQs. | Binding in respect of the class of regulated entities to whom they are issued. |
| Parliamentary scrutiny | Unlike regulations, these do not have to be laid before the Parliament. | | | |
Thus, all these instruments have, to varying degrees, been used to create binding obligations for regulated entities. The choice of regulatory instrument is not made systematically. Indeed, even a hierarchy of instruments and their functions is not clearly set out by most regulators, and the rationale for issuing a circular as against a regulation is also unclear. A study on regulatory performance in India by Burman and Zaveri (2018) has highlighted an over-reliance on instruments such as circulars. As per their study, between 2014 and 2016, RBI and SEBI issued 1,016 and 122 circulars, as against 48 and 51 regulations, respectively. These circulars are not bound by the same pre-consultative mandate, nor are they required to be laid before the Parliament. While circulars may have been intended to routinely lay down administrative or procedural requirements, the study narrows its frame of reference to circulars that lay down substantive regulatory requirements. In this light, it is unclear why parliamentary scrutiny is mandated for regulations alone, and not for instruments like circulars and directions, even though they lay down similarly substantive requirements. Furthermore, there have also been instances where instruments like FAQs have gone beyond their advisory scope to provide new directions or definitions that were not previously set out in binding instruments like regulations or circulars.
The DPA has been provided specific powers to issue regulations, codes of practice, and directions. However, the rationale for issuing one instead of the other has been absent from the PDP Bill so far. In such a scenario, it is important that the DPA transparently outlines the types of instruments it wishes to use, whether they are binding or advisory, and the procedure to be followed for issuing each.
Pre-legislative consultative rule-making
Participatory and consultative processes have emerged as core components of democratic rule-making by regulators. Transparent consultative mechanisms could also ameliorate capacity challenges in a new regulator (particularly for technical matters) and help enhance public confidence in the regulator.
In India, several regulators have adopted consultation mechanisms even when there is no specific statutory requirement. SEBI and IBBI routinely issue discussion papers and consultation papers, and the RBI issues draft instruments soliciting comments. As discussed previously, TRAI and AERA have distinct transparency mandates under which they carry out consultations before issuing regulations. However, these processes are not mandated for all forms of subordinate legislation. Taking cognizance of this, the Financial Sector Legislative Reforms Commission (FSLRC) recommended transparency in the regulation-making process. This was carried forward by the Financial Stability and Development Council (FSDC), which recommended that consultation processes should be a prerequisite for all subordinate legislation, including circulars, guidelines, etc. A study of regulators’ adherence to these mandates, spanning TRAI, AERA, SEBI, and RBI, demonstrated that this pre-consultation mandate is followed inconsistently, if at all. Predictable consultation practices are therefore critical.
Furthermore, the study stated that it could not determine whether the consultation processes yielded meaningful participation, given that regulators are not obligated to disclose how public feedback was integrated into the rule-making process. Subordinate legislations issued in the form of circulars and guidelines also do not typically undergo the same rigorous consultation processes. Thus, an ideal consultation framework would comprise:
- Publication of the draft subordinate legislation along with a detailed explanation of the policy objectives. Further, the regulator should publish the internal or external studies conducted to arrive at the proposed legislation to engender meaningful discussion.
- Permitting sufficient time for the public and interested stakeholders to respond to the draft.
- Publishing all feedback received for the public to assess, and allowing them to respond to the feedback.
However, beyond specifying the manner of conducting consultations, it will be important for the DPA to determine where they are mandatory and binding, and for which type of subordinate legislations. These are discussed in the next section.
Choice of consultation mandates for distinct regulatory instruments
While the Bill provides for consultation processes for issuing and approving codes of practice, no such mechanism has been set out for other instruments. Nevertheless, specifying consultation mandates for different regulatory instruments is important to ensure that decision-making is consistent and regulation-making remains bound by transparent and accountable processes. As discussed above, regulatory instruments such as circulars and FAQs are not necessarily bound by the same consultation mandates in India. This distinction has been clarified in more sophisticated administrative law frameworks abroad. For instance, under the Administrative Procedure Act in the United States (US), all substantive rules made by regulatory agencies are bound by a consultation process, which requires notice of the proposed rule-making and public feedback. This does not preclude a regulatory agency from issuing clarifications, guidelines, and supplemental information on the rules issued. These documents do not require the consultation process otherwise required for formal rules; however, they cannot be used to expand the scope of the rules, set new legal standards, or have the effect of amending the rules. Nevertheless, agencies are not precluded from choosing to seek public feedback on such documents.
Similarly, the Information Commissioner’s Office in the United Kingdom (UK) takes into consideration public consultations and surveys while issuing toolkits and guidance for regulated entities on how to comply with the data protection framework in the UK.
Here, the DPA may choose to subject strictly binding instruments like regulations and codes of practice to pre-legislative consultation mandates, while softer mechanisms like FAQs may be subject to the publication of a detailed outline of the policy objective or online surveys to invite non-binding, advisory feedback. For each of these, the DPA will nonetheless need to create specific criteria by which it classifies instruments as binding and advisory, and further outline specific pre-legislative mandates for each category.
Framework for issuing regulatory instruments and instructions
While the DPA is likely to issue several instruments, the system based on which these instruments will be issued is not yet clear. Without a clearly thought-out framework, different departments within the regulator typically issue a series of directions, circulars, regulations, and other instruments. This raises questions regarding the consistency between instruments. This also requires stakeholders to go through multiple instruments to find the position of law on a given issue. Older Indian regulators are now facing challenges in adapting their ad hoc system into a framework. For example, the RBI currently issues a series of circulars and guidelines that are periodically consolidated on a subject-matter basis as Master Circulars and Master Directions. These are then updated and published on their website. IBBI also publishes handbooks and information brochures that consolidate instruments in an accessible manner.
While these are useful improvements, these practices cannot keep pace with rapid changes in regulatory instructions and are not complete or user-friendly (for example, the subject-matter based consolidation does not allow for filtering regulatory instructions by entity). Other jurisdictions have developed different techniques such as formal codification processes to consolidate regulations issued by government agencies under one unified code, register, or handbook, websites that allow for searches based on different parameters (subject-matter, type of instrument, chronology, entity-based), and guides tailored to different types of entities. The DPA, as a new regulator, can learn from this experience and adopt a consistent framework right from the beginning.
Further, an ethos of responsive regulation also requires the DPA to evaluate and revise directions and regulations periodically, in response to market and technology trends. A commitment to periodic evaluation of subordinate legislations entrenched in the rules is critical to reducing the dependence on officials and leadership, which may change. For instance, the IBBI has set out a mandatory review of regulations issued by it every three years.
Dedicating capacity for drafting subordinate legislations
The DPA has been granted the discretion to appoint experts and staff its offices with the personnel it needs. A study of European data protection authorities shows that by the time the General Data Protection Regulation, 2016 became effective, most of the authorities had increased their number of employees, with some even reporting a 240% increase. The annual spending on the authorities also went up in most countries. While these authorities do not necessarily frame subordinate legislations, they nonetheless create guidance toolkits and codes of practice as part of their supervisory functions.
In this regard, the DPA will need to ensure it has dedicated capacity in-house to draft subordinate legislations. Since regulators are generally seen as enforcement authorities, there is inadequate investment in capacity-building for drafting legislations in India.
Moreover, considering the multiplicity of instruments and guidance documents the DPA is expected to issue, it may seek to create templates for these instruments, along with compulsory constituents of different types of instruments. For instance, the Office of the Australian Information Commissioner is required to include a mandatory set of components while issuing or approving binding industry codes of practice.
Conclusion
The Personal Data Protection Bill, 2019 (in the final form recommended by the JPC and accepted by the MeitY) will usher in a new chapter in India’s data protection timeline. While the Bill will finally effectuate a nearly comprehensive data protection framework for India, it will also establish a new regulatory framework that sets up a new regulator, the DPA, to oversee the new data protection law. This DPA will be empowered to regulate entities across sectors and is likely to determine the success of the data protection law in India.
Furthermore, the DPA must not only contend with the complexity of markets and the fast pace of technological change, but also address anticipated regulatory capacity deficits, low levels of user literacy, the number and diversity of entities within its regulatory ambit, and the need to secure individual privacy within and outside the digital realm.
Thus, looking ahead, we must account for the questions of governance that the forthcoming DPA is likely to face, as these will directly impact how entities and citizens engage with the DPA. In India, regulatory agencies adopt distinct choices to fulfil their functions. Regulators have also fared variably in ensuring transparent and accountable decision-making driven by demonstrable expertise. Even if the final form of the PDP Bill does not address these gaps, the DPA has the opportunity to integrate benchmarks and best practices as discussed above within its own governance framework from the get-go as it takes on its daunting responsibilities under the PDP Bill.
(The authors are Research Fellow, Law, Technology and Society Initiative and Project Lead, Regulatory Governance Project respectively at the National Law School of India University, Bangalore. Views are personal.)
This post was reviewed by Vipul Kharbanda and Shweta Mohandas
References
- For a discussion on distinct regulatory choices, please see TV Somanathan, The Administrative and Regulatory State in Sujit Choudhry, Madhav Khosla, et al. (eds), Oxford Handbook of the Indian Constitution (2016).
- On best practices for consultative law-making, see generally European Union Better Regulation Communication, Guidelines for Effective Regulatory Consultations (Canada), and OECD Best Practice Principles for Regulatory Policy: The Governance of Regulators, 2014.
[1] Personal Data Protection Bill 2019, § 50(3).
[2] Personal Data Protection Bill 2019, § 50(4).
[3] Personal Data Protection Bill 2019, § 51.
Launching CIS’s Flagship Report on Private Crypto-Assets
This event will serve as a venue to bring together the various stakeholders involved in the crypto-asset space to discuss the state of crypto-asset regulation in India from a multitude of perspectives.
About the private crypto-assets report
The first output under this agenda is our report on regulating private cryptocurrencies in India. This report aims to act as an introductory resource for policymakers who are looking to implement a regulatory framework for private crypto-assets. The report covers the technical elements of crypto-assets, their history, and their proposed use cases, as well as their benefits and limitations. It also examines how crypto-assets fit within India’s current regulatory and legislative frameworks and makes clear recommendations for the same.
About the Event
The launch event will feature an initial presentation by researchers at CIS on the various findings and recommendations of its flagship report. This will be followed by a moderated discussion with 5 panelists who represent the space in policy, academia and industry. The discussion will be centered around the current status of crypto-assets in India, the government’s new proposed regulations and what the future holds for the Indian crypto market.
The confirmed panelists are as follows:
- Tanvi Ratna - Founder, Policy 4.0 and expert on blockchain and cryptocurrencies
- Shehnaz Ahmed - Senior Resident Fellow and Fintech Lead at Vidhi Centre for Legal Policy
- Nithya R. - Chief Executive Officer, Unos Finance
- Prashanth Irudayaraj - Head of R&D, Zebpay
- Vipul Kharbanda - Non-resident Fellow specialising in Fintech at CIS
- Aman Nair - Policy Officer, CIS (Moderator)
Registration link: https://us06web.zoom.us/webinar/register/WN_TdY-EPLoRvGY2rfsq4CENw
Agenda
17.30 - 17.35 | Welcome Note
17.35 - 18.35 | The status of private crypto assets in India
18.35 - 19.00 | Audience questions and discussion
Report on Regulation of Private Crypto-assets in India
Link to Annex 1: Excerpts from the public consultation comments received from Ripple
EXECUTIVE SUMMARY
As of May 2021, the crypto-asset market in India stood at USD 6.6 billion. With no signs of slowing down, crypto-assets have become an undeniable part of both Indian and global financial markets. In the face of this rapid growth, policymakers are faced with the critical task of developing a regulatory framework to govern private crypto-assets.
This report is an introductory resource for those who are looking to engage with the development of such a framework. It first provides an overview of the technical underpinnings of crypto-assets, their history, and their proposed use cases. It then examines how they fit within India’s current legislative and regulatory framework before the introduction of a dedicated crypto-asset law and how the government and its institutions have viewed crypto-assets so far. We present arguments for and against the adoption of private crypto-assets and compare the experiences of 11 other countries and jurisdictions. Finally, we offer specific and actionable recommendations to help policymakers develop a cohesive regulatory framework.
What are crypto-assets?
At their core, cryptocurrencies (CCs) or virtual currencies (VCs) are virtual monetary systems consisting of intangible ‘coins’ that use blockchain technology and serve a multitude of functions. While the word ‘cryptocurrency’ is often used as an umbrella term to describe various assets within the crypto-market, we note that these assets do not all share the same characteristics and often serve different functions. Therefore, for the purposes of this report, we use the term ‘crypto-assets’ rather than ‘cryptocurrencies’ when discussing the broad range of technologies within the crypto-marketplace.
Crypto-assets utilize a distributed ledger technology (DLT) known as blockchain technology. A blockchain is a complete ledger of all recorded transactions, which is created by combining individual blocks, each of which stores some information and is secured by a hash. Blockchain, by the very nature of its architecture, can be used to ensure decentralisation, authenticity, persistence, anonymity, and auditability.
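To make the block-and-hash structure described above concrete, here is a minimal, illustrative sketch in Python. It is a toy under stated assumptions: real blockchains add consensus mechanisms, Merkle trees, digital signatures, and peer-to-peer networking, none of which appear here.

```python
# Toy illustration of a hash-linked chain of blocks (not a real blockchain).
import hashlib
import json
import time


def hash_block(block: dict) -> str:
    """Deterministically hash a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()


def new_block(transactions, previous_hash: str) -> dict:
    return {
        "timestamp": time.time(),
        "transactions": transactions,
        "previous_hash": previous_hash,
    }


# Each block stores the hash of the block before it, so altering any earlier
# block invalidates every later link in the chain.
chain = [new_block(["genesis"], previous_hash="0" * 64)]
chain.append(new_block(["alice -> bob: 5"], previous_hash=hash_block(chain[-1])))
chain.append(new_block(["bob -> carol: 2"], previous_hash=hash_block(chain[-1])))

# Verify integrity: recompute each predecessor's hash and compare.
for prev, curr in zip(chain, chain[1:]):
    assert curr["previous_hash"] == hash_block(prev)
print("chain of", len(chain), "blocks verified")
```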
History and proposed uses of crypto-assets
While other forms of crypto-assets have been proposed in the past, the modern conception of one can be traced to a research paper published under the pseudonym Satoshi Nakamoto, which first proposed the idea of bitcoin. Bitcoin, as it was presented, seemingly solved the ‘double spending’ problem by utilising a form of DLT known as blockchain. Bitcoin, which was first operationalised on 3 January 2009, has since become the dominant crypto-asset globally – trading at over USD 57,000 per bitcoin.
Following the popularity of bitcoin, several alternatives (known as alt coins) were launched, the most popular of which is ethereum. According to CoinMarketCap, as of April 2021, there are over 9,500 traded cryptocurrencies in existence, with a total market capitalisation of over USD 2 trillion. The rise of bitcoin and other crypto-assets also led to the emergence of crypto-exchanges such as Binance. These exchanges act as platforms for users to buy, sell, and trade crypto-assets.
Many potential use cases for crypto-assets have been identified, including:
- A method of payment
- A tradeable asset
- Initial coin offerings
- Crypto-asset funds and derivatives
- Crypto-asset-related services
Legal frameworks and private crypto-assets in India
While crypto-assets are also referred to as virtual currencies and cryptocurrencies, they do not currently satisfy the legal requirements to be considered as currency under Indian law. Although they have not yet been classified as a financial instrument, it is possible, through executive action, to include them within the definition of any of the following instruments: currency, foreign currency, derivative, collective investment scheme, or payment system. Such a move would give the government a legal basis to regulate the hitherto unregulated crypto-asset market, thereby bringing about much-needed stability and minimising the risk of fraudulent practices.
Understanding the case for private crypto-assets
This report examines both the benefits and limitations of crypto-assets across a number of their use cases.
- Benefits of crypto-assets as a currency and asset:
  - Decentralised and verifiable transactions
  - Reduced transaction costs
  - Confidentiality
  - Security
  - Easier cross-border transactions
  - A potential tool for financial inclusion
  - As a tool for verifying asset ownership
- Limitations of crypto-assets as a currency and asset:
  - High environmental costs
  - Replaces traditional transaction costs with new costs
  - A few actors dominate mining
  - Cannot replace traditional money
  - Introduces challenges in implementing monetary policies
  - Lack of network externalities
  - The limited actual impact on financial inclusion
  - Use for illegal activities
  - Prone to schemes and scams
International Perspectives
In order to draw inferences and lessons from a multitude of perspectives, we examined the regulatory frameworks governing private crypto-assets in the following jurisdictions:
- European Union
- El Salvador
- United States
- United Kingdom
- Japan
- Venezuela
- South Africa
- Singapore
- Indonesia
- Switzerland
- China
Recommendations
Keeping in mind the benefits and limitations, as well as the experiences of countries around the world, we recommend the following measures to develop an appropriate regulatory framework in India. We have divided our recommendations into two types: immediate or short-term measures and longer-term measures.
Immediate/Short-Term Measures
- Steering clear of bans on private crypto-assets
Earlier calls by regulatory bodies to ban private crypto-assets resulted in crypto-assets being assimilated into the unregulated black market, thereby stifling potential innovation. To that end, we recommend avoiding a ban and adopting a regulatory approach instead.
- Regulatory bodies should use their ad hoc powers to exercise interim oversight
During the interim period, prior to the adoption of a dedicated crypto-asset legislation, crypto-assets could be included under one of the existing financial instrument categories. The regulations governing them would apply to both cryptocurrency exchanges as well as vendors who accept payments in cryptocurrencies.
Long-Term Measures
- Specific regulatory framework
There needs to be an independent regulatory framework specific to crypto-assets since the unique features of crypto-assets make them unsuitable to be regulated through the existing regulatory frameworks.
- Identify clear definitions
Policymakers should adopt a definition of crypto-assets that includes entities that have emerged within the crypto space but which cannot be classified as ‘currencies’. They must also categorise and define these various entities as well as crypto-asset service providers.
- Limit the scope of regulations to crypto-assets rather than their underlying technologies
Any proposed regulation must differentiate between the assets themselves and the technology underlying them. This would ensure that crypto-assets are not defined by the technology they currently use (i.e., DLT and blockchain) but by the purpose they serve.
- Introduce a licensing and registration system
A licensing system, similar to those adopted in other jurisdictions such as the EU or New York, can be adopted to ensure that the state is able to effectively monitor crypto-related activities.
- Make provisions for handling environmental concerns
A dedicated taxation programme and strict limitations on mining can minimise the environmental costs associated with crypto-assets.
- Consumer protection measures
Any potential licensing system must include mandatory obligations for crypto-asset service providers that ensure that consumer rights are protected.
- Take measures to limit the impact of crypto-asset volatility on the wider financial market
Governments must take measures to ensure that the volatility of crypto-markets does not have a significant knock-on effect on the wider financial market. Such steps can include limiting financial institution holdings and dealings in crypto-assets.
- Extend Anti-Money Laundering/Countering the Financing of Terrorism norms and exchange control regulations
Given the anonymous nature of crypto-assets and their potential for use in illegal activities, we recommend ensuring that crypto-specific anti-money laundering, prohibition of terror financing and foreign exchange management rules are introduced.
- Create an oversight body
Subject to the availability of resources, the government might consider establishing a dedicated body to oversee and research changes in the crypto-marketplace and make appropriate suggestions to the concerned regulatory authorities.
- Taxation
Specific amendments to the tax provisions should resolve the existing uncertainty about which tax provisions apply to various crypto-asset transactions.
- Stablecoin-specific regulation
Given the specific position occupied by stablecoins, and the unique role that they perform in the crypto-ecosystem, any legislation that seeks to regulate private crypto-assets must focus heavily on them. To that end, policymakers should pay special attention to identifying the various entities associated with stablecoins, applying greater regulatory scrutiny onto those entities and taking steps to limit the risk that stablecoins pose to the wider financial system.
Note
Online caste-hate speech: Pervasive discrimination and humiliation on social media
Download the research report, which includes a preface authored by Murali Shanmugavelan.
Executive summary
In India, religious texts, social customs, rituals, and everyday cultural practices legitimise the use of hate speech against marginalised caste groups. Notions of ‘purity’ of “upper-caste” groups, and conversely of ‘pollution’ of “lower-caste” groups, have made the latter subject to discrimination, violence, and dehumanisation. These dynamics invariably manifest online, with social media platforms becoming sites of caste discrimination and humiliation.
This report explores two research questions. First, what are the specific contours of caste-hate speech and abuse online? Semi-structured interviews with 12 scholars and activists belonging to DBA groups show that marginalised groups regularly face hate and harassment based on their caste. In addition to the overt hate, DBA individuals and groups are often targeted with abuse for availing reservations – a constitutionally mandated right. More covert forms of hate and abuse are also prevalent: trolls mix caste names and words from different languages together so that their comments appear meaningless to individuals who are not keenly aware of the local context.
Such hateful expression often emerges as a reaction from “upper-caste” groups to DBA resistance and social justice movements. Our respondents reported that the hateful expression can sometimes silence caste-marginalised groups and individuals, exclude them from conversations, and adversely impact their physical and mental wellbeing.
The second question we explore is how popular social media platforms and online spaces moderate caste-hate speech and abuse. We analysed the community guidelines, policies, and transparency reports of Facebook, Twitter, YouTube, and Clubhouse. We find that Facebook, Twitter, and YouTube incorporated ‘caste’ as a protected characteristic in their hate speech and harassment policies only in the last two or three years – many years after they entered Indian and South Asian markets – showing a disregard for the regional contexts of their users. Even after these policy changes, many platforms – whose forms for reporting harmful content list gender and race – still do not list caste.
Social media companies should radically increase their investment and capacity in understanding regional contexts and languages; they must focus on the dynamics of casteist hate and abuse. They will need to collaborate with a diverse set of DBA activists to ensure that their community guidelines effectively tackle overt, covert, and hyperlocal forms of caste-hate speech and abuse, and that their implementation and reporting processes match these policy commitments.
Download the research report, authored by Damni Kain, Shivangi Narayan, Torsha Sarkar and Gurshabad Grover, with a preface authored by Murali Shanmugavelan (Faculty Fellow – Race and Technology, Data and Society).
Call for respondents: the implementation of government-ordered censorship
Call for respondents
To study the implementation of online censorship and the experience of content creators, the Centre for Internet and Society is conducting interviews with people whose content has been affected by blocking orders from the Indian Government. We aim to empirically record the extent of government notice and opportunity for hearing made available to content creators.
If you, or someone you know, has had their content blocked or withheld by a blocking order, please reach out to us via email (divyansha[at]cis-india.org) or DM us on Twitter.
The types of content covered include (but are not limited to):
- blocking or withholding access to posts or accounts on social media
- blocking or withholding access to websites by ISPs
- search results that have been delisted by blocking orders
Please read below for a brief legal background on the powers of the Central Government to issue content takedown orders. If you have any concerns about the nature of attribution of your responses, please reach out: we are confident we will be able to find a solution that works for you.
Background
Online censorship in India is increasing at an alarming rate, with the Government of India ordering around 10,000 webpages/social media accounts to be blocked in 2020 alone. The legal powers and procedures that enable such censorship thus deserve closer scrutiny. In particular, Section 69A of the Information Technology (IT) Act permits the Central Government to ask intermediaries (ranging from internet service providers to social media platforms) to block certain content for their users. Among other grounds, these powers can be used by the government in the interest of Indian sovereignty, national security, and public order.
The regulations (‘blocking rules’) issued under the Act lay down the procedure for the government to exercise such powers, and have long been criticised for enabling an opaque regime of online censorship. Such orders are passed by a committee comprising only government officials. There is no judicial or parliamentary oversight over such orders. The government does in certain instances have an obligation to find the content creator to give them a notice or hearing, but this has rarely been implemented.
To exacerbate this unaccountable form of censorship, there is a rule mandating the confidentiality of content takedown orders. This means that these orders are not public, severely impeding the ability to challenge broad censorship in courts. There are also cases where even individuals who created the affected content were not able to access the orders! Journalists, civil society organisations and activists are also hindered from probing how widespread India’s online censorship is, since the Government routinely rejects Right to Information (RTI) requests about these orders based on the confidentiality provision or national security grounds.
When this censorship regime was challenged in Shreya Singhal v. Union of India, the Supreme Court stated that the procedural safeguards were adequate, but that such content takedown orders must always be open to challenge in court. Specifically, multiple legal scholars have read the judgment to mean that a pre-decisional hearing must be afforded to the affected content creators.
Our forthcoming research project (described above) seeks to empirically investigate whether the Central Government is following this obligation.
What does the 2022 Finance Bill mean for crypto-assets in India?
The recent budget speech saw the Finance Minister propose a slew of measures that seek to clarify the taxation regime with regard to crypto-assets in India. The speech, and the proposed measures, have led to significant discussion and debate within the domestic crypto-ecosystem, as questions continue to be raised about the ambiguous legality of crypto-assets in the absence of any dedicated crypto legislation. In the face of this uncertainty, this blog post looks to contextualise the proposals put forth by the Finance Minister in her speech and clarify what they mean for crypto-asset regulation and use in India.
Crypto-assets defined as a virtual digital asset and taxed at 30%
The 2022 Finance Bill introduces the definition of a ‘virtual digital asset’ as an amendment to the Income Tax Act, 1961. The government defines a virtual digital asset as:
- Any information or code or number or token (not being Indian currency or foreign currency), generated through cryptographic means or otherwise, by whatever name called, providing a digital representation of value exchanged with or without consideration, with the promise or representation of having inherent value, or functions as a store of value or a unit of account including its use in any financial transaction or investment, but not limited to investment scheme; and can be transferred, stored or traded electronically;
- A non-fungible token or any other token of similar nature, by whatever name called;
- Any other digital asset, as the Central Government may, by notification in the Official Gazette, specify.
Furthermore, the bill introduces section 115BBH to the Income Tax Act, according to which income or profits generated from the transfer of ‘virtual digital assets’ would be taxed at the rate of 30%. The Finance Minister further clarified that any expenses incurred in carrying out such trades cannot be set off or deducted from the profits generated, except the amount spent on buying the crypto-asset in the first place. Further, losses incurred from crypto-asset trading cannot be carried over to subsequent financial years.
While this clarification of the provisions relating to crypto-assets under the Income Tax Act, 1961 drew much attention for its potential impact, it is important to note that this measure is far from a departure from the government’s pre-existing stance. In responses to parliamentary questions on 30th November 2021 and 23rd March 2021, the Minister of Finance repeatedly stressed the liability to pay taxes on any profits arising out of crypto trading under Indian tax law.
The budget speech merely clarified the provisions under which profits from crypto trading shall be taxed. Prior to this, there had been a fair amount of debate as to whether profits from crypto trading would be included as part of regular income, income from other sources, or whether they would be taxed as capital gains. This distinction and categorisation was critical as it determined the rate of tax applicable to crypto profits. However, with the proposed section 115BBH, the government has made clear how these profits are to be taxed.
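As a rough illustration of this reading of the proposed section 115BBH (30% on transfer gains, only the cost of acquisition deductible, no set-off or carry-forward of losses), consider the following sketch. The function and figures are hypothetical, surcharge and cess are ignored, and this is not tax advice.

```python
# Simplified illustration of the proposed section 115BBH, as read above.
# Only the cost of acquisition is deductible; other expenses (exchange fees,
# electricity, etc.) are intentionally ignored, and losses are not carried forward.

def tax_under_115bbh(sale_price: float, purchase_price: float) -> float:
    gain = sale_price - purchase_price
    if gain <= 0:
        return 0.0  # losses cannot be set off against other income or carried forward
    return 0.30 * gain

# Bought a crypto-asset at ₹1,00,000 and sold it at ₹1,50,000, paying ₹2,000 in fees:
# the fees do not reduce the taxable gain, so tax is 30% of ₹50,000.
print(tax_under_115bbh(150_000, 100_000))  # 15000.0
```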
Introduction of TDS onto crypto-asset transactions and transfers
Another provision that this budget has proposed is the introduction of a 1% TDS (Tax Deducted at Source) on any transfer of a crypto-asset, provided that other conditions in relation to aggregate sales specified in the proposed section 194-S are satisfied. It must be noted that this TDS shall be payable not only on cash transfers but even on trades where one cryptocurrency has been traded for another cryptocurrency. Thus, trades where Bitcoin is bought using Tether would also be liable to such TDS deduction. Interestingly, the way the provision is currently drafted, if any person accepts payment for any goods or services in cryptocurrency, then such a person would be liable to pay TDS at 1%. This is because the Income Tax Act treats the cryptocurrency as the asset being bought or sold and treats the good or service being provided by the “seller” as the consideration. Thus, instead of being looked at as a transaction where one person is paying for something by using cryptocurrency, it is looked at as a transaction where the other person is buying the cryptocurrency and paying for it in kind (through the goods or services of the “seller”).
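A minimal sketch of this 1% deduction, assuming the aggregate-sale thresholds in the proposed section 194-S are already met; the function name and figures are illustrative only.

```python
# Illustrative 1% TDS on the consideration for a transfer of a virtual digital asset.
TDS_RATE = 0.01

def tds_on_transfer(consideration_in_inr: float) -> float:
    return TDS_RATE * consideration_in_inr

# Buying Bitcoin worth ₹50,000 with Tether: on the reading above, the Tether leg
# is itself a transfer of a crypto-asset, so a 1% deduction can arise on each leg.
print(tds_on_transfer(50_000))  # 500.0
```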
Questions of enforcement still remain
While these measures do bring a certain level of clarity and stability in the taxation regime with regard to crypto-assets, one still needs to grapple with the issue of their implementation. News reports suggest that about 15-20 percent of the investors in crypto assets are in the 18-20 year age group. A number of such investors do not file tax returns since they are mainly students investing their extra savings or “pocket money” to make a quick profit. Ensuring that this demographic actually follows the letter of the law may be a challenge for the revenue authorities and it would be interesting to see how they overcome it.
Submission to the Facebook Oversight Board: Policy on Cross-checks
Whether a cross-check system is needed
Recommendation for the Board: The Board should investigate the cross-check system as part of Meta’s larger problems with algorithmically amplified speech, and how such speech gets moderated.
Explanation: The issues surrounding Meta’s cross-check system are not an isolated phenomenon, but rather a reflection of the problems of algorithmically amplified speech, as well as the lack of transparency in the company’s content moderation processes at large. At the outset, it must be stated that the majority of information on the cross-check system only became available after the media reports published by the Wall Street Journal. While these reports have been extensive in documenting various aspects of the system, there is no guarantee that the disclosures obtained by them provide the complete picture of the system. Further, given that Meta has been found to purposely mislead the Board and the public on how the cross-check system operates, it is worth investigating the incentives that necessitate the cross-check system in the first place.
Meta claims that the cross-check system works as a check for false positives: they “employ additional reviews for high-visibility content that may violate our policies.” Essentially, they want to make sure that content that stays up on the platform and reaches a large audience follows their content guidelines. However, previous disclosures have shown that policy executives have prioritized the company’s ‘business interests’ over removing content that violates their policies, and have waited to act on known problematic content until significant external pressure built up, including in India. In this context, the cross-check system seems less like a measure designed to protect users who might be exposed to problematic content, and more like a measure for managing public perception of the company.
Thus the Board should investigate both how content gains an audience on the platform, and how it gets moderated. Previous whistleblower disclosures have shown that the mechanics of algorithmically amplified speech, which prioritizes engagement and growth over safety, are easily taken advantage of by bad actors to promote their viewpoints through artificially induced virality. The cross-check system and other measures of content moderation at scale would not be needed if it was harder to spread problematic content on the platform in the first place. Instead of focusing only on one specific system, the Board needs to urge Meta to re-evaluate the incentives that drive content sharing on the platform and come up with ways that make the platform safer.
Meta’s Obligations under Human Rights Law
Recommendation for the Board: The Board must consider the cross-check system to be violative of Meta’s obligations under the International Covenant on Civil and Political Rights (ICCPR). Additionally, the cross-check ranker must incorporate Meta’s commitments towards human rights, as outlined in its Corporate Human Rights Policy.
Explanation: Meta’s content moderation, and by extension its cross-check system, is bound by both international human rights law and the Board’s past decisions. At the outset, the system fails the three-pronged test of legality, legitimacy, and necessity and proportionality, as delineated under Article 19(3) of the International Covenant on Civil and Political Rights (ICCPR). Firstly, this system has been “scattered throughout the company, without clear governance or ownership”, which violates the legality principle, since there is no clear guidance on what sort of speech, or which classes of users, would receive the treatment of this system. Secondly, there is no understanding of the legitimacy of the aims with which this system was set up in the first place, beyond Meta’s own assertions, which have been countered by evidence to the contrary. Thirdly, the necessity and proportionality of the restriction has to be read along with the Rabat Plan of Action, which requires that for a statement to become a criminal offense, a six-pronged threshold test is to be applied: a) the social and political context, b) the speaker’s position or status in society, c) intent to incite the audience against a target group, d) content and form of the speech, e) extent of its dissemination, and f) likelihood of harm. As news reports have indicated, Meta has been utilizing the cross-check system to privilege speech from influential users, and in the process has shielded inflammatory, inciting speech that would otherwise have crossed the Rabat threshold. As such, the third requirement is not fulfilled either.
Additionally, Meta’s own Corporate Human Rights Policy commits to respecting human rights in line with the UN Guiding Principles on Business and Human Rights (UNGPs). Therefore, the cross-check ranker must incorporate these existing commitments to human rights, including:
- The right to freedom of expression: UN Special Rapporteur on freedom of opinion and expression report A/HRC/38/35 (2018); Joint Statement of international freedom of expression monitors on COVID-19 (March 2020).
The Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression addresses the regulation of user-generated online content.
The Joint Statement issued regarding Governmental promotion and protection of access to and free flow of information during the pandemic.
- The right to non-discrimination: International Convention on the Elimination of All Forms of Racial Discrimination (ICERD), Articles 1 and 4.
Article 1 of the ICERD defines racial discrimination.
Article 4 of the ICERD condemns propaganda and organisations that attempt to justify discrimination or are based on the idea of racial supremacism.
- Participation in public affairs and the right to vote: ICCPR Article 25.
- The right to remedy: General Comment No. 31, Human Rights Committee (2004) (General Comment 31); UNGPs, Principle 22.
The General Comment discusses the nature of the general legal obligation imposed on State Parties to the Covenant.
Guiding Principle 22 states that where business enterprises identify that they have caused or contributed to adverse impacts, they should provide for or cooperate in their remediation through legitimate processes.
Meta’s obligations to avoid political bias and false positives in its cross-check system
Recommendation for the Board: The Board must urge Meta to adopt and implement the Santa Clara Principles on Transparency and Accountability to ensure that it is open about risks to user rights when there is involvement from the State in content moderation. Additionally, the Board must ask Meta to undertake a diversity and human rights audit of its existing policy teams, and commit to regular cultural training for its staff. Finally, the Board must investigate the potential conflicts of interest that arise when Meta’s policy team has any sort of nexus with political parties, and how that might impact content moderation.
Explanation: For the cross-check system to be free from biases, it is important for Meta to be transparent with the Board regarding the rationale, standards, and processes of cross-check review, and to report on the relative error rates of determinations made through cross-check compared with ordinary enforcement procedures. It also needs to disclose to the Board in which particular situations it uses the system and in which it does not. Principle 4 under the Foundational Principles of the Santa Clara Principles on Transparency and Accountability in Content Moderation encourages companies to recognise the risk to user rights when there is involvement from the State in processes of content moderation, and asks companies to make users aware that: a) a state actor has requested or participated in an action on their content/account, and b) the company believes that the action was required under the relevant law. Users should be allowed access to any rules or policies, and any formal or informal working relationships that the company holds with state actors in terms of content regulation, the process of flagging accounts/content, and state requests for action.
The Board must consider that an erroneous lack of action (false negatives) might not always be a flaw of the system, but a larger, structural issue regarding how policy teams at Meta function. As previous disclosures have shown, the contours of what sort of violating content gets to stay up on the platform have been ideologically and politically coloured, as policy executives have prioritized the company’s ‘business interests’ over social harmony. In this light, it is not sufficient to simply propose better transparency and accountability measures for Meta to adopt within its content moderation processes to avoid political bias. Rather, the Board’s recommendations must focus on the structural aspect of the human moderators and policy teams behind these processes. The Board must ask Meta to a) urgently undertake a diversity and human rights audit of its existing team and its hiring processes, and b) commit to regular training to ensure that its policy staff are culturally literate in the socio-political regions they work in. Further, the Board must seriously investigate the potential conflicts of interest that arise when regional policy teams of Meta, with a nexus to political parties, are also tasked with regulating content from representatives of these parties, and how that impacts the moderation processes at large.
Finally, in case decision 2021-001-FB-FBR, the Board made a number of recommendations to Meta which must be implemented in the current situation, including: a) considering the political context while looking at potential risks, b) employment of specialized staff in content moderation while evaluating political speech from influential users, c) familiarity with the political and linguistic context d) absence of any interference and undue influence, e) public explanation regarding the rules Meta uses when imposing sanctions against influential users and f) the sanctions being time-bound.
Transparency of the cross-check system
Recommendation for the Board: The Board must urge Meta to adopt and implement the Santa Clara Principles on Transparency and Accountability to increase the transparency of its cross-check system.
Explanation: There are ways in which Meta can increase the transparency of not only the cross-check system, but the content moderation process in general. The following recommendations draw from The Santa Clara Principles and the Board’s own previous decisions:
Considering Principle 2 of the Santa Clara Principles: Understandable Rules and Policies, Meta should ensure that the policies and rules governing moderation of content and user behaviors on Facebook are clear, easily understandable, and available in the languages in which the user operates.
Drawing from Principle 5 on Integrity and Explainability and from the Board’s recommendations in case decision 2021-001-FB-FBR, which advise Meta to “Provide users with accessible information on how many violations, strikes and penalties have been assessed against them, and the consequences that will follow future violations”, Meta should be able to explain content moderation decisions to users in all cases: when content is under review, when the decision has been made to leave the content up, or when it is taken down. We recommend that Meta keep a publicly accessible running tally of the number of moderation decisions made on a piece of content to date, with their explanations. This would allow third parties (like journalists, activists, researchers, and the OSB) to hold Facebook accountable when it does not follow its own policies, as has previously been the case.
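A minimal sketch of what such a per-content running tally of moderation decisions could look like as a data structure; the field names and values are assumptions for illustration, not Meta’s actual schema.

```python
# Hypothetical append-only log of moderation decisions, keyed by content ID.
from collections import defaultdict
from datetime import datetime, timezone

decision_log = defaultdict(list)

def record_decision(content_id, action, reason):
    """Append a decision so the full moderation history of a post stays auditable."""
    decision_log[content_id].append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "action": action,   # e.g. "leave up", "take down", "escalate to cross-check"
        "reason": reason,   # plain-language explanation that could be shown to the user
    })

record_decision("post-123", "escalate to cross-check", "high-visibility account, possible incitement")
record_decision("post-123", "take down", "violates violence and incitement policy")
print(len(decision_log["post-123"]), "decisions recorded for post-123")
```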
In the same case decision, the Board has also previously recommended that Meta “Produce more information to help users understand and evaluate the process and criteria for applying the newsworthiness allowance, including how it applies to influential accounts. The company should also clearly explain the rationale, standards and processes of the cross-check review, and report on the relative error rates of determinations made through cross-checking compared with ordinary enforcement procedures.” Thus, Meta should publicly explain the cross check system in detail with examples, and make public the list of attributes that qualify a piece of content for secondary review.
The Operational Principles further provide actionable steps that Meta can take to improve the transparency of its content moderation systems. Drawing from Principle 2: Notice and Principle 3: Appeals, Meta should make a satisfactory appeals process available to users, whether for decisions to leave content up or to take it down. The appeals process should be handled by context-aware teams. Meta should then publish the results of the cross-check system and the appeals processes as part of its transparency reports, including data such as total content actioned, rates of success in appeals and the cross-check process, and decisions overturned and upheld, which would also satisfy the first Operational Principle: Numbers.
Resources needed to improve the system for users and entities who do not post in English
Recommendations for the Board: The Board must urge Meta to urgently invest resources to expand its content moderation services into the local contexts in which the company operates and to invest in training data for local languages.
Explanation: The cross-check system is not a fundamentally different problem from content moderation. It has been shown time and again that Meta’s handling of content from non-Western, non-English-language contexts is severely lacking: content hosted on the platform has been used to inflame existing tensions in developing countries, promote religious hatred in India and genocide in Myanmar, and support human traffickers and drug cartels, even after these issues were identified.
There is an urgent need to invest resources to expand Meta’s content moderation services into the local contexts in which the company operates. The company should make all policy and rule documents available in the languages of its users; invest in creating automated tools that are capable of flagging content that is not posted in English; and add people familiar with the local contexts to provide context-aware second-level reviews. The Facebook Files show that, even according to the company’s own engineers, automated content moderation is still not very effective in identifying hate speech and other harmful content. Meta should focus on hiring, training, and retaining human moderators who have knowledge of local contexts. Bias training for all content moderators, but especially those who will participate in the second-level reviews in the cross-check system, is also extremely important to ensure acceptable decisions.
Additionally, in keeping with Meta’s human rights commitments, the company should develop and publish a policy for responding to human rights violations when they are pointed out by activists, researchers, journalists and employees as a matter of due process. It should not wait for a negative news cycle to stir them into action as it seems to have done in previous cases.
Benefits and limitations of automated technologies
Meta recently changed its moderation practice to use technology to prioritize content for human reviewers based on a severity index. Facebook has not specified the technology it uses to prioritize high-severity content, but its research record shows that it uses a host of automated frameworks and tools to detect violating content, including image recognition tools, object detection tools, natural language processing models, speech models, and reasoning models. One such model is Whole Post Integrity Embeddings (“WPIE”), which can judge various elements in a given post (caption, comments, OCR, image, etc.) to work out the context and content of the post. Facebook also uses image matching models (SimSearchNet++) that are trained to match variations of an image with a high degree of precision and improved recall, and multilingual masked language models for cross-lingual understanding, such as XLM-R, that can identify hate speech and other policy-violating content across a wide range of languages. More recently, Facebook introduced its machine translation model, M2M-100, which aims to translate directly between any pair of 100 languages.
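Meta’s internal models are not public, but the flavour of multilingual classification described above can be sketched with the open-source Hugging Face transformers library. The checkpoint name below is a placeholder for any XLM-R-style classifier fine-tuned on hate-speech data; it is an assumption for illustration, not Meta’s tooling.

```python
# Illustrative multilingual policy-violation scoring using an XLM-R-style classifier.
# "example-org/xlmr-hate-speech" is a hypothetical checkpoint name.
from transformers import pipeline

classifier = pipeline("text-classification", model="example-org/xlmr-hate-speech")

posts = [
    "An example post in English",
    "हिंदी में एक उदाहरण पोस्ट",  # cross-lingual coverage is the point of XLM-R-style models
]

for post in posts:
    prediction = classifier(post)[0]  # e.g. {"label": "hate", "score": 0.93}
    if prediction["label"] == "hate" and prediction["score"] > 0.9:
        print("queue for human review:", post)
```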
Despite the advances in this field, such automated tools have inherent limitations. Experts have repeatedly maintained that AI will get better at understanding context but will not replace human moderators for the foreseeable future. One instance where these limitations were exposed was during the COVID-19 pandemic, when Facebook sent its human moderators home: the number of removals flagged as hate speech on its platform more than doubled to 22.5 million in the second quarter of 2020, but the number of successful content appeals dropped to 12,600 from 2.3 million in the first three months of 2020.
The Facebook Files show that Meta’s AI cannot consistently identify first-person shooting videos, racist rants and even the difference between cockfighting and car crashes. Its automated systems are only capable of removing posts that generate just 3% to 5% of the views of hate speech on the platform and 0.6% of all content that violates Meta’s policies against violence and incitement. As such, it is difficult to accept the company’s claim that nearly all of the hate speech it takes down was discovered by AI before it was reported by users.
However, the benefits of such technology cannot be discounted, especially when one considers automated technology as a way of reducing trauma for human moderators. Using AI for prioritizing content for review can turn out to be effective for human moderators as it can increase their efficiency and reduce harmful effects of content moderation on them. Additionally, it can also limit the exposure of harmful content to internet users. Moreover, AI can also reduce the impact of harmful content on human moderators by allocating content to moderators on the basis of their exposure history. Theoretically, if the company’s claims are to be believed, using automated technology for prioritizing content for review can help to improve the mental health of Facebook’s human moderators.
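As a rough sketch of the two ideas in this paragraph (prioritising content for review by severity, and allocating it with moderators’ exposure history in mind), the following illustration uses invented severity scores and exposure limits; it is not a description of Meta’s actual systems.

```python
# Toy model of severity-based review prioritisation with exposure-aware allocation.
import heapq

# Items awaiting review: (negative severity, content_id), so the highest severity pops first.
review_queue = []
for content_id, severity in [("p1", 0.40), ("p2", 0.97), ("p3", 0.75)]:
    heapq.heappush(review_queue, (-severity, content_id))

# How much harmful content each moderator has already reviewed this shift (invented numbers).
exposure = {"moderator_a": 1, "moderator_b": 4}
EXPOSURE_LIMIT = 5

while review_queue:
    neg_severity, content_id = heapq.heappop(review_queue)
    # Route each item to the least-exposed moderator who is still under the limit.
    eligible = [m for m, seen in exposure.items() if seen < EXPOSURE_LIMIT]
    if not eligible:
        print("all moderators at exposure limit; deferring", content_id)
        continue
    moderator = min(eligible, key=exposure.get)
    exposure[moderator] += 1
    print(f"{content_id} (severity {-neg_severity:.2f}) -> {moderator}")
```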
Click to download the file here.
Notes for India as the digital trade juggernaut rolls on
The article by Arindrajit Basu was published in the Hindu on February 8, 2022
Despite the cancellation of the Twelfth Ministerial Conference (MC12) of the World Trade Organization (WTO) late last year (scheduled date, November 30, 2021-December 3, 2021) due to COVID-19, digital trade negotiations continue their ambitious march forward. On December 14, Australia, Japan, and Singapore, co-convenors of the plurilateral Joint Statement Initiative (JSI) on e-commerce, welcomed the ‘substantial progress’ made at the talks over the past three years and stated that they expected a convergence on more issues by the end of 2022.
Holding out
But therein lies the rub: even though JSI members account for over 90% of global trade, and the initiative welcomes newer entrants, over half of WTO members (largely from the developing world) continue to opt out of these negotiations. They fear being arm-twisted into accepting global rules that could etiolate domestic policymaking and economic growth. India and South Africa have led the resistance and been the JSI’s most vocal critics. India has thus far resisted pressures from the developed world to jump onto the JSI bandwagon, largely through coherent legal argumentation against the JSI and a long-term developmental vision. Yet, given the increasingly fragmented global trading landscape and the rising importance of the global digital economy, can India tailor its engagement with the WTO to better accommodate its economic and geopolitical interests?
Global rules on digital trade
The WTO emerged in a largely analogue world in 1994. It was only at the Second Ministerial Conference (1998) that members agreed on core rules for e-commerce regulation. A temporary moratorium was imposed on customs duties relating to the electronic transmission of goods and services. This moratorium has been renewed continuously, to consistent opposition from India and South Africa. They argue that the moratorium imposes significant costs on developing countries as they are unable to benefit from the revenue customs duties would bring.
The members also agreed to set up a work programme on e-commerce across four issue areas at the General Council: goods, services, intellectual property, and development. Frustrated by a lack of progress in the two decades that followed, 70 members brokered the JSI in December 2017 to initiate exploratory work on the trade-related aspects of e-commerce. Several countries, including developing countries, signed up in 2019 despite holding contrary views to most JSI members on key issues. Surprise entrants, China and Indonesia, argued that they sought to shape the rules from within the initiative rather than sitting on the sidelines.
India and South Africa have rightly pointed out that the JSI contravenes the WTO’s consensus-based framework, where every member has a voice and vote regardless of economic standing. Unlike the General Council Work Programme, which India and South Africa have attempted to revitalise in the past year, the JSI does not include all WTO members. For the process to be legally valid, the initiative must either build consensus or negotiate a plurilateral agreement outside the aegis of the WTO.
India and South Africa’s positioning strikes a chord at the heart of the global trading regime: how to balance the sovereign right of states to shape domestic policy with international obligations that would enable them to reap the benefits of a global trading system.
A contested regime
There are several issues upon which the developed and developing worlds disagree. One such issue concerns international rules relating to the free flow of data across borders. Several countries, both within and outside the JSI, have imposed data localisation mandates that compel corporations to store and process data within territorial borders. This is a key policy priority for India. Several payment card companies, including Mastercard and American Express, were prohibited from issuing new cards for failure to comply with a 2018 financial data localisation directive from the Reserve Bank of India. The Joint Parliamentary Committee (JPC) on data protection has recommended stringent localisation measures for sensitive personal data and critical personal data in India’s data protection legislation. However, for nations and industries in the developed world looking to access new digital markets, these restrictions impose unnecessary compliance costs, thus arguably hampering innovation and supposedly amounting to unfair protectionism.
There is a similar disagreement regarding domestic laws that mandate the disclosure of source codes. Developed countries believe that this hampers innovation, whereas developing countries believe it is essential for algorithmic transparency and fairness — which was another key recommendation of the JPC report in December 2021.
India’s choices
India’s global position is reinforced through narrative building by political and industrial leaders alike. Data sovereignty is championed as a means of resisting ‘data colonialism’, the exploitative economic practices and intensive lobbying of Silicon Valley companies. Policymaking for India’s digital economy is at a critical juncture. Surveillance reform, personal data protection, algorithmic governance, and non-personal data regulation must be galvanised through evidenced insights,and work for individuals, communities, and aspiring local businesses — not just established larger players.
Hastily signing trading obligations could reduce the space available to frame appropriate policy. But sitting out trade negotiations will mean that the digital trade juggernaut will continue unchecked, through mega-regional trading agreements such as the Regional Comprehensive Economic Partnership (RCEP) and the Comprehensive and Progressive Agreement for Trans-Pacific Partnership (CPTPP). India could risk becoming an unwitting standard-taker in an already fragmented trading regime and lose out on opportunities to shape these rules instead.
Alternatives exist; negotiations need not mean compromise. For example, exceptions to digital trade rules, such as ‘legitimate public policy objective’ or ‘essential security interests’, could be negotiated to preserve policymaking where needed while still acquiescing to the larger agreement. Further, any outcome need not be an all-or-nothing arrangement. Taking a cue from the Digital Economy Partnership Agreement (DEPA) between Singapore, Chile, and New Zealand, India can push for a framework where countries can pick and choose modules with which they wish to comply. These combinations can be amassed incrementally as emerging economies such as India work through domestic regulations.
Despite its failings, the WTO plays a critical role in global governance and is vital to India’s strategic interests. Negotiating without surrendering domestic policy-making holds the key to India’s digital future.
Arindrajit Basu is Research Lead at the Centre for Internet and Society, India. The views expressed are personal. The author would like to thank The Clean Copy for edits on a draft of this article.
CIS Comments and Recommendations on the Data Protection Bill, 2021
After nearly two years of deliberations and a few changes in its composition, the Joint Parliamentary Committee (JPC), on 17 December 2021, submitted its report on the Personal Data Protection Bill, 2019 (2019 Bill). The report also contains a new version of the law titled the Data Protection Bill, 2021 (2021 Bill). Although there were no major revisions from the previous version other than the inclusion of all data under the ambit of the bill, some provisions were amended.
This document is a revised version of the comments we provided on the 2019 Bill on 20 February 2020, with updates based on the amendments in the 2021 Bill. Through this document we aim to shed light on the issues that we highlighted in our previous comments that have not yet been addressed, along with additional comments on sections that have become more relevant since the pandemic began. In several instances our previous comments have either not been addressed or only partially been addressed; in such instances, we reiterate them.
These general comments should be read in conjunction with our previous recommendations for the reader to get a comprehensive overview of what has changed from the previous version and what has remained the same. This document can also be read while referencing the new Data Protection Bill 2021 and the JPC’s report to understand some of the significant provisions of the bill.
Read on to access the comments | Review and editing by Arindrajit Basu. Copy editing: The Clean Copy; Shared under Creative Commons Attribution 4.0 International license
How Function Of State May Limit Informed Consent: Examining Clause 12 Of The Data Protection Bill
The blog post was published in Medianama on February 18, 2022. This is the first of a two-part series by Amber Sinha.
In 2018, hours after the Committee of Experts led by Justice Srikrishna released its report and draft bill, I wrote an opinion piece providing my quick take on what was good and bad about the bill. A section of my analysis focused on Clause 12 (then Clause 13), which provides for non-consensual processing of personal data for state functions. I called this provision a ‘carte blanche’ which effectively allowed the state to process a citizen’s data for practically all interactions between them without having to deal with the inconvenience of seeking consent. My former colleague, Pranesh Prakash, pointed out that this was not a correct interpretation of the provision, as I had missed the significance of the word ‘necessary’, which was inserted to act as a check on the powers of the state. He also pointed out, correctly, that in its construction this provision is equivalent to the position in the European General Data Protection Regulation (Article 6(1)(e)), and is perhaps even more restrictive.
While I agree with what Pranesh says above (his claims are largely factual, and there can be no basis for disagreement), my view of Clause 12 has not changed. While Clause 35 has been a focus of considerable discourse and analysis, for good reason, I continue to believe that Clause 12 remains among the most dangerous provisions of this bill, and I will try to unpack why here.
The Data Protection Bill 2021 has a chapter on the grounds for processing personal data, and one of those grounds is consent by the individual. The rest of the grounds deal with various situations in which personal data can be processed without seeking consent from the individual. Clause 12 lays down one of the grounds. It allows the state to process data without the consent of the individual in the following cases —
a) where it is necessary to respond to a medical emergency
b) where it is necessary for state to provide a service or benefit to the individual
c) where it is necessary for the state to issue any certification, licence or permit
d) where it is necessary under any central or state legislation, or to comply with a judicial order
e) where it is necessary for any measures during an epidemic, outbreak of disease, or any other threat to public health
f) where it is necessary for safety procedures during disaster or breakdown of public order
In order to carry out (b) and (c), there is also the added requirement that the state function must be authorised by law.
Twin restrictions in Clause 12
The use of the words ‘necessary’ and ‘authorised by law’ is intended to pose checks on the powers of the state. The first restriction seeks to limit actions to only those cases where the processing of personal data is necessary for the exercise of the state function. This should mean that if the state function can be exercised without non-consensual processing of personal data, then it must be exercised in that manner. Therefore, while acting under this provision, the state should only process my data if it needs to do so to provide me with the service or benefit. The second restriction means that this would apply only to those state functions which are authorised by law, meaning only those functions which are supported by validly enacted legislation.
What we need to keep in mind regarding Clause 12 is that the requirement of ‘authorised by law’ does not mean that legislation must provide for that specific kind of data processing. It simply means that the larger state function must have legal backing. The danger is how these provisions may be used with broad mandates. If the activity in question is non-consensual collection and processing of, say, demographic data of citizens to create state resident hubs which will assist in the provision of services such as healthcare, housing, and other welfare functions; all that may be required is that the welfare functions are authorised by law.
Scope of privacy under Puttaswamy
It would be worthwhile, at this point, to delve into the nature of restrictions that the landmark Puttaswamy judgement discussed that the state can impose on privacy. The judgement clearly identifies the principles of informed consent and purpose limitation as central to informational privacy. As discussed repeatedly during the course of the hearings and in the judgement, privacy, like any other fundamental right, is not absolute. However, restrictions on the right must be reasonable in nature. In the case of Clause 12, the restrictions on privacy in the form of denial of informed consent need to be tested against a constitutional standard. In Puttaswamy, the bench was not required to provide a legal test to determine the extent and scope of the right to privacy, but they do provide sufficient guidance for us to contemplate how the limits and scope of the constitutional right to privacy could be determined in future cases.
The Puttaswamy judgement clearly states that “the right to privacy is protected as an intrinsic part of the right to life and personal liberty under Article 21 and as a part of the freedoms guaranteed by Part III of the Constitution.” By locating the right not just in Article 21 but also in the entirety of Part III, the bench clearly requires that “the drill of various Articles to which the right relates must be scrupulously followed.” This means that where transgressions on privacy relate to different provisions in Part III, the different tests under those provisions will apply along with those in Article 21. For instance, where the restrictions relate to personal freedoms, the tests under both Article 19 (right to freedoms) and Article 21 (right to life and liberty) will apply.
In the case of Clause 12, the three tests laid down by Justice Chandrachud are most operative —
a) the existence of a “law”
b) a “legitimate State interest”
c) the requirement of “proportionality”.
The first test is already reflected in the use of the phrase ‘authorised by law’ in Clause 12. The test under Article 21 would imply that the function of the state should not merely be authorised by law, but that the law, in both its substance and procedure, must be ‘fair, just and reasonable.’ The next test is that of ‘legitimate state interest’. In its report, the Joint Parliamentary Committee places emphasis on Justice Chandrachud’s use of “allocation of resources for human development” in an illustrative list of legitimate state interests. The report claims that the ground, functions of the state, thus satisfies the legitimate state interest. We do not dispute this claim.
Proportionality and Clause 12
It is the final test of ‘proportionality’ articulated by the Puttaswamy judgement, which is most operative in this context. Unlike Clauses 42 and 43 which include the twin tests of necessity and proportionality, the committee has chosen to only employ one ground in Clause 12. Proportionality is a commonly employed ground in European jurisprudence and common law countries such as Canada and South Africa, and it is also an integral part of Indian jurisprudence. As commonly understood, the proportionality test consists of three parts —
a) the limiting measures must be carefully designed, or rationally connected, to the objective
b) they must impair the right as little as possible
c) the effects of the limiting measures must not be so severe on individual or group rights that the legitimate state interest, albeit important, is outweighed by the abridgement of rights.
The first test is similar to the test of proximity under Article 19. The test of ‘necessity’ in Clause 12 must be viewed in this context. It must be remembered that the test of necessity is not limited only to situations where it may not be possible to obtain consent while providing benefits. My reservations about the sufficiency of this standard stem from observations made in the report, as well as from the relatively small body of jurisprudence on this term in Indian law.
The Srikrishna Report interestingly mentions three kinds of scenarios where consent should not be required — where it is not appropriate, necessary, or relevant for processing. The report goes on to give an example of inappropriateness: in cases where data is being gathered to provide welfare services, there is an imbalance of power between the citizen and the state. Having made that observation, the committee inexplicably arrives at the conclusion that the response to this problem is to further erode the power available to citizens by removing the need for consent altogether under Clause 12. There is limited jurisprudence on the standard of ‘necessity’ under Indian law. The Supreme Court has articulated this test as ‘having reasonable relation to the object the legislation has in view.’ If we look elsewhere for guidance on how to read ‘necessity’, the European Court of Human Rights in Handyside v United Kingdom held it to be neither “synonymous with indispensable” nor possessed of the “flexibility of such expressions as admissible, ordinary, useful, reasonable or desirable.” In short, there must be a pressing social need to satisfy this ground.
However, the other two tests of proportionality do not find a mention in Clause 12 at all. There is no requirement of ‘narrow tailoring’, that is, that the scope of non-consensual processing must impair the right as little as possible. It is doubly unfortunate that this test does not find a place, as, unlike necessity, ‘narrow tailoring’ is a test well understood in Indian law. This means that while there is a requirement to show that processing personal data was necessary to provide a service or benefit, there is no requirement to process data in a way that minimises non-consensual processing. The fear is that as long as there is a reasonable relation between processing the data and the object of the state function, state authorities, and other bodies authorised by them, need not bother with obtaining consent.
Similarly, the third test of proportionality is also not represented in this provision. That test balances the abridgement of individual rights against the legitimate state interest in question, and requires that the former must not outweigh the latter. The absence of this test leaves Clause 12 devoid of any such consideration: as long as the test of necessity is met under this law, the state need not weigh the denial of consent against the service or benefit that is being provided.
The collective implication of leaving out ‘proportionality’ from Clause 12 is to provide very wide discretionary powers to the state, by setting the threshold to circumvent informed consent extremely low. In the next post, I will demonstrate the ease with which Clause 12 can allow indiscriminate data sharing by focusing on the Indian government’s digital healthcare schemes.
Clause 12 Of The Data Protection Bill And Digital Healthcare: A Case Study
The blog post was published in Medianama on February 21, 2022. This is the second in a two-part series by Amber Sinha.
In the previous post, I looked at provisions on non-consensual data processing for state functions under the most recent version of recommendations by the Joint Parliamentary Committee on India’s Data Protection Bill (DPB). The true impact of these provisions can only be appreciated in light of ongoing policy developments and real-life implications.
To appreciate the significance of the dilutions in Clause 12, let us consider the Indian state’s range of schemes promoting digital healthcare. In July 2018, NITI Aayog, a central government policy think tank, released a strategy and approach paper (Strategy Paper) on the formulation of the National Health Stack, which envisions the creation of a federated, application programming interface (API)-enabled health information ecosystem. While the Ministry of Health and Family Welfare has focused on the creation of Electronic Health Records (EHR) Standards for India over the last few years, and has also identified a contractor for the creation of a centralised health information platform (IHIP), this Strategy Paper advocates a completely different approach, described as a Personal Health Records (PHR) framework. In 2021, the National Digital Health Mission (NDHM) was launched, under which a citizen has the option to obtain a digital health ID — a unique identifier that will carry all of a person’s health records.
A Stack Model for Big Data Ecosystem in Healthcare
A stack model, as envisaged in the Strategy Paper, consists of several layers of open APIs connected to each other, often tied together by a unique health identifier. The open nature of the APIs allows public and private actors to build solutions on top of them that are interoperable with all parts of the stack. It is, however, worth considering both this ‘openness’ and the role that the state plays in it.
Even though the APIs are themselves open, they are part of a pre-decided technological paradigm, built by private actors and blessed by the state. While innovators can build on it, the options available to them are limited by the information architecture created by the stack model. When such a technological paradigm is created for healthcare reform and health data, the stack model poses additional challenges. By tying the stack to a unique identity without appropriate processes in place for access control, information siloing, and encrypted communication, the stack model raises tremendous privacy and security concerns. The broad language under Clause 12 of the DPB needs to be looked at in this context.
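To make the architectural concern concrete, the sketch below models, in purely illustrative terms, how a stack-style registry keyed to a single health identifier behaves when no consent or purpose-limitation check stands between a requester and the records. All names here (HealthRecord, HealthRecordRegistry, fetch_records, the sample identifier) are hypothetical and do not correspond to any actual National Health Stack or NDHM API; this is a minimal sketch of the design worry, not an implementation.

```python
# Illustrative sketch only: hypothetical names, not an actual NHS/NDHM API.
from dataclasses import dataclass, field

@dataclass
class HealthRecord:
    health_id: str   # the unique health identifier tying records together
    provider: str    # hospital, lab, or insurer that generated the record
    payload: dict    # diagnosis, prescription, claim details, etc.

@dataclass
class HealthRecordRegistry:
    """A federated repository node exposing records over an open API."""
    records: list[HealthRecord] = field(default_factory=list)

    def fetch_records(self, health_id: str, requester: str) -> list[HealthRecord]:
        # With no consent, purpose-limitation, or access-control check here,
        # any registered requester (state agency or private partner) can pull
        # the entire record trail linked to one identifier.
        return [r for r in self.records if r.health_id == health_id]

registry = HealthRecordRegistry()
registry.records.append(HealthRecord("HID-0001", "district-hospital", {"dx": "type 2 diabetes"}))
registry.records.append(HealthRecord("HID-0001", "insurer-x", {"claim_amount": 45000}))

# A single identifier is enough to assemble a longitudinal profile of a person.
profile = registry.fetch_records("HID-0001", requester="analytics-partner")
```

The point of the sketch is simply that once every layer keys off the same identifier, the consent check has to be designed in explicitly; the openness of the API does not, by itself, supply it.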
Clause 12 allows non-consensual processing of personal data where it is necessary “for the performance of any function of the state authorised by law” in order to provide a service or benefit from the State. In the previous post, I had highlighted the import of the use of only ‘necessity’ to the exclusion of ‘proportionality’. Now, we need to consider its significance in light of the emerging digital healthcare apparatus being created by the state.
The National Health Stack and the National Digital Health Mission together envision an intricate system of data collection and exchange which, in a regulatory vacuum, would ensure unfettered access to sensitive healthcare data for both the state and private actors registered with the platforms. The Stack framework relies on repositories where data may be accessed from multiple nodes within the system. Importantly, the Strategy Paper also envisions health data fiduciaries to facilitate consent-driven interaction between entities that generate health data and entities that want to consume health records for delivering services to the individual. The cast of characters involves the National Health Authority; healthcare providers and insurers who access the National Health Electronic Registries, which unify data from different programmes such as the National Health Resource Repository (NHRR), the NIN database, NIC, and the Registry of Hospitals in Network of Insurance (ROHINI); and private actors such as Swasth and iSpirt, who assist the Mission as volunteers. The currency that government and private actors are interested in is data.
The promised benefits of healthcare data in an anonymised and aggregate form range from Disease Surveillance to Pharmacovigilance, as well as Health Schemes Management Systems and Nutrition Management — benefits that have only been more acutely emphasised during the pandemic. However, the pandemic has also normalised the sharing of sensitive healthcare data with a variety of actors, with little thought given to much-needed data minimisation practices.
The potential misuses of healthcare data include greater state surveillance and control, as well as predatory and discriminatory practices by private actors, who can rely on Clause 12 to do away with even the pretence of informed consent so long as the processing of data is deemed necessary by the state and its private sector partners to provide any service or benefit.
Subclause (e) in Clause 12, which was added in the last version of the Bill drafted by MeitY and has been retained by the JPC, allows processing wherever it is necessary for ‘any measures’ to provide medical treatment or health services during an epidemic, outbreak, or threat to public health. Yet again, the overly broad language used here is designed to ensure that the annoyances of informed consent can be easily brushed aside wherever the state intends to take any measure under any scheme related to public health.
Effectively, how does the framework under Clause 12 alter the consent and purpose limitation model? Data protection laws introduce an element of control by tying purpose limitation to consent: individuals provide consent for specified purposes, and data processors are required to respect that choice. Where there is no consent, the purposes of data processing are sought to be limited by the necessity principle in Clause 12. The state (or authorised parties) must be able to demonstrate necessity for the exercise of a state function, and data must be processed only for those purposes which flow out of this necessity. However, unlike the consent model, this provides an opportunity to keep reinventing purposes for different state functions.
In the absence of a data protection law, data collected by one agency is shared indiscriminately with other agencies and used for multiple purposes beyond the purpose for which it was collected. The consent and purpose limitation model would have addressed this issue. But, by having a low threshold for non-consensual processing under Clause 12, this form of data processing is effectively being legitimised.
Nothing to Kid About – Children's Data Under the New Data Protection Bill
The article was originally published in the Indian Journal of Law and Technology
For children, the internet has shifted from being a form of entertainment to a medium to connect with friends and seek knowledge and education. However, each time they access the internet, data about them and their choices are recorded, often without their knowledge, by companies and unknown third parties. The growth of EdTech apps in India has raised growing concerns regarding children's data privacy, prompting the creation of a self-regulatory body, the Indian EdTech Consortium. More recently, the Advertising Standards Council of India has also started looking at passing a draft regulation to keep a check on EdTech advertisements.
The Joint Parliamentary Committee (JPC), tasked with drafting and revising the Data Protection Bill, had to consider the number of changes that had happened after the release of the 2019 version of the Bill. The most significant change was the removal of the term “personal data” from the title of the Bill, in a move to create a comprehensive Data Protection Bill that covers both personal and non-personal data. Certain other provisions of the Bill also featured additions and removals. The JPC, in its revised version of the Bill, has removed an entire class of data fiduciaries – the guardian data fiduciary – which was tasked with greater responsibility for managing children's data. The JPC justified this removal by stating that consent from the guardian of the child is enough to meet the end for which personal data of children are processed by the data fiduciary. While thought has been given to how consent is given by the guardian on behalf of the child, there was no change to the age of consent for children in the Bill. Keeping the age of consent under the Bill the same as the age of majority required to enter into a contract under the Indian Contract Act, 1872 – 18 years – reveals the disconnect between the law and the ground reality of how children interact with the internet.
In the current state of affairs, where Indian children are navigating the digital world on their own, there is a need to look deeply at the processing of children’s data as well as at ways to ensure that children have information about consent and informational privacy. By placing the onus of granting consent on parents, the PDP Bill fails to look at how consent works in a privacy policy–based consent model and how this, in turn, harms children in the long run.
1. Age of Consent
By setting the age of consent at 18 years, the Data Protection Bill, 2021 brings all individuals under 18 years of age under one umbrella, without making a distinction between the internet usage of a 5-year-old child and a 16-year-old teenager. There is a need to look at the current internet usage habits of children and assess whether requiring parental consent is reasonable or even practical. It is also pertinent to note that the law in the offline world does make distinctions between age and maturity. For example, it has been highlighted that Section 82 of the Indian Penal Code, read with Section 83, states that any act by a child under the age of 12 years shall not be considered an offence, while the maturity of those aged between 12 and 18 years will be decided by the court (individuals between the ages of 16 and 18 years can also be tried as adults for heinous crimes). Similarly, child labour laws in the country allow children above the age of 14 years to work in non-hazardous industries, which would qualify them to fall under Section 13 of the Bill, which deals with employee data.
A 2019 report suggests that two-thirds of India’s internet users are in the 12–29 years age group, accounting for about 21.5% of the total internet usage in metro cities. With the emergence of cheaper phones equipped with faster processing and low internet data costs, children are no longer passive consumers of the internet. They have social media accounts and use several applications to interact with others and make purchases. There is a need to examine how children and teenagers interact with the internet as well as the practicality of requiring parental consent for the usage of applications.
Most applications that require age data simply ask users to type in their date of birth; it is not difficult for a child to input a date that makes it appear that they are over 18. In such cases they are still children, but the content presented to them will be content meant for adults, including material that might be disturbing or that involves alcohol and gambling. Additionally, in their privacy policies, applications sometimes state that they are not suited for, and are restricted from use by, users under 18. Here, data fiduciaries avoid liability by placing the onus on the user to declare their age and to properly read and understand the privacy policy.
Reservations about the age of consent under the Bill have also been highlighted by some members of the JPC through their dissenting opinions. MP Ritesh Pandey suggested that the age of consent should be reduced to 14 years, keeping the best interests of children in mind and to help children benefit from technological advances. Similarly, MP Manish Tiwari, in his dissenting opinion, suggested regulating data fiduciaries based on the type of content they provide or the data they collect.
2. How is the 2021 Bill Different from the 2019 Bill?
The 2019 draft of the Bill contained a class of data fiduciaries called guardian data fiduciaries – entities that operate commercial websites or online services directed at children, or which process large volumes of children’s personal data. This class of fiduciaries was barred from ‘profiling, tracking, behavioural monitoring of children, or direct targeted advertising at children’, as well as from undertaking any other processing of personal data that could cause significant harm to the child. As per Chapter IV, any violation could attract a penalty of up to INR 15 crore or 4% of the worldwide turnover of the data fiduciary for the preceding financial year, whichever is higher. Beyond these prohibitions, however, this separate class of data fiduciaries did not have any additional responsibilities. It was also unclear whether a data fiduciary that does not, by definition, fall within such a category would be allowed to engage in activities that could cause ‘significant harm’ to children.
The new Bill also does not provide any mechanism for age verification; it only lays down considerations to be taken into account when verification processes are undertaken. Furthermore, the JPC has suggested that the consent options available to the child when they attain the age of majority, i.e. 18 years, should be included within the rules framed by the Data Protection Authority instead of being an amendment to the Bill.
3. In the Absence of a Guardian Data Fiduciary
The 2018 and 2019 drafts of the PDP Bill consider a child to be any person below the age of 18 years. For a child to access online services, the data fiduciary must first verify the age of the child and obtain consent from their guardian. The Bill does not provide an explicit process for age verification apart from stating that regulations shall be drafted in this regard; the 2019 Bill states that the Data Protection Authority shall specify codes of practice in this matter. Taking best practices into account, there is a need for ‘user-friendly and privacy-protecting age verification techniques’ to encourage safe navigation across the internet. This will require looking at technological developments and different standards worldwide. There is also a need to hold companies accountable for the protection of children’s online privacy and for the harm that their algorithms cause children, and to make sure that such harms do not continue.
The JPC, in the 2021 version of the Bill, removed the provisions on guardian data fiduciaries, stating that there was no advantage in creating a separate class of data fiduciary. As per the JPC, even those data fiduciaries that did not fall within the said classification would need to comply with the rules pertaining to the personal data of children, i.e. with Section 16 of the Bill. Section 16 requires the data fiduciary to verify the child’s age and obtain consent from the parent or guardian, though the manner of age verification is left to be spelt out in regulations. Furthermore, since ‘significant data fiduciaries’ is an existing class, there is still a need to comply with rules related to data processing. The JPC also removed the phrases “in the best interests of, the child” and “is in the best interests of, the child” under sub-clause 16(1), reasoning that the entire Bill concerns the rights of the data principal and that the use of such terms dilutes the purpose of the legislation and could give way to manipulation by the data fiduciary.
Conclusion
Over the past two years, there has been a significant increase in applications targeted at children. There has been a proliferation of EdTech apps, which ideally should bear more responsibility as they process children's data. We recommend that, instead of creating a separate category, such fiduciaries collecting children's data or providing services to children be treated as ‘significant data fiduciaries’ that need to take up additional compliance measures.
Furthermore, any blanket prohibition on tracking children may obstruct safety measures that could be implemented by data fiduciaries. These fears are also growing in other jurisdictions, where such prohibitions are likely to restrict data fiduciaries from using software that looks out for content such as Child Sexual Abuse Material and online predatory behaviour. Additionally, concerning the age of consent under the Bill, the JPC could look at international best practices and come up with ways to ensure that children can use the internet and have rights over their data, which would enable them to grow up with more awareness about data protection and privacy. One example to look at could be the Children's Online Privacy Protection Rule (COPPA) in the US, whose rules apply to operators of websites and online services directed at children under 13 that collect personal information from them, as well as to operators of general-audience services that have actual knowledge that they collect personal information from such children. A combination of this system and the significant data fiduciary classification could be one possible way to ensure that children’s data and privacy are protected online.
The authors are researchers at the Centre for Internet and Society and thank their colleague Arindrajit Basu for his inputs.
Response to MeitY's India Digital Ecosystem Architecture 2.0 Comment Period
This submission presents a response by the Centre for Internet & Society (CIS) to MeitY's India Digital Ecosystem Architecture 2.0 Comment Period (hereinafter, the “Consultation”) released in February 2022. CIS appreciates MeitY's consultations, and is grateful for the opportunity to put forth its views and comments.
Read the response here
Cybernorms: Do they matter IRL (In Real Life): Event Report
During the first half of the year, multilateral forums including the United Nations made some progress in identifying norms, rules, and principles to guide responsible state behaviour in cyberspace, even though the need for political compromise between opposing geopolitical blocs stymied progress to a certain extent.
There is certainly a need to formulate more concrete rules and norms. However, at the same time, the international community must assess the extent to which existing norms are being implemented by states and non-state actors alike. Applying agreed norms to "real life" throws up challenges of interpretation and enforcement, to which the only long-term solution remains regular dialogue and exchange both between states and other stakeholders.
This was the thinking behind the session titled "Cybernorms: Do They Hold Up IRL (in Real Life)?", organised at RightsCon 2021 by four non-governmental organisations: the Association for Progressive Communications (APC), the Centre for Internet & Society (CIS), Global Partners Digital (GPD), and Research ICT Africa (RIA). Cyber norms do not work unless states and other actors call out violations of norms, actively observe and implement them, and hold each other accountable. As the organisers of the event, we devised hypothetical scenarios based on three real-life examples of large-scale incidents and engaged with discussants who sought to apply agreed cyber norms to them. We chose to create scenarios without referring to real states as we wanted the discussion to focus on the implementation and interpretation of norms rather than the specific political situation of each actor.
Through this interactive exercise involving an array of expert stakeholders (including academics, civil society, the technical community, and governments) and communities from different regions, we sought to answer whether and how the application of cyber norms can mitigate harms, especially to vulnerable communities, and identify possible gaps in current normative frameworks. For each scenario, we aimed to diagnose whether cyber norms have been violated, and if so, what could and should be done, by identifying the next steps that can be taken by all the stakeholders present. For each scenario, we highlight why we chose it, outline the main points of discussion, and articulate key takeaways for norm implementation and interpretation. We hope this exercise will feed into future conversations around both norm creation and enforcement by serving as a framework for guiding optimal norm enforcement.
Read the full report here
CIS Seminar Series
The first seminar of the series was held on 7th and 8th October on the theme of ‘Information Disorder: Mis-, Dis- and Malinformation’.
Theme for the Second Seminar (to be held online)
Moderating Data, Moderating Lives: Debating visions of (automated) content moderation in the contemporary
Artificial Intelligence (AI) and Machine Learning (ML) based approaches have become increasingly popular as “solutions” to curb the extent of mis-, dis-, and mal-information, hate speech, online violence, and harassment on social media. The pandemic and the ensuing work-from-home policies forced many platforms to shift to automated moderation, which further highlighted the inefficacy of existing models (Gillespie, 2020) in dealing with the surge in misinformation and harassment. These efforts, however, raise a range of interrelated concerns, such as freedom and regulation of speech in the privately public sphere of social media platforms; algorithmic governance, censorship, and surveillance; the relation between virality, hate, algorithmic design, and profits; and the social, political, and cultural implications of ordering social relations through the computational logics of AI/ML.
On the one hand, large-scale content moderation approaches (including automated AI/ML-based approaches) have been deemed “necessary” given the enormity of data generated (Gillespie, 2020); on the other hand, they have been regarded as “technological fixes” offered by Silicon Valley (Morozov, 2013), or as “tyrannical” insofar as they erode existing democratic measures (Harari, 2018). Alternatively, decolonial, feminist, and postcolonial approaches insist on designing AI/ML models that centre the voices of those excluded, in order to sustain and further civic spaces on social media (Siapera, 2022).
From the global south perspective, issues around content moderation foreground the hierarchies inbuilt in the existing knowledge infrastructures. First, platforms remain unwilling to moderate content in under-resourced languages of the global south citing technological difficulties. Second, given the scale and reach of social media platforms and inefficient moderation models, the work is outsourced to workers in the global south who are meant to do the dirty work of scavenging content off these platforms for the global north. Such concerns allow us to interrogate the techno-solutionist approaches as well as their critiques situated in the global north. These realities demand that we articulate a different relationship with AI/ML while also being critical of AI/ML as an instrument of social empowerment for those at the “bottom of the pyramid” (Arora, 2016).
The seminar invites scholars interested in articulating nuanced responses to content moderation that take into account the harms perpetrated by algorithmic governance of social relations and irresponsible intermediaries, while being cognizant of the harmful effects of mis-, dis-, and mal-information, hate speech, online violence, and harassment on social media.
We invite abstract submissions that respond to these complexities vis-a-vis content moderation models or propose provocations regarding automated moderation models and their in/efficacy in furthering egalitarian relationships on social media, especially in the global south.
Submissions can reflect on the following themes using legal, policy, social, cultural and political approaches. Also, the list is not exhaustive and abstracts addressing other ancillary concerns are most welcome:
- Metaphors of (content) moderation: mediating utopia, dystopia, scepticism surrounding AI/ML approaches to moderation.
- From toxic to healthy, from purity to impurity: Interrogating gendered, racist, colonial tropes used to legitimize content moderation
- Negotiating the link between content moderation, censorship and surveillance in the global south
- Whose values decide what is and is not harmful?
- Challenges of building moderation models for under-resourced languages.
- Content moderation, algorithmic governance and social relations.
- Communicating algorithmic governance on social media to the not so “tech-savvy” among us.
- Speculative horizons of content moderation and the future of social relations on the internet.
- Scavenging abuse on social media: Immaterial/invisible labour for making for-profit platforms safer to use.
- Do different platforms moderate differently? Interrogating content moderation on diverse social media platforms, and multimedia content.
- What should and should not be automated? Understanding prevalence of irony, sarcasm, humour, explicit language as counterspeech.
- Maybe we should not automate: Alternative, bottom-up approaches to content moderation
Seminar Format
We are happy to welcome abstracts for one of two tracks:
Working paper presentation
A working paper presentation would ideally involve a working draft that is presented for about 15 minutes followed by feedback from workshop participants. Abstracts for this track should be 600-800 words in length with clear research questions, methodology, and questions for discussion at the seminar. Ideally, for this track, authors should be able to submit a draft paper two weeks before the conference for circulation to participants.
Coffee-shop conversations
In contrast to the formal paper presentation format, the point of the coffee-shop conversations is to enable an informal space for presentation and discussion of ideas. Simply put, it is an opportunity for researchers to “think out loud” and get feedback on future research agendas. Provocations for this should be 100-150 words containing a short description of the idea you want to discuss.
We will try to accommodate as many abstracts as possible given time constraints. We welcome submissions from students and early career researchers, especially those from under-represented communities.
All discussions will be private and conducted under the Chatham House Rule. Drafts will only be circulated among registered participants.
Please send your abstracts to [email protected].
Timeline
- Abstract Submission Deadline: 18th April
- Results of Abstract review: 25th April
- Full submissions (of draft papers): 25th May
- Seminar date: Tentative 31st May
References
Arora, P. (2016). Bottom of the Data Pyramid: Big Data and the Global South. International Journal of Communication, 10(0), 19.
Gillespie, T. (2020). Content moderation, AI, and the question of scale. Big Data & Society, 7(2), 2053951720943234. https://doi.org/10.1177/2053951720943234
Harari, Y. N. (2018, August 30). Why Technology Favors Tyranny. The Atlantic. https://www.theatlantic.com/magazine/archive/2018/10/yuval-noah-harari-technology-tyranny/568330/
Morozov, E. (2013). To save everything, click here: The folly of technological solutionism (First edition). PublicAffairs.
Siapera, E. (2022). AI Content Moderation, Racism and (de)Coloniality. International Journal of Bullying Prevention, 4(1), 55–65. https://doi.org/10.1007/s42380-021-00105-7
Personal Data Protection Bill must examine data collection practices that emerged during pandemic
The article by Shweta Mohandas and Anamika Kundu was originally published by news nine on November 29, 2021.
The Personal Data Protection Bill (PDP Bill) is expected to be introduced during the winter session of Parliament, and the report of the Joint Parliamentary Committee (JPC) was adopted by the committee on Monday. The report of the JPC comes after almost two years of deliberation and secrecy over what the final version of the Personal Data Protection Bill will look like. Since the publication of the 2019 version of the PDP Bill, the Covid-19 pandemic and the accompanying public safety measures have opened the way for a number of new organisations and reasons to collect personal data that did not exist in 2019. Hence, along with the changes suggested by multiple civil society organisations and the dissent notes submitted by members of the JPC, the new version of the PDP Bill must also look at how data processing has changed over the span of two years.
Concerns with the bill
At the outset, there are certain parts of the PDP Bill that need to be revised in order to uphold the spirit of privacy and individual autonomy laid out in the Puttaswamy judgement. The two sets of provisions that need to be brought in line with the privacy judgement are those that allow for non-consensual processing of data by the government and by employers. The PDP Bill in its current form provides wide-ranging exemptions that allow government agencies to process citizens' data in order to fulfil their responsibilities.
In the 2018 version of the bill, drafted by the Justice Srikrishna Committee, exemptions granted to the State with regard to the processing of data were subject to a four-pronged test, which required the processing to be (i) authorised by law; (ii) in accordance with the procedure laid down by the law; (iii) necessary; and (iv) proportionate to the interests being achieved. This four-pronged test was in line with the principles laid down by the Supreme Court in the Puttaswamy judgement. The 2019 version of the PDP Bill has diluted this principle by retaining only the 'necessity' requirement and removing the other requirements, which is not in consonance with the test laid down by the Supreme Court in Puttaswamy.
Section 35 was also widely discussed in the panel meetings, where members argued for the removal of 'public order' as a ground for exemption. The panel also insisted on 'judicial or parliamentary oversight' for granting such exemptions. The final report did not accept these suggestions, citing the need to balance national security, individual liberty, and privacy. There ought to be prior judicial review of the written order exempting a governmental agency from any provisions of the bill; allowing the government to claim an exemption whenever it is satisfied that doing so is "necessary or expedient" is open to misuse.
Another clause that sidelines the data principal concerns employee data. Section 13 of the current version of the bill provides the employer with leeway to process employee data (other than sensitive personal data) without consent on two grounds: when consent is not appropriate, or when obtaining consent would involve disproportionate effort on the part of the employer.
Such personal data can be collected only for recruitment, termination, attendance, provision of any service or benefit, and assessment of performance. This covers almost all of the activities that require data about the employee. Although the 2019 version of the bill excludes non-consensual collection of sensitive personal data (a safeguard that was missing in the 2018 version of the bill), there is still a lot of scope to improve this provision and give employees further rights over their data. At the outset, the bill does not define employee and employer, which could result in confusion, as there is no single definition of these terms across Indian labour laws.
Additionally, the bill distinguishes between the employee and the consumer, where the consumer of the same company or service has a greater right to their data than an employee: while the consumer as a data principal has the option to use any other product or service and the right to withdraw consent at any time, for an employee the consequence of refusing or withdrawing consent could be termination of employment. It is understood that there is a requirement for employee data to be collected, and that consent does not work the same way as it does in the case of a consumer.
The bill could ensure that employers have some responsibility towards the data they collect from employees, such as ensuring that the data is used only for the purpose for which it was collected, that employees know how long their data will be retained, and that they know whether the data is being processed by third parties. It is also worth mentioning that the Indian government is India's largest employer, spanning a variety of agencies and public enterprises.
Concerns highlighted by JPC Members
Turning to the members of the JPC who moved dissent notes, specifically with regard to governmental exemptions: Jairam Ramesh filed a dissent note, and many other opposition members followed suit. While Jairam Ramesh praised the JPC's functioning, he disagreed with certain aspects of the report. According to him, the 2019 bill is designed in a manner where the right to privacy is given importance only in the case of private activities. He raised concerns regarding the unbridled powers given to the government to exempt itself from any of the provisions.
The amendment suggested by him would require parliamentary approval before any exemption takes effect. He also added that Section 12 of the bill, which provides certain scenarios in which consent is not needed for the processing of personal data, should have been made 'less sweeping'. Gaurav Gogoi's note stated that the exemptions would create a surveillance state, likewise criticised Sections 12 and 35 of the bill, and added that there ought to be parliamentary oversight of the exemptions provided in the bill.
On the same issue, Congress leader Manish Tiwari noted that the bill creates 'parallel universes' – one for the private sector, which needs to be compliant, and the other for the State, which can exempt itself. He opposed the entire bill, stating that it suffers from an "inherent design flaw". He raised specific objections to 37 clauses and stated that any blanket exemption for the state goes against the Puttaswamy judgement.
In their joint dissent note, Derek O'Brien and Mahua Moitra said that there is a lack of adequate safeguards to protect data principals' privacy, and that there was a lack of time and opportunity for stakeholder consultations. They also pointed out that the independence of the DPA will cease to exist under the present provision allowing the government the power to choose its members and chairperson. Amar Patnaik objected to the absence of state-level authorities in the bill; without such bodies, he says, there would be federal override.
Conclusion
While a number of issues were highlighted by civil society, members of the JPC, and the media, the new version of the bill also needs to take into account the shifts that have taken place in view of the pandemic. It should take into consideration the new data collection practices that have emerged during the pandemic, be comprehensive, and leave very few provisions to be decided later by the Rules.
Comments to the draft Motor Vehicle Aggregators Scheme, 2021
CIS, established in Bengaluru in 2008 as a non-profit organisation, undertakes interdisciplinary research on internet and digital technologies from public policy and academic perspectives. Through its diverse initiatives, CIS explores, intervenes in, and advances contemporary discourse and regulatory practices around internet, technology, and society in India, and elsewhere.
CIS is grateful for the opportunity to submit its comments to the draft Scheme. Please find below our thematically organised comments.
Click here to read more.
Decoding India’s Central Bank Digital Currency (CBDC)
In her budget speech presented in the Parliament on 1 February 2022, the Finance Minister of India – Nirmala Sitharaman – announced that India will launch its own Central Bank Digital Currency (CBDC) from the financial year 2022–23. The lack of information regarding the Indian CBDC project has resulted in limited discussions in the public sphere. This article is an attempt to briefly discuss the basics of CBDCs such as the definition, necessity, risks, models, and associated technologies so as to shed more light on India’s CBDC project.
1. What is a CBDC?
Before delving into the various aspects of a CBDC, we must first define it. A CBDC in its simplest form has been described by the RBI as “the same as currency issued by a central bank but [which] takes a different form than paper (or polymer). It is sovereign currency in an electronic form and it would appear as liability (currency in circulation) on a central bank’s balance sheet. The underlying technology, form and use of a CBDC can be moulded for specific requirements. CBDCs should be exchangeable at par with cash.”
2. Policy Goals
Launching any CBDC involves the setting up of infrastructure, which comes with notable costs. It is therefore imperative that the CBDC provides significant advantages that can justify the investment it entails. Some of the major arguments in favour of CBDCs and their relevance in the Indian context are as follows.
Financial Inclusion: In countries with underdeveloped banking and payment systems, proponents believe that CBDCs can boost financial inclusion through the provision of basic accounts and an electronic payment system operated by the central bank. However, financial inclusion may not be a powerful motive in India, where, according to some surveys, at least one member in 99% of rural and urban households has a bank account. Even the US Federal Reserve recognises that further research is needed to assess the potential of CBDCs to expand financial inclusion, especially among underserved and lower-income households.
Access to Payments: It is claimed that CBDCs provide scope for improving the existing payments landscape by offering fast and efficient payment services to users. Further, supporters claim that a well-designed, robust, open CBDC platform could enable a wide variety of firms to compete to offer payment services. It could also enable them to innovate and generate new capabilities to meet the evolving needs of an increasingly digitalised economy. However, it is not yet clear exactly how CBDCs would achieve this objective and whether there would be any noticeable improvement in the payment systems space in India, which already boasts a fairly advanced and well-developed payment systems market.
Increased System Resilience: Countries with a highly developed digital payments landscape are aware of their reliance on electronic payment systems. The operational resilience of these systems is of critical importance to the entire payments landscape. The CBDC would not only act as a backup to existing payment systems in case of an emergency but also reduce the credit risk and liquidity risk, i.e., the risk that payment system providers will turn insolvent and run out of liquidity. Such risks can also be mitigated through robust regulatory supervision of the entities in the payment systems space.
Increasing Competition: A CBDC has the potential to increase competition in the country’s payments sector in two main ways: (i) directly, by providing an alternative payment system that competes with existing private players, and (ii) by providing an open platform for private players, thereby reducing entry barriers for newer players offering more innovative services at lower costs.
Addressing Illicit Transactions: Cash offers a level of anonymity that is not always available with existing payment systems. If a CBDC offers the same level of anonymity as cash, it would pose a greater CFT/AML (Combating the Financing of Terrorism/Anti-Money Laundering) risk. However, if appropriate CFT/AML requirements are built into the design of the CBDC, it could address some of the concerns regarding its usage in illegal transactions. Such CFT/AML requirements are already being followed by existing banks and payment system providers.
Reduced Costs: If a CBDC is adopted to the extent that it begins to act as a substitute for cash, it could allow the central bank to print less currency, thereby saving costs on printing, transporting, storing, and distributing currency. Such a cost reduction is not exclusive to CBDCs, however, and can also be achieved through the widespread adoption of existing payment systems.
Reduction in Private Virtual Currencies (VCs): Central banks are of the view that a widely used CBDC will provide users with an alternative to existing private cryptocurrencies and thereby limit various risks, including credit risk, volatility risk, the risk of fraud, etc. However, if a CBDC does not offer the same level of anonymity or the potential for high returns on investment available with existing VCs, it may not be considered an attractive alternative.
Serving Future Needs: Several central banks see the potential for “programmable money” that can be used to conduct transactions automatically on the fulfilment of certain conditions, rules, or events. Such a feature may be used for automatic routing of tax payments to authorities at the point of sale, shares programmed to pay dividends directly to shareholders, etc. Specific programmable CBDCs can also be issued for certain types of payments such as toward subway fees, shared bike fees, or bus fares. This characteristic of CBDCs has huge potential in India in terms of delivery of various subsidies.
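The idea of programmability is easiest to see as a small payment rule evaluated at the point of sale. The sketch below is a hedged illustration of such a rule, assuming a hypothetical settlement function that splits a payment between a merchant and a tax pool; no existing CBDC is claimed to expose this interface, and the names and rates are illustrative only.

```python
# Hypothetical illustration of "programmable money": a rule that routes the
# tax component of a purchase to the tax authority at the moment of settlement.

def settle_purchase(amount: float, tax_rate: float, merchant: str, tax_authority: str) -> dict:
    """Split a single CBDC payment into a merchant credit and a tax transfer."""
    tax = round(amount * tax_rate, 2)
    return {
        merchant: round(amount - tax, 2),  # net amount credited to the seller
        tax_authority: tax,                # tax routed automatically, with no separate filing step
    }

# A 100-rupee sale at an assumed 18% tax rate settles as two transfers.
print(settle_purchase(100.0, 0.18, merchant="shop-42", tax_authority="tax-pool"))
# {'shop-42': 82.0, 'tax-pool': 18.0}
```

The same pattern could, in principle, carry subsidy conditions, transit fares, or dividend payouts, which is why the delivery-of-subsidies use case is singled out above.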
3. Potential Risks
As with most things, CBDCs have certain drawbacks and risks that need to be considered and mitigated in the designing phase itself. A successful and widely adopted CBDC could change the structure and functions of various stakeholders and institutions in the economy.
Both private and public sector banks rely on bank deposits to fund their lending activities. Since bank deposits offer a safe and risk-free way to park one’s savings, a large number of people utilise this facility, thereby providing banks with a large pool of funds for lending. A CBDC could offer the public a safer alternative to bank deposits, since it eliminates even the minute risk of a bank becoming insolvent. A widely accepted CBDC could therefore adversely affect bank deposits, reducing the availability of funds for lending by banks and adversely affecting credit facilities in the economy. Further, since a CBDC is a safer form of money, in times of stress people may opt to convert funds stored in banks into CBDCs, which might cause a bank run. However, these issues can be mitigated by making CBDC holdings non-interest-bearing, thus reducing their attractiveness as an alternative to bank deposits. Further, in times of monetary stress, the central bank could impose restrictions on the amount of bank money that can be converted into the CBDC, just as it has done with cash withdrawals from specific banks undergoing extreme financial stress.
If a significantly large portion of a country’s population adopts a private digital currency, it could seriously hamper the ability of the central bank to carry out several crucial functions, such as implementing the monetary policy, controlling inflation, etc.
It may be safe to say that the question of how CBDCs may affect the economy in general and more specifically, the central bank’s ability to implement monetary policy, seigniorage, financial stability, etc. requires further research and widespread consultation to mitigate any potential risk factors.
4. The Role of the Central Bank in a CBDC
The next issue that requires attention when dealing with CBDCs is the role and level of involvement of the central bank. This would depend not only on the number of additional functions that the central bank is comfortable adopting but also on the maturity of the fintech ecosystem in the country. Broadly speaking, there are three basic models concerning the role of the central bank in CBDCs:
(i) Unilateral CBDCs: Where the central bank performs all the functions right from issuing the CBDC to carrying out and verifying transactions and also dealing with the users by maintaining their accounts.
(ii) Hybrid or Intermediate Model: In this model, the CBDCs are issued by the central bank, but private firms carry out some of the other functions such as providing wallets to end users, verifying transactions, updating ledgers, etc. These private entities will be regulated by the central bank to ensure that there is sufficient supervision.
(iii) Synthetic CBDCs: In this model, the CBDC itself is not issued by the central bank but by private players. However, these CBDCs are backed by central bank liabilities, thus providing the sovereign stability that is the hallmark of a CBDC.
These models could also be modified to suit the needs of the economy; e.g., the second model could be modified by allowing not only private players to perform the user-facing functions, but also the central bank or some other public sector enterprise to offer the same functions. Such a scenario has the potential to offer services at a reduced price (perhaps with reduced functionalities), thereby furthering the financial inclusion and cost reduction policy goals mentioned above. A simplified summary of the division of labour these models imply is sketched below.
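The mapping below records which actor performs which function under each model. It is a simplification for illustration only, not a formal taxonomy; the function labels are our own shorthand for the roles described above.

```python
# Simplified mapping of functions to actors under the three CBDC models
# described above; illustrative shorthand only.
CBDC_MODELS = {
    "unilateral": {
        "issuance": "central bank",
        "transaction validation": "central bank",
        "wallets and user accounts": "central bank",
    },
    "hybrid / intermediate": {
        "issuance": "central bank",
        "transaction validation": "regulated private firms",
        "wallets and user accounts": "regulated private firms",
    },
    "synthetic": {
        "issuance": "private issuers backed by central bank liabilities",
        "transaction validation": "private issuers",
        "wallets and user accounts": "private issuers",
    },
}

def who_handles(model: str, function: str) -> str:
    """Look up which actor performs a given function under a given model."""
    return CBDC_MODELS[model][function]

print(who_handles("hybrid / intermediate", "wallets and user accounts"))
# regulated private firms
```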
5. Role of Blockchain Technology
While it is true that the entire concept of a CBDC evolved from cryptocurrencies and that popular cryptocurrencies like Bitcoin and Ether are based on blockchain technology, recent research seems to suggest that blockchain may not necessarily be the default technology for a CBDC. Additionally, different jurisdictions have their own views on the merits and demerits of this technology, for example, the Bahamas and the Eastern Caribbean Central Bank have DLT-based systems; however, China has decided that DLT-based systems do not have adequate capacity to process transactions and store data to meet its system requirements.
Similarly, a project by the Massachusetts Institute of Technology (MIT) Digital Currency Initiative and the Federal Reserve Bank of Boston titled “Project Hamilton”, which explores the CBDC design space and its technical challenges and opportunities, has surmised that a distributed ledger operating under the jurisdiction of different actors is not necessarily crucial. It found that, even if controlled by a single actor, a DLT architecture has downsides such as performance bottlenecks and significantly reduced transaction throughput scalability compared to other options.
6. Conclusion
Although a CBDC potentially offers some advantages, launching one is an expensive and complicated proposition, requiring in-depth research and detailed analyses of a large number of issues, only some of which have been highlighted here. Therefore, before launching a CBDC, central banks issue white papers and consult with the public in addition to major stakeholders, conduct pilot projects, etc. to ensure that the issue is analysed from all possible angles. Although the Reserve Bank of India is examining various issues such as whether the CBDC would be retail or wholesale, the validation mechanism, the underlying technology to be used, distribution architecture, degree of anonymity, etc., it has not yet released any consultation papers or confirmed the completion of any pilot programmes for the CBDC project.
It is, therefore, unclear whether there has been any detailed cost–benefit analysis by the government or the RBI regarding its feasibility and benefits over existing payment systems and whether such benefits justify the costs of investing in a CBDC. For example, several of the potential advantages discussed here, such as financial inclusion and improved payment systems may not be relevant in the Indian context, while others such as reduced costs and a reduction in illegal transactions may be achieved by improving the existing systems. It must be noted that the current system of distribution of central bank money has worked well over the years, and any systemic changes should be made only if the potential upside justifies such fundamental changes.
The Government of India has already announced the launch of the Indian CBDC in early 2023, but the lack of public consultation on such an important project is a matter of concern. The last time the RBI took a major decision in the crypto space without consulting stakeholders was when it banned financial institutions from having any dealings with crypto entities. On that occasion, the circular imposing the ban was struck down by the Supreme Court as violating the fundamental right to trade and profession. It is, therefore, imperative that the government and the Reserve Bank conduct wide-ranging consultations with experts and the public to conduct a detailed and thorough cost–benefit analysis to determine the feasibility of such a project before deciding on the launch of an Indian CBDC.
Response to the Pegasus Questionnaire issued by the SC Technical Committee
The questionnaire had 11 questions, and responses had to be submitted through an online form, which was available here. The last date for submitting responses was March 31, 2022. CIS submitted the following responses to the questions in the questionnaire. Access the Response to the Questionnaire
Rethinking Acquisition of Digital Devices by Law Enforcement Agencies
Read the article originally published in RGNUL Student Research Review (RSRR) Journal
Abstract
The Criminal Procedure Code was created in the 1970s, when the right to privacy was largely unacknowledged. Following the Puttaswamy I (2017) judgement of the Supreme Court affirming the right to privacy, these antiquated codes must be re-evaluated. Today, the police can acquire digital devices through summons and gain direct access to a person’s life, despite the summons mechanism having been intended for targeted, narrow enquiries. Once in possession of a device, the police attempt to circumvent the right against self-incrimination by demanding biometric passwords, arguing that the right does not cover biometric information. However, due to the extent of information available on digital devices, courts ought to be cautious and strive to limit the power of the police to compel such disclosures, taking into consideration the right to privacy judgement.
Keywords: Privacy, Criminal Procedural Law, CrPC, Constitutional Law
Introduction
New challenges confront the Indian criminal investigation framework, particularly in the context of law enforcement agencies (LEAs) acquiring digital devices and their passwords. Criminal procedure codes delimiting police authority and procedures were created before the widespread use of digital devices and are no longer adequate for the modern age, given the magnitude of information available on a single device. A single device could provide more information to LEAs than a complete search of a person’s home; yet the acquisition of a digital device is not treated with the severity and caution it deserves. Following the affirmation of the right to privacy in Puttaswamy I (2017), criminal procedure codes must be revamped, taking into consideration that the acquisition of a person’s digital device constitutes a major infringement of their right to privacy.
Acquisition of digital devices by LEAs through summons
Section 91 of the Criminal Procedure Code (CrPC) grants powers to a court, or to the police officer in charge of a police station, to compel a person to produce any form of document or ‘thing’ necessary or desirable for a criminal investigation. In Rama Krishna v State, ‘necessary’ and ‘desirable’ have been interpreted to cover any piece of evidence relevant to the investigation or a link in the chain of evidence. Abhinav Sekhri, a criminal law litigator and writer, has argued that the wide wording of this section allows summons to be directed towards the retrieval of specific digital devices.
As summons are target-specific, the section has minimal safeguards. However, several issues arise in the context of summons for digital devices. Today, access to a user’s personal device can provide comprehensive insight into their life and personality due to the vast amounts of private and personal information stored on it. In Riley v California, the Supreme Court of the United States (SCOTUS) observed that, due to the nature of the content present on digital devices, summons for them are equivalent to a roving search, i.e., demanding the simultaneous production of all contents of the home, bank records, call records, and lockers. The Riley decision correctly highlights the need for courts to recognise that digital devices ought to be treated distinctly from other forms of physical evidence because of the repository of information stored on them.
The burden the state must meet in order to issue summons is low, as the relevancy requirement is easily satisfied. As noted in Riley, police must identify which evidence on a device is relevant; yet, due to the sheer amount of data on phones, it is very easy for police to claim that there will surely be some connection between the content on the device and the case. Given the wide range of offences available for Indian LEAs to cite, they can readily argue that the content on a device is relevant to any number of possible offences. LEAs rarely face consequences for charging the accused with a long roster of offences, even if many of them are baseless, leaving the system prone to abuse. The Indian Supreme Court in its judgement in Canara Bank noted that the burden of proof must be higher for LEAs when investigations violate the right to privacy. Tarun Krishnakumar notes that the trickle-down effect of Puttaswamy I will lead to new privacy challenges with regard to summonses to appear in court. Puttaswamy I will provide the bedrock and constitutional framework within which future challenges to the criminal process will be undertaken. It is important for courts to recognise the transformative potential of the Puttaswamy judgement to help ensure that the right to privacy of citizens is safeguarded. The colonial logic of policing – wherein criminal procedure law was merely a tool to maximise the interest of the state at the cost of the people – must be abandoned. Courts ought to devise a framework under Section 91 to ensure that summons are narrowly framed to target specific information or content within digital devices. Additionally, a digital device should be collected only after a judicial authority, and not a police authority, issues the summons. Prior judicial warrants would require LEAs to demonstrate their need for the digital device; on estimating the impact on privacy, the authority can issue a suitably tailored summons. Currently, the only consideration is whether the item will furnish evidence relevant to the investigation; instead, judges ought to balance the LEA’s need for the digital device against the user’s right to privacy, dignity, and autonomy.
Puttaswamy I provides a triple test encompassing legality, necessity, and proportionality to evaluate privacy claims. Legality requires that the measure be prescribed by law; necessity asks whether it is the least restrictive means available to the state; and proportionality checks whether the objective pursued by the measure is proportionate to the degree of infringement of the right. The relevance standard, as mentioned before, is inadequate as it does not provide enough safeguards against abuse. The police can issue summons based on the slightest of suspicions and thus gain access to a digital device, following which they can conduct a roving enquiry of the device to find evidence of any other offence, unrelated to the original cause of suspicion.
Unilateral police summons of digital devices cannot pass the triple test, as they are grossly disproportionate and lack any form of safeguard against police overreach. The current system has no mechanism for overseeing LEAs; as long as LEAs themselves are of the view that they require the device, they can acquire it. In Riley, SCOTUS has already held that the warrantless search of digital devices constitutes a violation of the right to privacy. India ought to adopt a similar requirement of a prior judicial warrant for the procurement of devices by LEAs. A re-imagined criminal process would have to abide by the triple test, in particular proportionality, wherein the benefit claimed by the state must not be disproportionate to the impact on the fundamental right to privacy; further, a framework must be proposed to provide safeguards against abuse.
Compelling the production of passwords of devices
In police investigations, gaining possession of a physical device is merely the first step in acquiring the data on it, as LEAs still require the passcodes needed to unlock the device. LEAs compelling the production of passcodes to gain access to potentially incriminating data raises obvious questions regarding the right against self-incrimination; in the context of digital devices, however, several privacy issues arise as well.
In Kathi Kalu Oghad, the SC held that compelling an accused person to provide fingerprints, so that they may be compared with fingerprints discovered by the LEA in the course of its investigation, does not violate the accused’s right against self-incrimination. It has been argued that the ratio of the judgement prohibits compelled disclosure of passwords and biometrics for unlocking devices, because Kathi Kalu Oghad dealt only with the production of fingerprints for comparison with pre-existing evidence, as opposed to unlocking new evidence by using the fingerprint. However, the judgement deals with self-incrimination and does not address any privacy issues.
The right against self-incrimination alone may not be enough to resolve all concerns. Firstly, different forms of password protection on digital devices may receive varying levels of protection: text- and pattern-based passcodes are inarguably protected under Art. 20(3) of the Constitution, whereas the protection of biometrics-based passcodes depends on the correct interpretation of the Kathi Kalu Oghad precedent. Secondly, Art. 20(3) protects only the accused in an investigation, and not persons who are not accused but whose digital devices are acquired by LEAs and whose passcodes are demanded.
Therefore, considering the aforementioned points, it is pertinent to remember that the right against self-incrimination does not exist in a vacuum separate from privacy. It originates from the concept of decisional autonomy – the right of individuals to make decisions about matters intimate to their life without interference from the state and society. Puttaswamy I observed that decisional autonomy is the bedrock of the right to privacy, as privacy allows an individual to make these intimate decisions away from the glare of society and/or the state. This takes on heightened importance in this context, as interference with such autonomy could lead to the person in question facing criminal prosecution. The SC in Selvi v Karnataka and Puttaswamy I has repeatedly affirmed that the right against self-incrimination and the right to privacy are linked concepts, with the court observing that the right to remain silent is an integral aspect of decisional autonomy.
In Virendra Khanna, the Karnataka High Court (HC) dealt with the privacy and self-incrimination concerns raised by LEAs compelling the disclosure of passwords. The HC brushed aside concerns related to privacy by noting that the right to privacy is not absolute, that state interest and the protection of law and order are exceptions to it (para 5.11), and that unlawful disclosure of material to third parties could be an actionable wrong (para 15). This interpretation of privacy effectively gives the police a free pass to interfere with the right to privacy under the pretext of a criminal investigation. It is inadequate because it avoids the issue of proportionality: the court makes no attempt to ensure that the interference is proportionate to the outcome sought.
US courts also treat the compelled production of passcodes as an issue of self-incrimination as well as privacy. In its judgement in Application for a Search Warrant, a US court observed that compelling the disclosure of passcodes sits at the intersection of the right to privacy and the right against self-incrimination; the right against self-incrimination serves to protect the privacy interests of suspects.
Compelled disclosure of the passwords to digital devices amounts to an intrusion into the privacy of the suspect, as the collective contents of the device effectively give LEAs a window into a person’s mind and identity. Police investigative techniques cannot override fundamental rights and must respect the personal autonomy of suspects – particularly, the choice between silence and speech. Through the production of passwords, LEAs can effectively obtain a snapshot of a suspect’s mind. This is analogous to the polygraph and narco-analysis tests struck down as unconstitutional by the SC in Selvi, because it violates decisional autonomy.
As Sekhri noted, a criminal process that reflects the aspirations of the Puttaswamy judgement would require LEAs to first explain in reasonable detail the material they expect to find on the digital device. Secondly, they must provide a timeline for the investigation, to ensure that individuals are not subjected to inexhaustible investigations with police roving through their devices indefinitely. Thirdly, such a criminal process must demand a higher burden of the state where the privacy of the individual is infringed upon. These aspirations should form the bedrock of a system of judicial warrants with which LEAs ought to be required to comply if they wish to compel the disclosure of passwords. The framework proposed above is similar to the Virendra Khanna guidelines in that it provides a system of checks and balances ensuring that the intrusion on privacy is carried out proportionately; additionally, it would require LEAs to show a real need to access the device. The independent eyes of a judicial magistrate provide a mechanism of oversight and a check against abuse of power by LEAs.
Conclusion
The criminal law apparatus is the most coercive power available to the state; privacy rights will become meaningless unless they can withstand it. Several criminal procedures in the country are rooted in colonial statutes, drafted when the rights of the policed population were never a consideration; hence, a radical shift is required. Post-1947, and especially after Puttaswamy, this disregard for the rights of the population can no longer be justified, and significant reformulation is necessary to guarantee meaningful protections to device owners. The rights of individuals must be protected, especially when the motivation for infringing them is the supposedly noble intention of the criminal justice system. Failing to defend the right to privacy in these moments would invite the power of the state to grow and, inevitably, become absolute.
CCTVs in Public Spaces and the Data Protection Bill, 2021
The article by Anamika Kundu and Digvijay S. Chaudhary was originally published by RGNUL Student Research Review on April 20, 2022
Introduction
In recent times, Indian cities have seen an expansion of state-deployed CCTV cameras. According to a recent report, Delhi was ranked the most surveilled city in the world in terms of CCTVs deployed, surpassing even the most surveilled cities in China. Delhi was not the only Indian city on that list; Chennai and Mumbai also featured. In Hyderabad as well, the development of a Command and Control Centre aims to link the city’s surveillance infrastructure in real time. Even though studies have shown that there is little correlation between CCTVs and crime control, the deployment of CCTV cameras has been justified on the basis of national security and crime deterrence. Such deployment entails the collection and retention of audio-visual/visual information of all individuals frequenting spaces where CCTV cameras are installed. This information could be used to identify them (directly or indirectly) based on their looks or other attributes. Potential risks associated with the misuse and processing of such personal data also arise. These risks include large-scale profiling, criminal abuse (law enforcement misusing CCTV information for personal gain), and discriminatory targeting (law enforcement disproportionately focusing on a particular group of people). As these devices capture the personal data of individuals, this article examines the data protection safeguards available to data principals against CCTV surveillance employed by the State in public spaces under the proposed Data Protection Bill, 2021 (the “DPB”).
Safeguards Available Under the Data Protection Bill, 2021
To deploy CCTV surveillance, the measures and compliance requirements listed under the DPB have to be followed. The obligations of data fiduciaries under Chapter II, such as consent (clause 11), the notice requirement (clause 7), and fair and reasonable processing (clause 5), are common to all data processing entities across a variety of activities. Similarly, as the DPB follows the principles of data minimisation (clause 6), storage limitation (clause 9), purpose limitation (clause 5), lawful and fair processing (clause 4), transparency (clause 23), and privacy by design (clause 22), these safeguards too are common to all data processing entities and activities. If a data fiduciary processes the personal data of children, it has to comply with the standards stated under clause 16.
Under the DPB, compliance differs on the basis of the grounds and purpose of data processing. As compliance standards differ, so does the availability of safeguards under the DPB. Of relevance to this article, there are three standards of compliance under the DPB wherein the safeguards available to a data principal differ. First, cases which fall under Chapter III and hence do not require consent; Chapter III lists the grounds for processing personal data without consent. Second, cases which fall under the exemption clauses in Chapter VIII; in such cases, the DPB or some of its provisions would be inapplicable. Clause 35 under Chapter VIII gives the Central Government the power to exempt any agency from the application of the DPB, while clause 36 exempts certain provisions for certain kinds of processing of personal data. Third, cases which fall under neither of the above chapters; in such cases, all safeguards available under the DPB would be available to data principals. Consequently, the safeguards available to data principals differ across these standards. We will go through each of them separately.
First, if the grounds for processing CCTV information fall under the scope of Chapter III of the DPB, wherein the consent requirement is done away with, the notice requirement must still reflect the purpose of processing; in other words, even if consent is not necessary in certain cases, other requirements under the DPB would still apply. Here, we must note that CCTV deployment by the state on such a large scale may be justified on the basis of the conditions stated under clauses 12 and 14 of the DPB – specifically, the performance of a state function authorised by law, and public interest. The requirement under clause 12 of being “authorised by law” simply means that the state function should have legal backing. Deployment of CCTVs is most likely to fall under clause 12, as various states have enacted legislation providing for CCTV deployment in the name of public safety. As a result, even if clause 12 takes away the requirement of consent in certain cases, data principals should be able to exercise all rights accorded to them under the DPB (Chapter V) except the right to data portability under clause 19.
Second, the processing of personal data via CCTVs by government agencies could be exempted from the DPB under clause 35 in certain cases. Another exemption that is particularly concerning with regard to the use of CCTVs is the one provided under clause 36(a). Clause 36(a) says that the provisions of Chapters II-VII would not apply where data is processed in the interest of the prevention, detection, investigation, and prosecution of any offence under law. Chapters II-VII govern the obligations of data fiduciaries, the grounds where consent is not required, the personal data of children, the rights of data principals, transparency and accountability measures, and restrictions on the transfer of personal data outside India, respectively. In these cases, the requirement of fair and reasonable processing under clause 5 would also not apply. As the broad justification provided for CCTV deployment by the government is crime control, it is possible that the clause 36(a) exemption could be used to exclude the processing of CCTV footage from the above-mentioned safeguards.
From the above discussion, the following can be concluded. First, if the grounds of processing fall under Chapter III, then the standards of fair and reasonable processing, the notice requirement, and all rights except the right to data portability under clause 19 would be available to data principals. Second, if the grounds of processing fall under clause 36, then the consent requirement, the notice requirement, and the rights under the DPB would be unavailable, as that clause mandates the non-application of those chapters; in such a case, even the requirement of fair and reasonable processing stands suspended. Third, if the grounds for processing CCTV information do not fall under Chapter III, then all obligations listed under Chapter II would have to be followed, and the data principal would be able to exercise all the rights available under Chapter V of the DPB.
Constitutional Standards
When the Supreme Court recognised privacy as a fundamental right in Puttaswamy v. Union of India (“Puttaswamy”), it located the principles of informed consent and purpose limitation as central to informational privacy. It recognised that privacy inheres not in spaces but in the individual. It also recognised that privacy is not an absolute right and that certain restrictions may be imposed on its exercise. Before listing the constitutional standards that privacy-infringing activities must adhere to, it is important to ask whether there exists a reasonable expectation of privacy in footage from CCTVs deployed in public spaces by the State.
In Puttaswamy, the court recognised that privacy is not denuded in public spaces. Writing for the plurality judgement, Chandrachud J. recognised that the notion of a reasonable expectation of privacy has elements of both a subjective and an objective nature. Defining these concepts, he writes, “Privacy at a subjective level is a reflection of those areas where an individual desire to be left alone. On an objective plane, privacy is defined by those constitutional values which shape the content of the protected zone where the individual ought to be left alone…hence while the individual is entitled to a zone of privacy, its extent is based not only on the subjective expectation of the individual but on an objective principle which defines a reasonable expectation.” Note how, in these sentences, the plurality judgement locates “a reasonable expectation” in “constitutional values”. This is important because the meaning of what is reasonable is to be determined according to constitutional values and not societal norms.

A second consideration in defining the phrase “reasonable expectation of privacy” is that an individual’s reasonable expectation is allied to the purpose for which the information is provided, as held in District Registrar and Collector, Hyderabad v. Canara Bank (“Canara Bank”). Finally, the third consideration is that the phrase is context dependent. For example, in In the matter of an application by JR38 for Judicial Review (Northern Ireland) [2015] UKSC 42 (link here), the UK Supreme Court was faced with a scenario where the police published CCTV footage of the appellant involved in riotous behaviour. The question before the court was: “Whether the publication of photographs by the police to identify a young person suspected of being involved in riotous behaviour and attempted criminal damage can ever be a necessary and proportionate interference with that person’s article 8 [privacy] rights?” The majority held that there was no reasonable expectation of privacy in the case because of the nature of the criminal activity the appellant was involved in. However, the majority’s formulation of this conclusion rested on the reasoning that the “expectation of privacy” depended on the “identification” purpose of the police. The court stated, “Thus, if the photographs had been published for some reason other than identification, the position would have been different and might well have engaged his rights to respect for his private life within article 8.1”. Therefore, as the purpose of publishing the footage was the “identification” of the wrongdoer, the reasonable expectation of privacy stood excluded. The Canara Bank case was relied on by the SC in Puttaswamy, and the plurality judgement in Puttaswamy also quoted the above paragraphs from the UK Supreme Court judgement.
Finally, the SC in the Aadhaar case laid down the factors constituting a “reasonable expectation of privacy.” Relying on those factors, the Supreme Court observed that demographic information and photographs do not attract a reasonable expectation of privacy, and further held that facial photographs used for the purpose of identification are not covered by a reasonable expectation of privacy. As this author has recognised, the majority in the Aadhaar case misconstrued the “reasonable expectation of privacy” as lying not in constitutional values, as held in Puttaswamy, but in societal norms. Even with this misapplication of the Puttaswamy principles, it is clear that the exclusion of a “reasonable expectation of privacy” in facial photographs is valid only for the purpose of “identification”. For purposes other than “identification”, there should exist a reasonable expectation of privacy in CCTV footage. Having recognised the existence of a “reasonable expectation of privacy” in CCTV footage, let us examine how the safeguards under the DPB fare against the constitutional standards of privacy laid down in Puttaswamy.
The bench in Puttaswamy located privacy not only in Article 21 but in the entirety of Part III of the Indian Constitution. Where a transgression of privacy relates to different provisions under Part III, the tests evolved under those Articles would apply. Puttaswamy recognised that national security and crime control are legitimate state objectives. However, it also recognised that any limitation on the right must satisfy the proportionality test, which requires a legitimate state aim, rational nexus, necessity, and a balancing of interests. Infringement of the right to privacy occurs under the first and second standards identified above. The first requirement of proportionality is arguably satisfied, as national security and crime control have been recognised as legitimate state objectives. However, it must be noted that the EU Guidelines on Processing of Personal Data through Video Devices state that the mere purpose of “safety” or “for your safety” is not sufficiently specific, and is contrary to the principle that personal data shall be processed lawfully, fairly, and in a transparent manner in relation to the data subject. The second requirement is a rational nexus; as stated above, there is little correlation between crime control and surveillance measures. Even if the state justifies a rational nexus between the state aim and the action employed, it is at the necessity stage of the proportionality test that CCTV surveillance measures fail (as explained by this author). Necessity requires us to draw up a list of alternatives and their impact on the individual, and then to conduct a balancing analysis with regard to those alternatives. Here, judicial scrutiny of an exemption order under clause 35 is a viable alternative that respects individual rights while not interfering with the state’s aim.
Conclusion
Informed consent and purpose limitation were stated to be central principles of informational privacy in Puttaswamy. Among the three standards we identified, both principles remain available only in the third standard. In the first standard, even though the requirement of consent becomes unavailable, the principle of purpose limitation would still apply to the processing of such data. The second standard is of particular concern, wherein neither of these principles is available to data principals.

It is worth mentioning that in large-scale monitoring activities such as CCTV surveillance, the safeguards the DPB lists out would inevitably face implementation flaws. In scenarios where individuals refuse consent to large-scale CCTV monitoring, what alternatives would the government offer them? Practically, CCTV surveillance would fall under the clause 12 standard, where consent is not required. Even then, would the notice requirement be reduced to “you are under surveillance” notices? And when it comes to the exercise of rights available under the DPB, how would an individual effectively exercise their rights when the data processing is not limited to a particular individual? These questions arise because the safeguards under the DPB (and data protection laws in general) are based on individualistic notions of privacy. Interestingly, individual use of CCTVs has also increased alongside state use. Deployment of CCTVs for personal or domestic purposes would be exempt from the above-mentioned compliances, as it would fall under the exemption in clause 36(d).

Two additional concerns arise in relation to the processing of data from CCTVs: the JPC report’s inclusion of Non-Personal Data (“NPD”) within the ambit of the DPB, and the government’s plan to develop a National Automated Facial Recognition System (“AFRS”). A significant part of the data collected by CCTVs would fall within the ambit of NPD. With the JPC’s recommendation, it will be interesting to follow the processing standards for NPD under the DPB. The AFRS has been imagined as a national database of photographs gathered from various agencies, to be used in conjunction with facial recognition technology. The use of facial recognition technology with CCTV cameras raises concerns surrounding biometric data and the risk of large-scale profiling. Indeed, clause 27 of the DPB reflects this risk and mandates a data protection impact assessment by the data fiduciary for processing involving new technologies, large-scale profiling, or the use of biometric data; however, the DPB does not define what “new technology” means. Concerns around biometric data are outside the scope of the present article; nonetheless, it would be interesting to examine how the use of facial recognition technology with CCTVs could impact the safeguards under the DPB.
Comments to the Draft National Health Data Management Policy 2.0
This is a joint submission on behalf of (i) Access Now, (ii) Article 21, (iii) Centre for New Economic Studies, (iv) Center for Internet and Society, (v) Internet Freedom Foundation, (vi) Centre for Justice, Law and Society at Jindal Global Law School, (vii) Priyam Lizmary Cherian, Advocate, High Court of Delhi, (ix) Swasti-Health Catalyst, and (x) Population Fund of India.
At the outset, we would like to thank the National Health Authority (NHA) for inviting public comments on the draft version of the National Health Data Management Policy 2.0 (NDHM Policy 2.0) (the “Policy”). We have not provided comments on each section/clause, but have instead highlighted specific broad concerns which we believe are essential to address prior to the launch of NDHM Policy 2.0.
Read on to view the full submission here
Issue Brief: Regulating Crypto-asset Advertising in India
CIS Issue Brief on regulating Crypto-asset advertising in India
Over the past decade, crypto-assets have established themselves within the digital global zeitgeist. Crypto-asset (alternatively referred to as cryptocurrency) trading and investments continue to skyrocket, with centralised crypto exchanges seeing upwards of USD 14 trillion (or around INR 1086 trillion) in trading volume.
One of the key elements behind this exponential growth and embedding of crypto-assets into the global cultural consciousness has been the marketing and advertising efforts of crypto-asset providers and crypto-asset-related service providers. In India alone, crypto-exchange advertisements have permeated all forms of media and seem to be increasing as the market continues to mature. At the same time, however, financial regulators such as the RBI have consistently pointed out concerns associated with crypto-assets, even going so far as to warn consumers and investors of the dangers that may arise from investing in crypto-assets through a multitude of circulars.
In light of this, we analyse the regulations governing crypto-assets in India by examining the potential and actual limitations posed by them. We then compare them with the regulations governing the advertising of another financial instrument, mutual funds. Finally, we perform a comparative analysis of crypto-asset advertising regulations in four jurisdictions – the EU, Singapore, Spain, and the United Kingdom – and identify clear and actionable recommendations that policymakers can implement to ensure the safety and fairness of crypto-asset advertising in India.
The full issue brief can be accessed here.
Making Voices Heard
We believe that voice interfaces have the potential to democratise the use of the internet by addressing limitations related to reading and writing on digital text-only platforms and devices. This report examines the current landscape of voice interfaces in India, with a focus on concerns related to privacy and data protection, linguistic barriers, and accessibility for persons with disabilities (PwDs).
The report features a visual mapping of 23 voice interfaces and technologies publicly available in India, along with a literature survey; a policy brief on the development and use of voice interfaces and a design brief documenting best practices and users’ needs, both focusing on privacy, language, and accessibility considerations; and a set of case studies on three voice technology platforms. Read and download the full report here
Credits
Research: Shweta Mohandas, Saumyaa Naidu, Deepika Nandagudi Srinivasa, Divya Pinheiro, and Sweta Bisht.
Conceptualisation, Planning, and Research Inputs: Sumandro Chattapadhyay, and Puthiya Purayil Sneha.
Illustration: Kruthika NS (Instagram @theworkplacedoodler). Website Design: Saumyaa Naidu. Website Development: Sumandro Chattapadhyay, and Pranav M Bidare.
Review and Editing: Puthiya Purayil Sneha, Divyank Katira, Pranav M Bidare, Torsha Sarkar, Pallavi Bedi, and Divya Pinheiro.
Copy Editing: The Clean Copy
Working paper on Non-Financial Use Cases of Blockchain Technology
Ever since its initial conceptualisation in 2009, blockchain technology has been synonymous with financial products and services – most notably crypto-assets like Bitcoin. However, while often associated with the financial sector, blockchain technology represents an opportunity for multiple industries to reinvent and improve their legacy processes. In India, the 2020 Discussion Paper on Blockchain Technology by NITI Aayog as well as the National Blockchain Strategy of 2021 by the Ministry of Electronics and Information Technology have attempted to articulate this opportunity. These documents examine the potential benefits that would arise from blockchain’s introduction across multiple non-financial sectors.
This working paper examines three specific use cases mentioned in the above-mentioned government documents: land record management, certificate verification, and pharmaceutical supply chain management. We provide an overview of what blockchain technology is and document the ongoing attempts to integrate it into the aforementioned fields. We also assess the possible costs and benefits associated with blockchain’s introduction and draw insights from instances of such integration in other jurisdictions.
The full working paper can be found here.
The Government’s Increased Focus on Regulating Non-Personal Data: A Look at the Draft National Data Governance Framework Policy
Introduction
Non-Personal Data (‘NPD’) can be understood as any information that does not relate to an identified or identifiable natural person. The origin of such data can be both human and non-human. Human NPD is data that has been anonymised in such a way that the person to whom it relates cannot be re-identified. Non-human NPD is data that did not relate to a human being in the first place, for example, weather data. The government has demonstrated a growing interest in NPD in recent times. This new focus on regulating non-personal data can be attributed to the economic incentive it provides. In its 2018 report, the Srikrishna Committee agreed that NPD holds considerable strategic or economic value for the nation; however, it left the questions surrounding NPD to a future committee.
History of NPD Regulation
In 2020, the Ministry of Electronics and Information Technology (‘MEITY’) constituted an expert committee (‘NPD Committee’) to study various issues relating to NPD and to make suggestions on its regulation. The NPD Committee differentiated NPD into human and non-human NPD, based on the data’s origin: human NPD includes all information that has been stripped of any personally identifiable information, while non-human NPD is information that did not contain any personally identifiable information in the first place (e.g., weather data). The final report of the NPD Committee is awaited, but the Committee came out with a revised draft of its recommendations in December 2020. In that draft, the NPD Committee proposed the creation of a Non-Personal Data Authority (‘NPDA’), as it felt this is a new and emerging area of regulation. Thereafter, the Joint Parliamentary Committee on the Personal Data Protection Bill, 2019 (‘JPC’) came out with its version of the Data Protection Bill, amending the short title of the PDP Bill, 2019 to the Data Protection Bill, 2021 and widening the ambit of the Bill to include all types of data. The JPC report focuses only on human NPD, noting that non-personal data is essentially derived from one of three sets of data – personal data, sensitive personal data, or critical personal data – which is either anonymised or in some way converted into non-re-identifiable data.
On February 21, 2022, MEITY came out with the Draft India Data Accessibility and Use Policy, 2022 (‘Draft Policy’). The Draft Policy was strongly criticised, mainly for its aim to monetise data through its sale and licensing to body corporates. It stated that anonymised and non-personal data collected by the State that has “undergone value addition” could be sold for an “appropriate price”. During the consultation process, the Draft Policy was withdrawn several times and then finally removed from the website. The National Data Governance Framework Policy (‘NDGF Policy’) is its successor. There is a change in language from the Draft Policy, which had mainly focused on monetary growth: the new NDGF Policy aims to regulate anonymised non-personal data held by governmental authorities and make it accessible for research and for improving governance. It seeks to create an ‘India Datasets programme’ which will consist of the aforementioned datasets. While MEITY has opened the draft for public comments, there is a need to spell out the procedure so that stakeholders can draft recommendations on the NDGF Policy in an informed manner. Through this piece, we discuss the NDGF Policy in terms of issues arising from the absence of a comprehensive data protection framework in India and the jurisdictional overlap between authorities under the NDGF Policy and the DPB.
What the National Data Governance Framework Policy Says
Presently in India, NPD is stored across a variety of governmental departments and bodies, and it is difficult to access and use this stored data for governmental functions without modernising the collection and management of governmental data. Through the NDGF Policy, the government aims to build an Indian data storehouse of anonymised non-personal datasets and make it accessible both for improving governance and for encouraging research. The policy envisages an Indian Data Office (‘IDO’) set up by MEITY, which shall be responsible for consolidating data access and the sharing of non-personal data across the government. In addition, it mandates a Data Management Unit for every Ministry/department that would work closely with the IDO. The IDO will also be responsible for issuing protocols for sharing NPD. The policy further envisages an Indian Data Council (‘IDC’) whose function would be to define frameworks for important datasets, finalise data and metadata standards, and review the implementation of the policy. The NDGF Policy provides a broad structure concerning the setting of anonymisation standards, data retention policies, data quality standards, and data-sharing toolkits. It states that these standards shall be developed and notified by the IDO, MEITY, or the Ministry in question, and must be adhered to by all entities.
The Data Protection Framework in India
The report adopted by the JPC felt that it is simpler to enact a single law, with a single regulator, to oversee all the data that originates from any data principal and is in the custody of any data fiduciary. According to the JPC, the draft Bill deals with various kinds of data at various levels of security. The JPC also recommended that, since the Data Protection Bill (‘DPB’) will handle both personal and non-personal data, any further policy or legal framework on non-personal data be made part of the same enactment instead of a separate legislation. The draft DPB states that what is to be done with NPD shall be decided by the government from time to time according to its policy. As such, neither the DPB, 2021 nor the NDGF Policy goes into the details of regulating NPD; both only provide a broad structure for facilitating the free flow of NPD, without taking into account the specific concerns that have been raised since the NPD Committee came out with its draft report on regulating NPD in December 2020.
Jurisdictional overlaps among authorities and other concerns
Under the NDGF Policy, all guidelines and rules shall be published by a body known as the Indian Data Management Office (‘IDMO’). The IDMO is set to function under MEITY and work with the Central government, state governments, and other stakeholders to set standards. Currently, there is no indication of when the DPB will be passed into law. According to the JPC, the reason for including NPD within the DPB was the impossibility of differentiating between personal data and NPD. There are also certain overlaps between the DPB and the NDGF Policy which the latter does not address; in particular, it does not discuss the overlap between the IDMO and the Data Protection Authority (‘DPA’) established under the DPB, 2021.
Under the DPB, the DPA is tasked with specifying codes of practice under clause 49. On the other hand, the NDGF Policy envisages the setting up of the IDO, the IDMO, and the IDC, which shall be responsible for issuing codes of practice on matters such as data retention, data anonymisation, and data quality standards. As such, there appears to be some overlap between the functions of the to-be-constituted DPA and those of the bodies envisaged under the NDGF Policy.
Furthermore, while the NDGF Policy aims to promote openness with respect to government data, there is a conflict with open government data (‘OGD’) principles when there is a price attached to such data. OGD is data which is collected and processed by the government for free use, reuse and distribution. Any database created by the government must be publicly accessible to ensure compliance with the OGD principles.
Conclusion
Streamlining datasets across different authorities is a huge challenge for the government, and the NDGF Policy in its current draft requires substantial clarification. The government can take inspiration from the European Union, which in 2018 came out with a principles-based approach, coupled with self-regulation, in its framework on the free flow of non-personal data. The guidance on the free flow of non-personal data defines NPD based on the origin of the data: data which originally did not relate to any personal data (non-human NPD) and data which originated from personal data but was subsequently anonymised (human NPD). The regulation further recognises the reality of mixed datasets and regulates only the non-personal part of such datasets; where the personal and non-personal parts are inextricably linked, the GDPR applies to the whole dataset. Moreover, any policy that seeks to govern the free flow of NPD ought to make it clear that in case of re-identification of anonymised data, the re-identified data would be treated as personal data. Both the DPB, 2021 and the NDGF Policy fail to take this into account.
Central Bank Digital Currencies: A solution to India’s financial woes or just a piece of the puzzle?
Central Bank Digital Currencies (CBDCs) have, over the last couple of years, stepped firmly into the global financial spotlight. India is no exception to this trend, with both the Reserve Bank of India (RBI) and the Finance Minister referring to an Indian CBDC that is currently under development.
With the introduction of this CBDC a matter of when and not if, India and many other countries stand on the precipice of re-imagining their financial systems. It is therefore imperative that any attempt at introducing a CBDC is preceded by a detailed analysis of its scope, benefits, limitations, and how it has been implemented in other jurisdictions. This policy brief looks to achieve that by examining the form that a CBDC could take, what its policy goals would be in India, the considerations the RBI would have to account for and whether a CBDC would work in present-day India. Finally, it also looks at the case of Nigeria to draw insights that could also be applied to the introduction and operationalisation of a CBDC in the Indian context.
The full issue brief can be accessed here.
Comments to the draft amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021
These comments examine whether the proposed amendments adhere to established principles of constitutional law, intermediary liability, and other relevant legal doctrines. We thank the Ministry of Electronics and Information Technology (MEITY) for allowing us this opportunity. Our comments are divided into two parts. In the first part, we reiterate some of our comments on the existing version of the rules, which we believe hold relevance for the proposed amendments as well. In the second part, we provide issue-wise comments that we believe need to be addressed prior to finalising the amendments to the rules.
To access the full text of the Comments to the draft amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, click here
What Are The Consumer Protection Concerns With Crypto-Assets?
The article was published in Medianama on July 8, 2022
Crypto-asset regulation is at the forefront of Indian financial regulators’ minds. On the 6th of June, the Securities and Exchange Board of India (SEBI), in a response to the Parliamentary Standing Committee on Finance, expressed clear consumer protection concerns associated with crypto-assets.
This statement follows multiple notices issued by the Reserve Bank of India (RBI) warning consumers of the risks related to crypto-assets, and even a failed attempt to prevent banks from transacting with any individual trading crypto-assets. Yet, in spite of these multiple warnings, and a significant drop in trading volume due to the introduction of a new taxation structure, crypto-assets have still managed to establish themselves as a legitimate financial instrument in the minds of many.
Recent global developments, however, seem to validate the concerns held by both the RBI and SEBI.
The bear market that crypto finds itself in has sent shockwaves throughout the ecosystem, crippling some of the most established tokens in the space. Take, for example, the death spiral of the algorithmic stablecoin Terra USD and its sister token Luna—with Terra USD going from a top-10-traded crypto-token to being practically worthless. The volatility of token prices has had a significant knock-on effect on crypto-related services. Following Terra’s crash, the centralised finance (CeFi) platform Celsius—which provided quasi-banking facilities for crypto holders—also halted all withdrawals. More recently, the crypto-asset hedge fund Three Arrows filed for bankruptcy following its inability to meet its debt obligations and protect its assets from creditors looking to get their money back.
Underpinning these stories of failing corporations are the very real experiences of investors and consumers—many of whom have lost a significant amount of wealth. This has been a direct result of the messaging around crypto-assets. Crypto-assets have been promoted through popular culture as a means of achieving financial freedom and accruing wealth quickly. It is this narrative that lured numerous regular citizens to invest substantial portions of their income into crypto-asset trading. At the same time, the crypto-asset space is littered with a number of scams and schemes designed to trick unaware consumers. These schemes, primarily taking the form of ‘pump and dump’ schemes, represent a significant issue for investors in the space.
It seems, therefore, that any attempt to ensure consumer protection in the crypto-space must adopt two key strategies:
- First, it must re-orient the narrative away from crypto as a simple means of getting wealthy—and ensure that those consumers who invest in crypto do so with full knowledge of the risks associated with crypto-assets; and
- Second, it must provide consumers with sufficient recourse in cases where they have been subject to fraud.
In this article, we examine the existing regulatory framework around grievance redressal for consumers in India—and whether these safeguards are sufficient to protect consumers trading crypto-assets. We further suggest practical measures that the government can adopt going forward.
What is the Current Consumer Protection Framework Around Crypto-assets?
Safeguards Under the Consumer Protection Act and E-commerce Rules
The increased adoption of e-commerce by consumers in India forced legislators to address the lack of regulation for the protection of consumer interests. This legislative expansion may extend to protecting the interests of investors and consumers trading in crypto-assets.
The groundwork for consumer welfare was laid in the new Consumer Protection Act, 2019 which defined e-commerce as the “buying or selling of goods or services including digital products over digital or electronic network.” It also empowered the Union Government to take measures and issue rules for the protection of consumer rights and interests, and the prevention of unfair trade practices in e-commerce.
Within a year, the Union Government exercised its power to issue operative rules known as the Consumer Protection (E-Commerce) Rules, 2020 (the “Rules”), which amongst other things, sought to prohibit unfair trade practices across all models of e-commerce. The Rules define an e-commerce entity as one which owns, operates or manages a digital or electronic facility or platform (which includes a website as well as mobile applications) for electronic commerce.
The definition of e-commerce is not limited to physical goods but also includes services as well as digital products. So, one can plausibly assume that it would be applicable to a number of crypto-exchanges, as well as certain entities offering decentralised finance (DeFi) services. This is because crypto tokens—be they cryptocurrencies like Bitcoin, Ethereum, or Dogecoin—are not considered currency or securities under Indian law, but can be said to be digital products since they are digital goods.
The fact that the digital products being traded on the e-commerce entity originated outside Indian territory would make no difference as far as the applicability of the Rules is concerned. The Rules apply even to e-commerce entities not established in India, but which systematically offer goods or services to consumers in India. The concept of systematically offering goods or services across territorial boundaries appears to have been taken from the E-evidence Directive of the European Union and seeks to target only those entities which intend to do substantial business within India while excluding those who do not focus on the Indian market and have only a minuscule presence here.
Additionally, the Rules impose certain duties and obligations on e-commerce entities, such as:
- The appointment of a nodal officer or a senior designated functionary who is resident in India, to ensure compliance with the provisions of the Consumer Protection Act;
- The prohibition on the adoption of any unfair trading practices, thereby making the most important requirements of consumer protection applicable to e-commerce;
- The establishment of a grievance redressal mechanism and specifying an outer limit of one month for redressal of complaints;
- The prohibition on imposing cancellation charges on the consumer, unless a similar charge is also borne by the e-commerce entity if it cancels the purchase order unilaterally for any reason;
- The prohibition on price manipulation to gain unreasonable profit by imposing an unjustified price on the consumers;
- The prohibition on discrimination between consumers of the same class or an arbitrary classification of consumers that affects their rights; etc.
The Rules also impose certain liabilities on e-commerce entities relating to the tracking of shipments, the accuracy of the information on the goods or services being offered, information and ranking of sellers, tracking complaints, and information regarding payment mechanisms. Most importantly, the Rules explicitly make the grievance redressal mechanism under the Consumer Protection Act, 2019 applicable to e-commerce entities in case they violate any of the requirements under the Rules.
At present, therefore, crypto-exchanges and crypto-service providers clearly fall within the ambit of consumer protection legislation in India. In real terms, consumers can rest assured that their rights must be accounted for by the corporation in any crypto transaction.
With crypto-related scams exploding globally since 2021, it is likely that Indian investors will come into contact with, or be subject to, various scams and schemes in the crypto marketplace. It is therefore imperative that consumers and investors know the steps they can take in case they fall victim to a scam. Currently, any consumer who is the victim of a fraud or scam in the crypto space would, under the current legal regime, have two primary redressal remedies:
- Lodging a criminal complaint with the police, usually the cyber cell, regarding the fraud. It then becomes the police’s responsibility to investigate the case, trace the perpetrators, and ensure that they are held accountable under relevant legal provisions.
- Lodging a civil complaint before the consumer forum or even the civil courts claiming compensation and damages for the loss caused. In this process, the onus is on the consumer to follow up and prove that they have been defrauded.
Filing a consumer complaint may impose an extra burden on the consumer to prove the fraud—especially if the consumer is unable to get complete and accurate information regarding the transaction. Additionally, in most cases, a consumer complaint is filed when the perpetrator is still accessible and can be located by the consumer. However, where the perpetrator has absconded, the consumer would have no choice but to lodge a criminal complaint. That said, if the perpetrators have already absconded, it may be difficult even for the police to be of much help, considering the anonymity that is built into the technology.
Therefore, perhaps the best protection that can be afforded to the consumer is where the regulatory regime is geared towards the prevention of frauds and scams by establishing a licensing and supervisory regime for crypto businesses.
A Practical Guide to Consumer Protection and Crypto-assets
What is apparent is that existing regulations are not sufficient to cover the extent of protection that a crypto-investor would require. Ideally, this gap would be covered by dedicated legislation that looks to cover the range of issues within the crypto-ecosystem. However, in the absence of the (still pending) government crypto bill, we are forced to consider how consumers can currently be protected and made aware of the risks associated with crypto-assets.
On the question of informing customers of the associated risks, we must address one of the primary means through which consumers become aware of crypto-assets: advertising. Currently, crypto-asset advertising follows a code set down by the Advertising Standards Council of India, a self-regulatory, non-government body. As such, there is currently no government body that enforces binding advertising standards on crypto-assets and crypto-service providers.
While self-regulation has generally been an acceptable practice in advertising, the advertising of financial products has been treated somewhat differently. For example, Schedule VI of the Securities and Exchange Board of India (Mutual Funds) Regulations, 1996, lays down detailed guidelines for the advertising of mutual funds. Crypto-assets can, depending on their form, perform functions similar to currencies, securities, and assets. Moreover, they carry a clear financial risk—as such, their advertising should come under the purview of a recognised financial regulator. In the absence of a dedicated crypto bill, an existing regulator—such as SEBI or the RBI—should use its ad hoc powers to bring crypto-assets and their advertising under its purview.
This would allow the government not only to ensure that advertising guidelines are followed, but also to dictate the exact nature of these guidelines. It could then issue standards pertaining to disclaimers and prevent crypto-service providers from advertising crypto as easy to understand, as offering guaranteed returns on investment, or with other misleading messages.
Moreover, financial institutions such as the RBI and SEBI may consider increasing efforts to inform consumers of the financial and economic risks associated with crypto-assets by undertaking dedicated public awareness campaigns. Strongly enforced advertising guidelines, coupled with widespread and comprehensive awareness efforts, would allow the average consumer to understand the risks associated with crypto-assets, thereby re-orienting the prevailing narrative around them.
On the question of providing consumers with clear recourse, current financial regulators might consider setting up a joint working group to examine the extent of financial fraud associated with crypto-assets. Such a body can be tasked with providing consumers with clear information related to crypto-asset scams and schemes, how to spot them, and the next steps they must take in case they fall victim to one.
Aman Nair is a policy officer at the Centre for Internet & Society (CIS), India, focusing on fintech, data governance, and digital cooperative research. Vipul Kharbanda is a non-resident fellow at CIS, focusing on the fintech research agenda of the organisation.
Deployment of Digital Health Policies and Technologies: During Covid-19
Digitisation of public services in India began with taxation, land record keeping, and passport details, but it was soon extended to cover most governmental services, the latest being public health. The digitisation of the healthcare system in India had begun prior to the pandemic. However, given the push digital health has received in recent years, especially with the increased intensity of activity during the pandemic, we thought it important to undertake a comprehensive study of India's digital health policies and their implementation. The project report comprises a desk-based review of the existing literature on digital health technologies in India and interviews with on-field healthcare professionals responsible for implementing these technologies on the ground.
The report by Privacy International and the Centre for Internet & Society can be accessed here.
Surveillance Enabling Identity Systems in Africa: Tracing the Fingerprints of Aadhaar
In this report, we identify the different external actors influencing this “developmental” agenda. These range from philanthropic organisations, private companies, and technology vendors to state and international institutions. Most notable among these is the World Bank, whose influence we investigated through case studies of Nigeria and Kenya. We also explored the role played by the “success” of the Aadhaar programme in India in shaping these new ID systems. A key characteristic of the growing “digital identity for development” trend is the consolidation of different databases that record beneficiary data for government programmes into one unified platform, accessed through a unique biometric ID. This “Aadhaar model” has emerged as the default model for developing countries, with little concern for the risks it introduces. Read and download the full report here.
NHA Data Sharing Guidelines – Yet Another Policy in the Absence of a Data Protection Act
Reviewed and edited by Anubha Sinha
Launched in 2018, PM-JAY is a public health insurance scheme set to cover 10 crore poor and vulnerable families across the country for secondary and tertiary care hospitalisation. Eligible candidates can use the scheme to avail of cashless benefits at any public/private hospital falling under this scheme. Considering the scale and sensitivity of the data, the creation of a well-thought-out data-sharing document is a much-needed step. However, the document – though only a draft – has certain portions that need to be reconsidered, including parts that are not aligned with other healthcare policy documents. In addition, the guidelines should be able to work in tandem with the Personal Data Protection Act whenever it comes into force. With no prior intimation of the publication of the guidelines, and the provision of a mere 10 days for consultation, there was very little scope for stakeholders to submit their comments and participate in the consultation. While the guidelines pertain to the PM-JAY scheme, it is an important document to understand the government’s concerns and stance on the sharing of health data, especially by insurance companies.
Definitions: Ambiguous and incompatible with similar policy documents
The draft guidelines add to the list of health data–related policies published since the beginning of the pandemic, including three draft health data management policies published within two years that have already covered the sharing and management of health data. The draft guidelines repeat the pattern of earlier policies on health data, in that they make no reference to the policies that preceded them; in this case, the guidelines fail to refer to the draft National Digital Health Data Management Policy (published in April 2022). To add to this, the document, by placing the definitions at the end, is difficult to read and understand, especially when terms such as ‘beneficiary’, ‘data principal’, and ‘individual’ are used interchangeably. In the same vein, the document uses the terms ‘data principal’ and ‘data fiduciary’, and the definitions of health data and personal data, from the 2019 PDP Bill, while also referring to the IT Act SPDI Rules and their definition of ‘sensitive personal data’. While the guidelines state that the IT Act and Rules will be the legislation to refer to, it should be noted that the SPDI Rules under the IT Act cover ‘body corporates’, which under Section 43A(1) is defined as “any company and includes a firm, sole proprietorship or other association of individuals engaged in commercial or professional activities”. It is difficult to assign responsibility and accountability to the organisations covered by the guidelines when they might not even fall under this definition.
With each new policy, civil society organisations have pointed out the need for a data protection act before introducing policies and guidelines that deal with the processing and sharing of individuals' data. Ideally, these policies, even in draft form, should have been published after the Personal Data Protection Bill was enacted, to ensure consistency with the provisions of the law. For example, the guidelines introduce a new governance mechanism in the form of a data-sharing committee headed by a data-sharing officer (DSO). The responsibilities and powers of the DSO are similar to those of the data protection officer (DPO) under both the draft PDP Bill and the draft National Digital Health Data Management Policy. This, in turn, raises the question of whether the DSO and the DPOs under those two instruments will have the same responsibilities. Clarity on which of these policies are in force and how they intersect is needed to ensure smooth implementation. Ideally, the problem of multiple sources of definitions should be addressed at the drafting stage itself.
Guiding Principles: Need to look beyond privacy
The guidelines enumerate certain principles to govern the use, collection, processing, and transmission of the personal or sensitive personal data of beneficiaries. These principles include accountability, privacy by design, choice and consent, and openness/transparency. While these provisions are much needed, their explanations at times miss the mark on why the principles were added. For example, in the case of accountability, the guidelines state that the ‘data fiduciary’ shall be accountable for complying with measures based on the guiding principles. However, they do not specify to whom the fiduciaries would be accountable or what the steps are to ensure accountability. Similarly, in the case of openness and transparency, the guidelines state that the policies and practices relating to the management of personal data will be available to all stakeholders. However, openness and transparency need to go beyond policies and practices and should consider other aspects of openness, including open data and the use of open-source software and open standards. This would also add to transparency by spelling out the rights of the data principal, whereas the current draft views those rights merely from a privacy perspective. In the case of purpose limitation as well, the guidelines are tied to the privacy notice, which again puts the burden on the individual (in this case, the beneficiary) when the onus should actually be on the data fiduciary. Lastly, under the empowerment of beneficiaries, the guidelines state that the “data principal shall be able to seek correction, amendments, or deletion of such data where it is inaccurate”. The right to deletion should not be conditional on inaccuracy, especially when entering the scheme is optional and consent-based.
Data sharing with third parties without adequate safeguards
The guidelines outline certain cases where personal data can be collected, used, or disclosed without the consent of the individual. One of these cases is when the data is anonymised. However, the guidelines do not detail how this anonymisation would be achieved and ensured through the life cycle of the data, especially when the clause states that the data will also be collected without consent. The guidelines also state that the anonymised data could be used for public health management, clinical research, or academic research. The guidelines should have limited the scope of academic research or added criteria for gaining access to the data; the use of vague terminology could lead to this data (sometimes collected without consent) being de-anonymised or used for studies that could cause harm to the data principal or even a particular community. The guidelines state that the data can be shared as ‘protected health information’ with a government agency for oversight activities authorised by law, for epidemic control, or in response to court orders. When such data is shared, care should be taken to ensure data minimisation and purpose limitation that go beyond the explanations added in the body of the guidelines. In addition, the guidelines introduce the concept of a ‘clean room’, defined as “a secure sandboxed area with access controls, where aggregated and anonymised or de-identified data may be shared for the purposes of developing inference or training models”. The definition does not state who will be developing these training models; it would be a cause for worry if AI companies or even insurance companies could use this data to train models that eventually make decisions based on the results. The term ‘sandbox’ is explained under the now revoked DP Bill, 2021 as “such live testing of new products or services in a controlled or test regulatory environment for which the Authority may or may not permit certain regulatory relaxations for a specified period for the limited purpose of the testing”. Neither the 2019 Bill nor the IT Act and Rules defines ‘sandbox’; the guidelines should ideally have spent more time explaining how the sandbox system in the ‘clean room’ would work.
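To make the idea of releasing only aggregated, de-identified data more concrete, the following minimal sketch (in Python) shows one way such a pre-release step inside a clean room could work. It is purely illustrative: the field names, the suppression threshold, and the grouping logic are assumptions made for this example, not requirements drawn from the guidelines.

from collections import Counter

def aggregate_for_clean_room(records, k=5):
    # Group beneficiary records by coarse attributes and release only counts,
    # suppressing any group smaller than k to reduce re-identification risk.
    counts = Counter(
        (r["district"], r["age_band"], r["diagnosis_code"]) for r in records
    )
    return [
        {"district": d, "age_band": a, "diagnosis_code": c, "count": n}
        for (d, a, c), n in counts.items()
        if n >= k
    ]

# Hypothetical input: individual-level records stay inside the clean room;
# only the aggregated, thresholded output would be exposed for model building.
records = [
    {"district": "Pune", "age_band": "40-49", "diagnosis_code": "E11"},
    {"district": "Pune", "age_band": "40-49", "diagnosis_code": "E11"},
]
print(aggregate_for_clean_room(records, k=2))

Even a simple thresholding step of this kind illustrates why the guidelines need to specify who performs the aggregation, what thresholds apply, and who audits the outputs before models are trained on the data.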
Conclusion
The draft Data Sharing Guidelines are a welcome step in ensuring that the entities sharing and processing data have guidelines to adhere to, especially since the Data Protection Bill has not yet been passed. The mention of best practices for data sharing in the annexures, including practices for people who have access to the data, is a step in the right direction, which could be strengthened with regular training and sensitisation. While the guidelines are a good starting point, they still suffer from the issues that have been highlighted in similar health data policies, including the failure to refer to older policies, the addition of new entities, and the reliance on digital and mobile technology. The guidelines could also have added more nuance to the consent and privacy-by-design sections to provide for other forms of notice, e.g., notice in audio form in different Indian languages. While PM-JAY aims to reach 10 crore poor and vulnerable families, there is a need to examine how to ensure that consent is, as the guidelines require, “free, informed, clear, and specific”.
Getting the (Digital) Indo-Pacific Economic Framework Right
The article was originally published in Directions on 16 September 2022.
It is still early days. Given the broad and noncommittal scope of the economic arrangement, it is unlikely that the IPEF will lead to a trade deal among members in the short run. Instead, experts believe that this new arrangement is designed to serve as a ‘framework or starting point’ for members to cooperate on geo-economic issues relevant to the Indo-Pacific, buoyed in no small part by the United States’ desire to make up lost ground and counter Chinese economic influence in the region.
United States Trade Representative (USTR) Katherine Tai has underscored the relevance of the Indo-Pacific digital economy to the US agenda with the IPEF. She has emphasized the importance of collaboratively addressing key connectivity and technology challenges, including standards on cross-border data flows, data localisation and online privacy, as well as the discriminatory and unethical use of artificial intelligence. This is an ambitious agenda given the divergence among members in terms of technological advancement, domestic policy preferences and international negotiating stances at digital trade forums. There is a significant risk that imposing external standards or values on this evolving and politically-contested digital economy landscape will not work, and may even undermine the core potential of the IPEF in the Indo-Pacific. This post evaluates the domestic policy preferences and strategic interests of the Framework’s member states, and how the IPEF can navigate key points of divergence in order to achieve meaningful outcomes.
State of domestic digital policy among IPEF members
Data localisation is a core point of divergence in global digital policymaking. It continues to dominate discourse and trigger dissent at all international trade forums, including the World Trade Organization. IPEF members have a range of domestic mandates restricting cross-border flows, which vary in scope, format and rigidity (see table below). Most countries only have a conditional data localisation requirement, meaning data can only be transferred to countries where it is accorded an equivalent level of protection – unless the individual whose data is being transferred consents to said transfer. Australia and the United States have sectoral localisation requirements for health and defence data respectively. India presently has multiple sectoral data localisation requirements. In particular, a 2018 Reserve Bank of India (RBI) directive imposed strict local storage requirements along with a 24-hour window for foreign processing of payments data generated in India. The RBI imposed a moratorium on the issuance of new cards by several US-based card companies until compliance issues with the data localisation directive were resolved. Furthermore, several iterations of India’s recently withdrawn Personal Data Protection Bill contained localisation requirements for some categories of personal data.
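The conditional transfer rule described above reduces, in its simplest form, to a two-part test. The following Python fragment is a purely illustrative sketch of that logic; the function name, inputs, and adequacy list are hypothetical stand-ins for what, in practice, are detailed legal tests that vary by country and by sector.

ADEQUATE_JURISDICTIONS = {"NZ", "JP"}  # hypothetical adequacy list, for illustration only

def transfer_permitted(destination, data_subject_consents):
    # Conditional localisation: allow the transfer if the destination offers an
    # equivalent level of protection, or if the individual has consented to it.
    return destination in ADEQUATE_JURISDICTIONS or data_subject_consents

print(transfer_permitted("US", data_subject_consents=True))   # True: consent overrides
print(transfer_permitted("US", data_subject_consents=False))  # False: neither condition met

Sectoral mandates, such as the RBI's payments data directive, add further conditions (local storage, time-bound foreign processing) that a real compliance check would layer on top of this basic test.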
Indonesia and Vietnam have diluted the scopes of their data localisation mandates to apply, respectively, only to companies providing public services and to companies not complying with other local laws. These dilutions may have occurred in response to concerted pushback from foreign technology companies operating in these countries. In addition to sectoral restrictions on the transfer of geospatial data, South Korea retains several procedural checks on cross-border flows, including formalities regarding providing notice to individual users.
Moving on to another issue flagged by USTR Tai: while all IPEF members recognise the right to information privacy at an overarching or constitutional level, the legal and policy contours of data protection are at different stages of evolution in different countries. Japan, South Korea, Malaysia, New Zealand, the Philippines, Singapore and Thailand have data protection frameworks in place. Data protection frameworks in India and Brunei are under consultation. Notably, the US does not have a comprehensive federal framework on data privacy, although there is a patchwork of data privacy regulations at both the federal and state levels.
Regulation and strategic thinking on artificial intelligence (AI) are also at varying levels of development among IPEF members. India has produced a slew of policy papers on Responsible Artificial Intelligence. The most recent policy paper published by NITI Aayog (the Indian government’s think tank) refers to constitutional values and endorses a risk-based approach to AI regulation, much like that adopted by the EU. The US National Security Commission on Artificial Intelligence (NSCAI), chaired by former Google CEO Eric Schmidt, expressed concerns about the US ceding AI leadership to China. The NSCAI’s final report emphasised the need for US leadership of a ‘coalition of democracies’ as an alternative to China’s autocratic and control-oriented model. Singapore has also made key strides on trusted AI, launching A.I. Verify, the world’s first AI governance testing framework for companies that wish to demonstrate their use of responsible AI through a minimum verifiable product.
IPEF and pipe dreams of digital trade
Some members of the IPEF are signatories to other regional trade agreements. With the exception of Fiji, India and the US, all the IPEF countries are members of the Regional Comprehensive Economic Partnership (RCEP), which also includes China. Five IPEF member countries are also parties to the Comprehensive and Progressive Agreement for Trans-Pacific Partnership (CPTPP), the successor to the Trans-Pacific Partnership from which President Trump withdrew in 2017. Several IPEF members also have bilateral or plurilateral trading agreements of their own, an example being the Digital Economy Partnership Agreement (DEPA) between Singapore, New Zealand and Chile.
All these ‘mega-regional’ trading agreements contain provisions on data flows, including prohibitions on domestic legal provisions that mandate local computing facilities or restrict cross-border data transfers. Notably, these agreements also incorporate exceptions to these rules. The CPTPP includes within its ambit an exception on the grounds of ‘legitimate public policy objectives’ of the member, while the RCEP incorporates an additional exception for ‘essential security interests’.
IPEF members are also spearheading multilateral efforts related to the digital economy: Australia, Japan and Singapore are working as convenors of the plurilateral Joint Statement Initiative (JSI) at the World Trade Organization (WTO), which counts 86 WTO members as parties. India (along with South Africa) vehemently opposes this plurilateral push on the grounds that the WTO is a multilateral forum functioning on consensus, and that a plurilateral trade agreement should not be negotiated under the aegis of the WTO. They fear, rightly, that such gambits close off domestic policy space, especially for evolving digital economy regimes where keen debate and contestation exist among domestic stakeholders. While wary of the implications of the JSI, other IPEF members, such as Indonesia, have cautiously joined the initiative to ensure that they have a voice at the table.
It is unlikely that the IPEF will lead to a digital trade arrangement in the short run. Policymaking on issues as complex as the digital economy, which must respond to specific social, economic and (geo)political realities, cannot be steamrolled through external trade agreements. For instance, after the Los Angeles Ministerial, India opted out of the IPEF trade pillar, citing both its evolving domestic legislative framework on data and privacy and a broader lack of consensus among IPEF members on several issues, including digital trade. Commerce Minister Piyush Goyal explained that India would wait for the “final contours” of the digital trade track to emerge before making any commitments.
Besides, brokering a trade agreement through the IPEF runs a risk of redundancy. Already, there exists a ‘spaghetti bowl’ of regional trading agreements that IPEF members can choose from, in addition to forming bilateral trade ties with each other.
This is why Washington has been clear about calling the IPEF an ‘economic arrangement’ and not a trade agreement. Membership does not imply any legal obligations. Rather than duplicating ongoing efforts or setting unrealistic targets, the IPEF is an opportunity for all players to shape conversations, share best practices and reach compromises, which could feed back into ongoing efforts to negotiate trade deals. For example, several members of RCEP have domestic data localisation mandates that do not violate trade deals because the agreement carves out exceptions that legitimise domestic policy decisions. Exchanges on how these exceptions work in future trade agreements could be a part of the IPEF arrangement and nudge states towards framing digital trade negotiations through other channels, including at the WTO. Furthermore, states like Singapore that have launched AI self-governance mechanisms could share best practices on how these mechanisms were developed as well as evaluations of how they have helped policy goals be met. And these exchanges shouldn’t be limited to existing IPEF members. If the forum works well, countries that share strategic interests in the region with IPEF members, including, most notably, the European Union, may also want to get involved and further develop partnerships in the region.
Countering China
Talking shop on digital trade should certainly not be the only objective of the IPEF. The US has made it clear that they want the message emanating from the IPEF ‘to be heard in Beijing’. Indeed, the IPEF offers an opportunity for the reassertion of US economic interests in a region where President Trump’s withdrawal from the CPTPP has left a vacuum for China to fill. Accordingly, it is no surprise that the IPEF has representation from several regions of the Indo-Pacific: South Asia, Southeast Asia and the Pacific.
This should be an urgent policy priority for all IPEF members. Since its initial announcement in 2015, the Digital Silk Road (DSR), the digital arm of China’s Belt and Road Initiative, has spearheaded massive investments by the Chinese private sector (allegedly under close control of the Chinese state) in e-commerce, fintech, smart cities, data centres, fibre optic cables and telecom networks. This expansion has also happened in the Indo-Pacific, unhampered by China’s aggressive geopolitical posturing in the region through maritime land grabs in the South China Sea. With the exception of Vietnam, which remains wary of China’s economic expansionism, countries in Southeast Asia welcome Chinese investments, extolling their developmental benefits. Several IPEF members – including Indonesia, Malaysia and Singapore – have associations with Chinese private sector companies, predominantly Huawei and ZTE. A study evaluating Indonesia’s response to such investments indicates that while they are aware of the risks posed by Chinese infrastructure, their calculus remains unaltered: development and capacity building remain their primary focuses. Furthermore, on the specific question of surveillance, given evidence of other countries such as the US and Australia also using digital infrastructure for surveillance, the threat from China is not perceived as a unique risk.
Setting expectations and approaches
Still, the risks of excessive dependence on one country for the development of digital infrastructure are well known. While the IPEF cannot realistically expect to displace the DSR, it can be utilised to provide countries with alternatives. This can only be done by issuing carrots rather than sticks. A US narrative extolling ‘digital democracy’ is unlikely to gain traction in a region characterised by a diversity of political systems that is focused on economic and development needs. At the same time, an excessive focus on thorny domestic policy issues – such as data localisation and the pipe dream of yet another mega-regional trade deal – could risk derailing the geo-economic benefits of the IPEF.
Instead, the IPEF must focus on capacity building, training and private sector investment in infrastructure across the Indo-Pacific. The US must position itself as a geopolitically reliable ally, interested in the overall stability of the digital Indo-Pacific, beyond its own economic or policy preferences. This applies equally to other external actors, like the EU, who may be interested in engaging with or shaping the digital economic landscape in the Indo-Pacific.
Countering Chinese economic influence and complementing security agendas set through other fora – such as the Quadrilateral Security Dialogue – should be the primary objective of the IPEF. It is crucial that unrealistic ambitions seeking convergence on values or domestic policy do not undermine strategic interests and dilute the immense potential of the IPEF in catalysing a more competitive and secure digital Indo-Pacific.
Table: Domestic policy positions on data localisation and data protection
Demystifying Data Breaches in India
Edited by Arindrajit Basu and Saumyaa Naidu
India saw a 62% drop in data breaches in the first quarter of 2022. Yet it ranked fifth on the list of countries most hit by cyberattacks, according to a 2022 report by Surfshark, a Netherlands-based VPN company. Another report on the cost of data breaches, researched by the Ponemon Institute and published by IBM, found that breaches in India, involving about 29,500 records on average between March 2021 and March 2022, pushed the average cost of a breach from INR 165 million in 2021 to INR 176 million in 2022.
These statistics are certainly a cause for concern, especially in the context of India’s rapidly burgeoning digital economy shaped by the pervasive platformization of private and public services such as welfare, banking, finance, health, and shopping among others. Despite the rate at which data breaches occur and are reported in the media, there seems to be little information about how and when they are resolved. This post examines the discourse on data breaches in India with respect to their historical forms, with a focus on how the specific terminology to describe data security incidents has evolved in mainstream news media reportage.
While expert articulations of cybersecurity in general and data breaches in particular tend to dominate the public discourse on data privacy, this post aims to situate broader understandings of data breaches within the historical context of India’s IT revolution and to delve into specific concepts and terminology that have shaped the broader discourse on data protection. The late 1990s and early 2000s offer a useful point of entry into the genesis of the data security landscape in India.
Data Breaches and their Predecessor Forms
The articulation of data security concerns around the late 1990s and early 2000s isn’t always consistent in deploying the phrase ‘data breach’ to signal cybersecurity concerns in India. Terms such as ‘data/identity theft’ and ‘data leak’ figure prominently in the public articulation of concerns with the handling of personal information by IT systems, particularly in the context of business process outsourcing (BPO) and e-commerce activities. Other pertinent terms such as “security breach”, “data security”, and “cyberfraud” also capture the specificity of growing concerns around data outsourced to India. At the time, i.e., around the mid-2000s, regulatory frameworks were still evolving to accommodate and address the complexities arising from a dynamic reconfiguration of the telecommunications and IT landscape in India.
Some of the formative cases that instantiate the usage of the aforementioned terms are instructive for understanding shifts in the reporting of such incidents over time. The earliest during that period was a 2002 case concerning the theft and attempted sale of source code by an IIT Kharagpur student, who offered the code to two undercover FBI agents working with the CBI to catch the thief. This straightforward case of data theft was framed by media stories at the time as a cybercrime involving the illegal sale of the source code of a software package, as theft of software intellectual property in the context of outsourcing, and as an instance of industrial espionage in poor nations without laws protecting foreign companies. The case became the basis of the earliest calls for the protection of data privacy and security in the context of the Indian BPO sector. The Indian IT Act, 2000 at the time only covered unauthorised access and data theft from computers and networks, without any provisions for data protection, interception or computer forgery. The BPO boom in India brought with it employment opportunities for India’s English-speaking, educated youth, but in the absence of concrete data privacy legislation, the country was regarded as an unsafe destination for outsourcing, aside from the political ramifications concerning the loss of American jobs.
In a major 2005 incident, employees of the Mphasis BFL call centre in Pune extracted sensitive bank account information of Citibank’s American customers and diverted INR 1.90 crore into new accounts set up in India. Media coverage variously called the incident India’s first outsourcing cyberfraud and a well-planned scam, a cybercrime in a globalised world, a case of financial fraud that required no hacking skills, and a case of data theft and misuse. Within the ambit of cybercrime, media reports of these incidents refer to them as cases of “fraud”, “scam” and “theft”.
Two other incidents in 2005 set the trend for a critical spotlight on data security practices in India. In a June 2005 incident, an employee of a Delhi-based BPO firm, Infinity e-systems, sold the account numbers and passwords of 1,000 bank customers to the British tabloid The Sun. The Indian newspaper Telegraph India carried an online story headlined “BPO Blot in British Backlash: Indian Sells Secret Data”, which reported that the employee, Karan Bahree, 24, was set up by a British journalist, Oliver Harvey. Harvey filmed Bahree accepting wads of cash for the stolen data. Bahree’s theft of sensitive information is described both as a data fraud and a leak in a 2005 BBC story by Soutik Biswas. Another story on the incident calls it a “scam” involving the leakage of credit card information. The use of the term ‘leak’ appears consistently across other media accounts, such as a 2005 story on Karan Bahree in the Times of India and another story in the Economic Times about the Australian Broadcasting Corporation’s (ABC) sting operation, similar to the one in Delhi, which described the scam by the fraudsters as a leak of the online information of Australians. Another media account describes the incident in more generic terms as an “outsourcing crime”.
The other case concerned four former employees of Parsec Technologies, who stole classified information and diverted calls from potential customers, causing a sudden drop in the productivity of call centres managed by the company in November 2005. Another call centre fraud came to light in 2009 through a BBC sting operation, in which British reporters went to Delhi and secretly filmed a deal with a man selling credit card and debit card details obtained from call centres working for Symantec, the maker of Norton software. This BBC story uses the term “breach” to refer to the incident.
In the broader framing of these cases, generally understood as cybercrime and covered by transnational media, the terms “fraud”, “leak”, “scam”, and “theft” appear interchangeably. The term “data breach” does not seem to have been popular or common in these media accounts of the BPO-related incidents. A broader sense of breach (of confidentiality or privacy) figures in the media reportage in implicitly racial terms of cultural trust, as a matter of ethics and professionalism, and, in some cases, in the language of scandal.
These early cases typify a specific kind of cybercrime concerning the theft or misappropriation of outsourced personal data belonging to British or American residents. What is remarkable about these cases is the utmost sensitivity of the stolen personal information, including financial details, bank account and credit/debit card numbers, passwords, and, in one case, source code. While these cases rang alarm bells about the Indian BPO sector’s data security protocols, they also directed attention to concerns around training Indian employees on the ethics of data confidentiality and vetting them through psychometric tests for character assessment. In the wake of these incidents, the National Association of Software and Service Companies (NASSCOM), an Indian non-governmental trade and advocacy group, launched a National Skills Registry for IT professionals in 2006 to enable employers to conduct background checks.
These data theft incidents earned India a global reputation as an unsafe destination for business process outsourcing, seen as lacking both a culture of maintaining data confidentiality and concrete legislation for data protection at the time. Importantly, the incidents of data theft or misappropriation were also traceable to a known source, a BPO employee or a group of malefactors, who often sold sensitive data belonging to foreign nationals to others in India.
The phrase “data leak” also caught on in another register, in the context of the widespread use of camera-equipped mobile phones in India. The 2004 Delhi MMS case offers an instance of a data leak, recapitulating the language of scandal in moralistic terms.
The Delhi MMS Case
The infamous 2004 incident involved two underage Delhi Public School (DPS) students who recorded themselves in a sexually explicit act on a cellular phone. After a falling out, the male student passed on the low-resolution clip, in which the female student’s face is visible, to a friend. The clip, distributed far and wide in India, ended up listed for sale on the well-known e-shopping and auction website baazee.com, leading to the arrest of the website’s CEO, Avnish Bajaj, for hosting the listing. Another similar case in 2004 mimicked the mechanics of visual capture through hand-held MMS-enabled mobile phones: a two-minute MMS of a top South Indian actress taking a shower went viral on the Internet that year, when another MMS of two prominent Bollywood actors kissing had already done the rounds. The MMS case also marked the onset of a national moral panic around the amateur uses of mobile phone technologies, capable of corrupting young Indian minds under a sneaky regime of new media modernity. The case, not strictly a classic data breach, which typically involves non-visual information stored in databases, became an iconic instance of a data leak, framed in the media as a scandal that shocked the country, with calls for the regulation of mobile phone use in schools. The case continued its scandalous afterlife in a 2009 Bollywood film, Dev D, and the 2010 film Love Sex Aur Dhokha.
Taken together, the BPO data thefts and frauds and the data leak scandals prefigure the contemporary discourse on data breaches in the second decade of the 21st century, or what may also be called the Decade of Datafication. The launch of the Indian biometric identity project, Aadhaar, in 2009, which linked access to public services and welfare delivery with biometric identification, resulted in large-scale data collection of the scheme’s subscribers. Such linking raised the spectre of state surveillance as alleged by the critics of Aadhaar, marking a watershed moment in the discourse on data privacy and protection.
Aadhaar Data Security and Other Data Breaches
Aadhaar was challenged in the Indian Supreme Court in 2012 when it was made mandatory for welfare and other services such as banking, taxation and mobile telephony. The national debate on the status of privacy as a cultural practice in Indian society and as a fundamental right under the Indian Constitution led to two landmark judgments: the 2017 Puttaswamy ruling, which held privacy to be a constitutional right subject to limitations, and the 2018 Supreme Court judgment, which held mandatory Aadhaar to be constitutional only for welfare and taxation but not for other services.
While these judgments sought to rein in Aadhaar’s proliferating mandatory uses, biometric verification remained the most common mode of identity authentication with most organizations claiming it to be mandatory for various purposes. During the same period from 2010 onwards, a range of data security events concerning Aadhaar came to light. These included app-based flaws, government websites publishing Aadhaar details of subscribers, third party leaks of demographic data, duplicate and forged Aadhaar cards and other misuses.
In 2015, the Indian government launched its ambitious Digital India campaign to provide government services to Indian citizens through online platforms. Yet data security incidents continued to increase, particularly the trade in sensitive financial information related to bank accounts and credit card numbers. The online availability of a rich trove of data, accessible via a simple Google search without any extractive software or hacking skills, within a thriving shadow economy of data buyers and sellers, makes India a particularly vulnerable digital economy, especially in the absence of robust legislation. The lack of awareness around digital crimes and low digital literacy further exacerbate the situation, given that datafication via government portals, e-commerce, and online apps has outpaced the enforcement of legislative frameworks for data protection and cybersecurity.
In the context of Aadhaar data security issues, the term “data leak” has more traction in media stories, followed by the term “security breach”. Given the complexity of the myriad ways in which Aadhaar data has been compromised, terms such as data leak and exposure (of 11 crore Indian farmers’ sensitive information, for instance) add to the specificity of the data security compromise. The term “fraud” also makes a comeback in the context of Aadhaar-related data security incidents. These cases represent a mix of data frauds involving fake identities, theft of thumb prints (for instance, from land registries), and inadvertent data leaks in numerous incidents, involving government employees in Jharkhand, voter ID information of citizens in Andhra Pradesh and Telangana, and activist reports of Indian government websites leaking Aadhaar data.
Aadhaar-related data security events parallel the increase in corporate data breaches during the decade of datafication. The term “data leak” again alternates with the term “data breach” in most media accounts while other terms such as “theft” and “scam” all but disappear in the media coverage of corporate data breaches.
From 2016 onwards, incidents of corporate data breaches in India continued to rise. A massive debit card breach involving YES Bank ATMs and point-of-sale (PoS) machines, compromised through malware between May and July 2016, resulted in the exposure of ATM PINs and non-personal identifiable information of customers; it went undetected for nearly three months. Another data leak, in 2018, concerned a system run by Indane, a state-owned utility company, which allowed anyone to download private information on Aadhaar holders, including their names, the services they were connected to, and their unique 12-digit Aadhaar numbers. Data breaches continued to be reported in India concurrently with incidents of data mismanagement related to Aadhaar. Prominent breaches between 2019 and 2021 included a cyberattack on the systems of airline data service provider SITA, resulting in the leak of Air India passenger data; the leakage of the personal details of Common Admission Test (CAT) applicants; the appearance of Domino’s pizza customers’ credit card details and order preferences on the dark web; COVID-19 patients’ test results exposed by government websites; user data of Juspay and BigBasket put up for sale on the dark web; and an SBI data breach, among others.
The media reportage of these data breaches uses the term “cyberattack” to describe the activities of hackers and cybercriminals operating within a shadow economy or on the dark web. Recent examples of cyberattacks in which hackers leaked user data for sale on the dark web include 8.2 terabytes of sensitive financial data (KYC details, Aadhaar numbers, credit/debit cards and phone numbers) belonging to 110 million users of the payments app MobiKwik, 180 million Domino’s pizza orders (names, locations, emails, mobile numbers), and the data of users of Flipkart-owned Cleartrip. In these incidents again, three terms appear prominently in the media reportage: cyberattack, data breach, and leak. “Data breach” remains the most frequently used label in media coverage of data security lapses; while it alternates with “leak” in the body of stories, “data breach” appears consistently across most headlines.
The exposure of sensitive, personal, and non-personal data by public and private entities in India is certainly a cause for concern, given the ongoing data protection legislative vacuum.
The media coverage of data breaches tends to emphasise the quantum of compromised user data alongside the types of data exposed. Framing these breaches in quantitative terms of financial loss, magnitude and the number of breaches certainly highlights their gravity, but the harm to individual users is often not addressed.
Evolving Terminology and the Source of Data Harms
The main difference in the media reportage of the BPO cybersecurity incidents during the early aughts and the contemporary context of datafication is the usage of the term, “data breach”, which figures prominently in contemporary reportage of data security incidents but not so much in the BPO-related cybercrimes.
The BPO incidents of data theft and the attendant fraud must be understood in the context of the anxieties brought on by a globalising world of Internet-enabled systems and transnational communications. In most of these incidents, regarded as cybercrimes, the language of fraud and scam goes further, attributing the illegal actions of identifiable malefactors to cultural factors such as a lack of ethics and professionalism. The usage of the term “data leak” in these media reports functions more specifically to underscore a broader lapse in data security as well as a lack of robust cybersecurity laws. The broader term “breach” is occasionally used to refer to these incidents, but the term “data breach” does not appear as such.
The term “data breach” gains more prominence in media accounts from 2009 onwards in the context of Aadhaar and the online delivery of goods and services by public and private players. The term “data breach” is often used interchangeably with the term “leak” within the broader ambit of cyberattacks in the corporate sector. The media reportage frames Aadhaar-related security lapses as instances of security/data breaches, data leaks, fraud, and occasionally scam.
In contrast to the handful of data security cases in the BPO sector, data breaches have abounded in the second decade of the twenty-first century. What further differentiates the BPO-related incidents from contemporary data breaches is the source of the data security lapse. Most corporate data breaches are attributable to the actions of hackers and cybercriminals, whereas the BPO security lapses were traceable to ex-employees or insiders with access to sensitive data. We also see, in the coverage of the BPO-related incidents, the attribution of such lapses to cultural factors, including a lack of ethics and professionalism, often in racial overtones. The media reportage of the BBC and ABC sting operations suggests that the Indian BPOs’ lack of preparedness to handle and maintain the confidentiality of foreigners’ personal data points to the absence of a privacy culture in India. Interestingly, this transnational attribution recurs in a different form in the national debate on Aadhaar and the claim that Indians don’t care about their privacy.
The question of the harms of data breaches to individuals is also an important one. In the discourse on contemporary data breaches, the actual material harm to an individual user is rarely established in media reportage; it is generally framed as potential harm that could be devastating given the sensitivity of the compromised data. The harm is reported predominantly as a function of organisational cybersecurity weakness or attributed to hackers and cybercriminals.
The reporting of harm in collective terms of the number of accounts breached, financial costs of a data breach, the sheer number of breaches and the global rankings of countries with the highest reported cases certainly suggests a problem with cybersecurity and the lack of organizational preparedness. However, this collective framing of a data breach’s impact usually elides an individual user’s experience of harm. Even in the case of Aadhaar-related breaches - a mix of leaking data on government websites and other online portals and breaches - the notion of harm owing to exposed data isn’t clearly established. This is, however, different from the extensively documented cases of Aadhaar-related issues in which welfare benefits have been denied, identities stolen and legitimate beneficiaries erased from the system due to technological errors.
Future Directions of Research
This brief, qualitative foray into the media coverage of data breaches over two decades has aimed to trace the usage of various terms in two different contexts: the Indian BPO-related incidents and the contemporary context of datafication. It would be worth exploring at length the relationship between frequent reports of data breaches and the language used to convey harm in the continuing absence of concrete data protection legislation. It would also be instructive to examine more exhaustively the specific uses of terms such as “fraud”, “leak”, “scam”, “theft” and “breach” in media reporting of data security incidents. Such analysis would elucidate how media reportage shapes public perception of the safety of user data and the anticipation of attendant harm as data protection legislation continues to evolve.
Especially with Aadhaar, which represents a paradigm shift in identity verification through digital means, it would be useful to conduct a sentiment analysis of how biometric identity-related frauds, scams, and leaks are reported by the mainstream news media. A study of user attitudes and behaviours in response to the specific terminology of data security lapses, such as the terms “breach”, “leak”, “fraud”, “scam”, “cybercrime”, and “cyberattack”, would further illuminate how lay users understand the gravity of a data security lapse. Such research would go beyond the expert understandings of data security incidents that tend to dominate media reportage, elucidate the concerns of lay users, and further clarify the cultural meanings of data privacy.
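As a purely illustrative sketch of the kind of terminology analysis proposed above, the following Python fragment counts how often each data-security term appears in a set of headlines. The headlines and term list here are invented for demonstration; an actual study would work with a systematically collected news corpus and more careful matching (stemming, phrase matching, deduplication).

import re
from collections import Counter

TERMS = ["breach", "leak", "fraud", "scam", "theft", "cyberattack", "cybercrime"]

def term_frequencies(headlines, terms=TERMS):
    # Tokenise each headline and count occurrences of each term of interest.
    counts = Counter({term: 0 for term in terms})
    for headline in headlines:
        tokens = re.findall(r"[a-z]+", headline.lower())
        for term in terms:
            counts[term] += tokens.count(term)
    return counts

sample_headlines = [  # invented examples, not real headlines
    "Massive data breach hits payments app users",
    "Aadhaar details leak from state government website",
    "Call centre fraud exposes bank customers to scam",
]
print(term_frequencies(sample_headlines))

Extending such counts over time and across outlets, and pairing them with sentiment analysis, would make it possible to chart how the vocabulary of data security lapses has shifted, for instance from “fraud” and “scam” towards “breach” and “cyberattack”.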
‘Techplomacy’ and the negotiation of AI standards for the Indo-Pacific
This is a modified version of the post that appeared in The Strategist
By Arindrajit Basu with inputs from and review by Amrita Sengupta and Isha Suri
UN member states recently elected the American candidate Doreen Bogdan-Martin as the next secretary-general of the International Telecommunication Union (ITU), in what has been described as “the most important election you have never heard of”. While this technical body’s work may be esoteric, the election was fiercely contested by a Russian candidate (and former Huawei executive), aptly reflecting the geopolitical competition underway in determining the “future of the internet” through the technical standards that underpin it. The Internet Protocol (IP), the set of rules governing the communication and exchange of data over the internet, is itself being subjected to political contestation between a Sino-Russian vision that would give governments greater control over the standard and a US vision ostensibly rooted in more inclusive multi-stakeholder participation.
As critical and emerging technologies take geopolitical centre-stage, the global tug of war over their development, utilisation, and deployment is playing out most ferociously at standard-setting organisations, an arm’s length away from the media limelight. Powerful state and non-state actors alike are already seeking to shape standards in ways that suit their economic, political, and normative priorities. It is time for emerging economies, middle powers and a wider array of private actors and members of civil society to play a more meaningful and tangible role in the process.
What are standards and why do they matter
Simply put, standards are blueprints or protocols containing requirements that ‘standardise’ products and related processes around the world, ensuring that they are interoperable, safe and sustainable. For example, USB, WiFi or a QWERTY keyboard can be used anywhere because equipment built to these technical standards works the same way everywhere. Standards are negotiated both domestically, at national standard-setting bodies such as the Bureau of Indian Standards (BIS) or Standards Australia (SA), and internationally, at global standards development organisations such as the International Telecommunication Union (ITU) or the International Organization for Standardization (ISO). While standards are not legally binding unless explicitly imposed as requirements in legislation, they have immense coercive value. Not adhering to recognised standards means that certain products may not reach markets, because they are not compatible with consumer requirements or cannot claim to meet health or safety expectations. The harmonisation of internationally recognised standards serves as the bedrock of global trade and commerce. Complying with a global standard is particularly critical because of its applicability across several markets. Further, international trade law provides that World Trade Organisation (WTO) members may impose trade-restrictive domestic measures only on the basis of published or soon-to-be-published international standards (Article 2.4 of the Agreement on Technical Barriers to Trade).
Shaping global standards is of immense geopolitical and economic value to states and the private sector alike. States that are able to ‘export’ their domestic technological standards internationally give their companies a significant economic advantage, since it is cheaper for those companies to comply with global standards they already meet at home. Further, companies draw huge revenue by holding patents on technologies that are essential for complying with a given standard (popularly known as standard-essential patents, or SEPs) and licensing them to other players who want to enter the market. For context, IPlytics estimated that cumulative global royalty income from licensing SEPs was USD 20 billion in 2020, and anticipated that this would increase significantly in the coming years due to the massive technological upgradation currently underway.
China’s push to influence the 5G standard at the Third Generation Partnership Project (3GPP) illustrates how prioritising standard-setting through both domestic industrial policy and foreign policy can yield rich economic and geopolitical dividends. After failing to meaningfully influence the setting of the 3G and 4G standards, the Chinese government commenced a national effort to harmonise domestic standards, improve government coordination of standard-setting efforts, and obtain a first-mover advantage over other nations developing their own domestic 5G standards. This was combined with a diplomatic push that saw vigorous private sector participation (Huawei put in 20 5G-related proposals, whereas Ericsson and Nokia put in just 16 and 10 respectively); packing key leadership positions in working groups with representatives from Chinese companies and institutions; and ensuring that all Chinese participants voted in unison on any proposal. It is no surprise, therefore, that Chinese companies now lead the way on 5G, with Huawei owning the largest number of 5G patents and having finalised more 5G contracts than any other company despite the restrictions placed on Huawei’s gear by some countries. As detailed in its national standardisation strategy, China will now actively apply this winning formula to other standard-setting avenues as well.
Standards for Artificial Intelligence
A number of institutions, including private actors such as Huawei and CloudWalk, have contributed to China’s 2018 AI standardisation white paper, which was revised and updated in 2021. The white paper maps the work of SDOs in the field of AI standards and outlines a number of recommendations on how Chinese actors can use global SDOs to boost industrial competitiveness and promote “Chinese wisdom” globally. While there are cursory references to the role of standards in furthering “ethics” and “privacy”, the document does not outline how China will seek to promote these values at SDOs.
Artificial Intelligence (AI) is a general-purpose technology with a wide variety of outcomes and use cases. Top-down regulation of AI by governments is emerging across jurisdictions, but it may not keep pace with the rapidly evolving technology being developed by the private sector or adequately account for the diversity of use cases. On the other hand, private sector-driven self-regulatory initiatives focussing on ‘ethical AI’ are very broad and give technology companies too much leeway to evade the law. Technical standards offer a middle ground where multiple stakeholders can come together to devise uniform requirements for the various stages of the AI development lifecycle. Of course, technical standards must co-exist with government-driven regulation as well as self-regulatory codes to holistically govern the deployment of AI globally. However, while the first two modes of regulation have received plenty of attention from policymakers and scholars alike, AI standard-setting is an emerging field that has yet to be concretely evaluated from a strategic and diplomatic perspective.
Introducing a new CIS-ASPI project
This is why researchers at the Australian Strategic Policy Institute have partnered with the Centre for Internet and Society (Bengaluru) to produce a ‘techplomacy guide’ on negotiating AI standards for stakeholders in the Indo-Pacific. Given the immense economic value of shaping global technical standards, it is imperative that SDOs not be dominated only by the likes of the US, Europe or China. Standards that are likely to affect a majority of nations, if devised from the vantage point of only a few countries, may be agnostic to the context and needs of emerging economies. Further, there are values at stake here. An excessive focus on the security, accuracy or quality of AI-driven products may make some technologies palatable across the world even if they undermine core democratic values such as privacy and anti-discrimination. China’s efforts at shaping Facial Recognition Technology (FRT) standards at the ITU have been criticised for moving beyond mere technical specifications into the domain of policy recommendations, despite the lack of representation of experts on human rights, consumer protection or data protection at the ITU. Accordingly, diversity of representation at SDOs, in terms of expertise, gender, and nationality, including in leadership positions, is an aspect our project will explore with an eye towards creating more inclusive participation.
Through this project, we hope to identify how key stakeholders drive these initiatives and how technological standards can be devised in line with both core democratic values and strategic priorities. Through extensive consultations with several stakeholder groups, we plan to offer learning products to policymakers and technical delegates alike, enabling Australian and Indian delegates to serve as ambassadors for our respective nations.
For more information on this new and exciting project, funded by the Australian Department of Foreign Affairs and Trade as part of the Australia India Cyber and Critical Technology Partnership grants, visit www.aspi.org.au/techdiplomacy and https://www.internationalcybertech.gov.au/AICCTP-grant-round-two
Big Tech’s privacy promise to consumers could be good news — and also bad news
In February, Facebook, rebranded as Meta, stated that its revenue in 2022 was anticipated to fall by $10 billion due to steps undertaken by Apple to enhance user privacy on its mobile operating system. More specifically, Meta attributed this loss to the new App Tracking Transparency feature, which requires apps to request permission from users before tracking them across other apps and websites or sharing their information with third parties. Through this change, Apple effectively shut the door on “permissionless” internet tracking and gave consumers more control over how their data is used. Meta alleged that this would hurt small businesses that benefit from access to targeted advertising services, and charged Apple with abusing its market power by using its app store to disadvantage competitors under the garb of enhancing user privacy.
Access the full article published in the Indian Express on April 13, 2022