Report on the 2nd Privacy Round Table meeting

by Maria Xynou last modified Jul 12, 2013 11:54 AM
This post presents a report on the second Privacy Round Table meeting, which took place on 20th April 2013.

This research was undertaken as part of the 'SAFEGUARDS' project that CIS is carrying out with Privacy International and IDRC.


In furtherance of multi-stakeholder initiatives and dialogue on Internet governance in 2013, the Centre for Internet and Society (CIS), in collaboration with the Federation of Indian Chambers of Commerce and Industry (FICCI) and the Data Security Council of India (DSCI), is holding a series of six multi-stakeholder round table meetings on “privacy” from April 2013 to August 2013. The CIS is undertaking this initiative as part of its work with Privacy International UK on the SAFEGUARDS project.

In 2012, the CIS and DSCI were members of the Justice AP Shah Committee, which produced the “Report of the Group of Experts on Privacy”. The CIS has recently drafted a Privacy (Protection) Bill 2013, with the objective of contributing to privacy legislation in India. The CIS has also volunteered to champion the sessions/workshops on “privacy” at the meeting on Internet Governance proposed for October 2013.

At the roundtables the Report of the Group of Experts on Privacy, DSCI's paper on “Strengthening Privacy Protection through Co-regulation” and the text of the Privacy (Protection) Bill 2013 will be discussed. The discussions and recommendations from the six round table meetings will be presented at the Internet Governance meeting in October 2013.

The dates of the six Privacy Round Table meetings are listed below:

  1. New Delhi Roundtable: 13 April 2013
  2. Bangalore Roundtable: 20 April 2013
  3. Chennai Roundtable: 18 May 2013
  4. Mumbai Roundtable: 15 June 2013
  5. Kolkata Roundtable: 13 July 2013
  6. New Delhi Final Roundtable and National Meeting: 17 August 2013

 

Following the first Privacy Round Table in Delhi, this report provides an overview of the discussions and recommendations of the second Privacy Round Table meeting, held in Bangalore on 20th April 2013.

Overview of DSCI's paper on “Strengthening Privacy Protection through Co-regulation”

 

The meeting began with a brief summary of the first Privacy Round Table meeting, which took place in Delhi on 13th April 2013. Following the summary, the Data Security Council of India (DSCI) presented the paper “Strengthening Privacy Protection through Co-regulation”. In particular, DSCI presented the regulatory framework for data protection under the IT (Amendment) Act 2008, which includes provisions for sensitive personal information, privacy principles and “reasonable security practices”. It was noted that the privacy principles, as set out in the Justice AP Shah Report, refer to: data collection limitation, data quality, purpose specification, use limitation, security safeguards, openness and individual participation. The generic definitions of the identified privacy principles refer to: notice, choice and consent, collection limitation, purpose specification, access and correction, disclosure of information, security, openness/transparency and accountability. The question that remained, however, was what type of regulatory framework should be adopted to incorporate all these privacy principles.

DSCI suggested a co-regulatory framework that would evolve from voluntary self-regulation with legal recognition. The proposed co-regulatory regime could take different forms, depending on the roles played by the government and industry in the creation and enforcement of rules. DSCI mentioned that the Justice AP Shah Committee recommends: (1) the establishment of the office of the Privacy Commissioner, both at the central and regional levels, (2) a system of co-regulation, with emphasis on self-regulatory organisations (SROs), and (3) that SROs be responsible for appointing an ombudsman to receive and handle complaints.

The discussion points brought forward by DSCI were:

  • What role should government and industry respectively play in developing and enforcing a regulatory framework?
  • How can the codes of practice developed by industry be enforced in a co-regulatory regime? How will the SRO check the successful implementation of codes of practice? How can the SRO penalize non-compliance?
  • How can an organization be incentivized to follow the codes of practice under the SRO?
  • What should be the role of SROs in redressal of complaints?
  • What should be the business model for SROs?

DSCI further recommended the establishment of “lightweight” regulations based on global privacy principles that recognise the economic value of data flow and usage while guaranteeing privacy to citizens. DSCI also recommended that bureaucratic structures that could hinder business interests be avoided, and that the self-regulatory framework of businesses adapt to technological advances while upholding the privacy principles. Furthermore, DSCI recommended that self-regulatory bodies be legally recognised.

 

Discussion on the draft Privacy (Protection) Bill 2013

Discussion of definitions and preamble: Chapter I & II

The second session began with a discussion of the definitions used in the Bill. In particular, many participants argued that the term 'personal data' should be more specific, as its vague definition could create potential for abuse. Other participants asked to whom the protection of personal data applies and whether it covers legal persons such as companies. Furthermore, the questions of whether the term 'personal data' covers processed and stored data, and whether the same data protection regulations apply to foreign citizens residing in India, were raised. A participant argued that the preamble of the Bill should be amended to use the term 'governance' instead of 'democracy', as this privacy legislation should be applicable in all cases in India, regardless of the current political regime.

Sensitive Personal Data

The meeting proceeded with a discussion of the term 'sensitive personal data', and many participants argued that the term should be broadened to include more categories, such as religion, ethnic group, race, caste, financial information and others. Although the majority of the participants agreed that the term 'sensitive personal data' should be redefined, they disagreed about what it should include. In particular, the participants were unable to reach a consensus on whether religion, caste and financial information should be included in the definition of the term. Other participants argued that passwords should be included within the scope of 'sensitive personal data', as they can be just as crucial as financial information.

Information vs. Data

During the discussion, a participant argued that there is a subtle difference between the terms 'information' and 'data' and that this should be pointed out in the Bill to prevent potential abuse. Another participant argued that 'sensitive personal data' should be restricted to risk factors, which is why unique identifiers, such as passwords, should be included in the definition of the term. Other participants argued that the context of data defines whether it is 'sensitive' or not, as it may fall within the category of 'national security' in one instance but not in another; thus, all types of data should be considered within their context, rather than separately. It was also pointed out that privacy protections already exist for several financial services, and the need to exclude pre-existing protections from the Bill was emphasised. In particular, a participant argued that banks are in any case obliged to protect their customers' financial information, which is why it should not be included in the definition of the term 'sensitive personal data'.

Exemptions

Several exemptions to the right to privacy were discussed throughout the meeting. A participant asked whether the right to privacy would also apply to deceased persons and to unborn infants. Another participant asked whether the term 'persons' would be restricted to natural persons or would also apply to artificial persons. The question of children's right to privacy was also discussed; in particular, participants asked whether it should be exempted in cases where children are being surveilled by their own parents.

Discussion of “Protection of Personal Data”: Chapter III

Following the discussion of the definitions used in the Bill, the meeting proceeded with a discussion on the protection of personal data. A participant emphasized that the probability of error in data is real and that this could lead to major human rights violations if not addressed appropriately and in time. It was pointed out that the Bill does not address the element of error within data, and it was suggested that this be included in the draft Privacy (Protection) Bill. Another participant recommended an amendment to the Bill which would specify the parties, such as the government or companies, that would be eligible to carry out data collection in India. As new services are included, the end purpose of data collection should be taken into consideration and, in particular, any 'new purposes' for data collection would have to be specified as they arise.

Data Collection

In terms of data collection, a participant emphasized that the objectives and purposes differ from an individual and an industry perspective, and that this should be explicitly considered in the Bill. Furthermore, the participant argued that the possibility of multiple purposes for data collection arising should be taken into consideration and that relevant provisions should be incorporated in the Bill. Another participant argued that the issue of consent for data collection may be problematic, especially since the purpose of data collection may change in the process: while an individual may have given consent to the initial purpose of data collection, he/she may not have given consent to the purposes which evolved along the way. Thus, explicitly defining the instances for data collection may not be feasible.

Consent

On the issue of consent, several participants argued that it would be important to distinguish between 'mandatory' and 'optional' information, as in some cases individuals are compelled by the government to hand over their data, while in others they choose to disclose it. Participants therefore argued that the Bill should provide different types of privacy protection for these two separate cases. Other participants argued that the term 'consent' varies depending on its context and that this too should be taken into consideration within the draft Privacy (Protection) Bill. It was also argued that a mechanism capable of gaining individual consent prior to data collection should be developed. However, a participant emphasized that, in many cases, it is very difficult to gain individual consent for data collection, especially when individuals cannot read or write. Thus the need to include provisions in the Bill for illiterate or disabled persons was strongly emphasized.

Further questions were raised with regard to the withdrawal of consent. Several participants argued that the draft Privacy (Protection) Bill should explicitly require that all data be destroyed once an individual has withdrawn consent. Participants also argued that consent should be a prerequisite to the collection, processing, sharing and retention of secondary users' data, such as the data of individuals affiliated with the individual in question. A participant argued that there are two problematic areas for consent: (1) the distribution of financial products, such as loans, and (2) the requirement that every financial institution store data for a minimum of seven to eight years. Taking these two areas into consideration, the participant questioned whether it is feasible to acquire consent in such cases, especially since the purpose of data retention may change in the process. Participants also referred to extreme cases in which consent may not be acquired prior to the collection, processing, sharing and retention of data, such as disasters (e.g. an earthquake) or extreme medical situations (e.g. a patient in a coma), and suggested that relevant provisions be included in the Bill.

Data Disclosure

In terms of data disclosure, several participants argued that the disclosure of data can potentially be a result of blackmail and that the Bill does not contain provisions for such extreme cases. Furthermore, participants argued that although consent may be obtained from an individual for a specific purpose, the data may subsequently be used for multiple other purposes by third parties, and that this is very hard to prevent. It was recommended that the Bill incorporate provisions to prevent the disclosure of data for purposes other than the ones for which consent was given.

A participant recommended that individuals be informed of the name of the Data Processor prior to giving consent for the disclosure of data, which could potentially increase transparency. Many participants raised questions with regard to the protection of data which goes beyond the jurisdiction of a country. It remains unclear how data will be processed, shared and retained when it is not handled within India, and several participants argued that this should be addressed in the Bill.

Data Destruction

In terms of data destruction, a participant emphasized that the draft Privacy (Protection) Bill lacks provisions for the confirmation of the destruction of data. In particular, although the Bill guarantees the destruction of data in certain cases, it does not provide a mechanism through which individuals can be assured that their data has actually been deleted from databases. Another participant argued that since the purposes for data collection may change in the process, it is hard to determine the cases under which data can be destroyed; as the purposes for data collection and retention may change over time, the participant argued that it would be futile to set a specific regulatory framework for data destruction. Another participant emphasized the value of data, stating that although some data may appear to have no value today, it may in the future, which is why data should not be destroyed.

Data Processing

In terms of data processing, participants argued that privacy protection complications have arisen in light of social media. In particular, they argued that social media platforms constantly develop and expand technologically, and that it is very difficult to regulate the processing of data conducted by such companies. A participant emphasized the difference between (1) the processing of data when it is being read and (2) the processing of data when it is being analysed; such a distinction should be considered within the Bill, as should the use of data which is being processed. Many participants distinguished between the primary and secondary use of data and argued that the secondary use of data should also be included in the privacy statements of companies.

However, participants also pointed out that purposes for the collection of data may overlap and that it may be difficult to distinguish between primary and secondary purposes for data collection. A participant disagreed with this argument and stated that it is possible to distinguish between primary and secondary purposes of data collection, as long as companies are transparent about why they are collecting information and about the purpose of its processing. This argument was seconded by another participant who argued that the specific purposes for the processing of data should be incorporated in the Bill.

In brief, the following questions with regards to chapter III of the bill were raised during the meeting:

  • Should consent be required prior to the collection of data?
  • Should consent be acquired prior and after the disclosure of data?
  • Should the purpose of data collection be the same as the purpose for the disclosure of data?
  • Should an executive order or a court order be required to disclose data?
  • Against the background of national security, anyone's data can end up on a 'suspicion list'. How can the disclosure of data be prevented in such circumstances? Non-criminals may have their data on a 'suspicion list', and on national security grounds the government can disclose information; how can their information be protected in such cases?
  • An individual may not be informed of the collection, analysis, disclosure and retention of his/her data; how can an individual prevent the breach of his/her data?
  • Should companies notify individuals when they share their (individuals´) data with international third parties?

 

In brief, the following recommendations with regards to chapter III of the bill were raised during the meeting:

  • The data subject has to be informed, unless there is a model contract.
  • The request for consent should depend on the type of data that is to be disclosed.
  • Some exceptions need to be qualified (for example, in instances of medical patients different exceptions may apply).
  • The shared data may be considered private data (a relevant regulatory framework is needed).
  • An international agreement should deal with the sharing of data with international third parties - incorporating such provisions in Indian law would probably be inadequate.
  • If any country is not data-secure, there should be an approval mechanism for the transfer of data to such a country.
  • India could have an export law which would monitor which data is sensitive and should not be shared with international third parties.
  • The problem with disclosure arises when there is an exception for certain circumstances.
  • Records should be kept on individuals who disclose data; there should be a trail of disclosure, so that there can be more transparency and accountability.
  • Ownership of data is a controversial issue and so is the disclosure of data; consumers give up the ownership of their data when they share it with third parties and ergo cannot control its disclosure (or non-disclosure).
  • ´Data ownership´ should be included in the definitions of the Bill.
  • What is the 'quality' of data? The definition of 'quality' under section 11 of the Bill is unclear and should be improved.

 

Discussion of “Interception of Communications”: Chapter IV

 

The discussion on the interception of communications started off with the statement that 70 percent of citizens in India are enrolled on “voice” services, which means that the interception of communications affects a large proportion of the population in the country. A participant asked whether the body corporate in India should be treated as a telecommunications provider and whether it should be responsible for the interception of communications. Another participant argued that the disclosure of information should be closely regulated, even when it is being intercepted for judicial purposes. Many participants agreed that data which is collected and intercepted should not be used for purposes other than the original one, and that such information should not be shared with third parties.

Questions were raised as to who should authorise the interception of communications, and a participant recommended that a judicial warrant be a prerequisite to the interception of communications in India. Some participants argued that the Bill should clearly specify the instances under which communications can be intercepted, as well as the legitimate purposes for interception. It was also argued that some form of 'checks and balances' should exist for the interception of communications and that the Bill should provide mechanisms to ensure that interception is carried out in a legal way. Several participants recommended that the Privacy Commissioner be mandated to approve the interception of communications, while questions were raised about the sharing of intercepted data.

Discussion on self-regulation and co-regulation

 

The final session of the meeting consisted of a debate on self-regulation and co-regulation. Questions were raised as to how self-regulation and co-regulation could be enforced. Some participants recommended the establishment of sector-specific regulations which would govern the various forms of surveillance, such as a separate regulation for the UID scheme. However, this recommendation was countered by participants who argued that the government would probably not approve every sector regulation and that this would leave large areas of surveillance unregulated.

The participants who supported the self-regulation framework argued that the government should not intervene in the industry and that the industry should determine its own rules for handling its customers' data. Other participants supported the co-regulatory framework and argued that companies should cooperate with the Privacy Commissioner in handling customers' data, especially since this would increase transparency about how the industry regulates the use of such data. The supporters of co-regulation added that the members of the industry should comply with regulations and that there should be sanctions if they do not. Such arguments were countered by supporters of self-regulation, who stated that the industry should create its own code of conduct and that the government should not regulate its work.

Furthermore, it was argued that although government regulations for the handling of data might make more sense in other countries, in India the industry became aware of privacy far sooner than the government did, which is why a self-regulatory regime should be established for the handling of data. Such arguments were countered by supporters of co-regulation, who argued that the industry has a vested interest in self-regulation, which should be balanced by public policy. This argument was in turn countered by participants who argued that, given the high levels of corruption in India, the Privacy Commissioner may be corrupt and co-regulation may end up being ineffective. Other participants questioned this argument by stating that if India lacks legal control over the use of data by companies, individuals are exposed to potential data breaches. Supporters of co-regulation stated that the Privacy Commissioner should formulate a set of practices and that both the industry and the government should comply with them.

Meeting conclusion

 

The second Privacy Round Table included a discussion of the definitions used in the draft Privacy (Protection) Bill 2013, as well as of chapters II, III and IV on the right to privacy, the protection of personal data and the interception of communications. The majority of the participants agreed that India needs privacy legislation and that individuals' data should be legally protected. However, participants disagreed about how data would be safeguarded and the extent to which data collection, processing, sharing, disclosure, destruction and retention should be regulated. This was reflected in the debate on self-regulation and co-regulation which concluded the meeting: participants disagreed on whether the industry should regulate the use of customers' data autonomously from government regulation or should cooperate with the Privacy Commissioner on the regulation of the use of data. Though a consensus was not reached on co-regulation and self-regulation, the majority of the participants agreed on the need for privacy legislation which would safeguard individuals' personal data. The major issue with the creation of privacy legislation in India, however, would probably be its adequate enforcement.

GNI Annual Report

by Prasad Krishna last modified Apr 25, 2013 07:14 AM

GNI Annual Report 2012.pdf — PDF document, 7512 kB (7692870 bytes)

Off the Record

by Nishant Shah last modified Apr 26, 2013 05:58 AM
Social networks track our world but not relationships. We live in a world where things happen. And yet, with the presence of digital objects, the things that happen have increased in intensity and volume.

Nishant Shah's article was published in the Indian Express on April 6, 2013.


Never before have we lived in a world that is so seen, documented, archived and forgotten. Early Enlightenment philosophers wondered: if a tree falls in loneliness and there is nobody there to see it, does the tree really fall? In the world of instant documentation, chances are that if the tree falls, there is somebody there to tweet it.

We live in a spectacular world. That is not to say that it is the best or worst of all possible worlds. I want to ponder the fact that we create spectacles of things that were otherwise swept under the carpet. Every little detail of our myriad and mundane life is potentially spectacular. From medical technologies that can decipher our chemical DNA to the mobile phone that Instagrams the food we eat and the things that we see, we are surrounded by spectacles of everyday life. Pictures, tweets, blogs, geolocation services, status updates, likes, shares — the texture of living has never been this richly and overwhelmingly documented.

However, the data and information that constitute the recognition of our life have increased to such a scale that we have overturned the course of human history writing. We identify ourselves as a species that is able to document, store and relay information from one passing generation to another. So much so that we have invested a vast amount of our energies in creating museums, writing histories, building archives, and obsessively collecting facts and fictions of our origins, from the big bang to flying reptiles.

But big data has made us reach a point where we are struggling to manage and filter the onslaught of data. We have, for the first time, created information that is no longer intelligible to the human eye or brain. From machines that can verify god particles to artificial intelligence that can identify patterns, we have, day by day, displaced the human being from its central position as consumer, producer and subject of data.

These are the conditions of living in information societies that are producing, archiving and reorganising information for these information ecosystems. The multiple information streams remind us of the multitude and diversity of human life, which cannot be reduced to a generalising theory of similarity. The rise of big data brings into focus the promise of the World Wide Web — a reminder that there are alternatives to the mainstream and that there are unheard, contradictory voices that deserve to be heard. Yet, even as the burgeoning information society explodes on our devices, there is another anxiety which we need to encounter. If the world of information, which was once supposed to be the alternative, becomes the central and dominant mode of viewing the world, what does it hide?

Take friendship, for instance. You can quantify how many friends exist on your social networks. Algorithms can work out complex proximity principles and determine who your closer connections are.

Data mining tools are able to figure out the similarities and the likelihood of enduring conversations in your social sphere. But these are all human actions which can be captured by the network and by big data realities. They may be able to give us new information about what friends do and how often, but there is still almost no way of figuring out which friend might call you in the middle of the night.

Friendship, like many other things, is not made of spectacles. It does not produce information sets which can be mapped and represented as information. Friendship cannot be reduced to pictures of being together or dramatic stories of survival and togetherness. More often than not, true friendships are made of things that do not happen. Or things, if they happen, cannot be put in a tweet, captured on Instagram or shared on Tumblr.

As we take these social networked realities as 'real' realities, it might be worth asking what is being missed out, what remains unheard and unrepresented in these information streams. Because if you love somebody and there is nobody to know it, report it, record it and convert it into a spectacle, does it make your love any less special? Any less intense? Any less true?

IT (Amendment) Act, 2008, 69A Rules: Draft and Final Version Comparison

by Jadine Lannon last modified Apr 30, 2013 10:10 AM
Jadine Lannon has performed a clause-by-clause comparison of the 69A draft rules and 69A rules for Section 69A of the IT Act in order to better understand how the two differ. While there has been reshuffling of the clauses in the official rules, the content itself has not changed significantly. Notes have been included on some changes we deemed to be important.

Below is a chart depicting the 69A Draft Rules and the 69A Rules:

[Comparison chart: 69A Draft Rules and 69A Rules]

There was a lot of structural change between the draft rules and the official rules—many of the draft clauses were shuffled around and combined—but not a lot of change in content. Many of the changes that appear in the official rules serve to clarify parts of the draft rules.

Three definitions were added under clause (2), two to clarify later references to a “designated officer” and a “nodal officer” and the third to indicate a form appended to the official Rules.

Clause (3) of the official rules then clarifies who shall be named the “designated officer”, which the draft rules did not do, as they never specified the official title of the officer who would carry the responsibilities of the “designated officer”. Interestingly, clause (3) of the draft rules requires the Secretary of the Department of Information Technology, Ministry of Communications & Information Technology, Government of India to name an officer, whereas clause (3) of the official rules states that the “Central Government” shall designate an officer, a change in language that allows for much more flexibility on the government's part.

Clause (5) in the draft rules and clause (4) in the official rules deal with the designation of a Nodal Officer, but the official rules omit some of the responsibilities of the designated officer, including acting on the “direction of the indian competent court”. This responsibility does not appear in any part of the official rules. Further, clause (4) of the official rules requires the organizations implicated in the rules to publish the name of the Nodal Officer on their website; this is an addition to the draft rules, and a highly useful one at that. It is an important move towards some form of transparency in this contentious process.

Clause (5) of the official rules significantly clarifies clause (4) of the draft rules by stating that the designated officer may direct any Agency of the Government or intermediary to block access once a request from the Nodal Officer has been received.

Clause (7) of the official rules uses the word “information” instead of “computer resource”, which is used in the corresponding clause (12) in the draft rules, when referring to the offending object. This change in language significantly widens the scope of what can be considered offending under the rules.

Sub-sections (2), (3) and (4) of clause (9) of the official rules are additions to the draft rules. Sub-section (2) is a significant addition, as it deals with the ability of the Secretary of the Department of Information Technology to block for public access any information or part thereof, in a case of emergency nature, without granting a hearing to the entity in control of the offending information. The request for blocking must then be brought before the committee for examination of requests within 48 hours of the issuing of the direction, meaning that the offending information could be blocked for two days without the owner/controller of the information being given notice of the reason for the blocking.

An important clarification has been included in clause (15) of the official rules, which differs from clause (23) of the draft rules through the inclusion of the following phrase: “The Designated Officer shall maintain complete record of the request received and action taken thereof [...] of the cases of blocking for public access”. This is a significant change from clause (23), which simply states that the “Designated Officer shall maintain complete record [...] of the cases of blocking”. It could be seen as an important step towards transparency and accountability in the 69A process of blocking information for public access, were it not for clause (16) of the official rules, which states that all requests and complaints received and all actions taken thereof must be kept confidential; the maintenance of records mentioned in clause (15) therefore appears to be only for internal record-keeping. However, the very fact that this information is being recorded is a significant change from the draft rules and may, if the sub-rules relating to confidentiality were to be changed, yield useful data for the public.

Surveillance technology companies operating in India - spreadsheet

by Maria Xynou last modified Apr 27, 2013 04:29 PM
The Centre for Internet and Society has started investigating surveillance technology companies operating in India! This spreadsheet lists the first 77 companies which are being researched.

Surveillance technology companies operating in India - spreadsheet.pdf — PDF document, 514 kB (527204 bytes)

Indian Telegraph Act, 1885, 419A Rules and IT (Amendment) Act, 2008, 69 Rules

by Jadine Lannon last modified Apr 30, 2013 10:04 AM
Jadine Lannon has performed a clause-by-clause comparison of the 419A Rules of the Indian Telegraph Act, 1885 and the 69 Rules under Section 69 of the Information Technology (Amendment) Act, 2008 in order to better understand how the two are similar and how they differ. Though they are from different Acts entirely, the Rules are very similar. Notes have been included on some changes we deemed to be important.
[Comparison chart: 419A Rules and 69 Rules]

Though they are from different Acts entirely, the 419A Rules from the Indian Telegraph Act of 1885 and the 69 Rules from the Information Technology (Amendment) Act, 2008 are very similar. In fact, much of the language that appears in the official 69 Rules is very close to, if not in many places the same as, the language found in the 419A Rules. The majority of the changes in language between the 419A Rules and the equivalent 69 Rules serve to clarify statements or wordings that may appear vague in the former. Aside from this, it appears that many of the 69 Rules have been cut-and-pasted from the 419A Rules.

Arguably the most important change between the two sets of rules takes place between Clause (3) of the 419A Rules and Clause (8) of the 69 Rules, where the phrase “while issuing directions [...] the officer shall consider possibility of acquiring the necessary information by other means” has been changed to “the competent authority shall, before issuing any direction under Rule (3), consider possibility of acquiring the necessary information by other means”. This is an important distinction, as the latter requires other options to be looked at before issuing the order for any interception or monitoring or decryption of any information, whereas the former could possibly allow the interception of messages while other options to gather the “necessary” information are being considered. It seems unreasonable that the state and various state-approved agencies could possibly be intercepting the personal messages of Indian citizens in order to gather “necessary” information without having first established that interception was a last resort.

Another potentially significant change between the rules can be found between Clause (15) of the 419A Rules, which states that, in the context of punishing a service provider, the action taken shall include “not only fine but also suspension or revocation of their licenses”, and Clause (21) of the 69 Rules, which states that the intermediary or person in-charge of computer resources “shall be liable for any action under the relevant provisions of the time being in force”. This is an interesting distinction, possibly made to avoid issues of legal arbitrariness associated with assigning punishments that differ from those laid out for the same activities under the Indian Penal Code. Either way, the punishments for a violation of the maintenance of secrecy and confidentiality, as well as for unauthorized interception (or monitoring or decryption), could potentially be much harsher under the 69 Rules.

In the same vein, the most significant clarification through a change in language takes place between Clause (10) of the 419A and Clause (14) of the 69 Rules: “the service providers shall designate two senior executives of the company” from the 419A Rules appears as “every intermediary or person in-charge of computer resource shall designate an officer to receive requisition, and another officer to handle such requisition” in the 69 Rules. This may be an actual difference between the two sets of Rules, but either way, it appears to be the most significant change between the equivalent Clauses.

The addition of certain clauses in the 69 Rules can also give us some interesting insights about what was of concern when the 419A Rules were being written. To begin with, the 419A Rules provide no definitions for any of the specific terms used in the Rules, whereas the 69 Rules include a list of definitions in Clause (2). Clause (4) of the 69 Rules, which deals with the authorisation of an agency of the Government to perform interception, monitoring and decryption, is sorely lacking in the 419A Rules, which allude to “authorised security [agencies]” without ever providing any framework as to how these agencies become authorised or who should be doing the authorising.

The 69 Rules also include Clause (5), which deals with how a state should go about obtaining authorisation to issue directions for interception, monitoring and/or decryption in territories outside of its jurisdiction; this is never mentioned in the 419A Rules, which simply confine states to carrying out the interception of messages within their own jurisdiction.

Lastly, Clause (24), which deals with the prohibition of interception, monitoring and/or decryption of information without authorisation, and Clause (25), which deals with the prohibition of the disclosure of intercepted, monitored and/or decrypted information, have fortunately been added to the 69 Rules.

IT (Amendment) Act, 2008, 69 Rules: Draft and Final Version Comparison

by Jadine Lannon last modified Apr 30, 2013 09:56 AM
Jadine Lannon has performed a clause-by-clause comparison of the Draft 69 Rules and official 69 Rules under Section 69 in order to better understand how the two are similar and how they differ. Very brief notes have been included on some changes we deemed to be important.
[Comparison chart: Draft 69 Rules and official 69 Rules]

Similar to the other comparisons that I have done on the 69A and 69B Draft and official Rules, the majority of the changes between these two sets of rules serves to restructure and clarify various clauses in the Draft 69 Rules.

Three new definitions appear in Clause (2) of the 69 Rules, including a definition for “communication”, a term which appears in the Draft Rules but has no associated definition under Clause (2) of the Draft Rules.

Clause (31) of the Draft Rules, which deals with the requirement that security agencies of the States and Union Territories share any information gathered through interception, monitoring and/or decryption with federal agencies, does not make an appearance in the official rules. Further, this requirement does not seem to be implied anywhere in the official 69 Rules.

IT (Amendment) Act, 2008, 69B Rules: Draft and Final Version Comparison

by Jadine Lannon last modified Apr 30, 2013 09:47 AM
Jadine Lannon has performed a clause-by-clause comparison of the Draft 69B Rules and official 69B Rules under Section 69B in order to better understand how the two are similar and how they differ. Notes have been included on some changes we deemed to be important.
[Comparison chart: Draft 69B Rules and official 69B Rules]

There has been a considerable amount of re-arrangement and re-structuring of the various clauses between the 69B Draft Rules and the official Rules, as can be seen in the comparison chart, but very little content has been changed. The majority of the changes made to the official Rules are changes in wording and language that provide some much-needed clarification to the Draft Rules (see the differences between Clause (9) of the Draft Rules and sub-section (4) of Clause (3) of the official Rules as an example). Language redundancies, as well as full clauses (Clause [6] of the Draft Rules), have thankfully been removed in the official Rules.

Aside from the addition of four definitions, including a definition for a “security policy”, a phrase which appears in the Draft Rules without being defined, Clause (2) contains what is most likely one of the more notable changes between the two definition clauses: under sub-section (g) of the official Rules, the words “or unauthorised use” have been added to the definition of “cyber security breaches”, which significantly increases the scope of what can be considered a cyber security breach under the Rules.

A significant change between the two sets of rules can be found in sub-section (2) of Clause (8) of the official rules, which states that, “save as otherwise required for the purpose of any ongoing investigation, criminal complaint or legal proceedings the intermediary or the person in-charge of computer resource shall destroy records pertaining to directions for monitoring or collection of information”. The opening proviso (“save as otherwise required for the purpose of any ongoing investigation, criminal complaint or legal proceedings”) has been added to the original Clause (22) of the Draft Rules, meaning that when the Rules were originally drawn up, no exceptions were to be made for the destruction of records pertaining to directions for monitoring and/or the collected information; they would simply have to be destroyed within six months of the discontinuance of the monitoring/collection.

One change that may or may not be significant is the replacement of the words “established violations” in the Draft Rules with simply “violation” in the official Rules in Clauses (19)/(6), which deal with the responsibility of the intermediary. This could be taken to mean that suspected and/or perceived violations may also be punishable under this clause, but that is a hard stance to argue. Most likely the adjustment was made when the superfluous and/or convoluted parts of the Draft Rules were being removed.

The Surveillance Industry in India: At Least 76 Companies Aiding Our Watchers!

by Maria Xynou last modified Jul 12, 2013 11:59 AM
Maria Xynou is conducting research on surveillance technology companies operating in India. So far, 76 companies have been identified which are currently producing and selling different types of surveillance technology. This post presents primary data from the first ever investigation of the surveillance industry in India. Check it out!


This blog post was cross-posted on Medianama on May 8, 2013. This research was undertaken as part of the 'SAFEGUARDS' project that CIS is carrying out with Privacy International and IDRC.


So yes, we live in an Internet Surveillance State. And yes, we are constantly under the microscope. But how are law enforcement agencies even equipped with such advanced technology to surveil us in the first place?

Surveillance exists because certain companies produce and sell products and solutions which enable mass surveillance. Law enforcement agencies would not be capable of mining our data, of intercepting our communications and of tracking our every move if they did not have the technology to do so. Thus an investigation of the surveillance industry should be an integral part of research for any privacy advocate, which is why I started looking at surveillance technology companies. India is a very interesting case not only because it lacks privacy legislation which could safeguard us from the use of intrusive technologies, but also because no thorough investigation of the surveillance industry in the country has been carried out to date.

The investigation of the Indian surveillance industry has only just begun and so far, 76 surveillance technology companies have been detected. No privacy legislation...and a large surveillance industry. What does this mean?

A glimpse of the surveillance industry in India

In light of the UID scheme, the National Intelligence Grid (NATGRID), the Crime and Criminal Tracking Network System (CCTNS) and the Central Monitoring System (CMS), who supplies law enforcement agencies with the technology to surveil us?

In an attempt to answer this question and to uncover the surveillance industry in India, I randomly selected a sample of 100 companies which appeared to produce and sell surveillance technology. This sample consisted of companies producing technology ranging from internet and phone monitoring software to biometrics, CCTV cameras, GPS tracking and access control systems. The companies were selected randomly to reduce the probability of research bias, and out of the 100 companies initially selected, 76 turned out to sell surveillance technology. These companies vary in the types of surveillance technology they produce, and it should be noted that most of them are not restricted to surveillance technologies, but also produce other, non-surveillance technologies. Paradoxically enough, some of these companies simultaneously produce internet monitoring software and encryption tools! Thus it would probably not be fair to label them 'surveillance technology companies' per se, but rather to acknowledge that, among their various products, they also sell surveillance technologies to law enforcement agencies.
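For illustration only, the sampling step described above could look something like the minimal Python sketch below. This is an assumption about the workflow, not the actual scripts used in the research; the file name candidate_companies.csv and its "company" column are hypothetical placeholders.

    # Minimal sketch of drawing a random sample of companies for manual review.
    # Assumptions: candidate_companies.csv is a hypothetical file listing candidate
    # companies (at least 100 of them) in a column named "company"; the real
    # candidate list was compiled during the research and is not reproduced here.
    import csv
    import random

    with open("candidate_companies.csv", newline="") as f:
        candidates = [row["company"] for row in csv.DictReader(f)]

    random.seed(42)  # fixed seed so the same sample can be drawn again
    sample = random.sample(candidates, 100)  # pick 100 companies at random

    # Each sampled company would then be reviewed manually to check whether it
    # actually produces or sells surveillance technology; in this research,
    # 76 of the 100 sampled companies did.
    for name in sorted(sample):
        print(name)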

Companies selling surveillance technology in India are listed in Table 1. Some of these companies are Indian, whilst others have international headquarters and offices in India. Not surprisingly, the majority of these companies are based in India's IT hub, Bangalore.

Table 2 shows the types of surveillance technology produced and sold by these 76 companies.

The graph below is based on Table 2 and shows which types of surveillance are produced the most by the 76 companies.

Graph on types of surveillance sold to law enforcement agencies by 76 companies in India

Out of the 76 companies, the largest group (32) sell surveillance cameras, whilst 31 companies sell biometric technology; this is not a surprise, given the UID scheme which is rapidly expanding across India. Only one company from the sample produces social network analysis software, but this is not to say that this type of technology is scarce in the Indian market, as the sample was randomly selected and many companies producing this type of software may have been excluded. Moreover, many companies (13) from the sample produce data mining and profiling technology, which could be used on social networking sites and which could have similar - if not the same - capabilities as social network analysis software. Such technology may potentially be aiding the Central Monitoring System (CMS), especially since the project would have to monitor and mine Big Data.
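As an aside, a chart like the one above can be reproduced from Table 2 with a few lines of Python. The sketch below is only an assumption about how such a chart might be generated; the counts it hard-codes are limited to the figures quoted in this post (cameras, biometrics, data mining/profiling, social network analysis), and the remaining categories would need to be filled in from Table 2 or the published spreadsheet.

    # Minimal sketch: plot how many of the 76 sampled companies sell each type of
    # surveillance technology. Only the counts quoted in the text are included here;
    # the full set of categories would come from Table 2 / the published spreadsheet.
    import matplotlib.pyplot as plt

    counts = {
        "Surveillance cameras": 32,
        "Biometrics": 31,
        "Data mining and profiling": 13,
        "Social network analysis": 1,
    }

    # Sort so the most commonly sold technology types appear at the top of the chart.
    items = sorted(counts.items(), key=lambda kv: kv[1])
    labels = [k for k, _ in items]
    values = [v for _, v in items]

    plt.barh(labels, values)
    plt.xlabel("Number of companies (out of 76 sampled)")
    plt.title("Types of surveillance technology sold by companies operating in India")
    plt.tight_layout()
    plt.show()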

On countless occasions I have been told that surveillance is an issue which concerns the elite and does not affect the poorer classes, especially since the majority of the population in India does not even have Internet access. However, the data in the graph above undermines this mainstream belief: many companies operating in India produce and sell phone and SMS monitoring technology, and more than half the population owns a mobile phone. Seeing as companies such as ClearTrail Technologies and Shoghi Communications sell phone monitoring equipment to law enforcement agencies, it is probably safe to say that surveillance is an issue which affects everyone, not just the elite.

Did you Know:


  1. WSS Security Solutions Pvt. Ltd. is north India's first CCTV zone
  2. Speck Systems Limited was the first Indian company to design, manufacture and fly a micro UAV indigenously
  3. Mobile Spy India (Retina-X Studios) has the following mobile spying features:
  • SniperSpy: remotely monitors smartphones and computers from any location
  • Mobile Spy: monitors up to three phones and uploads SMS data to a server using GPRS without leaving traces

  4. Infoserve India Private Limited produces an Internet monitoring system with the following features:

  • Intelligence gathering for an entire state or a region
  • Builds a chain of suspects from a single start point
  • Data loss of less than 2%
  • 2nd Generation Interception System
  • Advanced link analysis and pattern matching algorithms
  • Completely Automated System
  • Data Processing of up to 10 G/s
  • Automated alerts on the capture of suspicious data (usually based on keywords)

  5. ClearTrail Technologies deploys spyware onto a target's machine
  6. Spy Impex sells Coca Cola Tin Cameras!
  7. Nice Deal also sells Coca Cola Spy Cameras, as well as Spy Pen Cameras, Wrist Watch Cameras and Lighter Video Cameras, to name a few...
  8. Raviraj Technologies is an Indian company which supplies RFID and biometric technology to multiple countries around the world. The countries it serves include non-democracies, such as Zimbabwe and Saudi Arabia, as well as post-revolutionary countries, such as Egypt and Tunisia. Why is this concerning?

  • Non-democracies lack adequate privacy and human rights safeguards, and supplying such regimes with biometric and tracking technology will in all probability lead to further oppression within these countries
  • Egypt and Tunisia have held elections to transition to democracy, and providing them with biometric technology could lead to further oppression and stifle efforts to strengthen human rights safeguards

“I'm not a terrorist, I have nothing to hide!”


It's not a secret: everyone knows we are being surveilled, more or less. Everyone is aware of the CCTV cameras (luckily there are public notices to warn us...for now). Most people are aware that the data they upload to Facebook is probably surveilled...one way or another. Most people are aware that mobile phones can potentially be wiretapped or intercepted. Yet that does not prevent us from using our smartphones and disclosing our most intimate secrets to our friends, from uploading hundreds of photos to Facebook and other social networking sites, or from generally disclosing our personal data on the Internet. The most common argument about surveillance and the disclosure of personal data today appears to be the following:

“I'm not a terrorist, I have nothing to hide!”

Indeed. You may not be a terrorist...and you may think you have nothing to hide. But in a surveillance state, to what extent does it really matter whether you are a terrorist? And how do we even define 'risky' and 'non-risky' information?

Last year at linux.conf.au, Jacob Appelbaum stated that in a surveillance state, everyone can potentially be a suspect. The argument “I'm not a terrorist, I have nothing to hide” is merely a psychological coping mechanism for dealing with surveillance and expresses a lack of agency. Bruce Schneier has argued that the psychology of security does not necessarily reflect the reality of security. In other words, we may feel or think that our data is secure because we consider it to contain 'non-risky' information, but the reality of security may be that our data contains 'risky' information, depending on who is looking at it, when, how and why. I disagree with the distinction between 'risky' and 'non-risky' information, as any data can potentially be 'risky' depending on the circumstances of its access.

That being said, we do not necessarily need to disclose nude photos or be involved in some criminal organisation in order to be tracked. In a surveillance society, we are all potentially suspects. The mining and profiling of our data may somehow link us to someone who, for whatever reason, is a suspect (regardless of whether that person has committed an actual offence), and thus may ultimately make us suspects too. Perhaps one of our interests (as displayed in our data), our publicly expressed ideas or even our browsing habits may fall under 'suspicious activity'. It is not really an issue of whether we are involved in a criminal organisation per se or whether we are disclosing so-called 'risky information'. As long as our data is being surveilled, we are all suspects, which means that we can all potentially be arrested, interrogated and maybe even tortured, just like any other criminal suspect.

But what fuels a surveillance society? How can law enforcement agencies mine such huge volumes of data? Many companies, such as the 76 listed in this research, equip law enforcement agencies with the technology to monitor the Internet and our phones, to deploy malware to our computers, to mine and profile our data on social networking sites and to track our vehicles and movement. A main reason why we currently live in a Surveillance State is that the surveillance industry is booming and equipping law enforcement agencies with the technology to watch our every move. Thus companies producing and selling surveillance technologies play an essential role in maintaining the surveillance state and should be accountable for the implications their products have for individuals' right to privacy and other human rights.

Surveillance technologies, however, are not the only factor fuelling a surveillance state. Companies produce technologies based on market demand, and without that demand, the surveillance industry would not exist. The market appears to demand surveillance technologies because a pre-existing surveillance culture has been established, which in turn may or may not have been created by political interests in public control. Nonetheless, surveillance appears to be socially integrated. The fact that some of the most profitable businesses in the world, such as 3M, produce and sell surveillance technologies, and the fact that, in most countries, it is considered socially prestigious to work for such a company, are at least some indication that surveillance is being socially integrated. In other words, companies should be accountable for the technologies they produce and for whom they sell them to, but we should also take into consideration that these companies exist in the first place because there is a demand for them.

By not opposing repressive surveillance laws, the CCTV cameras on every corner, or surveillance schemes such as NATGRID and the CMS in India, or by handing over our data, we are fuelling the surveillance state. Unlike Orwell's totalitarian state described in 1984, surveillance today does not appear to be imposed in a top-down manner; rather, it appears to be a product of both the Information Revolution and our illusory sense of control over our personal data. Our 'apathy' enables surveillance laws to be enacted and companies to produce the technology which aids law enforcement agencies in putting us all under the microscope. As easy as it would be to blame companies for producing surveillance technologies, the reality of surveillance appears to be much more complicated than that, especially if surveillance is socially integrated.

Yet, the reality in India is that at least 76 companies are producing and selling surveillance technologies and equipping law enforcement agencies with them. This is extremely concerning because India lacks privacy legislation which could safeguard individuals from potential abuse. The fact that India has not enacted a privacy law ultimately means that individuals are not informed about when their data is collected, who has access to it, or whether it is processed, shared, disclosed and/or retained. Furthermore, the absence of privacy legislation also means that law enforcement agencies are not held liable, which undermines accountability and transparency, as it is not possible to determine whether surveillance is even effective. In other words, there are currently no safeguards for the individual in India, while a rapidly expanding surveillance industry poses major threats to human rights.

Not only does India urgently need privacy legislation to safeguard citizens from potential abuse, but the use of all surveillance technologies should be strictly regulated now. As previously mentioned, some companies, such as Raviraj Technologies, export biometric technology to non-democratic countries and to fragile states transitioning to democracy. This should be prevented: equipping a country which lacks adequate safeguards for its citizens with the technology to control those citizens can have severe effects on human rights within that country. Export controls are therefore necessary to prevent the spread of surveillance technologies to countries which lack legal safeguards for their citizens. By the same logic, there should be restrictions on international companies selling surveillance technologies setting up offices in India, since the country currently lacks privacy legislation.

Surveillance technologies can potentially have very severe effects, such as innocent people being arrested, interrogated, tortured...and maybe even murdered in some states. Should they be treated as weapons? Should the same export restrictions that apply to arms apply to surveillance technologies? Sure, the threat posed by surveillance technologies appears to be indirect. But don't indirect threats usually have worse outcomes in the long run? We may not be terrorists and we may have nothing to hide...but we have no privacy safeguards and a massively expanding surveillance industry in India. We are exposed to danger...to say the least.

CIS Logos

by Prasad Krishna last modified May 06, 2013 05:38 AM

CIS Logo Formats.zip — ZIP archive, 562 kB (575796 bytes)

Privacy Round Table (Chennai Invite)

by Prasad Krishna last modified May 06, 2013 08:15 AM

Invite-Chennai(1).pdf — PDF document, 1073 kB (1098753 bytes)

Google Policy Fellowship Programme: Call for Applications

by Prasad Krishna last modified May 17, 2013 01:01 AM
The Centre for Internet & Society (CIS) is inviting applications for the Google Policy Fellowship programme. Google is providing a USD 7,500 stipend to the India Fellow, who will be selected by July 1, 2013.

The Google Policy Fellowship offers successful candidates an opportunity to develop research and debate on the fellowship focus areas, which include Access to Knowledge, Openness in India, Freedom of Expression, Privacy, and Telecom, for a period of about ten weeks starting from July 7, 2013 up to October 1, 2013. CIS will select the India Fellow. Send in your applications for the position by June 15, 2013.

To apply, please send to [email protected] the following materials:

  1. Statement of Purpose: A brief write-up outlining your interest in and qualifications for the programme, including relevant academic, professional and extracurricular experiences. As part of the write-up, also explain what you hope to gain from participation in the programme and what research work concerning free expression online you would like to further through this programme. (About 1,200 words max.)

  2. Resume

  3. Three references

Fellowship Focus Areas

  • Access to Knowledge: Studies looking at access to knowledge issues in India in light of copyright law, consumer law, parallel imports and the interplay between pervasive technologies and intellectual property rights, targeted at policymakers, Members of Parliament, publishers, photographers, filmmakers, etc.

  • Openness in India: Studies with policy recommendations on open access to scholarly literature, free access to law, open content, open standards, free and open source software, aimed at policymakers, policy researchers, academics and the general public.

  • Freedom of Expression: Studies on policy, regulatory and legislative issues concerning censorship and freedom of speech and expression online, aimed at bloggers, journalists, authors and the general public.

  • Privacy: Studies on privacy issues like data protection and the right to information, limits to privacy in light of the provisions of the constitution, media norms and privacy, banking and financial privacy, workplace privacy, privacy and wire-tapping, e-governance and privacy, medical privacy, consumer privacy, etc., aimed at policymakers and the public.

  • Telecom: Building awareness and capacity on telecommunication policy in India for researchers and academicians, policymakers and regulators, consumer and civil society organisations, education and library institutions and lay persons through the creation of a dedicated web based resource focusing on knowledge dissemination.

Frequently Asked Questions

What is the Google Policy Fellowship program?

The Google Policy Fellowship program offers students interested in Internet and technology related policy issues an opportunity to spend their summer working on these issues at the Centre for Internet and Society in Bangalore. Students will work for a period of ten weeks starting from June 1, 2013. The research agenda for the program is based on legal and policy frameworks in the region connected to the ground-level perceptions of the fellowship focus areas mentioned above.

  • Are there any age restrictions on participating?

    Yes. You must be 18 years of age or older by January 1, 2013 to be eligible to participate in the Google Policy Fellowship program in 2013.

  • Are there citizenship requirements for the Fellowship?

    For the time being, we are only accepting students eligible to work in India (e.g. Indian citizens, permanent residents of India, and individuals presently holding an Indian student visa). Google cannot provide guidance or assistance on obtaining the necessary documentation to meet these criteria.

  • Who is eligible to participate as a student in the Google Policy Fellowship program?

    In order to participate in the program, you must be a student. Google defines a student as an individual enrolled in or accepted into an accredited institution including (but not necessarily limited to) colleges, universities, masters programs, PhD programs and undergraduate programs. Eligibility is based on enrollment in an accredited university by January 1, 2013.

  • I am an international student. Can I apply and participate in the program?

    In order to participate in the program, you must be a student (see Google's definition of a student above). You must also be eligible to work in India (see section on citizen requirements for fellowship above). Google cannot provide guidance or assistance on obtaining the necessary documentation to meet this criterion.

  • I have been accepted into an accredited post-secondary school program, but have not yet begun attending. Can I still take part in the program?

    As long as you are enrolled in a college or university program as of January 1, 2013, you are eligible to participate in the program.

  • I graduate in the middle of the program. Can I still participate?

    As long as you are enrolled in a college or university program as of January 1, 2013, you are eligible to participate in the program.

Payments, Forms, and Other Administrative Stuff

How do payments work?

Google will provide a stipend of USD 7,500 equivalent to each Fellow for the summer.

  • Accepted students in good standing with their host organization will receive a USD 2,500 stipend payable shortly after they begin the Fellowship in June 2013.

  • Students who receive passing mid-term evaluations by their host organization will receive a USD 1,500 stipend shortly after the mid-term evaluation in July 2013.

  • Students who receive passing final evaluations by their host organization and who have submitted their final program evaluations will receive a USD 3,500 stipend shortly after final evaluations in August 2013.

Please note: Payments will be made by electronic bank transfer, and are contingent upon satisfactory evaluations by the host organization, completion of all required enrollment and other forms. Fellows are responsible for payment of any taxes associated with their receipt of the Fellowship stipend.

*While the three-step payment structure given here corresponds to the one in the United States, disbursement of the amount may be altered as necessary.

What documentation is required from students?

Students should be prepared, upon request, to provide Google or the host organization with transcripts from their accredited institution as proof of enrollment or admission status. Transcripts do not need to be official (a photocopy of the original is sufficient).

I would like to use the work I did for my Google Policy Fellowship to obtain course credit from my university. Is this acceptable?

Yes. If you need documentation from Google to provide to your school for course credit, you can contact Google. We will not provide documentation until we have received a final evaluation from your mentoring organization.

Host Organizations

What is Google's relationship with the Centre for Internet and Society?

Google provides the funding and administrative support for individual fellows directly. Google and the Centre for Internet and Society are not partners or affiliates. The Centre for Internet and Society does not represent the views or opinions of Google and cannot bind Google legally.

Important Dates

What is the program timeline?

  • June 15, 2013: Student application deadline. Applications must be received by midnight.
  • July 1, 2013: Student applicants are notified of the status of their applications.
  • July 2013: Students begin their fellowship with the host organization (start date to be determined by the students and the host organization); Google issues initial student stipends.
  • August 2013: Mid-term evaluations; Google issues mid-term stipends.
  • October 2013: Final evaluations; Google issues final stipends.

Surveillance Technologies (Table 1)

by Prasad Krishna last modified May 09, 2013 10:02 AM

Surveillance tech companies Table 1.pdf — PDF document, 514 kB (527297 bytes)

Surveillance Technologies (Table 2)

by Prasad Krishna last modified May 09, 2013 10:22 AM

Surveillance tech companies Table 2.pdf — PDF document, 331 kB (338980 bytes)

CIS Celebrates 5 Years

by Prasad Krishna last modified May 18, 2013 02:09 AM

Emailer_S-4.pdf — PDF document, 398 kB (408303 bytes)

Media Coverage

by Prasad Krishna last modified May 18, 2013 04:11 AM

Media coverage.pdf — PDF document, 1105 kB (1131791 bytes)

Report on the 3rd Privacy Round Table meeting

by Maria Xynou last modified Jul 12, 2013 11:35 AM
This report entails an overview of the discussions and recommendations of the third Privacy Round Table meeting in Chennai, on 18th May 2013.

This research was undertaken as part of the 'SAFEGUARDS' project that CIS is undertaking with Privacy International and IDRC.


In furtherance of Internet Governance multi-stakeholder Initiatives and Dialogue in 2013, the Centre for Internet and Society (CIS) in collaboration with the Federation of Indian Chambers of Commerce and Industry (FICCI), and the Data Security Council of India (DSCI), is holding a series of six multi-stakeholder round table meetings on “privacy” from April 2013 to August 2013. The CIS is undertaking this initiative as part of their work with Privacy International UK on the SAFEGUARD project.

In 2012, the CIS and DSCI were members of the Justice AP Shah Committee which created the “Report of the Group of Experts on Privacy”. The CIS has recently drafted a Privacy (Protection) Bill 2013, with the objective of contributing to privacy legislation in India. The CIS has also volunteered to champion the session/workshops on “privacy” in the meeting on Internet Governance proposed for October 2013.

At the roundtables the Report of the Group of Experts on Privacy, DSCI´s paper on “Strengthening Privacy Protection through Co-regulation” and the text of the Privacy (Protection) Bill 2013 will be discussed. The discussions and recommendations from the six round table meetings will be presented at the Internet Governance meeting in October 2013.

The dates of the six Privacy Round Table meetings are listed below:

  1. New Delhi Roundtable: 13 April 2013
  2. Bangalore Roundtable: 20 April 2013
  3. Chennai Roundtable: 18 May 2013
  4. Mumbai Roundtable: 15 June 2013
  5. Kolkata Roundtable: 13 July 2013
  6. New Delhi Final Roundtable and National Meeting: 17 August 2013

 

Following the first two Privacy Round Tables in Delhi and Bangalore, this report entails an overview of the discussions and recommendations of the third Privacy Round Table meeting in Chennai, on 18th May 2013.

Overview of DSCI´s paper on ´Strengthening Privacy Protection through Co-Regulation´

The third Privacy Round Table meeting began with an overview of the paper on “Strengthening Privacy Protection through Co-Regulation” by the Data Security Council of India (DSCI). In particular, the DSCI pointed out that although the IT (Amendment) Act 2008 lays down the data protection provisions in the country, it has limitations in terms of applicability, which is why a comprehensive privacy law is required in India. The DSCI provided a brief overview of the Report of the Group of Experts on Privacy (drafted by the Justice AP Shah Committee) and argued that in light of the UID scheme, NATGRID, DNA profiling and the Central Monitoring System (CMS), privacy concerns have arisen and legislation providing safeguards in India is necessary. However, the DSCI emphasized that although it supports the enactment of privacy legislation which would safeguard Indians from potential abuse, the economic value of data needs to be taken into account and bureaucratic structures which would hinder the work of businesses should be avoided.

The DSCI supported the enactment of privacy legislation and highlighted its significance, but also emphasized that such a legal framework should support the economic value of data. The DSCI appeared to favour the enactment of privacy legislation as it would not only oblige the Indian government to protect individuals´ sensitive personal data, but it would also attract more international customers to Indian online companies. That being said, the DSCI argued that it is important to secure a context for privacy based on Indian standards, rather than on global privacy standards, since the applicability of global standards in India has proven to be weak. The privacy bill should cover all dimensions (including, but not limited to, interception and surveillance) and the misuse of data should be legally prevented and prohibited. Yet, strict regulations on the use of data could potentially have a negative effect on companies’ competitive advantage in the market, which is why the DSCI proposed a co-regulatory framework – if not self-regulation.

In particular, the DSCI argued that companies should be obliged to provide security assurances to their customers and that regulation should not restrict the way they handle customers' data, especially since customers choose to use a specific service. This argument was countered by a participant who argued that in many cases customers may not have alternative services to choose from and that the issue of “choice” and consent is complicated. Thus it was argued that companies should comply with regulations which restrict the manner in which they handle customers' data. Another participant argued that a significant amount of data is collected without users' consent (such as through cookies) and that in most cases companies are not accountable for how they use the data, who they share it with or how long they retain it. Another participant who countered the co-regulatory framework suggested by the DSCI argued that regulations are required for smartphones, especially since there is currently very little accountability for how SMS data is used or shared. Other participants also argued that individual consent should be acquired prior to the collection, processing, retention and disclosure of data in every case, and that the individual should have the right to access his/her data and make corrections where necessary.

The DSCI firmly supported its position on co-regulation by arguing that not only would companies provide security assurances to customers, but that they would also be accountable to the Privacy Commissioner through the provision of a detailed report on how they handle their customers´ data. Furthermore, the DSCI pointed out that in the U.S. and in Europe, companies provide privacy policies and security assurances and that this is considered to be adequate. Given the immense economic value of data in the Digital Age and the severe effects regulation would have on the market, the DSCI argued that co-regulation is the best solution to ensure that both individuals´ right to privacy and the market are protected.

The discussion on co-regulation proceeded with a debate on what type of sanctions should be applied to those who do not comply with privacy regulations. However, a participant argued that if a self-regulatory model were enforced and companies did not comply with privacy principles, the question of what would happen to individuals' data would still remain. It was argued that neither self-regulation nor co-regulation provides any assurance to the individual as to how his/her data is protected, and that once data is breached, there is very little that can be done to undo the damage. In particular, the participant argued that self-regulation and co-regulation provide very few assurances that data will not be illegally disclosed and breached. The DSCI responded by stating that in the case of a data breach, both the Privacy Commissioner and the individual in question would have to be informed and the issue would be further investigated. Other participants agreed that co-regulation should not be an option and argued that the way co-regulation would benefit the public has not been adequately demonstrated.

The DSCI countered the above arguments by stating that the industry is in a better position than the government to understand privacy issues, owing to the various products it produces. Industry also has better outreach than the Indian government and could raise awareness of data protection among both other companies and individuals, which is why the code of practice should be created by the industry and validated by the government. This argument was countered by a participant who stated that if the industry participates in the enforcement process, this would potentially create a conflict of interest and could be challenged in the courts in the future. The participant argued that an industry with a self-regulatory code of practice may be problematic, especially since there would be inadequate checks and balances on how data is handled.

Another participant argued that the Indian government does not appear to take responsibility for the right to privacy, as it is not considered to be a fundamental human right; this being said, a co-regulatory framework could be more appropriate, especially since the industry has better insight into how data is protected at the international level. Thus it was argued that the government could create high-level principles with which the industry would comply. However, a participant argued that every company is susceptible to some type of violation and that in such a case both self-regulation and co-regulation would be highly problematic. Since almost any company could end up violating users' data in some way down the line, it was argued, neither self-regulation nor co-regulation would be the most beneficial option for the industry. This argument was supplemented by another participant who stated that co-regulation would make the industry and the Privacy Commissioner the ultimate authorities over users' data and that this could potentially lead to major violations, especially given inadequate accountability towards users.

Co-regulation was once again supported by the DSCI through the argument that customers choose to use specific services and that by doing so, they should comply with the security measures and privacy policies provided. However, a participant asked whether other stakeholders should be involved, as well as what type of incentives companies have in order to comply with regulations and to protect users´ data. Another participant argued that the very definition of privacy remains vague and that co-regulation should not be an option, since the industry could be violating individuals´ privacy without even realising it. Another issue which was raised is how data would be protected when many companies have servers based in other countries. The DSCI responded by arguing that checks and balances would be in place to deal with all the above concerns, yet a general consensus on co-regulation did not appear to have been reached.

Discussion on the draft Privacy (Protection) Bill 2013

Discussion of definitions: Chapter II

The sections of the draft Privacy (Protection) Bill 2013 were discussed during the second session of the third Privacy Round Table meeting. The session started with a discussion on whether the draft Privacy (Protection) Bill 2013 should be split into two separate Bills, one focusing on data protection and the other on surveillance and interception. Splitting a data protection Bill into two further Bills was also proposed, one governing data protection in the public sector and the other in the private sector. As the draft Privacy (Protection) Bill 2013 is in line with global privacy standards, the possibility of splitting the Bill along the lines mentioned above was seriously considered.

The discussion on the definitions laid out in Chapter II of the draft Privacy (Protection) Bill 2013 started with a debate around the definitions of personal data and sensitive personal data and what exactly they should include. It was pointed out that the UK's Data Protection Act has a much broader definition of 'sensitive personal data' and it was recommended that the Indian draft Privacy (Protection) Bill align with it. Other participants noted the controversy in India over whether the government will conduct a caste census; if it does, such data (also including, but not limited to, religion and ethnic origin) should fall within the legal definition of 'sensitive personal data' to safeguard individuals from potential abuse. Furthermore, it was raised that the term 'sensitive personal data' is not defined consistently in the U.S. and in Europe, which would make it more difficult for India to comply with global privacy standards.

The broadness of the definition of 'sensitive personal data' was raised as a potentially problematic issue, especially since it may not be realistic to expect companies to protect everything it may cover in the long term. The participants debated whether financial information should be included in the definition of 'sensitive personal data', but a consensus was not reached. Other participants argued that the terms 'data subject' and 'data controller' should be carefully defined, and that a generic definition of 'genetic data' should be included in the Bill. Furthermore, it was argued that the word 'monitor' should be included in the definitions of the Bill and that universal norms in regard to the definitions should apply to each and every state in India. It was also noted that organizational affiliation, such as trade union membership, should be included in the definitions of the Bill, since the lack of legal protection may have social and political implications.

Discussion of “Protection of Personal Data”: Chapter III

The discussion on the data protection chapter of the draft Privacy (Protection) Bill began with the recommendation that data collected by companies should be covered by a confidentiality agreement. Another participant argued that the UK examines every financial mechanism to trace how information flows and that India should do the same to protect individuals' personal data. It was also argued that when an individual is constantly under surveillance, that individual's behaviour becomes more controlled, and that extra accountability should be required for the use of CCTV cameras. In particular, it was argued that when entities outside the jurisdiction gain access to CCTV data, they should be accountable for how they use it. Furthermore, it was argued that the Bill should include provisions on how data is used abroad, especially when it is stored on foreign servers.

Issue of Consent

The meeting proceeded with a discussion of Section 6 and it was pointed out that consent needs to be a prerequisite to data collection. Furthermore, the conditions laid out in Section 3 would have to be met, under which the individual would have to be informed prior to any collection, processing, disclosure or retention of data. Section 11 of the Bill contains an accuracy provision, through which individuals have the right to access the data held about them and make any necessary corrections. A participant argued that the transmission of data should also be covered in the Bill and that the transmitter should be responsible for the accuracy of the data. Another participant argued that transmitters should be responsible for the integrity of the data, but that individuals should be responsible for its accuracy. However, such arguments were countered by a participant who argued that it is not practically possible to inform individuals every time there is a change in their data.

Outsourcing of Data

It was further recommended that outsourcing guidelines be created and implemented, specifying the agents responsible for outsourced data. On this note, the fact that a large volume of Indian data is outsourced to the U.S., where it falls under the Patriot Act, was discussed. In particular, it was pointed out that most data retention servers are based in the U.S., which makes it difficult for Indians to know which data is being collected and whether it is being processed, shared, disclosed and/or retained. A participant argued that most companies have special provisions which guarantee that data will not cross borders and that much depends on the type of ISP handling the data.

Another issue raised was that, although a consumer may have control over his/her data at the first stage, that individual ultimately loses control in later stages, when the data is shared and/or disclosed without his/her knowledge or consent. Not only is this problematic because individuals lose control over their data, but also because the question of accountability arises: it is hard to determine who is responsible for the data once it has been shared and disclosed. Some participants suggested that this problem could be addressed if the data processor informs the data subject that his/her data is being outsourced, as well as of the specific parties it is being outsourced to. Another participant argued that it does not matter who the data is outsourced to; how it is used is what really matters.

Data Retention

It was argued that 50,000 arrests have been made under the powers given by POTA. Of these arrests, only seven have led to convictions, yet the data of thousands of individuals can be stored for many years under POTA. Thus, it was pointed out that it is crucial that the individual is informed when his/her data is destroyed and that such data is not retained indefinitely. This was supplemented by a participant who argued that most countries in the West have data retention laws and that India should too. Other participants argued that data retention does not end with data destruction, but with the return of the data to the individual and the assurance that it is not stored elsewhere. However, several participants argued that the return of data is not always possible, especially since parties may lack the infrastructure to take back their data.

It was pointed out that civil society groups have claimed that collected data should be destroyed within a specific time period, but the debate remains polarized. In particular, some participants argued that data should be retained indefinitely, as the purpose of data collection may change over time and the data may be valuable in dealing with crime and terrorism in the future. This was countered by participants who argued that the indefinite retention of data may potentially lead to human rights violations, especially if the government handling the data is non-democratic. Another participant argued that the fact that data may be collected for purpose A, processed for purpose B and retained or disclosed for purpose C can be very problematic in terms of future human rights violations. Furthermore, another participant stated that destruction should mean that data is no longer accessible and that it should apply not only to present data, but also to past data, such as archives.

Data Processing

The processing of personal data is regulated in section 8 of the draft Privacy (Protection) Bill 2013. A participant argued that the responsibility should lie with the person doing the outsourcing of the data (the data collector). Another participant raised the issue that although banks acquire consent prior to collection and use of data, they subsequently use that data for any form of data processing and disclosure. Credit information requires specific permission and it was argued that the same should apply to other types of personal data. Consent should be acquired for every new purpose other than the original purpose for data collection. It was strongly argued that general consent should not cover every possible disclosure, sharing and processing of data. Another issue which was raised in terms of data processing is that Indian data could be compromised through global cooperation or pre-existing cooperation with third parties.

Data Disclosure

The disclosure of personal data was highlighted as one of the most important provisions within the draft Privacy (Protection) Bill 2013. In particular, three types of disclosure were pointed out: (1) disclosure with consent, (2) disclosure in outsourcing, and (3) disclosure for law enforcement purposes. Within this discussion, issues of liability were raised, as well as whether the data of a deceased person should be disclosed. Other participants raised the issue of data being disclosed by international third parties, who gain access to it through cooperation with Indian law enforcement agencies, and cases of dual criminality concerning the misuse of data abroad were also raised. A participant highlighted three points: (1) the subject who has responsibility for the processing of data, (2) any obligation under law should be made applicable to the party receiving the information, and (3) the applicable laws for outsourcing Indian data to international third parties. It was emphasized that the failure to address these three points could potentially lead to a conflict of laws.

According to a participant, a non-disclosure agreement should be a prerequisite to outsourcing. This was preceded by a discussion on the conditions for data disclosure under the draft Privacy (Protection) Bill 2013 and it was recommended that if data is disclosed without the consent of the individual, the individual should be informed within one year. It was also pointed out that disclosure of data in furtherance of a court order should not be included in the Bill because courts in India tend to be inconsistent. This was followed by a discussion on whether power should be invested in the High Court in terms of data disclosure.

Discussion of “Interception of Communications”: Chapter IV

The third Privacy Round Table ended with a brief discussion on the fourth chapter of the draft Privacy (Protection) Bill 2013, which regulates the interception of communications. Following an overview of the sections and their content, a participant argued that interception does not necessarily need to be covered in the draft Privacy (Protection) Bill, as it is already covered in the Telegraph Act. This was countered by participants who argued that the interception of communications can potentially lead to a major violation of the right to privacy and other human rights, which is why it should be included in the draft Privacy (Protection) Bill. Other participants argued that a requirement that intercepted communication remains confidential is necessary, but that there is no need to include privacy officers in this. Some participants proposed that an exception for sting operations should be included in this chapter.

Meeting conclusion

The third Privacy Round Table entailed a discussion of the definitions used in the draft Privacy (Protection) Bill 2013, as well as of Chapters II, III and IV on the right to privacy, the protection of personal data and the interception of communications. The majority of the participants agreed that India needs privacy legislation and that individuals' data should be legally protected. However, participants disagreed on how data would be safeguarded and on the extent to which data collection, processing, sharing, disclosure, destruction and retention should be regulated. This was supplemented by the debate on self-regulation and co-regulation: participants disagreed on whether the industry should regulate the use of customers' data autonomously from government regulation or whether it should cooperate with the Privacy Commissioner in regulating the use of data. Though a consensus was not reached on co-regulation and self-regulation, the majority of the participants agreed that privacy legislation safeguarding individuals' personal data should be established. The major issue with privacy legislation in India, however, would probably be its adequate enforcement.

Institute on Internet & Society

by Prasad Krishna last modified May 21, 2013 09:39 AM

A3_Portrait_Ford_Institute_Flyer.pdf — PDF document, 664 kB (680326 bytes)

India's Internet Growth & Challenges

by Prasad Krishna last modified May 22, 2013 05:37 AM

A3_Portrait_Ford_Internet_Growth.pdf — PDF document, 211 kB (216626 bytes)

IPv4 and IPv6 - FAQs

by Prasad Krishna last modified May 22, 2013 06:20 AM

A3_Portrait_Ford_IP.pdf — PDF document, 516 kB (528425 bytes)

Comparative Analysis of DNA Profiling Legislations from Across the World

by Srinivas Atreya last modified Jul 12, 2013 11:30 AM
With the growing importance of forensic data in law enforcement and research, many countries have recognized the need to regulate the collection and use of forensic data and to maintain DNA databases. Around 60 countries across the world maintain DNA databases, which are generally regulated by specific legislation. Srinivas Atreya provides a broad overview of the important provisions of four different legislations, which can be compared and contrasted with the Indian draft bill.

This research was undertaken as part of the 'SAFEGUARDS' project that CIS is undertaking with Privacy International and IDRC


Efforts to regulate the collection and use of DNA data began in India in 2007, when the Centre for DNA Fingerprinting and Diagnostics produced its draft DNA Profiling Bill. Although the bill has evolved from its original conception, several concerns with regard to human rights and privacy remain. In its provisions on the collection, profiling and use of forensic data, the draft bill borrows heavily from the legislation of the United States, the United Kingdom, Canada and Australia.


Click to find an overview of a comparative analysis of DNA Profiling Legislations.

CIS Cybersecurity Series (Part 1) - Christopher Soghoian

by Purba Sarkar last modified Jul 12, 2013 10:26 AM
CIS interviews Christopher Soghoian, cybersecurity researcher and activist, as part of the Cybersecurity Series

"We live in a surveillance state. The government can find out who we communicate with, who we talk to, who we are near, when we are at a protest, which stores we go to, where we travel to... they can find out all of these things. And it's unlikely it's going to get rolled back, but the best we can hope for is a system of law where the government gets to use its powers only in the right situation." – Christopher Soghoian, American Civil Liberties Union.

Centre for Internet and Society presents its first installment of the CIS Cybersecurity Series.

The CIS Cybersecurity Series seeks to address hotly debated aspects of cybersecurity and hopes to encourage wider public discourse around the topic.

In this installment, CIS interviews Christopher Soghoian, a privacy researcher and activist, working at the intersection of technology, law and policy. Christopher is the Principal Technologist and a Senior Policy Analyst with the Speech, Privacy and Technology Project at the American Civil Liberties Union (ACLU).

Christopher is based in Washington, D.C. His website is http://www.dubfire.net/

 

This work was carried out as part of the Cyber Stewards Network with the aid of a grant from the International Development Research Centre, Ottawa, Canada.


Internet Institute Agenda

by Prasad Krishna last modified Jun 03, 2013 05:42 AM

Agenda.pdf — PDF document, 123 kB (126412 bytes)

Free Speech

by Prasad Krishna last modified Jun 03, 2013 09:15 AM
Full-size image: 83.9 KB

Blocking of Websites

by Prasad Krishna last modified Jun 03, 2013 09:18 AM
Full-size image: 44.6 KB

Intermediary Liability and Freedom of Expression

by Prasad Krishna last modified Jun 03, 2013 09:24 AM
Full-size image: 165.9 KB

Internet Governance Forum

by Prasad Krishna last modified Jun 03, 2013 09:28 AM
Full-size image: 52.8 KB

Events

by Prasad Krishna last modified Jun 03, 2013 09:35 AM
Full-size image: 38.2 KB

Privacy Timeline

by Prasad Krishna last modified Jun 03, 2013 09:48 AM
Full-size image: 42.3 KB

UID (1)

by Prasad Krishna last modified Jun 03, 2013 09:52 AM
Full-size image: 185.3 KB

UID (2)

by Prasad Krishna last modified Jun 03, 2013 09:52 AM
Full-size image: 246.9 KB

DNA (1)

by Prasad Krishna last modified Jun 03, 2013 10:44 AM
Full-size image: 271.2 KB

DNA (2)

by Prasad Krishna last modified Jun 03, 2013 10:44 AM
Full-size image: 200.1 KB

Privacy Round Table Mumbai

by Prasad Krishna last modified Jun 11, 2013 08:46 AM

Invite-Mumbai.pdf — PDF document, 1092 kB (1119147 bytes)

CIS 5 Years Posters

by Prasad Krishna last modified Jun 06, 2013 05:46 AM
All posters that were exhibited recently at CIS during the open days.

All Posters.zip — ZIP archive, 278600 kB (285287319 bytes)

Open Letter to "Not" Recognize India as Data Secure Nation till Enactment of Privacy Legislation

by Elonnai Hickok last modified Jul 12, 2013 11:07 AM
India shouldn't be granted the status of "data secure nation" by Europe until it enacts suitable privacy legislation, points out the Centre for Internet and Society in this open letter.

This research was undertaken as part of the 'SAFEGUARDS' project that CIS is undertaking with Privacy International and IDRC


This letter is with regard to both the request made by the Confederation of Indian Industry on April 29, 2013 that the EU recognize India as a data secure nation,[1] and India's threat, made on May 9, 2013, to stall negotiations on the Free Trade Agreement with the EU unless it is recognized as a data secure nation.[2]

On behalf of the Centre for Internet and Society, we request that you urge the European Parliament and the EU ambassador to India to reject the request, and to not recognize India as a data secure nation until a privacy legislation has been enacted.

The Centre for Internet and Society believes that if Europe were to grant India the status of a data secure nation based only on the protections found in the “Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules 2011”, not only would Indians be protected by inadequate standards, but the government would have no incentive to enact legislation that recognizes privacy as a comprehensive and fundamental human right. Since 2010 India has been in the process of developing privacy legislation. In 2011 the “Draft Privacy Bill 2011” was leaked.[3] In 2012 the “Report of the Group of Experts on Privacy” was released; the Report recommends a comprehensive right to privacy for India, nine national privacy principles, and a co-regulatory privacy framework for India to adopt.[4] In 2013 the need for a stand alone privacy legislation was highlighted by the Law Minister.[5] The Centre for Internet and Society has recently drafted the “Privacy Protection Bill 2013”, a citizen's version of a possible privacy legislation for India.[6] Currently, we are hosting a series of six “Privacy Roundtables” across India in collaboration with FICCI and DSCI from April 2013 to August 2013.[7] The purpose of the roundtables is to gain public feedback on the text of the “Privacy Protection Bill 2013” and other possible frameworks for privacy in India. The discussions and recommendations from the meetings will be compiled and presented at the Internet Governance meeting in October 2013.

The Centre for Internet and Society will also submit the “Privacy Protection Bill 2013” and the public feedback to the Department of Personnel and Training (DoPT), with the hope of contributing to and informing a privacy legislation in India.

The Centre for Internet and Society has been researching privacy since 2010 and was a member of the committee which compiled the “Report of the Group of Experts on Privacy”. We have also submitted comments on the “Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules 2011” to the Committee on Subordinate Legislation  of the 15th Lok Sabha.[8]

We hope that you will consider our request and urge the European Parliament and the EU ambassador to India to not recognize India as a data secure nation until a privacy legislation has been enacted.


[1]. CII asks EU to accept India as 'Data Secure' nation: http://bit.ly/15Z77dH

[2]. India threatens to stall trade talks with EU: http://bit.ly/1716aF1

[3]. New privacy Bill: Data Protection Authority, jail term for offence: http://bit.ly/emqkkH

[4]. The Report of the Group of Experts on Privacy http://bit.ly/VqzKtr

[5]. Law Minister Seeks stand along privacy legislation, writes PM: http://bit.ly/16hewWs

[6]. The Privacy Protection Bill 2013 drafted by CIS: http://bit.ly/10eum5d

[7]. Privacy Roundtable: http://bit.ly/12HYoj5

[8]. Comments on the Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data Information) Rules, 2011: http://bit.ly/Z2FjX6

Note: CIS sent the letters to Data Protection Commissioners across Europe.

India Subject to NSA Dragnet Surveillance! No Longer a Hypothesis — It is Now Officially Confirmed

by Maria Xynou last modified Nov 06, 2013 10:20 AM
As of last week, it is officially confirmed that the metadata of everyone's communications is under the NSA's microscope. In fact, the leaked data shows that India is among the countries most heavily surveilled by the NSA!

by lawgeek on flickr


This research was undertaken as part of the 'SAFEGUARDS' project that CIS is undertaking with Privacy International and IDRC. This blog was cross-posted in Medianama on 24th June 2013.


“Does the NSA collect any type of data at all on millions or hundreds of millions of Americans?”, Democratic senator Ron Wyden asked James Clapper, the Director of National Intelligence, a few months ago. “No, sir”, replied Clapper.

 

True, the National Security Agency (NSA) does not collect data on millions of Americans. Instead, it collects data on billions of people: Americans, Indians, Egyptians, Iranians, Pakistanis and others all around the world.

Leaked NSA surveillance

Verizon Court Order

Recently, the Guardian released a top secret order of the secret Foreign Intelligence Surveillance Court (the FISA court) requiring Verizon, on an “ongoing, daily basis”, to hand over information to the NSA on all telephone calls in its systems, both within the US and between the US and other countries. Verizon is one of America's largest telecoms providers, and under the top secret court order issued on 25 April 2013, the communications records of millions of US citizens are being collected indiscriminately and in bulk, supposedly until 19 July 2013. In other words, data collection has nothing to do with whether an individual has been involved in criminal or terrorist activity. Literally everyone is potentially subject to the same type of surveillance.

USA Today reported in 2006 that the NSA had been secretly collecting the phone call records of millions of Americans from various telecom providers. The 25 April top secret order is proof that the Obama administration is continuing the data mining programme begun by the Bush administration in the aftermath of the 9/11 terrorist attacks. While the content of calls may not be collected, this dragnet surveillance covers metadata such as the numbers of both parties on a call, location data, unique identifiers such as the International Mobile Subscriber Identity (IMSI) number, and the time and duration of all calls.

Content may not be collected, but metadata alone can be enough to uncover an individual's network of associations and communication patterns. Privacy and human rights concerns arise from the fact that the collection of metadata can result in a highly invasive form of surveillance of citizens' communications and lives. Metadata records can enable the US government to know the identity of every person with whom an individual communicates electronically, as well as the time, duration and location of the communication. In other words, even without content, metadata in aggregate is enough to spy on citizens and to potentially violate their right to privacy and other human rights.
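
To see why metadata alone is so revealing, consider the toy Python sketch below, which uses entirely invented call records and field names: from nothing more than who called whom, when, and from which cell tower, a few lines of aggregation recover a person's most frequent contacts and usual locations without touching the content of a single call.

    from collections import Counter, defaultdict

    # Hypothetical call-detail records: (caller, callee, start_time, duration_s, cell_tower).
    # All numbers and tower IDs are made up for illustration.
    records = [
        ("98100-11111", "98100-22222", "2013-06-01T09:05", 120, "DEL-014"),
        ("98100-11111", "98100-22222", "2013-06-01T21:40",  95, "DEL-014"),
        ("98100-11111", "98100-33333", "2013-06-02T13:10",  30, "DEL-201"),
        ("98100-22222", "98100-44444", "2013-06-02T13:12", 300, "DEL-201"),
    ]

    contacts = defaultdict(Counter)   # who talks to whom, and how often
    locations = defaultdict(Counter)  # which towers (rough locations) each number uses

    for caller, callee, start, duration, tower in records:
        contacts[caller][callee] += 1
        contacts[callee][caller] += 1
        locations[caller][tower] += 1

    # The "network of associations": each number's most frequent contact
    for number, counter in contacts.items():
        print(number, "talks most to", counter.most_common(1))

    # Numbers seen at the same tower at roughly the same time were probably together
    for number, counter in locations.items():
        print(number, "is usually near", counter.most_common(1))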

PRISM

Recently, a secret NSA surveillance programme, code-named PRISM, was leaked by The Washington Post. Apparently, not only is the NSA gaining access to the metadata of all phone calls through the Verizon court order, but it is also tapping directly into the servers of nine leading Internet companies: Microsoft, Skype, Google, Facebook, YouTube, Yahoo, PalTalk, AOL and Apple. Following these allegations, Google, Microsoft and Facebook asked the U.S. government to allow them to disclose the security requests they receive for handing over user data. It remains unclear to what extent the U.S. government is tapping into these servers.

Yet it appears that the PRISM online surveillance programme enables the NSA to extract personal material, such as audio and video chats, photographs, emails and documents. The Guardian reported that PRISM appears to allow GCHQ, Britain's equivalent of the NSA, to secretly gather intelligence from the same Internet companies. Following allegations that GCHQ tried to circumvent UK law by using the PRISM computer network in the US, the British foreign secretary, William Hague, stated that it is “fanciful nonsense” to suggest that GCHQ would work with an agency in another country to circumvent the law. Most notably, William Hague emphasized that reports that GCHQ is gathering intelligence from photos and online sites should not concern people who have nothing to hide! However, this implies that everyone is guilty until proven innocent...when actually, democracy mandates the opposite.

James R. Clapper, the US Director of National Intelligence, stated:

“Information collected under this program is among the most important and valuable foreign intelligence information we collect, and is used to protect our nation from a wide variety of threats. The unauthorized disclosure of information about this important and entirely legal program is reprehensible and risks important protections for the security of Americans.”

So essentially, Clapper stated that in the name of US national security, the personal data of billions of citizens around the world is being collected. By having access to data stored in the servers of some of the biggest Internet companies in the world, the NSA ultimately has access to the private data of almost all the Internet users in the world.

Boundless Informant

And once the NSA has access to tons of data through the Verizon court order and the PRISM surveillance programme, how does it create patterns of intelligence and generally mine huge volumes of data?

The Guardian released top secret documents about the NSA's data mining tool, called Boundless Informant, which is used to detail and map, by country, the volumes of information collected from telephone and computer networks. The focus of Boundless Informant is to count and categorise records of communication, known as metadata, and to record and analyse where its intelligence comes from. One of the leaked documents states that the tool is designed to give NSA officials answers to questions like: “What type of coverage do we have on country X?”. According to the Boundless Informant documents, the NSA collected 3 billion pieces of intelligence from US computer networks over a 30-day period ending in March 2013. During the same month, 97 billion pieces of intelligence were collected from computer networks worldwide.
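
The leaked slides do not reveal how Boundless Informant works internally, so the following is only a minimal Python sketch, with invented records, of what "counting and categorising metadata by country of origin" could look like in its simplest form; a per-country tally of this kind is what a global heat map would be coloured from.

    from collections import Counter

    # Invented metadata records: (country_of_origin, channel).
    # Purely illustrative; not actual NSA data or the tool's real schema.
    records = [
        ("Iran", "telephony"), ("Iran", "internet"), ("India", "internet"),
        ("Pakistan", "telephony"), ("India", "telephony"), ("Egypt", "internet"),
    ]

    by_country = Counter(country for country, _ in records)
    by_channel = Counter(channel for _, channel in records)

    # A tally like this, kept per 30-day window, would drive a country-level heat map.
    for country, count in by_country.most_common():
        print(country, count)
    print("telephony vs internet:", dict(by_channel))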

The following “global heat map” reveals how much data is being collected by the NSA from around the world:

Boundless Informant: "Global Heat Map"

The colour scheme of the above map ranges from green (least subjected to surveillance) through yellow and orange to red (most surveillance). India is notably orange and is thus subject to some of the highest levels of surveillance by the NSA in the world.

During this 30-day period, the largest amount of intelligence was gathered from Iran, with more than 14 billion reports, while Pakistan, Jordan and Egypt were next in line. Unfortunately, India ranks 5th worldwide in terms of intelligence gathering by the NSA. According to the map above, 6.3 billion pieces of intelligence were collected from India by the NSA between February and March 2013. In other words, India is currently one of the countries most closely watched by the NSA, with 15% of all information tapped by the NSA during February-March 2013 reportedly coming from India.

Edward Snowden is the 29-year-old man behind the NSA leaks...who is responsible for one of the most important leaks in US (and one may argue, global) history.


So what does this all mean for India?

In his keynote speech at the 29th Chaos Communication Congress, Jacob Appelbaum stated that surveillance should be an issue which concerns “everyone's department”, especially in light of the NSA spying on citizens all over the world. True, the U.S. appears to have a history of spying on civilians, and the Corona, Argon and Lanyard satellites used for photographic surveillance from the late 1950s onwards are proof of that. But how does all this affect India?

By tapping into the servers of some of the biggest Internet companies in the world, such as Google, Facebook and Microsoft, the NSA gains access not only to the data of American users, but also to that of Indian users. In fact, the “global heat map” of the controversial Boundless Informant data mining tool clearly shows that India ranks 5th worldwide in terms of intelligence gathering, which means that not only is the NSA spying on Indians, but it is also spying on India more than on most countries in the world. Why is that a problem?

India has no privacy law. India lacks privacy legislation which could safeguard citizens from potential abuse by different types of surveillance. But the worst part is that, even if India did have privacy laws, that would still not prevent the NSA from tapping into Indians´ data through the servers of Internet companies, such as Google. Moreover, the fact that India lacks a Privacy Commissioner means that the country lacks an expert authority who could address data breaches.

Recent reports that the NSA is tapping into these servers ultimately mean that the U.S. government has access to the data of Indian Internet users. However, it remains unclear how the U.S. government is handling Indian data, which other third parties may have access to it, how long it is being retained, or to what extent U.S. intelligence agencies can predict the behaviour of Indian Internet users through pattern matching and data mining.

Many questions remain unanswered, but one thing is clear: through the NSA's total surveillance programme, the U.S. government can potentially control the data of billions of Internet users around the world, and with this control arises the possibility of oppression. It is not just about the U.S. government having access to Indians' data, because access can lead to control; as security expert Bruce Schneier puts it:

“Our data reflects our lives...and those who control our data, control our lives”.

How are Indians supposed to control their data, and thus their lives, when it is stored on foreign servers and the U.S. has the “right” to tap into that data? The NSA leaks mark a significant point in our history, not only because they are prompting corporations to seek transparency around data requests, but also because they are unveiling a major global issue: surveillance is a fact and can no longer be denied. The massive, indiscriminate collection of Indians' data, without their prior knowledge or consent, and without any guarantees about how such data is handled, poses major threats to their right to privacy and other human rights. The potential for abuse is real, especially since the larger the database, the larger the probability of error. Mining more data does not necessarily increase security; on the contrary, it increases the potential for abuse, especially since technology is not infallible and data trails are not always accurate.

What does this mean? Well, probably the best case scenario is that an individual is targeted. The worst case scenario is that an individual is imprisoned (or maybe even murdered - remember the drones?) because his or her data “says” that he or she is guilty. Is that the type of world we want to live in?

What can we do now?

Let´s start from the basics. India needs privacy legislation. India needs privacy legislation now. India needs privacy legislation now, more than ever.

Privacy legislation would regulate the collection, access, sharing, retention and disclosure of all personal data within India. Such legislation could also regulate surveillance and the interception of communications, in compliance with the right to privacy and other human rights. A Privacy Commissioner would also be established through privacy legislation, and this expert authority would be responsible for overseeing the enforcement of the Privacy Act and addressing data breaches. But clearly, privacy legislation is not enough: the various privacy laws of European countries have not prevented the NSA from tapping into the servers of some of the biggest Internet companies in the world and from gaining access to the data of millions of citizens around the world. Nonetheless, privacy legislation in India is a basic prerequisite for ensuring that data is not breached within India, or by those who may gain access to Indian national databases.

As a next, but immediate, step, the Indian government should demand answers from the NSA to the following questions:

  • What type of data is collected from India and which parties have access to it?

  • How long is such data retained for? Can the retention period be renewed and if so, for how long?

  • Is data collected on Indian internet users shared with third parties? If so, which third parties can gain access to this data and under what conditions? Is a judicial warrant required?

In addition to the above questions, the Indian government should also request all other information relating to Indians´ data collected through the PRISM programme, and proceed with a dialogue on the matter. Governments are obliged to protect their citizens from abuses of their human rights, especially when such abuse may be committed by foreign agencies. Thus, the Indian government should ensure that further secret collection of Indians´ data is prevented and that Internet companies are transparent and accountable about who has access to their servers.

On an individual level, Indians can protect their data by using encryption, such as GPG encryption for their emails and OTR encryption for instant messaging. Tor is free software and an open network which enables online anonymity by bouncing communications around a distributed network of relays run by volunteers all around the world. Tor was originally short for “The Onion Router”, and “onion routing” refers to the layers of encryption used: data is encrypted and re-encrypted multiple times and sent through a circuit of randomly selected Tor relays. Each relay decrypts one “layer” of encryption, revealing only the next relay in the circuit, and the final relay decrypts the innermost layer. Essentially, Tor reduces the possibility of the original data being understood in transit and conceals its routing.
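
As a concrete illustration of the email encryption step, the short sketch below encrypts a message with GPG through the python-gnupg wrapper. It is a minimal sketch, assuming GnuPG is installed, that the recipient's public key has already been imported into the local keyring, and that the fingerprint shown is only a placeholder.

    # A minimal sketch, assuming GnuPG is installed locally and the recipient's
    # public key is already in the keyring (pip install python-gnupg).
    import gnupg

    gpg = gnupg.GPG()  # uses the default ~/.gnupg keyring

    recipient = "0x1234ABCD"  # placeholder: the recipient's key ID or fingerprint
    message = "Draft comments on the Privacy (Protection) Bill 2013."

    encrypted = gpg.encrypt(message, recipient)

    if encrypted.ok:
        # The ASCII-armoured ciphertext can be pasted into an email body.
        print(str(encrypted))
    else:
        # Common causes: key not found in the keyring, or key not marked as trusted.
        print("Encryption failed:", encrypted.status)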

To avoid surveillance, the use of HTTPS Everywhere in the Tor Browser is recommended, as well as combinations of additional software such as TorBirdy and Enigmail, OTR and Diaspora. Tor hidden services are communication endpoints that are resistant to both metadata analysis and surveillance, which is why they are highly recommended in light of the NSA´s surveillance. An XMPP client that ships with an XMPP server and a Tor hidden service is a good example of how to avoid surveillance.
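
To illustrate how ordinary application traffic can be routed through Tor instead of travelling directly, the sketch below sends a web request through the SOCKS proxy exposed by a locally running Tor daemon. The port (9050 by default; the Tor Browser bundle listens on 9150) and the use of check.torproject.org as a test page are assumptions about a standard Tor setup, and the requests library needs its SOCKS extra installed.

    # A minimal sketch, assuming a Tor daemon is running locally and listening
    # on its default SOCKS port 9050 (the Tor Browser bundle uses 9150).
    # Requires: pip install "requests[socks]"
    import requests

    TOR_SOCKS = "socks5h://127.0.0.1:9050"  # socks5h resolves DNS inside Tor as well

    proxies = {"http": TOR_SOCKS, "https": TOR_SOCKS}

    # check.torproject.org reports whether the request arrived via the Tor network.
    response = requests.get("https://check.torproject.org/", proxies=proxies, timeout=60)
    print(response.status_code)
    print("Congratulations" in response.text)  # crude check that the request exited through Tor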

Protecting our data is more important now than ever. Why? Because global, indiscriminate, mass data collection is no longer a hypothesis: it is a fact. And why is it vital to protect our data? Because if we don´t, we are ultimately sleepwalking into control and oppression, where basic human rights, such as freedom, become a myth of the past.

The principles formulated by the Electronic Frontier Foundation and Privacy International on communication surveillance should be taken into consideration by governments and law enforcement agencies around the world. In short, these principles are:

  • Legality: Limitations to the right to privacy must be prescribed by law

  • Legitimate purpose: Access to communications or communications metadata should be restricted to authorised public authorities for investigative purposes and in pursuit of a legitimate purpose

  • Necessity: Access to communications or communications metadata by authorised public authorities should be restricted to strictly and demonstrably necessary cases

  • Adequacy: Public authorities should be restricted from adopting or implementing measures that allow access to communications or communications metadata that is not appropriate for fulfillment of the legitimate purpose

  • Competent authority: Authorities must be competent when making determinations relating to communications or communications metadata

  • Proportionality: Public authorities should only order the preservation and access to specifically identified, targeted communications or communications metadata on a case-by-case basis, under a specified legal basis

  • Due process: Governments must respect and guarantee an individual's human rights; any interference with such rights must be authorised in law, and the lawful procedure that governs how the government can interfere with those rights must be properly enumerated and available to the public

  • User notification: Service providers should notify a user that a public authority has requested his or her communications or communications metadata with enough time and information about the request so that a user may challenge the request

  • Transparency about use of government surveillance: The access capabilities of public authorities and the process for access should be prescribed by law and should be transparent to the public

  • Oversight: An independent oversight mechanism should be established to ensure transparency of lawful access requests

  • Integrity of communications and systems: Service providers are responsible for the secure transmission and retention of communications data or communications metadata

  • Safeguards for international cooperation: Mutual legal assistance processes between countries and how they are used should be clearly documented and open to the public

  • Safeguards against illegitimate access: Governments should ensure that authorities and organisations who initiate, or are complicit in, unnecessary, disproportionate or extra-legal interception or access are subject to sufficient and significant dissuasive penalties, including protection and rewards for whistleblowers, and that individuals affected by such activities are able to access avenues for redress

  • Cost of surveillance: The financial cost of providing access to user data should be borne by the public authority undertaking the investigation

Applying the above principles is a prerequisite, but may not be enough. Now is the time to resist unlawful and non-transparent surveillance. Now is the time for everyone to fight for their right to be free.

Is a world without freedom worth living in?

Interview with Mr. Billy Hawkes - Irish Data Protection Commissioner

by Maria Xynou last modified Jul 12, 2013 11:06 AM
Maria Xynou recently interviewed Mr. Billy Hawkes, the Irish Data Protection Commissioner, at the CIS´ 4th Privacy Round Table meeting. View this interview and gain insight into recommendations for data protection in India!


This research was undertaken as part of the 'SAFEGUARDS' project that CIS is undertaking with Privacy International and IDRC


The Irish Data Protection Commissioner was asked the following questions:

1. What powers does the Irish Data Protection Commissioner´s office have? In your opinion, are these sufficient? Which powers have been most useful? If there is a lack, what would you feel is needed?

2. Does your office differ from other EU data protection commissioner offices?

3. What challenges has your office faced? What is the most common type of privacy violation that your office has faced?

4. Why should privacy legislation be enacted in India?

5. Does India need a Privacy Commissioner? Why? If India creates a Privacy Commissioner, what structure / framework would you suggest for the office?

6. How do you think data should be regulated in India? Do you support the idea of co-regulation or self-regulation?

7. How can India protect its citizens´ data when it is stored in foreign servers?

 


Interview with the Citizen Lab on Internet Filtering in India

by Maria Xynou last modified Jun 26, 2013 09:47 AM
Maria Xynou recently interviewed Masashi Crete-Nishihata and Jakub Dalek from the Citizen Lab on internet filtering in India. View this interview and gain insight into Netsweeper and FinFisher!

A few days ago, Masashi Crete-Nishihata (research manager) and Jakub Dalek (systems administrator) from the Citizen Lab visited the Centre for Internet and Society (CIS) to share their research with us.

The Citizen Lab is an interdisciplinary laboratory based at the Munk School of Global Affairs at the University of Toronto, Canada. The OpenNet Initiative is one of the Citizen Lab's ongoing projects which aims to document patterns of Internet surveillance and censorship around the world. OpenNet.Asia is another ongoing project which focuses on censorship and surveillance in Asia.

The following video is an interview with both Masashi Crete-Nishihata and Jakub Dalek, addressing the following questions:

1. Why is it important to investigate Internet filtering around the world?

2. How high are the levels of Internet filtering in India, in comparison to the rest of the world?

3. "Censorship and surveillance of the Internet aim at tackling crime and terrorism and in increasing overall security." Please comment.

4. What is Netsweeper and how is it being used in India? What consequences does this have?

5. What is FinFisher and how could it be used in India?



Report on the 4th Privacy Round Table meeting

by Maria Xynou last modified Jul 12, 2013 11:04 AM
This report entails an overview of the discussions and recommendations of the fourth Privacy Round Table in Mumbai, on 15th June 2013.

This research was undertaken as part of the 'SAFEGUARDS' project that CIS is undertaking with Privacy International and IDRC


In furtherance of Internet Governance multi-stakeholder Initiatives and Dialogue in 2013, the Centre for Internet and Society (CIS) in collaboration with the Federation of Indian Chambers of Commerce and Industry (FICCI), and the Data Security Council of India (DSCI), is holding a series of six multi-stakeholder round table meetings on “privacy” from April 2013 to August 2013. The CIS is undertaking this initiative as part of their work with Privacy International UK on the SAFEGUARD project.

In 2012, the CIS and DSCI were members of the Justice AP Shah Committee which created the “Report of Groups of Experts on Privacy”. The CIS has recently drafted a Privacy (Protection) Bill 2013, with the objective of contributing to privacy legislation in India. The CIS has also volunteered to champion the session/workshops on “privacy” in the meeting on Internet Governance proposed for October 2013.

At the roundtables the Report of the Group of Experts on Privacy, DSCI´s paper on “Strengthening Privacy Protection through Co-regulation” and the text of the Privacy (Protection) Bill 2013 will be discussed. The discussions and recommendations from the six round table meetings will be presented at the Internet Governance meeting in October 2013.

The dates of the six Privacy Round Table meetings are enlisted below:

  1. New Delhi Roundtable: 13 April 2013

  2. Bangalore Roundtable: 20 April 2013

  3. Chennai Roundtable: 18 May 2013

  4. Mumbai Roundtable: 15 June 2013

  5. Kolkata Roundtable: 13 July 2013

  6. New Delhi Final Roundtable and National Meeting: 17 August 2013

Following the first three Privacy Round Tables in Delhi, Bangalore and Chennai, this report entails an overview of the discussions and recommendations of the fourth Privacy Round Table meeting in Mumbai, on 15th June 2013.

Discussion of the Draft Privacy (Protection) Bill 2013

Discussion of definitions: Chapter 1

The fourth Privacy Round Table meeting began with a discussion of the definitions in Chapter 1 of the draft Privacy (Protection) Bill 2013. In particular, it was stated that in India, the courts argue that the right to privacy indirectly derives from the right to liberty, which is guaranteed in article 21 of the constitution. However, this provision is inadequate to safeguard citizens from potential abuse, as it does not protect their data adequately. Thus, all the participants in the meeting agreed with the initial notion that India needs privacy legislation which will explicitly regulate data protection, the interception of communications and surveillance within India. To this extent, the participants started a thorough discussion of the definitions used in the draft Privacy (Protection) Bill 2013.

It was specified in the beginning of the meeting that the definition of personal data in the Bill applies to natural persons and not to juristic persons. A participant argued that the Information Technology Act refers to personal data and that the draft Privacy (Protection) Bill 2013 should be harmonised with existing rules. This was countered by a participant who argued that the European Union considers the Information Technology Act inadequate in protecting personal data in India and that, since India has not been granted data protection adequacy, the Bill and the IT Act should not be harmonised.

Other participants argued that all other relevant acts should be cited in the discussion so that the Bill does not overlap with existing provisions in other rules, such as the IT Act. This was supported by the notion that the Bill should not clash with existing legislation, but it was dismissed by the argument that this Bill – if enacted into law – would override all other competing legislation. Special laws override general laws in India, and this would be a special law for the specific purpose of data protection.

The definition of sensitive personal data includes biometric data, political affiliation and past criminal history, but does not include ethnicity, caste, religion, financial information and other such information. It was argued that one of the reasons why such categories are excluded from the definition of sensitive personal data is that the government requests such data on a daily basis and is not willing to bear the additional expense of protecting it. It was stated that the Indian government has argued that such data collection is necessary for the caste census and that financial information, such as credit data, should not be included in the definition of sensitive personal data, because a credit Act in India specifically deals with how credit data should be used, shared and stored.

Such arguments drew a backlash from participants who argued that definitions are crucial because they are the “building blocks” of the entire Bill and that ethnicity, caste, religion and financial information should not be excluded from the Bill, as they include information which is sensitive within the Indian context. In particular, some participants argued that the Bill would be highly questioned by countries with strong privacy legislation, as certain categories of information, such as ethnicity and caste, are definitely considered to be sensitive personal information within India. The argument that it is too much of a bureaucratic and financial burden for the Indian government to protect such personal data was countered by participants who argued that, in that case, the government should not be collecting that information to begin with – if it cannot provide adequate safeguards.

The debate on whether ethnicity, religion, caste and financial information should be included in the definition of sensitive personal data continued with a participant arguing that no cases of discrimination based on such data have been reported and that, thus, it is not essential for such information to be included in the definition. This argument was strongly countered by participants who argued that the mere fact that the government is interested in this type of information implies that it is sensitive and that the reasons behind the government's interest in this information should be investigated. Furthermore, some participants argued that a new provision for data on ethnicity, religion, caste and financial information should be included, as well as that there is a difference between voluntarily handing over such information and being forced to hand it over.

The inclusion of passwords and encryption keys in the definition of sensitive personal data was highly emphasized by several participants, especially since their disclosure can potentially lead to unauthorised access to volumes of personal data. It was argued that private keys in encryption are extremely sensitive personal data and should definitely be included within the Bill.

In light of the NSA leaks on PRISM, several participants raised the issue of Indian authorities protecting data stored in foreign servers. In particular, some participants argued that the Bill should include provisions for data stored in foreign servers in order to prevent breaches by international third parties. However, a participant argued that although Indian companies are subject to the law, foreign data processors cannot be subject to Indian law, which is why they should instead provide guarantees through contracts.

Several participants strongly argued that the IT industry should not be subject to some of the privacy principles included in the Report of the Group of Experts on Privacy, such as the principle of notice. In particular, they argued that customers choose to use specific services and that by doing so, they trust companies with their data; thus the IT industry should not have to comply with the principle of notice and should not have to inform individuals of how they handle their data.

On the issue of voluntary disclosure of personal data, a participant argued that, apart from the NPR and UID, Android and Google are conducting the largest data collection within India and that citizens should have the right to go to court and seek that data. The issue of data collection was further discussed over the next sessions.

Right to Privacy: Chapter 2

The discussion of the right to privacy, as entailed in chapter 2 of the draft Privacy (Protection) Bill 2013, started with a participant stating that governments own the data citizens hand over to them and that this issue, along with freedom from surveillance and illegal interception, should be included in the Bill.

Following the distinction between exemptions and exceptions to the right to privacy, a participant argued that although it is clear that the right to privacy applies to all natural persons in India, it is unclear if it also applies to organizations. This was clarified by a participant who argued that chapter 2 clearly protects natural persons, while preventing organisations from interfering with this right. Other participants argued that the language used in the Bill should be more gender neutral and that the term “residential property” should be broadened within the exemptions to the right to privacy, to also include other physical spaces, such as shops. On this note, a participant argued that the word “family” within the exemptions should be more specifically defined, especially since in many cases husbands have controlled their wives after gaining access to their personal accounts.

The definition of “natural person” was discussed, while a participant raised the question of whether data protection applies to persons who have undergone surgery and who have changed their sexual orientation; it was recommended that such provisions are included within the Bill. The above questions were answered by a participant who argued that the generic European definitions for “natural persons” and “family” could be adopted, as well as that CCTV cameras used in public places, such as shops, should be subject to the law, because they are used to monitor third parties.

Other participants suggested that commercial violations should not be excluded from the Bill, as the broadcasting of footage of people, for example, can potentially lead to a violation of the right to privacy. In particular, it was argued that commercial establishments should not be included in the exemptions section of the right to privacy, in contrast to other arguments that were in favour of it. Furthermore, participants argued that the interaction between transparency and freedom of information should be carefully examined and that the exemptions to the right to privacy should be drafted accordingly.

Protection of Personal Data: Chapter 3

Some of the most important discussions in the fourth Privacy Round Table meeting revolved around the protection of personal data.

Collection of personal data

The discussion on the collection of personal data started with a statement that the issue of individual consent prior to data collection is essential and that, in every case, the data subject should be informed of the collection, processing, sharing and retention of his or her data.

It was pointed out that, unlike most privacy laws around the world, this Bill is affirmative because it states that data can only be collected once the data subject has provided prior consent. It was argued that if this Bill was enacted into law, it would probably be one of the strictest laws in the world in terms of data collection, because data can only be collected with individual consent and a legitimate purpose. Data collection in the EU is not as strict, as there are some exemptions to individual consent; for example, if someone in the EU has a heart attack, other individuals can disclose his or her information. It was emphasized that as this Bill limits data collection to individual consent, it does not serve other cases when data collection may be necessary but individual consent is not possible. A participant pointed out that, although the Justice AP Shah Report of the Group of Experts on Privacy states that “consent may not be acquired in some cases”, such cases are not specified within the Bill.

Another issue raised was that the Bill does not specify how individual consent would be obtained as a prerequisite to data collection. In particular, it remains unclear whether such consent would be acquired through documentation, a witness or some other way. Thus it was emphasized that the method for acquiring individual consent should be clearly specified within the Bill, especially since it is practically hard to obtain consent from the large portion of the Indian population that lives below the poverty line.

A participant argued that data collection by private detectives, data collection on reality TV shows, and the collection of data on physical movement and location should also be addressed in the Bill. Furthermore, other participants argued that specific provisions to exempt medical cases and state collection of data directly related to the provision of welfare should be included in the Bill. Participants recommended that individuals should have the right to opt out of data collection for the purpose of providing welfare programmes and other state-run programmes.

The need to define the term “legitimate purpose” was pointed out to ensure that data is not breached when it is being collected. A participant recommended the introduction of a provision in the Bill for anonymising data in medical case studies and it was pointed out that it is very important to define what type of data can be collected. In particular, it was argued that a large range of personal data is being collected in the name of “public health” and “public security” and that, in many cases, patients may provide misinformed consent, because they may think that the revelation of their personal data is necessary, when actually it might not be. It was recommended that this issue is addressed and that necessary provisions are included in the Bill.

In cases where data is collected for statistics, individuals may not be informed of their data being collected and may not provide consent. It was recommended that this issue also be addressed and included in the Bill. However, it was also pointed out that in many cases individuals may choose to use a service without being able to consent to the collection of their data, Android being an example of this. Thus it was argued that companies should be transparent about how they handle users´ data and should require individuals´ consent prior to data collection.

It was emphasized that governments have a duty of transparency towards their citizens and that the fact that, in many cases, citizens are obliged to hand over their data without giving prior consent to how their data is being used should be taken into consideration. In particular, it was argued that many citizens need to use specific services or welfare programmes and that they are obliged to hand over their personal information. It was recommended that the Bill incorporates provisions which would oblige all services to acquire individual consent prior to data collection. However, the issue that was raised is that often companies provide long and complicated contracts and policy guides which discourage individuals from reading them and thus from providing informed consent; it was recommended that this issue is addressed as well.

Storage and destruction of personal data

The discussion on the storage and destruction of personal data started with a statement that different sectors should have different data retention frameworks. The proposal that a uniform data retention framework should not apply to all sectors was challenged by a participant who stated that the same data retention period should apply to all ISPs and telecoms. Furthermore, it was added that regulators should specify the data retention period based on specific conditions and circumstances. This argument was countered by participants who argued that each sector should define its data retention framework depending on the many variables and factors which affect the collection and use of data.

In European laws, no specific data retention periods are established. In particular, European laws generally state that data should only be retained for a period related to the purpose of its collection. Hence it was pointed out that data retention frameworks should vary from sector to sector, as data, for example, may need to be retained longer for medical cases than for other cases. This argument, however, was countered by participants who argued that leaving the prescription of a data retention period to various sectors may not be effective in India.

Questions of how data retention periods are defined were raised, as well as which parties should be authorised to define the various purposes for data retention. One participant recommended that a common central authority is established, which can help define the purpose for data retention and the data retention period for each sector, as well as to ensure that data is destroyed once the data retention period is over. Another participant recommended that a three year data retention period should be applied to all sectors by default and that such periods could be subject to change depending on specific cases.

Security of personal data and duty of confidentiality

Participants recommended that the definition of “data integrity” should be included in Chapter 1 of the draft Privacy (Protection) Bill 2013. Other participants raised the need to define the term “adequacy” in the Bill, as well as to state some parameters for it. It was also suggested that the term “adequacy” could be replaced by the term “reasonable”.

One of the participants raised the issue of storing data in a particular format, then having to transfer that data to another format which could result in the modification of that data. It was pointed out that the form and manner of securing personal data should be specifically defined within the Bill. However, it was argued that the main problem in India is the implementation of the law, and that it would be very difficult to practically implement the draft Privacy (Protection) Bill in India.

Disclosure of personal data

The discussion on the disclosure of personal data started with a participant arguing that the level of detail disclosed within data should be specified within the Bill. Another participant argued that the privacy policies of most Internet services are very generic and that the Bill should prevent such services from publicly disclosing individuals´ data. On this note, a participant recommended that a contract and a subcontract on the disclosure of personal data should be put in place in order to ensure that individuals are aware of what they are providing their consent to.

It was recommended that the Bill should explicitly state that data should not be disclosed for any other purpose other than the one for which an individual has provided consent. Data should only be used for its original purpose and if the purpose for accessing data changes within the process, consent from the individual should be acquired prior to the sharing and disclosure of that data. A participant argued that banks are involved with consulting and other advisory services which may also lead to the disclosure of data; all such cases when information is shared and disclosed to (unauthorised) third parties should be addressed in the Bill.

Several participants argued that companies should be responsible for the data they collect and should not share or disclose it to unauthorised third parties without individuals´ knowledge or consent. On this note, other participants argued that companies should be legally allowed to share data within a group of companies, as long as that data is not publicly disclosed. An issue raised by one of the participants is that online services, such as Gmail, usually acquire consent from customers through a single “click” on a huge document which not only usually goes unread by customers, but which also only vaguely covers all the cases for which individuals would be providing consent. This creates the potential for abuse, as many specific cases which would require separate, explicit consent are not included within this consent mechanism.

This argument was countered by a participant who stated that the focus should be on the code operations for which individuals sign and provide consent, rather than on the law, because the latter would have negative implications for business. It was highlighted that individuals choose to use specific services and that by doing so they trust companies with their data. Furthermore, it was argued that the various security assurances and privacy policies provided by companies should suffice and that the legal regulation of data disclosure should be avoided.

Consent-based sharing of data should be taken into consideration, according to certain participants. The factor of “opt in” should also be included when a customer is asked to give informed consent. Participants also recommended that individuals should have the power to “opt out”, which is currently not regulated but deemed to be extremely important. Generally it was argued that the power to “opt in” is a prerequisite to “opt out”, but both are necessary and should be regulated in the Bill.

A participant emphasized the need to regulate phishing in the Bill and to ensure that provisions are in place which could protect individuals´ data from phishing attacks. On the issue of consent when disclosing personal data, participants argued that consent should be required even for a second flow of data and for all other flows of data to follow. In other words, it was recommended that individual consent is acquired every time data is shared and disclosed. Moreover, it was argued that if companies decide to share data, to store it somewhere else or to disclose it to third parties years after its initial collection, the individual should have the right to be informed.

However, such arguments were countered by participants who argued that systems, such as banks, are very complex and do not always have a clear idea of where data flows. Thus, it was argued that in many cases companies are not in a position to control the flow of data, due to its lack of traceability, and hence to inform individuals every time their data is being shared or disclosed.

Participants argued that the phrase “threat to national security” in section 10 of the Bill should be explicitly defined, because national security is a very broad term and its loose interpretation could potentially lead to data breaches. Furthermore, participants argued that it is highly essential to specify which authorities would determine if something is a threat to national security.

The discussion on the disclosure of personal data concluded with a participant arguing that section 10 of the Bill on the non-disclosure of information clashes with the Right to Information Act (RTI Act), which mandates the opposite. It was recommended that the Bill addresses the inevitable clash between the non-disclosure of information and the right to information and that necessary provisions are incorporated in the Bill.

Presentation by Mr. Billy Hawkes – Irish Data Protection Commissioner

The Irish Data Protection Commissioner, Mr. Billy Hawkes, attended the fourth Privacy Round Table meeting in Mumbai and discussed the draft Privacy (Protection) Bill 2013.

In particular, Mr. Hawkes stated that data protection law in Ireland was originally introduced for commercial purposes and that, since 2009, privacy has been a fundamental right in the European Union, whose law spells out the basic principles for data protection. Mr. Hawkes argued that India has successful outsourcing businesses, but that there is a concern that data is not properly protected. India has not been granted data protection adequacy by the European Union, mainly because the country lacks privacy legislation.

There is a civil society desire for better respect for human rights, and there is the industry's desire to be considered adequate by the European Union and to attract more international customers. However, privacy and data protection are not covered adequately in the Information Technology Act, which is why Mr. Hawkes argued that the draft Privacy (Protection) Bill 2013 should be enacted in compliance with the principles of the Justice AP Shah Report of the Group of Experts on Privacy. Enacting privacy legislation in India would, according to Mr. Hawkes, be a prerequisite for India potentially being granted data protection adequacy in the future.

The Irish Data Protection Commissioner referred to the current negotiations taking place in the European Union for the strengthening of the 1995 Directive on Data Protection, which is currently being revisited and which will be implemented across the European Union. Mr. Hawkes emphasized that it is important to have strong enforcement powers and to ask companies to protect data. In particular, he argued that data protection is good customer service and that companies should acknowledge this, especially since data protection reflects respect towards customers.

Mr. Hawkes highlighted that other common law countries, such as Canada and New Zealand, have achieved data protection adequacy and that India can potentially achieve adequacy too. More and more countries in the world are seeking European adequacy. Privacy law in India would not only safeguard human rights, but would also be good for business and would attract more international customers, which is why European adequacy is important. In every outsourcing arrangement there needs to be a contract which states that the requirements of the data controller have been met. Mr. Hawkes emphasized that not being data adequate is a competitive disadvantage in the market, because most countries will not want their data outsourced to countries which are considered inadequate in data protection.

As a comment on previous arguments made in the meeting, it was pointed out that in Ireland, if companies and banks are not able to track the flow of data, then they are considered to be behaving irresponsibly. Furthermore, Mr. Hawkes stated that data adequacy is a major reputational issue and that inadequacy in data protection is bad for business. It is necessary to know where the responsibility for data lies, which party initially outsourced the data and how it is currently being used. Data protection is a fundamental right in the European Union and, when data flows outside the European Union, the same level of protection should apply. Thus other non-EU countries should comply with regulations for data protection, not only because it is a fundamental human right, but also because it is bad business not to do so.

The Irish Data Protection Commissioner also referred to the “Right to be Forgotten”, which is the right to be told how long data will be retained for and when it will be destroyed. This provides individuals some control over their data and the right to demand this control.

On the funding of data protection authorities, Mr. Hawkes stated that funding varies and that in most cases, including Ireland, the state funds the data protection authority. Data protection authorities across the European Union are substantially funded by their states and are allocated a budget every year which is supposed to cover all their costs. The Spanish data protection authority, however, is an exception, because a large part of its activities is funded by fines. The data protection authority in the UK (the ICO) is funded through registration fees paid by companies and other organizations.

When asked about how many employees are working in the Irish data protection commissioner´s office, Mr. Hawkes replied that only thirty individuals are employed. Employees working in the commissioner´s office are responsible for overseeing the protection of the data of Facebook users, for example. Facebook-Ireland is responsible for handling users´ data outside of North America and the commissioner´s office conducted a detailed analysis to ensure that data is protected and that the company meets certain standards. Facebook´s responsibility is limited as a data controller as individuals using the service are normally covered by the so-called "household exemption" which puts them outside the scope of data protection law. The data protection commissioner conducts checks and balances, writes reports and informs companies that if they comply with privacy and data protection, then they will be supported.

Data protection in Ireland covers all organizations, without exception. Mr. Hawkes stated that the EU data protection commissioners meeting in the "Article 29" Working Party spend a significant amount of their time dealing with companies like Google and Facebook and with whether they protect their customers´ data.

The Irish Data Protection Commissioner recommended that India establish a data protection commission based on the principles included in the Justice AP Shah Report of the Group of Experts on Privacy. In particular, an Indian data protection commission would have to deal with a mix of audit inspections, complaints, greater involvement with sectors, transparency, accountability and liability under the law. Mr. Hawkes emphasized that codes of practice should be implemented and that the focus should not be on bureaucracy, but on accountability. It was recommended that India adopt an accountability approach, under which punishment would be in place when data is breached.

On the recent leaks on the NSA´s surveillance programme, PRISM, Mr. Hawkes commented that he was not surprised. U.S. companies are required to give access to U.S. law enforcement agencies and such access is potentially much looser in the European Union than in the U.S., because in the U.S. a court order is normally required to access data, whereas in the European Union that is not always the case. Mr. Hawkes stated that there needs to be a constant questioning of the proportionality, necessity and utility of surveillance schemes and projects in order to ensure that the right to privacy and other human rights are not violated.

Mr. Hawkes stated that the same privacy law should apply to all organizations and that India should work towards data protection adequacy over the next few years. The Irish Data Protection Commissioner is responsible for Facebook Ireland, and European law applies to any organisation that comes under European jurisdiction, whether it is a bank or a company. Mr. Billy Hawkes emphasized that the focus in India should be on data protection adequacy and on protecting citizens´ rights.

Meeting conclusion

The fourth Privacy Round Table meeting entailed a discussion of the draft Privacy (Protection) Bill 2013, and Mr. Billy Hawkes, the Irish Data Protection Commissioner, gave a presentation on data protection adequacy and on his thoughts on data protection in India. The discussion on the draft Privacy (Protection) Bill 2013 led to a debate and analysis of the definitions used in the Bill, of chapter 2 on the right to privacy, and of data collection, data retention, data sharing and data disclosure. The participants provided a wide range of recommendations for the improvement of the draft Privacy (Protection) Bill, all of which will be incorporated in the final draft. The Irish Data Protection Commissioner, Mr. Billy Hawkes, stated that the European Union has not granted data protection adequacy to India because it lacks privacy legislation and that inadequacy is not only a competitive disadvantage in the market, but also shows a lack of respect towards customers. Mr. Hawkes strongly recommended that privacy legislation in compliance with the Justice AP Shah report be enacted, to ensure that India can potentially achieve data protection adequacy in the future and that citizens´ right to privacy and other human rights are guaranteed.

Open Letter to Prevent the Installation of RFID tags in Vehicles

by Maria Xynou last modified Jul 12, 2013 10:59 AM
The Centre for Internet and Society (CIS) has sent this open letter to the Society of Indian Automobile Manufacturers (SIAM) to urge them not to install RFID tags in vehicles in India.

This research was undertaken as part of the 'SAFEGUARDS' project that CIS is undertaking with Privacy International and IDRC


This letter is with regard to the installation of Radio Frequency Identification (RFID) tags in vehicles in India.

On behalf of the Centre for Internet and Society, we urge you to prevent the installation of RFID tags in vehicles in India, as the legality, necessity and utility of RFID tags have not been adequately proven. Such technologies raise major ethical concerns, since India lacks privacy legislation which could safeguard individuals' data.

The proposed rule 138A of the Central Motor Vehicle Rules, 1989, mandates that RFID tags are installed in all light motor vehicles in India. However, section 110 of the Motor Vehicles Act (MV Act), 1988, does not bestow on the Central Government a specific empowerment to create rules in respect to RFID tags. Thus, the legality of the proposed rule 138A is questioned, and we urge you to not proceed with an illegal installation of RFID tags in vehicles until the Supreme Court has clarified this issue.

The installation of RFID tags in vehicles is not only currently illegal, but it also raises major privacy concerns. RFID tags yield locational information, and thus reveal information as to an individual’s whereabouts. This could lead to a serious invasion of the right to privacy, which is at the core of personal liberty, and constitutionally protected in India. Moreover, the installation of RFID tags in vehicles is not in compliance with the privacy principles of the Report of the Group of Experts on Privacy, as, among other things, the architecture of RFID tags does not allow for consent to be taken from individuals for the collection, use, disclosure, and storage of information generated by the technology.[1]

The Centre for Internet and Society recently drafted the Privacy (Protection) Bill 2013 – a citizen's version of a possible privacy legislation for India.[2] The Bill defines and establishes the right to privacy and regulates the interception of communications and surveillance, and would include the regulation of technologies like RFID tags. As this Bill has not been enacted into law and India lacks a privacy legislation which could safeguard individuals' data, we strongly urge you to not require the mandatory installation of RFID tags in vehicles, as this could potentially violate individuals' right to privacy and other human rights.

As the proposed rule 138A, which mandates the installation of RFID tags in vehicles, is currently illegal and India lacks privacy legislation which would regulate the collection, use, sharing of, disclosure and retention of data, we strongly urge you to ensure that RFID tags are not installed in vehicles in India and to play a decisive role in protecting individuals' right to privacy and other human rights.

Thank you for your time and for considering our request.

Sincerely,

Centre for Internet and Society (CIS)

 

 

[1]. Report of the Group of Experts on Privacy: http://planningcommission.nic.in/reports/genrep/rep_privacy.pdf

[2]. Draft Privacy (Protection) Bill 2013: http://cis-india.org/internet-governance/blog/privacy-protection-bill-2013.pdf

The State is Snooping: Can You Escape?

by Snehashish Ghosh last modified Apr 29, 2019 03:09 PM
Blanket surveillance of the kind envisaged by India's Centralized Monitoring System achieves little, but blatantly violates the citizen's right to privacy; Snehashish Ghosh explores why it may be dangerous and looks at potential safeguards against such intrusion.

The Snowden Leaks have made it amply clear that the covert surveillance conducted by governments is no longer covert. Information by its very nature is prone to leaks. The discretion lies completely in the hands of the personnel handling your data or information. Whether it is through knowledge obtained by an intelligence analyst about the US Government conducting indiscriminate surveillance, or hackers infiltrating a secure system and leaking personal information, stored information has a tendency to come out in the open sooner or later.

This raises the question whether, with the advancement of technologies, we should trust our personal information and data with computers. Should we have more stringent laws and procedural safeguards to protect our personal information? Of course, the broader question that remains is whether we have a ‘Right to be Forgotten’.

Similar to PRISM in the US, India is also implementing a Centralized Monitoring System (CMS), which would have the capability to conduct multiple privacy-intrusive activities, ranging from call data record analysis to location-based monitoring. Given the circumstances and the current revelations by a whistleblower in the US, it is imperative to take a closer look at the surveillance technologies being deployed by India and to question what implications they might have in the future.

Technological shift and procedural safeguards
The need for procedural safeguards was brought to light before the Supreme Court when news reports surfaced about the tapping of politicians' phones by the CBI. The Court, while deciding on the issue of phone tapping in People’s Union of Civil Liberties v. Union of India (1996), observed that the Indian Telegraph Act, 1885 is an ancient legislation which does not address the issue of telephone tapping. Thereafter, the court issued guidelines, which were implemented by the Government by amending the Indian Telegraph Rules, 1951 and inserting Rule 419A. These procedural safeguards ensure that due process is followed by any law enforcement agency while conducting surveillance.

Section 5(2) of the Indian Telegraph Act, 1885 grants the Government the power to conduct surveillance on the occurrence of any public emergency or in the interest of public safety. Surveillance is legitimized only if these conditions of public emergency or public safety arise, and only if the concerned authority is satisfied that it is necessary or expedient to issue an order for interception in the interest of “the sovereignty and integrity of India, the security of the State, friendly relations with foreign States or public order or for preventing incitement to the commission of an offence”. The same was reaffirmed by the Supreme Court in the 1996 judgment on wire tapping.

Now, as the Government of India is planning to launch a new technology, the Centralized Monitoring System (CMS) which would snoop, track and monitor communication data flowing through telecom and data networks, the question arises: can we have procedural safeguards which would protect our right to privacy against technologies such as the CMS?

The key component of a procedural safeguard is human discretion; either a court authorization or an order from a high-ranking government official is necessary to conduct targeted surveillance, and the reasons for conducting surveillance have to be recorded in writing. This is the procedure ordinarily followed by law enforcement agencies before conducting any form of surveillance. However, with the computational turn, governments have resorted to practices which do away with human discretion: dragnet systems allow for blanket surveillance. Before getting to the problems in evolving a due process for systems like the CMS, it is imperative to examine the capabilities of the system.

Centralized Monitoring System and death of due process
The setting up of a CMS was conceptualized in India after the 2008 Mumbai attacks. It was further consolidated and found a place in the Report of the Telecom Working Group on the Telecom Sector for the Twelfth Five Year Plan (2012-2017). The Report was published in August 2011 and goes into the details of the CMS.

The Report indicates that the technology will cater to “the requirements of security management for law enforcement agencies for interception, monitoring, data analysis/mining, anti‐social‐networking using the country’s telecom infrastructure for unlawful activities.”

The CMS will also be capable of running algorithms for the interception of connection-oriented networks, voice over internet protocol (VoIP) and video over IP, as well as GPS-based monitoring systems. These algorithms would be able to intercept any communication without any intervention from the telecom or internet service provider. The system would also have the capability to intercept and analyze data on any communication network and to conduct location-based monitoring by tracking GPS locations. Given such capabilities, it is clear that a computer system will be sifting through internet and communication data and will conduct surveillance as instructed through algorithms. This would include identifying patterns, profiling and also storing data for posterity. Moreover, the CMS will have direct access to the telecommunication infrastructure and would be monitoring all forms of communication.

With the introduction of CMS, state surveillance will shift to blanket surveillance from the current practice of targeted surveillance which can be carried out under specific circumstances that are well defined in the law and in judgments. Moreover, when it comes to current means of surveillance, there are well-defined procedures under the law which have the ability to prevent misuse of the surveillance systems. This is not to say that the current procedural safeguards under the laws are not prone to abuse, but if implemented properly, there is less chance of them being misused. Furthermore, with strong privacy and data protection laws, unlawful and illegal surveillance can be minimized.

In the current legal framework with respect to surveillance, if the CMS is implemented it will be in violation of the fundamental rights to privacy and freedom of speech as guaranteed under our Constitution. It will also be in contravention of the procedural safeguards laid down in the Supreme Court judgement and in Rule 419A of the Indian Telegraph Rules. Strong privacy laws and data protection laws may be put in place, which are completely absent now. But at the end of the day, a machine will be spying on every citizen of India, and on anyone using any communication service, without any specific targets or suspects.

In the People’s Union of Civil Liberties v. Union of India (1996), the Supreme Court laid down that “the substantive law as laid down in Section 5(2) of the [Indian Telegraph Act, 1885] must have procedural backing so that the exercise of power is fair and reasonable.” But with technologies such as CMS, it will be very difficult to have any form of procedural backing because the system would do away with human discretion which happens to be a key ingredient of any legal procedure.

The argument which can be made in favour of the CMS, if any, is that a machine will be going through personal data, which will not be available to any personnel or law enforcement agency without authorization, and that it will therefore adhere to due process. However, such a system will be keeping track of all personal information. The right to privacy is the right to be left alone, and any incursion on this fundamental right can only be allowed in special cases: in cases of public emergency or threat to public safety. So, electronic blanket surveillance without human intervention also amounts to a violation of the substantive law, which specifically allows surveillance to be conducted only under certain conditions, and not through a system such as the CMS that is designed to keep a constant watch on everyone, irrespective of whether there is a need to do so.

Additionally, there exists a strong, pre-established notion that whatever comes out of a computer is bound to be true and authentic and there cannot be any mistakes. We have witnessed this in the past where an IT professional from Bangalore was arrested and detained by the Maharashtra Police for posting derogatory content on Orkut about Shivaji. Later, it was found that the records acquired from the Internet Service Provider were incorrect and the individual had been arrested and detained illegally.

Telephone bills and credit card bills generated by a computer system are often held to be authentic and error-free. With UID, our identity has been reduced to a number and biometrics stored in a database corresponding to that number. It is this trust in anything which comes out of a computer or a machine that can lead to massive abuse of the system in the absence of any form of checks and balances. Artificial systems taking control over human lives, and our almost unflinching trust in technology, will not only cause gross violations of privacy but will also be the death of due process and basic human rights as we know them.

In this regard, due emphasis should be given to the landmark Supreme Court judgment in the case of Maneka Gandhi v. Union of India (1978) which deals with issues related to due process and privacy. It states that "procedure which deals with the modalities of regulating, restricting or even rejecting a fundamental right falling within Article 21 has to be fair, not foolish, carefully designed to effectuate, not to subvert, the substantive right itself. Thus, understood, ‘procedure’ must rule out anything arbitrary, freakish or bizarre. A valuable constitutional right can be canalised only by canalised processes".

When machines and robots are deployed to conduct blanket surveillance, impinging on the most fundamental right to life and liberty and violating the basic tenets of due process, not much can be done by way of procedure. What, then, do we resort to? Can there be a compromise between the right to privacy and security?

A no-win situation
In reality, dragnet or blanket surveillance is not very useful for gathering the intelligence needed to prevent threats to national security and public safety, or instances of public emergency. Suppose the CMS is used to mine data and analyse content related to anti-social activities, and suppose the system is 99 per cent accurate. The remaining 1 per cent of false positives is still an enormous set when applied to hundreds of millions of users. And because genuine suspects are a tiny fraction of the population, the people flagged by such a system will be overwhelmingly innocent; if the false positive rate is higher than 1 per cent, the number of innocent citizens caught in the terrorist net would be higher still.
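
To make the arithmetic concrete, here is a minimal Python sketch of the base-rate problem; the subscriber count, the prevalence of genuine suspects and the error rates are purely illustrative assumptions, not figures drawn from any CMS documentation:

# Illustrative base-rate arithmetic for dragnet surveillance.
# All numbers are assumptions chosen only to show the effect.
subscribers = 900_000_000        # assumed number of monitored users
prevalence = 1 / 100_000         # assumed fraction who are genuine suspects
true_positive_rate = 0.99        # "99 per cent accurate" on real suspects
false_positive_rate = 0.01       # 1 per cent of innocent users wrongly flagged

actual_suspects = subscribers * prevalence
innocents = subscribers - actual_suspects

flagged_suspects = actual_suspects * true_positive_rate
flagged_innocents = innocents * false_positive_rate
precision = flagged_suspects / (flagged_suspects + flagged_innocents)

print(f"Genuine suspects flagged: {flagged_suspects:,.0f}")
print(f"Innocent people flagged:  {flagged_innocents:,.0f}")
print(f"Share of flags that are genuine: {precision:.2%}")
# With these assumptions, roughly 9 million innocent people are flagged
# against about 8,900 genuine suspects: only about 0.1% of flags are real.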

Even though blanket or dragnet surveillance can keep a tab on everyone, it is nearly impossible for an algorithm to separate the terrorists from the rest. Moreover, the data set collected by the machine is too big for any human analyst to actually analyse and identify the terrorist in the midst of a deluge of information. Therefore, the argument that a system like the CMS will ensure security in exchange for minor intrusions of privacy is a flawed one. Implementation of the CMS will not really ensure security, but it will blatantly violate individuals' right to privacy all the same.

What is perhaps more shocking is that not only will the CMS be futile in preventing security breaches or neutralizing security threats, it will on the contrary expose individual Indian citizens to breaches of personal security. If personal data and information are stored for future reference through a centralized mechanism, which is also the case with UID, they will be highly susceptible to attacks and security threats. It will be a Pandora's box with the potential to create havoc the moment someone gains access to the information with the intention of misusing it. Leaking of personal information and data on a large scale can be detrimental to society and give rise to instances of public emergency.

The ‘Right to be Forgotten’

Currently, the European Union is engulfed in a debate over "Right to be Forgotten" laws. The right to be forgotten finds its origins in the French law of le droit à l'oubli, or the right of oblivion, under which a convict who has served his sentence can object to the publication of the facts of his conviction, imprisonment or penalty. This idea has found new meaning in the context of social media and the internet, where it translates into the right to have all our personal information deleted permanently. This is an important issue which India should debate and discuss, as we live in an era where privacy comes at a cost.

On the one hand, technology has made it easier to track, trace, monitor and snoop; on the other, it has also driven innovation in encryption and anonymity tools. Encryption tools such as OpenPGP can secure information from third-party access, and the Tor Browser allows a user to surf the web anonymously. The use of such technologies should be encouraged, as there is no law which prohibits their use. If systems are being built to spy on us, it is better to use technologies that protect our personal information from such surveillance systems.
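
As a minimal sketch of the kind of tool referred to above (assuming GnuPG and the third-party python-gnupg wrapper are installed, and that a public key for the placeholder recipient has already been imported), OpenPGP encryption of a message can look like this:

# A minimal sketch of OpenPGP encryption via the python-gnupg wrapper
# around a locally installed GnuPG. The recipient ID and message are
# placeholders for illustration only.
import gnupg

gpg = gnupg.GPG()  # uses the default GnuPG home directory and keyring

message = "Meet at the usual place at 6 pm."
recipient = "someone@example.org"  # a public key for this ID must be imported

encrypted = gpg.encrypt(message, recipient)

if encrypted.ok:
    # The ASCII-armoured ciphertext can travel over any channel; an
    # interceptor without the recipient's private key sees only this blob.
    print(str(encrypted))
else:
    print("Encryption failed:", encrypted.status)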

SEBI and Communication Surveillance: New Rules, New Responsibilities?

by Kovey Coles last modified Jul 12, 2013 10:51 AM
In this blog post, Kovey Coles writes about the activities of the Securities and Exchange Board of India (SEBI), discusses the importance of call data records (CDRs), and throws light on a significant shift in governmental leniency towards access to private records.

This research was undertaken as part of the 'SAFEGUARDS' project that CIS is undertaking with Privacy International and IDRC


Introduction

The Securities and Exchange Board of India (SEBI) is the country's securities and market regulator, an investigative agency which seeks to combat market offenses such as insider trading. SEBI has received much media attention this month over its recent expansion of authority; the agency is reportedly on track to be granted powers to access telecom companies' CDRs. CDRs are kept by telecommunication companies for billing purposes and contain information on who placed a call, who received it, and how long it lasted, but they do not disclose call content. Although SEBI has emphatically sought several new investigative powers since 2009 (including access to CDRs, surveillance of email, and monitoring of social media), India's Ministry of Finance only recently endorsed SEBI's plea for direct access to service providers' CDRs. SEBI's founding legislation does not mention this capability. Very recently, however, the Ministry of Finance has decided to support expanding current legislation on CDR access for SEBI, the Reserve Bank of India (RBI), and potentially other agencies, when it comes to the prevention of money laundering and other economic offenses.

SEBI’s Authority (Until Now)

Established in 1992 under the Securities and Exchange Board of India Act, SEBI was created with the power of "registering and regulating the working of… [individuals] and intermediaries who may be associated with securities markets in any manner."[1] Its powers have included "calling for information from, undertaking inspection, conducting inquiries and audits of the intermediaries and self-regulatory organisations in the securities market."[2] Although the agency has held the responsibility to investigate records of market activity, it has never explicitly enjoyed a right to CDRs or other communications data. Now, with the intention of "meeting new challenges thrown forward by the technological and market advances,"[3] SEBI and the Ministry of Finance want to extend its record-keeping scope and investigative powers to include CDR access, a form of communications surveillance.

But the ultimate question is whether agencies like SEBI need this type of easy access to records of communication.

What is the Importance of CDR Access?

Reports on SEBI's recent expansion are quick to clarify that the agency is not looking for phone-tapping rights, which would allow interception of the content of telephone calls, but instead seeks only call records. CDRs, in effect, are "metadata": information about information. In this case, it is data about communications, but not the communications themselves. Currently, a total of nine agencies are able to make actual phone-tapping requests in India. But when it comes to access to CDRs, the government seems much more generous in expanding the powers of existing agencies: SEBI, as well as the RBI and others, are all looking to have their authority over CDRs upgraded. Experts argue, however, that "metadata and other forms of non-content data may reveal even more about an individual than the content itself, and thus deserves equivalent protection."[4] A second crucial question, therefore, is whether this sensitive CDR data will enjoy the same degree of protection and safeguards that exist for the interception of communications.
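
To illustrate why such records are revealing, the sketch below models a hypothetical CDR as a small data structure; the field names are assumptions typical of billing records, not the schema of any actual Indian operator:

# A hypothetical call data record (CDR): metadata only, no call content,
# yet it shows who talked to whom, when, for how long and (via the serving
# cell) roughly where. Field names are assumptions for illustration.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CallDataRecord:
    caller: str           # originating phone number
    callee: str           # receiving phone number
    start_time: datetime  # when the call was set up
    duration_sec: int     # how long it lasted
    cell_id: str          # tower that served the call (approximate location)

record = CallDataRecord(
    caller="+91XXXXXXXXXX",
    callee="+91YYYYYYYYYY",
    start_time=datetime(2013, 6, 14, 23, 40),
    duration_sec=540,
    cell_id="DEL-1042",
)

# A nine-minute call to the same number late every night is a pattern an
# analyst can read without ever hearing a word of the conversation.
print(record)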

One reason for the recent move on CDR access is that SEBI and the RBI have found the process of obtaining CDRs too arduous and ill-defined.[5] Currently, under section 92 of the CrPC, Magistrates and Commissioners of Police can request a CDR only with a corresponding first information report (FIR), while there exists no explicit guideline for SEBI's role in the process of CDR acquisition.[6] Although the government may seek to relax this procedure, SEBI's founding legislation prohibits investigation in the absence of "reasonable grounds," as stipulated in section 11C of the SEBI Act.[7] It has always stood that only on such reasonable grounds could SEBI begin inspection of an intermediary's "books, registers, and other documents."[7] With the government creating a way for SEBI and similar agencies to bypass the traditional procedures for access to CDRs, these new standards should incorporate safeguards to ensure the protection of individual privacy. Banking companies, financial institutions, and intermediaries are already obliged to maintain extensive records of transactions, clients, and other financial data under section 12 of the Prevention of Money-Laundering Act, 2002.[8] But books and records containing financial data differ greatly from communications data, which can include much more personal information and may therefore compromise individuals' freedom of speech and expression, as well as their right to privacy.

Significance and Responsibility in this Decision

Judging from SEBI's prior powers of inspection and inquiry, this change may initially seem only a minor expansion of the agency's authority, but it actually represents a significant shift in governmental leniency toward access to private records. As mentioned, the Ministry of Finance's recent goal of extending rights to CDRs is resulting in amended powers for more agencies than SEBI alone. Moreover, this expansion comes on the heels of the controversy surrounding America's National Security Agency (NSA) amassing millions of CDRs and other datasets both domestically and internationally. There is obvious room for concern over Indian citizens' call records being made more easily accessible, with fewer checks and balances in place. The benefit of the new policy is easier access to evidence which could incriminate those involved in financial crimes. But is that benefit actually worth giving SEBI the right to request citizens' call records? In cases involving economic offenses, CDR access often amounts only to circumstantial evidence. In its ongoing battle against insider trading and other financial malpractice, crimes which are inherently difficult to prove, SEBI could aspire to become progressively more omnipresent. But as the agency's reach expands, citizens' rights to privacy are simultaneously being curtailed. Ultimately, the value of preventing economic offenses must be balanced against the value of the people's right to privacy.


[1]. 1992 Securities and Exchange Board of India Act, section 11, part 2(b).

[2]. 1992 Securities and Exchange Board of India Act, section 11, part 2(i).

[3]. “Sebi Finalising new Anti-money laundering guidelines,” The Times of India, June 16, 2013

http://timesofindia.indiatimes.com/business/india-business/Sebi-finalizing-new-anti-money-laundering-guidelines/articleshow/20615014.cms

[4]. International Principles on the Application of Human Rights to Communications Surveillance - http://www.necessaryandproportionate.net/#_edn1

[5]. "Sebi to Soon Get Powers to Access Call Records," Business Today, June 13, 2013

http://businesstoday.intoday.in/story/sebi-call-record-access/1/195815.html

[6]. 1973 Criminal Procedure Code, Section 92 http://trivandrum.gov.in/~trivandrum/pdf/act/CODE_OF_CRIMINAL_PROCEDURE.pdf

“Govt gives Sebi, RBI Access to Call Data Records,” The Times of India, June 14, 2013

http://articles.timesofindia.indiatimes.com/2013-06-14/india/39975284_1_home-ministry-access-call-data-records-home-secretary

[7]. 1992 Securities and Exchange Board of India Act, section 11C, part 8

[8]. 2002 Prevention of Money-Laundering Act, section 12

Privacy Round Table Kolkata

by Prasad Krishna last modified Jul 10, 2013 06:08 AM

Invite-Kolkata.pdf — PDF document, 1090 kB (1116261 bytes)

Way to watch

by Chinmayi Arun last modified Jul 01, 2013 10:17 AM
The domestic surveillance regime in India lacks adequate safeguards.

Chinmayi Arun's column was published in the Indian Express on June 26, 2013.


A petition has just been filed in the Indian Supreme Court, seeking safeguards for our right to privacy against US surveillance, in view of the PRISM controversy. However, we should also look closer to home, at the Indian government's Central Monitoring System (CMS) and other related programmes. The CMS facilitates direct government interception of phone calls and data, doing away with the need to justify interception requests to a third-party private operator. The Indian government, like the US government, has offered the national security argument to defend its increasing intrusion into citizens' privacy. While this argument serves the limited purpose of explaining why surveillance cannot be eliminated altogether, it does not explain the absence of any reasonably effective safeguards.

Instead of protecting our privacy rights from the domestic and international intrusions made possible by technological development, our government is working on leveraging technology to violate privacy with greater efficiency. The CMS infrastructure facilitates large-scale state surveillance of private communication, with very little accountability. The dangers of this have been illustrated throughout history. Although we do have a constitutional right to privacy in India, the procedural safeguards created by our lawmakers thus far offer us very little effective protection of this right.

We owe the few safeguards that we have to the intervention of the Supreme Court of India, in PUCL vs Union of India and Another. In the context of phone tapping under the Telegraph Act, the court made it clear that the right to privacy is protected under the right to life and personal liberty under Article 21 of the Constitution of India, and that telephone tapping would also intrude on the right to freedom of speech and expression under Article 19. The court therefore ruled that there must be appropriate procedural safeguards to ensure that the interception of messages and conversation is fair, just and reasonable. Since lawmakers had failed to create appropriate safeguards, the Supreme Court suggested detailed safeguards in the interim. We must bear in mind that these were suggested in the absence of any existing safeguards, and that they were framed in 1996, after which both communication technology and good governance principles have evolved considerably.

The safeguards suggested by the Supreme Court focus on internal executive oversight and proper record-keeping as the means to achieving some accountability. For example, interception orders are to be issued by the home secretary, and to later be reviewed by a committee consisting of the cabinet secretary, the law secretary and the secretary of telecommunications (at the Central or state level, as the case may be). Records are to be kept of details such as the communications intercepted and all the persons to whom the material has been disclosed. Both the Telegraph Act and the more recent Information Technology Act have largely adopted this framework to safeguard privacy. It is, however, far from adequate in contemporary times. It disempowers citizens by relying heavily on the executive to safeguard individuals' constitutional rights. Additionally, it burdens senior civil servants with the responsibility of evaluating thousands of interception requests without considering whether they will be left with sufficient time to properly consider each interception order.

The extreme inadequacy of this framework becomes apparent when it is measured against the safeguards recommended in the recent report on the surveillance of communications by Frank La Rue, the United Nations special rapporteur on the promotion and protection of the right to freedom of opinion and expression. These safeguards include the following: individuals should have the legal right to be notified that they have been subjected to surveillance or that their data has been accessed by the state; states should be transparent about the use and scope of communication surveillance powers, and should release figures on aggregate surveillance requests, including a break-up by service provider, investigation and purpose; and the collection of communications data by the state must be monitored by an independent authority.
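
As a rough illustration of the "aggregate figures with a break-up" that the special rapporteur recommends, the short Python sketch below tallies a handful of invented interception requests by provider and purpose; the records are made up purely to show the shape of such a report:

# A sketch of aggregate transparency figures: counts of interception
# requests broken up by provider and stated purpose. The records below
# are invented for illustration.
from collections import Counter

requests = [
    {"provider": "Operator A", "purpose": "national security"},
    {"provider": "Operator A", "purpose": "economic offence"},
    {"provider": "Operator B", "purpose": "national security"},
    {"provider": "Operator B", "purpose": "national security"},
    {"provider": "Operator C", "purpose": "public order"},
]

by_provider = Counter(r["provider"] for r in requests)
by_purpose = Counter(r["purpose"] for r in requests)

print("Total interception requests:", len(requests))
print("By provider:", dict(by_provider))
print("By purpose: ", dict(by_purpose))
# Publishing only such aggregates reveals the scale of surveillance
# without disclosing who was placed under it.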

The safeguards recommended by the special rapporteur would not undermine any legitimate surveillance by the state in the interests of national security. They would, however, offer far better means to ensure that the right to privacy is not unreasonably violated. The emphasis placed by the special rapporteur on transparency, accountability and independent oversight is important, because our state has failed to recognise that in a democracy, citizens must be empowered as far as possible to demand and enforce their rights. Their rights cannot rest completely in the hands of civil servants, however senior. There is no excuse for refusing to put these safeguards in place, and making our domestic surveillance regime transparent and accountable, in compliance with our constitutional and international obligations.

World Wide Rule

by Nishant Shah last modified Jul 01, 2013 10:26 AM
Nishant Shah's review of Schmidt and Cohen's book was published in the Indian Express on June 14, 2013.

Click to read the original published in the Indian Express here


Book: The New Digital Age
Author: Eric Schmidt & Jared Cohen
Publisher: Hachette
Price: Rs 650
Pages: 315


When I first heard that Eric Schmidt, the chairman of Google, and Jared Cohen, the director of the techno-political think-tank Google Ideas, were co-authoring a book about our future and how it is going to be re-shaped by the emergence of digital technologies, I must confess I was sceptical. When people who do things that you like start writing about those things, it is not always a pretty picture. Or an easy read. However, like all sceptics, I am only a romantic waiting to be validated. So, when I picked up The New Digital Age, I was hoping to be entertained, informed and shaken out of my socks as the gurus of the interwebz spin science fiction futures for our times. Sadly, I have been taught my lesson and have slid back into hardened scepticism.

Here is the short version of the book: Technology is good. Technology is going to be exciting. There are loads of people who haven't had it yet. There are not enough people who have figured out how things work. Everybody needs to go online because, no matter what, technologies are here to stay and they are going to be the biggest corpus of power. They write, "There is a canyon dividing people who understand technology and people charged with addressing the world's toughest geopolitical issues, and no one has built a bridge…As global connectivity continues its unprecedented advance, many old institutions and hierarchies will have to adapt or risk becoming obsolete, irrelevant to modern society." So the handful who hold the reins of the digital (states, corporates, artificial intelligence clusters) are either going to rule the world, or, well, write books about it.

The long version is slightly more nuanced, even though it fails to give us what we have grown to expect of all things Google — the bleeding edge of back and beyond. For a lay person, observations that Schmidt and Cohen make about the future of the digital age might be mildly interesting in the way title credits to your favourite movie can be. Once they have convinced us, many, many times, that the internet is fast and fluid and that it makes things fast and fluid and hence the future we imagine is going to be fast and fluid, the authors tell us that the internet is spawning a new "caste system" of haves, have-nots, and wants-but-does-not-haves.

Citing the internet as "the largest experiment involving anarchy in history", they look at the new negotiations of power around the digital. Virulent viruses from the "Middle East" make their appearance. Predictably, wars of censorship and free information in China get due attention. Telcos get a big hand for building the infrastructure which can sell Google phones to people in Somalia. The book offers a straightforward (read military) reading of drones and less biased views on cyberterrorism than expected, which at least escape the jingoism that the USA has been passing off in the service of a surveillance state. And more than anything else, the book shows politicos and governments around the world that the future is messy, anarchy is at hand, but as long as they put their trust in Big Internet Brothers, the world will be a manageable place.

So while you can clearly see where my review of the book is heading, I must give it its due credit.

There are three things about this book that make it interesting. The first is how Schmidt and Cohen seem to be in a seesaw dialogue with themselves. They realise that five billion people are going to get connected online. They gush a little about what this net-universality is going to mean. And then immediately, they also realise that we have to prepare ourselves for a "Brave New World," which is going to be infinitely more messy and scary. They recognise that the days of anonymity on the Web are gone, with real life identities becoming our primary digital avatars. However, they also hint at a potential future of pseudonymity that propels free speech in countries with authoritarian regimes. This oscillation between the good, the bad, the plain and the incredible, keeps their writing grounded without erring too much either on the side of techno-euphoria or dystopic visions of the future.

Second, and perhaps justly so, the book doles out a lot of useful information, not just for the techno-neophyte but also for the amateur savant. There are stories about "Currygate" in Singapore, of what Vodafone did in Egypt after the Arab Spring, and of the "Human Flesh Search Engine" in China, which offer a comprehensive, if not critical, view of the way things are. Schmidt and Cohen have been everywhere on the ether and they have cyberjockeyed for decades to tell us stories that might be familiar but are still worth the effort of writing.

Third, it is a readable book. It doesn't require you to Telnet your way into obscure meaning sets in the history of computing. It is written for people who are still mystified not only about the past of the Net but also its future, and treads a surprisingly balanced ground in both directions. It is a book you can give to your grandmother, and she might be inspired to get herself a Facebook (or maybe a Google +) account.

But all said and done, I expected more. It is almost as if Schmidt and Cohen are sitting on a minefield of ideas which they want to hint at but don't yet want to share, because they might be able to turn them into a new app for the Nexus instead. It is a book that could have been. It wasn't. It is ironic how silent the book is about the role that big corporations play in shaping our techno-futures, and that it is printed as a dead-tree book with closed licensing, so I couldn't get a free copy online. For people claiming to build new and political futures, the fact that this wisdom could not come out in more accessible forms and formats says a lot about how seriously we can take their views of the future.

A Technological Solution to the Challenges of Online Defamation

by Eduardo Bertoni — last modified Jul 02, 2013 02:47 PM
When people are insulted or humiliated on the Internet and decide to take legal action, their cases often follow a similar trajectory.

This blog post written by Eduardo Bertoni was published in GlobalVoices on May 28, 2013. CIS has cross-posted this under the Creative Commons Licence.


Consider this scenario:

A public figure, let’s call her Senator X, enters her name into a search engine. The results surprise her — some of them make her angry because they come from Internet sites that she finds offensive. She believes that her reputation has been damaged by certain content within the search results and, consequently, that someone should pay for the personal damages inflicted.

Her lawyer recommends appealing to the search engine – the lawyer believes that the search engine should be held liable for the personal injury caused by the offensive content, even though the search engine did not create the content. The Senator is somewhat doubtful about this approach, as the search engine also serves as a useful tool for her own self-promotion; after all, not all sites that appear in the search results are bothersome or offensive. Her lawyer explains that while the authors of the offensive content will likely be difficult to track down, they should also be held liable. At that point, one option is to request that the search engine block any offensive sites related to the individual's name from its searches. Yet the lawyer knows that this cannot be done without an official petition, which will require a judge's intervention.

“We must go against everyone – authors, search engines – everyone!” the Senator will likely say. “Come on!” says the lawyer, “let's move forward.” However, it does not occur to either the Senator or the lawyer that there may be an alternative approach to that of classic courtroom litigation. The proposal I make here suggests a change to the standard approach – a change that requires technology to play an active role in the solution.

Who is liable?

The “going against everyone” approach poses a critical question: Who is legally liable for content that is available online? Authors of offensive content are typically seen as primarily liable. But should intermediaries such as search engines also be held liable for content created by others?

This last question raises a very specific, procedural question: Which intermediaries will be the subjects of scrutiny and viewed as liable in these types of situations? To answer this question, we must distinguish between intermediaries that provide Internet access (e.g. Internet service providers) and intermediaries that host content or offer content search functions. But what exactly is an ‘intermediary’? And how do we evaluate where an intermediary’s responsibility lies? It is also important to distinguish those intermediaries which simply connect individuals to the Internet from those that offer different services.

What kind of liability might an intermediary carry?


This brings us to the second step in the legal analysis of these situations: How do we determine which model we use in defining the responsibility of an intermediary? Various models have been debated in the past. Leading concepts include:

  • strict liability, under which the intermediary must legally respond to all offensive content
  • subjective liability, under which the intermediary’s response depends on what it has done and what it was or is aware of
  • conditional liability – a variation on subjective liability – under which, if an intermediary was notified or advised that it was promoting or directing users to illegal content and did nothing in response, it is legally required to respond to the offensive content.

These three options for determining liability and responses to offensive online content have been included in certain legislation and have been used in judicial decisions by judges around the world. But not one of these three alternatives provides a perfect standard. As a result, experts continue to search for a definition of liability that will satisfy those who have a legitimate interest in preventing damages that result from offensive content online.

How are victims compensated?

Now let’s return to the example presented earlier. Consider the concept of Senator X’s “satisfaction.” In these types of situations, “satisfaction” is typically economic — the victim will sue for a certain amount of money in “damages”, and she can target anyone involved, including the intermediary.

Interestingly, in the offline world, alternatives have been found for victims of defamation: For example, the “right to reply” aims to aid anyone who feels that his or her reputation or honor has been damaged and allows individuals to explain their point of view.

We must also ask whether the right to reply contradicts freedom of expression. It is critical to recognize that freedom of expression is a human right recognized by international treaties; technology should be able to achieve a similar solution to the problem of online defamation without putting freedom of expression at risk.

Solving the problem with technology

In an increasingly online world, we have unsuccessfully attempted to apply traditional judicial solutions to the problems faced by victims like Senator X. There have been many attempts to apply traditional standards because lawyers are accustomed to using them in other situations. But why not change the approach and use technology to provide the "satisfaction" the victim seeks?

The idea of including technology as part of the solution, when it is also part of the problem, is not new. If we combine the possibilities that technology offers us today with the older idea of the right to reply, we could change the broader focus of the discussion.

My proposal is simple: some intermediaries (like search engines) should create a tool that allows anyone who feels that he or she is the victim of defamation and offensive online content to denounce and criticize the material on the sites where it appears. I believe that for victims, the ability to say something and to have their voices heard on the sites where others will come across the information in question will be much more satisfactory than a trial against the intermediaries, where the outcome is unknown.
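
Purely as a sketch of what such a tool might store, the snippet below models a hypothetical "right to reply" annotation attached to the URL where the disputed content appears; the field names and behaviour are assumptions for illustration, not a description of any existing search-engine feature:

# A hypothetical "right to reply" annotation that an intermediary could
# display alongside a disputed result. Field names are assumptions made
# for illustration; no existing search-engine API is being described.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class ReplyAnnotation:
    target_url: str            # page carrying the allegedly defamatory content
    author: str                # the person exercising the right to reply
    reply_text: str            # their response, shown next to the result
    created_at: datetime = field(default_factory=datetime.now)

annotation = ReplyAnnotation(
    target_url="http://example.com/offensive-post",
    author="Senator X",
    reply_text="I dispute the claims made on this page; here is my account...",
)

# A search engine honouring the annotation would render reply_text next to
# the listing for target_url, rather than delisting or blocking the page.
print(annotation.target_url, "->", annotation.reply_text[:40] + "...")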

This proposal would also help to limit regulations that impose liability on intermediaries such as search engines. This is important because many of the regulations that have been proposed are technologically impractical. Even when they can be implemented, they often result in censorship; requirements that force intermediaries to filter content regularly infringe on rights such as freedom of expression or access to information.

This proposal may not be easy to implement from a technical standpoint. But I hope it will encourage discussion about the issue, given that a tool like the one I have proposed, although with different characteristics, was once part of Google's search engine (the tool, "Google Sidewiki", is now discontinued). It should be possible to improve upon this tool, adapt it, or do something completely new with the technology it was based on in order to help victims of defamation clarify their opinions and speak their minds about these issues, instead of relying on courts to impose censorship requirements on search engines. This tool could provide much greater satisfaction for victims and could help prevent the violation of the rights of others online as well.

Critics may argue that people will not read the disclaimers or statements written by “defamed” individuals and that the impact and spread of the offensive content will continue unfettered. But this is a cultural problem that will not be fixed by placing liability on intermediaries. As I explained before, the consequences of doing so can be unpredictable.

If we continue to rely on traditional regulatory means to solve these problems, we’ll continue to struggle with the undesirable results they can produce, chiefly increased controls on information and expression online. We should instead look to a technological solution as a viable alternative that cannot and should not be ignored.


Eduardo Bertoni is the Director of the Center for Studies on Freedom of Expression and Access to Information at Palermo University School of Law in Buenos Aires. He served as the Special Rapporteur for Freedom of Expression to the Organization of American States from 2002-2005.

Indian surveillance laws & practices far worse than US

by Pranesh Prakash last modified Jul 12, 2013 11:09 AM
Explosive would be just the word to describe the revelations by National Security Agency (NSA) whistleblower Edward Snowden.

Pranesh Prakash's column was published in the Economic Times on June 13, 2013. This research was undertaken as part of the 'SAFEGUARDS' project that CIS is undertaking with Privacy International and IDRC.


Now, with the American Civil Liberties Union suing the Obama administration over the NSA surveillance programme, more fireworks could be in store. Snowden's exposé provides proof of what many working in the field of privacy have long known. The leaks show that the NSA (through the FBI) obtained a secret court order requiring telecom provider Verizon to hand over "metadata", i.e., non-content data like phone numbers and call durations, relating to millions of US customers (known as dragnet or mass surveillance); that the NSA has a tool called Prism through which it queries at least nine American companies (including Google and Facebook); and that it also has a tool called Boundless Informant (a screenshot of which revealed that, in February 2013, the NSA collected 12.61 billion pieces of metadata from India).

Nothing Quite Private

The outrage in the US has to do with the fact that much of the data the NSA has been granted access to by the court relates to communications between US citizens, something the NSA is not authorised to gain access to. What should be of concern to Indians is that the US government refuses to acknowledge non-Americans as people who also have a fundamental right to privacy, if not under US law, then at least under international laws like the Universal Declaration of Human Rights and the ICCPR.

US companies such as Facebook and Google have had a deleterious effect on privacy. In 2004, there was a public outcry when Gmail announced it was using an algorithm to read through your emails to serve you advertisements. Facebook and Google collect massive amounts of data about you and websites you visit, and by doing so, they make themselves targets for governments wishing to snoop on you, legally or not.

Worse, Indian-Style

That said, Google and Twitter have at least challenged a few of the secretive National Security Letters requiring them to hand over data to the FBI, and have won. Yahoo India has challenged the authority of the Controller of Certifying Authorities, a technical functionary under the IT Act, to ask for user data, and the case is still going on.

To the best of my knowledge, no Indian web company has ever challenged the government in court over a privacy-related matter. Actually, Indian law is far worse than American law on these matters. In the US, the NSA needed a court order to get the Verizon data. In India, the licences under which telecom companies operate require them to provide this. No need for messy court processes.

The law we currently have — sections 69 and 69B of the Information Technology Act — is far worse than the surveillance law the British imposed on us. Even that lax law has not been followed by our intelligence agencies.

Keeping it Safe

Recent reports reveal that India's secretive National Technical Research Organisation (NTRO) — created under an executive order and not accountable to Parliament — often goes beyond its mandate: in 2006-07, it tried and failed to crack into Google and Skype servers, succeeded in cracking Rediffmail and Sify servers, and more recently was accused, in a report by the Department of Electronics and IT, of unauthorised access to government officials' emails.

While the government argues systems like the Telephone Call Interception System (TCIS), the Central Monitoring System (CMS) and the National Intelligence Grid (Natgrid) will introduce restrictions on misuse of surveillance data, it is a flawed claim. Mass surveillance only increases the size of the haystack, which doesn't help in finding the needle. Targeted surveillance, when necessary and proportional, is required. And no such systems should be introduced without public debate and a legal regime in place for public and parliamentary accountability.

The government should also encourage the use of end-to-end encryption, ensuring Indian citizens' data remains safe even if stored on foreign servers. Merely requiring those servers to be located in India will not help, since that information is still accessible to American agencies if it is not encrypted. Moreover, the currently lax Indian laws will then apply, degrading users' privacy even more.

Indians need to be aware that they have virtually no privacy when communicating online unless they take proactive measures. Free and open-source software and technologies can help: OpenPGP can secure email, Off-The-Record (OTR) can secure instant messages, TextSecure can secure SMSes, and Tor can anonymise internet traffic.

Privacy (Protection) Bill, 2013

by Prasad Krishna last modified Jul 03, 2013 09:39 AM

The Privacy (Protection) Bill, 2013 - 1 June 2013 (for Bombay).pdf — PDF document, 196 kB (200944 bytes)

Privacy Protection Bill, 2013 (With Amendments based on Public Feedback)

by Elonnai Hickok last modified Jul 12, 2013 10:50 AM
In 2013, CIS drafted the Privacy Protection Bill as a citizens' version of privacy legislation for India. Since April 2013, CIS has been holding Privacy Roundtables in collaboration with FICCI and DSCI, with the objective of gaining public feedback on the Privacy Protection Bill and other possible frameworks for privacy in India.

This research was undertaken as part of the 'SAFEGUARDS' project that CIS is undertaking with Privacy International and IDRC


As a part of this process, CIS has been amending the Privacy Protection Bill based on public feedback. Below is the text of the Bill as amended according to feedback gained from the New Delhi, Bangalore, and Chennai Roundtables.

Click to download the Privacy Protection Bill, 2013 with latest amendments (PDF, 196 Kb).

The Difficult Balance of Transparent Surveillance

by Kovey Coles last modified Jul 15, 2013 04:23 AM
Is it too much to ask for transparency in data surveillance? On occasion, companies like Microsoft, Facebook, and the other Silicon Valley giants would say no. When customers join these services, each company provides its own privacy statement, which assures customers of the safety and transparency that accompany their personal data.

This research was undertaken as part of the 'SAFEGUARDS' project that CIS is undertaking with Privacy International and IDRC


Google even publishes annual “Transparency Reports” which detail the data movement behind the scenes. Governments, too, are somewhat open about surveillance methods, for example with the public knowledge of the existence and role of institutions like America’s NSA and India’s CMS. These façades of assurance, however, never satisfy the public enough to protect them from feeling cheated and deceived when information leaks about surveillance practices. And in the face of controversy around surveillance, both service providers and governments scramble to provide explanations for discrepancies between their promises and their practices.

So it seems that transparency might not be too much to ask, but is perhaps a more complicated request than imagined. For some citizens, nothing would be more satisfying than complete transparency about all data collection. For those who regard surveillance as crucial for national security, however, complete transparency would undermine the very efficacy of surveillance practices. And data companies often find themselves caught between these two ends, simultaneously seeking profits by catering to the public while also trying to abide by political and legal frameworks. Therefore, in the process of modern data surveillance, each attempt to resolve the transparency issue will be a delicate balance between three actors: the government, the big data companies, and the people. As rightly stated on the Digital Due Process website, rules for surveillance must carefully consider "the individual's constitutional right to privacy, the government's need for tools to conduct investigations, and the interest of service providers in clarity and customer trust."[1]

So we must unpack the idea of transparency.

First, there should be a distinction made between proactive transparency and reactive transparency, or, the announcement of surveillance practices versus the later access to surveillance records. The former is more risky and therefore more difficult to entertain, while the latter may lack any real substance beyond satisfying inquiries. Also consider the discrepancy in motivation for transparency between the actors. For the citizen, is transparency really an end goal, or is it only a stepping stone in the argument for eradication of surveillance practices in the name of rights to privacy? Here, we ascertain the true value of total transparency; will it ever please citizens to learn of a government’s most recent undermining of the private sphere?

Reactive transparency has been achieved in India only in recent years, during a number of well-publicized legal cases. In one of the earliest instances, Reliance Communications filed an affidavit in the Supreme Court disclosing the exact number of surveillance directives given by the government. It revealed that 151,000 Reliance accounts were monitored for a project between 2006 and 2010, with 3,588 phones tapped in the Delhi region alone in 2005.[2]

There has also been controversy over the extent of reactive transparency, because it has been especially difficult to discern the point at which transparency itself encroaches on privacy, for both the government's and the people's sake. After the data is gathered, its release could further jeopardize citizens and the government. It is important to carefully consider how far reactive transparency can productively go: What will become of the information? Will it publicly reveal how many people were spied on? Who was spied on? What was found through spying? Citizens must take all of this into consideration when requesting transparency.

Meanwhile, service providers embrace transparency when it can benefit their corporation, or as a recent Facebook statement explained, “we’ve been in discussions with U.S. national security authorities urging them to allow more transparency, so that our users around the world can understand how infrequently we are asked to provide user data on national security grounds.” [a] Many of the service providers mentioned in the recently leaked PRISM report have made well-publicized requests to the U.S. government for more transparency.[3]

Not only have they allegedly written requests to the government asking to be allowed to disclose information, but the companies (including Facebook [a], Apple [b], Microsoft [c], and Google [d]) have all released explanatory statements in the wake of the June 2013 PRISM scandal. Although the service providers claim that the request to release data about their cooperation is in the 'interest of transparency,' it instead seems that the motivation is to ease consumers' concerns and help the companies save face. The companies (and the government) will admit their participation in surveillance only once it has become impossible to deny their association with the programs. This shrewd aspect of transparency can be seen most clearly in statements like that of Microsoft, which noted on June 14th, "We have not received any national security orders of the type that Verizon was reported to have received." [c] Spontaneous allusions like this are meant to contrast guilt-conscious service providers favorably with telecom providers such as AT&T and Verizon, who allegedly yielded the most communications data and who, as of now, have yet to release defensive public statements.

Currently, we find ourselves in a situation where entities admit to their collusion in snooping only once information has leaked, indignation has ignited, and scandal has erupted. A half-hearted proactive transparency leads to an outrage demanding reactive semi-transparency. These weak forms of transparency neither satisfy the public, nor allow governments and service providers to maintain dignity.

But now is also a crucial moment for possible re-evaluation and reform of this system, especially in India. Not only is India implementing its own national surveillance system, the CMS,[4] but the recent NSA and PRISM revelations are still sending shockwaves throughout the world of cyber security and surveillance. Last week, a Public Interest Litigation (PIL) was filed in the Indian Supreme Court, arguing that nine foreign service providers (Facebook, Hotmail, Yahoo!, Google, Apple, Skype, Paltalk, AOL, YouTube) violated the trust and privacy of their Indian customers through their collusion with the US government's surveillance programs.[5]

Among other things, the PIL emphatically sought prosecution of the named corporations, demanded that the service providers establish servers in India, and sought stricter rules to prevent Indian officials from using these foreign services for work involving national security. Ultimately, the PIL was rejected by the Supreme Court; although it invoked Rule 6 of the Information Technology Rules, 2011 as the guideline for protecting sensitive information about Indian citizens, the SC saw the PIL as addressing problems outside its jurisdiction, and was quoted as saying "we cannot entertain the petition as an Indian agency is not involved."[5][6]

The SC considered the PIL only partially, however, as certain significant parts of the petition were indeed within Indian domestic jurisdiction, for example the demand to prohibit government officials from using private email services such as Gmail, Hotmail, and Yahoo. And although the SC is not the right forum in which to push for new safeguard legislation, the ideas in the PIL are not invalid, as Indian leaders have long searched for ways of enforcing basic Indian privacy protections in the context of international service providers. Nor is this a problem distinctive to India. International service providers have entered into agreements addressing the same problem of incorporating international customers' rights, formal agreements which India could emulate if it wanted to demand greater privacy or transparency.

For example, there is the Safe Harbor Framework, an arrangement in place to protect and mediate European Union citizens' privacy rights within the servers of foreign (i.e. American) Internet companies. These regulations were established in 2000 and serve the purpose of adjusting foreign companies' standards to incorporate E.U. privacy laws. Under the agreement, E.U. data may only be sent to outside providers who maintain the seven Safe Harbor principles, several of which focus on transparency of data usage.[7] India could enact a similar system, and it would likely alleviate some of the concerns raised in the most recent PIL. These frameworks, however, have not proven completely reliable safeguards either, especially when the service providers' own government uses national security as a means to override the agreement. Although the U.S. government has yet to fully confirm or deny many of the NSA and PRISM allegations in regard to Europe, there is currently strong reason to believe that the surveillance practices may have violated the Safe Harbor agreements by delivering sensitive E.U. citizen data to the U.S. government.[8] It is uncertain how these revelations will affect the agreements between the big Silicon Valley companies and their E.U. customers.

The recent PIL also strongly suggested establishing domestic data servers to keep Indian citizens' information within the country and under the direct supervision of Indian entities. It strongly pushes for self-reliance as the best way to ensure both citizen and national security. The PIL assumes that domestic servers will not only offer better information protection, but also create much-needed jobs and raise national tax revenue.[5] If the allegations about PRISM and the E.U. prove true, the E.U. may decide to support the establishment of European servers as well.

Several of the ideas outlined in the PIL have merit, but they may not be as productive as the petitioners assume. It is true that establishing servers and domestic regulators in India may temporarily protect against unwanted foreign, i.e. American, surveillance. But at the same time, this also increases the likelihood of India's own central government taking a stronger surveillance stance, monitoring its own servers and databases more stringently. It has not yet been described how the CMS will operate, but moving data to domestic servers may simply shift power from the NSA to the CMS. Rather than more privacy or transparency, the situation could easily become a matter of which watcher citizens prefer spying on them.

Even if one government establishes rules which enforce transparency, these may clash with the laws of the service providers' domestic government, i.e. confidentiality in surveillance. Considering all of this, rejection of foreign service providers and promotion of domestic self-reliance may ultimately prove the most effective alternative for nations which are growing rapidly in both internet presence and internet consciousness. But that does not make this option the easiest. Facing the revelations and disillusionment of domestic (CMS) and international (PRISM) surveillance methods, countries like India are approaching a critical juncture. Now is the most important time to establish new norms, while public sentiment is at its highest and transition is most possible: not only creating new laws which can safeguard privacy, but also seriously considering alternatives to foreign service providers like those outlined in June's PIL. Privacy International's guiding principles on communications surveillance also offer useful advice, urging the establishment of oversight institutions which can access surveillance records and periodically publish aggregate data on surveillance methods.[9] Although the balance between security at the national level and security at the personal level will continue to be problematic for nations in the coming years, and even though service providers' positions on surveillance often seem contrived, Microsoft Vice President John Frank made a statement which deserves appreciation, rightly saying, "Transparency alone may not be enough to restore public confidence, but it's a great place to start."[c]


[1]. http://digitaldueprocess.org/

[2]. http://bit.ly/151Ue1H

[3]. http://bit.ly/12XDb1Z

[4]. http://ti.me/11Xh08V

[5]. Copy of 2013 PIL to Supreme Court, Prof. S.N. Singh [attached]

[6]. http://bit.ly/1aXWdbU

[7]. http://1.usa.gov/qafcXe

[8]. http://bit.ly/114hcCX

[9]. http://bit.ly/156wspI


[a]. Facebook Statement: http://bit.ly/ZQDcn6

[b]. Apple Statement: http://bit.ly/1akaBuN

[c]. Microsoft Statement: http://bit.ly/1bFIt31

[d]. Google Statement: http://bit.ly/16QlaqB

CIS Cybersecurity Series (Part 4) - Marietje Schaake

by Purba Sarkar last modified Jul 12, 2013 10:24 AM
CIS interviews Marietje Schaake, member of the European parliament, as part of the Cybersecurity Series
"It is important that we don't confine solutions in military head quarters or in government meeting rooms but that consumers, internet users, NGOs, as well as businesses, together take responsibility to build a resilient society where we also don't forget what it is we are defending, and that is our freedoms... and we have learned hopefully from the war on terror, that there is a great risk to compromise freedom for alleged security and that is a mistake we should not make again." - Marietje Schaake, member of European parliament.

Centre for Internet and Society presents its fourth installment of the CIS Cybersecurity Series.
 
The CIS Cybersecurity Series seeks to address hotly debated aspects of cybersecurity and hopes to encourage wider public discourse around the topic.
 
In this installment, CIS interviews Marietje Schaake, member of the European Parliament for the Dutch party Democrats 66 (D66) within the Alliance of Liberals and Democrats for Europe (ALDE) political group. She serves on the Committee on Foreign Affairs, where she focuses on neighbourhood policy, Turkey in particular; human rights, with a specific focus on freedom of expression, Internet freedom and press freedom; and Iran. In the Committee on Culture, Media, Education, Youth and Sports, Marietje works on Europe's Digital Agenda and the role of culture and new media in the EU's external actions. In the Committee on International Trade, she focuses on intellectual property rights, the free flow of information and the relation between trade and foreign affairs.
 
Marietje's website is: http://www.marietjeschaake.eu/
 

 

This work was carried out as part of the Cyber Stewards Network with aid of a grant from the International Development Research Centre, Ottawa, Canada.


Response from Ministry of Home Affairs

by Prasad Krishna last modified Jul 15, 2013 04:34 AM
Rakesh Mittal's reply received by the Centre for Internet and Society.

Rakesh Mittal's reply.pdf — PDF document, 264 kB (271205 bytes)

Redirected to DEITY for Response to RTI

by Prasad Krishna last modified Jul 15, 2013 05:04 AM
The Ministry of Home Affairs redirected to the Department of Electronics and Information Technology (DEITY), for response, the RTI filed by CIS seeking information on the officials and agencies authorized to intercept telephone messages in India.

Redirected.pdf — PDF document, 325 kB (332929 bytes)

Moving Towards a Surveillance State

by Srinivas Atreya last modified Jul 15, 2013 05:57 AM
Cyberspace is a modern construct of communication, and today a large part of human activity takes place in it. It has become the universal platform where business is executed, discourse is conducted and personal information is exchanged. However, the underbelly of the internet also hosts activities and persons motivated by nefarious intent.

Note: The original tender document of the Assam Police dated 28.02.2013, along with several other tender documents for the procurement of Internet and Voice Monitoring Systems, is attached as a zip folder.


As highlighted in the International Principles on the Application of Human Rights to Communications Surveillance, logistical barriers to surveillance have decreased in recent decades and the application of legal principles in new technological contexts has become unclear. It is often feared that the explosion of digital communications content and of information about communications, or "communications metadata", coupled with the decreasing costs of storing and mining large sets of data and the provision of personal content through third-party service providers, makes State surveillance possible at an unprecedented scale. Communications surveillance in the modern environment encompasses the monitoring, interception, collection, preservation and retention of, interference with, or access to information that includes, reflects, arises from or is about a person's communications in the past, present or future.[*] These fears are now turning into reality with the introduction of mass surveillance systems which penetrate the lives of every person who uses any form of communication. There is ample evidence, in the form of tenders for Internet Monitoring Systems (IMS) and Telecom Interception Systems (TCIS) put out by the Central government and various state governments, that the Indian state is steadily turning into an extensive surveillance state.

While surveillance and intelligence gathering are essential for the maintenance of national security, the creation and working of a mass surveillance system as envisioned today may not necessarily be in conformity with the existing law. A mass surveillance system like the Central Monitoring System (CMS) not only threatens to eradicate any vestige of the right to privacy but, in the absence of a concrete set of procedural guidelines, creates a tremendous risk of abuse.

Although information regarding the Central Monitoring System is quite limited in the public domain at the moment, it can be gathered that a centralized system for monitoring all communication was first proposed by the Government of India in 2009, as indicated by a press release of the Ministry of Communications & Information Technology. Implementation of the system started subsequently, as indicated by another government press release, and the Centre for Development of Telematics (C-DOT) was entrusted with the responsibility of implementing it. As per the C-DOT annual report 2011-12, research, development, trials and progressive scaling-up of a Central Monitoring System were conducted by the organization over the past four years, and the requisite hardware and CMS solutions supporting voice and data interception have been installed and commissioned at various Telecom Service Providers (TSPs) in Delhi and Haryana as part of the pilot project. Media reports indicate that the project will be fully functional by 2014. While an extensive surveillance system is being stealthily introduced by the state, several concerns regarding its extent of use, functioning and real-world impact have been raised, owing to ambiguities and wide gaps in procedure and law. Moreover, the lack of concrete privacy legislation, coupled with the absence of public discourse, indicates the state's lack of interest in the rights of the ordinary citizen. It is under these circumstances that awareness must first be raised about the risks mass surveillance poses to civil liberties: in the absence of established procedures protecting citizens' rights, it can result in the abuse of power by the state or its agencies and lead to the demise of civil freedoms even in democratic states.

The architecture and working of a proposed Internet Monitoring System must therefore be examined in an attempt to better understand the functioning, capabilities and possible impact of a Central Monitoring System on our society and lives. This can allow more open discourse, a committed effort to preserve the rights of citizens, especially the right to privacy, and the creation of strong procedural guidelines that permit legitimate intelligence gathering and surveillance.

Internet Monitoring System: Setup and Working
Very broadly, the Internet Monitoring System enables an agency of the state to intercept and monitor all content which passes through an Internet Service Provider's (ISP) servers, including all electronic correspondence (emails, chats or IMs, transcribed call logs), web forms, video and audio files, and other forms of internet content. The intercepted data is stored and also subjected to various types of analysis. While Internet Monitoring Systems are installed locally and their function is limited to a specific geographic region, the Central Monitoring System will consolidate the data acquired from the different voice and data interception systems located across the country and create a centralized architecture for interception, monitoring and analysis of communications. Although the exact specifications and functions of the Central Monitoring System remain unclear and ambiguous, some parallels regarding its functioning can be drawn from the specifications revealed in the Assam Police tender document for the procurement of an Internet Monitoring System.

Setup
The deployment architecture of an Internet Monitoring System (IMS) contains probe servers installed at the Internet Service Provider's (ISP) premises, with probes installed at various tapping points within the ISP network. A collection server is also installed and hosted at the site of the ISP. The collection server is used to collect, analyze, filter or simply aggregate the data from the ISP servers, and the data is then transferred to a master aggregation server located at a central data center. The central data center may also contain further servers dedicated to analysis and storage. This type of architecture is referred to as a 'high availability clustered setup', which is supposed to keep the system available in case of a failure or outage.

The Assam Police Internet Monitoring System tender document specifically indicates that the deployment in the state of Assam shall require 8 taps or probes to be installed at different ISPs, of which 6 shall be of 10 Gbps and 2 of 1 Gbps. The document, however, mentions that these specifications are preliminary and subject to change.
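
To make the data flow in this tiered design easier to follow, below is a minimal, illustrative Python sketch of the topology described in the tender: probes feed a per-ISP collection server, which forwards records to a central aggregation server. The class and field names are hypothetical and do not come from the tender document.

    from dataclasses import dataclass, field
    from typing import List

    # Hypothetical model of the tiered IMS topology described above:
    # probes -> per-ISP collection server -> central aggregation server.

    @dataclass
    class Probe:
        tap_point: str       # where in the ISP network the tap sits
        capacity_gbps: int   # e.g. a 10 Gbps or 1 Gbps link

    @dataclass
    class CollectionServer:
        isp: str
        probes: List[Probe] = field(default_factory=list)

        def collect(self) -> List[dict]:
            # A real collection server would aggregate intercepted records;
            # here we simply describe each probe for illustration.
            return [{"isp": self.isp, "tap": p.tap_point,
                     "capacity_gbps": p.capacity_gbps} for p in self.probes]

    @dataclass
    class AggregationServer:
        collectors: List[CollectionServer] = field(default_factory=list)

        def consolidate(self) -> List[dict]:
            # The central data center consolidates records from every ISP.
            records = []
            for collector in self.collectors:
                records.extend(collector.collect())
            return records

    # The Assam tender's eight taps (six of 10 Gbps, two of 1 Gbps),
    # split across two hypothetical ISPs purely for illustration.
    isp_a = CollectionServer("ISP-A", [Probe(f"tap-{i}", 10) for i in range(6)])
    isp_b = CollectionServer("ISP-B", [Probe(f"tap-{i}", 1) for i in range(2)])
    central = AggregationServer([isp_a, isp_b])
    print(len(central.consolidate()))  # 8 records reach the central data center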

Types of data
The proposed Internet Monitoring System for the state of Assam can intercept network traffic and monitor a variety of internet protocols, including Hypertext Transfer Protocol (HTTP), File Transfer Protocol (FTP), Simple Mail Transfer Protocol (SMTP), Internet Message Access Protocol (IMAP), Session Initiation Protocol (SIP) and Voice over Internet Protocol (VoIP). The system can also support monitoring of Internet Relay Chat and various other messaging applications (such as Google Talk, Yahoo Chat, MSN Messenger, ICQ, etc.). The system can be equipped to capture and display multiple file types such as text (.doc, .pdf), compressed archives (.zip) and executable applications (.exe). Further, information regarding login details, login patterns, login locations, DNS addresses and routing addresses can be acquired along with the IP address and other details of the user.

Web crawling capabilities can be installed on the system to draw data from sources such as social networking sites, web-based communities, wikis, blogs and other forms of web content. Social media websites (such as Twitter, Facebook, Orkut, MySpace, etc.), web pages and data on hosted applications can also be intercepted, monitored and analyzed. The system can also capture additional pages as they are updated and log periodical updates and other changes. This gives monitoring agencies the capability of gathering internet traffic based on several parameters such as protocols, keywords, filters and watch lists. Keyword matching is achieved by including phonetically similar words in various languages, including local languages; a simple illustration of this technique is sketched below.
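
The tender does not describe the matching algorithm, so the following is only a minimal Python sketch, assuming a classic Soundex-style phonetic code, of how matching on phonetically similar words might work in principle; the function names and the example watch list are hypothetical.

    # Simplified Soundex-style phonetic keyword matching (illustrative only).
    SOUNDEX_MAP = {c: d for d, letters in
                   {"1": "BFPV", "2": "CGJKQSXZ", "3": "DT",
                    "4": "L", "5": "MN", "6": "R"}.items()
                   for c in letters}

    def soundex(word: str) -> str:
        word = "".join(ch for ch in word.upper() if ch.isalpha())
        if not word:
            return ""
        codes = [SOUNDEX_MAP.get(word[0], "")]
        for ch in word[1:]:
            code = SOUNDEX_MAP.get(ch, "")   # vowels (and H, W, Y) map to ""
            if code and code != codes[-1]:
                codes.append(code)
            elif not code:
                codes.append("")             # a vowel separates repeated codes
        digits = [c for c in codes[1:] if c]
        return (word[0] + "".join(digits) + "000")[:4]

    def phonetic_match(text: str, watch_list: list) -> list:
        # Flag any word in the text whose phonetic code matches a keyword.
        keyword_codes = {soundex(k): k for k in watch_list}
        return [(word, keyword_codes[soundex(word)])
                for word in text.split() if soundex(word) in keyword_codes]

    print(phonetic_match("a message from Rupert", ["Robert"]))
    # [('Rupert', 'Robert')] -- both names share the Soundex code R163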

More specific functions of the IMS can include complete email extraction, which will disclose the address book, inbox, sent mail folder, drafts folder, personal folders, deleted items, custom folders, etc., and can also provide identification of dead drop mails. The system can also be equipped to allow country-wise tracking of instant messages, chats and mails.

Regarding retention and storage of data, the tender document specifies that the system shall be technically capable of retaining the metadata of Internet traffic for at least one year, while the defined traffic/payload/content is to be retained in the storage server for at least a week. However, the data may be retained for a longer period if required. The metadata and qualified data are, after analysis, integrated into a designated main intelligence repository for storage.
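
To make the two retention windows concrete, here is a purely illustrative Python sketch of the minimum periods the tender specifies (one year for metadata, one week for content); the field names and the expiry check are hypothetical and not drawn from the document.

    from datetime import timedelta

    # Minimum retention windows stated in the tender (illustrative schema).
    RETENTION_POLICY = {
        "metadata": timedelta(days=365),        # at least one year
        "content_payload": timedelta(days=7),   # at least one week
    }

    def past_minimum_retention(record_age: timedelta, record_type: str) -> bool:
        # A record becomes eligible for deletion only after its minimum
        # window has passed; the tender permits longer retention if required.
        return record_age > RETENTION_POLICY[record_type]

    print(past_minimum_retention(timedelta(days=30), "content_payload"))  # True
    print(past_minimum_retention(timedelta(days=30), "metadata"))         # False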

Types of Analysis
Apart from intercepting all the data passing through the Internet Service Providers, the Internet Monitoring System is essentially equipped for various types of data analysis. The solutions installed in the system provide the capability for real-time as well as historical analysis of network traffic, network perimeter devices and internal sniffers. The kinds of analysis, based on 'slicing and dicing' of data, include text mining, sentiment analysis, link analysis, geo-spatial analysis, statistical analysis, social network analysis, transaction analysis, locational analysis, fusion-based analysis, CDR analysis, timeline analysis and histogram-based analysis from various sources.

The solutions installed in the IMS can enable monitoring of specific words or phrases (in various languages) in blogs, websites, forums, media reports, social media websites, chat rooms, messaging applications, collaboration applications and deep web applications. Phone numbers, addresses, names, locations, age, gender and other such information can also be extracted from content, including comments. Specifically with regard to social media, a user's profile and related information can be extracted and a detailed ontology of all the social media profiles of the user can be created.

Based on this information, the analysis is supposed to provide the capability to identify suspicious behaviour from existing and emerging patterns, which are continuously applied to combine incoming and existing information on people, profiles, transactions, social networks, types of websites visited, time spent on websites, types of content downloaded or viewed, and any other gatherable information. The solutions on the system are also supposed to create single, multiple or parallel scenario build-ups occurring in blogs, social media forums, chat rooms, specific web hosting server locations or URLs, or packet routes as defined from time to time. Such scenario build-ups can be based on parameters like sentiments, language or expressions purporting hatred or anti-national sentiment, and even emotions such as joy, compassion and anger, as defined by the agency depending on operational and intelligence requirements. Based on these parameters, automated alerts can be generated relating to structured or unstructured data (including metadata of contents), events, pattern discovery, phonetically similar words or phrases, or actions from users.

Based on the data analysis, reports or dossiers can be generated and visual analyses offering a wide variety of views can be created. Further, real-time visualization of results can be generated, allowing alerts, alert categories or discoveries to be ranked (high, medium and low priority; high, moderate and low value asset; verified and unverified information; primary, secondary and circumstantial evidence; etc.) based on criteria developed by the agency. The IMS solutions can also offer web intelligence and open source intelligence, and allow automated simultaneous searches, providing a powerful tool for exploration of the intercepted data.

Another important requirement mentioned in the tender document is the system's capability to integrate with other interception and monitoring systems for 2G, 3G/UMTS and other evolving mobile carrier technologies, including fixed line and Blackberry services and encrypted IP services such as Skype.

Conclusion
It is clear that a system like the IMS, with its extensive interception and analysis capabilities, gives an agency or authority complete access to all information that is accessed or transmitted by a person on the internet, including private and confidential information such as emails and instant messages. Although the state has the power to issue directions for interception or monitoring of information under the Information Technology Act, 2000, and certain rules are prescribed under section 69B, these are wholly inadequate compared to the scope, extent and scale of operations of the Internet Monitoring System. The interception and monitoring systems that are either proposed or already in place effectively bypass the existing procedures prescribed under the Information Technology Act.

The issues, concerns and risks are only compounded when it comes to the Central Monitoring System. The solutions installed in present-day interception and monitoring systems give the state unprecedented powers to intercept, monitor and analyze all the data of any person who accesses the internet. Tools such as deep packet inspection and extensive data mining, deployed through a centralized system in the absence of concrete safeguards, can be misused to censor any content, including legitimate discourse. The perception that access to a larger amount of data, or to all data, improves intelligence can also be misleading, and it must be asked whether the fundamental rights of citizens can be traded away under the pretext of national security. Furthermore, it is essential for the state to weigh the costs of such a project, both economic and moral, and to balance it with sufficient internal measures as well as adequate laws, so that democratic values are preserved and not endangered by any act of reckless force.

Reiterating what has been said earlier, while it is important for the state to improve its intelligence gathering tools and mechanisms, this must not be done at the cost of citizens' fundamental rights. It is the duty of a democratic state to ensure and maintain a fine balance between national interest and fundamental rights through the timely creation of equitable laws.


[*]. http://necessaryandproportionate.net/#_edn2

Tenders, EOI and Press Release

by Prasad Krishna last modified Jul 15, 2013 05:56 AM

Surveillance Systems - Govt Tenders, EOI and Press Release.zip — ZIP archive, 5976 kB (6119473 bytes)

How Surveillance Works in India

by Pranesh Prakash last modified Jul 15, 2013 10:20 AM
When the Indian government announced it would start a Centralized Monitoring System in 2009 to monitor telecommunications in the country, the public seemed unconcerned. When the government announced that the system, also known as C.M.S., commenced in April, the news didn’t receive much attention.

Demonstrators showing support for National Security Agency whistleblower Edward Snowden at India Gate in New Delhi on Sunday.


This article by Pranesh Prakash was published in the New York Times on July 10, 2013.


After a colleague at the Centre for Internet and Society wrote about the program and it was lambasted by Human Rights Watch, more reporters started covering it as a privacy issue. But it was ultimately the revelations by Edward J. Snowden about American surveillance that prompted Indians to ask questions about their own government’s surveillance programs.

In India, we have a strange mix of great amounts of transparency and very little accountability when it comes to surveillance and intelligence agencies. Many senior officials are happy to anonymously brief reporters about the state of surveillance, but there is very little that is officially made public, and still less is debated in the national press and in Parliament.

This lack of accountability is seen both in the way the Big-Brother acronyms (C.M.S., Natgrid, T.C.I.S., C.C.T.N.S., etc.) have been rolled out and in the murky status of the intelligence agencies. No intelligence agency in India has been created under an act of Parliament with clearly established roles and limitations on powers, and hence there is no public accountability whatsoever.

The absence of accountability has meant that the government has since 2006 been working on the C.M.S., which will integrate with the Telephone Call Interception System that is also being rolled out. The cost: around 8 billion rupees ($132 million) — more than four times the initial estimate of 1.7 billion — and even more important, our privacy and personal liberty. Under their licensing terms, all Internet service providers and telecom providers are required to provide the government direct access to all communications passing through them. However, this currently happens in a decentralized fashion, and the government in most cases has to ask the telecoms for metadata, like call detail records, visited Web sites, IP address assignments, or to carry out the interception and provide the recordings to the government. Apart from this, the government uses equipment to gain access to vast quantities of raw data traversing the Internet across multiple cities, including the data going through the undersea cables that land in Mumbai.

With the C.M.S., the government will get centralized access to all communications metadata and content traversing through all telecom networks in India. This means that the government can listen to all your calls, track a mobile phone and its user’s location, read all your text messages, personal e-mails and chat conversations. It can also see all your Google searches, Web site visits, usernames and passwords if your communications aren’t encrypted.

A man surfing a Facebook page at an internet cafe in Guwahati, Assam, on Dec. 6, 2011. Image credit: Anupam Nath/Associated Press.

You might ask: Why is this a problem when the government already had the same access, albeit in a decentralized fashion? To answer that question, one has to first examine the law.

There are no laws that allow for mass surveillance in India. The two laws covering interception are the Indian Telegraph Act of 1885 and the Information Technology Act of 2000, as amended in 2008, and they restrict lawful interception to time-limited and targeted interception. The targeted interception both these laws allow ordinarily requires case-by-case authorization by either the home secretary or the secretary of the department of information technology.

Interestingly, the colonial government framed better privacy safeguards into communications interception than did the post-independence democratic Indian state. The Telegraph Act mandates that interception of communications can only be done on account of a public emergency or for public safety. If either of those two preconditions is satisfied, then the government may cite any of the following five reasons: “the sovereignty and integrity of India, the security of the state, friendly relations with foreign states, or public order, or for preventing incitement to the commission of an offense.” In 2008, the Information Technology Act copied much of the interception provision of the Telegraph Act but removed the preconditions of public emergency or public safety, and expanded the power of the government to order interception for “investigation of any offense.” The IT Act thus very substantially lowers the bar for wiretapping.

Apart from these two provisions, which apply to interception, there are many laws that cover recorded metadata, all of which have far lower standards. Under the Code of Criminal Procedure, no court order is required unless the entity is seen to be a “postal or telegraph authority” — and generally e-mail providers and social networking sites are not seen as such.

Unauthorized access to communications data is not punishable per se, which is why a private detective who gained access to the cellphone records of Arun Jaitley, a Bharatiya Janata Party leader, has been charged under the weak provision on fraud, rather than invasion of privacy. While there is a provision in the Telegraph Act to punish unlawful interception, it carries a far lesser penalty (up to three years of imprisonment) than for a citizen’s failure to assist an agency that wishes to intercept or monitor or decrypt (up to seven years of imprisonment).

To put the ridiculousness of the penalty in Sections 69 and 69B of the IT Act in perspective, an Intelligence Bureau officer who spills national secrets may be imprisoned for up to three years. And under the Indian Penal Code, the punishment for failing to provide a document one is legally bound to provide to a public servant can be up to one month’s imprisonment. Further, a citizen who refuses to assist an authority in decryption, as one is required to under Section 69, may simply be exercising her constitutional right against self-incrimination. For these reasons and more, these provisions of the IT Act are arguably unconstitutional.

As bad as the IT Act is, legally the government has done far worse. In the licenses that the Department of Telecommunications grants Internet service providers, cellular providers and telecoms, there are provisions that require them to provide direct access to all communications data and content even without a warrant, which is not permitted by the existing laws on interception. The licenses also force cellular providers to use ‘bulk encryption’ of less than 40 bits. (Since G.S.M. network encryption systems like A5/1, A5/2, and A5/3 have a fixed encryption bit length of 64 bits, providers in India have been known to use A5/0, that is, no encryption, which means any person — not just the government — can use off-the-air interception techniques to listen to your calls.)

Cybercafes (but not public phone operators) are required to maintain detailed records of clients’ identity proofs, photographs and the Web sites they have visited, for a minimum period of one year. Under the rules designed as India’s data protection law (oh, the irony!), sensitive personal data has to be shared with government agencies, if required for “purpose of verification of identity, or for prevention, detection, investigation including cyber incidents, prosecution, and punishment of offenses.”

Along similar lines, in the rules meant to say when an Internet intermediary may be held liable for a user’s actions, there is a provision requiring the Internet company to “provide information or any such assistance to government agencies legally authorized for investigative, protective, cybersecurity activity.” (Incoherent, vague and grammatically incorrect sentences are a consistent feature of laws drafted by the Ministry of Communications and IT; one of the telecom licenses states: “The licensee should make arrangement for monitoring simultaneous calls by government security agencies,” when clearly they meant “for simultaneous monitoring of calls.”)

In a landmark 1996 judgment, the Indian Supreme Court  held that telephone tapping is a serious invasion of an individual’s privacy and that the citizens’ right to privacy has to be protected from abuse by the authorities. Given this, undoubtedly governments must have explicit permission from their legislatures to engage in any kind of broadening of electronic surveillance powers. Yet, without introducing any new laws, the government has surreptitiously granted itself powers — powers that Parliament hasn’t authorized it to exercise — by sneaking such powers into provisions in contracts and in subordinate legislation.

Can India Trust Its Government on Privacy?

by Pranesh Prakash last modified Jul 15, 2013 10:35 AM
In response to criticisms of the Centralized Monitoring System, India’s new surveillance program, the government could contend that merely having the capability to engage in mass surveillance won’t mean that it will. Officials will argue that they will still abide by the law and will ensure that each instance of interception will be authorized.

A man checking his cell phone in New Delhi on June 18. Picture by Anindito Mukherjee/Reuters.


Pranesh Prakash's article was published in the New York Times on July 11, 2013.


In fact, they will argue that the program, known as C.M.S., will better safeguard citizens’ privacy: it will cut out the telecommunications companies, which can be sources of privacy leaks; it will ensure that each interception request is tracked and the recorded content duly destroyed within six months as is required under the law; and it will enable quicker interception, which will save more lives. But there are a host of reasons why the citizens of India should be skeptical of those official claims.

Cutting out telecoms will not help protect citizens from electronic snooping since these companies still have the requisite infrastructure to conduct surveillance. As long as the infrastructure exists, telecom employees will misuse it. In a 2010 report, the journalist M.A. Arun noted that “alarmingly, this correspondent also came across several instances of service providers’ employees accessing personal communication of subscribers without authorization.” Some years back, K.K. Paul, a top Delhi Police officer and now the Governor of Meghalaya, drafted a memo in which he noted mobile operators’ complaints that private individuals were misusing police contacts to tap phone calls of “opponents in trade or estranged spouses.”

India does not need centralized interception facilities in order to have centralized tracking of interception requests. To prevent unauthorized access to intercepted communications content, the files should be encrypted at all times using a public key infrastructure. Mechanisms also exist to securely track a chain of custody and to ensure the timely destruction of intercepted material after six months, as required by the law. Such technological means need to be made mandatory to prevent unauthorized access, rather than centralizing all interception capabilities.
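
The article does not prescribe a particular scheme, but as one illustration of the kind of technical safeguard being described, here is a minimal Python sketch, assuming the widely used 'cryptography' package, of hybrid public-key encryption of intercepted material together with a hash-chained custody log; the key handling, names and log format are hypothetical and purely illustrative, not a production design.

    import datetime, hashlib, json
    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    # The reviewing authority's key pair (in practice generated and held securely).
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    def encrypt_record(plaintext: bytes):
        # Hybrid encryption: a fresh symmetric key protects the content,
        # and only that key is wrapped with the authority's public key.
        data_key = Fernet.generate_key()
        ciphertext = Fernet(data_key).encrypt(plaintext)
        wrapped_key = public_key.encrypt(
            data_key,
            padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                         algorithm=hashes.SHA256(), label=None))
        return ciphertext, wrapped_key

    def log_custody(log: list, action: str, ciphertext: bytes) -> list:
        # Each entry chains the hash of the previous entry, so later
        # tampering with the custody trail is detectable.
        prev_hash = log[-1]["entry_hash"] if log else ""
        entry = {"time": datetime.datetime.utcnow().isoformat(),
                 "action": action,
                 "content_hash": hashlib.sha256(ciphertext).hexdigest(),
                 "prev_hash": prev_hash}
        entry["entry_hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        return log + [entry]

    ciphertext, wrapped_key = encrypt_record(b"intercepted content")
    custody_log = log_custody([], "intercepted-and-encrypted", ciphertext)
    custody_log = log_custody(custody_log, "destroyed-after-six-months", ciphertext)
    print(len(custody_log), "custody entries recorded")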

At the moment, interception orders are given by the federal Home Secretary of India and by state home secretaries without adequate consideration. Every month at the federal level 7,000 to 9,000 phone taps are authorized or re-authorized. Even if it took just three minutes to evaluate each case, it would take 15 hours each day (without any weekends or holidays) to go through 9,000 requests. The numbers in the Indian states could be worse, but one can’t be certain, as statistics on surveillance across India are not available. This indicates bureaucratic callousness and indifference toward following the procedure laid down in the Telegraph Act.

In a 1975 case, the Supreme Court held that an “economic emergency” may not amount to a “public emergency.” Yet we find that of the nine central government agencies empowered to conduct interception in India, according to press reports — Central Board of Direct Taxes, Intelligence Bureau, Central Bureau of Investigation, Narcotics Control Bureau, Directorate of Revenue Intelligence, Enforcement Directorate, Research & Analysis Wing, National Investigation Agency and the Defense Intelligence Agency — three are exclusively dedicated to economic offenses.

Suspicion of tax evasion cannot legally justify a wiretap, which is why the government said it had believed that Nira Radia, a corporate lobbyist, was a spy when it defended putting a wiretap on her phone in 2008 and 2009. A 2011 report by the cabinet secretary pointed out that economic offenses might not be counted as “public emergencies,” and that the Central Board of Direct Taxes should not be empowered to intercept communications. Yet the tax department continues to be on the list of agencies empowered to conduct interceptions.

India has arrived at a scary juncture, where the multiple departments of the Indian government don’t even trust each other. India’s Department of Information Technology recently complained to the National Security Advisor that the National Technical Research Organization had hacked into National Informatics Center infrastructure and extracted sensitive data connected to various ministries. The National Technical Research Organization denied it had hacked into the servers but said hundreds of e-mail accounts of top government officials were compromised in 2012, including those of “the home secretary, the naval attaché to Tehran, several Indian missions abroad, top investigators of the Central Bureau of Investigation and the armed forces,” The Mint newspaper reported. Such incidents aggravate the fear that the Indian government might not be willing and able to protect the enormous amounts of information it is about to collect through the C.M.S.

Simply put, government entities have engaged in unofficial and illegal surveillance, and the C.M.S. is not likely to change this. In a 2010 article in Outlook, the journalist Saikat Datta described how various central and state intelligence organizations across India are illegally using off-the-air interception devices. “These systems are frequently deployed in Muslim-dominated areas of cities like Delhi, Lucknow and Hyderabad,” Mr. Datta wrote. “The systems, mounted inside cars, are sent on ‘fishing expeditions,’ randomly tuning into conversations of citizens in a bid to track down terrorists.”

The National Technical Research Organization, which is not even on the list of entities authorized to conduct interception, is one of the largest surveillance organizations in India. The Mint reported last year that the organization’s surveillance devices, “contrary to norms, were deployed more often in the national capital than in border areas” and that under new standard operating procedures issued in early 2012, the organization can only intercept signals at the international borders. The organization runs multiple facilities in Mumbai, Bangalore, Delhi, Hyderabad, Lucknow and Kolkata, in which monumental amounts of Internet traffic are captured. In Mumbai, all the traffic passing through the undersea cables there is captured, Mr. Datta found.

In the western state of Gujarat, a recent investigation by Amitabh Pathak, the director general of police, revealed that in a period of less than six months, more than 90,000 requests were made for call detail records, including for the phones of senior police and civil service officers. Such a high number could not possibly have been generated from criminal investigations alone, and the requests do not seem to have led to any criminal charges against the people whose records were obtained. The information seems to have been collected for purposes other than national security.

India is struggling to keep track of the location of its proliferating interception devices. More than 73,000 devices to intercept mobile phone calls have been imported into India since 2005. In 2011, the federal government asked various state governments, private corporations, the army and intelligence agencies to surrender these to the government, noting that usage of any such equipment for surveillance was illegal. We don’t know how many devices were actually turned in.

These kinds of violations of privacy can have very dangerous consequences. According to the former Intelligence Bureau head in the western state of Gujarat, R.B. Sreekumar, the call records of a mobile number used by Haren Pandya, the former Gujarat home minister, were used to confirm that it was he who had provided secret testimony to the Citizens’ Tribunal, which was conducting an independent investigation of the 2002 sectarian riots in the state. Mr. Pandya was murdered in 2003.

The limited efforts to make India’s intelligence agencies more accountable have gone nowhere. In 2012, the Planning Commission of India formed a group of experts under Justice A.P. Shah, a retired Chief Justice of the Delhi High Court, to look into existing projects of the government and to suggest principles to guide a privacy law in light of international experience. (The Centre for Internet and Society, where I work, was part of the group.) However, the government has yet to introduce a bill to protect citizens’ privacy, even though governmental and private sector violations of Indian citizens’ privacy are growing at an alarming rate.

In February, after frequent calls by privacy activists and lawyers for greater accountability and parliamentary oversight of intelligence agencies, the Centre for Public Interest Litigation filed a case in the Supreme Court. This would, one hopes, lead to reform.

Citizens must also demand that a strong Privacy Act be enacted. In 1991, the leak of a Central Bureau of Investigation report titled “Tapping of Politicians’ Phones” prompted the rights group People’s Union for Civil Liberties to file a writ petition, which eventually led to a Supreme Court of India ruling that recognized the right to privacy of communications for all citizens as part of the fundamental rights to freedom of speech and to life and personal liberty. However, through the 2008 amendments to the Information Technology Act, the IT Rules framed in 2011 and the telecom licenses, the government has greatly weakened the right to privacy as recognized by the Supreme Court. The damage must be undone through a strong privacy law that safeguards the privacy of Indian citizens against both the state and corporations. The law should not only provide legal procedures, but also ensure that the government does not employ technologies that erode those procedures.

A strong privacy law should also provide strong grounds on which to hold the National Security Advisor’s mass surveillance of Indians (over 12.1 billion pieces of intelligence in one month) unlawful. The law should ensure that Parliament, and Indian citizens, are regularly provided information on the scale of surveillance across India, and on the convictions resulting from that surveillance. Individuals whose communications metadata or content is monitored or intercepted should be told about it after the passage of a reasonable amount of time. After all, the data should only be gathered if it is to be used to charge a person with a crime. If such charges are not brought, the person should be told of the incursion into his or her privacy.

The privacy law should ensure that all surveillance follows the following principles: legitimacy (is the surveillance for a legitimate, democratic purpose?), necessity (is this necessary to further that purpose? does a less invasive means exist?), proportionality and harm minimization (is this the minimum level of intrusion into privacy?), specificity (is this surveillance order limited to a specific case?), transparency (is this intrusion into privacy recorded and also eventually revealed to the data subject?), purpose limitation (is the data collected only used for the stated purpose?), and independent oversight (is the surveillance reported to a legislative committee or a privacy commissioner, and are statistics kept on surveillance conducted and criminal prosecution filings?). Constitutional courts such as the Supreme Court of India or the High Courts in the Indian states should make such determinations. Citizens should have a right to civil and criminal remedies for violations of surveillance laws.

Indian citizens should also take greater care of their own privacy and safeguard the security of their communications. The solution is to minimize usage of mobile phones and to use anonymizing technologies and end-to-end encryption while communicating on the Internet. Free and open-source software implementing OpenPGP can make e-mails secure. Technologies like off-the-record messaging, used in apps such as ChatSecure and Pidgin for chat conversations, TextSecure for text messages, HTTPS Everywhere and Virtual Private Networks can prevent Internet service providers from snooping and make Internet communications more anonymous.
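
As a concrete illustration of the OpenPGP suggestion, here is a minimal Python sketch using the python-gnupg wrapper around GnuPG; it assumes GnuPG is installed and that the recipient’s public key (the address shown is hypothetical) has already been imported into the local keyring.

    # Minimal sketch: encrypting an e-mail body with OpenPGP via python-gnupg.
    import gnupg

    gpg = gnupg.GPG()  # uses the default local GnuPG keyring

    message = "This body is unreadable to anyone snooping on the wire."
    encrypted = gpg.encrypt(message, ["friend@example.org"])

    if encrypted.ok:
        # The ASCII-armoured ciphertext can be pasted into any mail client.
        print(str(encrypted))
    else:
        # Typically fails if the recipient's public key is not in the keyring.
        print("Encryption failed:", encrypted.status)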

The Indian government, and especially our intelligence agencies, routinely violate Indian citizens’ privacy without legal authority. It is time India stopped sleepwalking into a surveillance state.

CIS Cybersecurity Series (Part 7) - Jochem de Groot

by Purba Sarkar last modified Jul 30, 2013 09:26 AM
CIS interviews Jochem de Groot, former policy advisor to the Netherlands government, as part of the Cybersecurity Series

"The basic principle that I think we must continue to embrace is that rights online are the same as rights offline... The amount of information that is available online is so enormous that it would be easy for governments to abuse that information for all kinds of purposes... And we are at a stage right now where we are really experimenting with how much information the govt or law enforcement can take to ensure the rule of law." - Jochem de Groot

Centre for Internet and Society presents its seventh installment of the CIS Cybersecurity Series. 

The CIS Cybersecurity Series seeks to address hotly debated aspects of cybersecurity and hopes to encourage wider public discourse around the topic.

In this installment, CIS interviews Jochem de Groot. Jochem has worked on the Netherlands government’s agenda to promote Internet freedom globally since 2009. He initiated and coordinated the founding conference of the Freedom Online Coalition in The Hague in December 2011, and advised the Kenyan government on the second Freedom Online event in Nairobi in 2012. Jochem represents the Dutch government in the EU, UN, OSCE and other multilateral fora, and oversees a project portfolio for promoting internet freedom globally.  

 
This work was carried out as part of the Cyber Stewards Network with aid of a grant from the International Development Research Centre, Ottawa, Canada.

DSCI Best Practices Meet 2013

by Kovey Coles last modified Jul 26, 2013 08:18 AM
The DSCI Best Practices Meet 2013 was organized on July 12, 2013 at Hyatt Regency, Anna Salai in Chennai. Kovey Coles attended the meet and shares a summary of the happenings in this blog post.

This research was undertaken as part of the 'SAFEGUARDS' project that CIS is undertaking with Privacy International and IDRC


Last year’s annual Best Practices Meet, sponsored by the Data Security Council of India (DSCI), was held here in Bangalore, and featured CIS associates as panelists for an agenda focused mostly on mobility in technology. This year, the event moved to nearby Chennai, where many of India’s top stakeholders in cyber security came together at the Hyatt hotel to discuss the modern cyber security landscape. Several of the key points of the day emphasized how industry in particular needs to be vigilant about cyber security today. Early speakers explained that many cyber-attacks are opportunistic attacks on financial institutions, and that these breaches often take months to be discovered, with the discovery usually being made by a third party. For those reasons, it was repeatedly mentioned throughout the day that modern entities must anticipate attacks as inevitable, and prepare themselves to respond and successfully bounce back.

Several panelists expanded upon the evolving challenges facing industries, and explained why service-based industry continually grows more susceptible to cyber-attack. Representatives from Microsoft, Flextronics, MyEasyDoc and others explained how the technological demands of modern consumers inadvertently result in weaker security. For example, with customers expecting real-time access to data rather than periodic data reports (e.g., financial data reports), industries must now keep their data open, which weakens database security. Overall, the primary challenge faced by the industry was effectively summarized by Microsoft India CSO Ganapathi Subramaniam, who said that within web services, “Security and usability are inversely proportional.” Essentially, the more convenient a product, the less secure its infrastructure.

Despite discussion of the difficulties facing modern producers and consumers, there were undoubtedly highlights of optimism at the conference. A presentation by event sponsor Juniper Networks shed light on practices which combat cyber-attackers, including rerouting perceived Distributed Denial of Service (DDoS) attacks and fingerprinting suspected hackers through a series of characteristics rather than just IP addresses (these characteristics include browser version, fonts, add-ons, time zone, and more). Notably, there was a call for cooperation on all fronts in combating cyber-crime, including public-private partnerships (PPP), and many attendees stood and spoke on behalf of civil society’s incorporation in the process as well. One speaker, retired Brig. Abhimanyu Ghosh, admirably tore down sector divisions in the face of cyber security threats, saying, “We all want to secure ourselves. It is not a question of industry versus government, government versus industry. Government needs industry, and industry needs government.”

Finally, a few speakers used their time at the conference to highlight issues related to the rights and responsibilities of both citizens and government on the internet. Nikhil Moro, a scholar at the Hindu Centre for Politics and Public Policy, spoke at length about the urgent problem of laws which undermine freedom of speech and expression in India, especially online. His talk, which came near the end of the event, stirred the crowd to discussion and helped remind attendees of the breadth of issues that demand attention as India’s internet presence grows.

Interview with Mr. Reijo Aarnio - Finnish Data Protection Ombudsman

by Maria Xynou last modified Jul 19, 2013 01:02 PM
Maria Xynou recently interviewed Mr. Reijo Aarnio, the Finnish Data Protection Ombudsman, at the CIS' 5th Privacy Round Table. View this interview and gain insight into recommendations for better data protection in India!

Mr. Reijo Aarnio - the Finnish Data Protection Ombudsman - was interviewed on the following questions:

1. What activities and functions does the Finnish data commissioner's office undertake?

2. What powers does the Finnish Data commissioner's office have? In your opinion, are these sufficient? Which powers have been most useful? If there is a lack, what would you feel is needed?

3. How is the office of the Finnish data protection commissioner funded?

4. What is the organizational structure at the Office of the Finnish Data Protection Commissioner and the responsibilities of the key executives?

5. If India creates a Privacy Commissioner, what structure/framework would you suggest for the office?

6. What challenges has your office faced?

7. What is the most common type of privacy violation that your office is faced with?

8. Does your office differ from other EU data protection commissioner offices?

9. How do you think data should be regulated in India?

10. Do you support the idea of co-regulation or self-regulation?

11. How can India protect its citizens' data when it is stored in foreign servers?

CII Conference on "ACT: Achieve Cyber Security Together"

by Kovey Coles last modified Jul 26, 2013 08:17 AM
The Confederation of Indian Industries (CII) organized a conference on facing cyber threats and challenges at Hotel Hilton in Chennai on July 13, 2013. Kovey Coles attended this conference and shares a summary of the event in this blog post.

This research was undertaken as part of the 'SAFEGUARDS' project that CIS is undertaking with Privacy International and IDRC


The conference, hosted by CII at the Hotel Hilton, was well attended, and featured a range of industry experts, researchers and developers, and members of the Indian armed forces.

Participants focused on the importance of Indian entities reaching new, adequate levels of cyber security. It was stated early in the event that India is one of the world's most targeted areas for cyber-attacks, and that its number of domestic internet users is rapidly increasing in an age which many view as a new era of international information warfare. Despite this, the speakers considered India to be far behind other countries in its understanding of cyber security. In the opening remarks, CII Chairman Santhanam implored, "We need hard core techies in this field… we are not producing them." Another speaker, Savitha Kesav Jagadeesan, a practicing lawyer in Chennai, asked whether India would wait for a "9/11 of cyberspace" before establishing the same level of precautionary measures online as now exists in transportation security.

With the presence of both the government’s executive forces and private industry, the mood in the conference room was that of a collective Indian defense: a secure nation can only be achieved when both government and industry are secure. Similar to the previous day’s DSCI cyber security conference, many speakers discussed security issues pertinent to the financial and banking industries, and other cyber crimes with pecuniary goals. For people seeking to avoid the array of scams and frauds online, some talks shared the most basic advice, like safe password practices. "Passwords are like toothbrushes," said A.S. Murthy of the CDAC, "use them often, never share them with anyone, change them often." Other talks went into the intricacies of various hacking schemes, including tab-nabbing and Distributed Denial of Service (DDoS) attacks, describing their tactics and how to mitigate them.

In the end, the conference had certainly informed attendees of the goals, and the challenges, that India will face in the coming months and years. The speakers showed how quickly the world of cyber security is evolving, and demonstrated the imperative for government and industry to evolve their own practices and defenses in stride. The ambitions of several presentations matched the well-publicized "5 lakh cyber professionals in 5 years" plan, placing a strong emphasis on the current and future training of young students in cyber security. Ultimately, I think, the conference helped convince attendees that cyber security is neither a futile concept nor an infallible one. As CISCO Vice President Col. K.P.M. Das said towards the end of the evening, the most ideal form of cyber security is truly "all about trust, the ability to recover, and transparency/visibility."

Parsing the Cyber Security Policy

by Chinmayi Arun last modified Jul 22, 2013 06:37 AM
An effective cyber-security policy must keep up with the rapid evolution of technology, and must never become obsolete. The standard-setting and review bodies will therefore need to be very nimble, says Chinmayi Arun.

Image: siliconindia.com


Chinmayi Arun's article was published in the Hoot on July 13, 2013 and later cross-posted in the Free Speech Initiative the same day.


We often forget how vulnerable the World Wide Web leaves us. If walls of code prevent us from entering each other’s systems and networks, there are those who can easily pick their way past them or disable essential digital platforms. We are reminded of this by the doings of Anonymous, which carried out a series of attacks, including one on the website run by the Computer Emergency Response Team India (CERT-In), the government agency in charge of cyber-security. Even more serious are cyber-attacks (arguably cyber warfare) carried out by other states, using digital weapons such as Stuxnet, the digital worm. More proximate and personal, perhaps, are phishing attacks, which are on the rise.

We therefore run a great risk if we leave air-traffic control, defense resources or databases containing citizens’ personal data vulnerable. Sure, there is no doubt that efforts towards better cyber-security are needed. A cyber-security policy is meant to address this need, and to help manage threats to individuals, businesses and government agencies. We need to carefully examine the government’s efforts to handle cyber-security, how effective they are, and whether they have too many negative spillovers.

The National Cyber-Security Policy, unveiled last week, is merely a statement of intention in broad terms. Much of  its real impact will be ascertainable only after the language to be used in the law is available. Nevertheless, the scope of the policy remains ambiguous so far, leading to much speculation about the different ways in which it might be intrusive.


One Size Fits All?
The policy covers very different kinds of entities: government agencies, private companies or businesses, non-governmental entities and individual users. These entities may need to be handled differently depending on their nature. Therefore, while direct state action may be most appropriate to secure government agencies’ networks, it may be less appropriate in the context of purely private business.

For example, securing police records would involve the government directly purchasing or developing sufficiently secure technology. However, different private businesses and non-governmental entities may be left to manage their own security. Depending on its size, each entity may be differently placed to acquire sophisticated security systems. A good policy would encourage innovation by those with the capacity for it, while ensuring that others have access to reasonably sound technology, and that they use it. Grey areas might emerge in contexts where a private party manages critical infrastructure.

It will also be important to distinguish between smaller and larger organisations whilst creating obligations. Unless this distinction is made at the implementation stage, start-up businesses and civil society organisations may find requirements such as earmarking a budget for cyber security implementation or appointing a Chief Information Security Officer onerous. Additionally, the policy will need to translate into a regulatory solution that provides under-resourced entities with ready solutions to enable them to make their information systems secure, while encouraging larger entities with greater purchasing power to invest in procuring the best possible solutions.

Race to the Top
Security on the Internet works only if it stays one step ahead of the people trying to break in. An effective cyber-security policy must keep up with the rapid evolution of technology, and must never become obsolete. The standard-setting and review bodies will therefore need to be very nimble.

The policy contemplates working with industry and supporting academic research and development to achieve this. However, the actual manner in which resources are distributed and progress is monitored may make the crucial difference between a waste of public funds and the acquisition of capacity to achieve a reasonable degree of cyber security.

Additionally, the flow of public funds under this policy, particularly to purchase technology, should be examined very carefully to see whether it is justified. For example, if the government chooses to fund (even by way of subsidy) a private company’s cyber-security research and development rather than an equivalent public university’s endeavour, this decision should be scrutinized to see whether it was necessary. Similarly, if extensive public funds are spent training young people as a capacity-building exercise, we should watch to see how many of these people stay in India and how many leave, such that other countries end up benefiting from the Indian government’s investment in them!

Investigation of Security Threats
Although much of the policy focuses on defensive measures that can be taken against security breaches, it is intended not only to cover investigation subsequent to an attack but also to pinpoint ‘potential cyber threats’ so that proactive measures may be taken.

The policy has outlined the need for a ‘Cyber Crisis Management Plan’ to handle incidents that impact ‘critical national processes or endanger public safety and security of the nation’. This portion of the policy will need to be watched closely to ensure that the language used is very narrow and allows absolutely no scope for misinterpretation or misuse that would affect citizens’ rights in any manner.

This caution will be necessary both in view of the manner in which restraints on freedom of speech permitted in the interests of public safety have been flagrantly abused, and because of the kind of paternalistic state intrusion that might be conceived to give effect to this.

Additionally, since the policy also mentions information sharing with internal and international security, defence, law enforcement and other such agencies, it will also be important to find out the exact nature of information to be shared. Of course, how the policy will be put into place will only become clear as the terms governing its various parts emerge. But one hopes the necessary internal direct action to ensure the government agencies’ information networks are secure is already well underway.

It is also to be hoped that the government chooses to take implementation of privacy rights at least as seriously as cyber-security. If some parts of cyber security involve ensuring that user data is protected, the decision about what data needs protection will be important to this exercise.

Additionally, although the policy discusses various enabling and standard-setting measures, it does not discuss the punitive consequences of failure to take reasonable steps to safeguard individuals’ personal data online. These consequences will also presumably form a part of the privacy policy, and should be put in place as early as possible.

You Have the Right to Remain Silent

by Nishant Shah last modified Jul 22, 2013 06:59 AM
Reflecting upon the state of freedom of speech and expression in India, in the wake of the shut-down of the political satire website narendramodiplans.com.

Nishant Shah's column was published in Down to Earth on July 17, 2013.


It took less than a day for narendramodiplans.com, a political satire website that received more than 60,000 hits in the 20 hours of its existence, to be taken down. It was a simple webpage that showed a smiling picture of Narendra Modi, the touted candidate for India’s next prime ministerial campaign, flashing his now trademark ‘V’ for Vengeance Victory sign. At first glimpse it looked like another smart media campaign by the net-savvy minister, who has already made use of the social web quite effectively to connect with his constituencies and influence the younger voting population in the country. Below the image of Mr. Modi was a text that said, "For a detailed explanation of how Mr. Narendra Modi plans to run the nation if elected to the house as a Prime Minister and also for his view/perspective on 2002 riots please click the link below." The button, reminiscent of 'sale' signs on shops that offer permanent discounts, promised to reveal, once and for all, the puppy plight of Mr. Modi's politics and his plans for the country that he seeks to lead.

However, when one tried to click on the button, hoping at least for a manifesto that combined the powers of Machiavelli with the sinister beauty of Kafka, it proved to be an impossible task. The button wiggled, and jiggled, and slithered all over the page, running away from the mouse following it. Referencing the layers of evasive answers and the engineered public relations campaigns that try to obfuscate the answers to some of the most pointed questions that have been posed to the Modi government through judicial and public forums, the button never stayed still enough to actually reveal the promised answers. People who are familiar with the history of such political satire and protest online would immediately recognise that this wasn’t the most original of ideas. In fact, it was borrowed from another website - http://www.thepmlnvision.com/ - that levelled similar accusations of lack of transparency and accountability at Nawaz Sharif of Pakistan. Another instance, which is now also shut down, had a similar deployment where the webpage claimed to give a comprehensive view of Rahul Gandhi’s achievements, to question his proclaimed intentions of being the next prime minister. In short, this is an internet meme, where a simple web page and a piece of JavaScript allowed for a critical commentary on the next elections and the strengthening battle between #feku and #pappu that has already taken on epic proportions on Twitter.

The early demise of these two websites (please do note, when you click on the links that the Nawaz Sharif website is still working) warns us of the tightening noose around freedom of speech and expression that politicos are responsible for in India. It has been a dreary last couple of years already, with the passing of the Intermediaries Liabilities Rules as an amendment to the IT Act of India, Dr. Sibal proposing to pre-censor the social web in a quest to save the face of erring political figures, teenagers being arrested for voicing political dissent, and artists being prosecuted for exercising their rights to question the state of governance in our country. Despite battles to keep the web an open space that embodies the democratic potentials and the constitutional rights of freedom of speech and expression in the country, it has been a losing fight to keep up with the ad hoc and dictatorial mandates that seem to govern the web.

We have no indication of why this latest piece of satirical expression, which should be granted immunity as a work of art, if not as an individual’s right to free speech, was suddenly taken down. The website now has a message that says, “I quit. In a country with freedom of speech, I assumed that I was allowed to make decent satire on any politician more particularly if it is constructive. Clearly, I was wrong.” The web is already abuzz with conspiracy theories, each sounding scarier than the other because they seem so plausible and possible in a country that has easily sacrificed our right to free speech and expression at the altar of political egos. And whether you subscribe to any of the theories or not, whether your sympathies lie with the BJP or with the UPA, whether or not you approve of the political directions that the country seems to be headed in, there is no doubt that you should be as agitated as I am, about the fact that we are in a fast-car to blanket censorship, and we are going there in style.

What happens online is not just about this one website or the one person or the one political party – it is a reflection on the rising surveillance and bully state that presumes that making voices (and sometimes people) invisible, is enough to resolve the problems that they create. And what happens on the web is soon going to also affect the ways in which we live our everyday lives. So the next time, you call some friends over for dinner, and then sit arguing about the state of politics in the country, make sure your windows are all shut, you are wearing tin-foil hats and if possible, direct all conversations to the task of finally finding Mamta Kulkarni. Because anything else that you say might either be censored or land you in a soup, and the only recourse you might have would be a website that shows the glorious political figures of the country, with a sign that says “To defend your right to free speech and expression, please click here”. And you know that you are never going to be able to click on that sign. Ever.

CIS Cybersecurity Series (Part 8) - Jeff Moss

by Purba Sarkar last modified Jul 30, 2013 09:25 AM
CIS interviews Jeff Moss, Chief Security Officer for ICANN, as part of the Cybersecurity Series.

"Most consumers don't understand the privacy trade offs when they browse the web... the data that is being collected about them, the analytics that is being run against their buying behaviour, it is invisible... it is behind the scenes... and so it is very difficult for the consumer to make an informed decision." - Jeff Moss, Chief Security Officer, ICANN.

Centre for Internet and Society presents its eighth installment of the CIS Cybersecurity Series. 

The CIS Cybersecurity Series seeks to address hotly debated aspects of cybersecurity and hopes to encourage wider public discourse around the topic.

In this installment, CIS interviews Jeff Moss. Jeff is the chief security officer for ICANN. He founded Black Hat Briefings and DEF CON, two of the most influential information security conferences in the world. In 2009, Jeff was sworn in as a member of the U.S. Department of Homeland Security Advisory Council (DHS HSAC), providing advice and recommendations to the Secretary of the Department of Homeland Security on matters related to domestic security.   

 
This work was carried out as part of the Cyber Stewards Network with aid of a grant from the International Development Research Centre, Ottawa, Canada.

Report on the 5th Privacy Round Table meeting

by Maria Xynou last modified Jul 26, 2013 08:24 AM
This report entails an overview of the discussions and recommendations of the fifth Privacy Round Table in Kolkata, on 13th July 2013.

This research was undertaken as part of the 'SAFEGUARDS' project that CIS is undertaking with Privacy International and IDRC.


In 2013, the Centre for Internet and Society (CIS) in collaboration with the Federation of Indian Chambers of Commerce and Industry (FICCI), and the Data Security Council of India (DSCI), is holding a series of seven multi-stakeholder round table meetings on “privacy” from April 2013 to October 2013. The CIS is undertaking this initiative as part of their work with Privacy International UK on the SAFEGUARD project.

In 2012, the CIS and DSCI were members of the Justice AP Shah Committee which created the “Report of Groups of Experts on Privacy”. The CIS has recently drafted a Privacy (Protection) Bill 2013, with the objective of contributing to privacy legislation in India. The CIS has also volunteered to champion the session/workshops on “privacy” in the meeting on Internet Governance proposed for October 2013.

At the roundtables the Report of the Group of Experts on Privacy, DSCI´s paper on “Strengthening Privacy Protection through Co-regulation” and the text of the Privacy (Protection) Bill 2013 will be discussed. The discussions and recommendations from the round table meetings will be presented at the Internet Governance meeting in October 2013.

The dates of the seven Privacy Round Table meetings are listed below:

  1. New Delhi Roundtable: 13 April 2013

  2. Bangalore Roundtable: 20 April 2013

  3. Chennai Roundtable: 18 May 2013

  4. Mumbai Roundtable: 15 June 2013

  5. Kolkata Roundtable: 13 July 2013

  6. New Delhi Roundtable: 24 August 2013

  7. New Delhi Final Roundtable and National Meeting: 19 October 2013

Following the first four Privacy Round Tables in Delhi, Bangalore, Chennai and Mumbai, this report entails an overview of the discussions and recommendations of the fifth Privacy Round Table meeting in Kolkata, on 13th July 2013.

Presentation by Mr. Reijo Aarnio – Finnish Data Protection Ombudsman

The fifth Privacy Round Table meeting began with a presentation by Mr. Reijo Aarnio, the Finnish Data Protection Ombudsman. In particular, Mr. Aarnio opened his presentation by distinguishing between privacy and data protection and by emphasizing the need to protect both equally within a legal framework. Mr. Aarnio proceeded to highlight that 96 percent of the Finnish community believes that data protection is necessary, especially since it is considered to play an essential role in enhancing the self-determination of the individual. Furthermore, Mr. Aarnio pointed out that the right to privacy in Finland is guaranteed under section 10 of the Finnish constitution.

The Finnish Data Protection Ombudsman argued that in order for India to gain European data protection adequacy, the implementation of a regulation for data protection in the country is a necessary prerequisite. Mr. Aarnio argued that although the draft Privacy (Protection) Bill 2013 is a decisive step in regulating the use of data, the interception of communications and surveillance in India, it does not define the data controller and the data subject, both of which should be legally specified.

In order to support his argument that India needs privacy legislation, the Ombudsman clarified the term “data protection” by stating that it relates to the following:

  • individual autonomy

  • the right to know

  • the right to live without undue interference

  • the right to be evaluated on the basis of correct and relevant information

  • the right to know the criteria automatic decision-making systems are based on

  • the right to trust data security

  • the right to receive assistance from independent authorities

  • the right to be treated in accordance with all other basic rights in a democracy

  • the right to have access to public documents

  • the freedom of speech

In addition to the above, Mr. Aarnio argued that the reason why data protection is important is because it ensures the respect for human dignity, individual autonomy and honor.

The Finnish Data Protection Ombudsman gave a brief overview of the development and history of data protection, citing the Hippocratic Oath, the Great Revolutions and World War II, through all of which data protection gained increasing significance. Mr. Aarnio pointed out that as a result of the development and proliferation of technology, societies have evolved, and that data protection is a major component of the contemporary Information Society. The Ombudsman stated that in the Information Society information is money, and open data and big data are products which are being commercialised and commodified. Hence, in order to ensure that human rights are not commercialised and commodified in the process, it is necessary to establish legal safeguards which can prevent potential abuse.

Article 8 of the European Charter of Fundamental Rights guarantees the protection of personal data. Mr. Aarnio argued that the Parliament is the most important data protection authority in Europe and that privacy is legally guaranteed on three levels:

  • Protection of personal life: The Criminal Code (chapter 24) addresses and protects freedom of speech and secrecy regulations

  • Communication: Protection of content and traffic data

  • Data Protection: The Personal Data Act creates the right to know and the right to influence, as well as the right to organise one's personal life, and governs the automatic processing of personal data and the maintenance of registers

The Ombudsman also referred to Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995 on the protection of individuals with regard to the processing of personal data and on the free movement of such data.

Mr. Aarnio argued that in the contemporary ecosystem of the Information Society, countries need “Privacy by Design”, which entails the description of the processing of personal data and the evaluation of its lawfulness. In particular, the purpose for the collection and processing of data should be legally defined, as well as whether such data will be shared with third parties, disclosed and/or retained. The Ombudsman argued that India needs to define its data controllers and to legally specify their roles, in order to ensure that the management of data does not result in the infringement upon the right to privacy and other human rights.

The Finnish Data Protection Ombudsman concluded his presentation by stating that data security is not only a technological matter, but also – and in some cases, mostly – a legal issue, which is why India should enact the draft Privacy (Protection) Bill 2013.

Discussion of the draft Privacy (Protection) Bill 2013

Chapter I: Definitions

The discussion of the draft Privacy (Protection) Bill 2013 commenced with a debate on whether such a Bill is necessary at all, given that section 43 of the IT Act is considered (by participants at the round table) to regulate the protection of data. It was pointed out that although section 43 of the Information Technology Act provides some rules for data protection, the Committee has stated that these rules are inadequate. In particular, India currently lacks statutory provisions dealing with data protection, and rules are considered inadequate because they are not subject to full parliamentary debate: Parliament does not have the right to vote on or amend rules, which means that it cannot amend the rules on data protection under the IT Act. Since the rules under section 43 of the IT Act are not subject to parliamentary review, India needs a separate privacy statute. Hence, the round table reached a consensus on the discussion of the draft Privacy (Protection) Bill 2013.

Personal data is defined in the draft Privacy (Protection) Bill 2013 as any data which relates to a natural person, while sensitive personal data is defined as a subset of personal data, such as biometric data, medical history, sexual preference, political affiliation and criminal history. It was pointed out that race, religion and caste are not included in the Bill's definition for sensitive personal data because the Government of India refuses to acknowledge these types of information as personal data. According to the Government, the collection of such data is routine and there have been no cases in which such data has been breached, which is why race, religion and caste should not be included in the definition for sensitive personal information. However, the last caste census took place in 1931 and there has been none since, because it is considered a sensitive issue; this contradiction with the government's position was pointed out during the round table meeting.

A participant argued that financial information should be included within the definition for sensitive personal data. This was countered by a participant who argued that India has the Credit Information Companies Act, which covers credit information and sets out specific provisions for the protection of credit data by banks and relevant companies. Yet the question of whether general financial information should be included in the definition for sensitive personal data was further discussed, and many participants supported its inclusion in the definition.

The question of whether IP addresses should be included in the definition for personal data was raised. The response to this question was that IP addresses should be included in the definition, since they relate to the identification of a natural person. However, whether a specific IP address constitutes personal data, given that many individuals may use the Web through the same IP address, remained unclear. Other participants raised the question of whether unborn humans and deceased persons should have privacy rights. The response to this was that in India, only a court can decide whether a deceased person can have the right to privacy.

The controversy between the UID project and the protection of biometric data under the definition for sensitive personal information was discussed in the round table. In particular, it was pointed out that the UID scheme's requirement of mass biometric collection in India contradicts the protection of such data under the Bill. As the UID scheme remains unregulated, it is unclear who will have access to the biometric data, who it will be shared with, whether it will be disclosed and retained and, if so, for how long. All the questions which revolve around the implementation of the UID scheme and the use of the biometric data collected raise concerns about the extent to which such data can realistically be protected under privacy legislation.

On this note, a participant mentioned that under EU regulation an ID number is included in the definition for sensitive personal information, and it was recommended that the same be added to India's draft Privacy (Protection) Bill 2013. Furthermore, a participant recommended that fingerprints also be included in the definition for sensitive personal data, especially in light of the NPR and the UID scheme.

A participant argued that passwords should also be included in the definition for sensitive personal data, as should the private keys which are used for encryption and decryption. It was pointed out that section 69 of the IT Act requires the disclosure of encryption keys upon request from the authorities, which can potentially lead to the violation of privacy and other human rights. Hence the significance of protecting passwords and encryption keys, which safeguard data, was highly emphasized, and it was argued that they should definitely be included in the definition for sensitive personal data. This position was countered by a participant who argued that the Government of India should have access to private encryption keys for national security purposes.
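To illustrate why an encryption key deserves at least the same protection as the data it guards, consider the minimal, hypothetical Python sketch below (it uses the third-party 'cryptography' library; nothing in the Bill or the IT Act prescribes any particular scheme, and the message text is purely illustrative). Whoever holds a symmetric key can read everything encrypted with it, which is why compelled key disclosure effectively amounts to data disclosure:

    # Hypothetical illustration: a single symmetric key both locks and unlocks data.
    # Requires the third-party 'cryptography' package (pip install cryptography).
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()      # the secret whose status the round table debated
    cipher = Fernet(key)

    token = cipher.encrypt(b"sensitive personal data")
    print(token)                     # unreadable without the key

    print(cipher.decrypt(token))     # anyone holding the key recovers the plaintext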

On the definition of sensitive personal data, it was emphasized that the term should relate to all data which can be used for discrimination, which is why it needs to be protected. It was further noted that it took Europe twelve years to reach a definition for personal data, which is why India still needs to look at the issue in depth and anticipate all the possible violations which may arise from the non-regulation of various types of data. Most participants agreed that financial information, passwords and private encryption keys should be added to the definition for sensitive personal data.

The fifth round table entailed a debate on whether political affiliation should be included in the definition for sensitive personal data. In particular, one participant argued that political parties disclose the names of their members and that in many cases they are required to do so in order to show their sources of income. Hence, it was argued that political affiliation should not be included in the definition for sensitive personal data, since it is not realistic to expect political parties to protect their members' privacy. This was countered by other participants who argued that anonymity in political communications is important, especially when an individual is in a minority position, which is why political affiliation should be included in the definition for sensitive personal data.

The discussion on the definitions in the draft Privacy (Protection) Bill 2013 concluded with comments that the definition of surveillance excludes many types of surveillance. In particular, it was argued that the definition does not appear to cover artificial intelligence, screen shots and various other forms of surveillance, all of which should be regulated.

Chapter II: Right to Privacy

Section 4 of the draft Privacy (Protection) Bill 2013 states that all natural persons have a right to privacy, while section 5 of the Bill includes exemptions to that right. On this note, it was pointed out during the round table that there is no universal definition of privacy, and thus it is challenging to define the term and to regulate it. Furthermore, the rapid pace at which technology is proliferating was emphasized, along with its impact on the right to privacy. For example, it was mentioned that emails were not covered by privacy legislation in the past, and that legislation needs to be amended accordingly. The European Data Protection Directive was established in 1995 and does not regulate many privacy issues which arise through the Internet, which is why it is currently being reviewed. Similarly, it was argued that privacy legislation in India should encompass provisions for potential data breaches which may occur through the Internet and various forms of technology.

A participant argued that the draft Privacy (Protection) Bill 2013 should include provisions for data subjects which enable them to exercise their rights. In particular, it was argued that data subjects should have the right to access information collected and retained about them and the right to make corrections. The response to this comment was that the Bill may be split into two separate Bills, of which one would regulate data protection and the other the interception of communications and surveillance, while the data subject would be addressed extensively. Furthermore, participants raised questions about how to define the data controller and the data subject within the Indian context.

Other questions raised during the round table included whether spam should be addressed by the Bill. Several participants argued that spam should not be regulated, as it is not necessarily harmful to data subjects. Other participants argued that the issue of access to data should be addressed prior to the definition of privacy. Another argument was that commercial surveillance should not be conducted without restrictions, which is why it should not be included in the exemptions to the right to privacy. It was also pointed out that residential surveillance should be allowed, as long as the cameras are pointed inwards and do not capture footage of third parties outside the residence. On this note, it was argued that surveillance in the workplace should also be exempted from the right to privacy, as the workplace too can be considered the private property of the owner. Moreover, it was emphasized that the surveillance of specific categories of people should also be excluded from the exemptions to the right to privacy.

A participant argued that in some cases NGOs may be collecting information for some “beneficial purpose” and that such cases should be excluded from the exemptions to the right to privacy. Other participants argued that in many cases data needs to be collected for market research and that the Bill should regulate what applies in such cases. All such arguments were countered by a participant who argued that section 5 of the Bill on the exemptions to the right to privacy should be deleted, as it creates too many complications. This recommendation was backed up by the example of a husband capturing a photograph of his wife and then publishing the image without her consent.

During this discussion, a participant raised the question of to what extent the right to privacy applies to minors. This question was supported by the example of Facebook, where many minors have profiles but the extent to which this data is protected remains ambiguous. Furthermore, it was pointed out that it remains unclear whether privacy legislation can practically safeguard minors who choose to share their data online. A participant responded to these concerns by stating that Facebook is a data controller and has to comply with privacy law to protect its customers' data. It was pointed out that it does not matter if the data controller is a company or an NGO; in every case, the data controller is obliged to comply with data protection law and regulations.

Furthermore, it was pointed out that Facebook allows minors aged 13 and above to create a profile, while it remains unclear how minors can enforce their privacy rights. In particular, it remains unclear how the mediated collection of minors' data can be regulated, and it was recommended that this be addressed by the Bill. A participant replied to this by stating that Indian laws rule in favour of minors, but that this nonetheless remains a grey area. In particular, it was pointed out that rules under section 43 of the Information Technology (IT) Act cover Internet access by minors, but this still remains an unclear area which needs further debate and analysis.

The question which prevailed at the end of the discussion of Chapter 2 of the Bill concerned social media and minors, and how minors' data can be protected when it is published immediately through social media platforms such as Facebook. Furthermore, it was recommended that the Bill address the practical operationalisation of the right to privacy within the Indian context.

Chapter III: Protection of Personal Data

The discussion of Chapter 3 of the draft Privacy (Protection) Bill 2013 on the protection of personal data commenced with a reference to the nine privacy principles of the Justice AP Shah Committee. The significance of the principles of notice and consent was outlined, as it was argued that individuals should have the right to be informed about the data collected about them, as well as the right to access such data and make corrections where necessary.

Collection of Personal Data

The discussion on the collection of personal data (as outlined in Section 6 of Chapter 3 of the Bill) commenced with a participant arguing that a company seeking to collect personal data should always have a stated function. In particular, a company selling technological products or services should not collect biometric data, for example, unless it serves a specified function. It was pointed out that data collection should be restricted to the specified purposes. For example, a hospital should be able to collect medical data because it relates to its stated function, but an online company which provides services should not be eligible to collect such data, as it deviates from its stated function.

During the discussion, it was emphasized that individuals should have the right to be informed when their data is being collected, which data is being collected, the conditions for the disclosure of such data and everything else that revolves around the use of their data once it has been collected. However, a participant questioned whether it is practically feasible for individuals to provide consent to the collection of their data every time it is collected, especially since the privacy policies of companies keep changing. Moreover, it was questioned whether companies can or should re-obtain the consent of their customers once their privacy policy has changed. On this note, a participant argued that companies should be obliged to notify their customers every time their privacy policy changes and every time the purpose behind their data collection changes.

On the issue of consent for data collection, a participant argued that individuals should have the right to withdraw their consent, even after their data has been collected and in such cases, such data should be destroyed. This was countered by another participant who argued that it is not realistic to expect companies to acquire individual consent every time the purpose behind data collection changes, nor is it feasible to allow for the withdrawal of consent without probable cause.

The issue of indirect consent to the collection of personal data was raised and, in particular, several participants argued that the Bill should have provisions which would regulate circumstances where indirect consent can be obtained for the collection of personal data. Furthermore, it was emphasized that the Bill should also include a notice for all potential purposes of data collection which may arise in the future; if the purpose for data collection changes based on conditions specified, then companies should not be mandated to notify individuals. Moreover, a participant argued that the Bill should include provisions which would enable individuals to opt-in and/or opt-out from data collection.

On the issue of consent, it was further outlined that consent provides a legitimate basis for processing data and that the data subject should have the right to be informed prior to the collection of his or her data. However, it was emphasized that the draft Privacy (Protection) Bill 2013 is a very strict regulation, as consent cannot always be acquired prior to data collection; there are many cases where this is not practically feasible. It was pointed out that the European Data Protection Directive makes clear that consent cannot always be acquired prior to data collection. The example of medical cases was mentioned, as patients may not always be capable of providing consent to data collection which may be necessary.

In particular, it was highlighted that the European Data Protection Directive includes provisions for the processing of personal data, as well as exceptions for when consent is not required prior to data collection. The Directive guarantees the legitimate interest of the data controller, and data processing is based upon the provisions of privacy legislation. The outsourcing of data is regulated in the European Union, and it was recommended that India regulate it too. Following this comment, it was stated that the recent leaks on the NSA's surveillance raise the issues of non-consensual state collection of data and non-consensual private disclosure of data, and a brief debate revolved around these issues in the round table.

On the issue of mediated data collection, the situations in which collected data is mediated by third parties were analysed. It was recommended that the law be flexible enough to address the various types of cases in which collected data is mediated, such as when a guardian needs to handle, and take decisions about, the data of a mentally disabled person being collected. However, it was pointed out that mediated data collection should be addressed sectorally, as a doctor, for example, would address mediated data in a different manner than a company. It was emphasized that specific cases – such as a parent taking a mediated decision on the data collection of his or her child – should be enabled, whereas all other cases should be prohibited. Thus it was recommended that language to address the mediated collection of data be included in the Bill.

A participant raised the question of whether there should be separate laws for the private collection of data and the state collection of data, as is the case in Canada. Another question raised was what happens when state collectors hire private contractors. The UID was cited as an example of state collection of data in which private contractors have been hired and are involved in the process of data collection. This could potentially enable the collection of, and access to, data by unauthorised third parties to which individuals may not have given their consent. Thus it was strongly recommended that the Bill address such cases and prevent the unauthorised collection of and access to data.

The discussion on the collection of personal data ended with an interesting test case for privacy: should the media have the right to disclose individuals' personal data? A debate revolved around this question, and participants recommended that the Bill regulate the collection, processing, sharing, disclosure and retention of personal data by the media.

Retention of Personal Data

The discussion on the retention of personal data commenced with the statement that there are various exceptions to the retention of data in India, which are outlined in various court cases. It was pointed out that data should be retained in compliance with the law, but this is problematic as, on various occasions, a verbal order by a policeman can be considered adequate, which can potentially increase the probability of abuse. A question raised was whether an Act of Parliament should allow for the long-term storage of data, especially when there is inadequate evidence to support its long-term retention. It was pointed out that in some cases there are laws which allow for the storage of data for up to ten years, without the knowledge – let alone the consent – of the individual. Thus, the issue of data retention in India remains vague and should be addressed by the draft Privacy (Protection) Bill 2013.

Questions were raised on the duration of data retention periods and on whether there should be one general data retention law or several sectoral data retention laws. The participants disagreed on whether an Act of Parliament should regulate data retention or whether it should be regulated by sectoral authorities. A participant recommended “privacy by design” and stated that the question of data retention should be addressed by data controllers. Other participants raised the question of purpose limitation, especially for cases when data is re-retained after the end of its retention period. A participant recommended that requirements be established for the anonymisation of data once it has exceeded its retention period. However, this proposal was countered by participants who argued that the practical enforcement of the anonymisation of retained data is not feasible within India.

Destruction of Personal Data

The retention of personal data can be prevented once data has been destroyed. However, participants argued that various types of data are collected through surveillance products which are controlled by private parties. In such cases, it was argued, it remains unclear how it can be verified that data has indeed been destroyed.

A participant argued that the main problem with data destruction is that even after data has been deleted, it can be retrieved up to seven times; thus the question which arises is how individuals can know whether their data has been permanently destroyed or is being secretly retrieved. Questions were raised on how the permanent retention of data can be prevented, especially when even deleted data can be retrieved. Hence it was recommended that information security experts cooperate with data controllers and the Privacy Commissioner to ensure that data is permanently destroyed and that it is not accessed after the end of its retention period. Such experts would verify that data is actually destroyed.

Another participant pointed out the difference between the wiping of data and the deletion of data. In particular, the participant argued that deleted data merely remains on the storage medium until it is overwritten by other data, and can potentially be recovered, whereas wiped data is deliberately overwritten so that it can never be recovered. The participant recommended that the Bill explicitly state that data be wiped, in order to ensure that data is not indirectly retained.
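To make this distinction concrete, the following minimal, hypothetical Python sketch (purely illustrative; neither the Bill nor the round table prescribed any implementation) contrasts ordinary deletion, which only removes the file system's reference to the data, with a naive multi-pass overwrite before removal. Even such wiping offers no guarantee on modern SSDs, journaling file systems or backed-up storage, which underlines the earlier recommendation that information security experts verify destruction:

    import os
    import secrets

    def delete_file(path):
        # "Deletion" in the ordinary sense: the directory entry disappears,
        # but the underlying bytes stay on disk until they happen to be
        # overwritten, so forensic recovery may still be possible.
        os.remove(path)

    def wipe_file(path, passes=3):
        # A naive "wipe": overwrite the file's contents with random bytes
        # several times before removing it. On SSDs and journaling file
        # systems this still does not guarantee destruction.
        size = os.path.getsize(path)
        with open(path, "r+b") as f:
            for _ in range(passes):
                f.seek(0)
                f.write(secrets.token_bytes(size))
                f.flush()
                os.fsync(f.fileno())
        os.remove(path)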

Processing of Personal Data

The discussion on the processing of personal data began with the question of national archives. In particular, participants argued that if the processing of data is strictly regulated, access to national archives would be restricted, and the draft Privacy (Protection) Bill 2013 should address this issue.

Questions were raised on the non-consensual processing of personal data and on how individual consent should be acquired prior to the processing of personal data. It was pointed out that the Article 29 Working Party has published an Opinion on purpose limitation with regard to data processing, and it was recommended that a similar approach be adopted in India.

Furthermore, it was stated that IT companies are processing data from the EU and the U.S., but it remains unclear how individual consent can be obtained in such cases. A debate evolved on how to bind foreign data processors to meet the data requirements of India, as a minimum prerequisite to ensure that outsourced data is not breached. In light of the Edward Snowden leaks of NSA surveillance, many questions were raised on how Indian data outsourced and stored abroad can be protected.

It was highlighted during the round table that all data processing in India requires certification, but since the enforceability of the contracts relies on individuals, this raises issues of data security. Moreover, questions were raised on how Indian companies can protect the data of their foreign data subjects. Thus, it was recommended that the processing of data is strictly regulated through the draft Privacy (Protection) Bill 2013 to ensure that outsourced data and data processed in the country is not breached.

Security of Personal Data

On the issue of data security, the participants argued that the data subject should always be informed in cases when the confidentiality of their personal data is violated. Confidentiality is usually contractually limited, whereas secrecy is not, which is why both terms are included in the draft Privacy (Protection) Bill 2013. In particular, secrecy is usually used for public information, whereas confidentiality is not.

Participants argued that the Bill should include restrictions on the media in order to ensure that the confidentiality and integrity of their sources' data is preserved. Several participants stated that the Bill should also include provisions for whistleblowers which would provide security and confidentiality for their data. The participants of the round table engaged in a debate on whether the media should be strictly regulated in order to ensure the confidentiality of their sources' data. On the one hand, it was argued that numerous data breaches have occurred as a result of the media mishandling their sources' data. On the other hand, it was stated that all duties of secrecy are subject to the public interest, which is why the media report on them and why the media should not be restricted.

Disclosure of Personal Data

The discussion on the disclosure of personal data commenced with participants pointing out that the draft Privacy (Protection) Bill 2013 does not include requirements for consent prior to the disclosure of personal data, which may potentially lead to abuse. Questions were raised on the outsourcing of Indian data abroad and on the consequences of its foreign disclosure. Once data is outsourced, it remains unclear how the lawful disclosure or non-disclosure of data can be preserved, which is why it was recommended that the Bill addresses such issues.

A participant argued that there is a binding relationship between the data controller and the data subject and that disclosure should be regulated on a contractual level. Another participant raised the question of enforcement: How can regulations on the disclosure of personal data be enforced? The response to this question was that the law should focus on the data controller and that when Indian data is being outsourced abroad, the Indian data controller should ensure that the data subjects' data is not breached. However, other participants raised the question of how data can be protected when it is outsourced to countries where the rule of law is not strong and when the country is considered inadequate in terms of data protection.

With an increased transnational flow of information, questions arise on how individuals can protect their information. A participant recommended that it be mandatory for companies to state in their contracts who they outsource data to and whether such data will be disclosed to third parties. However, this proposal was countered by a participant who argued that even if this were enforced, it is still not possible to enforce the rights of an Indian data subject in a country which does not have a strong rule of law or which generally has weak legislation. A specific example was mentioned in which, say, Infosys and Wipro Singapore have a contractual agreement under which Indian data is outsourced. It was pointed out that if such data is breached, it remains unclear whether the individual should raise the issue with Wipro India, which law should apply in such a case, and whether the companies should be liable.

A participant suggested that the data controller be allowed to disclose data without having acquired prior consent if the Government of India requests it. However, this was countered by a participant who argued that even in such a case, the question of regulating access to data still remains. Other participants argued that the Right to Information Act has been misused and that too much information is currently being disclosed. It was recommended that the Right to Information Act be amended and that the Bill include strict regulations for the disclosure of personal data.

Meeting Conclusion

The fifth Privacy Round Table meeting commenced with a presentation on privacy and data protection by Mr. Reijo Aarnio, the Finnish Data Protection Ombudsman, and proceeded with a discussion of the draft Privacy (Protection) Bill 2013. The participants engaged in a heated debate and provided recommendations for the definitions used in the Bill, as well as for the regulation of data protection. The recommendations for the improvement of the draft Privacy (Protection) Bill 2013 will be considered and incorporated in the final draft.

Privacy Round Table, Delhi

by Prasad Krishna last modified Aug 12, 2013 10:42 AM

Invite-Delhi.pdf — PDF document, 1157 kB

More than a Hundred Global Groups Make a Principled Stand against Surveillance

by Elonnai Hickok last modified Jul 31, 2013 02:26 PM
For some time now there has been a need to update understandings of existing human rights law to reflect modern surveillance technologies and techniques.

Nothing could demonstrate the urgency of this situation more than the recent revelations confirming the mass surveillance of innocent individuals around the world.

To move toward that goal, today we’re pleased to announce the formal launch of the International Principles on the Application of Human Rights to Communications Surveillance. The principles articulate what international human rights law – which binds every country across the globe – requires of governments in the digital age. They speak to a growing global consensus that modern surveillance has gone too far and needs to be restrained. They also give benchmarks that people around the world can use to evaluate and push for changes in their own legal systems.

The product of over a year of consultation among civil society, privacy and technology experts, including the Centre for Internet and Society (read here, here, here and here), the principles have already been co-signed by over a hundred organisations from around the world. The process was led by Privacy International, Access, and the Electronic Frontier Foundation.

The release of the principles comes on the heels of a landmark report from the United Nations Special Rapporteur on the right to Freedom of Opinion and Expression, which details the widespread use of state surveillance of communications, stating that such surveillance severely undermines citizens’ ability to enjoy a private life, freely express themselves and enjoy their other fundamental human rights. And recently, the UN High Commissioner for Human Rights, Navi Pillay, emphasised the importance of applying human rights standards and democratic safeguards to surveillance and law enforcement activities.

"While concerns about national security and criminal activity may justify the exceptional and narrowly-tailored use of surveillance programmes, surveillance without adequate safeguards to protect the right to privacy actually risk impacting negatively on the enjoyment of human rights and fundamental freedoms," Pillay said.

The principles, summarised below, can be found in full at necessaryandproportionate.org. Over the next year and beyond, groups around the world will be using them to advocate for changes in how present laws are interpreted and how new laws are crafted.

We encourage privacy advocates, rights organisations, scholars from legal and academic communities, and other members of civil society to support the principles by adding their signature.

To sign, please send an email to [email protected], or visit https://www.necessaryandproportionate.org/about

Summary of the 13 principles

  • Legality: Any limitation on the right to privacy must be prescribed by law.
  • Legitimate Aim: Laws should only permit communications surveillance by specified State authorities to achieve a legitimate aim that corresponds to a predominantly important legal interest that is necessary in a democratic society.
  • Necessity: Laws permitting communications surveillance by the State must limit surveillance to that which is strictly and demonstrably necessary to achieve a legitimate aim.
  • Adequacy: Any instance of communications surveillance authorised by law must be appropriate to fulfill the specific legitimate aim identified.
  • Proportionality: Decisions about communications surveillance must be made by weighing the benefit sought to be achieved against the harm that would be caused to users’ rights and to other competing interests.
  • Competent judicial authority: Determinations related to communications surveillance must be made by a competent judicial authority that is impartial and independent.
  • Due process: States must respect and guarantee individuals' human rights by ensuring that lawful procedures that govern any interference with human rights are properly enumerated in law, consistently practiced, and available to the general public.
  • User notification: Individuals should be notified of a decision authorising communications surveillance with enough time and information to enable them to appeal the decision, and should have access to the materials presented in support of the application for authorisation.
  • Transparency: States should be transparent about the use and scope of communications surveillance techniques and powers.
  • Public oversight: States should establish independent oversight mechanisms to ensure transparency and accountability of communications surveillance.
  • Integrity of communications and systems: States should not compel service providers, or hardware or software vendors to build surveillance or monitoring capabilities into their systems, or to collect or retain information.
  • Safeguards for international cooperation: Mutual Legal Assistance Treaties (MLATs) entered into by States should ensure that, where the laws of more than one State could apply to communications surveillance, the available standard with the higher level of protection for users should apply.
  • Safeguards against illegitimate access: States should enact legislation criminalising illegal communications surveillance by public and private actors.

The Audacious ‘Right to Be Forgotten’

by Kovey Coles last modified Jul 31, 2013 10:08 AM
There has long been speculation over the permanency of our online presence. Posting excessively personal details, commenting in a way which later proves embarrassing, being caught in unflattering public photos: to our chagrin, all of these unfortunate situations often persist on the web, and can continue to haunt us in future years.

Perhaps less dire, what if someone decides that she no longer wants the history of her internet action stored in online systems?

So far, there has been confusion over what should be done, and what realistically can be done about this type of permanent presence on a platform as complex and international in scope as the internet. But now, the idea of a right to be forgotten may be able to define the rights and responsibilities in dealing with unwanted data.

The right to be forgotten is an interesting and highly contentious concept currently being debated in the new European Union Data Protection Regulations.[1]

The Data Protection Regulation was proposed in 2012 by EU Commissioner Viviane Reding and stands to replace the EU’s previous Data Protection law, which was enacted in 1995. Referred to as the “right to be forgotten” (RTBF), article 17 of the proposal would essentially allow an EU citizen to demand that service providers “take all reasonable steps” to remove his or her personal data from the internet, as long as there is no “legitimate” reason for the provider to retain it.[1] Despite the evident emphasis on personal privacy, the proposal is surrounded by controversy and is facing resistance from many parties, with a range of concerns over the ramifications the RTBF could bring.

Not only are major IT companies staunchly opposed to the daunting task of being made responsible for the erasure of data floating around the web, but governments like the United States and even Great Britain are objecting to the proposal as well.[2],[3]

From a commercial perspective, IT companies and US lobbying forces view the concept of RTBF as a burden and a waste of resources for service providers to implement. Largely due to the RTBF clause, the new EU Data Protection proposal as a whole has witnessed intense, “unprecedented” lobbying by the largest US tech companies and US lobby groups.[4],[5] From a different angle, there are those like Great Britain, whose grievances with the RTBF concern its overzealous aim and insatiable demands.[2] There are doubts as to whether a company will even be able to track down and erase all forms of the data in question. The British Ministry of Justice stated, "The UK does not support the right to be forgotten as proposed by the European commission. The title raises unrealistic and unfair expectations of the proposals."[2] Many experts share these feasibility concerns. The Council of European Professional Informatics Societies (CEPIS) wrote a short report on the ramifications of cloud computing practices in 2011, in which it noted, “It is impossible to guarantee complete deletion of all copies of data. Therefore it is difficult to enforce mandatory deletion of data. Mandatory deletion of data should be included into any forthcoming regulation of Cloud Computing services, but still it should not be relied on too much: the age of a ‘Guaranteed complete deletion of data’, if it ever existed has passed."[6]

Feasibility aside, the most compelling issue in the debate over the RTBF is the demanding challenge of balancing and prioritizing parallel rights. When it comes to forced data erasure, conflicts between the right to be forgotten and freedom of speech and expression easily arise. Which right takes precedence over the other?

Some RTBF opponents fear that RTBF will hinder freedom of speech. They have a valid point. What is the extent of personal data erasure? Abuse of RTBF could result in some strange, Orwellian cyberspace where the mistakes or blemishes of society are all erased or constantly amended, and only positivity fills the internet. There are reasonable fears that a chilling effect may come into play once providers face the hefty noncompliance fines of the Data Protection law, and begin to automatically opt for customer privacy over considerations for freedom of expression. Moreover, what safeguards may be in place to prevent politicians or other public figures from removing bits of unwanted coverage?

Although these examples are extreme, considerations like these need to be made in the development of this law. With the amount of backlash from various entities, it is clear that a concept like the right to be forgotten could not exist as a simple, generalized law. It needs refinement.

Still, the concept of a RTBF is not without its supporters. Viktor Mayer-Schönberger, professor of Internet Governance at Oxford Internet Institute, considers RTBF implementation feasible and necessary, saying that even if it is difficult to remove all traces of an item, "it might be in Google's back-up, but if 99% of the population don't have access to it you have effectively been deleted."[7] Additionally, he claims that the undermining of freedom of speech and expression is "a ridiculous misstatement."[7] To him, the right to be forgotten is tied intricately to the important and natural process of forgetting things of the past.

Moreover, the Data Protection Regulation does mention certain exceptions for the RTBF, including protection for "journalistic purposes or the purpose of artistic or literary expression." [1] The problem, however, is the seeming contradiction between the RTBF and its own exceptions. In practice, it will be difficult to reconcile the powers granted by the RTBF with the limitations claimed in other sections of the Data Protection Regulation.

Currently, there are a few clean and straightforward implementations of the RTBF. One would be the removal of mined user data which has been accumulated by service providers. Here, invoking the right would be possible once a person has deleted accounts or canceled contracts with a service (thereby fulfilling the notion that the service no longer has a "legitimate" reason to retain the data). Another may be the case of personal data given by minors who later want their data removed, which is an important example mentioned in Reding’s original proposal.[4] These narrow cases are some of the only instances where the RTBF may be used without fear of interference with other social rights. Broader implementations of the RTBF concept, in its current unrefined form, may conflict too much with other freedoms, especially freedom of expression.

Overall, the Right to Be Forgotten is a noble concept, born out of concern for the citizen being overpowered by the internet. As an early EU publication states, "The [RTBF] rules are about empowering people, not about erasing past events or restricting the freedom of the press."[8] But at this point, too many clear details seem to be lacking from the draft design of the RTBF. There is concern that without proper deliberation, the concept could lead to unforeseen and undesirable outcomes. Privacy is a fundamental right that deserves to be protected, but policy makers cannot blindly follow the ideals of one right to the point where it interferes with other aspects of society.

Fortunately, recent amendment proposals have attempted some refinement of the bill. Jeffrey Rosen writes in the Stanford Law Review about a certain key concept that could help legitimize the right, namely an amendment proposing that only personally contributed data may be rescinded.[9] This would help avoid interference with others’ rights to expression, and provide limitations on the extent of right to be forgotten claims. As Leslie Harris, president of the Center for Democracy and Technology wrote in the Huffington Post, amendments are needed which can specifically define personal data in the RTBF sense; thereby distinguishing which type of data is allowed to be removed.[10] In the upcoming months, the European Parliament will be considering such amendments to the proposal. This time will be crucial as it will determine if the development of the right to be forgotten will make it a viable option for the EU’s 500 million citizens.

But even after terms are defined and safeguards are established, this underlying philosophical question remains:

Should a person be able to reclaim the right to privacy after willingly giving it up in the first place?

The RTBF is obviously a contentious topic, one which may need to be gauged individually by nation states; it will soon be revealed whether the EU becomes the first to adopt the right. If the RTBF fails to pass in the European Parliament, I would hope that it at least serves to remind people of the permanence of the data they add to the internet, further incentivizing careful consideration of what one yields to the web. Rights frequently evolve and expand to meet societal or technological advances. If we are to expand the concept of privacy, however, then we must do so with proper consideration, so that privacy does not gain disproportionate power over other rights, or vice versa.


[1]. http://bit.ly/WSZvHv

[2]. http://bit.ly/YxKaNJ

[3]. http://tcrn.ch/YdH82f

[4]. http://bit.ly/196E8qj

[5]. http://bit.ly/wJKWTZ

[6]. http://bit.ly/15aoknF

[7]. http://bit.ly/Z3JbRU

[8]. http://bit.ly/xfodhI

[9]. http://bit.ly/13uyda5

[10]. http://huff.to/16P2XIS

India's National Cyber Security Policy in Review

by Jonathan Diamond last modified Jul 31, 2013 10:40 AM
Earlier this month, the Department of Electronics and Information Technology released India’s first National Cyber Security Policy. Years in the making, the Policy sets high goals for cyber security in India and covers a wide range of topics, from institutional frameworks for emergency response to indigenous capacity building.

What the Policy achieves in breadth, however, it often lacks in depth. Vague, cursory language ultimately prevents the Policy from being anything more than an aspirational document. In order to translate the Policy’s goals into an effective strategy, a great deal more specificity and precision will be required.

The Scope of National Cyber Security

Where such precision is most required is in definitions. Having no legal force itself, the Policy arguably does not require the sort of legal precision one would expect of an act of Parliament, for example. Yet the Policy deals in terms plagued with ambiguity, cyber security not the least among them. In forgoing basic definitions, the Policy fails to define its own scope, and as a result it proves remarkably broad and arguably unfocused.

The Policy’s preamble comes close to defining cyber security in paragraph 5 when it refers to "cyber related incident[s] of national significance" involving "extensive damage to the information infrastructure or key assets…[threatening] lives, economy and national security." Here at least is a picture of cyber security on a national scale, a picture which would be quite familiar to Western policymakers: computer security practices "fundamental to both protecting government secrets and enabling national defence, in addition to protecting the critical infrastructures that permeate and drive the 21st century global economy."[*] The paragraph 5 definition of sorts becomes much broader, however, when individuals and businesses are introduced, and threats like identity theft are brought into the mix.

Here the Policy runs afoul of a common pitfall: conflating threats to the state or society writ large (e.g. cyber warfare, cyber espionage, cyber terrorism) with threats to businesses and individuals (e.g. fraud, identity theft). Although both sets of threats may be fairly described as cyber security threats, only the former is worthy of the term national cyber security. The latter would be better characterized as cyber crime. The distinction is an important one, lest cyber crime be “securitized,” or elevated to an issue of national security. National cyber security has already provided the justification for the much decried Central Monitoring System (CMS). Expanding the range of threats subsumed under this rubric may provide a pretext for further surveillance efforts on a national scale.

Apart from mission creep, this vague and overly broad conception of national cyber security risks overwhelming an as yet underdeveloped system with more responsibilities than it may be able to handle. Where cyber crime might be left up to the police, its inclusion alongside true national-level cyber security threats in the Policy suggests it may be handled by the new "nodal agency" mentioned in section IV. Thus clearer definitions would not only provide the Policy with a more focused scope, but they would also make for a more efficient distribution of already scarce resources.

What It Gets Right

Definitions aside, the Policy actually gets a lot of things right — at least as an aspirational document. It certainly covers plenty of ground, mentioning everything from information sharing to procedures for risk assessment / risk management to supply chain security to capacity building. It is a sketch of what could be a very comprehensive national cyber security strategy, but without more specifics, it is unlikely to reach its full potential. Overall, the Policy is much of what one might expect from a first draft, but certain elements stand out as worthy of special consideration.

First and foremost, the Policy should be commended for its commitment to “[safeguarding] privacy of citizen’s data” (sic). Privacy is an integral component of cyber security, and in fact other states’ cyber security strategies have entire segments devoted specifically to privacy. India’s Policy stands to be more specific as to the scope of these safeguards, however. Does the Policy aim primarily to safeguard data from criminals? Foreign agents? Could it go so far as to protect user data even from its own agents? Indeed this commitment to privacy would appear at odds with the recently unveiled CMS. Rather than merely paying lip service to the concept of online privacy, the government would be well advised to pass legislation protecting citizens’ privacy and to use such legislation as the foundation for a more robust cyber security strategy.

The Policy also does well to advocate “fiscal schemes and incentives to encourage entities to install, strengthen and upgrade information infrastructure with respect to cyber security.” Though some have argued that such regulation would impose inordinate costs on private businesses, anyone with a cursory understanding of computer networks and microeconomics could tell you that “externalities in cybersecurity are so great that even the freest free market would fail”—to quote expert Bruce Schneier. In less academic terms, a network is only as strong as its weakest link. While it is true that many larger enterprises take cyber security quite seriously, small and medium-sized businesses either lack immediate incentives to invest in security (e.g. no shareholders to answer to) or more often lack the basic resources to do so. Some form of government transfer for cyber security related investments could thus go a long way toward shoring up the country’s overall security.

The Policy also “[encourages] wider usage of Public Key Infrastructure (PKI) within Government for trusted communication and transactions.” It is surprising, however, that the Policy does not mandate the usage of PKI. In general, the document provides relatively few details on what specific security practices operators of Critical Information Infrastructure (CII) can or should implement.
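
By way of illustration only (the Policy prescribes no particular implementation, and the snippet below is a hypothetical sketch using Python's third-party cryptography package), the following shows the digital signature primitive on which PKI-based “trusted communication” ultimately rests. A full PKI layers certifying authorities, certificates and revocation on top of this sign-and-verify step.

    # Minimal sketch (not from the Policy): the sign/verify primitive that PKI builds on,
    # using the third-party 'cryptography' package. A real PKI adds certifying authorities,
    # certificates, revocation and key management on top of this.
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes

    # A government department generates a key pair; in practice the public key would be
    # distributed inside a certificate signed by a certifying authority.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    message = b"Contents of an official, trusted communication"

    # The sender signs the message with the private key.
    signature = private_key.sign(
        message,
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256(),
    )

    # The receiver verifies with the public key; verify() raises InvalidSignature on tampering.
    public_key.verify(
        signature,
        message,
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256(),
    )
    print("signature verified: message is authentic and untampered")

Mandating PKI would mean specifying who issues and validates the certificates that bind such public keys to government entities, which is precisely the kind of operational detail the Policy leaves open.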

Where It Goes Wrong

One troubling aspect of the Policy is its ambiguous language with respect to acquisition policies and supply chain security in general. The Policy, for example, aims to “[mandate] security practices related to the design, acquisition, development, use and operation of information resources” (emphasis added). Indeed, section VI, subsection A, paragraph 8 makes reference to the “procurement of indigenously manufactured ICT products,” presumably to the exclusion of imported goods. Although supply chain security must inevitably factor into overall cyber security concerns, such restrictive acquisition policies could not only deprive critical systems of potentially higher-quality alternatives but—depending on the implementation of these policies—could also sharpen the vulnerabilities of these systems.

Not only do these preferential acquisition policies risk mandating lower quality products, but it is also unlikely that they will be able to keep up with the rapid pace of innovation in information technology. The United States provides a cautionary tale. The U.S. National Institute of Standards and Technology (NIST), tasked with producing cyber security standards for operators of critical infrastructure, made its first update to a 2005 set of standards earlier this year. Other regulatory agencies, such as the Federal Energy Regulatory Commission (FERC), move at a marginally faster pace yet are nevertheless delayed by bureaucratic processes. FERC has already moved to implement Version 5 of its Critical Infrastructure Protection (CIP) standards, nearly a year before the deadline for Version 4 compliance. The need for new standards thus outpaces the ability of industry to effectively implement them.

Fortunately, U.S. cyber security regulation has so far been technology-neutral. Operators of Critical Information Infrastructure are required only to ensure certain functionalities and not to procure their hardware and software from any particular supplier. This principle ensures competition and thus security, allowing CII operators to take advantage of the most cutting-edge technologies regardless of name, model, etc. Technology neutrality does of course raise risks, such as those emphasized by the Government of India regarding Huawei and ZTE in 2010. Risk assessment must, however, remain focused on the technology in question and avoid politicization. India’s cyber security policy can be technology neutral as long as it follows one additional principle: trust but verify.

Verification may be facilitated by the use of free and open-source software (FOSS). FOSS provides security through transparency as opposed to security through obscurity and thus enables more agile responses to security vulnerabilities. Users can identify and patch bugs themselves, or otherwise take advantage of the broader user community for such fixes. Thus open-source software promotes security in much the same way that competitive markets do: by accepting a wide range of inputs.

Despite the virtues of FOSS, there are plenty of good reasons to run proprietary software, e.g. fitness for purpose, cost, and track record. Proprietary software makes verification somewhat more complicated but not impossible. Source code escrow agreements have recently gained some traction as a verification measure for proprietary software, even with companies like Huawei and ZTE. In 2010, the infamous Chinese telecommunications giants persuaded the Indian government to lift its earlier ban on their products by concluding just such an agreement. Clearly, trust but verify is eminently practicable, and so, therefore, is technology neutrality.

What’s Missing

Level of detail aside, what is most conspicuously absent from the new Policy is any framework for institutional cooperation beyond 1) the designation of CERT-In “as a Nodal Agency for coordination of all efforts for cyber security emergency response and crisis management” and 2) the designation of the “National Critical Information Infrastructure Protection Centre (NCIIPC) to function as the nodal agency for critical information infrastructure protection in the country.” The Policy mentions additionally “a National nodal agency to coordinate all matters related to cyber security in the country, with clearly defined roles & responsibilities.” Some clarity with regard to roles and responsibilities would certainly be in order. Even among these three agencies—assuming they are all distinct—it is unclear who is to be responsible for what.

More confusing still is the number of other pre-existing entities with cyber security responsibilities, in particular the National Technical Research Organization (NTRO), which in an earlier draft of the Policy was to have authority over the NCIIPC. The Ministry of Defence has likewise bolstered its cyber security and cyber warfare capabilities in recent years. Is it appropriate for these to play a role in securing civilian CII? Finally, the already infamous Central Monitoring System, justified predominantly on the very basis of cyber security, receives no mention at all. For a government that is only now releasing its first cyber security policy, India has developed a fairly robust set of institutions around this issue. It is disappointing that the Policy does not more fully address questions of roles and responsibilities among government entities.

Not only is there a lack of coordination among government cyber security entities, but there is no mention of how the public and private sectors are to cooperate on cyber security information—other than oblique references to “public-private partnerships.” Certainly there is a need for information sharing, which is currently facilitated in part by the sector-level CERTs. More interesting, however, is the question of liability for high-impact cyber attacks. To whom are private CII operators accountable in the event of disruptive cyber attacks on their systems? This legal ambiguity must necessarily be resolved in conjunction with the “fiscal schemes and incentives” also alluded to in the Policy in order to motivate strong cyber security practices among all CII operators and the public more broadly.

Next Steps

India’s inaugural National Cyber Security Policy is by and large a step in the right direction. It covers many of the most pressing issues in national cyber security and lays out a number of ambitious goals, ranging from capacity building to robust public-private partnerships. To realize these goals, the government will need a much more detailed roadmap.

Firstly, the extent of the government’s proposed privacy safeguards must be clarified and ideally backed by a separate piece of privacy legislation. As Benjamin Franklin once said, “Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety.” When it comes to cyberspace, the Indian people must demand both liberty and safety.

Secondly, the government should avoid overly preferential acquisition policies and allow risk assessments to be technologically rather than politically driven. Procurement should moreover be technology-neutral. Open source software and source code escrow agreements can facilitate the verification measures that make technology neutrality work.

Finally, to translate this policy into a sound strategy will necessarily require that India’s various means be directed toward specific ends. The Policy hints at organizational mapping with references to CERT-In and the NCIIPC, but the roles and responsibilities of other government agencies as well as the private sector remain underdetermined. Greater clarity on these points would improve inter-agency and public-private cooperation—and thus, one hopes, security—significantly.

[*]. Melissa E. Hathaway and Alexander Klimburg, “Preliminary Considerations: On National Cyber Security” in National Cyber Security Framework Manual, ed. Alexander Klimburg (Tallinn, Estonia: NATO Cooperative Cyber Defence Centre of Excellence, 2012), 13.

Guidelines for the Protection of National Critical Information Infrastructure: How Much Regulation?

by Jonathan Diamond last modified Aug 01, 2013 04:48 AM
July has been a busy month for cyber security in India. Beginning with the release of the country’s first National Cyber Security Policy on July 2 and followed just this past week by a set of guidelines for the protection of national critical information infrastructure (CII) developed under the direction of the National Technical Research Organization (NTRO), India has made respectable progress in its thinking on national cyber security.

Yet the National Cyber Security Policy, taken together with what little is known of the as-yet restricted guidelines for CII protection, raises troubling questions, particularly regarding the regulation of cyber security practices in the private sector. Whereas the current Policy suggests the imposition of certain preferential acquisition policies, India would be best advised to maintain technology neutrality to ensure maximum security.

According to Section 70(1) of the Information Technology Act, Critical Information Infrastructure (CII) is defined as a “computer resource, the incapacitation or destruction of which, shall have debilitating impact on national security, economy, public health or safety.” In one of the 2008 amendments to the IT Act, the Central Government granted itself the authority to “prescribe the information security practices and procedures for such protected system[s].” These two provisions form the legal basis for the regulation of cyber security within the private sector.

Such basis notwithstanding, private cyber security remains almost completely unregulated. According to the Intermediary Guidelines [pdf], intermediaries are required to report cyber security incidents to India’s national-level computer emergency response team (CERT-In). Other than this relatively small stipulation, the only regulation in place for CII exists at the sector level. Last year the Reserve Bank of India mandated that each bank in India appoint a chief information officer (CIO) and a steering committee on information security. The finance sector is also the only sector of the four designated “critical” by the Department of Electronics and Information Technology (DEIT) Cyber Security Strategy to have established a sector-level CERT, which released a set of non-compulsory guidelines [pdf] for information security governance in late 201

The new guidelines for CII protection seek to reorganize the government’s approach to CII. According to a Times of India article on the new guidelines, the NTRO will outline a total of eight sectors (including energy, aviation, telecom and National Stock Exchange) of CII and then “monitor if they are following the guidelines.” Such language, though vague and certainly unsubstantiated, suggests the NTRO may ultimately be responsible for enforcing the “[mandated] security practices related to the design, acquisition, development, use and operation of information resources” described in the Cyber Security Policy. If so, operators of systems deemed critical by the NTRO or by other authorized government agencies may soon be subject to cyber security regulation—with teeth.

To be sure, some degree of cyber security regulation is necessary. After all, large swaths of the country’s CII are operated by private industry, and poor security practices on the part of one operator can easily undermine the security of the rest. To quote security expert Bruce Schneier, “the externalities in cybersecurity are so great that even the freest free market would fail.” In less academic terms, networks are only as secure as their weakest links. While it is true that many larger enterprises take cyber security quite seriously, small and medium-sized businesses either lack immediate incentives to invest in security (e.g. no shareholders to answer to) or more often lack the basic resources to do so. Some form of government transfer for cyber security related investments could thus go a long way toward shoring up the country’s overall security.

Yet regulation may well extend beyond the simple “fiscal schemes and incentives” outlined in section IV of the Policy and “provide for procurement of indigenously manufactured ICT products that have security implications.” Such, at least, was the aim of the Preferential Market Access (PMA) Policy recently put on hold by the Prime Minister’s Office (PMO). Under pressure from international industry groups, the government has promised to review the PMA Policy, with the PMO indicating it may strike out clauses “regarding preference to domestic manufacturer[s] on security related products that are to be used by private sector.” If the government’s aim is indeed to ensure maximum security (rather than to grow an infant industry), it would be well advised to extend this approach to the Cyber Security Policy and the new guidelines for CII protection.

Although there is a national security argument to be made in favor of such policies—namely that imported ICT products may contain “backdoors” or other nefarious flaws—there are equally valid arguments to be made against preferential acquisition policies, at least for the private sector. First and foremost, it is unlikely that India’s nascent cyber security institutions will be able to regulate procurement in such a rapidly evolving market. Indeed, U.S. authorities have been at pains to set cyber security standards, especially in the past several years. Secondly, by mandating the procurement of indigenously manufactured products, the government may force private industry to forgo higher quality products. Absent access to source code or the ability to effectively reverse engineer imported products, buyers should make decisions based on the products’ performance records, not geo-economic considerations like country of origin. Finally, limiting procurement to a specific subset of ICT products also narrows the set of security vulnerabilities attackers must contend with. Rather than improving security, a smaller, more distinct set of vulnerabilities may simply make networks easier targets for the sorts of “debilitating” attacks the Policy aims to avert.

As India broaches the difficult task of regulating cyber security in the private sector, it must emphasize flexibility above all. On one hand, the government should avoid preferential acquisition policies which risk a) overwhelming limited regulatory resources, b) saddling CII operators with subpar products, and/or c) differentiating the country’s attack surface. On the other hand, the government should encourage certain performance standards through precisely the sort of “fiscal schemes and incentives” alluded to in the Cyber Security Policy. Regulation should focus on what technology does and does not do, not who made it or what rival government might have had their hands in its design. Ultimately, India should adopt a policy of technology neutrality, backed by the simple principle of trust but verify. Only then can it be truly secure.

CIS Cybersecurity Series (Part 9) - Saikat Datta

by Purba Sarkar last modified Aug 05, 2013 05:24 AM
CIS interviews Saikat Datta, Resident Editor of DNA, Delhi, as part of the Cybersecurity Series.

"Anonymous speech, in countries which have extremely severe systems of governments, which do not have freedom, etcetera, is welcome. But in a democracy like India, I do not see the need for anonymous speech because it is anyways guaranteed by the Constitution of India. So, no, I do not see the need for anonymity in an open and democratic state like India and I would be seriously worried if such a requirement comes up. Shouldn't I strive to be ideal? The ideal suggests that the constitution has guaranteed freedom of speech. Anonymity, for a time being may be acceptable to some people but I would like a situation where a person, without having to seek anonymity, can speak about anything and not be prosecuted by the state, or persecuted by society. And that is the ideal situation that I would like to strive for." - Saikat Datta, Resident Editor, DNA, Delhi.

Centre for Internet and Society presents its ninth installment of the CIS Cybersecurity Series. 

The CIS Cybersecurity Series seeks to address hotly debated aspects of cybersecurity and hopes to encourage wider public discourse around the topic.

Saikat Datta is a journalist who began his career in December 1996 and has worked with several publications like The Indian Express, the Outlook magazine and the DNA newspaper. He is currently the Resident Editor of DNA, Delhi. Saikat has authored a book on India's Special Forces and presented papers at seminars organized by the Centre for Land Warfare Studies, the Centre for Air Power Studies and the National Security Guards. He has also been awarded the International Press Institute Award for investigative journalism, the National RTI award in the journalism category and the Jagan Phadnis Memorial Award for investigative journalism.

 

 
This work was carried out as part of the Cyber Stewards Network with aid of a grant from the International Development Research Centre, Ottawa, Canada.

'Ethical Hacker' Saket Modi Calls for Stronger Cyber Security Discussions

by Kovey Coles last modified Aug 05, 2013 01:11 PM
Twenty-two year old Saket Modi is the CEO and co-founder of Lucideus, a leading cyber security company in India which claims to have worked with 4 out of 5 top global e-commerce companies, 4 out of 10 top IT companies in the world, and 3 out of 5 top banks of the Asia Pacific.

This research was undertaken as part of the 'SAFEGUARDS' project that CIS is undertaking with Privacy International and IDRC


At the Confederation of Indian Industry (CII) conference on July 13, titled “ACT – Achieving Cyber-Security Together,” Modi, as the youngest speaker on the agenda, delivered an impromptu talk which lambasted the weaknesses of modern cyber security discussions, enlightened the audience on the modern capabilities and challenges of leading cyber security groups, and ultimately received a standing ovation from the crowd. As a later speaker commented, Modi’s controversial opinions and practitioner insight had "set the auditorium ablaze for the remainder of the evening". Since then, the Centre for Internet and Society (CIS) has had the pleasure of interviewing Saket Modi over Skype.

It is quite easy to find accounts of Saket Modi's introduction to hacking just by typing his name into a search engine. Faced with the pressure of failing, a teenage Saket discovered how to hack into his high school Chemistry teacher’s test and answer database. After successfully obtaining the answers, and revealing his wrongdoing to his teacher, the young man grew intrigued by the possibilities of hacking. "I thought, if I could do this in a couple hours, four hours, then what might I be able to do in four days, four weeks, four months?"

Nowadays, Modi describes himself and his Lucideus team as "ethical hackers", a term recently espoused by hacker groups in the public eye. As opposed to "hacktivists", who utilize hacking methods (including attacks) to achieve or bring awareness to political issues, ethical hackers claim to use their computer skills exclusively to support defenses. At first, the incorporation of ethics into a for-profit organization’s game plan may seem confusing, as it leaves room for key questions, such as how one determines which clients constitute ethical business. When asked, however, Modi clarifies that the ethics are not manifest in the entities Lucideus supports, but rather inherent in the choice to build defensive networks as opposed to using their skills for attack or debilitation. Nevertheless, considerations remain as to whether supporting the cyber security of some entities can lead to the insecurity of others, for example, strengthening the agencies which work in covert cyber espionage. On this point, Modi seems more ambivalent, saying "it depends on a case by case basis". But he still believes cyber security is a right that should be enjoyed by all, "entitled to [you] the moment you set foot on the internet".

As an experienced professional in the field who often gives input on major cyber policy decisions, Modi emphasizes the necessity of youth engagement in cyber security practice and policy. He calls his age bracket the “web generation,” those who have “grown with technology.” According to Modi, no one over 50 or 60 years of age can properly meet the current challenges of the cyber security realm. It is "a sad thing" that those older leaders carry the most power in policy making, and that they often have problems with both understanding and accepting modern technological capabilities. For the public, businesses, and also government, there are misconceptions about the importance of cyber security and the extent of modern cyber threats, threats which Modi and his company claim to combat regularly. "About 90 per cent of the crimes that take place in cyber space are because of lack of knowledge, rather than the expertise of the hacker,” he explains. Modi mentions a few basic misconceptions, as simple as, "if I have an anti-virus, my system is secured" or "if you have HTTPS certificate and SSL connection, your system is secured". “These are like wearing an elbow guard while playing cricket,” Modi says. “If the ball comes at the elbow then you are protected, but what about the rest of the body?”

This highlights another problem evident in India’s current cyber security scene, the problem of lacking “quality institutes to produce good cyber security experts.” For example, Modi takes offence at there not being “a single institute which is providing cyber security at the undergraduate level [in India].” He alludes to the recently unveiled National Cyber Security Policy, specifically the call for five lakh cyber security experts in upcoming years. He calls this “a big figure,” but agrees that there needs to be a lot more awareness throughout the nation. “You really have to change a lot of things,” he says, “in order to get the right things in the right place here in India.”

When considering citizen privacy in relation to cyber security, and the relationship between the two (be it direct or inverse), Saket Modi says the important factor is the governing body, because the issue ultimately resolves to trust. Citizens must trust the “right people with the right qualifications” to store and protect their sensitive data, and to respect privacy. Modi is no novice to the importance of personal data protection: his company handles a great deal of extremely sensitive information relating to both its clients and its clients’ own customers, so it operates with due care lest it create a “wikileaks part two.”

On internationalization and cyber security, he views the connection between the two as natural, intrinsic. “Cyberspace has added a new dimension to humanity,” says Modi, and tells how former constructs of physical constraints and linear bounds no longer apply. International cooperation is especially pertinent, according to Modi, because the greatest challenge for catching today’s criminal hackers is their international anonymity, “the ability to jump from one country to the other in a matter of milliseconds.”

With the extent of the challenges facing cyber defense specialists, and with the somewhat disorderly current state of Indian cyber security, it is curious to see that Saket Modi has devoted himself to the "ethical" side of hacking. Why hasn’t he or the rest of the Lucideus team resorted to offensive hacking, especially since Modi claims that the majority of the world’s cyber attacks are committed by people between the ages of 15 and 24? Apparently, the answer is simple. “We believe in the need for ethical hacking,” he defends. “We believe in the purpose of making the internet safer.”

Ethical Issues in Open Data

by Kovey Coles last modified Aug 07, 2013 09:19 AM
On August 1, 2013, I took part in a web meeting, organized and hosted by Tim Davies of the World Wide Web Foundation. The meeting, titled “Ethical issues in Open Data,” had an agenda focused on privacy considerations in the context of the open data movement.

The main panelists, Carly Nyst and Sam Smith from Privacy International, as well as Steve Song from the International Development Research Centre, were joined by roughly a dozen other privacy and development researchers from around the globe in the hour-long session.

The primary issue of the meeting was the concern over modern capabilities of cross-analytics for de-anonymizing data sets and revealing personally identifiable information (PII) in open data. Open data can constitute publicly available information such as budgets, infrastructures, and population statistics, as long as the data meets the three open data characteristics: accessibility, machine readability, and availability for re-use. “Historically,” said Tim Davies, “public registers have been protected through obscurity.” However, both the capabilities of data analysts and the definition of personal data have continued to expand in recent years. This concern thus presents a conflict between researchers who advocate that governments release open data and researchers who emphasize privacy in the developing world.
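
To make the re-identification concern concrete, the toy sketch below, which uses entirely fabricated data and was not presented at the meeting, shows the classic linkage attack: joining a “de-identified” dataset against a public register on shared quasi-identifiers such as postcode, birth year and gender.

    # Toy illustration of a linkage attack on "anonymized" open data.
    # All names, columns and values are hypothetical.
    import pandas as pd

    # A published open dataset with direct identifiers removed.
    health_records = pd.DataFrame({
        "postcode":   ["560001", "560034", "110002"],
        "birth_year": [1978, 1985, 1990],
        "gender":     ["F", "M", "F"],
        "diagnosis":  ["diabetes", "hypertension", "asthma"],
    })

    # A separate public register (e.g. an electoral roll) that does carry names.
    public_register = pd.DataFrame({
        "name":       ["A. Rao", "B. Singh", "C. Das"],
        "postcode":   ["560001", "560034", "110002"],
        "birth_year": [1978, 1985, 1990],
        "gender":     ["F", "M", "F"],
    })

    # Joining on the shared quasi-identifiers re-attaches names to diagnoses.
    reidentified = health_records.merge(
        public_register, on=["postcode", "birth_year", "gender"], how="inner"
    )
    print(reidentified[["name", "diagnosis"]])

Wherever that combination of quasi-identifiers is unique, the join re-attaches a name to a supposedly anonymous record, which is why the expanding definition of personal data now reaches well beyond names and ID numbers.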

Steve Song, advisor to the IDRC’s Information & Networks program, spoke of the potential collateral damage that comes with publishing more and more types of information. Song underscored the imperative of the meeting, saying, “privacy needs to be a core part of open data conversation.” In his presentation, he gave a particularly interesting example of the tension between the public value of information and its privacy implications. Following the infamous 2012 school shooting in Newtown, Connecticut, the information on Newtown’s gun-permit-holding citizens (made publicly available through America’s Freedom of Information Act) was aggregated into an interactive map which revealed the citizens’ addresses. This obviously became problematic for the Newtown community, as the map not only singled out homes which exercised their right to bear arms but also indirectly revealed which homes were without firearm protection and thereby more vulnerable to theft and crime. The Newtown example clearly demonstrates the relationship (and conflict) between open data and privacy; it comes down to the conflict between the right to information and the right to privacy.

An apparent issue surrounding open data is its perceived binary nature. Many advocates view data as either open or not; any intermediary boundaries are seen merely as governments limiting data accessibility. A point raised by meeting attendee Raed Sharif therefore aptly presented an open data counter-argument. Sharif noted how, inversely, privacy conceptions may themselves pose a threat to open data. He mentioned how governments could take advantage of privacy arguments to justify their refusal to publish open reports.

However, Carly Nyst summarized the privacy concern and argument in her remarks near the end of the meeting. Namely, she reasoned that the open data mission is viable if limited to generic data, i.e., data about infrastructure or other information that is in no way personal. Doing so would avoid intrusions on individual privacy. Until anonymization techniques advance enough to withstand modern re-identification methods, publicly publishing PII may prove too risky. It was generally agreed upon during the meeting that open data is not inherently bad, and in fact its analysis and availability can be beneficial, but the threat of its misuse makes it dangerous. For the future of open data, researchers and advocates should perhaps consider more nuanced approaches to the concept in order to respect considerations for other ethical issues, such as privacy.

FinFisher in India and the Myth of Harmless Metadata

by Maria Xynou last modified Aug 13, 2013 11:30 AM
In this article, Maria Xynou argues that metadata is anything but harmless, especially since FinFisher — one of the world's most controversial types of spyware — uses metadata to target individuals.

by John-Norris on Flickr

In light of PRISM, the Central Monitoring System (CMS) and other such surveillance projects in India and around the world, the question of whether the collection of metadata is “harmless” has arisen.[1] In order to examine this question, FinFisher[2] — surveillance spyware — has been chosen as a case study to briefly examine to what extent the collection and surveillance of metadata can potentially violate the right to privacy and other human rights. FinFisher has been selected as a case study not only because its servers have been recently found in India[3] but also because its “remote monitoring solutions” appear to be very pervasive even on the mere grounds of metadata.

FinFisher in India

FinFisher is spyware which has the ability to take control of target computers and capture even encrypted data and communications. The software is designed to evade detection by anti-virus software and has versions which work on mobile phones of all major brands.[4] In many cases, the surveillance suite is installed after the target accepts installation of a fake update to commonly used software.[5] Citizen Lab researchers have found three samples of FinSpy that masquerade as Firefox.[6]

FinFisher is a line of remote intrusion and surveillance software developed by Munich-based Gamma International. FinFisher products are sold exclusively to law enforcement and intelligence agencies by the UK-based Gamma Group.[7] A few months ago, it was reported that command and control servers for FinSpy backdoors, part of Gamma International's FinFisher “remote monitoring solutions”, were found in a total of 25 countries, including India.[8]

The following map, published by the Citizen Lab, shows the 25 countries in which FinFisher servers have been found.[9]

[Map: countries in which FinFisher command and control servers have been detected]

The above map shows the results of scanning for characteristics of FinFisher command and control servers.

FinFisher spyware was not found in the countries coloured blue, while green is used for countries that did not respond to the scans. The countries in which FinFisher servers were found are coloured on a scale of 1 to 6, ranging from light orange to dark red, with 1 representing the least active servers and 6 representing the most active servers. On this scale, India is marked a 3 in terms of actively using FinFisher.[10]

Research published by the Citizen Lab reveals that FinSpy servers were recently found in India, which indicates that Indian law enforcement agencies may have bought this spyware from Gamma Group and might be using it to target individuals in India.[11] According to the Citizen Lab, FinSpy servers in India have been detected through the HostGator operator and the first digits of the IP address are: 119.18.xxx.xxx. Releasing complete IP addresses in the past has not proven useful, as the servers are quickly shut down and relocated, which is why only the first two octets of the IP address are revealed.[12]

The Citizen Lab's research reveals that FinFisher “remote monitoring solutions” were found in India, which, according to Gamma Group's brochures, include the following:

  • FinSpy: hardware or software which monitors targets that regularly change location, use encrypted and anonymous communications channels and reside in foreign countries. FinSpy can remotely monitor computers and encrypted communications, regardless of where in the world the target is based. FinSpy is capable of bypassing 40 regularly tested antivirus systems, of monitoring the calls, chats, file transfers, videos and contact lists on Skype, of conducting live surveillance through a webcam and microphone, of silently extracting files from a hard disk, and of conducting live remote forensics on target systems. FinSpy is hidden from the public through anonymous proxies.[13]
  • FinSpy Mobile: hardware or software which remotely monitors mobile phones. FinSpy Mobile enables the interception of mobile communications in areas without a network, and offers access to encrypted communications, as well as to data stored on the devices that is not transmitted. Some key features of FinSpy Mobile include the recording of common communications like voice calls, SMS/MMS and emails, the live surveillance through silent calls, the download of files, the country tracing of targets and the full recording of all BlackBerry Messenger communications. FinSpy Mobile is hidden from the public through anonymous proxies.[14]
  • FinFly USB: hardware which is inserted into a computer and which can automatically install the configured software with little or no user-interaction and does not require IT-trained agents when being used in operations. The FinFly USB can be used against multiple systems before being returned to the headquarters and its functionality can be concealed by placing regular files like music, video and office documents on the device. As the hardware is a common, non-suspicious USB device, it can also be used to infect a target system even if it is switched off.[15]
  • FinFly LAN: software which can deploy a remote monitoring solution on a target system in a local area network (LAN). Some of the major challenges law enforcement faces are mobile targets, as well as targets who do not open any infected files that have been sent via email to their accounts. FinFly LAN is not only able to deploy a remote monitoring solution on a target's system in a local area network, but it can also infect files that are downloaded by the target, send fake software updates for popular software, or inject the payload into websites visited by the target. Some key features of FinFly LAN include: discovering all computer systems connected to LANs, working in both wired and wireless networks, and remotely installing monitoring solutions through websites visited by the target. FinFly LAN has been used in public hotspots, such as coffee shops, and in the hotels of targets.[16]
  • FinFly Web: software which can deploy remote monitoring solutions on a target system through websites. FinFly Web is designed to provide remote and covert infection of a target system by using a wide range of web-based attacks. FinFly Web provides a point-and-click interface, enabling the agent to easily create a custom infection code according to selected modules. It provides fully-customizable web modules, it can be covertly installed into every website and it can install the remote monitoring system even if only the email address is known.[17]
  • FinFly ISP: hardware or software which deploys a remote monitoring solution on a target system through an ISP network. FinFly ISP can be installed inside the Internet Service Provider Network, it can handle all common protocols and it can select targets based on their IP address or Radius Logon Name. Furthermore, it can hide remote monitoring solutions in downloads by targets, it can inject remote monitoring solutions as software updates and it can remotely install monitoring solutions through websites visited by the target.[18]

Although FinFisher is supposed to be used for “lawful interception”, it has gained notoriety for targeting human rights activists.[19] According to Morgan Marquis-Boire, a security researcher and technical advisor at the Munk School and a security engineer at Google, FinSpy has been used in Ethiopia to target an opposition group called Ginbot.[20] Researchers have argued that FinFisher has been sold to Bahrain's government to target activists, and such allegations were based on an examination of malicious software which was emailed to Bahraini activists.[21] Privacy International has argued that FinFisher has been deployed in Turkmenistan, possibly to target activists and political dissidents.[22]

Many questions revolving around the use of FinFisher and its “remote monitoring solutions” remain vague, as there is currently inadequate proof of whether this spyware is being used to target individuals by law enforcement agencies in the countries where command and control servers have been found, such as India.[23] However, FinFisher's brochures, which were circulated at the ISS World trade shows and leaked by WikiLeaks, do reveal some confirmed facts: Gamma International claims that its FinFisher products are capable of taking control of target computers, of capturing encrypted data and of evading mainstream anti-virus software.[24] Such products are exhibited at the world's largest surveillance trade show and probably sold to law enforcement agencies around the world.[25] This alone unveils a concerning fact: spyware so sophisticated that it defeats even encryption and anti-virus software is currently on the market, and law enforcement agencies can potentially use it to target activists and anyone who does not comply with social conventions.[26] A few months ago, two Indian women were arrested after having questioned the shutdown of Mumbai for Shiv Sena patriarch Bal Thackeray's funeral.[27] Thus, it remains unclear what type of behaviour is targeted by law enforcement agencies and whether spyware, such as FinFisher, would be used in India to track individuals without a legally specified purpose.

Furthermore, India lacks privacy legislation which could safeguard individuals from potential abuse, while sections 66A and 69 of the Information Technology (Amendment) Act, 2008, empower Indian authorities with extensive surveillance capabilities.[28] While it remains unclear if Indian law enforcement agencies are using FinFisher spy products to unlawfully target individuals, it is a fact that FinFisher command and control servers have been found in India and that, if used, they could potentially have severe consequences for individuals' right to privacy and other human rights.[29]

The Myth of Harmless Metadata

Over the last few months, it has been reported that the Central Monitoring System (CMS) is being implemented in India, through which all telecommunications and Internet communications in the country are being centrally intercepted by Indian authorities. This mass surveillance of communications in India is enabled by the absence of privacy legislation, and Indian authorities are currently capturing the metadata of communications.[30]

Last month, Edward Snowden leaked confidential U.S. documents on PRISM, the top-secret National Security Agency (NSA) surveillance programme that collects metadata through telecommunications and Internet communications. It has been reported that through PRISM, the NSA has tapped into the servers of nine leading Internet companies: Microsoft, Google, Yahoo, Skype, Facebook, YouTube, PalTalk, AOL and Apple.[31] While the extent to which the NSA is actually tapping into these servers remains unclear, it is certain that the NSA has collected metadata on a global level.[32] Yet, the question of whether the collection of metadata is “harmful” remains ambiguous.

According to the National Information Standards Organization (NISO), the term “metadata” is defined as “structured information that describes, explains, locates or otherwise makes it easier to retrieve, use or manage an information resource”. NISO claims that metadata is “data about data” or “information about information”.[33] Furthermore, metadata is considered valuable due to its following functions:

  • Resource discovery
  • Organizing electronic resources
  • Interoperability
  • Digital Identification
  • Archiving and preservation

Metadata can be used to find resources by relevant criteria, to identify resources, to bring similar resources together, to distinguish dissimilar resources and to give location information. Electronic resources can be organized through the use of various software tools which can automatically extract and reformat information for Web applications. Interoperability is promoted through metadata, as describing a resource with metadata allows it to be understood by both humans and machines, which means that data can automatically be processed more effectively. Digital identification is enabled through metadata, as most metadata schemes include standard numbers for unique identification. Moreover, metadata enables the archival and preservation of large volumes of digital data.[34]
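
As a purely hypothetical illustration (the field names below are invented, not drawn from any NISO scheme), a metadata record is simply structured, machine-readable information about a resource, which is exactly what makes the functions listed above, from resource discovery to digital identification, so easy to automate:

    # A hypothetical metadata record for a single email message: no content, only
    # structured "data about data". Each field is machine-readable and trivially indexable.
    email_metadata = {
        "message_id": "<20130712.0815.example@mail.example.org>",  # unique identifier
        "from":       "alice@example.org",
        "to":         ["bob@example.net"],
        "timestamp":  "2013-07-12T08:15:32+05:30",
        "size_bytes": 48213,
        "client_ip":  "203.0.113.45",
    }

    # Resource discovery / organisation: filter a collection of such records by sender.
    def messages_from(records, sender):
        """Return all metadata records whose 'from' field matches the given sender."""
        return [r for r in records if r["from"] == sender]

    print(messages_from([email_metadata], "alice@example.org"))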

Surveillance projects, such as PRISM and India's CMS, collect large volumes of metadata, which include the numbers of both parties on a call, location data, call duration, unique identifiers, the International Mobile Subscriber Identity (IMSI) number, email addresses, IP addresses and browsed webpages.[35] However, the fact that such surveillance projects may not have access to content data might create a false sense of security.[36] When Microsoft released its report on data requests by law enforcement agencies around the world in March 2013, it revealed that most of the disclosed data was metadata, while relatively little content data was allegedly disclosed.[37]

Similarly, Google's transparency report reveals that the company disclosed large volumes of metadata to law enforcement agencies, while restricting its disclosure of content data.[38]

Such reports may potentially provide a sense of security to the public, as they reassure that the content of personal emails, for example, has not been shared with the government, but merely email addresses – which might be publicly available online anyway. However, is content data actually more “harmful” than metadata? Is metadata “harmless”? How much data does metadata actually reveal?

The Guardian recently published an article which includes an example of how individuals can be tracked through their metadata. In particular, the example explains how an individual is tracked – despite using an anonymous email account – by logging in from various hotels' public Wi-Fi and by leaving trails of metadata that include times and locations. This example illustrates how an individual can be tracked through metadata alone, even when anonymous accounts are being used.[39]

Wired published an article which states that metadata can potentially be more harmful than content data because “unlike our words, metadata doesn't lie”. In particular, content data shows what an individual says – which may be true or false – whereas metadata records what an individual does. While the validity of the content within an email may be debatable, it is undeniable that an individual logged into specific websites – if that is what the individual's IP address shows. Metadata, such as an individual's browsing habits, may provide a more thorough and accurate profile of an individual than that individual's email content, which is why metadata can potentially be more harmful than content data.[40]

Furthermore, voice content is hard to process and written content in an email or chat communication may not always be valid. Metadata, on the other hand, provides concrete patterns of an individual's behaviour, interests and interactions. For example, metadata can potentially map out an individual's political affiliation, interests, economic background, institution, location, habits and the people that individual interacts with. Such data can potentially be more valuable than content data, because while the validity of email content is debatable, metadata usually provides undeniable facts. Not only is metadata more accurate than content data, but it is also ideally suited to automated analysis by a computer. As most metadata consists of numeric and structured fields, it can easily be analysed by data mining software, whereas content data is far more complicated to process.[41]
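
A rough, hypothetical sketch below illustrates the point; the records are fabricated and the analysis implies nothing about how the CMS or PRISM actually process data. With a few lines of off-the-shelf code, raw call metadata yields a contact list and a crude movement profile.

    # Hypothetical call-detail records (CDRs): numbers, timestamps, durations, cell locations.
    # The point is only that a few lines of code turn raw metadata into a behavioural profile.
    import pandas as pd

    cdrs = pd.DataFrame({
        "caller":    ["98xxx11111"] * 5,
        "callee":    ["98xxx22222", "98xxx22222", "98xxx33333", "98xxx22222", "98xxx44444"],
        "timestamp": pd.to_datetime([
            "2013-07-01 08:05", "2013-07-02 08:10", "2013-07-02 21:40",
            "2013-07-03 08:02", "2013-07-05 23:15",
        ]),
        "duration_s": [120, 95, 600, 110, 45],
        "cell_tower": ["DEL-014", "DEL-014", "DEL-201", "DEL-014", "DEL-330"],
    })

    # Most frequent contacts: who the subscriber talks to, how often, and for how long.
    contacts = (cdrs.groupby("callee")["duration_s"]
                    .agg(calls="count", total_seconds="sum")
                    .sort_values("calls", ascending=False))

    # Habitual locations by hour of day: a crude movement profile.
    cdrs["hour"] = cdrs["timestamp"].dt.hour
    locations = cdrs.groupby(["hour", "cell_tower"]).size().rename("observations")

    print(contacts)
    print(locations)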

FinFisher products, such as FinFly LAN, FinFly Web and FinFly ISP, provide solid proof that the collection of metadata can potentially be “harmful”. In particular, FinFly LAN can be deployed in a target system in a local area network (LAN) by infecting files that are downloaded by the target, by sending fake software updates for popular software or by injecting the payload into websites visited by the target. The fact that FinFly LAN can remotely install monitoring solutions through websites visited by the target indicates that metadata alone can be used to acquire other sensitive data.[42]

FinFly Web can deploy remote monitoring solutions on a target system through websites. Additionally, FinFly Web can be covertly installed into every website and it can install the remote monitoring system even if only the email address is known.[43] FinFly ISP can select targets based on their IP address or Radius Logon Name. Furthermore, FinFly ISP can remotely install monitoring solutions through websites visited by the target, as well as inject remote monitoring solutions as software updates.[44] In other words, FinFisher products, such as FinFly LAN, FinFly Web and FinFly ISP, can target individuals, take control of their computers and their data, and capture even encrypted data and communications with the help of metadata alone.

The example of FinFisher products illustrates that metadata can potentially be as “harmful” as content data, if acquired unlawfully and without individual consent.[45] Thus, surveillance schemes, such as PRISM and India's CMS, which capture metadata without individuals' consent can potentially pose a major threat to the right to privacy and other human rights.[46] Privacy can be defined as the claim of individuals, groups or institutions to determine when, how and to what extent information about them is communicated to others.[47] Furthermore, privacy is at the core of human rights because it protects individuals from abuse by those in power.[48] The unlawful collection of metadata exposes individuals to the potential violation of their human rights, as it is not transparent who has access to their data, whether it is being shared with third parties or for how long it is being retained.

It is not clear if Indian law enforcement agencies are actually using FinFisher products, but the Citizen Lab did find FinFisher command and control servers in the country which indicates that there is a high probability that such spyware is being used.[49] This probability is highly concerning not only because the specific spy products have such advanced capabilities that they are even capable of capturing encrypted data, but also because India currently lacks privacy legislation which could safeguard individuals.

Thus, it is recommended that Indian law enforcement agencies be transparent and accountable if they are using spyware which can potentially breach citizens' human rights, and that privacy legislation be enacted into law. Lastly, it is recommended that all surveillance technologies be strictly regulated with regard to the protection of human rights and that Indian authorities adopt the principles on communication surveillance formulated by the Electronic Frontier Foundation and Privacy International.[50] The above could provide a decisive first step in ensuring that India is the democracy it claims to be.


[1]. Robert Anderson (2013), “Wondering What Harmless 'Metadata' Can Actually Reveal? Using Own Data, German Politician Shows Us”, The CSIA Foundation, http://bit.ly/1cIhu7G

[2]. Gamma Group, FinFisher IT Intrusion, http://bit.ly/fnkGF3

[3]. Morgan Marquis-Boire, Bill Marczak, Claudio Guarnieri & John Scott-Railton, “You Only Click Twice: FinFisher's Global Proliferation”, The Citizen Lab, 13 March 2013, http://bit.ly/YmeB7I

[4]. Michael Lewis, “FinFisher Surveillance Spyware Spreads to Smartphones”, The Star: Business, 30 August 2012, http://bit.ly/14sF2IQ

[5]. Marcel Rosenbach, “Troublesome Trojans: Firm Sought to Install Spyware Via Faked iTunes Updates”, Der Spiegel, 22 November 2011, http://bit.ly/14sETVV

[6]. Intercept Review, Mozilla to Gamma: stop disguising your FinSpy as Firefox, 02 May 2013, http://bit.ly/131aakT

[7]. Intercept Review, LI Companies Review (3) – Gamma, 05 April 2012, http://bit.ly/Hof9CL

[8]. Morgan Marquis-Boire, Bill Marczak, Claudio Guarnieri & John Scott-Railton, For Their Eyes Only: The Commercialization of Digital Spying, Citizen Lab and Canada Centre for Global Security Studies, Munk School of Global Affairs, University of Toronto, 01 May 2013, http://bit.ly/ZVVnrb

[9]. Morgan Marquis-Boire, Bill Marczak, Claudio Guarnieri & John Scott-Railton, “You Only Click Twice: FinFisher's Global Proliferation”, The Citizen Lab, 13 March 2013, http://bit.ly/YmeB7I

[10]. Ibid.

[11]. Morgan Marquis-Boire, Bill Marczak, Claudio Guarnieri & John Scott-Railton, For Their Eyes Only: The Commercialization of Digital Spying, Citizen Lab and Canada Centre for Global Security Studies, Munk School of Global Affairs, University of Toronto, 01 May 2013, http://bit.ly/ZVVnrb

[12]. Morgan Marquis-Boire, Bill Marczak, Claudio Guarnieri & John Scott-Railton, “You Only Click Twice: FinFisher's Global Proliferation”, The Citizen Lab, 13 March 2013, http://bit.ly/YmeB7I

[13]. Gamma Group, FinFisher IT Intrusion, FinSpy: Remote Monitoring & Infection Solutions, WikiLeaks: The Spy Files, http://bit.ly/zaknq5

[14]. Gamma Group, FinFisher IT Intrusion, FinSpy Mobile: Remote Monitoring & Infection Solutions, WikiLeaks: The Spy Files, http://bit.ly/19pPObx

[15]. Gamma Group, FinFisher IT Intrusion, FinFly USB: Remote Monitoring & Infection Solutions, WikiLeaks: The Spy Files, http://bit.ly/1cJSu4h

[16]. Gamma Group, FinFisher IT Intrusion, FinFly LAN: Remote Monitoring & Infection Solutions, WikiLeaks: The Spy Files, http://bit.ly/14J70Hi

[17]. Gamma Group, FinFisher IT Intrusion, FinFly Web: Remote Monitoring & Intrusion Solutions, WikiLeaks: The Spy Files, http://bit.ly/19fn9m0

[18]. Gamma Group, FinFisher IT Intrusion, FinFly ISP: Remote Monitoring & Intrusion Solutions, WikiLeaks: The Spy Files, http://bit.ly/13gMblF

[19]. Gerry Smith, “FinSpy Software Used To Surveil Activists Around The World, Reports Says”, The Huffington Post, 13 March 2013, http://huff.to/YmmhXI

[20]. Jeremy Kirk, “FinFisher Spyware seen Targeting Victims in Vietnam, Ethiopia”, Computerworld: IDG News, 14 March 2013, http://bit.ly/14J8BwW

[21]. Reporters without Borders: For Freedom of Information (2012), The Enemies of the Internet: Special Edition: Surveillance, http://bit.ly/10FoTnq

[22]. Privacy International, FinFisher Report, http://bit.ly/QlxYL0

[23]. Morgan Marquis-Boire, Bill Marczak, Claudio Guarnieri & John Scott-Railton, “You Only Click Twice: FinFisher's Global Proliferation”, The Citizen Lab, 13 March 2013, http://bit.ly/YmeB7I

[24]. Gamma Group, FinFisher IT Intrusion, FinSpy: Remote Monitoring & Infection Solutions, WikiLeaks: The Spy Files, http://bit.ly/zaknq5

[25]. Adi Robertson, “Paranoia Thrives at the ISS World Cybersurveillance Trade Show”, The Verge, 28 December 2011, http://bit.ly/tZvFhw

[26]. Gerry Smith, “FinSpy Software Used To Surveil Activists Around The World, Reports Says”, The Huffington Post, 13 March 2013, http://huff.to/YmmhXI

[27]. BBC News, “India arrests over Facebook post criticising Mumbai shutdown”, 19 November 2012, http://bbc.in/WoSXkA

[28]. Indian Ministry of Law, Justice and Company Affairs, The Information Technology (Amendment) Act, 2008, http://bit.ly/19pOO7t

[29]. Morgan Marquis-Boire, Bill Marczak, Claudio Guarnieri & John Scott-Railton, For Their Eyes Only: The Commercialization of Digital Spying, Citizen Lab and Canada Centre for Global Security Studies, Munk School of Global Affairs, University of Toronto, 01 May 2013, http://bit.ly/ZVVnrb

[30]. Phil Muncaster, “India introduces Central Monitoring System”, The Register, 08 May 2013, http://bit.ly/ZOvxpP

[31]. Glenn Greenwald & Ewen MacAskill, “NSA PRISM program taps in to user data of Apple, Google and others”, The Guardian, 07 June 2013, http://bit.ly/1baaUGj

[32]. BBC News, “Google, Facebook and Microsoft seek data request transparency”, 12 June 2013, http://bbc.in/14UZCCm

[33]. National Information Standards Organization (2004), Understanding Metadata, NISO Press, http://bit.ly/LCSbZ

[34]. Ibid.

[35]. The Hindu, “In the dark about 'India's PRISM'”, 16 June 2013, http://bit.ly/1bJCXg3 ; Glenn Greenwald, “NSA collecting phone records of millions of Verizon customers daily”, The Guardian, 06 June 2013, http://bit.ly/16L89yo

[36]. Robert Anderson, “Wondering What Harmless 'Metadata' Can Actually Reveal? Using Own Data, German Politician Shows Us”, The CSIA Foundation, 01 July 2013, http://bit.ly/1cIhu7G

[37]. Microsoft: Corporate Citizenship, 2012 Law Enforcement Requests Report, http://bit.ly/Xs2y6D

[38]. Google, Transparency Report, http://bit.ly/14J7hKp

[39]. Guardian US Interactive Team, A Guardian Guide to your Metadata, The Guardian, 12 June 2013, http://bit.ly/ZJLkpy

[40]. Matt Blaze, “Phew, NSA is Just Collecting Metadata. (You Should Still Worry)”, Wired, 19 June 2013, http://bit.ly/1bVyTJF

[41]. Ibid.

[42]. Gamma Group, FinFisher IT Intrusion, FinFly LAN: Remote Monitoring & Infection Solutions, WikiLeaks: The Spy Files, http://bit.ly/14J70Hi

[43]. Gamma Group, FinFisher IT Intrusion, FinFly Web: Remote Monitoring & Intrusion Solutions, WikiLeaks: The Spy Files, http://bit.ly/19fn9m0

[44]. Gamma Group, FinFisher IT Intrusion, FinFly ISP: Remote Monitoring & Intrusion Solutions, WikiLeaks: The Spy Files, http://bit.ly/13gMblF

[45]. Robert Anderson, “Wondering What Harmless 'Metadata' Can Actually Reveal? Using Own Data, German Politician Shows Us”, The CSIA Foundation, 01 July 2013, http://bit.ly/1cIhu7G

[46]. Shalini Singh, “India's surveillance project may be as lethal as PRISM”, The Hindu, 21 June 2013, http://bit.ly/15oa05N

[47]. Cyberspace Law and Policy Centre, Privacy, http://bit.ly/14J5u7W

[48]. Bruce Schneier, “Privacy and Power”, Schneier on Security, 11 March 2008, http://bit.ly/i2I6Ez

[49]. Morgan Marquis-Boire, Bill Marczak, Claudio Guarnieri & John Scott-Railton, For Their Eyes Only: The Commercialization of Digital Spying, Citizen Lab and Canada Centre for Global Security Studies, Munk School of Global Affairs, University of Toronto, 01 May 2013, http://bit.ly/ZVVnrb

[50]. Elonnai Hickok, “Draft International Principles on Communications Surveillance and Human Rights”, The Centre for Internet and Society, 16 January 2013, http://bit.ly/XCsk9b

Freedom from Monitoring: India Inc Should Push For Privacy Laws

by Sunil Abraham last modified Aug 21, 2013 07:04 AM
More surveillance than absolutely necessary actually undermines the security objective.

This article by Sunil Abraham was published in Forbes India Magazine on August 21, 2013.


I think I understand why the average Indian IT entrepreneur or enterprise does not have a position on blanket surveillance. This is because the average Indian IT enterprise’s business model depends on labour arbitrage, not intellectual property. And therefore they have no worries about proprietary code or unfiled patent applications being stolen by competitors via rogue government officials within projects such as NATGRID, UID and, now, the CMS.

A sub-section of industry, especially the technology industry, will always root for blanket surveillance measures. The surveillance industry has many different players, ranging from those selling biometric and CCTV hardware to those providing solutions for big data analytics and legal interception systems. There are also more controversial players who provide spyware, especially those in the market for zero-day exploits. The cheerleaders for the surveillance industry are techno-determinists who believe you can solve any problem by throwing enough of the latest and most expensive technology at it.

What is surprising, though, is that other indigenous or foreign enterprises that depend on secrecy and confidentiality (in sectors such as banking, finance, health, law, ecommerce, media, consulting and communications) also don’t seem to have a public position on the growing surveillance ambitions of ‘democracies’ such as India and the United States of America. (Perhaps the only exceptions are a few multinational internet and software companies that have made some show of resistance and disagreement with the blanket surveillance paradigm.)

Is it because these businesses are patriotic? Do they believe that secrecy, confidentiality and, most importantly, privacy, must be sacrificed for national security? If that were true then it would not be a particularly wise thing to do, as privacy is the precondition for security. Ann Cavoukian, privacy commissioner of Ontario, calls it a false dichotomy. Bruce Schneier, security technologist and writer, calls it a false zero sum game; he goes on to say, “There is no security without privacy. And liberty requires both security and privacy.”

The reason why the secret recipe of Coca Cola is still secret after over 120 years is the same as the reason why a captured soldier cannot spill the beans on the overall war strategy. Corporations, like militaries, have layers and layers of privacy and secrecy. The ‘need to know’ principle resists all centralising tendencies, such as blanket surveillance. It’s important to note that targeted surveillance to identify a traitor or spy within the military, or someone engaged in espionage within a corporation, is pretty much an essential. However, any more surveillance than absolutely necessary actually undermines the security objective. To summarise, privacy is a pre-condition to the security of the individual, the enterprise, the military and the nation state.

Most people complaining online about projects like the Central Monitoring System seem to think that India has no privacy laws. This is completely untrue: We have around 50 different laws, rules and regulations that aim to uphold privacy and confidentiality in various domains. Unfortunately, most of those policies are very dated and do not sufficiently take into account the challenges of contemporary information societies. These policy documents need to be updated and harmonised through the enactment of a new horizontal privacy law. A small minority will say that Section 43A of the Information Technology Act is India’s privacy law. That is not completely untrue, but it is a gross exaggeration. Section 43A is really only a data security provision and, at that, it does not even comprehensively address data protection, which is only a sub-set of the overall privacy regulation required in a nation.

What would an ideal privacy law for India look like? For one, it would protect the rights of all persons, regardless of whether they are citizens or residents. Two, it would define privacy principles. Three, it would establish the office of an independent and autonomous privacy commissioner, who would be sufficiently empowered to investigate and take action against both government and private entities. Four, it would define civil and criminal offences, remedies and penalties. And five, it would have an overriding effect on previous legislation that does not comply with all the privacy principles.

The Justice AP Shah Committee report, released in October 2012, defined the Indian privacy principles as notice, choice and consent, collection limitation, purpose limitation, access and correction, disclosure of information, security, openness and accountability. The report also lists the exemptions and limitations, so that privacy protections do not have a chilling effect on the freedom of expression and transparency enabled by the Right to Information Act.

The Department of Personnel and Training has been working on a privacy bill for the last three years. Two versions of the bill had leaked before the Justice AP Shah Committee was formed. The next version of the bill, hopefully implementing the recommendations of the Justice AP Shah Committee report, is expected in the near future. In a multi-stakeholder-based parallel process, the Centre for Internet and Society (where I work), along with FICCI and DSCI, is holding seven round tables on a civil society draft of the privacy bill and the industry-led efforts on co-regulation.

The Indian ITES, KPO and BPO sector should be particularly pleased with this development. As should any other Indian enterprise that holds personal information of EU and US nationals. This is because the EU, after the enactment of the law, will consider data protection in India adequate as per the requirements of its Data Protection Directive. This would mean that these enterprises would not have to spend twice the time and resources ensuring compliance with two different regulatory regimes.

Is the lack of enthusiasm for privacy in the Indian private sector symptomatic of Indian societal values? Can we blame it on cultural relativism, best exemplified by what Simon Davies calls “the Indian Train Syndrome, in which total strangers will disclose their lives on a train to complete strangers”? But surely, when email addresses are exchanged at the end of that conversation, they are not accompanied by passwords. Privacy is perhaps differently configured in Indian societies but it is definitely not dead. Fortunately for us, calls to protect this important human right are growing every day.

The Personal Data (Protection) Bill, 2013

by Prachi Arya last modified Aug 30, 2013 02:53 PM
Below is the text of the Personal Data (Protection) Bill, 2013 as discussed at the 6th Privacy Roundtable, New Delhi held on 24 August 2013. Note: This version of the Bill caters only to the Personal Data regime. The surveillance and privacy of communications regime was not discussed at the 6th Privacy Roundtable.

Attachment: Personal Data (Protection) Bill.pdf (PDF document, 193 kB / 198250 bytes)

Report on the Sixth Privacy Roundtable Meeting, New Delhi

by Prachi Arya last modified Aug 30, 2013 03:04 PM
In 2013 the Centre for Internet and Society (CIS) drafted the Privacy Protection Bill as a citizens' version of privacy legislation for India. Since April 2013, CIS has been holding Privacy Roundtables in collaboration with the Federation of Indian Chambers of Commerce and Industry (FICCI) and DSCI, with the objective of gaining public feedback on the Privacy Protection Bill and other possible frameworks for privacy in India. The following is a report on the Sixth Privacy Roundtable held in New Delhi on August 24, 2013.

A banner of the event with logos of all the organisers


This research was undertaken as part of the 'SAFEGUARDS' project that CIS is undertaking with Privacy International and IDRC.


Introduction

A series of seven multi-stakeholder roundtable meetings on "privacy" were conducted by CIS in collaboration with FICCI from April 2013 to August 2013 under the Internet Governance initiative. DSCI joined CIS and FICCI as a co-organizer on April 20, 2013.

CIS was a member of the Justice A.P. Shah Committee which drafted the "Report of Groups of Experts on Privacy". CIS also drafted a Privacy (Protection) Bill 2013 (hereinafter referred to as ‘the Bill’), with the objective of establishing a well protected privacy regime in India. CIS has also volunteered to champion the session/workshops on "privacy" in the final meeting on Internet Governance proposed for October 2013.

At the roundtables the Report of the Group of Experts on Privacy and the text of the Privacy (Protection) Bill 2013 will be discussed. The discussions and recommendations from the round table meetings will be presented at the Internet Governance meeting in October 2013.

The dates of the seven Privacy Round Table meetings are listed below:

  1. New Delhi Roundtable: April 13, 2013
  2. Bangalore Roundtable: April 20, 2013
  3. Chennai Roundtable: May 18, 2013
  4. Mumbai Roundtable: June 15, 2013
  5. Kolkata Roundtable: July 13, 2013
  6. New Delhi Roundtable: August 24, 2013
  7. New Delhi Final Roundtable and National Meeting: October 19, 2013

This Report provides an overview of the proceedings of the Sixth Privacy Roundtable (hereinafter referred to as 'the Roundtable'), conducted at FICCI, Federation House in Delhi on August 24, 2013. The Personal Data (Protection) Bill, 2013 was discussed at the Roundtable.

The Sixth Privacy Roundtable began with reflections on the evolution of the Bill. In its penultimate form, the Bill stands substantially changed as compared to its previous versions. For the purpose of this Roundtable, which entailed participation largely from industry organizations and other entities that handle personal data, only the personal data regime was discussed. This debate was distinguished from the general and specific discussion relating to privacy, surveillance and interception of communications, as it was felt that greater expertise was required to deal adequately with such a vast and nuanced area. After further discussion with security experts, the provisions on surveillance and privacy of communications will be reincorporated, resulting in omnibus privacy legislation. To reflect this narrower ambit, the title of the Bill in its current form was changed from the more expansive Privacy (Protection) Bill to the Personal Data (Protection) Bill.

Chapter I – Preliminary

Section 2 of the first chapter enumerates various definitions, including ‘personal data’, which is defined as any data that can lead to identification, and ‘sensitive personal data’, a subset of personal data defined by way of a list. The main contentions arose in relation to the latter definition.

Religion and Caste

A significant modification is found in the definition of ‘sensitive personal data’, which has expanded to include two new categories, namely, (i) ethnicity, religion, race or caste, and (ii) financial and credit information. Although discussed previously, these two categories had hitherto been left out of the purview of the definition as they are fraught with issues of practicality. In the specific example of caste, the government has historically engaged in large-scale data collection for the purpose of the census, for example as conducted by the Ministry of Rural Development and the Ministry of Social Justice and Empowerment, Government of India. Further, in the Indian scenario, various statutory benefits accrue from caste identities under the aegis of affirmative action policies. Hence, categorizing caste as sensitive personal data may not be considered desirable. The problem is further exacerbated with respect to religion, as even a person’s name can be an indicator. In light of this, some issues under consideration were –

  • Whether religion and caste should be categorized as sensitive personal data or personal data?
  • Whether it is impracticable to include it in either category?
  • If included as sensitive personal data, how should it be implemented?

The majority seemed to lean towards including it under the category of sensitive personal data rather than personal data. It was argued that the categorization of some personal data as sensitive was done on the basis of a higher potential for profiling or discrimination. In the same vein, caste and religious identities were sensitive information, requiring the greater protection provided under Section 16 of the Bill. Regarding the difficulties posed by revealing names, it was proposed that since a name was not an indicator by default, this consideration could not be used as a rationale to eliminate religion from the definition. Instead, it was suggested that programmes sensitizing the populace to the implications of names as indicators of religion or caste should be encouraged. With regard to the issue of the census, where caste information is collected, it was opined that the same could be done anonymously as well. The maintenance of public databases including such information by various public bodies was considered problematic for privacy, as they are often easily accessible and hence have a high potential for abuse. Overall, the conclusion was that the potential for abuse of such data could be better curtailed if greater privacy requirements were mandated for both private and public organizations. The collection of this kind of data should be done on a necessity basis and kept anonymous wherever possible. However, it was acknowledged that there were greater impracticalities associated with treating religion and caste as sensitive personal data. Further, the use and disclosure of indicative names was considered to be a matter of choice. Often caste information is revealed for affirmative action schemes, for example in rank lists for admissions or appointments. In such cases, it was considered counter-productive to discourage the beneficiary from revealing such information. Consequently, it was suggested that these categories could be regulated differently and qualified wherever required. The floor was then thrown open for discussing the other categories included under the definition of ‘sensitive personal data’.

Political Affiliation

Another contentious issue discussed at the Roundtable was the categorization of ‘political affiliation’ as ‘sensitive personal data’. A participant questioned the validity of including it in the definition, arguing that it is not an issue in India. Further, it was argued that one’s political affiliation was also subject to change and hence did not mandate higher protection as provided for sensitive personal data. Instead, if included at all, it should be categorized as ‘personal data’. This was countered by other participants who argued that revealing such information should be a matter of choice and if this choice is not protected adequately, it may lead to persecution. In light of this, changing one’s political affiliation particularly required greater protection as it may leave one more vulnerable. Everyone was in agreement that the aggregation of this class of data, particularly when conducted by public and private organizations, was highly problematic, as evidenced by its historic use for targeting dissident groups. Further, it was accepted unanimously that this protection should not extend to public figures as citizens had a right to know their political affiliation. However, although there was consensus on voting being treated as sensitive personal data, the same could not be reached for extending this protection to political affiliation.

Conviction Data

The roundtable also elicited a debate on conviction data being enumerated as sensitive personal data. The contention stemmed from the usefulness of maintaining this information as a matter of public record. Inter alia, the judicial practice of considering conviction history for repeat offenders, the need to consider this data before issuing passports, and the possibility of establishing a sex offenders registry in India were cited as examples of the same.

Financial and Credit Information

From the outset, the inclusion of Financial and Credit information as sensitive personal data was considered problematic as it would clash directly with existing legislation. Specifically, the Reserve Bank of India issues mandates on all matters involving this class of data. However, it was considered expedient to categorize it in this manner due to the grave mismanagement associated with it, despite existing protections. In this regard, the handling of Credit Information was raised as an issue. Even though it is regulated under the Credit Information Companies (Regulation) Act, 2005, its implementation was found to be wanting by some participants. In this context, the harm sought to be prevented by its inclusion in the Bill was the unregulated sharing of credit-worthiness data with foreign banks and organs of the state. Informed consent was offered as the primary qualifier. However, some participants proposed that extending a strong regime of protection to such information would not be economically viable for financial institutions. Thus, it was suggested that this information should instead be categorized as personal data, with the aim of regulating unauthorized disclosures.

Conclusion

The debate on the definition of sensitive personal data concluded with the following suggestions and remarks:

  • The categories included under sensitive personal data should be subject to contextual provisions instead of blanket protection.
  • Sensitive personal data mandates greater protection with regard to storage and disclosure than personal data.
  • While obtaining prior consent is important for both kinds of data, obtaining informed consent is paramount for sensitive personal data.
  • Both classes of data can be collected for legitimate purposes and in compliance with the protection provided by law.

Chapter II – Regulation of Personal Data

This chapter of the Bill establishes a negative statement of a positive right under Section 3 along with exemptions under Section 4, as opposed to the previous version of the Bill, discussed at the fifth Privacy Roundtable, which established a positive right. Thus, in its current form, the Bill provides a stronger regime for the regulation of personal data. The single exemption provided under this part is for personal or domestic use.

The main issues under consideration with regard to this part were –

  • The scope of the protection provided
  • Whether the exemptions should be expanded or diminished.

A participant raised a doubt regarding the subject of the right. In response, it was clarified that the Bill was subject to existing Constitutional provisions and relevant case law. According to the apex court, in Kharak Singh v. The State of U.P. (1964), the Right to Privacy arose from the Right to Life and Personal Liberty as enshrined under Article 21 of the Constitution of India. Since the Article 21 right is applicable to all persons, the Right to Privacy has to be interpreted in conjunction. Consequently, the Right to Privacy will apply to both citizens and non-citizens in India. It would also extend to information of foreigners stored by any entity registered in India and any other entity having an Indian legal personality irrespective of whether they are registered in India or not.

The next issue that arose at the Roundtable stemmed from the exemption provided under Section 4 of the Bill. A participant opined that excluding the domestic use of such data was inadvisable, as such data was often used maliciously during domestic rows such as divorce. With regard to how ‘personal and domestic use’ was to be defined, it was proposed that the definition had to cater to existing cultural norms. In India, this entailed following existing community laws, which do not recognize the nuclear family as a legal entity. It was also acknowledged that Joint Hindu Families had to be dealt with specially, and that their connection with large businesses in India would have to be carefully considered.

Another question regarding exemptions brought up at the Roundtable was whether they should be broadened to include the information of public servants and the handling of all information by intelligence agencies. Similarly, some participants proposed that exemptions or exceptions should be provided for journalists, private figures involved in cases of corruption, politicians, private detective agencies etc. It was also proposed that public disclosure of information should be handled differently than information handled in the course of business.

Conclusion

The overall conclusion of the discussion on this Chapter was –

  • All exemptions and exceptions included in this Chapter should be narrowly tailored and specifically defined.
  • Blanket exemptions should be avoided. The specificities can be left to the Judiciary to adjudicate on as and when contentions arise.

Chapter III – Protection of Personal Data

This chapter seeks to regulate the collection, storage, processing, transfer, security and disclosure of personal data.

Collection of Personal Data

Sections 5, 6 and 7 of the Bill regulate the collection of personal data. While Section 5 establishes a broad bar on the collection of personal data, Sections 6 and 7 provide for deviations from the same, for collecting data with and without prior informed consent respectively.

Collection of Data with Prior Informed Consent

Section 6 establishes the obligation to obtain prior informed consent, sets out the regime for the same and, by way of two provisos, allows for withdrawal of consent, which may result in denial of certain services.

The main issues discerned from this provision involved (i) notice for obtaining consent, (ii) mediated data collection, and (iii) destruction of data.

Regarding notice, some participants observed that although it was a good practice, it was not always feasible. A participant raised the issue of the frequency of obtaining consent. It was observed that features such as allowing users to stay logged in and the storage of cookies were considered benefits which would be disrupted if consent had to be obtained at every stage or each time the service was used. To solve this problem, it was unanimously accepted that consent only had to be obtained once for the entirety of the service offered, except when the contract or terms and conditions were altered by the service provider. It was also decided that the entity directly conducting the collection of data was obligated to obtain consent, even if the collection was conducted on behalf of a third party.

Mediated data collection proved to be a highly contentious issue at the Roundtable. The issue was determining the scope and extent of liability in cases where a mediating party collects data for a data controller about another subject who may or may not be a user. In this regard, two scenarios were discussed: (i) a data subject uploading pictures of a third party on social media sites like Facebook, and (ii) using mobile phone applications to send emails, which involves, inter alia, the sender, the phone manufacturer and the receiver. The ancillary issues recognized by participants in this regard were: (i) how would data acquired in this manner be treated if it could lead to the identification of the third party, and (ii) whether destruction of user data due to withdrawal of consent amounts to destruction of general data, i.e. that of the third party. The consensus was that there was no clarity on how such forms of data collection could be regulated, even though it seemed expedient to do so. The government’s inability to find a suitable solution was also brought to the table. In this regard it was suggested by some participants that the Principle of Collection Limitation, as defined in the A.P. Shah Committee Report, would provide a basic protection. Further, the extent to which such collection would be exempted as personal use was suggested as a threshold. A participant observed that it would be technically unfeasible for the service provider to regulate such collection, even if it involved illicit data such as pornographic or indecent photographs. Further, it was opined that such oversight by the service provider could be undesirable since it would result in the violation of the user’s privacy. Thus, any proposal for regulation had to balance the data subject’s rights with those of the third party. In light of this, it was suggested that the mediating party should be made responsible for obtaining consent from the third party.

Another aspect of this provision which garnered much debate was the proviso mandating destruction of data in case of withdrawal of consent. A participant stated the need for including broad exceptions, as destruction may not always be desirable. Regarding the definition of ‘destroy’, as provided under Section 2, it was observed that it mandated the erasure/deletion of the data in its entirety. Instead, it was suggested that the same could be achieved by merely anonymising the information.
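
To make the distinction concrete, here is a minimal sketch (in Python) of what anonymising a record rather than destroying it outright could look like. The field names, the salted-hash scheme and the split between dropped and pseudonymised fields are illustrative assumptions, not definitions drawn from the Bill.

    # Illustrative sketch only: anonymise a record instead of deleting it.
    # Field names and the hashing scheme are assumptions for this example,
    # not requirements taken from the Personal Data (Protection) Bill.
    import hashlib

    DIRECT_IDENTIFIERS = {"name", "email", "phone"}   # dropped entirely
    LINKING_KEYS = {"customer_id"}                    # replaced with a one-way hash

    def anonymise(record: dict, salt: str) -> dict:
        """Return a copy of the record with direct identifiers removed and
        linking keys replaced by salted SHA-256 digests."""
        out = {}
        for key, value in record.items():
            if key in DIRECT_IDENTIFIERS:
                continue                              # discard identifying fields
            if key in LINKING_KEYS:
                out[key] = hashlib.sha256((salt + str(value)).encode()).hexdigest()
            else:
                out[key] = value                      # keep non-identifying data as is
        return out

    record = {"name": "A. Kumar", "email": "a.kumar@example.org",
              "customer_id": "C-1042", "purchase_total": 1450}
    print(anonymise(record, salt="per-dataset-secret"))

Whether such pseudonymisation would satisfy a legal definition of ‘destroy’ is precisely the question the participants left open.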

Collection of Data without Consent

Section 7 of the Bill outlines four scenarios which entail collection of personal data without prior consent, which are reproduced below -

“(a) necessary for the provision of an emergency medical service to the data subject;
(b) required for the establishment of the identity of the data subject and the collection is authorised by a law in this regard;
(c) necessary to prevent a reasonable threat to national security, defence or public order; or
(d) necessary to prevent, investigate or prosecute a cognisable offence”

Most participants at the Roundtable found that the list was too large in scope. The unqualified inclusion of prevention in the last two sub-clauses was found to be particularly problematic. It was suggested that Section 7 (c) was entirely redundant as its provisions could be read into Section 7 (d). Furthermore, the inclusion of ‘national security’ as a basis for collecting information without consent was rejected almost unanimously. It was suggested that if it was to be included, then a qualification was desirable, allowing collection of information only when authorized by law. Some participants extended this line of reasoning to Section 7 (c), as state agencies were already authorized to collect information in this manner; it was opined that including it under the Bill would reassert their right to do so in broader terms. For similar reasons, Section 7 (b) was found objectionable as well. It was further suggested that if sub-clauses (b), (c) and (d) remained in the Bill, they should be subject to existing protections, for example those established by seminal cases such as Maneka Gandhi v. Union of India (1978) and PUCL v. Union of India (1997).

Storage and Processing of Personal Data

Section 8 of the Bill lays down a principle mandating the destruction of the information collected, following the cessation of the necessity or purpose for storage and provides exceptions to the same. It sets down a regime of informed consent, purpose specific storage and data anonymization.

The first amendment suggested for this provision was regarding the requirement of deleting the stored information ‘forthwith’. It was proposed by a participant that deleting personal data instantaneously had practical constraints and that a reasonableness criterion should be added. It was also noted that in the current form of the Bill, the exception for historical, archival and research purposes had been replaced by the more general phrase ‘for an Act of Parliament’. The previous definition was altered as the terms being used were hard to define. In response, a participant suggested a broader phrase which would include any legal requirement. Another participant argued that a broader phrase would need to be more specifically defined to avoid dilution.

Section 9 of the Bill sets out two limitations for processing data in terms of (i) the kind of personal data being processed and (ii) the purpose for the same. The third sub clause enumerates exceptions to the abovementioned principles in language similar to that found in Section 7.

With regard to the purpose limitation clause, it was suggested by many participants that the same should be broadened to include multiple purposes, as purpose swapping is widespread in existing practice and would be unfeasible and undesirable to curtail. Sub-clause 3 of this Section was critiqued for the same reasons as Section 7.

Section 10 restricts cross-border transfer of data. It was clarified that different departments of the same company or of the same holding company would be treated as different entities for the purpose of identifying the data processor. However, a concern was raised regarding the possibility of increased bureaucratic hurdles to the global transfer of data if this section were read too strictly. At the same time, it was noted that certain restrictions on the data controller and on the location of transfer were needed to provide adequate protection of the data subject’s rights.

The regime for disclosure of personal data without prior consent is provided for by Section 14. The provision did not specify the rank of the police officer in charge of passing orders for such disclosure. It was observed that a suitable rank had to be identified to ensure adequate protection. Further, it was suggested that the provision be broadened to include other competent agencies as well. This could be included by way of a schedule or subsequent notifications.

Conclusion

  • Mediated collection of data should be qualified on the basis of purpose and intent of collection.
  • The issue of cost to company (C2C) was not given adequate consideration in the Bill.
  • Procedures need to be laid down for all stages of handling personal data.
  • Special exemptions need to be provided for journalistic sources.

Meeting Conclusion

The Sixth Privacy Roundtable was the second to last of the stakeholder consultations conducted for the Citizens’ Personal Data (Protection) Bill, 2013. Various changes made to the Bill from its last form were scrutinized closely and suitable suggestions were provided. Further changes were recommended for various aspects of it, including definitions, qualifications and procedures, liability and the chapter on offences and penalties. The Bill will be amended to reflect multi-stakeholder suggestions and cater to various interests.

6th Privacy Roundtable

by Prachi Arya last modified Aug 30, 2013 08:15 AM

CIS Cybersecurity Series (Part 10) - Lawrence Liang

by Purba Sarkar last modified Sep 10, 2013 08:31 AM
CIS interviews Lawrence Liang, researcher and lawyer, and co-founder of Alternative Law Forum, Bangalore, as part of the Cybersecurity Series.

"The right to privacy and the right to free speech have often been understood as distinct rights. But I think in the ecology of online communication, it becomes crucial for us to look at the two as being inseparable. And this is not entirely new in India. But, interestingly, a lot of the cases that have had to deal with this question in the Indian context, have pitted one against the other. Now, India doesn't have a law for the protection of whistle-blowers. So how do we now think of the idea of whistle-blowers being one of the subjects of speech and privacy coming together? How do we use the strong pillars that have been established, in terms of a very rich tradition that Indian law has, on the recognition of free speech issues but slowly start incorporating questions of privacy?" - Lawrence Liang, researcher and lawyer, Alternative Law Forum. 

Centre for Internet and Society presents its tenth installment of the CIS Cybersecurity Series. 

The CIS Cybersecurity Series seeks to address hotly debated aspects of cybersecurity and hopes to encourage wider public discourse around the topic.

Lawrence Liang is one of the co-founders of the Alternative Law Forum where he works on issues of intellectual property, censorship, and the intersection of law and culture. He is also a fellow with the Centre for Internet and Society and serves on its board.  

 
This work was carried out as part of the Cyber Stewards Network with aid of a grant from the International Development Research Centre, Ottawa, Canada.

Out of the Bedroom

by Nishant Shah last modified Sep 06, 2013 08:32 AM
We have shared it with our friends. We have watched it with our lovers. We have discussed it with our children and talked about it with our partners. It is in our bedrooms, hidden in sock drawers. It is in our laptops, in a folder marked "Miscellaneous". It is in our cellphones and tablets, protected under passwords. It is the biggest reason why people have learned to clean their browsing history and cookies from their browsers.

The article by Nishant Shah was published in the Indian Express on August 25, 2013.


Whether we go into surreptitious shops to buy unmarked CDs or trawl through Torrent and user-generated content sites in the quest of a video, there is no denying the fact that it has become a part of our multimedia life. Even in countries like India, where consumption and distribution of pornography are punished by law, we know that pornography is rampant. With the rise of the digital technologies of easy copy and sharing, and the internet which facilitates amateur production and anonymous distribution, pornography has escaped the industrial market and become one of the most intimate and commonplace practices of the online world.

In fact, if Google trend results are to be believed, Indians are among the top 10 nationalities searching for pornography daily. Even a quick look at our internet history tells us that it has all been about porn. The morphed pictures of a naked Pooja Bhatt adorned the covers of Stardust in the late 1990s, warning us that the true potential of Photoshop had been realised. The extraordinary sensation of the Delhi Public School MMS case which captured two underage youngsters in a grainy sexcapade announced the arrival of user-generated porn in a big way. The demise of Savita Bhabhi — India's first pornographic graphic novel — is still recent enough for us to remember that the history of the internet in India is book-ended by porn and censorship.

Recent discussions on pornography have been catalysed by a public interest litigation requesting for a ban on internet pornography filed in April by Kamlesh Vaswani. Whether Vaswani's observations on what porn can make us do stem from his own personal epiphany or his self-appointed role as our moral compass is a discussion that merits its own special space. Similarly, a debate on the role, function, and use of pornography in a society is complex, rich and not for today.

Instead, I want to focus on the pre-Web imagination of porn that Vaswani and his endorsers are trying to impose upon the rest of us. There is a common misunderstanding that all porn is the same porn, no matter what the format, medium and aesthetics of representations. Or in other words, a homogenising presumption is that erotic fiction and fantasies, pictures of naked people in a magazine, adult films produced by entertainment houses, and user-generated videos on the internet are the same kind of porn. However, as historical legal debates and public discussions have shown us, what constitutes porn is specific to the technologies that produce it. There was a time when DH Lawrence's iconic novel now taught in undergraduate university courses — Lady Chatterley's Lover — was deemed pornographic and banned in India. In more recent times, the nation was in uproar at the Choli ke peeche song from Khalnayak which eventually won awards for its lyrics and choreography.

In all the controversy, there has so far been a "broadcast imagination" of how pornography gets produced, consumed and distributed. There is a very distinct separation of us versus them when it comes to pornography. They produce porn. They distribute porn. They push porn down our throats (that was probably a poor choice of words) by spamming us and buying Google adwords to infect our search results. We consume porn. And all we need to do is go and regulate, like we do with Bollywood, the central management and distribution mechanism so that the flow of pornography can be curbed. This is what I call a broadcast way of thinking, where the roles of the performers, producers, consumers and distributors of pornography are all distinct and can be regulated.

However, within the murky spaces of the World Wide Web, the scenario is quite different. Internet pornography is not the same as availability of pornography on the internet. True, the digital multimedia space of sharing and peer-2-peer distribution has made the internet the largest gateway to accessing pornographic objects which are produced through commercial production houses. However, the internet is not merely a way of getting access to existing older forms of porn. The internet also produces pornography that is new, strange, unprecedented and is an essential part of the everyday experience of being digitally connected and networked into sociality.

The recent controversies about the former congressman from New York, Anthony Weiner, sexting — sending inappropriate sexual messages through his cellphone — give us some idea of what internet porn looks like. It is not just something captured on a phone-cam but interactive and collaboratively produced. Or as our own Porngate, where two cabinet ministers of the Karnataka legislative assembly were caught surfing some good old porn on their mobile devices while the legislature was in session, indicated, porn is not something confined to the privacy of our rooms. Naked flashmobs, young people experimenting with sexual identities in public, and sometimes bizarre videos of a bus-ride where the camera merely captures the banal and the everyday through a "pornographic gaze" are also a part of the digital porn landscape. The world of virtual reality and multiple online role-playing games offer simulated sexual experiences that allow for human, humanoid, and non-human avatars to engage in sexual activities in digital spaces. Peer-2-peer video chat platforms like Chatroulette offer random encounters of the naked kind, where nothing is recorded but almost everything can be seen.

The list of pornography produced by the internet — as opposed to pornography made accessible through the internet — is huge. It doesn't just hide in subcultural practices but resides on popular video-sharing sites like YouTube or Tumblr blogs. It vibrates in our cellphones as we connect to people far away from us, and pulsates on the glowing screens of our tablets as we get glimpses of random strangers and their intimate bodies and moments. An attempt to ban and censor this porn is going to be futile because it does not necessarily take the shape of a full narrative text which can be examined by others to judge its moral content. Any petition that tries to censor such activities is going to fall flat on its face because it fails to recognise that sexual expression, engagement and experimentation is a part of being human — and the ubiquitous presence of digital technologies in our life is going to make the internet a fair playground for activities which might seem pornographic in nature. In fact, trying to restrict and censor them, will only make our task of identifying harmful pornography — porn that involves minors, or hate speech or extreme acts of violence — so much more difficult because it will be pushed into the underbelly of the internet which is much larger than the searched and indexed World Wide Web.

Trying to suggest that internet pornography is an appendage which can be surgically removed from everyday cyberspace is to not understand the integral part that pornography and sexual interactions play in the development and the unfolding of the internet. The more fruitful efforts would be to try and perhaps create a guideline that helps promote healthy sexual interaction and alerts us to undesirable sexual expressions which reinforce misogyny, violence, hate speech and non-consensual invasions of bodies and privacy. This blanket ban on trying to sweep all internet porn under a carpet is not going to work — it will just show up as a big bump, in places we had not foreseen.

An Interview with Suresh Ramasubramanian

by Elonnai Hickok last modified Sep 06, 2013 09:37 AM
Suresh Ramasubramanian is the ICS Quality Representative - IBM SmartCloud at IBM. We at the Centre for Internet and Society conducted an interview with him on cybersecurity and issues in the Cloud.
  1. You have done a lot of work around cybersecurity and issues in the Cloud. Could you please tell us of your experience in these areas and the challenges facing them?
    a. I have been involved in antispam activism from the late 1990s and have worked in ISP / messaging provider antispam teams since 2001. Since 2005, I expanded my focus to include general cyber security and privacy, having written white papers on spam and botnets for the OECD, ITU and UNDP/APDIP. More recently, I have become an M3AAWG special advisor for capacity building and outreach in India.

    In fact capacity building and outreach has been the focus of my career for a long time now. I have been putting relevant stakeholders from ISPs, government and civil society in India in touch with their counterparts around the world, and, at a small level, enabling an international exchange of ideas and information around antispam and security.

    This was a challenge over a decade back when I was a newbie to antispam and it still is. People in India and other emerging economies, with some notable exceptions, are not part of the international communities that have grown in the area of cyber security and privacy.

    There is a prevalent lack of knowledge in this area, combined with gaps in local law and its enforcement. There is a tendency on the part of online criminals to target emerging and fast-growing economies as a rich source of potential victims for various forms of online crime, and sometimes as a safe haven against prosecution.
  2. In a recent public statement, Google said "Cloud users have no legitimate expectation of privacy." Do you agree with this statement?
    a. Let us put it this way. All email received by a cloud or other Internet service provider for its customers is automatically processed and data mined in one form or another. At one level, this can be done for spam filtering and other security measures that are essential to maintain the security and stability of the service, and to protect users from being targeted by spam, malware and potential account compromises. (A toy illustration of such automated filtering is sketched at the end of this answer.)

    The actual intent of automated data mining and processing should be transparently disclosed to customers of a service through a clearly defined privacy policy; the deployment of such processing, and the “end use” to which the mined data is put, are key to agreeing or disagreeing with such a statement.

    It goes without saying that such processing must stay within the letter, scope and spirit of a company’s privacy policy, and must actually be structured to be respectful of user privacy.

    Especially where mined data is used to provide user advertising or for any other commercial purpose (such as being aggregated and resold), strict adherence to a well written privacy policy and periodic review of this policy and its implementation to examine its compliance to laws in all countries that the company operates in are essential.

    There is way too much noise in the media for me to usefully add any more to this issue and so I will restrict myself to the purely general comments above.
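
    As a toy-level illustration of the automated processing mentioned at the start of this answer, the sketch below scores an incoming message against a small table of keyword weights. Real providers use far more sophisticated, machine-learned filters; the words, weights and threshold here are invented purely for illustration.

        # Toy spam scorer: sums hand-picked keyword weights and compares the
        # total against a threshold. All values are invented for illustration.
        SPAM_WEIGHTS = {"winner": 2.0, "lottery": 2.5, "password": 1.5, "urgent": 1.0}
        SPAM_THRESHOLD = 3.0

        def spam_score(message: str) -> float:
            """Sum the weights of known spam indicators present in the message."""
            words = set(message.lower().split())
            return sum(w for token, w in SPAM_WEIGHTS.items() if token in words)

        def is_spam(message: str) -> bool:
            return spam_score(message) >= SPAM_THRESHOLD

        print(is_spam("URGENT winner claim your lottery prize"))    # True
        print(is_spam("Agenda for tomorrow's roundtable meeting"))  # False
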
  3. In what ways can the privacy of an individual be compromised on the cloud? What can be done to prevent such instances of compromise?
    a. All the recent headlines about companies mining their own users’ data, and yet more headlines about different countries deploying nationwide or even international lawful intercept and wiretap programs, aside, the single largest threat to individual privacy on the cloud is, and has been for years before the word “cloud” came into general use, the constant targeting of online users by online criminals with a variety of threats including scams, phish campaigns and data / account credential stealing malware.

    Poor device security is another threat – one that becomes even more of a serious problem when the long talked about “internet of things” seems set to become reality, with cars, baby monitors, even Bluetooth enabled toilets, and more dangerously, critical national infrastructure such as power plants and water utilities becoming accessible over the Internet but still running software that is basically insecure and architected with assumptions that date back to an era when there was no conception or need to connect these to the Internet.

    Someone in Bluetooth range with the appropriate android application being able to automatically flush your toilet and even download a list of the dates and times when you last used it is personally embarrassing. Having your bank account broken into because your computer got infected with a virus is even more damaging. Someone able to access a dam’s control panel over the internet and remotely trigger the dam’s gates to open can cause far more catastrophic damage.

    The line between security and privacy, between normal business practice and unacceptable, even illegal behaviour, is sometimes quite thin and in a grey area that may be leveraged to the hilt for commercial and/or national security interests. However, scams, malware, exploits of insecure systems and similar threats are well on the wrong side of the “criminal” spectrum, and are a clear and present danger that cause far more than an embarrassing or personally damaging loss of privacy.
  4. How is the jurisdiction of the data on the cloud determined?
    This is a surprisingly thorny question. Normally, a company is based in a particular country and has an end user agreement / terms of service that makes its customers / users accept that country’s jurisdiction.

    However, a cloud based provider that does business around the world may, in practice, have to comply to some extent at least, with that country’s local laws – at any rate, in respect to its users who are citizens of that country. And any cloud product sold to a local business or individual by a salesman from the vendor’s branch in the country would possibly fall under a contract executed in the country and therefore, subject to local law.

    The level of compliance for data retention and disclosure in response to legal processes will possibly vary from country to country – ranging from flat refusals to cooperate (especially where any law enforcement request for data are for something that is quite legal in the country the cloud provider is based in) to actual compliance.

    In practice this may also depend on what is at stake for the cloud vendor in complying or refusing to comply with local laws, regardless of what the terms of use policies or contract assert about jurisdiction: the number of users the cloud vendor has in the country, the extent of its local presence in the country, and how vulnerable its resident employees and executives are to legal sanctions or punishment.

    In the past, it has been observed that a practical balance [which may be based on business economics as much as it is based on a privacy assessment] may be struck by certain cloud vendors with a global presence, based on the critical mass of users it stands to gain or lose by complying with local law, and the risks it faces if it complies, or conversely, does not comply with local laws – so the decision may be to fight lawsuits or prosecutions on charges of breaking local data privacy laws or not complying with local law enforcement requests for handover of user data in court, or worst case, pulling out of the country altogether.
  5. Currently, big cloud owners are US corps, yet US courts do not extend the same privacy rights to non US citizens. Is it possible for countries to use the cloud and still protect citizen data from being accessed by foreign governments? Do you think a "National Cloud" is a practical solution?
    a. The “cloud” in this context is just “the internet”, and keeping local data local and within local jurisdiction is possible in theory at any rate. Peering can be used to keep local traffic local instead of having it do a roundtrip through a foreign country and back [where it might or might not be subject to another country’s intercept activities, no comment on that].

    A national cloud demands local infrastructure including bandwidth, datacenters etc. that meet the international standards of most global cloud providers. It then requires cloud based sites that provide an equivalent level of service, functionality and quality to that provided by an international cloud vendor. And then after that, it has to have usable privacy policies and the country needs to have a privacy law and a sizeable amount of practical regulation to bolster the law, a well-defined path for reporting and redress of data breaches. There are a whole lot of other technical and process issues before having a national cloud becomes a reality, and even more before such a reality makes a palpable positive difference to user privacy.
  6. What audit mechanisms of security and standards exist for Cloud Service Providers and Cloud Data Providers?
    a. Plenty – some specific to the country and the industry sector / kind of data the cloud handles. The Cloud Security Alliance has been working for quite a while on CloudAudit, a framework developed as part of a cross industry effort to unify and automate Assertion, Assessment and Assurance of their infrastructure and service.

    Different standards bodies and government agencies have all come out with their own sets of standards and best practices in this area (this article has a reasonable list - http://www.esecurityplanet.com/network-security/cloud-security-standards-what-youshould-know.html). Some standards you absolutely have to comply with for legal reasons.

    Compliance reasons aside, what works is a judicious mix of standards, and considerable amounts of adaptation in your processes to make those standards work for you and play well together.

    The standards all exist. What varies considerably, and is a major cause of data privacy breaches, is incomplete or ham-handed implementation of existing standards, attempts at “checkbox compliance” that simply implement a set of steps leading to a required certification, and a lack of continuing initiative to keep the data privacy and security momentum going once these standards have been “achieved”, till it is time for the next audit at any rate.
  7. What do you see as the big challenges for privacy in the cloud in the coming years?
    a. Not very much more than the exact same challenges for privacy in the cloud over the past decade or more. The only difference is that any threat that existed before has always amplified itself because the complexity of systems and the level of technology and computing power available to implement security, and to attempt to breach security, is exponentially higher than ever before – and set to increase as we go further down the line.
  8. Do you think encryption is the answer to private and public institutions snooping?
    a. Encryption of data at rest and in transit is a key recommendation of any data privacy standard and cloud / enterprise security policy. Companies and users are strongly encouraged to deploy and use strong cryptography for personal protection. (A minimal sketch of encrypting data at rest follows at the end of this answer.) But to call it “the answer” is sort of like the tale of the blind men and the elephant.

    There are multiple ways to circumvent encryption – social engineering to trick people into revealing data (which can be mitigated to some extent, or detected if it is tried on a large cross section of your userbase – it is something that security teams do have to watch for), or just plain coercion, which is much tougher to defend against.

    As a very popular XKCD cartoon that has been shared around social media and has been cited in multiple security papers says -

    “A crypto nerd’s imagination”

    “His laptop’s encrypted. Let us build a million dollar cluster to crack it”
    “No good! It is 4096 bit RSA”
    “Blast, our evil plan is foiled”

    “What would actually happen”
    “His laptop’s encrypted. Drug him and hit him with this $5 wrench till he tells us the password”
    “Got it”
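
    As a minimal sketch of the "encryption of data at rest" recommendation at the start of this answer, the snippet below uses the third-party Python cryptography package (assumed to be installed, e.g. via pip install cryptography). In practice the key would come from a proper key-management system rather than being generated inline.

        # Minimal sketch: symmetric encryption of data at rest with Fernet.
        # Key handling is deliberately oversimplified for illustration.
        from cryptography.fernet import Fernet

        key = Fernet.generate_key()          # in production: fetch from a key-management system
        cipher = Fernet(key)

        plaintext = b"account: 1234-5678, balance: 42000"
        token = cipher.encrypt(plaintext)    # the ciphertext is what gets written to disk
        restored = cipher.decrypt(token)     # only possible with the key

        assert restored == plaintext

    As the answer itself notes, none of this helps against coercion or social engineering: encryption protects the data, not the person holding the key.
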
  9. Spam is now consistently used to get people to divulge their personal data or otherwise compromise a person's financial information and perpetuate illegal activity. Can spam be regulated? If so, how?
    a. Spam has been regulated in several countries around the world. The USA has had laws against spam since 2003. So has Australia. Several other countries have laws that specifically target spam or use other statutes in their books to deal with crime (fraud, the sale of counterfeit goods, theft..) that happens to be carried out through the medium of spam.

    The problems here are the usual problems that plague international enforcement of any law at all. Spammers (and worse online criminals including those that actively employ malware) tend to pick jurisdictions to operate in where there are no existing laws on their activities, and generally take the precaution not to target residents of the country that they live in. Others send spam but attempt to, in several cases successfully, skate around loopholes in their country’s antispam laws.

    Still others fully exploit the anonymity that the Internet provides, with privately registered domain names, anonymizing proxy servers (when they are not using botnets of compromised machines), as well as a string of shell companies and complex international routing of revenue from their spam campaigns, to quickly take money offshore to a more permissible jurisdiction.

    Their other advantage is that law enforcement and regulatory bodies are generally short staffed and heavily tasked, so that even a spammer who operates in the open may continue his activities for a very long time before someone manages to prosecute him.

    Some antispam laws allow recipients of spam to sue the spammer in small claims courts – which, like regulatory action, has also previously led to judgements being handed out against spammers and their being fined or possibly imprisoned in case their spam has criminal aspects to it, attracting local computer crime laws rather than being mere violations of civil antispam laws.
  10. There has been a lot of talk about the use of malware like FinFisher and its ability to compromise national security and individual security. Do you think regulation is needed for this type of malware, and if so, what type? Export controls? Privacy regulation? Use controls?
    a. Malware used by nation states as a part of their surveillance activities is a problem. It is further a problem if such malware is used by nation states that are not even nominally democratic and that have long standing records of human rights violations.

    Regulating or embargoing their sale is not going to help in such cases. One problem is that export controls on such software are not going to be particularly easy and countries that are on software export blacklists routinely manage to find newer and more creative ways to attempt to get around these and try to purchase embargoed software and computing equipment of all kinds.

    Another problem is that such software is not produced just by legitimate vendors of lawful intercept gear. Criminals who write malware that is capable of, say, stealing personal data such as bank account credentials are perfectly capable of writing such software, and there is a thriving underground economy in the sale of malware and of “take” from malware such as personal data, credit cards and bank accounts where any rogue nation state can easily acquire products with an equivalent functionality.

    This is going to apply even if legitimate vendors of such products are subject to strict regulations governing their sale and national laws exist regulating the use of such products. So while there is no reason not to regulate / provide judicial and regulatory oversight of their sale and intended use, it should not be seen as any kind of a solution to this problem.

    User education in privacy, and access to secure computing resources, will probably be the bedrock of any initiative that looks to protect user privacy: a final backstop to any technical, legal or other measure taken to protect them.

Privacy Law Must Fit the Bill

by Sunil Abraham last modified Sep 12, 2013 06:25 AM
The process of updating Indian privacy policy has gained momentum since the launch of the UID project and the leak of the Radia tapes. The Department of Personnel and Training has led the drafting of a privacy bill for the last three years. This bill will ideally articulate privacy principles, establish the office of the privacy commissioner and, most importantly, have an overriding effect over some 50 existing laws, rules and policies with privacy implications.


The article was published in the Deccan Chronicle on September 9, 2013.


Given the harmonizing impact of the proposed privacy bill, we must ensure that rigorous debate and discussion happens before the bill is finalized; otherwise there may be terrible consequences.

Here is a short list of what can possibly go wrong:

One, the privacy bill ignores the massive power asymmetry in Indian society and thereby undermines the right to information, referred to in other jurisdictions as freedom of information or access to information. That power asymmetry is addressed through a public interest test: the right to privacy would be the same for everyone except when the public interest is at stake. This allows protection of the right to privacy to be inversely proportionate to power, and the requirement of transparency to be directly proportionate to it. In other words, the poor would have greater privacy than middle-class citizens, who in turn would have greater privacy than political and economic elites; transparency requirements would be greatest for economic and political elites, lower for middle-class citizens and lowest for the poor. If this is not properly addressed in the language of the bill, privacy activists will have undone the significant accomplishments of the right to information, or transparency, movement in India over the last decade.

Two, the privacy bill has a chilling effect on free speech. This can happen either by denying the speaker privacy or by affording those who are spoken about too much privacy. For the speaker, Know Your Customer (KYC) and data retention requirements for the telecom and internet infrastructure necessary to participate in the networked public sphere can result in the death of anonymous and pseudonymous speech. Anonymous and pseudonymous speech must be protected, as it is necessary for good governance, free media, robust civil society, and vibrant art and culture in a democracy. For those spoken about, privacy is clearly required in certain cases to protect the victims of certain categories of crime. However, the right to privacy could be abused by those occupying public office and those in public life to censor speech that is in the public interest. If, for example, a sportsperson does not publicly drink the aerated drink that he or she endorses in advertisements, then the public has a right to know.

Three, the privacy bill has a limited scope. Indian jurisprudence derives the right to privacy from the right to life and liberty through several key judgments, including Naz Foundation v. Govt. of NCT of Delhi, decided by the Delhi High Court. The right to life and liberty under Article 21, unlike other constitutionally guaranteed fundamental rights, does not distinguish between citizens and non-citizens. As a consequence, the privacy bill must also protect residents, visitors and other persons who may never visit India but whose personal information may travel to India as part of the global outsourcing phenomenon. The obligations and safeguards under the privacy bill must also apply equally to the state and to private sector entities that could potentially infringe upon the individual's right to privacy. Different levels of protection may be afforded to citizens, residents, visitors and everybody else, and government and private sector data controllers may be subject to different regulations; for example, an intelligence agency may not require the 'consent' of the data subject to collect personal information and may only provide 'notice' after the investigation has cleared the suspect of all charges.

Four, the privacy bill is expected to fix poorly designed technology. There are two diametrically opposite characterisations of projects like NATGRID, CMS and UID. The government's position is that these systems will allow only targeted interception and surveillance; the majority of civil society believes that they will be used for blanket surveillance. If these systems are indeed built in a manner that supports blanket surveillance, then a legal band-aid in the form of a new law or provision that prohibits blanket surveillance will be a complete failure. The principle of 'privacy by design' is the only way to address this. For example, the shutters of digital cameras are silent, and this enables a particular form of voyeurism known as 'upskirting'. Almost a decade ago, the Korean government enacted a law that requires camera and mobile phone manufacturers to ensure that an audio recording of a mechanical shutter is played every time the camera function is used; it is also illegal for the user to circumvent or disable this feature. In this example, the principle of notice is hardwired into the technology itself. To remix Spiderman's motto: with great power comes great temptation. We know that a rogue NTRO official installed a spy camera in an office toilet to make recordings of female colleagues, and most recently that NSA officers confessed to spying on their love interests. If a technology can be abused, it will be abused. Legal safeguards are therefore a poor substitute for technological safeguards; we need both simultaneously.
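
The 'notice hardwired into the technology' idea can be made concrete with a small illustration. The following is a minimal toy sketch in Python; it is purely illustrative and describes neither real camera firmware nor the Korean rule itself. The point is that the notification step lives inside the capture routine and the class deliberately exposes no switch to turn it off, which is the essence of building the safeguard into the design rather than bolting it on through policy afterwards.

    # A toy illustration of 'privacy by design': the notice (the shutter
    # sound) is an inseparable part of taking a picture, not an optional
    # setting. Hypothetical sketch only, not real camera firmware.

    class Camera:
        def __init__(self, play_sound):
            # The notification callback is fixed at construction time;
            # there is deliberately no attribute or method to disable it.
            self._play_sound = play_sound

        def capture(self):
            # Notice always precedes data collection.
            self._play_sound()
            return self._read_sensor()

        def _read_sensor(self):
            # Stand-in for the actual image capture.
            return "raw image data"

    def shutter_click():
        print("*click*")  # audible notice to anyone nearby

    if __name__ == "__main__":
        cam = Camera(play_sound=shutter_click)
        image = cam.capture()  # every capture emits the notice
        print("captured:", image)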

Five, the bill does not require compliance with internationally accepted privacy principles, including the ones discussed so far: 'consent', 'notice' and 'privacy by design'. Apart from human rights considerations, the most important imperative for modernizing India's privacy laws is trade. We have a vibrant ITES, BPO and KPO sector which handles the personal information of foreigners, mostly from North America and Europe. The Justice AP Shah committee in October 2012 identified the privacy principles required for India: notice, choice and consent, collection limitation, purpose limitation, access and correction, disclosure of information, security, openness and accountability. A privacy bill that does not include all these principles will increase the regulatory compliance overhead for Indian enterprises with foreign clients and for multinationals operating in India. There is also the risk that privacy regulators in those jurisdictions will ban outsourcing to Indian firms because our privacy laws are not adequate by their standards.

To conclude, it is not sufficient for India to enact a privacy law; it is essential that we get it right, so that there are no unintended consequences for other, equally important rights and dimensions of our democracy.

Transparency Reports — A Glance on What Google and Facebook Tell about Government Data Requests

by Prachi Arya last modified Sep 13, 2013 09:44 AM
Transparency Reports are a step towards greater accountability but how efficacious are they really?

Prachi Arya examines the transparency reports released by tech giants with a special focus on user data requests made to Google and Facebook by Indian law enforcement agencies.

The research was conducted as part of the 'SAFEGUARDS' project that CIS is doing with Privacy International and IDRC.


According to a recent comScore report, India now has the third largest internet user base, with nearly 74 million citizens online, just behind China and the United States. The report also reveals that Google is the preferred search engine for Indians and Facebook the most popular social media website, followed by LinkedIn and Twitter. While users posting their photos on Facebook can limit viewership through privacy settings, there is not much they can do against a government seeking information on their profiles. All that can be said for sure in the post-Snowden world is that large-scale surveillance is a reality and that governments want it trained on their citizens' online existence. In this Orwellian scenario, transparency reports provide a trickle of information on how much our government finds out about us.

The first transparency report was released by Google three years ago to provide an insight into ‘the scale and scope of government requests for censorship and data around the globe’. Since then, issuing such reports has increasingly become standard practice for tech giants. An Electronic Frontier Foundation report reveals that major companies that have followed Google’s lead include Dropbox, LinkedIn, Microsoft and Twitter, with Facebook and Yahoo! being the latest additions. Requests to Twitter and Microsoft from Indian law enforcement agencies were significantly fewer than requests to Facebook and Google. Twitter revealed that Indian law enforcement agencies made fewer than 10 requests, none of which resulted in the sharing of user information. Of the 418 requests made to Microsoft by India (excluding Skype), 88.5 per cent were complied with for non-content user data. The Yahoo! transparency report revealed that six countries surpassed India in the number of user data requests: Indian agencies requested user data 1,490 times, covering 2,704 accounts, for both content and non-content data, and over 50 per cent of these requests were complied with.

The following is a compilation of what the latest transparency reports issued by Facebook and Google reveal.

"The information we share on the Transparency Report is just a sliver of what happens on the internet"
Susan Infantino, Legal Director for Google

Beginning in December 2009, Google has published several biannual transparency reports:

  • The reports disclose traffic data for Google services globally and statistics on removal requests received from copyright owners or governments, as well as on user data requests received from government agencies and courts. They also lay down the legal process to be followed by government agencies seeking data.
  • There was a 90 per cent increase in the number of content removal requests received by Google from India. The requests complied with included:
    • Restricting videos containing clips from the controversial movie “Innocence of Muslims” from view.
    • Many YouTube videos and comments as well as some Blogger blog posts being restricted from local view for disrupting public order in relation to instability in North East India.
  • For user data requests, the Google report details the number of requests and of users/accounts covered, as well as the percentage of requests that were partially or completely complied with. In India, user data requests more than doubled, from 1,061 in the July-December 2009 period to 2,431 in the July-December 2012 period, while the compliance rate fell from 79 per cent in the July-December 2010 period to 66 per cent in the latest report; the sketch after this list works through these figures.
  • Jurisdictions outside the United States can seek disclosure using Mutual Legal Assistance Treaties or any ‘other diplomatic and cooperative arrangement’. Google also provides information on a voluntary basis when a valid legal process is followed and the requests are in consonance with international norms, U.S. law, the requesting country's laws and Google’s policies.
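
As a worked check on the figures quoted above, the short sketch below uses only the numbers cited in this post (the full period-by-period data sits in Google's published report) to compute the growth in Indian user data requests and the change in the compliance rate.

    # Worked arithmetic on the India figures quoted in this post.
    # Only the numbers cited above are used; consult Google's own
    # Transparency Report for the complete period-by-period data.

    requests_jul_dec_2009 = 1061   # user data requests from India, Jul-Dec 2009
    requests_jul_dec_2012 = 2431   # user data requests from India, Jul-Dec 2012
    compliance_2010 = 0.79         # share of requests complied with, Jul-Dec 2010
    compliance_2012 = 0.66         # share complied with in the latest report

    growth = requests_jul_dec_2012 / requests_jul_dec_2009
    print(f"Requests grew about {growth:.1f}x ('more than doubled')")

    drop = (compliance_2010 - compliance_2012) * 100
    print(f"Compliance rate fell by roughly {drop:.0f} percentage points")

    # Approximate number of Indian requests that led to some disclosure in
    # the latest period, assuming the 66 per cent rate applies uniformly.
    complied = round(requests_jul_dec_2012 * compliance_2012)
    print(f"Roughly {complied} of {requests_jul_dec_2012} requests produced data")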

Facebook

    "We hope this report will be useful to our users in the ongoing debate about the proper standards for government requests for user information in official investigations."
    Colin Stretch, Facebook General Counsel

Facebook inaugurated its first-ever transparency report last Tuesday, with a promise to continue releasing these reports.

  • The ‘Global Government Requests Report’ provides information on the number of requests received by the social media giant for user/account information by country and the percentage of requests it complied with. It also includes operational guidelines for law enforcement authorities.
  • The report covers the first six months of 2013, up to June 30. In this period, India made 3,245 requests concerning 4,144 users/accounts, and half of these requests were complied with.
  • Jurisdictions outside the United States can seek disclosure by way of mutual legal assistance treaty requests or letters rogatory. Legal requests can take the form of search warrants, court orders or subpoenas. The requests are usually made in furtherance of criminal investigations, but no details about the nature of such investigations are provided.
  • Broad or vague requests are not processed. The requests are expected to include details of the law enforcement authority issuing the request and the identity of the user whose details are sought.

The Indian Regime

Sections 69 and 69B of the Information Technology (Amendment) Act, 2008 prescribe the procedure, and set safeguards, for the Indian Government to request user data from corporates. According to section 69, authorized officers can issue directions to intercept, monitor or decrypt information for the following reasons:

  1. Sovereignty or integrity of India,
  2. Defence of India,
  3. Security of the state,
  4. Friendly relations with foreign states,
  5. Maintenance of public order,
  6. Preventing incitement to the commission of any cognizable offence relating to the above, or
  7. For investigation of any offence.

Section 69B empowers authorized agencies to monitor and collect information for cyber security purposes, including ‘for identification, analysis and prevention of intrusion and spread of computer contaminants’. Additionally, rules framed under sections 69 and 69B regulate interception and monitoring under these provisions.

Information can also be requested through the Controller of Certifying Authorities under section 28 of the IT Act, which circumvents the stipulated procedure. If the request is not complied with, the intermediary may be penalized under section 44.

The Indian Government has been leaning increasingly towards greater control over online communications. In 2011, Yahoo! was slapped with a penalty of Rs. 11 lakh for not complying with a section 28 request which called for the email information of a person on the grounds of national security, although the court subsequently stayed the Controller of Certifying Authorities' order. In the same year, the government called for the pre-screening of user content by internet companies and social media sites to ensure the deletion of ‘objectionable content’ before it was published. The government has also increasingly sought greater online censorship, using the Information Technology Act to arrest citizens for social media posts, comments and even emails criticizing the government.

What does this mean for Privacy?

The Google transparency report has thrown light on a year-on-year increase in government data requests. The reports published by Google and Facebook reveal that the number of government requests from India is second only to that of the United States. Further, more than 50 per cent of the requests from India have led to disclosure by nearly all the companies surveyed in this post, with Twitter being the single exception.
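
To make that comparison concrete, the short sketch below tabulates the India figures quoted earlier in this post and derives the approximate number of requests that resulted in disclosure. The rates for Facebook and Yahoo! are the rounded 'half' and 'over 50 per cent' figures the companies reported, so the derived counts are indicative only.

    # Indicative comparison of Indian user data requests, using only the
    # figures quoted earlier in this post. The compliance rates for
    # Facebook and Yahoo! are approximations, so derived counts are rough.

    india_requests = {
        # company and reporting period: (requests, approximate compliance rate)
        "Google (Jul-Dec 2012)":   (2431, 0.66),
        "Facebook (Jan-Jun 2013)": (3245, 0.50),
        "Microsoft (excl. Skype)": (418, 0.885),
        "Yahoo!":                  (1490, 0.50),
    }

    print(f"{'Company':<26}{'Requests':>10}{'Complied (approx.)':>20}")
    for company, (requests, rate) in india_requests.items():
        complied = round(requests * rate)
        print(f"{company:<26}{requests:>10}{complied:>20}")

    # Twitter is the exception noted above: fewer than 10 requests from
    # India, none of which resulted in user information being shared.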

Undeniably, transparency reports are important accountability mechanisms which reaffirm a company's dedication to protecting its users' privacy. However, basic statistics and vague information cannot lift the veil on the full scope of surveillance. Even though Google's report has steadily moved towards more nuanced disclosure, it would only be meaningful if, among other things, it included a break-up of the purposes behind the requests. Similarly, although Google has included a general outline of the legal process, more specifics need to be disclosed. For example, the report could provide statistics on notifications, to indicate how often users under scrutiny are not notified. Such disclosures are important to enhance users' understanding of when their data may be accessed and for what purposes, particularly when there is no prior or retrospective intimation. Until the report provides comprehensive details about the kind of surveillance that websites and internet services are subjected to, it will be of very limited use. Its greatest limitation, however, may lie beyond its scope.

The monitoring regime envisioned under the Information Technology Act effectively lays down an overly broad system which may easily lead to abuse of power. Further, the Indian Government has become infamous for its urge to control websites and social media. Now, with the Indian Government's plan to establish the Central Monitoring System, the need for intermediaries to conduct interception may be done away with, giving the government unfettered access to user data and potentially rendering corporate transparency about data requests obsolete.

Privacy Meeting Brussels - Bangalore Slides

by Prasad Krishna last modified Sep 12, 2013 07:55 AM

presentation_vub_lsts_v3.pdf — PDF document, 1269 kB (1300025 bytes)

Privacy and Surveillance Talk by Sunil Abraham

by Prasad Krishna last modified Sep 13, 2013 09:47 AM

lecture_ccmg_2013september18.pdf — PDF document, 212 kB (217342 bytes)

The National Privacy Roundtable Meetings

by Bhairav Acharya last modified Mar 21, 2014 10:03 AM
The Centre for Internet & Society ("CIS"), the Federation of Indian Chambers of Commerce and Industry ("FICCI"), the Data Security Council of India ("DSCI") and Privacy International are, in partnership, conducting a series of national privacy roundtable meetings across India from April to October 2013. The roundtable meetings are designed to discuss possible frameworks for privacy in India.

This research was undertaken as part of the 'SAFEGUARDS' project that CIS is undertaking with Privacy International and IDRC.


Background: The Roundtable Meetings and Organisers

CIS is a Bangalore-based non-profit think-tank and research organisation with interests in, amongst other fields, the law, policy and practice of free speech and privacy in India. FICCI is a non-governmental, non-profit association of approximately 250,000 Indian bodies corporate. It is the oldest and largest organisation of businesses in India and represents a national corporate consensus on policy issues. DSCI is an initiative of the National Association of Software and Service Companies, a non-profit trade association of Indian information technology ("IT") and business process outsourcing ("BPO") concerns, which promotes data protection in India. Privacy International is a London-based non-profit organisation that defends and promotes the right to privacy across the world.

Privacy in the Common Law and in India

Because privacy is a multi-faceted concept, it has rarely been singly regulated. A taxonomy of privacy yields many types of individual and social activity to be differently regulated based on the degree of harm that may be caused by intrusions into these activities.[1]

The nature of the activity is significant; activities that are implicated by the state are attended by public law concerns and those conducted by private persons inter se demand market-based regulation. Hence, because the principles underlying warranted police surveillance differ from those prompting consensual collections of personal data for commercial purposes, legal governance of these different fields must proceed differently. For this and other reasons, the legal conception of privacy — as opposed to its cultural construction – has historically been diverse and disparate.

Traditionally, specific legislations have dealt separately with individual aspects of privacy in tort law, constitutional law, criminal procedure and commercial data protection, amongst other fields. The common law does not admit an enforceable right to privacy.[2] In the absence of a specific tort of privacy, various equitable remedies, administrative laws and lesser torts have been relied upon to protect the privacy of claimants.[3]

The question of whether privacy is a constitutional right has been the subject of limited judicial debate in India. The early cases of Kharak Singh (1964)[4] and Gobind (1975)[5] considered privacy in terms of physical surveillance by the police in and around the homes of suspects and, in the latter case, the Supreme Court of India found that some of the Fundamental Rights “could be described as contributing to the right to privacy”, which was nevertheless subject to a compelling public interest. This inference held the field until 1994 when, in the Rajagopal case (1994),[6] the Supreme Court, for the first time, directly located privacy within the ambit of the right to personal liberty guaranteed by Article 21 of the Constitution of India. However, Rajagopal dealt specifically with a book; it did not consider the privacy of communications. In 1997, the Supreme Court considered the question of wiretaps in the PUCL case (1996)[7] and, while finding that wiretaps invaded the privacy of communications, it continued to permit them subject to some procedural safeguards.[8] A more robust statement of the right to privacy was made more recently by the Delhi High Court in the Naz Foundation case (2009)[9] that de-criminalised consensual homosexual acts; however, this judgment is now in appeal.

Attempts to Create a Statutory Regime

The silence of the common law leaves the field of privacy in India open to occupation by statute. With the recent and rapid growth of the Indian IT and BPO industry, concerns regarding the protection of personal data to secure privacy have arisen. In May 2010, the European Union ("EU") commissioned an assessment of the adequacy of Indian data protection laws to evaluate the continued flow of personal data of European data subjects into India for processing. That assessment made adverse findings on the adequacy and preparedness of Indian data protection laws to safeguard personal data.[10]

Conducted amidst negotiations for a free trade agreement between India and the EU, the adverse assessment potentially impeded the growth of India’s outsourcing industry, which is heavily reliant on European and North American business.

Consequently, the Department of Electronics and Information Technology of the Ministry of Communications and Information Technology, Government of India, issued subordinate legislation under the rule-making power of the Information Technology Act, 2000 ("IT Act"), to give effect to section 43A of that statute. These rules – the Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011 ("Personal Data Rules")[11] — were subsequently reviewed by the Committee on Subordinate Legislation of the 15th Lok Sabha.[12] The Committee found that the Personal Data Rules contained clauses that were ambiguous, invasive of privacy and potentially illegal.[13]

In 2011, draft privacy legislation called the ‘Right to Privacy Bill, 2011’, which was drafted within the Department of Personnel and Training ("DoPT") of the Ministry of Personnel, Public Grievances and Pensions, Government of India, was made available on the internet along with several file notings ("First DoPT Bill"). The First DoPT Bill contained provisions for the regulation of personal data, interception of communications, visual surveillance and direct marketing. The First DoPT Bill was referred to a Committee of Secretaries chaired by the Cabinet Secretary which, on 27 May 2011, recommended several changes including re-drafts of the chapters relating to interception of communications and surveillance.

Aware of the need for personal data protection laws to enable economic growth, the Planning Commission constituted a Group of Experts under the chairmanship of Justice Ajit P. Shah, a retired Chief Justice of the Delhi High Court who delivered the judgment in the Naz Foundation case, to study foreign privacy laws, analyse existing Indian legal provisions and make specific proposals for incorporation into future Indian law. The Justice Shah Group of Experts submitted its Report to the Planning Commission on 16 October 2012 wherein it proposed the adoption of nine National Privacy Principles.[14] These are the principles of notice, choice and consent, collection limitation, purpose limitation, access and correction, disclosure of information, security, openness, and accountability. The Report recommended the application of these principles in laws relating to interception of communications, video and audio recordings, use of personal identifiers, bodily and genetic material, and personal data.

Criminal Procedure and Special Laws Relating to Privacy

While the Kharak Singh and Gobind cases first brought the questions of permissibility and limits of police surveillance to the Supreme Court, the power to collect information and personal data of a person is firmly embedded in Indian criminal law and procedure. Surveillance is an essential condition of the nation-state; the inherent logic of its foundation requires the nation-state to perpetuate itself by interdicting threats to its peaceful existence. Surveillance is a method by which the nation-state’s agencies interdict those threats. The challenge for democratic countries such as India is to find the optimal balance between police powers of surveillance and the essential freedoms of its citizens, including the right to privacy.

The regime governing the interception of communications is contained in section 5(2) of the Indian Telegraph Act, 1885 ("Telegraph Act") read with rule 419A of the Indian Telegraph Rules, 1951 ("Telegraph Rules"). The Telegraph Rules were amended in 2007[15] to give effect to, amongst other things, the procedural safeguards laid down by the Supreme Court in the PUCL case. However, India’s federal scheme permits States to also legislate in this regard. Hence, in addition to the general law on interceptions contained in the Telegraph Act and Telegraph Rules, some States have also empowered their police forces with interception functions in certain cases.[16] Ironically, even though some of these State laws invoke heightened public order concerns to justify their invasions of privacy, they establish procedural safeguards based on the principle of probable cause that surpass those in the Telegraph Rules.

In addition, further subordinate legislation issued to give effect to sections 69(2) and 69B(3) of the IT Act permits the monitoring of electronic communications, including emails, to collect traffic data, as well as the interception, monitoring and decryption of electronic communications.[17]

The proposed Privacy (Protection) Bill, 2013 and Roundtable Meetings

In this background, the proposed Privacy (Protection) Bill, 2013 seeks to protect privacy by regulating (i) the manner in which personal data is collected, processed, stored, transferred and destroyed — both by private persons for commercial gain and by the state for the purpose of governance; (ii) the conditions upon which, and procedure for, interceptions of communications — both voice and data communications, including both data-in-motion and data-at-rest — may be conducted and the authorities permitted to exercise those powers; and, (iii) the manner in which forms of surveillance not amounting to interceptions of communications — including the collection of intelligence from humans, signals, geospatial sources, measurements and signatures, and financial sources — may be conducted.

Previous roundtable meetings to seek comments and opinion on the proposed Privacy (Protection) Bill, 2013 took place at:

The roundtable meetings were multi-stakeholder events with participation from industry representatives, lawyers, journalists, civil society organizations and Government representatives. On average, 75 per cent of the participants represented industry concerns, 15 per cent represented civil society and 10 per cent represented regulatory authorities. The model followed at the roundtable meetings allowed for equal participation from all participants.


[1]. See generally, Dan Solove, “A Taxonomy of Privacy” University of Pennsylvania Law Review (Vol. 154, No. 3, January 2006).

[2]. Wainwright v. Home Office [2003] UKHL 53.

[3]. See A v. B plc [2003] QB 195; Wainwright v. Home Office [2001] EWCA Civ 2081; R (Ellis) v. Chief Constable of Essex Police [2003] EWHC 1321 (Admin).

[4]. Kharak Singh v. State of Uttar Pradesh AIR 1963 SC 1295.

[5]. Gobind v. State of Madhya Pradesh AIR 1975 SC 1378.

[6]. R. Rajagopal v. State of Tamil Nadu AIR 1995 SC 264.

[7]. People’s Union for Civil Liberties v. Union of India (1997) 1 SCC 30.

[8]. A Division Bench of the Supreme Court of India comprising Kuldip Singh and Saghir Ahmad, JJ, found that the procedure set out in section 5(2) of the Indian Telegraph Act, 1885 and rule 419 of the Indian Telegraph Rules, 1951 did not meet the “just, fair and reasonable” test laid down in Maneka Gandhi v. Union of India AIR 1978 SC 597 requisite for the deprivation of the right to personal liberty, from whence the Division Bench found a right to privacy emanated, guaranteed under Article 21 of the Constitution of India. Therefore, Kuldip Singh, J, imposed nine additional procedural safeguards that are listed in paragraph 35 of the judgment.

[9]. Naz Foundation v. Government of NCT Delhi (2009) 160 DLT 277.

[10]. The 2010 data adequacy assessment of Indian data protection laws was conducted by Professor Graham Greenleaf. His account of the process and his summary of Indian law can be found in Graham Greenleaf, "Promises and Illusions of Data Protection in Indian Law" International Data Privacy Law (47-69, Vol. 1, No. 1, March 2011).

[11]. The Rules were brought into effect vide Notification GSR 313(E) on 11 April 2011. CIS submitted comments on the Rules that can be found here – http://cis-india.org/internet-governance/blog/comments-on-the-it-reasonable-security-practices-and-procedures-and-sensitive-personal-data-or-information-rules-2011.

[12]. The Committee on Subordinate Legislation, a parliamentary ‘watchdog’ committee, is mandated by rules 317-322 of the Rules of Procedure and Conduct of Business in the Lok Sabha (14th edn., New Delhi: Lok Sabha Secretariat, 2010) to examine the validity of subordinate legislation.

[13]. See the 31st Report of the Committee on Subordinate Legislation that was presented on 21 March 2013.

[14]. See paragraphs 7.14-7.17 on pages 69-72 of the Report of the Group of Experts on Privacy, 16 October 2012, Planning Commission, Government of India.

[15]. See, the Indian Telegraph (Amendment) Rules, 2007, which were brought into effect vide Notification GSR 193(E) of the Department of Telecommunications of the Ministry of Communications and Information Technology, Government of India, dated 1 March 2007.

[16]. See, inter alia, section 14 of the Maharashtra Control of Organised Crime Act, 1999; section 14 of the Andhra Pradesh Control of Organised Crime Act, 2001; and, section 14 of the Karnataka Control of Organised Crime Act, 2000.

[17]. See, the Information Technology (Procedure and Safeguards for Monitoring and Collecting Traffic Data and Information) Rules, 2009 vide GSR 782 (E) dated 27 October 2009; and, Information Technology (Procedure and Safeguards for Interception, Monitoring and Decryption of Information) Rules, 2009 vide GSR 780 (E) dated 27 October 2009.

Blocking of Websites

by Prasad Krishna last modified Sep 24, 2013 09:11 AM

Blocking of websites A1.pdf — PDF document, 2037 kB (2086287 bytes)

Freedom of Speech (Poster)

by Prasad Krishna last modified Sep 24, 2013 09:16 AM

Freedom of speech.pdf — PDF document, 1448 kB (1482956 bytes)

Intermediary Liability Poster

by Prasad Krishna last modified Sep 24, 2013 09:30 AM

intermediary 36x12.pdf — PDF document, 1566 kB (1604607 bytes)

Internet Governance Forum Poster

by Prasad Krishna last modified Sep 24, 2013 09:35 AM

IGF a2.pdf — PDF document, 11476 kB (11752118 bytes)

DNA Poster 1

by Prasad Krishna last modified Sep 24, 2013 10:12 AM

DNA 1.pdf — PDF document, 205 kB (210890 bytes)

DNA Poster 2

by Prasad Krishna last modified Sep 24, 2013 10:14 AM

DNA2.pdf — PDF document, 200 kB (205486 bytes)

UID Poster 1

by Prasad Krishna last modified Sep 24, 2013 10:15 AM

UID 1.pdf — PDF document, 187 kB (191529 bytes)

UID Poster 2

by Prasad Krishna last modified Sep 24, 2013 10:17 AM

UID 2.pdf — PDF document, 235 kB (241347 bytes)