
Maria Xynou recently interviewed Caspar Bowden, an internationally renowned privacy advocate and former Chief Privacy Adviser at Microsoft. Read this exciting interview and gain insight into India's UID and CMS schemes, the export of surveillance technologies, how we can protect our data in light of mass surveillance, and much, much more!

Caspar Bowden is an independent advocate for better Internet privacy technology and regulation. He is a specialist in data protection policy, privacy enhancing technology research, identity management and authentication. Until recently he was Chief Privacy Adviser for Microsoft, with particular focus on Europe and regions with horizontal privacy law.
From 1998 to 2002, he was the director of the Foundation for Information Policy Research (www.fipr.org), an expert adviser to the UK Parliament for the passage of three bills concerning privacy, and co-organizer of the influential Scrambling for Safety public conferences on UK encryption and surveillance policy. His previous career over two decades ranged from investment banking (proprietary trading risk-management for option arbitrage) to software engineering (graphics engines and cryptography), including work for Goldman Sachs, Microsoft Consulting Services, Acorn, Research Machines, and IBM.
The Centre for Internet and Society interviewed Caspar Bowden on the following questions:


1. Do you think India needs privacy legislation? Why / Why not?


Well, I think it's essential for any modern democracy based on a constitution to now recognise a universal human right to privacy. This isn't something that would necessarily have occurred to the drafters of constitutions before the era of mass electronic communications, but this is now how everyone manages their lives and maintains social relationships at a distance, and therefore there needs to be an entrenched right to privacy – including communications privacy – as part of the core of any modern state.

2. The majority of India's population lives below the line of poverty and barely has any Internet access. Is surveillance an elitist issue or should it concern the entire population in the country? Why / Why not?


Although the majority of people in India are still living in conditions of poverty and don't have access to the Internet or, in some cases, to any electronic communications, that's changing very rapidly. India has some of the highest growth rates in take-up of both mobile phones and mobile Internet, so this is spreading very rapidly through all strata of society. It's becoming an essential tool for transacting with businesses and government, so it's going to be increasingly important to have a privacy law which guarantees rights equally, no matter what anyone's social station or situation. There's also, I think, a sense in which having a right to privacy based on individual rights is much preferable to some sort of communitarian approach to privacy, which has a certain philosophical following. That model of privacy – that somehow, because of a community benefit, there should also be a sort of community sacrifice of individual rights to privacy – has a number of serious philosophical flaws which we can talk about.

3. "I'm not a terrorist and I have nothing to hide...and thus surveillance can't affect me personally." Please comment.


Well, it's hard to know where to begin. Almost everybody in fact has “something to hide”, if you consider all of the social relationships and the way in which you are living your life. It's just not true that there's anybody who literally has nothing to hide, and in fact I think that it's rather a dangerous idea, in political culture, to think about imposing that on leaders and politicians. There's an increasing growth of the idea – now probably coming from America – that political leaders (and even their staff, to get hired in the current White House) should open up their lives, even to the extent of requiring officials to give up the passwords to their social network accounts (presumably so that they can be vetted for sources of potential political embarrassment in their private lives). This is a very bad idea, because if we only elect leaders, and if we only employ bureaucrats, who do not accord any subjective value to privacy, then it means we will almost literally be electing (philosophical) zombies. And we can't expect our political leaders to respect our privacy rights if we don't recognise that they have a right to privacy in their own lives also. The main problem with the “nothing to hide, so nothing to fear” mantra is that it is used as a rhetorical tool by authoritarian forces in government and society, who simply wish to take a more paternalistic and protective attitude. This reflects a disillusionment within the “deep state” about how democratic states should function.

Essentially, those who govern us are given a license through elections to exercise power with consent, but this entails no abrogation of a citizen's duty to question authority. Instead, that should be seen as a civic duty – provided the objections are reasonable. People actually know that there are certain things in their lives that they don't wish other people to know, but indoctrinating people with the “nothing to hide” ideology inculcates a general tendency towards more conformism in society, by inhibiting critical voices.

4. Should people have the right to give up their right to privacy? Why / Why not?


In European data protection law there is an obscure provision, in the area of so-called sensitive personal data (like political or philosophical views), which is particularly relevant to medical privacy but almost never used. It is currently possible for European governments to legislate to override the ability of the individual to consent. This might arise, for example, if a foreign company sets up a service to get people to consent to have their DNA analysed and taken into foreign databases, or generally where people might consent to a big foreign company analysing and capturing their medical records. I think there is a legitimate view that, as a matter of national policy, a government could decide that these activities were a threat to data sovereignty, or simply bad public policy. For example, if a country has a deeply-rooted social contract that guarantees the ability to access medical care through a national health service, private sector actors could try to undermine that social-solidarity basis for universal provision of health care. So for those sorts of reasons I do think it's defensible for governments to have the ability in those sectors to say: “Yes, there are areas where people should not be able to consent to give up their privacy!”

But then, going back to the previous answer, more generally, commercial privacy policies are now so complicated – well, they've always been complicated, but now they're mind-blowingly devious as well – that people have no real possibility of knowing what they're consenting to. For example, the secondary uses of data flows in social networks are almost incomprehensible, even for technologists at the forefront of research. The French data protection authorities are trying to penalize Google for replacing several very complicated privacy policies with one so-called unified policy, which says almost nothing at all. There's no possible way for people to give informed consent to this over-simplified policy, because it doesn't tell even an expert anything useful. So again in these circumstances, it's right for a regulator to intercede to prevent unfair exploitation of the deceptive kind of “tick-box” consent. Lastly, it is not possible for EU citizens to waive or trade away their basic right to access (or delete) their own data in future, because this would seem a reckless act and it cannot be foreseen when this right might become essential in some future circumstances. So in these three senses, I believe it is proper for legislation to be able to prevent the abuse of the concept of consent.

5. Do you agree with India's UID scheme? Why / Why not?


There is a valid debate about whether it's useful for a country to have a national identity system of some kind – and there are roughly three different ways that can be engineered technically. The first way is to centralise all data storage in a massive repository, accessed through remote terminal devices. The second way is a more decentralised approach, with a number of different identity databases or systems which can interoperate (or “federate”) with each other, with technical and procedural rules to enforce privacy and security safeguards. In general it's probably a better idea to decentralise identity information, because then if there is a big disaster (or cyber-attack) or data loss, you haven't lost everything. The third way is what's called “user-centric identity management”, where the devices (smartphones or computers) citizens use to interact with the system keep the identity information in a totally decentralised way.

Now the obvious objection to that is: “Well, if the data is decentralised and it's an official system, how can we trust that the information in people's possession is authentic?”. Well, you can solve that with cryptography. You can put digital signatures on the data, to show that the data hasn't been altered since it was originally verified. And that's a totally solved problem. However, unfortunately, not very many policy makers understand that, and so they are easily persuaded that centralization is the most efficient and secure design – but that hasn't been true technically for twenty years. Over that time, cryptographers have refined the techniques (the algorithms can now run comfortably on smartphones) so that user-centric identity management is totally achievable, but policy makers have not generally understood that. There is no technical reason why a totally user-centric vision of identity architecture should not be realized. But still the UID appears to be one of the most centralised large systems ever conceived.
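
To make this concrete, here is a minimal sketch of the kind of digital-signature check described above, using Ed25519 signatures from the Python cryptography library. It is purely illustrative – the keys, field names and record format are assumptions for the example, not the UID's actual design:

    import json

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric import ed25519

    # The registration authority's long-term key pair (the private key never leaves the authority).
    authority_key = ed25519.Ed25519PrivateKey.generate()
    authority_pub = authority_key.public_key()

    # An identity record the citizen keeps on their own card or device.
    record = json.dumps(
        {"name": "A. Citizen", "dob": "1980-01-01", "id": "0000-0000-0000"},
        sort_keys=True,
    ).encode()
    signature = authority_key.sign(record)  # issued once, at enrolment, after verification

    # Later, any relying party can check the citizen-held copy offline, with no central lookup.
    try:
        authority_pub.verify(signature, record)
        print("record is authentic and unaltered since enrolment")
    except InvalidSignature:
        print("record has been tampered with")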

There are still questions I don't understand about its technical architecture. For example, just creating an identity number by itself doesn't guarantee security, and it's a classic mistake to treat an identifier as an authenticator. In other words, treating knowledge of an identifier – which could become public information, like the American social security number – as if it were a key that opens up a system and gives people access to their own private information is very dangerous. It's not clear to me whether the UID system avoids that mistake. It seems that by just quoting back a number, in some circumstances this will be the key to open up the system and reveal private information, and that is an innately insecure approach. There may be details of the system I don't understand, but I think it's open to criticism on those systemic grounds.
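
To illustrate the identifier-versus-authenticator distinction, the following hypothetical lookup service contrasts the insecure pattern (knowing the number alone unlocks the record) with one that requires a separate secret. The record layout, secret and iteration count are assumptions for the sketch, not a description of the UID's real implementation:

    import hashlib
    import hmac
    import os

    # One hypothetical record, keyed by a (potentially public) identity number.
    records = {"1234-5678-9012": {"salt": os.urandom(16), "data": "private details"}}

    # Enrolment: the citizen also registers a separate secret (a PIN or passphrase),
    # stored only as a salted hash, never in the clear.
    records["1234-5678-9012"]["secret_hash"] = hashlib.pbkdf2_hmac(
        "sha256", b"citizen-chosen secret", records["1234-5678-9012"]["salt"], 200_000
    )

    def lookup_insecure(id_number: str) -> str:
        # Anti-pattern: knowledge of the identifier alone unlocks the record.
        return records[id_number]["data"]

    def lookup_authenticated(id_number: str, supplied_secret: bytes) -> str:
        # Safer: the identifier selects the record, but a separate secret authenticates the request.
        rec = records[id_number]
        candidate = hashlib.pbkdf2_hmac("sha256", supplied_secret, rec["salt"], 200_000)
        if not hmac.compare_digest(candidate, rec["secret_hash"]):
            raise PermissionError("an identifier is not an authenticator")
        return rec["data"]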

And then, more fundamentally, you have to ask what the purpose of that system in society is. You can define a system with a limited number of purposes – which is the better thing to do – and then quite closely specify the legal conditions under which that identity information can be used. It's much more problematic, I think, to just say “we'll be the universal identity system” and then try to find applications for it later. A number of countries tried this approach – for example Belgium, around 2000 – and they expected that, having created a platform for identity, many applications would follow and tie into the system. This really didn't happen, for a number of social and technical reasons which critics of the design had predicted. I suppose I would have to say that the UID system is almost the antithesis of the way I think identity systems should be designed, which should be based on quite strong technical privacy protection mechanisms – using cryptography – and where, as far as possible, you actually leave the custody of the data with the individual.

Another objection to this user-centric approach is “back-up”: what happens if you lose the primary information and/or your device? Well, you can anticipate that. You can arrange for this information to be backed up and recovered, but in such a way that the back-up is encrypted, and the recovered copy can easily be checked for authenticity using cryptography.
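
A minimal sketch of such an encrypted, tamper-evident back-up, here using the Fernet authenticated-encryption recipe from the Python cryptography library (the key handling and data format are assumptions for illustration only):

    from cryptography.fernet import Fernet, InvalidToken

    # Back-up key kept by the user (e.g. printed and stored somewhere safe offline).
    backup_key = Fernet.generate_key()
    f = Fernet(backup_key)

    identity_blob = b'{"name": "A. Citizen", "id": "0000-0000-0000"}'
    backup = f.encrypt(identity_blob)  # this ciphertext is safe to leave with any third party

    # Recovery: decryption only succeeds if the back-up is authentic and unmodified,
    # because Fernet authenticates the ciphertext as well as encrypting it.
    try:
        restored = f.decrypt(backup)
    except InvalidToken:
        raise SystemExit("back-up corrupted or tampered with")
    assert restored == identity_blob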

6. Should Indian citizens be concerned about the Central Monitoring System (CMS)? Why / Why not?


Well, the Central Monitoring System does seem to be an example of very large-scale “strategic surveillance”, as it is normally called. Many western countries have had these for a long time, but normally only for international communications. Normally, surveillance of domestic communications is done under a particular warrant, which can only be applied to one investigation at a time. And it's not clear to me that that is the case with the Central Monitoring System. It seems that this may also be applicable to mass surveillance of communications inside India. Now we're seeing a big controversy in the U.S. – particularly at the moment – about the extent to which their international strategic surveillance systems can also be used internally. What has happened in the U.S. seems rather deceptive: although the “shell” of the framework of individual protection of rights was left in place, there are actually now so many exemptions, when you look at the detail, that an awful lot of Americans' domestic communications are being subjected to this strategic mass surveillance. That is unacceptable in a democracy.

There are reasons why, arguably, it's necessary to have some sort of strategic surveillance of international communications, but what Edward Snowden revealed to us is that in the past few years many countries – the UK, the U.S., and probably also Germany, France and Sweden – have constructed mass surveillance systems which knowingly intrude on domestic communications also. We are living through a transformation in surveillance power, in which the State is becoming more able to monitor and control the population secretly than ever before in history. And it's very worrying that all of these systems appear to have been constructed without the knowledge of Parliaments and without precise legislation. Very few people in government even seem to have understood the true mind-boggling breadth of this new generation of strategic surveillance. And no elections were fought on a manifesto asking “Do people want this or not?”. It's being justified under a counter-terrorism mantra, without very much democratic scrutiny at all. The long-term effects of these systems on democracies are really uncharted territory.

We know that we're not in an Orwellian state, but the model is becoming more Kafkaesque. If one knows that this level of intensive and automated surveillance exists, then it has a chilling effect on society. Even if not very much is publicly known about these systems, there is still a background effect that makes people more conformist and less politically active, less prepared to challenge authority. And that's going to be bad for democracy in the medium term – not just the long term.

7. Should surveillance technologies be treated as traditional arms / weapons? If so, should export controls be applied to surveillance technologies? Why / Why not?


Surveillance technologies probably do need to be treated as weapons, but not necessarily as traditional weapons. We are probably going to have to devise new forms of export control, because tangible bombs and guns are physical goods – well, they're not “goods”, they're “bads” – that you can trace by tagging and labelling them, but many of the “new generation” of surveillance weapons are software. It's very difficult to control the proliferation of bits – just as it is with copyrighted material. And I remember when I was working on some of these issues thirteen years ago in the UK – during the so-called crypto wars – that the export of cryptographic software from many countries was prohibited. And there were big test cases about whether the source code of these programs was protected under the US First Amendment, which would prohibit such controls on software code. It was intensely ironic that, in order to control the proliferation of cryptography in software, governments seemed to be contemplating the introduction of strategic surveillance systems to detect (among other things) when cryptographic software was being exported. In other words, the kind of surveillance systems which motivated the “cypherpunks” to proselytise cryptography were being introduced partly with the perverse justification of preventing the proliferation of that very cryptography!

In the case of the new, very sophisticated software monitoring devices (“Trojans”) which are being implanted into people's computers – yes, this has to be subject to the same sort of human rights controls that we would apply to exports of weapon systems to oppressive regimes. But it's quite difficult to know how to do that. You have to tie responsibility to the companies that are producing them, but a simple system of end-user licensing might not work. So we might actually need governments to be much more proactive than they have been in the past with traditional arms export regimes, and to do much more, actively, to follow up after export – to check whether these systems are only being used by the intended countries. As for the law enforcement agencies of democratic countries which are buying these technologies: the big question is whether they are actually applying effective legal and operational supervision over the use of those systems. So, it's a bit of a mess! And I don't think the attempts that have been made so far to legislate in this area are sufficient.

8. How can individuals protect their data (and themselves) from spyware, such as FinFisher?


In democratic countries with a good system of the rule of law and supervision of law enforcement authorities, there have been cases – notably in Germany – where it's turned out that the police, using techniques like FinFisher, have actually disregarded the legal requirements laid down in court cases on the proper procedures. So I don't think it's good enough to assume that if one is doing ordinary lawful political campaigning, one would not be targeted by these weapons. So it's wise for activists and advocates to think about protecting themselves – and of course other professions who look after confidential information as well – because these techniques may also get into the hands of industrial spies, private detectives and, generally, people who are not subject to even the theoretical constraints of law enforcement agencies.

After Edward Snowden's revelations, we understand that all our computer infrastructure is much more vulnerable – particularly to foreign and domestic intelligence agencies – than we ever imagined. So, for example, I don't use Microsoft software anymore – I think that there are techniques which are now being sold to governments, and available to governments, for penetrating Microsoft platforms and probably other major commercial platforms as well. So I've made the choice, personally, to use free software – GNU/Linux in particular – and it still requires more skill for most people to use, but it is much, much easier than even a few years ago. So I think it's probably wise for most people to try and invest a little time getting rid of proprietary software, if they care at all about societal freedom and privacy. I understand that using the latest, greatest smartphone is cool, and so are the entertainment and convenience of the Cloud and tablets – but people should not imagine that they can keep those platforms secure.

It might sound a bit primitive, but I think people should go back to the idea that if they really want confidential communications with their friends, or if they are involved in political work, they have to think about setting aside one machine which they keep offline and use essentially just for editing and encrypting/decrypting material. Once they've encrypted their work on this “air gap” machine, as it's called, they can put their encrypted emails on a USB stick and transfer them to a second machine which they use to connect online (I notice Bruce Schneier is just now recommending the same approach). Once the “air gap” machine has been set up and configured, you should not connect it to the network again – preferably, don't connect it to the network ever. If you follow those sorts of protocols, that's probably the best that is achievable today.
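
For illustration only, the shape of that offline encryption step could look like the sketch below. In practice most people would use GPG/OpenPGP with a correspondent's public key; the pre-shared Fernet key, the letter.txt file and the /media/usb mount point here are assumptions for the example:

    from cryptography.fernet import Fernet

    # Key agreed with the correspondent in advance (for example, exchanged in person).
    shared_key = Fernet.generate_key()
    f = Fernet(shared_key)

    # --- On the offline ("air gap") machine ---
    with open("letter.txt", "rb") as src:             # written and edited entirely offline
        draft = src.read()
    ciphertext = f.encrypt(draft)
    with open("/media/usb/letter.enc", "wb") as out:  # hypothetical USB mount point
        out.write(ciphertext)

    # --- On the networked machine ---
    # Attach /media/usb/letter.enc to an email as-is; the plaintext never touches this machine.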

9. How would you advise young people working in the surveillance industry?


Young people should try to read a little into the ethics of surveillance and to understand their own ethical limits on what they want to do working in that industry. In some sense, I think it's a bit like contemplating a career in the arms industry. There are defensible uses of military weapons, but the companies that build these weapons are, at the end of the day, just corporations maximizing value for shareholders. And so you need to take a really hard look at the company you're working for, or the area you want to work in, and satisfy your own standard of ethics that what you're doing is not violating other people's human rights. I think that in the fantastically explosive growth of the surveillance industry that we've seen over the past few years – and it's accelerating – the sort of technologies being developed, particularly for electronic mass surveillance, are fundamentally ethically problematic. And I think that for a talented engineer, there are probably better things that he or she can do with a career.

    The views and opinions expressed on this page are those of their individual authors. Unless the opposite is explicitly stated, or unless the opposite may be reasonably inferred, CIS does not subscribe to these views and opinions which belong to their individual authors. CIS does not accept any responsibility, legal or otherwise, for the views and opinions of these individual authors. For an official statement from CIS on a particular issue, please contact us directly.