<?xml version="1.0" encoding="utf-8" ?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:syn="http://purl.org/rss/1.0/modules/syndication/" xmlns="http://purl.org/rss/1.0/">
<channel rdf:about="https://cis-india.org/internet-governance/blog/online-anonymity/search_rss">
  <title>We are anonymous, we are legion</title>
  <link>https://cis-india.org</link>
  
  <description>These are the search results for the query, showing results 151 to 165.</description>
  <image rdf:resource="https://cis-india.org/logo.png"/>

  <items>
    <rdf:Seq>
      <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/livemint-asmita-bakshi-october-18-2019-dystopia-vs-development"/>
      <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/livemint-shreya-nandi-prathma-sharma-october-15-2019-will-fastag-raise-privacy-concerns"/>
      <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/privacy-international-ambika-tandon-october-17-2019-mother-and-child-tracking-system-understanding-data-trail-indian-healthcare"/>
      <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/participation-in-iso-iec-jtc-1-sc-27-meetings"/>
      <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/reuters-annie-banerji-october-17-2019-indias-hiv-positive-trans-people-find-new-strength-in-technology"/>
      <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/panelist-at-launch-of-google-unescap-ai-report"/>
      <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/artificial-intelligence-in-the-delivery-of-public-services-elonnai-hickok-pranav-bidare-arindrajit-basu-siddharth-october-16-2019-farming-the-future"/>
      <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/ai-opera-ai-as-a-total-work-of-art"/>
      <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/fountain-ink-october-12-2019-arindrajit-basu-we-need-a-better-ai-vision"/>
      <rdf:li rdf:resource="https://cis-india.org/internet-governance/news/newsminute-october-1-2019-theja-ram-why-conviction-rate-for-cyber-crime-cases-in-karnataka-is-abysmally-low"/>
      <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/ai-for-good-event-report-on-workshop-conducted-at-unbox-festival"/>
      <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/designing-a-human-rights-impact-assessment-for-icann2019s-policy-development-processes"/>
      <rdf:li rdf:resource="https://cis-india.org/internet-governance/ai-full-spectrum-regulatory-challenge-launch-workshop-reference-files"/>
      <rdf:li rdf:resource="https://cis-india.org/internet-governance/artificial-intelligence-a-full-spectrum-regulatory-challenge-working-draft"/>
      <rdf:li rdf:resource="https://cis-india.org/internet-governance/blog/modern-war-institute-september-30-2019-arindrajit-basu-and-karan-saini-setting-international-norms-cyber-conflict-hard-doesnt-mean-stop-trying"/>
    </rdf:Seq>
  </items>

</channel>


    <item rdf:about="https://cis-india.org/internet-governance/news/livemint-asmita-bakshi-october-18-2019-dystopia-vs-development">
    <title>Dystopia vs development: The Kashmir paradox</title>
    <link>https://cis-india.org/internet-governance/news/livemint-asmita-bakshi-october-18-2019-dystopia-vs-development</link>
    <description>
        &lt;b&gt;On 26 July, Azmat Ali Mir, 26, landed in her hometown, Srinagar. A day later, uncertainty and panic gripped the Kashmir valley—the Amarnath yatris (pilgrims) and other tourists were being evacuated, there was heavy military deployment and news reports claimed that there could be a threat to the border.&lt;/b&gt;
        &lt;p&gt;The article by Asmita Bakshi was &lt;a class="external-link" href="https://www.livemint.com/mint-lounge/features/dystopia-vs-development-the-kashmir-paradox-11571377960811.html"&gt;published by Livemint&lt;/a&gt; on October 19, 2019. Ambika Tandon was quoted.&lt;/p&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify; "&gt;But Mir had a lot of work to do—she had events planned as part of her startup Manzar Experience Curators, which promotes Kashmiri art, culture and fashion made and produced locally for audiences outside the state, particularly Bengaluru, where she now lives. “We are so used to things like this, we were like, ‘these things will keep happening, curfew &lt;em&gt;laga denge&lt;/em&gt; (they will impose a curfew), that means you need to have ration in your home. But until then, you have to do your work’," Mir tells me over the phone from Bengaluru. “I had very little time, my tickets were already booked for 5 August, there was so much work, I had no time to think. I was going around, signing contracts, getting things done."&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;But soon, it became clear that things would be different this time. By August 1, fear and tension had escalated. Rumours of war grew louder, and additional troops were flown in. “The guy who heads the agency that was to help with online promotions for my event said things don’t seem okay and we should wait and see how this goes," says Mir. “Our lives, both personal and professional, are governed around the political calendar of Kashmir."&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Across town, on 26 July, Sheikh Samiullah, 28, from downtown Srinagar was at a café called ZeroBridge Fine Dine along with his team and representatives from the state administration, including deputy commissioner Shahid Choudhary, to launch the Android app for his company FastBeetle. The logistics startup, launched last year by Samiullah and co-founder Abid Rashid Lone, is often called “Kashmir’s Dunzo", and provides door-to-door delivery services for businesses ranging from online grocers and retail commerce to pharmacies and individuals.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The launch of their iOS app was scheduled for 13 August, the day after Eid. But this had to be cancelled a few days later due to the prevailing situation in the valley. Today, FastBeetle’s operations—which run on the internet—have ceased. “I invested all my savings in this company. For me, it’s not possible to run this again. It is like starting from the beginning. I have a massive liability on my head," Samiullah tells me in Delhi, where he has gone from running a profitable business to being unemployed and now searching for work.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Over the same period, Qazi Zaid, 30, who runs and edits the news platform Free Press Kashmir, was in overdrive. “As journalists living in Kashmir, we aren’t just reporting the conflict, we are also living the conflict. We are members of the same society," he says. “One of the last stories we did was on the panic—how panic is being manufactured and the standard response of people who are scared and entering panic mode. That’s what happened with us as well." Free Press Kashmir, which is primarily an online news portal, has not published for close to three months. And now Zaid is in the Capital, exploring ways to save his news portal from complete closure and prevent the 15 young journalists he employs from being rendered jobless.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;These young Kashmiris and their organizations have been driven into a state of near-obscurity since 5 August, when the Union government abrogated Article 370 of the Constitution, which granted the state of Jammu and Kashmir its special status, and subsequently sent the valley into a communication blackout. Two and a half months later, only landlines and post-paid mobile services (excluding SMS) have been restored. Internet and data services remain closed.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;With thousands of arrests, instances of violence from both militants and the Armed Forces reported in the international press, the impact of this shutdown has been immense. But it has also inflicted a huge monetary cost. A report in the BBC, published on 8 October, stated that “the Kashmir Chamber of Commerce and Industry estimates the shutdown has already cost the region more than $1.4bn (around ₹9,800 crore), and thousands of jobs have been lost".&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Shutting down of startups&lt;/strong&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;In a region ridden with decades of armed conflict and the presence of the Indian armed forces in large numbers, entrepreneurship is no easy feat. Kashmiris have typically chosen public sector jobs, but the valley’s entrepreneurs agree that over the last decade or so, young and resilient men and women from the valley had been working to change this with online and offline ventures.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;In fact, the startup ecosystem in Kashmir seemed to have been poised for growth. Notably, in September last year, the Jammu and Kashmir Entrepreneurship Development Institute (JKEDI), established by the state government, released the J&amp;amp;K Startup Policy 2018, which aimed to boost the startup ecosystem by granting founders a monthly allowance of up to ₹12,000 for a period of one year during incubation. Recognized startups would be provided with one-time assistance of up to ₹12 Lakh for product research and development, marketing and publicity.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;It was around this time that Samiullah started FastBeetle. He had noticed that though logistics companies existed, they catered largely to big organizations like Amazon. FastBeetle tied up with smaller businesses, including close to 200 women in the valley who were making and selling apparel and other wares on Instagram. “They would have trouble going out every day on multiple deliveries since it is a conservative society," he says. FastBeetle had over 30 merchants within its first month of operations. Over the first five months, they had grown to making 100 deliveries per day, employed a team of six, got an office space and two bikes. In a year, they had generated a positive cash flow despite numerous internet shutdowns imposed in the valley.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Since August 5, the company has been plunged into what Samiullah believes is an interminable downturn. He estimates monetary losses at approximately ₹15 lakh, not considering the ₹4 lakh he invested in the Android app and another ₹3 lakh on the iOS app that never took off. In the unlikely event that restrictions are lifted immediately and business as usual resumes in the valley, it will cost him another ₹10 lakhs to restart the company.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Financial losses aside, he says, it is the time and passion he had invested in the business that won’t come back. And his young employees face an uncertain future as well. One of his delivery boys, Arsalan Shabir Bhat, 21, doesn’t know what the future holds both for him or the valley. “The salary of ₹10,000 for me was good, I was satisfied. “&lt;em&gt;Aage ka nahi pata par haalaat bohot kharab hai. Filhaal toh baithe hi hai ghar pe&lt;/em&gt; (I don’t know about the future but the current situation is grim. For now, I am sitting at home)," he says.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Through all this, the state administration and Union government are trying to push the narrative of development. In late September, minister of state for finance and corporate affairs Anurag Thakur, told news outlets: “Our government has taken a historic decision to abrogate Article 370. Now, J&amp;amp;K will witness massive development." Yet, the 33 startups registered with the JKEDI and 70 with the Startup India portal in J&amp;amp;K, among others that run on private funding and bootstrapping models, have been struggling since this decision was taken. Earlier this week, militants attacked two non-local apple traders in the valley, casting doubt on the claim that Kashmir is safe for business.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;It was to assess conflicting claims such as these, by providing an insight into the lives of people in the valley, that Zaid restarted Free Press Kashmir in 2017 (it was previously shut down in 2014), using investments from his family business. “It’s all the more important now. Because authentic voices from Kashmir are not coming out," says Zaid. He says that while the international media focuses on Kashmir from a breaking news perspective and some of the Indian press takes a nationalistic line, human perspectives from the valley largely remained unheard.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“There was a gap of a human narrative coming out of Kashmir, which we saw and filled," he says. “If we were to relaunch right now, I don’t think there would be a lot of positive stories. There would be stories of struggle, survival, trauma, pain, hardship. That’s what we would be reporting right now."&lt;/p&gt;
&lt;p&gt;With a civil curfew reportedly in place in the valley as a means of protest, even businesses that could have provided financial assistance to these startups are not in operation.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“The economy is so badly hit and it will take another year or two years or more—no idea how long—to recover. Because right now advertisers will take some time to recover as well," says Zaid. “I don’t think we can sustain that long. Our business was at 50% of sustenance and now it’s down to 0. Traffic is down to 0 form 350,000-500,000 hits."&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Some investors like Asmat Ashai, who runs the US-based non-profit organization Funkar International, would provide financial assistance to young Kashmiri artists, nevertheless maintain that the difficult situation will not deter them from providing support. “I will continue to help anyone who asks me for help because we cannot give up and we will not be broken. We will stay the course and save whatever we have in spite of the abrogation of all the articles. That is paperwork. Kashmiris will not be broken."&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Lost hope&lt;/strong&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;According to the Software Freedom Law Foundation, a legal services organization working to protect digital freedom, Kashmir has had the maximum number of internet shutdowns in the country—55, of varying durations and extents, in 2019 alone, and a total of 180 since 2015. This time however, the shutdown was far more severe—all media and communication platforms, including landlines, internet, news publications and certain television services were suspended. “A large majority of businesses today rely on the internet for some part if not all of their function," says Ambika Tandon, policy officer, Centre for Internet and Society (CIS), Bengaluru.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;CIS published a digital book titled &lt;em&gt;Internet Shutdown Stories&lt;/em&gt; in May 2018 which tracked how internet blockades impact lives and livelihoods in India. “We collected stories from Internet Service Providers (ISPs) and digital marketing firms in Kashmir that were on the brink of closing down due to the frequency of shutdowns in the valley. The reporters spoke to musicians who used YouTube as a means to earn a livelihood and popularity, and were doubly upset with the effect on their income and their freedom of expression. Given the absence of any public notice before shutdowns, or information regarding the extent and duration of shutdowns, the government definitely has the minimal responsibility of compensating direct losses incurred by those who cannot afford it," says Tandon.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Take the example of Furqan Qureshi, who set up KartFood, popularly called “Kashmir’s Zomato", when he was still pursuing a commerce degree from Islamia College, Srinagar. He started in 2017 and would take orders on call. Once the response grew, Qureshi had a website and application built. But for two months thereafter, in May and June 2017, there was a clampdown on the internet. “I suffered a loss of close to ₹1.5 lakh and that time I had no investment, but I had employed people and was responsible for them, so I persevered and started again from July. It’s always about working from scratch in Kashmir. Whenever there is a shutdown, you start from zero," he says on the phone from Bengaluru.&lt;/p&gt;
&lt;p&gt;Qureshi says they always fought the odds and remained in business through internet shutdowns during which the team, which stood at 25-30 as on 5 August, would call customers and coordinate deliveries on the phone.&lt;/p&gt;
&lt;p&gt;This dedication is what eventually resulted in his first round of investment in February 2018, from a local Kashmiri businessman. “I upgraded the app, included more restaurants, added delivery tracking features and was creating jobs."&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Since 5 August, however, not only have communication channels been hit, initially there was complete restriction on movement within the valley. “I had to leave Kashmir around six or seven days after the clampdown, since I live in an area where there was stone-pelting every day and the police was entering homes and picking up boys. My parents were scared and said it was better to go to Bengaluru and stay here," he says, now hoping he can set up a small restaurant in the city, using whatever he has managed to save.&lt;/p&gt;
&lt;p&gt;As young entrepreneurs leave, the JKEDI remains hopeful that the startup ecosystem will bounce back once normalcy returns. “I think as soon as the internet starts working again we will push the things here as well, with the policy we are trying to give some incentive to these people, so that we can get these startups back and they can inspire other people to start their own," says Irtif Lone, in-charge, Centre for Innovation Incubation and Business Modelling, JKEDI.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“It is difficult for people to choose to pursue a startup and these situations make it even tougher. We will be pushing all the startups that have made a mark and are now suffering due to the financial constraints. They will be given an incentive as soon as possible so that none of them are starved for finances."&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;But there are doubts about whether such promises can be fulfilled. In any case, it may already be too late. Shayan Nabi, 29, who ran a digital marketing company and had invested in other ventures of his own such as KashmirCalling (to coordinate private carpooling), has given up hope. As he waits for his employees to receive the emails he has sent asking them to look for alternative opportunities, he himself is facing professional uncertainty in Delhi. “I have been very vocal about providing internet freedom in Kashmir. It’s a basic human right. But it always falls on deaf ears." He adds: “I had ideas about making Kashmir digital. But I am sorry, not any more. Not after all the humiliation we have been through."&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The road to recovery from here is paved with crippling debt, unemployment and loss of morale. What was once seen as an act of resilience amidst conflict, has today crumbled due to a State diktat, paradoxically executed with promises of peace and prosperity.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;When Mir finally landed in Bengaluru on the morning of 5 August, she broke down when she finally heard the news. Today, with payments stuck with vendors and Mir’s inability to reach her artisans and wazas (Kashmiri cooks) in the valley, the Manzar website reads, “All verticals of Manzar Experience Curators... are currently unoperational due to the unprecedented lockdown in Kashmir". She fears that her venture, which set out to create conversations about Kashmir around the country, has lost all meaning and purpose. “I am not someone who set out with hate, I set out with love and passion and this idea of changing things," she says.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“Do you think with the kind of environment that this country has created for a Kashmiri today, I can go out and do what I do? Is it safe for someone like me to take a place somewhere in Bengaluru to open a place that serves authentic Kashmiri food? I am scared it could be burnt down the next day."&lt;/p&gt;
&lt;p&gt;The question she now asks herself transcends the uncertainty of business in the valley, and straddles a precariousness both political and personal: “Where do I go from here?"&lt;/p&gt;
&lt;p&gt; &lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/livemint-asmita-bakshi-october-18-2019-dystopia-vs-development'&gt;https://cis-india.org/internet-governance/news/livemint-asmita-bakshi-october-18-2019-dystopia-vs-development&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Asmita Bakshi</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Freedom of Speech and Expression</dc:subject>
    
    
        <dc:subject>Internet Governance</dc:subject>
    

   <dc:date>2019-10-20T06:31:00Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/livemint-shreya-nandi-prathma-sharma-october-15-2019-will-fastag-raise-privacy-concerns">
    <title>Will FASTag raise privacy concerns?</title>
    <link>https://cis-india.org/internet-governance/news/livemint-shreya-nandi-prathma-sharma-october-15-2019-will-fastag-raise-privacy-concerns</link>
    <description>
        &lt;b&gt;FASTag, an electronic device that enables direct, cashless toll payment, has been touted as the Aadhaar for vehicles as it would help the government track movement of automobiles. But the move can also stoke fresh concerns on privacy.&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;The article by Shreya Nandi and Prathma Sharma was &lt;a class="external-link" href="https://www.livemint.com/news/india/will-fastag-raise-privacy-concerns-11571125214325.html"&gt;published in Livemint&lt;/a&gt; on October 15, 2019. Pranesh Prakash was quoted.&lt;/p&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify; "&gt;The device can track movement of vehicles, toll booth cameras can catch traffic law violations, prevent crime, and help authorities curb tax evasion.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;While the movement of commercial vehicles will be tracked by revenue authorities by integrating with e-way bill system under &lt;a href="https://www.livemint.com/news/india/ihmcl-gstn-to-ink-pact-to-link-fastag-with-gst-e-way-bill-system-on-oct-14-11570973104434.html" target="_blank"&gt;Goods and Services Tax (GST)&lt;/a&gt; to curb revenue leakage, experts believe that tracking personal vehicle is a matter of concern.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;It is not that the government will only use the stored data or video under limited and well-defined circumstances such as for evidence in case of traffic accidents, according to Pranesh Prakash, fellow, Centre for Internet Society.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“As transport minister Gadkari said (on Monday), the government will also use the video or data for any for analysis. And that will happen in a non-consensual manner, and outside the purview of a data protection framework, and without paying heed to the Supreme Court's landmark judgment on privacy," Prakash said.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;On Monday, transport minister &lt;a href="https://www.livemint.com/news/india/gadkari-says-revenue-from-toll-collection-to-hit-rs-1-lakh-crore-in-5-years-11571057140954.html" target="_blank"&gt;Nitin Gadkari&lt;/a&gt; said cameras at the toll booth will take photos of passengers in a vehicle, which will be useful for the home ministry as there will be a record of the vehicle’s movement.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;FASTag, which comes into effect 1 December, uses radio frequency identification technology to enable direct toll payments from a moving vehicle. The toll fare is deducted from the bank account linked to FASTag. It will not only encourage cashless payments at toll plaza, but also decongest national highways, thereby ensuring seamless movement of vehicles, and reduce pollution and logistics cost.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Amid privacy concerns related to sharing Aadhaar details with banks, telecom companies or any other authority for fulfilling KYC norms, the Supreme Court had in September last year ruled that Aadhaar can only be used for welfare schemes and for delivering state subsidies. It had barred private companies from using Aadhaar data for authenticating customers.&lt;br /&gt;Another expert said since FASTag data includes information that is personally identifiable with the vehicle owner, it can be misused if shared with various entities.&lt;br /&gt;"With FASTag being linked with National Vehicle Database (Vahan database), it does raise privacy concerns, specially as Nitin Gadkari, the minister of road transport and highways, has admitted that the government has provided access to Vahan and Sarathi database to 32 government and 87 private entities for ₹65 crore till date," Salman Waris Managing Partner, TechLegis Advocates &amp;amp; Solicitors, said.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“With the Personal Data Protection Bill still in the making there are little regulatory measures to prevent or even punish FasTag data breaches," Waris said.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/livemint-shreya-nandi-prathma-sharma-october-15-2019-will-fastag-raise-privacy-concerns'&gt;https://cis-india.org/internet-governance/news/livemint-shreya-nandi-prathma-sharma-october-15-2019-will-fastag-raise-privacy-concerns&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Shreya Nandi and Prathma Sharma</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Privacy</dc:subject>
    

   <dc:date>2019-10-18T15:22:27Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/blog/privacy-international-ambika-tandon-october-17-2019-mother-and-child-tracking-system-understanding-data-trail-indian-healthcare">
    <title>The Mother and Child Tracking System - understanding data trail in the Indian healthcare systems</title>
    <link>https://cis-india.org/internet-governance/blog/privacy-international-ambika-tandon-october-17-2019-mother-and-child-tracking-system-understanding-data-trail-indian-healthcare</link>
    <description>
        &lt;b&gt;Reproductive health programmes in India have been digitising extensive data about pregnant women for over a decade, as part of multiple health information systems. These can be seen as precursors to current conceptions of big data systems within health informatics. In this article, published by Privacy International, Ambika Tandon presents some findings from a recently concluded case study of the MCTS as an example of public data-driven initiatives in reproductive health in India. &lt;/b&gt;
        
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4&gt;This article was first published by &lt;a href="https://privacyinternational.org/news-analysis/3262/mother-and-child-tracking-system-understanding-data-trail-indian-healthcare" target="_blank"&gt;Privacy International&lt;/a&gt;, on October 17, 2019&lt;/h4&gt;
&lt;h4&gt;Case study of MCTS: &lt;a href="https://cis-india.org/raw/big-data-reproductive-health-india-mcts" target="_blank"&gt;Read&lt;/a&gt;&lt;/h4&gt;
&lt;hr /&gt;
&lt;p&gt;On October 17th 2019, the UN Special Rapporteur (UNSR) on Extreme Poverty and Human Rights, Philip Alston, released his thematic report on digital technology, social protection and human rights. Understanding the impact of technology on the provision of social protection – and, by extension, its impact on people in vulnerable situations – has been part of the work the Centre for Internet and Society (CIS) and Privacy International (PI) have been doing.&lt;/p&gt;
&lt;p&gt;Earlier this year, &lt;a href="https://privacyinternational.org/advocacy/2996/privacy-internationals-submission-digital-technology-social-protection-and-human" target="_blank"&gt;PI responded&lt;/a&gt; to the UNSR's consultation on this topic. We highlighted what we perceived as some of the most pressing issues we had observed around the world when it comes to the use of technology for the delivery of social protection and its impact on the right to privacy and dignity of benefit claimants.&lt;/p&gt;
&lt;p&gt;Among them, automation and the increasing reliance on AI is a topic of particular concern - countries including Australia, India, the UK and the US have already started to adopt these technologies in digital welfare programmes. This adoption raises significant concerns about a quickly approaching future, in which computers decide whether or not we get access to the services that allow us to survive. There's an even more pressing problem. More than a few stories have emerged revealing the extent of the bias in many AI systems, biases that create serious issues for people in vulnerable situations, who are already exposed to discrimination, and made worse by increasing reliance on automation.&lt;/p&gt;
&lt;p&gt;Beyond the issue of AI, we think it is important to look at welfare and automation with a wider lens. In order for an AI to function, it needs to be trained on a dataset so that it can understand what it is looking for. That requires the collection of large quantities of data. That data would then be used to train an AI to recognise what fraudulent use of public benefits would look like. That means we need to think about every data point being collected as one that, in the long run, will likely be used for automation purposes.&lt;/p&gt;
&lt;p&gt;These systems incentivise the mass collection of people's data, across a huge range of government services, from welfare to health - where women and gender-diverse people are uniquely impacted. CIS have been looking specifically at reproductive health programmes in India, work which offers a unique insight into the ways in which mass data collection in systems like these can enable abuse.&lt;/p&gt;
&lt;p&gt;Reproductive health programmes in India have been digitising extensive data about pregnant women for over a decade, as part of multiple health information systems. These can be seen as precursors to current conceptions of big data systems within health informatics. India’s health programme instituted such an information system in 2009, the Mother and Child Tracking System (MCTS), which is aimed at collecting data on maternal and child health. The Centre for Internet and Society, India, &lt;a href="https://cis-india.org/raw/big-data-reproductive-health-india-mcts" target="_blank"&gt;undertook a case study of the MCTS&lt;/a&gt; as an example of public data-driven initiatives in reproductive health. The case study was supported by the &lt;a href="http://bd4d.net/" target="_blank"&gt;Big Data for Development network&lt;/a&gt;, supported by the International Development Research Centre, Canada. The objective of the case study was to focus on the data flows and architecture of the system, and identify areas of concern as newer systems of health informatics are introduced on top of existing ones. The case study is also relevant from the perspective of the Sustainable Development Goals, which aim to rectify the tendency of global development initiatives to ignore national health information systems and create purpose-specific monitoring systems.&lt;/p&gt;
&lt;p&gt;Since its launch in 2011, 120 million (12 crore) pregnant women and 111 million (11 crore) children have been registered on the MCTS as of 2018. The central database collects data on each of the woman's visits from conception to 42 days postpartum, including details of direct benefit transfers under maternity benefit schemes. While data-driven monitoring is a critical exercise to improve health care provision, publicly available documents on the MCTS reflect the complete absence of robust data protection measures. The risks associated with data leaks are amplified by the stigma associated with abortion, especially for unmarried women or survivors of rape.&lt;/p&gt;
&lt;p&gt;The historical landscape of reproductive healthcare provision and family planning in India has been dominated by a target-based approach. Geared at population control, this approach sought to maximise family planning targets without protecting decisional autonomy and bodily privacy for women. At the policy level, this approach was abandoned in favour of a rights-based approach to family planning in 1994. However, targets continue to be set for women’s sterilisation on the ground. Surveillance practices in reproductive healthcare are then used to monitor under-performing regions and meet sterilisation targets for women; sterilisation continues to be the primary mode of contraception offered by public family planning initiatives.&lt;/p&gt;
&lt;p&gt;More recently, this database -&amp;nbsp;among others collecting data about reproductive health - is adding biometric information through linkage with the Aadhaar infrastructure. This adds to the sensitive information being collected and stored without adherence to any publicly available data protection practices. Biometric linkage is intended to fulfil multiple functions - primarily the authentication of welfare beneficiaries of the national maternal benefits scheme. Making Aadhaar details mandatory could directly contribute to the denial of service to legitimate patients and beneficiaries - as has already been seen in some cases.&lt;/p&gt;
&lt;p&gt;The added layer of biometric surveillance also has the potential to enable other forms of privacy abuse against pregnant women. In 2016, the union minister for Women and Child Development under the previous government suggested the use of strict biometric-based monitoring to discourage gender-biased sex selection. Activists critiqued the policy for its paternalistic approach to reducing the rampant practice of gender-biased sex selection rather than addressing the root causes of gender inequality in the country.&lt;/p&gt;
&lt;p&gt;There is an urgent need to rethink the objectives and practices of data collection in public reproductive health provision in India. Rather than a continued focus on meeting high-level targets, monitoring systems should enable local usage and protect the decisional autonomy of patients. In addition, the data protection legislation in India - expected to be tabled in the next session of Parliament - should place free and informed consent and informational privacy at the centre of data-driven practices in reproductive health provision.&lt;/p&gt;
&lt;p&gt;This is why the systematic mass collection of data in health services is all the more worrying. When the collection of our data becomes a condition for accessing health services, it not only threatens our right to health, which should not be conditional on data sharing, but also raises questions about how this data will be used in the age of automation.&lt;/p&gt;
&lt;p&gt;This is why understanding what data is collected and how it is collected in the context of health and social protection programmes is so important.&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;

        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/privacy-international-ambika-tandon-october-17-2019-mother-and-child-tracking-system-understanding-data-trail-indian-healthcare'&gt;https://cis-india.org/internet-governance/blog/privacy-international-ambika-tandon-october-17-2019-mother-and-child-tracking-system-understanding-data-trail-indian-healthcare&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>ambika</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Big Data</dc:subject>
    
    
        <dc:subject>Data Systems</dc:subject>
    
    
        <dc:subject>Privacy</dc:subject>
    
    
        <dc:subject>Researchers at Work</dc:subject>
    
    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Research</dc:subject>
    
    
        <dc:subject>BD4D</dc:subject>
    
    
        <dc:subject>Healthcare</dc:subject>
    
    
        <dc:subject>Big Data for Development</dc:subject>
    

   <dc:date>2019-12-30T17:18:05Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/participation-in-iso-iec-jtc-1-sc-27-meetings">
    <title>Participation in ISO/IEC JTC 1 SC 27 meetings</title>
    <link>https://cis-india.org/internet-governance/news/participation-in-iso-iec-jtc-1-sc-27-meetings</link>
    <description>
        &lt;b&gt;From October 14 to 18, 2019, Gurshabad Grover participated in the meetings of ISO/IEC JTC 1 SC 27, the committee that develops international standards for IT security techniques, held in Paris.&lt;/b&gt;
        &lt;p&gt;Gurshabad focused on the meetings of working group 5, which deals with identity management and privacy technologies. Some highlights of the participation:&lt;/p&gt;
&lt;ul&gt;
&lt;li style="text-align: justify; "&gt;I represented the Indian delegation's contributions in the comment resolution meeting on WD TS 27570: Privacy guidelines for smart cities.&lt;/li&gt;
&lt;li style="text-align: justify; "&gt;Since October 2018, I have been a co-rapporteur on the working group's study period on the impact of machine learning on privacy. At this meeting, we presented our interim report. We are extending the study period by six months to collaborate further with SC 42 (which deals with artificial intelligence standards) to document privacy aspects of the applications and use cases they have developed.&lt;/li&gt;
&lt;li style="text-align: justify; "&gt;I will now be a co-rapporteur on the study period on 'Privacy for fintech services', which was initiated at this meeting. We will survey privacy standards and data protection regulations to assess the need for new work items (standards/guidelines documents) in this space.&lt;/li&gt;
&lt;/ul&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/participation-in-iso-iec-jtc-1-sc-27-meetings'&gt;https://cis-india.org/internet-governance/news/participation-in-iso-iec-jtc-1-sc-27-meetings&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Privacy</dc:subject>
    

   <dc:date>2019-11-02T06:31:46Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/reuters-annie-banerji-october-17-2019-indias-hiv-positive-trans-people-find-new-strength-in-technology">
    <title>India's HIV-positive trans people find 'new strength' in technology</title>
    <link>https://cis-india.org/internet-governance/news/reuters-annie-banerji-october-17-2019-indias-hiv-positive-trans-people-find-new-strength-in-technology</link>
    <description>
        &lt;b&gt;Shoved, cursed and ridiculed, Nisha's hospital visits were always stressful as a transgender woman and got worse after she was diagnosed as HIV-positive.&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;The article by Annie Banerji was &lt;a class="external-link" href="https://www.thejakartapost.com/life/2019/10/16/indias-hiv-positive-trans-people-find-new-strength-in-technology.html"&gt;published in Reuters&lt;/a&gt; on October 17, 2019 and mirrored in the Jakarta Post as well. Ambika Tandon was quoted. It was mirrored in &lt;a class="external-link" href="https://health.economictimes.indiatimes.com/news/health-it/indias-hiv-positive-trans-people-find-new-strength-in-technology/71599241"&gt;ET Healthworld.com&lt;/a&gt; as well.&lt;/p&gt;
&lt;hr style="text-align: justify; " /&gt;
&lt;p style="text-align: justify; "&gt;But a new app introduced as part of a drive to end an HIV epidemic in India by 2030 is providing her and the transgender community better access to doctors, lifesaving drugs - and hope - although it has raised concerns about digital privacy.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;India has the world's third largest population living with HIV - 2.1 million people - according to UNAIDS, with recognition that help is needed in the transgender community where the prevalence is 3.1% compared to 0.26% among all adults.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Nisha tested HIV positive last year after earning a living as a sex worker in New Delhi. On the job, she said, condoms would often break or she would not use one for more money.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;"That was a bad idea. I ended up with HIV. I felt suicidal after I found out," Nisha, 29, a trans woman who goes by one name, told the Thomson Reuters Foundation.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;"It didn't help that going to the hospital was torturous. People made faces, passed lewd comments ... a doctor even kicked me out."&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Despite the Supreme Court recognizing India's 2 million transgender people as a third gender with equal rights in 2014, they are often kicked out by their families and denied jobs, education and healthcare, leading them to begging or sex work.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Trans women like Nisha say they face "double discrimination" and the risk of being shunned and abused - first because of their gender identity and then because of their HIV status.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;But a counselling program along with a new app is helping health workers track down HIV-positive transgender people, monitor their treatment and link them to doctors and antiretroviral therapy (ART) to suppress the AIDS virus.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;"I have found new strength. I don't feel depressed or nervous anymore," said Nisha, who now begs at traffic lights.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;"The app helps keep me physically healthy and she ensures I'm mentally and emotionally (healthy)," she said, pointing to her outreach worker Samyra, an HIV-positive trans woman.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The eMpower app - developed by IBM in partnership with India HIV/AIDS Alliance and the Global Fund to Fight AIDS, Tuberculosis and Malaria - monitored more than 1.2 million people between January 2018 and March 2019.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;strong&gt;'Half the battle won'&lt;/strong&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;With mobile tablets in hand, HIV-positive transgender outreach workers keep a tab on others in their community living with HIV and counsel them and accompany them to see doctors.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;"I tell them 'I'm like you. I'm HIV-positive and I'm taking medicines too. You're not alone'," said Samyra, who works with Vihaan, a national initiative to expand counselling, outreach and follow-up programs to people living with HIV.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;"That makes a huge difference because it's coming from one of your own. Half of the HIV battle is won when you have someone to hold your hand along the way."&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Health experts said transgender focused initiatives like this and the launch in March of India's first HIV treatment clinic in Mumbai city run for and by LGBT+ people were pushing the country towards its target to end the epidemic by 2030.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;But to achieve this target they said it was critical for patients to stick with ART. Sometimes stigma and side effects can cause them to drop out of the treatment.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;That is why health workers follow up with clients every few months and record information on the eMpower app, including their weight, viral load and CD4 - white blood cells that fight HIV - and advise them on everything from their diet to safe sex.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;They also note whether a client has faced discrimination, and arrange for partners and family members to get tested.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Sonal Mehta, head of India HIV/AIDS Alliance, said the app has helped boost Vihaan's outreach numbers as well as the confidence of trans clients and workers, who often come from poor, semi-literate backgrounds.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;"The trans clients definitely feel much more secure ... but the outreach workers themselves also feel very empowered. They are professional officers working on the field, talking to doctors, government officers, engaging with various organisations," she said.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;strong&gt;Double-edged sword&lt;/strong&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;While such technological advances are seen as key in the HIV/AIDS fight, health and software experts warn they can come at the cost of privacy.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The eMpower app creates a profile for each client with personal information including name, biometric ID number, occupation and monthly income, and a map pinning their location.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Without proper safeguards, such an app runs the risk of data breach and sharing information with third-parties, which can further ostracize an already marginalized community, said Ambika Tandon, a cyber security expert.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;"The potential to monetize is definitely a risk factor," said Tandon, policy officer in gender-based research at the Banaglore-based Centre for Internet and Society.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;"Another is informational privacy ... (clients) may not necessarily know where their information is being stored, who will have access to it ... There could be multiple points at which their data could be vulnerable."&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Saravanan RM, a senior technical officer at India HIV/AIDS Alliance, said the eMpower app a "fool-proof system".&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;He said all sensitive data was stored on the organisation's server, which could only be accessed by specific workers through a password-protected system.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;None of the information can be seen by any partners - not IBM, state or federal governments. It is further beefed up by a mobile device management (MDM), he said.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;"For example, if any device is lost or has gone into someone else's hands, what we can do through MDM is clean out the entire tablet and the data will not be acquired," he said.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Dr. V Sam Prasad, India program manager of the AIDS Healthcare Foundation, said the app should not be dismissed because there was a privacy risk as it came with major benefits.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Several HIV-positive transgender people like Swati, a trans woman who contracted HIV after injecting drugs, felt the same.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;"Even if it (personal data) is leaked, what's the worst that could happen? I've faced unimaginable things. Nothing scares me, at least not such things," said Swati, 25, after a follow-up meeting with her outreach worker at her one-room home.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;"It is (eMpower) saving me. It is not an enemy."&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/reuters-annie-banerji-october-17-2019-indias-hiv-positive-trans-people-find-new-strength-in-technology'&gt;https://cis-india.org/internet-governance/news/reuters-annie-banerji-october-17-2019-indias-hiv-positive-trans-people-find-new-strength-in-technology&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Annie Banerji</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    

   <dc:date>2019-10-18T15:28:18Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/panelist-at-launch-of-google-unescap-ai-report">
    <title>Panelist at launch of Google-UNESCAP AI Report</title>
    <link>https://cis-india.org/internet-governance/news/panelist-at-launch-of-google-unescap-ai-report</link>
    <description>
        &lt;b&gt;Arindrajit Basu was a speaker at the panel launching the Google-UNESCAP AI Report at the GovInsider Forum held at the United Nations Convention Centre in Bangkok on October 16, 2019. &lt;/b&gt;
        &lt;p&gt;Click to &lt;a class="external-link" href="http://cis-india.org/internet-governance/files/launch-the-ai-report"&gt;view the agenda&lt;/a&gt;.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/panelist-at-launch-of-google-unescap-ai-report'&gt;https://cis-india.org/internet-governance/news/panelist-at-launch-of-google-unescap-ai-report&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2019-11-02T06:48:25Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/blog/artificial-intelligence-in-the-delivery-of-public-services-elonnai-hickok-pranav-bidare-arindrajit-basu-siddharth-october-16-2019-farming-the-future">
    <title>Farming the Future: Deployment of Artificial Intelligence in the agricultural sector in India</title>
    <link>https://cis-india.org/internet-governance/blog/artificial-intelligence-in-the-delivery-of-public-services-elonnai-hickok-pranav-bidare-arindrajit-basu-siddharth-october-16-2019-farming-the-future</link>
    <description>
        &lt;b&gt;This case study was published as a chapter in the joint UNESCAP-Google publication titled Artificial Intelligence in Public Service Delivery. The chapter in its final form would not have been possible without the efforts and very useful interventions of our colleagues at Digital Asia Hub, Google, and UNESCAP.&lt;/b&gt;
        &lt;p&gt;&lt;img src="https://cis-india.org/home-images/Findings.jpg" alt="Findings" class="image-inline" title="Findings" /&gt;&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt; &lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Although agriculture is a critical sector for India’s economic development, it continues to face many challenges including a lack of &lt;span&gt;modernization of agricultural methods, fragmented landholdings, erratic rainfalls, overuse of groundwater and a lack of access to &lt;/span&gt;&lt;span&gt;information on weather, markets and pricing. As state governments create policies and frameworks to mitigate these challenges, the &lt;/span&gt;&lt;span&gt;role of technology has often come up as a potential driver of positive change.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;Farmers in the southern Indian states of Karnataka and Andhra Pradesh are facing significant challenges. For hundreds of years,these farmers have relied on traditional agricultural methods to make sowing and harvesting decisions, but now volatile weather patterns and shifting monsoon seasons are making such ancient wisdom obsolete. Farmers are unable to predict weather patterns or crop yields accurately, making it difficult for them to make informed financial and operational decisions associated with planting and harvesting. Erratic weather patterns particularly affect those farmers who reside in remote areas, cut off from meaningful accessto infrastructure and information. In addition to a lack of vital weather information, farmers may lack information about market conditions and may then sell their crops to intermediaries at below-market prices.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;Against this backdrop, the state governments and local partners in southern India teamed up with Microsoft to develop predictive AI services to help smallholder farmers to improve their crop yields and give them greater price control. Since 2016 three applications have been developed and applied for use in these communities, two of which are discussed in this case study: the AI-sowing app and the price forecasting model.&lt;/span&gt;&lt;/p&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify; "&gt;&lt;a class="external-link" href="https://www.unescap.org/sites/default/files/publications/AI%20Report.pdf"&gt;Click to read&lt;/a&gt; the report here.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/artificial-intelligence-in-the-delivery-of-public-services-elonnai-hickok-pranav-bidare-arindrajit-basu-siddharth-october-16-2019-farming-the-future'&gt;https://cis-india.org/internet-governance/blog/artificial-intelligence-in-the-delivery-of-public-services-elonnai-hickok-pranav-bidare-arindrajit-basu-siddharth-october-16-2019-farming-the-future&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Elonnai Hickok, Arindrajit Basu, Siddharth Sonkar and Pranav M B</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2019-10-16T13:41:02Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/ai-opera-ai-as-a-total-work-of-art">
    <title>AI Opera- AI as a total work of art</title>
    <link>https://cis-india.org/internet-governance/news/ai-opera-ai-as-a-total-work-of-art</link>
    <description>
        &lt;b&gt;On October 11, 2019, Shweta Mohandas and Mira were invited as panelists for the 'AI Opera- AI as a total work of art' event organized by Goethe as part of the India Week Hamburg 2019, held in Bangalore. CIS was an event partner.&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;The panel presented different perspectives and possibilities of Artificial Intelligence (AI). The discussion was facilitated by German artist, performer and filmmaker Christoph Faulhaber. For more info, &lt;a class="external-link" href="https://www.goethe.de/ins/in/en/sta/ban/ver.cfm?fuseaction=events.detail&amp;amp;event_id=21670394"&gt;click here&lt;/a&gt;.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/ai-opera-ai-as-a-total-work-of-art'&gt;https://cis-india.org/internet-governance/news/ai-opera-ai-as-a-total-work-of-art&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2019-10-14T14:30:56Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/blog/fountain-ink-october-12-2019-arindrajit-basu-we-need-a-better-ai-vision">
    <title>We need a better AI vision</title>
    <link>https://cis-india.org/internet-governance/blog/fountain-ink-october-12-2019-arindrajit-basu-we-need-a-better-ai-vision</link>
    <description>
        &lt;b&gt;Artificial intelligence conjures up a wondrous world of autonomous processes but dystopia is inevitable unless rights and privacy are protected.&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;The blog post by Arindrajit Basu was published by&lt;a class="external-link" href="https://fountainink.in/essay/we-need-a-better-ai-vision-"&gt; Fountainink&lt;/a&gt; on October 12, 2019.&lt;/p&gt;
&lt;hr style="text-align: justify; " /&gt;
&lt;p style="text-align: justify; "&gt;he dawn of Artificial Intelligence (AI) has policy-makers across the globe excited. In India, it is seen as a tool to overleap structural hurdles and better understand a range of organisational and management processes while improving the implementation of several government tasks. Notwithstanding the apparent enthusiasm in the government and private sectors, an adequate technological, infrastructural, and financial capacity to develop these models at scale is still in the works.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;A number of policy documents with direct or indirect references to India’s AI future—to be powered by vast troves of data—have been released in the past year and a half. These include the National Strategy for Artificial Intelligence (which I will refer to as National Strategy) authored by NITI Aayog, the AI Taskforce Report, Chapter 4 of the Economic Survey, the Draft e-Commerce Bill and the Srikrishna Committee Report.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;While they extol the virtues of data-driven analytics, references to the preservation or augmentation of India’s constitutional ethos through AI has been limited though it is crucial for safeguarding the rights and liberties of citizens while paving the way for the alleviation of societal oppression.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;In this essay, I outline the variety of AI use cases that are in the works. I then highlight India’s AI vision by culling the relevant aspects of policy instruments that impact the AI ecosystem and identify lacunae that can be rectified. Finally, I attempt to “constitutionalise AI policy” by grounding it in a framework of constitutional rights that guarantee protection to the most vulnerable sections of society.&lt;/p&gt;
&lt;blockquote class="synopsis" style="text-align: justify; "&gt;In the manufacturing industry, AI adoption is not uniform across all sectors. But there has been a notable transformation in electronics, heavy electricals and automobiles.&lt;/blockquote&gt;
&lt;p style="text-align: justify; "&gt;It is crucial to note that these cases, still emerging in India, have been implemented at scale in other countries such as the United Kingdom, United States and China. Projects were rolled out to the detriment of ethical and legal considerations. Hindsight should make the Indian policy ecosystem much wiser. By closely studying the research produced in these diverse contexts, Indian policy-makers should try to find ways around the ethical and legal challenges that cropped up elsewhere and devise policy solutions that mitigate the concerns raised.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;***&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;B&lt;span&gt;efore anything else we need to define AI—an endeavour fraught with multiple contestations. My colleagues and I at the Centre for Internet &amp;amp; Society ducked this hurdle when conducting our research by adopting a function-based approach. An AI system (as opposed to one that automates routine, cognitive or non-cognitive tasks) is a dynamic learning system that allows for the delegation of some level of human decision-making to the system. This definition allows us to capture some of the unique challenges and prospects that stem from the use of AI.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The research I contributed to at CIS identified key trends in the use of AI across India. In healthcare, it is used for descriptive and predictive purposes.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;For example, the Manipal Group of Hospitals tied up with IBM’s Watson for Oncology to aid doctors in the diagnosis and treatment of seven types of cancer. It is also being used for analytical or diagnostic services. Niramai Health Analytix uses AI to detect early stage breast cancer and Adveniot Tecnosys detects tuberculosis through chest X-rays and acute infections using ultrasound images. In the manufacturing industry, AI adoption is not uniform across all sectors. But there has been a notable transformation in the electronics, heavy electricals and automobiles sector gradually adopting and integrating AI solutions into their products and processes.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;It is also used in the burgeoning online lending segment in order to source credit score data. As many Indians have no credit scores, AI is used to aggregate data and generate scores for more than 80 per cent of the population who have no credit scores. This includes Credit Vidya, a Hyderabad-based data underwriting start-up that provides a credit score to first time loan-seekers and feeds this information to big players such as ICICI Bank and HDFC Bank, among others. It is also used by players such as Mastercard for fraud detection and risk management. In the finance world, companies such as Trade Rays are being used to provide user-friendly algorithmic trading services.&lt;/p&gt;
&lt;blockquote class="synopsis" style="text-align: justify; "&gt;AI is also being increasingly used in the education sector for providing services to students such as decision-making assistance and also for student-progress monitoring.&lt;/blockquote&gt;
&lt;p style="text-align: justify; "&gt;The next big development is in law enforcement. Predictive policing is making great strides in various states, including Delhi, Punjab, Uttar Pradesh and Maharashtra. A brainchild of the Los Angeles Police Department, predictive policing is the use of analytical techniques such as Machine Learning to identify probable targets for intervention to prevent crime or to solve past crime through statistical predictions.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Conventional approaches to predictive policing start with the mapping of locations where crimes are concentrated (hot spots) by using algorithms to analyse aggregated data sets. Police in Uttar Pradesh and Delhi have partnered with the Indian Space Research Organisation (ISRO) in a Memorandum of Understanding to allow ISRO’s Advanced Data Processing Research Institute to map, visualise and compile reports about crime-related incidents.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;There are aggressive developments also on the facial recognition front. Punjab Police, in association with Gurugram-based start-up Staqu has started implementing the Punjab Artificial Intelligence System (PAIS) which uses digitised criminal records and automated facial recognition to retrieve information on the suspected criminal. At the national level, on June 28, the National Crime Records Bureau (NCRB) called for tenders to implement a centralised Automated Facial Recognition System (AFRS), defining the scope of work in broad terms as the “supply, installation and commissioning of hardware and software at NCRB.”&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;AI is also being increasingly used in the education sector for providing services to students such as decision-making assistance and also for student-progress monitoring. The Andhra Pradesh government had started collecting information from a range of databases and processes the information through Microsoft’s Machine Learning Platform to monitor children and devote student focussed attention on identifying and curbing school drop-outs.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;In Andhra Pradesh, Microsoft collaborated with the International Crop Institute for Semi-Arid Tropics (ICRISAT) to develop an AI Sowing App powered by Microsoft’s Cortana Intelligence Suite. It aggregated data using Machine Learning and sent advisories to farmers regarding optimal dates to sow. This was done via text messages on feature phones after ground research revealed that not many farmers owned or were able to use smart phones. The NITI Aayog AI Strategy specifically cited this use case and reported that this resulted in a 10-30 per cent increase in crop yield. The government of Karnataka has entered into a similar arrangement with Microsoft.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Finally, in the defence sector, our research found enthusiasm for AI in intelligence, surveillance and reconnaissance (ISR) functions, cyber defence, robot soldiers, risk terrain analysis and moving towards autonomous weapons systems. These projects are being developed by the Defence Research and Development Organisation but the level of trust and support in AI-driven processes reposed by the wings of the armed forces is yet to be publicly clarified. India also had the privilege of leading the global debate on Lethal Autonomous Weapons Systems (LAWS) with Amandeep Singh Gill chairing the United Nations Group of Governmental Experts (UN-GGE) on the issue. However, ‘lethal’ autonomous weapons systems at this stage appear to be a speck in the distant horizon.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;***&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;A&lt;span&gt;long with the range of use cases described above, a patchwork of policy imperatives is emerging to support this ecosystem. The umbrella document is the National Strategy for Artificial Intelligence published by the NITI Aayog in June 2018. Despite certain lacunae in its scope, the existence of a cohesive and robust document that lends a semblance of certainty and predictability to a rapidly emerging sphere is in itself a boon. The document focuses on how India can leverage AI for both economic growth and social inclusion. The contents of the document can be divided into a few themes, many of which have also found their way into multiple other instruments.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;NITI Aayog provides over 30 policy recommendations on investment in scientific research, reskilling, training and enabling the speedy adoption of AI across value chains. The flagship research initiative is a two-tiered endeavour to boost AI research in India. First, new centres of research excellence (COREs) will develop fundamental research. The COREs will act as feeders for international centres for transformational AI which will focus on creating AI-based applications across sectors.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;img src="https://cis-india.org/home-images/AIinCountries.jpg/@@images/16b4af34-cb6d-423c-be35-e45a60d501cf.jpeg" alt="AI in Countries" class="image-inline" title="AI in Countries" /&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;This is an impressive theoretical objective but questions surrounding implementation and structures of operation remain to be answered. China has not only conceptualised an ecosystem but through the Three Year Action Plan to Promote the Development of New Generation Artificial Intelligence Industry, it has also taken a whole-of-government approach to propelling the private sector to an e-leadership position. It has partnered with national tech companies and set clear goals for funding, such as the $2.1 billion technology park for AI research in Beijing.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The contents of the NITI document can be divided into a few themes, many of which have also found their way into multiple other instruments. First, it proposes an “AI+X” approach that captures the long-term vision for AI in India. Instead of replacing the processes in their entirety, AI is understood as an enabler of efficiency in processes that already exist. NITI Aayog therefore looks at the process of deploying AI-driven technologies as taking an existing process (X) and adding AI to them (AI+X). This is a crucial recommendation all AI projects should heed. Instead of waving AI as an all-encompassing magic wand across sectors, it is necessary to identify specific gaps AI can seek to remedy and then devise the process underpinning this implementation.&lt;/p&gt;
&lt;blockquote class="synopsis" style="text-align: justify; "&gt;A cacophony of policy instruments by multiple government departments seeks to reconceptualise data to construct a theoretical framework that allows for its exploitation for AI-driven analytics.&lt;/blockquote&gt;
&lt;p style="text-align: justify; "&gt;The AI-driven intervention to develop sowing apps for farmers in Karnataka and Andhra Pradesh are examples of effective implementation of this approach. Instead of other knee-jerk reactions to agrarian woes such as a hasty raising of Minimum Support Price, effective research was done in this use-case to identify a lack of predictability in weather patterns as a key factor in productive crop yields. They realised that aggregation of data through AI could provide farmers with better information on weather patterns. As internet penetration was relatively low in rural Karnataka, text messages to feature phones that had a far wider presence was indispensable to the end game.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;***&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;T&lt;span&gt;his is in contrast to the ill-conceived path adopted by the Union ministry of electronics and information technology in guidelines for regulating social media platforms that host content (“intermediaries”). Rule 3(9) of the Draft of the Information Technology [Intermediary Guidelines (Amendment) Rules] 2018 mandates intermediaries to use “automated tools or appropriate mechanisms, with appropriate controls, for proactively identifying and removing or disabling public access to unlawful information or content”.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Proposed in light of the fake news menace and the unbridled spread of “extremist” content online, the use of the phrase “automated tools or appropriate mechanisms” is reflective of an attitude that fails to consider ground realities that confront companies and users alike. They ignore, for instance, the cost of automated tools: whether automated content moderation techniques developed in the West can be applied to Indic languages or grievance redress mechanisms users can avail of if their online speech is unduly restricted. This is thus a clear case of the “AI” mantra being drawn out of a hat without studying the “X” it is supposed to remedy.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The second focus of the National Strategy that has since morphed into a technology policy mainstay across instruments is on data governance, access and utilisation. The document says the major hurdle to the large scale adoption of AI in India is the difficulty in accessing structured data. It recommends developing big annotated data sets to “democratise data and multi-stakeholder marketplaces across the AI value chain”. It argues that at present only one per cent of data can be analysed as it exists in various unconnected silos. Through the creation of a formal market for data, aggregators such as diagnostic centres in the healthcare sector would curate datasets and place them in the market, with appropriate permissions and safeguards. AI firms could use available datasets rather than wasting effort sourcing and curating the sets themselves.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;A cacophony of policy instruments by multiple government departments seeks to reconceptualise data to construct a theoretical framework that allows for its exploitation for AI-driven analytics.The first is “community data” and appears both in the Srikrishna Report that accompanied the draft Data Protection Bill in 2018 and the draft e-commerce policy.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;But there appears to be some conflict between its usage in the two. Srikrishna endorses a collective protection of privacy by protecting an identifiable community that has contributed to community data. This requires the fulfilment of three key conditions: &lt;i&gt;first,&lt;/i&gt; the data belong to an identifiable community; &lt;i&gt;second, &lt;/i&gt;individuals in the community consent to being a part of it, and &lt;i&gt;third&lt;/i&gt;, the community as a whole consents to its data being treated as community data. On the other hand, the Department of Promotion of Industry and Internal Trade’s (DPIIT) draft e-commerce policy looks at community data as “societal commons” or a “national resource” that gives the community the right to access it but government has ultimate and overriding control of the data. This configuration of community data brings into question the consent framework in the Srikrishna Bill.&lt;/p&gt;
&lt;blockquote class="synopsis" style="text-align: justify; "&gt;The government’s attempt to harness data as a national resource for the development of AI-based solutions may be well-intentioned but is fraught with core problems in implementation.&lt;/blockquote&gt;
&lt;p style="text-align: justify; "&gt;The matter is further confused by treating “data as a public good”. This is projected in Chapter 4 of the 2019 Economic Survey published by the Ministry of Finance. It explicitly states that any configuration needs to be deferential to privacy norms and the upcoming privacy law. The “personal data” of an individual in the custody of a government is also a “public good” once the datasets are anonymised. At the same time, it pushes for the creation of a government database that links several individual databases, which leads to the “triangulation” problem, where matching different datasets together allows for individuals to be identified despite their anonymisation in seemingly disparate databases.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“Building an AI ecosystem” was also one of the ostensible reasons for data localisation—the government’s gambit to mandate that foreign companies store the data of Indian citizens within national borders. In addition to a few other policy instruments with similar mandates, Section 40 of the Draft Personal Data Protection Bill mandates that all “critical data” (this is to be notified by the government) be stored exclusively in India. All other data should have a live, serving copy stored in India even if transfer abroad is allowed. This was an attempt to ensure foreign data processors are not the sole beneficiaries of AI-driven insights.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The government’s attempt to harness data as a national resource for the development of AI-based solutions may be well intentioned but is fraught with core problems in implementation. First, the notion of data as a national resource or as a public good walks a tightrope with constitutionally guaranteed protections around privacy, which will be codified in the upcoming Personal Data Protection Bill. My concerns are not quite so grave in the case of genuine “public data” like traffic signal data or pollution data. However, the Economic Survey manages to crudely amalgamate personal data into the mix.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;It also states that personal data in the custody of a government is a public good once the datasets are anonymised. This includes transactions data in the User Payments Interface (UPI), administrative data including birth and death records, and institutional data including data in public hospitals or schools on pupils or patients. At the same time, it pushes for a government database that will lead to the triangulation problem outlined above. The chapter also suggests that said data may be sold to private firms (unclear if this includes foreign or domestic firms). This not only contradicts the notion of public good but is also a serious threat to the confidentiality and security of personal data.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;***&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;T&lt;span&gt;herefore, along with the concerted endeavour to create data marketplaces, it is crucial for policy-makers to differentiate between public data and personal data individuals may consent to be made public. The parameters for clearly defining free and informed consent, as codified in the Draft Personal Data Protection Bill need to be strictly followed as there is a risk of de-anonymisation of data once it finds its way into the marketplace. Second, it is crucial for policy-makers to define clearly a community and parameters for what constitutes individual consent to be part of a community. Finally, along with technical work on setting up a national data marketplace, there must be protracted efforts to guarantee greater security and standards of anonymisation.&lt;/span&gt;&lt;/p&gt;
&lt;blockquote class="synopsis" style="text-align: justify; "&gt;The National Strategy  mentions that India should position itself as a “garage” for AI in emerging economies. This could mean Indian citizens are used as guinea pigs for AI-driven solutions at the cost of their rights.&lt;/blockquote&gt;
&lt;p style="text-align: justify; "&gt;Assuming that a constitutionally valid paradigm may be created, the excessive focus on data access by tech players dodges the question of the capabilities of analytic firms to process this data and derive meaningful insights from the information. Scholars on China, arguably the poster-child of data-driven economic growth, have sent mixed messages. Ding argues that despite having half the technical capabilities of the US, easy access to data gives China a competitive edge in global AI competition. On the contrary, Andrew Ng has argued that operationalising a sufficient number of relevant datasets still remains a challenge. Ng’s views are backed up by insiders at Chinese tech giant Tencent who say the company still finds it difficult to integrate data streams due to technical hurdles. NITI Aayog’s idea of a multi-stream data marketplace may theoretically be a solution to these potential hurdles but requires sustained funding and research innovation to be converted into reality.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The National Strategy suggests that government should create a multi-disciplinary committee to set up this marketplace and explore levers for its implementation. This is certainly the need of the hour. It also rightly highlights the importance of research partnerships between academia and the private sector, and the need to support start-ups. There is therefore an urgent need for innovative allied policy instruments that support the burgeoning start-up sector. Proposals such as data localisation may hurt smaller players as they will have to bear the increased fixed costs of setting up or renting data centres.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The National Strategy also incongruously mentions that India should position itself as a “garage” for the use of AI in emerging economies. This could mean Indian citizens are used as guinea pigs for AI-driven solutions at the cost of their fundamental rights. It could also imply that India should occupy a leadership position and work with other emerging economies to frame the global rights based discourse to seek equitable solutions for the application of AI that works to improve the plight of the most vulnerable in society.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;***&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;O&lt;span&gt;ur constitutional ethos places us in a unique position to develop a framework that enables the actualisation of this equitable vision—a goal the policy instruments put out thus far appear to have missed. While the National Strategy includes a section on privacy, security and ethical implications of AI, it stops short of rooting it in fundamental rights and constitutional principles. As a centralised policy instrument, the National Strategy deserves praise for identifying key levers in the future of India’s AI ecosystem and, with the exception of the concerns I outlined above, it is at par with the policy-making thought process in any other nation.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;When we start the process of using constitutional principles for AI governance, we must remember that as per Article 12, an individual can file a writ against the state for violation of a fundamental right if the action is taken under the aegis of a “public function”. To combat discrimination by private actors, the state can enact legislation compelling private actors to comply with constitutional mandates. In July, Rajeev Chandrashekhar, a Rajya Sabha MP, suggested a law to combat algorithmic discrimination along the lines of the Algorithmic Accountability Bill proposed in the US Senate. There are three core constitutional questions along the lines of the “golden triangle” of the Indian Constitution any such legislation will need to answer—those of accountability and transparency, algorithmic discrimination and the guarantee of freedom of expression and individual privacy.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Algorithms are developed by human beings who have their own cognitive biases. This means ostensibly neutral algorithms can have an unintentional disparate impact on certain, often traditionally disenfranchised groups.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;In the &lt;i&gt;MIT Technology Review&lt;/i&gt;, Karen Hao explains three stages at which bias might creep in. The first stage is the framing of the problem itself. As soon as computer scientists create a deep-learning model, they decide what they want the model to finally achieve. However, frequently desired outcomes such as “profitability”, “creditworthiness” or “recruitability” are subjective and imprecise concepts subject to human cognitive bias. This makes it difficult to devise screening algorithms that fairly portray society and the complex medley of identities, attributes and structures of power that define it.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The second stage Hao mentions is the data collection phase. Training data could lead to bias if it is unrepresentative of reality or represents entrenched prejudice or structural inequality. For example, most Natural Language Processing systems used for Parts of Speech (POS) tagging in the US are trained on the readily available data sets from the &lt;i&gt;Wall Street Journal&lt;/i&gt;. Accuracy would naturally decrease when the algorithm is applied to individuals—largely ethnic minorities—who do not mimic the speech of the &lt;i&gt;Journal&lt;/i&gt;.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;According to Hao, the final stage for algorithmic bias is data preparation, which involves selecting parameters the developer wants the algorithm to consider. For example, when determining the “risk-profile” of car owners seeking insurance premiums, geographical location could be one parameter. This could be justified by the ostensibly neutral argument that those residing in inner-city areas with narrower roads are more likely to have scratches on their vehicles. But as inner cities in the US have a disproportionately high number of ethnic minorities or other vulnerable socio-economic groups, “pin code” becomes a facially neutral proxy for race or class-based discrimination.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;***&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;T&lt;span&gt;he right to equality has been carved into multiple international human rights instruments and into the Equality Code in Articles 14-18 of the Indian Constitution. The dominant approach to interpreting the right to equality by the Supreme Court has been to focus on “grounds” of discrimination under Article 15(1), thus resulting in a lack of recognition of unintentional discrimination and disparate impact.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;A notable exception, as constitutional scholar Gautam Bhatia points out, is the case of &lt;i&gt;N.M. Thomas &lt;/i&gt;which pertained to reservation in promotions. Justice Mathew argued that the test for inequality in Article 16(4) is an effects-oriented test independent of the formal motivation underlying a specific act. Justice Krishna Iyer and Mathew also articulated a grander vision wherein they saw the Equality Code as transcending the embedded individual disabilities in class driven social hierarchies. This understanding is crucial for governing data driven decision-making that impacts vulnerable communities. Any law or policy on AI-related discrimination must also include disparate impact within its definition of “discrimination” to ensure that developers think about the adverse consequences even of well-intentioned decisions.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;AI driven assessments have been challenged on grounds of constitutional violations in other jurisdictions. In 2016, the Wisconsin Supreme Court considered the legality of using risk assessment tools such as COMPAS for sentencing criminals. It affirmed the trial court’s findings and held that using COMPAS did not violate constitutional due process standards. Eric Loomis had argued that using COMPAS infringed both his right to an individualised sentence and to accurate information as COMPAS provided data for specific groups and kept the methodology used to prepare the report a trade secret. He additionally argued that the court used unconstitutional gendered assessments as the tool used gender as one of the parameters.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The Wisconsin Supreme Court disagreed with Loomis arguing that COMPAS only used publicly available data and data provided by the defendant, which apparently meant Loomis could have verified any information contained in the report. On the question of individualisation, the court argued that COMPAS provided only aggregate data for groups similarly placed to the offender. However, it went on to argue as the report was not the sole basis for a decision by the judge, a COMPAS assessment would be sufficiently individualised as courts retained the discretion and information necessary to disagree.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;By assuming that Loomis could have genuinely verified all the data collected about similarly placed groups and that judges would exercise discretion to prevent the entrenchment of inequalities through COMPAS’s decision-making patterns, the judges ignored social realities. Algorithmic decision-making systems are an extension of unequal decision-making that re-entrenches prevailing societal perceptions around identity and behaviour. An instance of discrimination cannot be looked at as a single instance but as one in a menagerie of production systems that define, modulate and regulate social existence.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The policy-making ecosystem needs, therefore, to galvanise the “transformative” vision of India’s democratic fibre and study existing systems and power structures AI could re-entrench or mitigate. For example, in the matter of bank loans there is a presumption against the credit-worthiness of those working in the informal sector. The use of aggregated decision-making may lead to more equitable outcomes given that there is concrete thought on the organisational structures making these decisions and the constitutional safeguards provided.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Most case studies on algorithmic discrimination in Virgina Eubanks’ &lt;i&gt;Automating Inequality &lt;/i&gt;or Safiya Noble’s &lt;i&gt;Algorithms of Oppression&lt;/i&gt; are based on western contexts. There is an urgent need for publicly available empirical studies on pilot cases in India to understand the contours of discrimination. Primary research questions should explore three related subjects. Are specified ostensibly neutral variables being used to exclude certain communities from accessing opportunities and resources or having a disproportionate impact on their civil liberties? Is there diversity in the identities of the coders themselves? Are the training data sets used representative and diverse and, finally, what role does data driven decision-making play in furthering the battle against embedded structural hierarchies?&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;***&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;A key feature of AI-driven solutions is the “black box” that processes inputs and generates actionable outputs behind a veil of opacity to the human operator. Essentially, the black box denotes that aspect of the human neural decision-making function that has been delegated to the machine. A lack of transparency or understanding could lead to what Frank Pasquale terms a “Black Box Society” where algorithms define the trajectories of daily existence unless “the values and prerogatives of the encoded rules hidden within black boxes” are challenged.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Ex-&lt;i&gt;post facto&lt;/i&gt; assessment is often insufficient for arriving at genuine accountability. For example, the success of predictive policing in the US was drawn from the fact that police have indeed found more crimes in areas deemed “high risk”. But this assessment does not account for the fact that this is a product of a vicious cycle through which more crime is detected in an area simply because more policemen are deployed. Here, the National Strategy rightly identifies that simply opening up code may not deconstruct the black box as not all stakeholders impacted by AI solutions may understand the code. The constant aim should be explicability which means the human developer should be able to explain how certain factors may be used to arrive at a certain cluster of outcomes in a given set of situations.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The requirement of accountability stems from the Right to Life provision under Article 21. As stated in the seven-judge bench in &lt;i&gt;Maneka Gandhi vs. Union of India&lt;/i&gt;, any procedure established by law must be seen to be “fair, just and reasonable” and not “fanciful, oppressive or arbitrary.”&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The Right to Privacy was recognised as a fundamental right by the nine-judge bench in &lt;i&gt;K.S. Puttaswamy (Retd.) vs. Union of India&lt;/i&gt;. Mass surveillance can lead to the alteration of behavioural patterns which may in turn be used for the suppression of dissent by the State. Pulling vast tracts of data on all suspected criminals—as in facial recognition systems like PAIS—create a “presumption of criminality” that can have a chilling effect on democratic values.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Therefore, any use, particularly by law enforcement would need to satisfy the requirements for infringing on the right to privacy: the existence of a law, necessity—a clearly defined state objective—and proportionality between the state object and the means used restricting fundamental rights the least. Along with centralised policy instruments such as the National Strategy, all initiatives taken in pursuance of India’s AI agenda must pay heed to the democratic virtues of privacy and free speech and their interlinkages.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;India needs a law to regulate the impact of Artificial Intelligence and enable its development without restricting fundamental rights. However, regulation should not adopt a “one-size-fits-all” approach that views all uses with the same level of rigidity. Regulatory intervention should be based on questions around power asymmetries and the likelihood of the use case adversely affronting human dignity captured by India’s constitutional ethos.&lt;/p&gt;
&lt;blockquote class="synopsis" style="text-align: justify; "&gt;As an aspiring leader in global discourse, India can lay the rules of the road for other emerging economies not only by incubating, innovating and implementing AI powered technologies but by grounding it in a lattice of rich constitutional jurisprudence that empowers the individual.&lt;/blockquote&gt;
&lt;p style="text-align: justify; "&gt;The High Level Task Force on Artificial Intelligence (AI HLEG) set up by the European Commission in June 2018 published a report on “Ethical Guidelines for Trustworthy AI” earlier this year. They feature seven core requirements which include human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being; and accountability. While the principles are comprehensive, this document stops short of referencing any domestic or international constitutional law that helps cement these values. The Indian Constitution can help define and concretise each of these principles and could be used as a vehicle to foster genuine social inclusion and mitigation of structural injustice through AI.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;At the centre of the vision must be the inherent rights of the individual. The constitutional moment for data driven decision-making emerges therefore when we conceptualise a way through which AI can be utilised to preserve and improve the enforcement of rights while also ensuring that data does not become a further avenue for exploitation.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;National vision transcends the boundaries of policy and to misuse Peter Drucker, “eats strategy for breakfast”. As an aspiring leader in global discourse, India can lay the rules of the road for other emerging economies not only by incubating, innovating and implementing AI powered technologies but by grounding it in a lattice of rich constitutional jurisprudence that empowers the individual, particularly the vulnerable in society. While the multiple policy instruments and the National Strategy are important cogs in the wheel, the long-term vision can only be framed by how the plethora of actors, interest groups and stakeholders engage with the notion of an AI-powered Indian society.&lt;/p&gt;
&lt;hr style="text-align: justify; " /&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/fountain-ink-october-12-2019-arindrajit-basu-we-need-a-better-ai-vision'&gt;https://cis-india.org/internet-governance/blog/fountain-ink-october-12-2019-arindrajit-basu-we-need-a-better-ai-vision&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>basu</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2019-10-14T13:55:59Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/news/newsminute-october-1-2019-theja-ram-why-conviction-rate-for-cyber-crime-cases-in-karnataka-is-abysmally-low">
    <title>Why conviction rate for cyber crime cases in Karnataka is abysmally low</title>
    <link>https://cis-india.org/internet-governance/news/newsminute-october-1-2019-theja-ram-why-conviction-rate-for-cyber-crime-cases-in-karnataka-is-abysmally-low</link>
    <description>
        &lt;b&gt;Police say a third of the cases involving economic offences in Karnataka are related to job scams, a third related to OTP and UPI fraud, and the remaining are lottery related scams.&lt;/b&gt;
        &lt;p&gt;The blog post by Theja Ram published by the &lt;a class="external-link" href="https://www.thenewsminute.com/article/why-conviction-rate-cyber-crime-cases-karnataka-abysmally-low-109803"&gt;News Minute&lt;/a&gt; on October 1, 2019 quotes Karan Saini.&lt;/p&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify; "&gt;Just like thousands of engineering graduates in pursuit of a job, 22-year-old Samhita RH had been trying to find one since she graduated from AMC Engineering College in Bengaluru. Samhita’s parents, who live in Hassan district’s Sakleshpur, were counting on their daughter to help clear loans they had taken for her education. A year after graduating, Samhita was desperate. She had uploaded her resume on several job portals and hoped she would get an interview call.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;On the afternoon of December 21 last year, Samhita received an email from an id that read: hr.monster13@india.com. Samhita had signed up for job alerts on employment portal Monster and was thrilled when she finally received a call for an interview, over a year after graduating.&lt;/p&gt;
&lt;p class="_yeti_done" style="text-align: justify; "&gt;“I did not suspect that this was a fake account. Soon after I received the email, I also got a call on my mobile number and a man named Abhishek Acharya said he was from Monster and that there was an interview call for a position at HCL. He said I have to pay Rs 1,200 as registration fee and that I would be able to go for the interview then. A few hours later, he asked me to pay Rs 18,000. The next day I had to pay Rs 13,000 and later the same day another Rs 15,000,” Samhita says.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The next day, she received a fake offer letter from HCL after a telephonic conversation and this time another man named Amit Singh, who claimed to be an employee in HCL’s HR department, allegedly told Samhita that she had to pay Rs 29,000 for a certification programme that would be conducted as part of her induction programme. Samhita paid Amit Singh too and when she asked him the date of joining, Amit allegedly informed her that he would be in touch.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;By the first week of January 2019, Samhita was worried that she may have been duped. She got in touch with HCL in Bengaluru and enquired about the job offer she had received. She even sent them a copy of the “offer letter” she had received. To her dismay, HCL informed her that the letter was forged and that no one from the company had reached out to her.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;When she contacted the mobile number of the alleged Amit Singh and demanded her money back, he allegedly hung up and could never be reached again. “I lost around Rs 76,000 in a few days’ time. My parents were struggling for money. They had taken loans to pay for the job and it turned out to be a sham. When I got that email, I should have been more alert. But hope and relief of finally getting a job had clouded my judgement. I filed a complaint with the Cyber Crime Police Station in Bengaluru on January 19 this year, but there has been no progress in my case,” Samhita says.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;In Samhita’s case, police say that the phones used to contact her were last used in Madhya Pradesh and the IP address from which the email was sent was from Nigeria. “How can we track down some online identity that we don’t know. If it’s a robbery or a murder, its jurisdictional. When it comes to people morphing pictures and extortion rackets on online dating platforms, it is easier to track down the people as there is an ID of the person. But economic offences are the hardest to crack,” says Sandeep Patil, Joint Commissioner of Crime, Bengaluru.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Just like Samhita, thousands of people have fallen victim to job scams on the internet and the Bengaluru Cyber Crime Police say that a third of the cases involving economic offences in Karnataka are related to job scams, a third of them are related to OTP and UPI fraud, and the remaining are lottery related scams. And the police say that investigating cyber crimes related to economic offenses are very difficult.&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;UPI, lottery fraud on the rise&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;Ever since demonetisation led people to switch to online money transfer, police say that Unique Identification Pin (UPI) related cyber crimes are on the rise. According to the Cyber Crime Police Station in Bengaluru, of the 12,754 cyber crime cases reported in the city between January 2018 and August 2019, 38% of them were related to UPI.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“Before demonetisation, a lot of people were not using Google Pay, PayTM, BHIM and other UPI apps for money transfer. With more users, the pool of potential victims for those committing cyber crimes has increased,” Sandeep Patil says.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;In July this year, a Madhusudhan, businessman from Bengaluru, filed a complaint with the Cyber Crime Police Station that a person posing to be a representative of an e-commerce company had looted Rs 1.6 lakh from three of his bank accounts via his BHIM app. Madhusudhan’s wife Lekha had ordered material for a dress from an e-commerce website. After it was delivered, she wanted to return it, and found a customer service number when she searched on Google. She asked Madhusudhan to help her get the money back.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Madhusudhan spoke to the representative, who informed him that the product could not be returned but that he could initiate a refund. “The product quality was bad and so we wanted to return it. The representative said he would refund the money and told me that he would send me a message, I had to click the link in the message and fill in a form for the refund to be processed. I never thought that this could be a scam. Within minutes, Madhusudhan received a message with a message ID that read: HDFC-UPI. Assuming it was legitimate, I clicked the link, which led me to a portal. But there was no form,” Madhusudhan says.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Madhusudhan tried calling the customer care number once more but there was no response. About three or four minutes later he received a message from his bank that Rs 90,000 was transferred to an unknown account via BHIM. Seconds later, he received another message that Rs 70,000 was transferred to another bank account via the same app. Madhusudhan immediately called his bank and asked them to stop any fund transfer from his account.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“I have three bank accounts linked to BHIM and money was wiped out from two accounts. I was able to save Rs 40,000 only after I called the bank,” he says. When Madhusudhan approached the police, Cyber Crime sleuths informed him that it was a phishing scam. “That message that I clicked, that was where it started,” Madhusudhan adds.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;According to Karan Saini, Programme Officer with the Centre for Internet and Society, most UPI-related crimes are phishing operations and in rare cases involve spyware. Karan says that SMS gateways are the easiest means to con people into believing that a message is from a legitimate source.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“Businesses have the ability to send messages to people from SMS gateways provided by telecom companies. Consider the messages we get from e-commerce sites, banks, etc. While most businesses send messages to customers who have wilfully provided their details, bulk contact information can still be procured quite easily, and the cost barrier for sending bulk SMSes is also quite low. Most SMS providers charge customers around Rs. 300 for sending 1000 messages. Further, businesses have the ability to specify a custom sender ID (i.e., the name that appears on the message), which TRAI (Telecom Regulatory Authority of India) mandates to be 6 characters long (e.g., AXISBK), however, fraudsters can easily subvert the custom Sender ID feature to push their phishing campaigns. Most of the reported UPI scams seem to have succeeded because people were conned by the name of the sender. While several SMS providers maintain ‘blacklists’ that let them protect the Sender IDs of prominent customers, fraudsters can still trivially bypass these blacklists, by alternating characters within the Sender ID (e.g., changing AXISBK to AXISBA or even BKAXIS), or by simply moving to another SMS provider.,” Karan says.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;In December 2017, Venkateshulu S, a jewellery store owner in Bengaluru, received an SMS allegedly from an e-commerce website stating that he had won Rs 1,00,00,000 in a lucky draw. The message stated that Venkateshulu, who had recently purchased a TV from the website, had won the lucky draw from a pool of customers chosen for it.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Venkateshulu, who was initially sceptical, had ignored the SMS. A day later, he received another SMS from the same sender ID, which claimed that he had to claim the prize within the next 24 hours or the offer would expire. He also received a call from a person posing as a customer care executive and informed him that he had to pay Rs 1 lakh to claim the prize and that the money would be refunded to him once the winnings were deposited into his account.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Within a few minutes, he transferred the money to an account number given by the conman. It was only after two days that Venkateshulu realised he had been swindled.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“I was waiting for the money to get deposited into my account. When I contacted that man again, his phone was switched off. Then I filed a complaint with the cyber crime police but they haven’t caught the culprit even now,” Venkateshulu says.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Speaking to The News Minute, Director General of Police, CID, Praveen Sood, said that just like Venkateshulu, thousands of people get conned in lottery scams. “When someone is asking you to pay money to collect alleged winnings, that must be the trigger. People get conned a lot by lotteries because the amount of money is too huge for them to pass up,” he says.&lt;/p&gt;
&lt;h3 id="_mcePaste"&gt;Thousands of cases, negligible convictions&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;From just 1,045 cases registered in 2014 to 8,495 cases between January and August 2019, the number of cyber crime cases being reported in Karnataka are on the rise. Between January 2014 and August 2019, 20,920 cases were registered across 30 Cyber Crime police stations in Karnataka and a whopping 85% of them have been registered in the lone Cyber Crime Police Station in Bengaluru City.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Of the 8,495 cases registered between January and August 2019, 7,516 of them were in Bengaluru. Another alarming reality is the low rate of conviction. There have been only 36 convictions in cyber crime cases in Karnataka in the last six years and out of them only 5 convictions have occurred in cases registered in Bengaluru. Of these convictions, four of them occurred in 2014 and one in 2018. There were zero convictions in the years in between.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The shockingly low rate of conviction, Cyber Crime sleuths say, is because 95% of the cases registered go unresolved for various reasons. Of the total number of cases registered in the last six years, arrests have been made in only 6.2% of the cases and the number of cases in which chargesheets have been filed is even lower.&lt;/p&gt;
&lt;p&gt;Between 2014 and August 2019, chargesheets were filed only in 736 cases in Karnataka, of which 46.86% were from Bengaluru.&lt;/p&gt;
&lt;p&gt;&lt;img src="https://www.thenewsminute.com/sites/all/var/www/images/Cybercrime_karnataka.jpg" /&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src="https://www.thenewsminute.com/sites/all/var/www/images/Cybercrime_bengaluru.jpg" /&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;DGP Sood says that one of the primary reasons for the low conviction rate in cyber crime cases, not only in Karnataka but across the country, is the lack of geographical boundaries in cyber crime cases.&lt;/p&gt;
&lt;p&gt;“In most cases across the country, the crime is perpetrated by people from other countries,” he says.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Senior police officials who have worked on numerous cyber crime cases in Karnataka say that another reason for low conviction rates in these crimes is that the cost of investigating cyber crime cases, especially economic offences, exceeds the actual loss suffered by victims.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“In many cases that I have worked on, the IP addresses or phone numbers are from Nigeria, Trinidad, Congo or an eastern European country. How do we track down and arrest these people? After five to six days of investigating, we reach a dead end. Between the amount that individual victims lose and the amount that needs to be spent on investigating that case, there is a huge difference. Lakhs have to be spent on one investigation. The economics do not add up and the physical international boundaries are major hurdles,” a senior police officer says.&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;Limited knowledge about advanced technology&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;Senior officials with the Cyber Crime unit in the Criminal Investigation department say that apart from a severe staff crunch in Cyber Crime stations, most police officers, public prosecutors and magistrates have limited knowledge about cyber crimes, the technology used, the methods of perpetrating such crimes and, most importantly, the technological jargon.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“Initially, we had only 10 police officials working in one police station in Bengaluru and they were handling thousands of cases. It was only in 2018 that the number of personnel were increased to 40. Even now, these officers are handling thousands of cases and it’s an overload,” JCP Sandeep Patil says.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;According to DGP Praveen Sood, even in cases where arrests are made and chargesheets filed, overburdened sessions courts with limited magistrates who understand the nuances of cyber crime cases contribute to low rates of conviction.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Senior police officials who work with the Centre for Cyber Crime Investigation and Training Centre say that prosecutors and lawyers fail to put forth a strong case due to lack of knowledge about these crimes.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“Understanding the methods used by perpetrators of cyber crimes and most importantly the jargon is difficult for prosecutors and magistrates. Even officers working in Cyber Crime stations keep learning new things every day. Magistrate courts are overloaded and to find judges who can understand the nuances of the case and prosecutors who can put forth a good case is difficult,” the official explains.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;In February this year, the Centre for Cyber Crime Investigation and Training Centre was inaugurated in Bengaluru in order to train police officers, prosecutors and magistrates on the nuances of cyber crime. DGP Praveen Sood says that training more police officers is a first step towards ensuring that more cases are detected and disposed of quickly.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“Since cyber crime has no boundaries, the best way is to prevent it. More awareness is required. People must not use the same email ID for personal and financial transactions. Separate email IDs must be used for social media accounts because many people get conned on social media. There are many cases where social media accounts are hacked and pictures of women are morphed. It’s always better to change passwords frequently and not share it with anyone. Do not believe people who say they are bank officials asking for OTP and PINs. Never buy into lottery scams where they ask you to pay money in order to get your winnings,” Praveen Sood adds.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/news/newsminute-october-1-2019-theja-ram-why-conviction-rate-for-cyber-crime-cases-in-karnataka-is-abysmally-low'&gt;https://cis-india.org/internet-governance/news/newsminute-october-1-2019-theja-ram-why-conviction-rate-for-cyber-crime-cases-in-karnataka-is-abysmally-low&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Theja Ram</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    

   <dc:date>2019-10-13T06:07:23Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/blog/ai-for-good-event-report-on-workshop-conducted-at-unbox-festival">
    <title>AI for Good</title>
    <link>https://cis-india.org/internet-governance/blog/ai-for-good-event-report-on-workshop-conducted-at-unbox-festival</link>
    <description>
&lt;b&gt;CIS organised a workshop titled ‘AI for Good’ at the Unbox Festival in Bangalore from 15th to 17th February, 2019. The workshop was led by Shweta Mohandas and Saumyaa Naidu. In the hour-long workshop, the participants were asked to imagine an AI-based product to bring forward the idea of ‘AI for social good’.&lt;/b&gt;
        &lt;p&gt;The report was edited by Elonnai Hickok.&lt;/p&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify; "&gt;The workshop was aimed at examining the current narratives around AI and imagining how these may transform with time. It raised questions about how we can build an AI for the future, and traced the implications relating to social impact, policy, gender, design, and privacy.&lt;/p&gt;
&lt;h3&gt;Methodology&lt;/h3&gt;
&lt;p class="Normal1" style="text-align: justify; "&gt;The rationale for conducting this workshop in a design festival was to ensure a diverse mix of participants. The participants in the workshop came from varied educational and professional backgrounds who had different levels of understanding of technology. The workshop began with a discussion on the existing applications of artificial intelligence, and how people interact and engage with it on a daily basis. This was followed by an activity where the participants were provided with a form and were asked to conceptualise their own AI application which could be used for social good. The participants were asked to think about a problem that they wanted the AI application to address and think of ways in which it would solve the problem. They were also asked to mention who will use the application. It prompted participants to provide details of the AI application in terms of the form, colour, gender, visual design, and medium of interaction (voice/ text). This was intended to nudge the participants into thinking about the characteristics of the application, and how it will lend to the overall purpose. The form was structured and designed to enable participants to both describe and draw their ideas. The next section of the form gave them multiple pairs of principles. They were asked to choose one principle from each pair. These were conflicting options such as ‘Openness’ or ‘Proprietary’, and ‘Free Speech’ or ‘Moderated Speech’. The objective of this section was to illustrate how a perceived ideal AI that satisfies all stakeholders can be difficult to achieve, and that the AI developers at times may be faced with a decision between profitability and user rights.&lt;/p&gt;
&lt;p class="Normal1" style="text-align: justify; "&gt;Participants were asked to keep their responses anonymous. These responses were then collected and discussed with the group. The activity led to the participants engaging in a discussion on the principles mentioned in the form. Questions around where the input data to train the AI would come from, or what type of data the application will collect were discussed. The responses were used to derive implications on gender, privacy, design, and accessibility.&lt;/p&gt;
&lt;p class="Normal1" style="text-align: justify; "&gt;&lt;img src="https://cis-india.org/home-images/ConceptualiseAI.jpg" alt="Conceptualise AI" class="image-inline" title="Conceptualise AI" /&gt;&lt;/p&gt;
&lt;h3 class="Normal1" style="text-align: justify; "&gt;Responses&lt;/h3&gt;
&lt;p class="Normal1" style="text-align: justify; "&gt;&lt;img src="https://cis-india.org/home-images/Responses.jpg" alt="" class="image-inline" title="" /&gt;&lt;/p&gt;
&lt;h3 class="Normal1" style="text-align: justify; "&gt;Analysis&lt;/h3&gt;
&lt;p&gt;Although the responses were varied, they shared a few key similarities.&lt;/p&gt;
&lt;h3&gt;Participants’ Familiarity with AI&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;The participants’ understanding of AI was based on what they read and heard from various sources. While discussing the examples of AI, the participants were familiar with not just the physical manifestation of AI such as robots, but also AI software. However when asked to define an AI the most common explanations were, bots, software, and the use of algorithms to make decisions using large amounts of data. The participants were optimistic of the way AI could be used for social good. However, some of them showed concern about the implications on privacy.&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;Perception of AI Among Participants&lt;/h3&gt;
&lt;p class="Normal1"&gt;With the workshop, our aim was to have the participants reflect on their perception of AI based on their exposure to the narratives around AI by companies and the government.&lt;/p&gt;
&lt;p class="Normal1" style="text-align: justify; "&gt;The participants were given the brief to imagine an AI that could solve a problem or be used for social good. Most participants considered AI to be a positive tool for social impact. It was seen as a problem solver. The ideas conceptualised by the participants varied from countering fake news, wildlife conservation, resource distribution, and mental health. This brought to focus the range of areas that were seen as pertinent for an AI intervention. Most of the responses dealt with concerns that affect humans directly, the one aimed at wildlife conservation being the only exception.&lt;/p&gt;
&lt;p class="Normal1" style="text-align: justify; "&gt;&lt;span&gt;On being asked, who will use the AI application, it was interesting to note that all the responses considered different stakeholders such as individuals, non profits, governments and private companies to be the end user. However, it was interesting that through the discussion the harms that might be caused by the use of AI by these stakeholders were not brought up. For example, the use of AI for resource distribution did not take into consideration the fact that the government could provide unequal distribution based on the existing biased datasets.&lt;/span&gt; &lt;a name="fr1"&gt;&lt;/a&gt; &lt;span&gt;Several of the AI applications were conceptualised to work without any human intervention. For example, one of the ideas proposed was to use AI as a mental health counsellor which was conceptualised as a chatbot that would learn more about human psychology with each interaction. It was assumed that such a service would be better than a human psychologist who can be emotionally biased. Similarly, while discussing the idea behind the use of AI for preventing the spread of fake news, the participant believed that the indication coming from an AI would have greater impact than one coming from a human. They believed that the AI could provide the correct information and prevent the spread of fake news. &lt;/span&gt;&lt;span&gt;By discussing these cases we were able to highlight that the complete reliance on technology could have severe consequences.&lt;/span&gt;&lt;a name="fr2"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3 class="Normal1" style="text-align: justify; "&gt;Form and Visual Design of the AI Concepts&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;In most cases, the participants decided the form and visual design of their AI concepts keeping in mind its purpose. For instance, the therapy providing AI mentioned earlier, was envisioned as a textual platform, while a ‘clippy type’ add on AI tool was thought of for detecting fake news. Most participants imagined the AI application to have a software form, while the legal aid AI application was conceptualised to have a human form. This revealed that the participants perceived AI to be both a software and a physical device such as a robot.&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;Accessibility of the Interfaces&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;The purpose of including the type of interface (voice or text) while conceptualising the AI application was to push the participants towards thinking about accessibility features. We aimed to have the participants think about the default use of the interface, both in terms of language and accessibility. The participants though cognizant of the need to have a large number of users, preferred to have only textual input into the interface, not anticipating the accessibility concerns.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The choices between access vs cost, and accessibility vs scalability were also questioned by the participants during the workshop. They enquired about the meaning of the terms as well as discussed the difficulty in having an all inclusive interface. Some of the responses consisted only of text inputs, especially for sensitive issues involving interactions, such as for therapy or helplines. This exercise made the participants think about the end user as well as the ‘AI for all’ narrative. We decided to add these questions that made the participants think about how the default ability, language, and technological capability of the user is taken for granted, and how simple features could help more people interact with the application. This discussion led to the inference that there is a need to think about accessibility by design during the creation of the application and not as an afterthought.&lt;a name="fr3"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;Biases Based on Gender&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;We intended for the participants to think about the inherent biases that creep into creating an AI concept. These biases were evident from deciding identifiably male names, to deciding a male voice when the application needed to be assertive, or a female voice and name for when it was dealing with school children. Most of the other participants either did not mention the gender or they said that the AI could be gender neutral or changeable.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;These observations are also revealing of the existing narrative around AI. The popular AI interfaces have been noted to exemplify existing gender stereotypes. For example, the virtual assistants were given female identifiable names and default female voices such as Siri, Alexa, and Cortana. The more advanced AI were given male identifiable names and default male voices such as Watson, Holmes etc.&lt;a name="fr4"&gt;&lt;/a&gt; &lt;span&gt;Although these concerns have been pointed out by several researchers, there needs to be a visible shift towards moving away from existing gender biases.&lt;/span&gt;&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;Concerns around Privacy&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;Though the participants were aware of the privacy implications of data driven technologies, they were unsure of how their own AI concept could deal with questions of privacy. The participants voiced concerns about how they would procure the data to train the AI but were uncertain about their data processing practices. This included how they would store the data, anonymise the data, or prevent third parties from accessing it. For example, during the activity, it was pointed out to the participants that there would be sensitive data collected in applications such as therapy provision, legal aid for victims of abuse, and assistance for people with social anxiety. In these cases, the participants stated that they would ensure that the data was shared responsibly, but did not consider the potential uses or misuses of this shared data.&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;Choices between Principles&lt;/h3&gt;
&lt;p class="Normal1" style="text-align: justify; "&gt;This part of the exercise was intended to familiarise the participants with certain ethical and policy questions about AI, as well as to look at the possible choices that AI developers have to make. Along with discussing the broader questions around the form and interface of AI, we wanted the participants to also look at making decisions about the way the AI would function. The intent behind this component of the exercise was to encourage the participants to question the practices of AI companies, as well as understand the implications of choices while creating an AI. As the language in this section was based on law and policy, we spent some time describing the terms to the participants. Even as some of the options presented by us were not exhaustive or absolute extremes, we placed this section to demonstrate the complexity in creating an AI that is beneficial for all. We intended for the participants to understand that an AI that is profitable to the company, free for people, accessible, privacy respecting, and open source, though desirable may be in competition with other interests such as profitability and scalability.&lt;/p&gt;
&lt;p class="Normal1" style="text-align: justify; "&gt;The participants were urged to think about how decisions regarding who can use the service, how much transparency and privacy the company will provide, are also part of building an AI. Taking an example from the responses, we talked about how having a closed proprietary software in case of AI applications such as providing legal aid to victims of abuse would deter the creation of similar applications. However, after the terms were explained, the participants mostly chose openness over proprietary software, and access over paid services.&lt;/p&gt;
&lt;h3 class="Normal1" style="text-align: justify; "&gt;Conclusion&lt;/h3&gt;
&lt;p class="Normal1" style="text-align: justify; "&gt;The aim of this exercise was to understand the popular perception of AI. The participants had varied understanding of AI, but were familiar with the term. They also knew of the popular products that claim to use AI. Since the exercise was designed for people as an introduction to AI policy, we intended to keep questions around data practices out of the concept form. Eventually, with this exercise, we, along with the participants, were able to look at how popular media sells AI as an effective and cheaper solution to social issues. The exercise also allowed the participants to understand certain biases with gender, language, and ability. It also shed light on how questions of access and user rights should be placed before the creation of a technological solution. New technologies such as AI are being featured as problem solvers by companies, the media and governments. However, there is a need to also think about how these technologies can be exclusionary, misused, or how they amplify existing socio economic inequities.&lt;/p&gt;
&lt;hr /&gt;
&lt;p class="Normal1" style="text-align: justify; "&gt;&lt;span&gt;[1]. &lt;/span&gt;&lt;a class="external-link" href="https://www.bizjournals.com/sanfrancisco/news/2019/08/26/maximizing-the-potential-of-ai-starts-with-trust.html"&gt;https://www.bizjournals.com/sanfrancisco/news/2019/08/26/maximizing-the-potential-of-ai-starts-with-trust.html&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;[2]. &lt;a class="external-link" href="https://qz.com/1023448/if-youre-not-a-white-male-artificial-intelligences-use-in-healthcare-could-be-dangerous/"&gt;https://qz.com/1023448/if-youre-not-a-white-male-artificial-intelligences-use-in-healthcare-could-be-dangerous/&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;[3]. &lt;a class="external-link" href="https://www.vox.com/the-goods/2018/11/29/18118469/instagram-accessibility-automatic-alt-text-object-recognition"&gt;https://www.vox.com/the-goods/2018/11/29/18118469/instagram-accessibility-automatic-alt-text-object-recognition&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;[4]. &lt;a class="external-link" href="https://www.theguardian.com/pwc-partner-zone/2019/mar/26/why-are-virtual-assistants-always-female-gender-bias-in-ai-must-be-remedied"&gt;https://www.theguardian.com/pwc-partner-zone/2019/mar/26/why-are-virtual-assistants-always-female-gender-bias-in-ai-must-be-remedied&lt;/a&gt;&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/ai-for-good-event-report-on-workshop-conducted-at-unbox-festival'&gt;https://cis-india.org/internet-governance/blog/ai-for-good-event-report-on-workshop-conducted-at-unbox-festival&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Shweta Mohandas and Saumyaa Naidu</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2019-10-13T05:32:28Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/blog/designing-a-human-rights-impact-assessment-for-icann2019s-policy-development-processes">
    <title>Designing a Human Rights Impact Assessment for ICANN’s Policy Development Processes</title>
    <link>https://cis-india.org/internet-governance/blog/designing-a-human-rights-impact-assessment-for-icann2019s-policy-development-processes</link>
    <description>
        &lt;b&gt;As co-chairs of the Cross Community Working Party on Human Rights (CCWP-HR) at the Internet Corporation for Assigned Names and Numbers (ICANN), Akriti Bopanna and Collin Kurre carried out a Human Rights Impact Assessment of ICANN's processes. It was the first time such an experiment was conducted, and it was unique in being a multi-stakeholder effort. &lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;This report outlines the iterative research-and-design process carried  out between November 2017 and July 2019, focusing on successes and  lessons learned in anticipation of the ICANN Board’s long-awaited  approval of the Work Stream 2 recommendations on Accountability. The  process, findings, and recommendations will be presented by Akriti and  Austin at CCWP-HR’s joint session with the Government Advisory Council  at ICANN66 in Montreal during 2nd-8th November.&lt;/p&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify; "&gt;Click to download the &lt;a class="external-link" href="http://cis-india.org/internet-governance/files/designing-a-human-rights-impact-assessment-for-icann2019s-policy-development-processes"&gt;full research paper here&lt;/a&gt;.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/designing-a-human-rights-impact-assessment-for-icann2019s-policy-development-processes'&gt;https://cis-india.org/internet-governance/blog/designing-a-human-rights-impact-assessment-for-icann2019s-policy-development-processes&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Collin Kurre, Akriti Bopanna and Austin Ruckstuhl</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Freedom of Speech and Expression</dc:subject>
    
    
        <dc:subject>Internet Governance</dc:subject>
    

   <dc:date>2019-10-03T14:43:28Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/ai-full-spectrum-regulatory-challenge-launch-workshop-reference-files">
    <title>AI: Full Spectrum Regulatory Challenge Launch Workshop [Reference Files]</title>
    <link>https://cis-india.org/internet-governance/ai-full-spectrum-regulatory-challenge-launch-workshop-reference-files</link>
    <description>
        &lt;b&gt;These are the files released at the AI Full Spectrum Regulatory Challenge Launch Event, organised by CIS and CCG-NLUD on September 27, 2019. At the event, Sunil Abraham discussed the draft policy brief linked below, which is an output of the Regulatory Practices Lab at CIS.&lt;/b&gt;
        
&lt;p&gt;The Event poster can be found &lt;a href="https://cis-india.org/internet-governance/ai-reg-paper-event-files/ai-rpl-poster-06" class="internal-link" title="AI RPL Poster"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The Infographic in the Policy brief can be found &lt;a href="https://cis-india.org/internet-governance/ai-reg-paper-event-files/ai-full-spectrum-regulatory-challenge-twitter" class="internal-link" title="AI Full Spectrum Regulatory Challenge Infographic"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The working draft that was released at the workshop can be found &lt;a href="https://cis-india.org/internet-governance/artificial-intelligence-a-full-spectrum-regulatory-challenge-working-draft-pdf" class="internal-link" title="Artificial Intelligence: A Full-Spectrum Regulatory Challenge (Working Draft) PDF"&gt;here&lt;/a&gt;.&lt;/p&gt;

        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/ai-full-spectrum-regulatory-challenge-launch-workshop-reference-files'&gt;https://cis-india.org/internet-governance/ai-full-spectrum-regulatory-challenge-launch-workshop-reference-files&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>pranav</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Regulatory Practices Lab</dc:subject>
    
    
        <dc:subject>Internet Governance</dc:subject>
    

   <dc:date>2020-08-04T06:08:48Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/artificial-intelligence-a-full-spectrum-regulatory-challenge-working-draft">
    <title>Artificial Intelligence: a Full-Spectrum Regulatory Challenge [Working Draft]</title>
    <link>https://cis-india.org/internet-governance/artificial-intelligence-a-full-spectrum-regulatory-challenge-working-draft</link>
    <description>
        &lt;b&gt;&lt;/b&gt;
        
&lt;p&gt;Today, there are certain misconceptions regarding the regulation of AI. Some corporations would like us to believe that AI is being developed and used in a regulatory vacuum. Others, in civil society organisations, believe that AI is a regulatory circumvention strategy deployed by corporations; as a result, these organisations call for onerous regulations targeting corporations. However, some uses of AI by corporations can be completely benign, while some uses of AI by the state can result in the most egregious human rights violations. Therefore, policy makers need to deploy every regulatory tool in their arsenal to unlock the benefits of AI and mitigate its harms.&lt;/p&gt;
&lt;p&gt;This policy brief proposes a granular, full-spectrum approach to the regulation of AI, depending on who is using AI, who is impacted by that use, and what human rights are affected. Everything from deregulation, to forbearance, to updated regulations, to absolute and blanket prohibitions needs to be considered depending on the specifics. This approach stands in contrast to approaches based on ethics, omnibus law, homogeneous principles, or human rights alone, which would result in inappropriate under-regulation or over-regulation of the sector.&lt;/p&gt;
&lt;p&gt;Find a copy of the working draft &lt;a href="https://cis-india.org/internet-governance/artificial-intelligence-a-full-spectrum-regulatory-challenge-working-draft-pdf" class="internal-link" title="Artificial Intelligence: A Full-Spectrum Regulatory Challenge (Working Draft) PDF"&gt;here&lt;/a&gt;.&lt;/p&gt;

        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/artificial-intelligence-a-full-spectrum-regulatory-challenge-working-draft'&gt;https://cis-india.org/internet-governance/artificial-intelligence-a-full-spectrum-regulatory-challenge-working-draft&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>sunil</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Regulatory Practices Lab</dc:subject>
    
    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2020-08-04T06:10:13Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="https://cis-india.org/internet-governance/blog/modern-war-institute-september-30-2019-arindrajit-basu-and-karan-saini-setting-international-norms-cyber-conflict-hard-doesnt-mean-stop-trying">
    <title>Setting International Norms of Cyber Conflict is Hard, But that Doesn't Mean that We Should Stop Trying</title>
    <link>https://cis-india.org/internet-governance/blog/modern-war-institute-september-30-2019-arindrajit-basu-and-karan-saini-setting-international-norms-cyber-conflict-hard-doesnt-mean-stop-trying</link>
    <description>
        &lt;b&gt;Last month, cyber-defense analyst and geostrategist Pukhraj Singh penned a stinging epitaph, published by MWI, for global norms-formulation processes that are attempting to foster cyber stability and regulate cyber conflict—specifically, the Tallinn Manual.&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;The article by Arindrajit Basu and Karan Saini was published by &lt;a class="external-link" href="https://mwi.usma.edu/setting-international-norms-cyber-conflict-hard-doesnt-mean-stop-trying/"&gt;Modern War Institute&lt;/a&gt; on September 30, 2019.&lt;/p&gt;
&lt;hr style="text-align: justify; " /&gt;
&lt;p style="text-align: justify; "&gt;His words are important, and should be taken seriously by the legal and technical communities that are attempting to feed into the present global governance ecosystem. However, many of his arguments seem to suffer from an unjustified and dismissive skepticism of any form of global regulation in this space.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;He believes that the unique features of cyberspace render governance through the application of international law close to impossible. Given the range of developments that are in the pipeline in the global cyber norms proliferation process, this is an excessively defeatist attitude toward modern international relations. It also unwittingly encourages the continued weaponization of cyberspace by fomenting a “no holds barred” battlespace, to the detriment of the trust that individuals can place in the security and stability of the ecosystem.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;strong&gt;“The Fundamentals of Computer Science”&lt;/strong&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Singh argues that the “fundamentals of computer science” render rules of international humanitarian law (IHL)—which serve as the governing framework during armed conflict in other domains—inapplicable, and that lawyers and policymakers have gotten cyber horribly wrong. Singh theorizes that in the case of the United States having pre-positioned espionage malware in Russian military networks, that malware could have been “repurposed or even reinterpreted as an act of aggression.”&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The possibility of a fabricated act of espionage being used as justification for an escalated response exists within the realm of analogous espionage, too. A reconnaissance operation that has been compromised can also be repurposed midway into a full-blown armed attack, or could be reinterpreted as justification for an escalatory response. However, &lt;a href="https://opil.ouplaw.com/view/10.1093/law:epil/9780199231690/law-9780199231690-e401"&gt;i&lt;/a&gt;&lt;a href="https://opil.ouplaw.com/view/10.1093/law:epil/9780199231690/law-9780199231690-e401"&gt;nternational &lt;/a&gt;&lt;a href="https://opil.ouplaw.com/view/10.1093/law:epil/9780199231690/law-9780199231690-e401"&gt;l&lt;/a&gt;&lt;a href="https://opil.ouplaw.com/view/10.1093/law:epil/9780199231690/law-9780199231690-e401"&gt;aw states&lt;/a&gt; that self-defense can only be exercised when the “necessity of self-defense is instant, overwhelming, leaving no choice of means, and no moment of deliberation.” In order to legitimize any action taken under the guise of self-defense, the threat would have to be imminent and the response both necessary and proportionate. There is nothing inherently unique in the nature of cyber conflict that would render the traditional law of self-defense moot.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Further, the presumption that cyber operations are ambiguous and often uncontrollable, as Singh suggests, is flawed. An exploit that is considered “deployment-ready” is the result of an attacker’s attempts at fine-tuning variables—until it is determined that the particular vulnerability can be exploited in a manner that is considered to be reasonably reliable. An exploit may have to be worked upon for quite some time for it to behave exactly how the attacker intends it to. While it is true that there still may be unidentified factors that can potentially alter the behavior of a well-developed exploit, a skilled operator or malware author would nonetheless have a reasonable amount of certainty that an exploit code’s execution will result in the realization of only a certain possible set of predefined outcomes.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;It is true that a number of remote exploits that target systems and networks &lt;a href="https://media.blackhat.com/bh-us-10/whitepapers/Meer/BlackHat-USA-2010-Meer-History-of-Memory-Corruption-Attacks-wp.pdf"&gt;may make use of&lt;/a&gt; unreliable vulnerabilities, where outcomes &lt;a href="https://googleprojectzero.blogspot.com/2015/06/what-is-good-memory-corruption.html"&gt;may not be fully apparent&lt;/a&gt; prior to execution—and sometimes even afterward. However, for most deployment-ready exploits, this would simply not be the case. In fact, the example of the infamous Stuxnet malware, which Singh uses in his article, helps buttress our point.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Singh questions whether India should have interpreted the &lt;a href="https://www.indiatoday.in/india/north/story/stuxnet-cyber-war-critical-infrastructure-of-india-ntro-115273-2012-09-05"&gt;widespread infection of systems&lt;/a&gt; within the region—which also happened to affect certain critical infrastructure—as an armed attack. This question can cursorily be dismissed since we now know that Stuxnet did not cause any deliberate damage to Indian computing infrastructure. A &lt;a href="https://www.reuters.com/article/us-usa-cyberweapons-specialreport/special-report-u-s-cyberwar-strategy-stokes-fear-of-blowback-idUSBRE9490EL20130510"&gt;2013 report by journalist Joseph Menn&lt;/a&gt; correctly states that &lt;span style="text-decoration: underline;"&gt;“the only place deliberately affected [by Stuxnet] was an Iranian nuclear facility.&lt;/span&gt;” Therefore, for India to claim mere infection of systems located within the bounds of its territory as having been an armed attack, it would have to concretely demonstrate that the operators of Stuxnet caused “grave harm”—as described in IHL—purely by way of having infected those machines, through execution of malicious instructions programmed in the malware’s payload.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;At the same time, it should not be dismissed that the act of the Stuxnet malware infecting a machine could very well be interpreted by a state as constituting an armed attack. However, given the current state of advancement in malware decompilation and reverse-engineering studies, the process of deducing instructions that a particular malicious program seeks to execute can in most cases be performed in a reasonably reliable manner. Thus, for a state to make such a claim, it would have to prove that the malware did indeed cause grave harm, that which meets the criteria of the “scale and effects” threshold laid down in &lt;em&gt;Nicaragua v. United States&lt;/em&gt;—whether it was caused due to operator interaction or preprogrammed instructions—along with sufficient reasoning and evidence for attributing it to a state.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;An analysis of the Stuxnet code made it apparent that operators were seeking out machines that had the Siemens STEP 7 or SIMATIC WinCC software installed. The authors of the malware quite clearly had prior knowledge that the nuclear centrifuges that they intended to target made use of a particular type of programmable logic controllers, which the STEP 7 and WinCC software interacted with. On the basis of this prior knowledge, the authors of Stuxnet &lt;a href="https://www.symantec.com/content/en/us/enterprise/media/security_response/whitepapers/w32_stuxnet_dossier.pdf"&gt;made design choices&lt;/a&gt; by which, upon infection, target machines would communicate to the Stuxnet command-and-control server—including identifiers such as operating system version, IP address, workstation name, and domain name—whether or not the infected system had the STEP 7 or WinCC software installed. This allowed the operators of Stuxnet to easily identify and distinguish machines that they would ultimately attack for fulfilling their objectives. In effect, this gave them some amount of control over the scale of damage they would deliberately cause.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;It has been &lt;a href="https://www.cnet.com/news/stuxnet-delivered-to-iranian-nuclear-plant-on-thumb-drive/"&gt;theorized&lt;/a&gt; that the malware reached the nuclear facility in Iran through a flash drive. It may be true that widespread and unnecessary propagation of the worm—which could be described as it “going out of control”—was not something the operators had intended (as it would attract unwanted attention and raise alarm bells across the board). It has nonetheless been several years since Stuxnet was in action, and there have been no documented cases of Stuxnet having caused &lt;em&gt;grave harm&lt;/em&gt; to Indian (or other) computers. For all purposes, it could be said that the risk of collateral damage was minimized as the control operators were able to direct the execution of damaging components of the malware, to a degree that could be interpreted as having complied with IHL—thereby making it a &lt;em&gt;calculated&lt;/em&gt; cyberattack, with &lt;em&gt;controllable&lt;/em&gt; effects.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;However, if the adverse effects of the operation were to be indiscriminate (i.e., machines were tangibly damaged immediately upon being infected), and could not be controlled by the operator within reasonable bounds, then the rules of IHL would render the operation illegal—a red line that, among other declarations, the &lt;a href="https://www.justsecurity.org/66194/frances-major-statement-on-international-law-and-cyber-an-assessment/"&gt;recent French statement&lt;/a&gt; on the application of international law to cyberspace recognizes.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;strong&gt;“Bizarre and Regressive”: The Westphalian Precept of Territoriality&lt;/strong&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Singh’s next grievance is with the precept of territoriality and sovereignty in cyberspace. However, the reasoning he provides decrying this concept is unclear at best. The International Group of Experts authoring the Tallinn Manual argued that “cyber activities occur on territory and involve objects, or are conducted by persons or entities, over which States may exercise their sovereign prerogatives.” They continued to note that even though cyber operations can transcend territorial domains, they are conducted by “individuals and entities subject to the jurisdiction of one or more state.”&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Contrary to Singh’s assertions, our reasoning is entirely in line with the “defend forward” and “persistent engagement” strategies adopted by the United States defense experts. In fact, Gen. Paul Nakasone, commander of US Cyber Command—&lt;a href="https://www.schneier.com/blog/archives/2019/02/gen_nakasone_on.html"&gt;whose interview&lt;/a&gt; Singh cites to explain these strategies—explicitly states in that interview that “we must ‘defend forward’ in cyberspace as we do in the physical domains. . . . [Naval and air forces] patrol the seas and skies to ensure that they are positioned to defend our country before our borders are crossed. The same logic applies in cyberspace.” This is a recognition of the Westphalian precept of territoriality in cyberspace—which includes the right to take pre-emptive measures against adversaries before the people and objects within a nation’s sovereign borders are negatively impacted.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;strong&gt;Below-the-Threshold Operations&lt;/strong&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Singh also argues that most cyber operations would not reach the threshold armed attack to invoke IHL. He concludes, therefore, that applying the rules of IHL “bestows another garb of impunity upon rogue cyber attacks.” However, as discussed above, the application of IHL does not require a certain threshold of intensity, but the mere application of armed force that is attributable to a state.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Therefore, laying down “red lines” by, for example, applying the &lt;a href="https://ihl-databases.icrc.org/customary-ihl/eng/docs/v1_rul_rule1"&gt;principle of distinction&lt;/a&gt;, which seeks to minimize damage to civilian life and property, actually works toward setting legal rules that seek to prevent the negative civilian fallout of cyber conflict. There appears to be no reason why any cyberattack by a state should harm civilians without the state using all means possible to avoid this harm. If there is an ongoing armed conflict, this entails compliance with the IHL principles of &lt;a href="https://gsdrc.org/topic-guides/international-legal-frameworks-for-humanitarian-action/concepts/overview-of-international-humanitarian-law/"&gt;necessity and proportionality&lt;/a&gt;, ensuring that any collateral damage ensuing as a result of an operation is proportionate to the military advantage being sought.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Moreover, we agree that certain information operations may not cause any damage in terms of injury to human life or property. But IHL is not the only framework for governing cyber conflict. Ongoing cyber norms proliferation efforts are attempting to move beyond the rigid application of international law to account for the unique challenges of cyberspace. Despite the flaws in the process thus far, individuals from a variety of backgrounds and disciplines must engage meaningfully and shape effective regulation in this space. Singh’s “garb of impunity” exists when there are a lack of restrictions on collateral damage caused by cyber operations, to the detriment of civilian life and property alike.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;strong&gt;Obstacles in Developing Customary International Law&lt;/strong&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;His third argument is on the fetters limiting the development of customary international law in the cyber domain. This is a valid concern. Until recently, most states involved in cyber operations have adopted a stance of silence and ambiguity with regard to their legal position on the applicability of international law in cyberspace or their position on the Tallinn Manual.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;This is due to&lt;a href="https://www.cambridge.org/core/journals/american-journal-of-international-law/article/rule-book-on-the-shelf-tallinn-manual-20-on-cyberoperations-and-subsequent-state-practice/54FBA2B30081B53353B5D2F06F778C14"&gt; multiple reasons&lt;/a&gt;: First, states are not certain if the rules of the Tallinn Manual protect their long-term interests of gaining covert operational advantages in the cyber domain, which acts as a disincentive for strongly endorsing the rules laid out therein. Second, even those states keen on applying and adhering to the manual may not be able to do so in the absence of technical and effective processes that censure other states that do not comply. Given this ambiguity, states have demonstrated a preference to engage in cyber operations and counteroperations that are below the threshold—in other words, those that do not bring IHL into play. However,&lt;a href="https://www.cambridge.org/core/journals/american-journal-of-international-law/article/rule-book-on-the-shelf-tallinn-manual-20-on-cyberoperations-and-subsequent-state-practice/54FBA2B30081B53353B5D2F06F778C14"&gt; as &lt;/a&gt;&lt;a href="https://www.cambridge.org/core/journals/american-journal-of-international-law/article/rule-book-on-the-shelf-tallinn-manual-20-on-cyberoperations-and-subsequent-state-practice/54FBA2B30081B53353B5D2F06F778C14"&gt;others have convincingly argued&lt;/a&gt;, it is incorrect to assume that the current trend of silence and ambiguity will continue.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Recent developments indicate that the variety of normative processes and actors alike may render the Tallinn Manual more relevant as a focal point in the discussions. &lt;a href="https://www.gov.uk/government/speeches/cyber-and-international-law-in-the-21st-century"&gt;The &lt;/a&gt;&lt;a href="https://www.gov.uk/government/speeches/cyber-and-international-law-in-the-21st-century"&gt;UK&lt;/a&gt;, &lt;a href="https://www.lawfareblog.com/frances-cyberdefense-strategic-review-and-international-law"&gt;France,&lt;/a&gt; &lt;a href="https://www.lawfareblog.com/germanys-position-international-law-cyberspace"&gt;Germany&lt;/a&gt;, &lt;a href="https://www.justsecurity.org/64490/estonia-speaks-out-on-key-rules-for-cyberspace/"&gt;Estonia&lt;/a&gt;, &lt;a href="https://www.justsecurity.org/wp-content/uploads/2017/06/Cuban-Expert-Declaration.pdf"&gt;Cuba&lt;/a&gt; (backed by China and Russia), and the &lt;a href="https://www.justsecurity.org/wp-content/uploads/2016/11/Brian-J.-Egan-International-Law-and-Stability-in-Cyberspace-Berkeley-Nov-2016.pdf"&gt;United States&lt;/a&gt; have all engaged in public posturing in advocacy of their respective positions regarding the applicability of international law in cyberspace, in varying degrees of detail—which is essentially customary international law in the making. The statements made by a number of delegations at the recently concluded&lt;a href="https://twitter.com/RungRage/status/1176732729615908864"&gt; first substantive session&lt;/a&gt; of the United Nations’ Open-Ended Working Group covered a broad range of issues, from capacity building to the application of international law, which is the first step towards fostering consensus among the variety of global actors.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;strong&gt;Positive Conflict and the Future of Cyber Norms&lt;/strong&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The final argument—a theme that runs from the beginning of Singh’s article—is a stark criticism of Western-centric cyber policy processes. Despite attempts to foster inclusivity, efforts like those that produced the Tallinn Manual are still driven largely from and by the United States in an attempt to, as Singh describes it, keep “cyber offense fully potentiated.” This is an unfortunate reality, but one that is not limited solely to the cyber domain. For example, in an &lt;a href="https://people.duke.edu/~pfeaver/dunlap.pdf"&gt;excellent paper&lt;/a&gt; written in 2001, retired US Air Force Maj. Gen. Charles Dunlap explained “that ‘lawfare,’ that is, the use of law as a weapon of war, is the newest feature of 21st century combat.”&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;We are presented therefore with two options: either sit back and witness the hegemonization of policy discourse by a limited number of powerful states, or actively seek to contest these assumptions by undertaking adversarial work across standards-setting bodies, multilateral and multi-stakeholder norms-setting forums, as well as academic and strategic settings. In &lt;a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2916171"&gt;a recent paper&lt;/a&gt;, international law scholar Monica Hakimi argues that international law can serve as a fulcrum for facilitating positive conflict in the short run between a variety of actors across industry, civil society, and military and civilian government entities, which can lead to the projection of shared governance endeavors in the long run. Despite its several flaws, the Tallinn Manual can serve as a this type of fulcrum for facilitating this conflict.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;In writing a premature eulogy of efforts to bring to realization a set of norms in cyberspace, Singh dismisses that historically, &lt;a href="https://cis-india.org/internet-governance/files/gcsc-research-advisory-group.pdf"&gt;global governance regimes&lt;/a&gt; have taken considerable time  and effort to come into being and emerge after an arduous process of continuous prodding and probing. This process necessitates that any existing assumptions—and the bases on which they are constructed—are challenged regularly, so that we can enumerate and ultimately arrive at an agreeable definition for what works and what does not. Rejecting these processes in their entirety foments a global theater of uncertainty, with no benchmarks for cooperation that stakeholders in this domain can reasonably rely on.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='https://cis-india.org/internet-governance/blog/modern-war-institute-september-30-2019-arindrajit-basu-and-karan-saini-setting-international-norms-cyber-conflict-hard-doesnt-mean-stop-trying'&gt;https://cis-india.org/internet-governance/blog/modern-war-institute-september-30-2019-arindrajit-basu-and-karan-saini-setting-international-norms-cyber-conflict-hard-doesnt-mean-stop-trying&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Arindrajit Basu and Karan Saini</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    

   <dc:date>2019-10-14T15:04:03Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>




</rdf:RDF>
