Centre for Internet & Society

Recently, Punjab police detained three suspects on a tip-off. The cops clicked their photographs, uploaded them on an app called the Punjab Artificial Intelligence System or PAIS which uses facial recognition, and immediately got the lowdown on their criminal history, which involved a contract killing and looting. Four stolen vehicles and five weapons were recovered from them.

The article by Ketaki Desai with inputs from Sanjeev Verma in Chandigarh was published in the Times of India on March 31, 2019. Arindrajit Basu was quoted.

Police forces around the country are increasingly relying on artificial intelligence as both a crime-busting and prevention tool, despite concerns about the inbuilt biases of algorithms and the worry that policing may take precedence over privacy.

Atul Rai, founder and CEO of Gurgaon-based firm Staqu Technologies, which developed the app used by Punjab police, says they collect data from various state agencies and have built a database on criminals. “An NCRB (National Crime Records Bureau) report showed that 70% of crimes are committed by repeat criminals. Our facial recognition software works even on low-resolution photos and videos.” Police departments in UP, Uttarakhand and Rajasthan are armed with similar apps. And how likely is it that this facial recognition tech makes mistakes? It has 98% accuracy, claims Rai.

Other companies are also in the fray. Chennai’s FaceTagr has built a database of 75,000 photographs, and claims its app – currently being used in Tamil Nadu, Andhra Pradesh and Puducherry – is able to identify faces with an accuracy of 99.4%.

Facial recognition doesn’t just help catch the bad guys, it can also help trafficked kids reunite with their families. Vijay Gnanadesikan, CEO of FaceTagr, says they have begun to work with the Indian Railways to facilitate this. “Often, a child is not able to tell police officers where they are from. Now if a child is reported missing in Mumbai, and found in Bengaluru, the parents can be found if both police departments upload his or her photo.”

UP’s Trinetra app, developed by Staqu Tech, was launched in December 2018. I-G (crime) S K Bhagat says, “We didn’t have any centralised criminal database in UP before this.” The app informs the cops of the status of each person in custody, and whether they are out on bail. This means that if a chain-snatching takes place in Lucknow, they can filter suspects and also show the victim their photos for identification. “Right now we’re in the first phase of using the app; beat officers don’t yet have direct access but their SHO does.” To prevent misuse, a centralised monitoring system has been developed.

Staqu also offers predictive analytics: they collect data from news sites and blogs, aggregate information about crime patterns across the country, and create a ‘heat map’ for law enforcement agencies. Using this, they are able to make predictions about where, how and by whom a crime might be committed.

But predictive policing is problematic, points out Vidushi Marda of human rights organisation Article 19. “Data is often imperfect, and it’s easy for a deeply unequal, biased outlook of society to be embedded in data. This means that people who have always been arrested for no reason will continue to be arrested for no reason. And because it’s an algorithm, we can’t refute it,” she says.

Uncanny Vision, a Bengaluru-based company, makes smart CCTVs that are deployed in ATMs and number plate readers that are used on national highways in Kerala to curb speeding. Co-founder Navaneethan Sundaramoorthy says, “Our ATM CCTVs don’t use facial recognition, they just look at the actions. We take the video, convert it into information and toss away the video.” He thinks that privacy concerns are important. “Is it possible for the government to use it in any big-brother scenario? Yes, but that has always been the case. With this technology, we can control who has access to the information that comes from CCTVs, and there is a clear footprint about who has accessed the information,” he says.

Apar Gupta, executive director of the Internet Freedom Foundation, points out that there are no regulatory safeguards in place. “The first thing surveillance needs is a statutory framework—what is its purpose, how is it being carried out, and what limits they cannot go beyond. These surveillance systems are being made without adequate civil society engagement. There needs to be a third-party data protection authority to audit their activities.”

Arindrajit Basu, senior policy officer at the Centre for Internet and Society, says the advantages of AI in tracking crime have to be balanced with accountability. “The government can be taken to court for violation of a fundamental right, including the recently recognised right to privacy. But, most of this tech is being rolled out in collaboration with private actors. You can’t take a private company to court for this.” The solution might lie in private companies sharing liability with the government, he says. “That will ensure that they consider the ethical consequences of their products.”
