Deepfakes: Algorithms at war, trust at stake
By Rajmohan Sudhakar, Deccan Herald, July 14, 2019
Now machines are learning to manipulate imagery, and that is a real worry. Take deepfakes: AI-manipulated videos produced by machine learning, fed by the humongous volume of images and videos now available online.
The danger is that this imagery could be yours or mine. Imagine neural networks creating convincing likenesses of our real selves and then posting videos of them. Absurd.
“Society has grappled with spurious and specious content in media over time. Media has been modified for various reasons, usually by those with access to significant resources and influence in the past,” says Elonnai Hickok, COO of the Bengaluru-based Centre for Internet and Society.
From an AI and machine-learning perspective, deepfakes can be understood through what are known as GANs, generative adversarial networks: essentially two algorithms at war. One is a generator, the other a discriminator. The generator produces fakes while the discriminator judges them against real examples; with each round of the contest, the fakes become harder to tell apart from the genuine article. These networks are behind the now-familiar deepfakes of popular figures floating around online. Barack Obama is seen saying in a purported deepfake, “stay woke bitches”, which of course he did not say.
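For the technically curious, here is a minimal sketch of that adversarial contest in Python, using PyTorch. The framework choice, the toy one-dimensional “real” data and every name below are illustrative assumptions for this article, not the workings of any actual deepfake tool, which trains far larger networks on images:

```python
# A toy GAN: a generator and a discriminator at war, as described above.
# Assumptions: PyTorch; the "real" data is a 1-D Gaussian, not images.
import torch
import torch.nn as nn

LATENT, DATA, BATCH = 8, 1, 64

# Generator: turns random noise into a fake sample.
G = nn.Sequential(nn.Linear(LATENT, 16), nn.ReLU(), nn.Linear(16, DATA))
# Discriminator: scores how likely a sample is to be real (0 to 1).
D = nn.Sequential(nn.Linear(DATA, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

bce = nn.BCELoss()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)

for step in range(2000):
    real = torch.randn(BATCH, DATA) + 4.0      # "real" samples drawn from N(4, 1)
    fake = G(torch.randn(BATCH, LATENT))       # the generator's forgeries

    # Discriminator's turn: learn to label real as 1, fake as 0.
    opt_d.zero_grad()
    d_loss = bce(D(real), torch.ones(BATCH, 1)) + \
             bce(D(fake.detach()), torch.zeros(BATCH, 1))
    d_loss.backward()
    opt_d.step()

    # Generator's turn: try to make the discriminator call its fakes real.
    opt_g.zero_grad()
    g_loss = bce(D(fake), torch.ones(BATCH, 1))
    g_loss.backward()
    opt_g.step()
```

The detach() call is the rivalry in miniature: the discriminator learns from the generator’s output without teaching it anything, while the generator improves only by fooling the judge. Over many rounds, the fakes drift ever closer to the real distribution.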
Another deepfake has Mark Zuckerberg boasting: “I have total control of billions of people’s stolen data, all their secrets, their lives, their futures.”

“Deepfakes are media modified by current technology and techniques. Easy availability of technology and media allows anyone to create, tailor or manipulate media for their own ends. Deepfakes present an opportunity for introspection and research into the contours of freedom of expression as well as societal frameworks for dealing with fake content,” explains Hickok.
One of the more horrid instances of a deepfake-style attack was the doctored video of an Indian woman journalist that surfaced not long ago; another was the child-kidnapping rumours that spread through WhatsApp and led to mob lynchings. There is, however, a view that in post-truth times viewers will treat deepfakes with caution, given the inherent dilemma over whether to believe anything one sees online.
“In India, people do not take these so seriously, especially on social media. It is mostly entertainment for many. Now, we are seeing people with diametrically opposing views. They often view content which they like to see. It would rather work as a reinforcer of views than a transformer,” feels political analyst Sandeep Shastri.
Freely available open-source software can create basic deepfakes, should someone want to hurt somebody. The potential scale of danger and damage looms larger for influential figures and for nations at war.
“While deep fakes can be used to damage societies, it is important that collectively society takes steps to become sensitised to ways that media can be used to manipulate opinions and choices, and allow people to develop skills that build awareness and context to what they see and believe,” adds Hickok.
A video emerged recently of an ‘Iranian’ boat near an oil tanker that had been attacked in the Persian Gulf. Deepfake or not, the video’s authenticity was questionable. Used craftily, it could have triggered a war.
According to Hickok, society has to become more resilient to manipulation. “This includes spoken, written, seen as well as heard information. We have to learn to question the basis on which we confirm trust. Multiple forms of verification may help to address spurious media and information,” she says.
Deepfakes are no surprise when social media feeds on the divisions and differences, small and large, of multitudes. The emergence of such potentially dangerous AI is not taken seriously by the tech czars; for them, it is a matter of economics.
Oscar Schwartz writes in The Guardian that ‘technological solutionism’ in the ‘attention economy’ may not be the answer. “And herein lies the problem: by formulating deepfakes as a technological problem, we allow social media platforms to promote technological solutions to those problems – cleverly distracting the public from the idea that there may be more fundamental problems with powerful Silicon Valley tech platforms,” Schwartz warns.
“The measures do not fall on the regulators alone. I think individuals (by introspection and building awareness), society (through education), the legal system (stringent evidentiary requirements and capacity building), industry (differentiating recreational and prejudicial content, tagging content that is manipulated, etc.) and regulators (enabling accountability, oversight, transparency and redress) can all contribute to a more resilient society,” observes Hickok.
In India, a video is still treated as something close to the truth, almost sacred, by the vast majority. It would not take a technologically advanced deepfake, especially in remote rural pockets, to rile up and aggravate biases and prejudices.
“Deepfakes can further existing biases and manipulate opinions and choices. They can disrupt trust inherent in societal groups to co-exist and politically, they can breed distrust in leadership and capability. That said, deepfakes can be used for humour and satire. Ultimately, the impact will be shaped by a number of factors including pre-existing biases, individual response, etc.,” Hickok elaborates.
On a lighter note, deepfakes could be helpful too. We could very well do away with some of our television news presenters.