Centre for Internet & Society

The proliferation of lies and manipulative content supplies an ever-willing state with a pretext to step up surveillance.

The op-ed was published in Hindu Businessline on September 7, 2018.


After a set of rumours spread over WhatsApp triggered a series of lynchings across the country, the government recently took the interesting step of placing the responsibility for this violence on WhatsApp. This is especially noteworthy because the party in power, like many other political parties, has taken to campaigning over social media, using WhatsApp groups in a major way to spread its agenda and propaganda.

After all, a simple tweet or message can be shared thousands of times and make its way across the country several times over before the next day’s newspaper is out. Nonetheless, while the use of social media has led to a lot of misinformation and deliberately polarising ‘news’, it has also enabled remarkable acts of altruism and community, as seen during the recent Kerala floods.

While the government has taken a seemingly techno-determinist view by placing responsibility on WhatsApp, this duality of highly visible uses has led others to view WhatsApp and other internet platforms simply as tools, at the mercy of the user. However, as the historian Melvin Kranzberg noted, “technology is neither good nor bad; nor is it neutral”. And while the role of political and private parties in spreading polarising views should be rigorously investigated, it is also true that these internet platforms are creating new and sometimes damaging structural changes in how our society functions. A few prominent issues are discussed below.

Fragmentation of public sphere

Jürgen Habermas, the noted sociologist, conceptualised the public sphere as “a network for communicating information and points of view, where the streams of communication are, in the process, filtered and synthesised in such a way that they coalesce into bundles of topically specified public opinions”.

To a large extent, the traditional gatekeepers of information flow, such as radio, TV and mainstream newspapers, performed functions that enabled a public sphere. For example, if a truth-claim about an issue of national relevance was to be made, it would first need an editor’s approval.

If there was a counter-claim, that too would have to pass an editorial check. Today, however, nearly anybody can become a publisher of information online, and if a post catches the attention of the right ‘influencer’, it can spread far wider and far quicker than it would have in traditional media. While this has the huge positive of giving space to more diverse viewpoints, it also comes with two significant downsides.

First, it gives a sense of ‘personal space’ to public speech. An ordinary person would think a few times, do some research, and perhaps rehearse before giving a speech to 10,000 people. The same person would think for perhaps five seconds before putting out a tweet on the very same topic, despite now having a potentially global audience.

Second, because messages are sent directly to one’s hand-held device, rather than being open for anyone to fact-check and counter, there is less transparency and accountability for those who send polarising material and misinformation. How can a mistaken and polarising claim be countered if one doesn’t even know it is being made? And if it can’t be countered, how can its spread be contained?

The attention market

Not only is that earlier conception of the public sphere being fragmented, but these new, networked public spheres are also owned by giant corporations. This means that the spaces where critical discourse is shaped and spread are actually governed by advertisement-financed global conglomerates. In a world of information overflow and privately owned, ad-financed public spheres, the new unit of currency is attention.

It is in the direct interest of the Facebooks and Googles of the world to capture user attention for as long as possible, regardless of what type of activity that encourages. It goes without saying that neither the ‘mundane and ordinary’ nor the ‘nuanced and detailed’ captures people’s attention nearly as well as the sensational and exciting.

Nearly as addictive, studies show, are the headlines and viewpoints that confirm people’s biases. Fed by algorithms that understand the human desire to ‘fit in’, people are drawn into echo chambers where like-minded people find each other and continually validate one another. When people with extremist views are guided to each other by these algorithms, they not only gather validation but also use these platforms to confidently air their views, thus normalising what was earlier considered extreme. Needless to say, internet platforms grow richer in the process.

Censorship by obfuscation

Censorship in the attention economy no longer requires blocking views or interrupting the transmission of information. Rather, it is sufficient to drown out relevant information in an ocean of other information. Fact-checking news sites face this problem: regardless of how often they fact-check politicians’ speeches, only a minuscule percentage of the original audience comes to know about, much less care about, the corrections.

Additionally, repeated baseless attacks on the credibility of news sources cause confusion about which sources are trustworthy. In her extremely insightful book “Twitter and Tear Gas”, Prof Zeynep Tufekci rightly points out that rather than practising traditional censorship, powerful entities today (often states) focus on overwhelming people with information, producing distractions, and deliberately sowing confusion, fear and doubt. Facts often don’t matter, since the goal is not to be right but to cause enough confusion and doubt to displace narratives that are problematic to these powers.

Viewpoints from members of groups that have been historically oppressed are especially harangued. And the oppressed tend to have less time, energy and emotional resources to continuously deal with online harassment, especially when their identities are known and the harassment can easily spill over into the physical world.

Conclusion

Habermas saw the ideal public sphere as one that is free of lies, distortions, manipulations and misinformation. Needless to say, this is a far cry from our reality today, in which all of the above are available in unhealthy doses. It will take tremendous effort to fix these issues, and it is certainly no longer sufficient for internet platforms to claim they are neutral messengers. Further, whether or not these systemic changes are understood, if they are not addressed they will continue to create and expand fissures in society, giving the state seemingly valid cause to intervene through backdoors, surveillance and censorship, actions that states have historically been only too happy to take.
