
The Covid-19 pandemic has seen an extensive proliferation of misinformation and misleading information on the internet, which in turn has heightened the need for online intermediaries to promptly and effectively deploy their content removal mechanisms. This blogpost examines how this necessity may affect the best practices of transparency reporting and the obligations of accountability that these online intermediaries owe to their users, and formulates recommendations for preserving information regarding Covid-19 related content removal for future research.

 

This article first appeared on the CyberBrics blog. The author would like to thank Gurshabad Grover for his feedback and review.

Introduction

We are living through, to put it mildly, strange times. The ongoing pandemic has pinballed into a humanitarian crisis, revealing and deepening the severe class inequalities that exist today. The crisis has been exacerbated by an ‘infodemic’, as the World Health Organization (WHO) notes: a massive abundance of information, some of it inaccurate, has eroded trust in the reliability of online sources regarding the disease.

As a response to this phenomenon, in March, the Ministry of Electronics and Information Technology (MeitY) issued an advisory to all social media platforms, asking them to “take immediate action to disable/remove [misinformation on Covid-19] hosted on their platforms on priority basis.” This advisory comes at a time when several prominent online platforms, including Google, Twitter and Facebook, are also voluntarily stepping up to remove ‘harmful’ and misleading content relating to the pandemic. In the process, these intermediaries have come to rely increasingly on automated tools to carry out these removals, since their human moderation teams had to be sent home under lockdown norms.

While the intention behind these decisions is understandable, one must wonder how this new-found speed in removing content, prompted by the bid to rid the social media space of ‘fake news’, may affect the best practices of transparency reporting and the obligations of accountability that these online intermediaries owe to their users. In this piece, we explore these issues in a little more detail.

What is transparency reporting? 

Briefly speaking, transparency reports, in the context of online intermediaries and social media companies, are periodic (usually annual or half-yearly) reports that map the different policy enforcement decisions the company has taken regarding, among other things, surveillance and censorship. These decisions are carried out unilaterally by the company, in response to third-party notices (in the case of copyright-infringing content, for instance), or at the behest of state authorities. For instance, Google’s page on transparency reporting describes the process as “[s]haring data that sheds light on how the policies and actions of governments and corporations affect privacy, security, and access to information.”

To gauge the importance of transparency reporting in today’s age of the internet, it is instructive to consider their history. At the beginning of the past decade, Google was one of the only online intermediaries providing any kind of information regarding government requests for user data, or requests for removal of content.

Then, in 2013, the Snowden leaks happened. This was a watershed moment in the internet’s history, inasmuch as it revealed that these online intermediaries were often excessively pliant with government requests for user information, allowing governments backdoor surveillance access. Of course, all of these companies denied the allegations.

However, from this moment onwards, online intermediaries began to roll out transparency reports in a bid to repair their damaged goodwill, and as of last year, these reports had continued to grow more detailed, at least in the context of data and content related to users located in the US. A notable exception to this rule was the tech giant Amazon, whose reports are essentially a three-page PDF, with no nuance on any of the verticals mentioned.

Done well, these reports are invaluable sources of information about things like the number of legal takedowns effectuated by the intermediary, the number of times the government asked the intermediary for user information for law enforcement purposes, and so on. This in turn becomes a useful way of measuring the breadth of government and private censorship and surveillance. For instance, this report shows that government emergency requests sent to Facebook have doubled since 2019, which is concerning, since it is not clear what the company means by an ‘emergency’ request, and whether its understanding matches that provided under Indian law. This, in turn, makes it difficult to ascertain the nature of the information that the company is handing over to the government.

Best practices and where to find them

While transparency reports are great repositories for gauging the breadth of government censorship and surveillance, one early challenge has been the lack of standardized reporting. Since these reports were mostly autonomous initiatives by online intermediaries, each of them has taken its own form, which in turn has made comparisons between them difficult.

This has since been addressed by a number of organizations, including the Electronic Frontier Foundation (EFF), New America and Access Now, each creating its own metrics for evaluating transparency reports. More definitively, in the context of content removal, in 2018 a group of academics, organizations and experts collaborated to formulate the ‘Santa Clara Principles on Transparency and Accountability in Content Moderation’, which have since received the endorsement of around seventy human rights groups. Taken together, these standards and methodologies for analysing transparency reports present a considerable body of work against which content removals can be mapped.

Content takedown in the time of pandemic

In some of our previous research, we have argued that the speed of removal, or the time taken by an intermediary to remove ‘unlawful’ content, says nothing about the accuracy of the said action. Twitter, for instance, can say that it took some ‘action’ against 584,429 reports of hateful conduct over a specified period; this does not mean that all the action it took was accurate or fair, since there is very little publicly available information with which to comprehensively gauge how effective or accurate the removal mechanisms deployed by these intermediaries are. The heightened pressure to deal with harmful content related to the pandemic can contribute further to, one, the removal of perfectly legitimate content (as examples from Facebook show, and as YouTube has warned in its blogs), and two, the deepening of the information asymmetry regarding accurate data around removals.

Given the diverse nature of the misinformation and conspiracy theories relating to the pandemic currently present on the internet, this is a critical time to study the relationship between online information and the outcomes of a public health crisis. However, these efforts stand to be thwarted if reliable information about removals relating to the pandemic continues to be unavailable.

How to map removals in these times?

One, as the industry body IAMAI notes, while positive, collaborative steps between social media companies and the government to curb misinformation are welcome, any takedown at the behest of the state must take the correct legal path, as mandated by the provisions of the Information Technology (IT) Act. Additionally, all information regarding content takedowns carried out to remove fake news related to Covid-19 must be collected and preserved separately by these companies, and subsequently represented in their transparency reports.
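To make this concrete, the separate record-keeping suggested above need not be elaborate. The following is a minimal, purely illustrative sketch, assuming a simple tagging scheme; the field names, categories and sample entries are hypothetical and are not drawn from any company’s actual systems or reports.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class TakedownRecord:
    """One content removal, annotated for later transparency reporting.
    All field names here are hypothetical, for illustration only."""
    content_id: str
    removed_on: date
    legal_basis: str      # e.g. a legal order under the IT Act, or the platform's own policy
    requested_by: str     # "government", "third party", or "platform (suo motu)"
    covid_related: bool   # flag Covid-19 misinformation removals separately
    automated: bool       # whether an automated tool made the initial decision

# Illustrative records; aggregating Covid-19 related removals for a report period:
records = [
    TakedownRecord("post-001", date(2020, 4, 2), "government advisory", "government", True, False),
    TakedownRecord("post-002", date(2020, 4, 5), "platform policy", "platform (suo motu)", True, True),
    TakedownRecord("post-003", date(2020, 4, 7), "court order", "third party", False, False),
]
covid_removals = [r for r in records if r.covid_related]
print(f"Covid-19 related removals this period: {len(covid_removals)}")
```

Tagging each removal at the time it happens, rather than reconstructing the data later, is what makes separate reporting of pandemic-related takedowns feasible.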

Two, if the recent case of Twitter fact-checking Donald Trump’s tweet on electoral ballots is any indication, an online intermediary’s suo motu enforcement of its internal speech norms may take different shapes beyond the usual takedown/leave-up binary, including fact-checking and warning labels for conspiratorial content (Facebook, for instance, has adopted measures that connect users interacting with Covid-19 related misinformation to verified sources of information). Accordingly, information regarding these additional measures, including their efficacy, must be mapped and presented in transparency reports.

Additionally, several of these companies have stepped up their use of automated moderation tools and systems to respond quickly to the spread of disinformation on their platforms. However, as YouTube’s Creator Blog warns its users, some of these removals may be erroneous, and users would accordingly have to appeal these decisions. Therefore, while information regarding removals prompted by the use of these tools must be preserved and represented separately, these numbers should also be expanded to include the error rates of these automated tools, and the rate at which posts removed in error are reinstated.
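The additional disclosure argued for here amounts to two derived figures published alongside raw removal counts. The snippet below is a minimal sketch of how they could be computed; the numbers are invented for illustration and do not come from any actual transparency report.

```python
# Hypothetical figures for one reporting period (illustrative only).
automated_removals = 120_000      # posts removed by automated tools
appeals_filed = 9_500             # removals that users appealed
reinstated_after_appeal = 4_200   # removals reversed on appeal

# Error rate: share of automated removals later found to be mistaken.
# Approximated here by successful appeals; the true rate is likely higher,
# since not every wrongly removed post is appealed.
error_rate = reinstated_after_appeal / automated_removals

# Reinstatement rate: share of appealed removals that were reversed.
reinstatement_rate = reinstated_after_appeal / appeals_filed

print(f"Automated removals:   {automated_removals}")
print(f"Estimated error rate: {error_rate:.2%}")
print(f"Reinstatement rate:   {reinstatement_rate:.2%}")
```

Reporting both rates matters: a low appeal-based error rate alone can understate mistaken removals, while the reinstatement rate shows how often the appeals process actually corrects them.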

Three, as previous research on transparency reporting has shown, there is a substantial gap between the information provided by these companies for users based in the US and that provided for users based in other countries. This is problematic on several counts. Given the expansive issues with the laws relating to content removal in India, this inadequate representation of information makes it impossible to gauge the practical ramifications of the opaque legal system and, accordingly, makes reform difficult. In the current times, this lack of information may also paint an imperfect picture of government censorship. After all, the Indian government has, on multiple occasions, earned the dubious reputation of sending flawed legal takedown notices and forcing intermediaries to censor content nevertheless.

Therefore, this continued refusal to provide more nuanced information in the context of India will continue to facilitate these practices, and only increase the breadth of censorship of digital expression.

While removing harmful information from social media platforms at this stage of the crisis might be necessary, that necessity must not circumvent adherence to minimum standards of transparency and accountability. If the Snowden leaks are any indication, online companies can be made to change their policies during watershed moments in history. The current Covid-19 crisis is one such moment, both offline and online, and the need is more pressing than ever for these companies to step up and do better.


Shared under Creative Commons BY-SA 4.0 license
