Safe Harbours: A Mirage of Intermediary Protection
The term “intermediary” generally refers to a facilitator of information across the Internet between the content creator and the consumer of data. Intermediaries deal with information spanning the entire spectrum from innocuous to harmful. This often raises the questions: Should intermediaries be made liable for content created by a third party even though they do not actively participate in its dissemination? Does imposing liability on an intermediary infringe upon prerogatives such as the freedom of speech and expression and the right to privacy, and is the immunity provided to them in the form of safe harbours adequate to fulfil its purpose?
This article aims to answer these questions through a comparative analysis of the safe harbour mechanism across different jurisdictions, and of how that mechanism is being diluted.
Even though the definition of an intermediary varies from country to country, its fundamental characteristics remain the same. The primary job of an intermediary is to receive, store and transmit information; it plays no part whatsoever in creating that information. Rather, it is third parties or users who create all forms of data, which is then received by intermediaries and transmitted to consumers. These online hosting platforms play no role in creating the content available on or stored by them; their job is to act as a bridge between content creators and consumers. Making intermediaries liable for everything posted by third parties is therefore not only excessively unreasonable, given the vast amount of data produced every day, which makes it impossible to track every act that qualifies as harmful or controversial, but also impinges upon the freedom of speech and expression of users by encouraging arbitrary censorship of online content.
On the other hand, conferring absolute immunity on online intermediaries for the content they host does not bode well for internet regulation either. An efficient and robust model needs to be in place to prevent fake news, political propaganda, hate speech, child pornography, etc. from entering society via online transmission. Such a model is the need of the hour for supervising active intermediaries, which play a relatively more operational role than passive and conduit intermediaries and hold greater potential for disseminating harmful information in the public sphere.
Dilution of Safe Harbours
In order to protect intermediaries from the illegal and unlawful acts of third parties, certain immunity is conferred upon them in the form of a “safe harbour”. The safe harbour exempts intermediaries who host, store and disseminate data from liability, unless they were aware of illegal content being stored and transmitted on their platform and failed to act upon it within a reasonable span of time. The purpose of the safe harbour is not limited to shielding intermediaries from arbitrary penalties; it also extends to ensuring that the fundamental rights of online users and consumers are not impinged upon through unlawful restriction of legitimate material posted by them and hosted and disseminated by intermediaries. In recent times, the safe harbour is gradually becoming a myth, with various jurisdictions introducing stringent legislation to bypass the mechanism and imposing excessive self-regulation duties, as well as liability, upon intermediaries.
The Information Technology Act, 2000 (IT Act) is the primary Indian legislation dealing with the liability of intermediaries for content generated by third parties. Section 79 of the Act grants safe harbour protection to intermediaries for third-party content of any kind, providing conditional immunity under the “due diligence” doctrine irrespective of the nature of the content. This in no way implies absolute immunity: intermediaries are under a mandate to remove content under a ‘notice and takedown’ regime, which requires an intermediary, upon receiving “actual knowledge”, to remove information that does not satisfy the test of lawfulness. If it fails to do so within the stipulated period of 36 hours under Rule 3(4) of the Information Technology (Intermediary Guidelines) Rules, 2011, the safe harbour protection is lost, making the intermediary directly liable for its failure to remove the unlawful content stored on, and perhaps transmitted through, its platform. In Shreya Singhal v. UOI, the Supreme Court read this requirement down, holding that an intermediary loses protection only when, “upon receiving actual knowledge from a court order or on being notified by the appropriate government or its agency that unlawful acts relatable to Article 19(2) are going to be committed”, it “then fails to expeditiously remove or disable access to such material”. This clearly implies that the intermediary is under no obligation to self-regulate: it does not lose safe harbour protection by refusing to take down allegedly unlawful content pursuant to a written intimation by a private party. Compliance with the conditions stated under Section 79 ensures safe harbour protection for the intermediary. This protects the right to freedom of speech and expression of third parties, as well as the intermediaries’ ability to deal with legitimate data.
The Information Technology Intermediary Guidelines (Amendment) Rules, 2018, proposed by the Ministry of Electronics and Information Technology with the aim of strengthening the online regulatory framework and curbing fake news on online platforms, have not helped the cause of the protection conferred by the safe harbour under Section 79. The draft guidelines effectively snatch away the conditional immunity granted to intermediaries, which shields them from unnecessary liability and enables them to function efficiently, and instead expand their obligations.
Rule 3(2) of the guidelines lists certain terms, including “harmful”, “obscene” and “hateful”, describing information that an intermediary cannot in any case deal with. An intermediary could thus be prohibited from hosting legitimate material that does not genuinely fall within any of the terms mentioned in Rule 3(2), and yet bear the brunt of liability, due to the sheer ambiguity of the provision’s language. This runs contrary to the observation made by the Supreme Court in Express Newspapers (Private) Ltd. and Anr. v. The Union of India (UOI) and Ors. that “if any limitation on the exercise of the fundamental right under Art. 19(1)(a) does not fall within the four corners of Art. 19(2), it cannot be upheld.” The vague terms listed in Rule 3(2) can be considered ultra vires the Constitution, for they fail to reflect the transparency and clarity demanded by Art. 19(2). The rule further makes a mockery of the safe harbour and blatantly violates Section 79 of the IT Act by imposing liability on any intermediary that fails to duly comply with it.
Under Rule 3(5), intermediaries are required to trace the origin of malicious information hosted by them when so required by authorised government agencies. The main rationale behind this clause is to curb fake news and bring the individuals involved in its dissemination to justice. Irrespective of the bona fide intention behind the provision, it is especially hard for platforms such as WhatsApp to detect unlawful content due to end-to-end encryption. Tracing information created by a third party would therefore not only be unfeasible but would also run the risk of contravening the judgement of the Supreme Court in K.S. Puttaswamy v. UOI, which held the right to privacy to be a fundamental right under Article 21 of the Indian Constitution.
Rule 3(9), which calls for “pro-active content monitoring”, further increases the burden on intermediaries by requiring them to ‘proactively identify and remove or disable public access’ to unlawful information or content. This is not a feasible way to police online material, owing to the huge volume of information stored and processed by online intermediaries, especially social media platforms. Besides, it opens the door to private censorship, which not only contravenes numerous judgements, including the Shreya Singhal case, but also dilutes the notice and takedown regime under Section 79.
The protection given to internet intermediaries in the United States is more flexible and wide-ranging than that in the Indian legal framework. According to Section 230 of the Communications Decency Act, 1996 (CDA), “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” This provision is considered one of the most influential tools for protecting intermediaries from arbitrary and unnecessary liability for the acts of third parties, fostering an Internet utopianism free of interference. It has also been considered a pioneer in protecting the free speech and expression of users, since intermediaries are not bound to remove information that does not meet the criterion of being ‘unlawful’, thus upholding the free speech clause of the First Amendment. Unfortunately, the legislation has not impeded Congress from passing laws that dilute the safe harbour protection, including the Stop Enabling Sex Traffickers Act and the Allow States and Victims to Fight Sex Trafficking Online Act, 2018, which expand the liability of intermediaries for hosting certain controversial advertisements despite their non-participation in creating them. Moreover, there have recently been reports of the Trump Administration seeking to curtail Section 230 of the CDA through an executive order. It remains to be seen whether the immunity enjoyed by intermediaries through safe harbours will persist or be stripped away.
Safe harbour for intermediaries in South Korea is essentially non-existent, owing to the penal provisions they may be subjected to for non-compliance with certain conditions. Although a notice and takedown regime is followed in the country, it is stringent compared to those of other jurisdictions. Intermediaries are mandatorily required to take down any unlawful or obscene material from their platforms under Article 22-3 of the Telecommunications Business Act (TBA). Failure to do so can result in a civil fine not exceeding twenty million won, or in the withdrawal of the intermediary’s registration. This is an extremely harsh provision, which not only imposes an immoderate penalty on intermediaries but also overlooks the passive role they play in dealing with online content.
The provision is complemented by Article 44-2 of the ‘Act Regarding Promotion of Use of Information Communication Networks and Protection of Information’. The arbitrary nature of this provision dilutes the safe harbour created by its notice and takedown procedure, as it makes intermediaries remove any information falling within the ambit of ‘unlawful’ under the Act. This poses a threat to legitimate information, which the authorities can censor arbitrarily on the basis of this provision. Such an intermediary framework presents an apt illustration of Internet paranoia, a term referring to exceptionally strict treatment meted out to hosting services.
The E-Commerce Directive, 2000 is the predominant legal framework in the European Union regulating the acts of intermediaries and imposing liability on them. Article 14 of the Directive provides a safe harbour and imposes conditional liability on hosting providers. Under it, a hosting service is exempt from liability “if it does not have any knowledge of illegal activity or information”; where it does have such knowledge, the hosting service must act expeditiously to remove the content.
Despite the flexibility provided by the Directive, it suffers from shortcomings that need to be rectified to provide clarity on intermediary liability. Terms including ‘expeditiously’, ‘actual knowledge’ and ‘illegal content or activities’ are not properly defined in the framework. This has resulted in states compelling online hosting services to remove content that does not meet the criteria of illegality, creating further ambiguity around the free speech and expression of users; the line protecting legitimate dissent may also be blurred by such lack of clarity. Furthermore, although Article 15 of the Directive prohibits member states from imposing on online intermediaries any general obligation to monitor the information they store or disseminate, it has not adequately fulfilled its purpose. States treat this prohibition as merely directory, so there remains a possibility of their obliging intermediaries to monitor every piece of information hosted by them, and of holding intermediaries liable, despite the safe harbour provision, for failing to comply with monitoring duties imposed in disregard of Article 15.
The need to regulate information in the online landscape cannot be refuted. It is extremely important in the wake of instances in which social media platforms have been widely used by perpetrators for everything from disseminating blasphemous content to inciting communal violence. But this does not mean that hosting services must bear the brunt. At the end of the day, they are only hosts of content, not its creators.
The principle of safe harbour was created specifically to protect content hosting platforms from liability for the acts of third-party users. Such conditional immunity is extremely necessary in contemporary times, given the wide array of information stored and transmitted by these platforms. Since this information is not created by the hosting services, there is no reason to make them liable merely for hosting it. Yet the regime is not being followed strictly by states; it is consistently being bypassed, both to stifle the free speech and expression of users and to impose excessive liability on intermediaries. Nations urgently need to recognise the importance of the safe harbour protection conferred on intermediaries, so that intermediaries can function to their full extent, while ensuring that the fundamental rights of online users are not curbed through unlawful restriction of their freedom of speech.
Authored by Mr. Tariq Khan, Principal Associate at Advani & Co. He was assisted by Mr. Bitthal Sharma, student of RGNUL, Punjab. This blog is a part of the RSRR Excerpts from Experts Blog Series, initiated to bring forth discussion by experts on contemporary legal issues.