Mohan Katarki

IT Act, 2000: Should India Revisit the Safe Harbour Principle to Quell Fake Information?

The role of intermediaries in social media has attracted considerable attention after Twitter dramatically blocked the account of Donald Trump (when he was still the President of the United States of America), allegedly for his role in the events at the Capitol on 6th January, 2021. Public expectations are rising that the intermediaries, which are multinational tech companies, should play a proactive role in stifling posts which are in the nature of fake information. This paper does not deal with criminal liability and is confined to an analysis of civil action against intermediaries in combating the menace of fake information.


The internet undoubtedly opened the door for interactive digital communication. The technology promoted by multinational tech companies soon became popularly institutionalised as social media. Social media facilitates photo sharing, blogging, social gaming, social networking, video sharing and much more. Depending on their utility, social media platforms are categorised as social networking sites, social review sites, image sharing sites, video hosting sites, discussion sites and sharing economy sites.[1] Among these, social networking sites and video hosting sites have caught the public eye. WhatsApp, Facebook, Twitter, Signal, etc., are popular social networking platforms; YouTube is a well-known video hosting site. The fingers of about three billion people in the world have touched these apps. While kids play games, public officials and politicians are known to engage with the public on social media. Law enforcement officials have been using social media to prevent crimes, trail suspects and prove charges. The accused, too, rely on social media chats to rebut false charges. The all-pervasive use of, and dependence on, this technology is an undisputed fact of the 21st century. However, there is a serious downside to social media. The abuse of freedom of speech by posting fake information (fake news) on social media has reached alarming proportions, in the form of clickbait, propaganda, satire, sloppy journalism, misleading headlines and biased news.[2] Fake information may prejudicially affect the reputation of individuals, particularly women, disturb peace and tranquillity, poison the minds of the innocent with hateful ideas, and more.
Oxford Professor of political communication Rasmus Kleis Nielsen says "the problems of disinformation in a society like India might be more sophisticated and more challenging than in the West".[3] Of course, after the events at the Capitol on 6th January, fuelled by the false propaganda on social media that the election was stolen, it can be seen that western society is equally vulnerable to fake information. If truth is the essence of an orderly society, fake information disseminated through social media is doing the greatest disservice to society. The need for legal control of social media to eliminate the menace of fake information is therefore a pressing urgency the world over.


The legal control of technology is essentially a regulation of the conduct of persons in their ownership and/or management of technology. The key to the regulation of technology is the identification of the specific acts or omissions which, if not controlled, would cause injury. With regard to networking social media, the actors are the host of the App, the web hosting service provider, the internet service provider on mobile and the end-users or customers. The end-users communicate or share among themselves photos, videos, blogs and messages, or play games. The seamless juridical relationship among the actors in social media is founded in contract, with the App host as licensor and the end-users as licensees. The regulatory regime treats the App host, the web hosting service provider and the internet service provider as intermediaries, as is evident from the definition of "intermediary" in Sec. 2(1)(w) of the Information Technology Act, 2000 (ITA 2000) enacted by the Indian Parliament.


Fake information is actionable in the common law of torts provided it is defamatory or amounts to a malicious or negligent misstatement. If it is prejudicial to peace and tranquillity, social morals, etc., it may invite criminal investigation. In India, defamation is actionable as a civil wrong as well as prosecutable as a crime. However, not every piece of fake information amounts to defamation, misstatement or crime. A large part of such information is not actionable, in the interest of free speech, academic freedom, etc., even though it may be insidious and, at times, have a poisonous effect on society. Actionable and non-actionable fake information must therefore be addressed separately.


The booming internet technology business came to the attention of legislatures around the world in the 1990s for appropriate regulation. Orderly growth and, at the same time, the prevention of misuse were the prime purposes demanding legal regulation. The Indian Parliament accordingly enacted ITA 2000 to "provide legal recognition for transactions carried out by means of electronic data interchange and other means of electronic communication", as stated in the Preamble. While granting recognition to paperless transactions and facilitating digital signatures, ITA 2000 also enacted penal provisions and regulated the civil liabilities of intermediaries. Subsequently, by the amendment Act 10 of 2009, the common law of tort was modified to immunise intermediaries from tortious liability. The pre-amendment Sec. 79 had only immunised intermediaries from liability with regard to offences committed under ITA 2000. However, Sec. 79(1) of ITA 2000 (as substituted by the amendment Act 10 of 2009) immunised intermediaries from civil liability and, further, extended the immunity to offences beyond ITA 2000. This immunity is otherwise known as the safe harbour principle. The immunity is made subject to the conditions mentioned in sub-sections (2) and (3); among these, the important condition is that an intermediary should not undertake to "select or modify the information contained in the transmission", as stated in Sec. 79(2)(b)(iii). If an intermediary violates these conditions, it loses the benefit of the immunity granted under sub-section (1) and becomes liable in law, which may include payment of damages for hosting defamatory material knowingly or negligently. The intermediary is also mandated to observe due diligence and the guidelines framed by the Central Government under Sec. 79(2)(c).
The guidelines framed by the Central Government prescribe dos and don'ts and spell out the steps to be taken. In view of the immunity in Sec. 79 of ITA 2000 (as substituted by the amendment Act 10 of 2009), the end-user of a social media platform who publishes a post is alone liable in civil proceedings. If someone posts fake information on his Facebook page, he is liable in law for payment of damages, etc., but not the intermediary Facebook, which hosts the social networking site.


Even though more than a decade has passed since Sec. 79 of ITA 2000 was substituted by the amendment Act 10 of 2009, very few cases have reached the Supreme Court or the High Courts, and the benefit of a fuller judicial view is not available. In the recent case of Google India (P) Ltd. v. Visaka Industries[4], dealing with Google's claim to immunity as an intermediary, the Supreme Court observed: "In the case, it is found that in spite of the first respondent's complaint issuing notice about dissemination of defamatory information on the part of (A-1) Accused 1– appellant did not move its little finger to block the material or to stop dissemination of unlawful and objectionable material. This conduct of the appellant disentitles it from claiming protection either under the provisions of the unamended Section 79 or under Section 79 after substitution". In Shreya Singhal v. Union of India[5], the seminal decision in Indian information technology law, the judges did not have an occasion to critically examine the role of intermediaries in discharging the responsibility of due diligence. The Delhi High Court, in Myspace Inc. v. Super Cassettes Industries Ltd[6], was faced with an interesting techno-legal question on the interpretation of the expression "modify". The court found: "Now, on the third sub-clause of whether MySpace selects or modifies information, this court at a prima facie stage finds that firstly the modification is to the format and not to the content secondly even the process of modifying the format is an automatic process without either MySpace's tacit or expressed control or knowledge. In the circumstances this Court concludes that MySpace prima facie complies with the requirements of Section 79(2)(b)". The first proposition enunciated by the court is implicit in Sec. 79(2)(a).
The second proposition, if read literally, may give scope to an argument that an intermediary may modify content or acquire editorial rights through a machine of its choosing equipped with artificial intelligence. The role of artificial intelligence in assisting intermediaries is a study that falls outside the province of this article.


The precursor to the legal immunity granted by the Indian Parliament under Sec. 79(1) by the 2009 amendment to ITA 2000 is the European Union's Directive on Electronic Commerce, 2000. Articles 12 to 14[7] mandate that the Member States of the European Union shall ensure that intermediaries, classified as mere conduit, caching and hosting providers, are not held liable. One of these conditions, in clause (c) of Art. 12, mandates that the service provider should not undertake to "modify the information contained in the transmission". This is similar to the condition imposed by the Indian Parliament in Sec. 79(2)(b)(iii) of ITA 2000. However, the safe harbour principle, which first originated in the United States of America, has been formulated differently there. Sec. 230(c)(1) of the Communications Decency Act enacted by the United States Congress in 1996 (CDA 1996), while granting immunity to intermediaries from liability (the "Good Samaritan" provision), merely declares that an intermediary is not to be treated as a "publisher or speaker".[8]


The safe harbour principle overrides the pre-existing common law. The English common law position is that innocent dissemination is permissible provided that the disseminator had no knowledge of the defamatory content and/or his failure to detect it was not due to negligence. The leading case on this is Vizetelly v. Mudie's Select Library Ltd[9]. In John Bunt v. David Tilley[10], Justice Eady said: "In determining responsibility for publication in the context of the law of defamation, it seems to me to be important to focus on what the person did, or failed to do, in the chain of communication. It is clear that the state of a defendant's knowledge can be an important factor. If a person knowingly permits another to communicate information which is defamatory, when there would be an opportunity to prevent the publication, there would seem to be no reason in principle why liability should not accrue. So too, if the true position were that the applicants had been (in the claimant's words) responsible for 'corporate sponsorship and approval of their illegal activities'". The common law of defamation, as applied in India, is the same as that applied by the English courts, by virtue of Art. 372 of the Constitution of India (COI). The law in the United States of America also does not seem to be against protecting intermediaries from civil liability in actions for defamation. The US federal court in Cubby v. CompuServe[11], dealing with the issue before the enactment of Sec. 230 of the CDA, came to the conclusion that the ISP (intermediary) was not liable for defamatory content on a discussion forum because, according to the court, the ISP was not a publisher but merely a distributor of content. However, the intermediary was held liable in Stratton Oakmont v. Prodigy Services[12] because the defendant Prodigy exercised editorial control to make content fit for children. These decisions are in line with the earlier decision of the US Supreme Court in 1959 in Smith v. California[13], where the view was taken that a bookseller is a distributor and, therefore, no liability can be imposed on him.


Self-regulation by intermediaries has gained importance in quelling fake information. Due diligence as a duty of care is a common law concept in torts. It travelled to property and service contracts, and is now a good defence even against a charge of breach of regulations. In the last decade, it has been extensively adopted by legislatures in the United States of America, the European Union and common law jurisdictions to promote legality, particularly in corporate transactions. Due diligence as a duty of care is the responsibility of intermediaries in common law. The observance of due diligence is also mandated by Sec. 79(2)(c) of ITA 2000 read with the Information Technology (Intermediary Guidelines) Rules, 2021, to identify, flag and block accounts found to be involved in peddling fake information[14]. However, it seems that the scope of the obligation of self-regulation, duty of care or due diligence is severely restricted by the rule against modification or editing in Sec. 79(2)(b)(iii). Intermediaries may not wish to be proactive and risk losing the immunity from liability, because due diligence under Sec. 79(3) is linked to the mandate against modification or editing of content in transmission under Sec. 79(2)(b)(iii). Intermediaries may be happy to play a passive role, since a proactive role in discharging the duty of due diligence or duty of care is cost-prohibitive and may affect their bottom line.


The safe harbour principle did not have a safe sail through Parliament. In this regard, it is worth recalling the critical observations of the Parliamentary Standing Committee[15] in 2007:


“9…. The Department’s reasoning for not making the intermediaries/service providers liable in certain cases that a general consensus was arrived at, while discussions were going on the amendments to the IT Act, to the effect that the intermediaries/service providers may not be knowing what their subscribers are doing and hence they should not be penalised.  The Committee do not agree with this.  What is relevant here is that when their platform is abused for transmission of allegedly obscene and objectionable contents, the intermediaries/service providers should not be absolved of responsibility.  The Committee, therefore, recommend that a definite obligation should be cast upon the intermediaries/service providers in view of the immense and irreparable damages caused to the victims through reckless activities that are undertaken in the cyber space by using the service providers’ platform.  Casting such an obligation seems imperative, more so when it is very difficult to establish conspiracy or abetment on the part of the intermediaries/service providers, as also conceded by the Department”.


Prima facie, the immunity from liability may not meet the test of Arts. 14 and 21 of the COI. The legislative grant under Sec. 79(1) of ITA 2000 appears discriminatory and arbitrary. It is a suspect classification, as there is no legitimate object sought to be achieved at the cost of an individual's right to reputation protected under Art. 21 of the COI. If a purpose deserving the grant of immunity existed in 2009, despite the strong reservations of the Parliamentary Standing Committee extracted above, it has arguably ceased to exist now in the light of experience. The alarming spread of fake information has proved that such an immunity is counter-productive.

Even in the United States of America, the removal of the legal immunity granted to intermediaries by Sec. 230 of CDA 1996 is extensively debated. However, the learned authors Ellen P. Goodman and Ryan Whittington say that its repeal would be "misguided and destructive" to the digital economy;[16] a narrowly tailored immunity reform appears reasonable to them. Vanessa S. Browne-Barbour, in her article[17], has suggested that "the revised statute should clarify that intermediaries, as distributors, will be held liable for communicating defamatory material after receiving notice of the defamation.  This result is consistent with traditional defamation law, where a distributor or secondary publisher is liable for communications when it has actual or constructive knowledge of defamatory content". Benjamin W. Cramer, in his article[18], has suggested a proactive role for intermediaries as part of corporate ethical accountability. He says: "… social media platforms can avoid this conundrum through a focus on corporate responsibility and citizenship that in turn encourages a proactive policy on the types of behaviour they will condone and a plan for removing objectionable content as they see fit.  After all, they are private businesses with an interest in the benefits of ethical behaviour, and as discussed here, Section 230 allows responsible policing of content. A focus on accountability by social media firms over the content hosted on their platforms can have strategic advantages as experienced by practitioners of CSR, and will also avoid a ruinous fight over what is or is not allowable under Section 230".


The United Kingdom has not adopted the safe harbour principle to grant intermediaries immunity from liability in defamation actions. It has allowed the common law of torts to prevail, subject to modifications adjusting the interests of intermediaries, of individuals in reputation and of the public in truth. The Defamation Act, 2013, in Sec. 5(2), states that "It is a defence for the operator to show that it was not the operator who posted the statement on the website"[19]. In Sec. 5(3), three circumstances are listed which defeat the defence of the operator or intermediary. The explanatory memorandum to the regulations framed under the Act of 2013 states, in para 7.3, that "Sec. 5 of the Act rebalances the law by providing additional protection for website operators in the form of a new defence in the circumstances set out in para 4.2 above (Sec. 5 of the Act of 2013)".[20]


Summing up, fake information on social media, which has been menacingly affecting the vital interests of individuals and the public at large, needs to be addressed by restoring the civil liability of intermediaries. The decades-old safe harbour rule, which grants intermediaries immunity from liability in the common law of torts, is outdated and should be abolished as early as possible. If an intermediary had knowledge of defamatory information, or failed to detect it due to negligence, it must be held liable in law. Parliament may also modify the common law of defamation to rebalance the conflicting interests of the stakeholders. The legislative formulation in Sec. 5 of the Defamation Act, 2013, enacted by the British Parliament, seems a rational model for consideration, since it protects the interests of individuals in reputation and of the public in truth while accommodating the interests of intermediaries. Pro-activeness on the part of intermediaries in discharging their duty of care is necessary to deter the spread of fake information.

 

[1] “The 7 different types of social media”, Biteable, available at https://biteable.com/blog/the-7-different-types-of-social-media/.

[2] “Explained: What is False Information (Fake News)?”, Webwise, available at https://www.webwise.ie/teachers/what-is-fake-news/.

[3] “India’s Disinformation War More Complex Than in West: Oxford Prof”, The Quint, available at https://www.thequint.com/news/india/media-coverage-disinformation-in-india-interview-rasmus-nielsen (06/10/18).

[4] (2020) 4 SCC 162, pp. 187-188.

[5] (2015) 5 SCC 1.

[6] (2017) 236 DLT 478 (DB).

[7] Art. 12(1) of the Directive reads: "Where an information society service is provided that consists of the transmission in a communication network of information provided by a recipient of the service, or the provision of access to a communication network, Member States shall ensure that the service provider is not liable for the information transmitted, on condition that the provider: (a) does not initiate the transmission; (b) does not select the receiver of the transmission; and (c) does not select or modify the information contained in the transmission."

[8] The Congress enacted it for "two basic policy reasons: to promote the free exchange of information and ideas over the Internet and to encourage voluntary monitoring for offensive or obscene material", as mentioned by the federal court in Carafano v. Metrosplash.com, Inc., 339 F.3d 1119, p. 1123, approved in Hassell v. Bird, 5 Cal. 5th 522 (2018).

[9] (1900) 2 QB 170, p. 180.

[10] (2007) 1 WLR 1243.

[11] 776 F. Supp. 135 (SDNY 1991).

[12] 1995 N.Y. Misc. LEXIS 712.

[13] 361 US 147, pp. 152-153.

[14] The Rules of 2021 have invited strong opposition from digital media who are not strictly intermediaries but publishers. Besides, the Rules have also invited concerns from opposition and rights activists for investing arbitrary powers in the Government and its agencies. However, these issues are outside the scope of this paper.

[15] The extract of the recommendation of the Parliamentary Standing Committee on intermediaries, headed by Sri Nitish Kumar, is available at pages 15-16 of the book Law of Intermediaries by Pavan Duggal, First Edition (2016), Universal Law Publication.

[16] Ellen P. Goodman and Ryan Whittington, “Section 230 of the Communications Decency Act and the Future of Online Speech”, Jstor, p.12, available at https://www.jstor.org/stable/resrep21228?seq=1#metadata_info_tab_contents (01/08/19).

[17] Vanessa S. Browne-Barbour, "Losing Their License to Libel: Revisiting § 230 Immunity", Berkeley Technology Law Journal, Vol. 30, No. 2 (Fall 2015), pp. 1505-1560, at 1560.

[18] Benjamin W. Cramer, "From Liability to Accountability: The Ethics of Citing Section 230 to Avoid the Obligations of Running a Social Media Platform", Journal of Information Policy, Vol. 10 (2020), pp. 123-150, at 144.

[19] See pages 378-380, Winfield & Jolowicz on Tort, Twentieth Edition (2020), Sweet & Maxwell (Thomson Reuters).

[20] Ministry of Justice, “Explanatory Memorandum to The Defamation (Operator of Websites) Regulations, 2013 (Draft)”, Government of U.K., available at https://www.legislation.gov.uk/ukdsi/2013/9780111104620/pdfs/ukdsiem_9780111104620_en.pdf.


This article has been authored by Mr. Mohan Katarki, Senior Advocate, Supreme Court, New Delhi. The author would like to appreciate and thank Ms. Anandita Bhargava, a student at RGNUL, Punjab, for her excellent research assistance in writing this article. This blog is a part of RSRR’s Excerpts from Experts Blog Series, initiated to bring forth discussion by experts on contemporary legal issues. A PDF form of this article is available here: Should India Revisit on Safe Harbour Principle to Quell Fake Information?- Mr. Mohan Katarki.


