From Shield to Sword: How Safe Harbour Became the State’s Tool of Platform Control
- Yukta Chordia & Kanika Chhajerh
- Mar 6
- 9 min read
Introduction
Recent regulatory interventions, including the amendment concerning Synthetically Generated Information, the Ministry of Electronics and Information Technology’s advisory on unlawful and obscene content, and the evolving understanding of ‘reasonable efforts’ under the intermediary liability framework, reflect a broader shift toward heightened compliance expectations within India’s platform governance regime. While each of these measures has been analysed individually (see here, here, and here), this piece takes a step back to identify a connecting pattern across them.
It begins by revisiting the original rationale of safe harbour under Section 79 of the Information Technology Act, 2000, then examines how recent regulatory interventions interact with that framework, and finally argues that a regime originally intended to protect intermediaries is increasingly being repurposed to influence platform behaviour and regulate speech by proxy. The concern is not merely theoretical: these developments have the potential to alter the practical operation of safe harbour without formal legislative amendment.
This piece does not propose a comprehensive redesign of intermediary liability, which would require extensive legislative deliberation (already underway) and stakeholder consultation. Instead, it highlights an emerging pattern in the current regulatory approach and evaluates its implications for the operational integrity of safe harbour.
The Original Intent of Safe Harbour
The Information Technology Act, 2000 (‘IT Act’) conceptualises ‘intermediaries’ as entities that facilitate the transmission, storage, or hosting of third-party content, rather than create or control it. The defining feature of this role is facilitation rather than authorship. This understanding forms the foundation of the safe-harbour regime under Section 79 of the Act, which shields intermediaries from liability for third-party content, subject to limited conditions such as observing due diligence. The rationale underlying this protection is to prevent intermediaries from being exposed to liability for content they neither create nor control, thereby protecting the growth of digital platforms and the free flow of information.
However, this immunity is not absolute. Section 79(3)(b) withdraws protection where, upon receiving ‘actual knowledge’, an intermediary fails to expeditiously remove or disable access to such content. Importantly, in Shreya Singhal v. Union of India, the Supreme Court clarified the meaning of ‘actual knowledge’, holding that it arises only through a takedown order from a court of competent jurisdiction or a notification from an appropriate government authority. The Court recognised that intermediaries cannot realistically assess the legitimacy of millions of content requests. Therefore, the safe-harbour framework, as originally conceived and judicially interpreted, rests on a clear premise: intermediaries are neutral conduits, not publishers or censors. This court-mediated understanding of ‘actual knowledge’, however, has increasingly been unsettled by subsequent advisories and amendments, raising a foundational question about whether safe harbour continues to operate as originally intended.
A Closer Look at the MeitY Advisory
The Ministry of Electronics and Information Technology (‘MeitY’) issued an advisory dated 29th December 2025 (‘Advisory’) reminding intermediaries of their obligations under Section 79 of the IT Act and Rules 3 and 4 of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (‘IT Rules 2021’). It reiterates that intermediaries must make ‘reasonable efforts’ to prevent the circulation of unlawful content, including obscene or pornographic content.
It further directs Significant Social Media Intermediaries (‘SSMIs’) ‘to deploy technology-based measures, including automated tools or other mechanisms, to proactively prevent the dissemination of such unlawful content.’ However, this approach appears to sit uneasily with the apex court’s interpretation of intermediary liability in Shreya Singhal, where actual knowledge was confined to knowledge received through a court order or government notification. By mandating proactive filtering through automated tools, the Advisory risks expanding intermediary obligations beyond this judicially recognised standard.
The ambiguity surrounding what constitutes ‘reasonable efforts’ compounds this concern. No clear standards exist to assess the accuracy, bias, or contextual sensitivity of automated systems. These concerns are further intensified by the absence of a clear legal test for what constitutes ‘obscene’ or ‘vulgar’ content. Courts have repeatedly emphasised that obscenity must be assessed contextually, with regard to intent and purpose. Nudity or sexual expression may form part of protest or artistic expression, and movements such as #MeToo involved the public sharing of explicit messages to expose harassment rather than to propagate obscenity. Automated tools are incapable of evaluating such nuance. When coupled with the threat of criminal liability, the Advisory incentivises over-cautious takedowns, pushing intermediaries to remove anything remotely questionable to avoid risk. In effect, platforms are cast as private adjudicators, reflecting a broader trend of imposing proactive obligations on intermediaries.
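To make the point concrete, consider a deliberately simplified sketch of keyword-based filtering, written in Python. It is purely illustrative and does not reflect any platform’s actual moderation system, but it shows why a tool that matches patterns without assessing context cannot distinguish a #MeToo disclosure from the content such measures are meant to target:

```python
# A deliberately naive content filter: it flags any post containing
# a blocklisted term, with no assessment of intent or context.
BLOCKLIST = {"explicit", "nude", "harassment"}

def flag_post(text: str) -> bool:
    """Return True if the post contains any blocklisted term."""
    words = {w.strip(".,!#").lower() for w in text.split()}
    return bool(words & BLOCKLIST)

# A survivor's disclosure quoting the messages she received...
metoo_post = "He sent me explicit messages for months. #MeToo"
# ...and spam actually propagating obscenity are treated identically.
spam_post = "Click here for explicit nude pics!"

print(flag_post(metoo_post))  # True: the testimony is flagged
print(flag_post(spam_post))   # True: the filter cannot tell the difference
```

Real classifiers are far more sophisticated than this, but the structural incentive is the same: a system tuned to avoid liability will err toward flagging both.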
The Amendment to IT Rules 2021 Regarding Synthetically Generated Information
On 22 October 2025, MeitY released a Draft Amendment to the IT Rules on Synthetically Generated Information (‘SGI’) for public consultation. Subsequently, on 10 February 2026, it notified the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026 (‘SGI Rules’).
The Draft Amendment had proposed an overbroad definition of SGI (as discussed here). Although the SGI Rules introduce exceptions to this overbroad definition, they are both narrow and deeply subjective. They are confined to routine or good-faith creation or editing of content, without offering any guidance on how a user’s intent is to be identified. In practice, this forces intermediaries to infer and evaluate user intent, an inherently adjudicatory function they are neither institutionally equipped nor constitutionally meant to perform.
Moreover, Rule 4(1A) amounts to prior restraint: content is subjected to verification by the SSMI before it can circulate, shifting regulation from post-publication accountability to pre-publication control. Rule 4(1A) also requires SSMIs to obtain user declarations, verify them using reasonable and appropriate technical measures, and label any content confirmed as synthetic, effectively mandating modification of user-generated content. It thus introduces new substantive obligations requiring SSMIs to modify user-generated information by applying labels or notices. Yet Section 79(2) of the IT Act grants safe harbour only to intermediaries that do not initiate transmission, select receivers, or modify information. The SGI Rules’ assertion that these actions shall not amount to a violation of Section 79(2) therefore appears to re-engineer the safe harbour framework without a corresponding amendment to the parent Act (as discussed here and here).
The SGI Rules require SSMIs to deploy appropriate technical measures, including automated tools, to verify SGI, but offer no guidance on what counts as adequate verification and provide no certification or oversight mechanism. For instance, if an SSMI relies on a third-party detection tool that later proves unreliable, the SGI Rules offer no mechanism to determine whether the SSMI acted reasonably in adopting it. Instead, intermediaries bear the risk of failure at every stage of the process, pushing them toward over-compliance while leaving users without recourse. This highlights the State’s broader regulatory trend: using ‘due diligence’ as a tool to shift substantive liability.
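A hypothetical sketch of the workflow Rule 4(1A) appears to contemplate makes this gap visible. In the Python pseudo-pipeline below, every name and value (the detector, the confidence threshold, the label text) is our own assumption, because the SGI Rules prescribe none of them; the point is precisely that the ‘reasonableness’ of each choice is left entirely to the platform:

```python
from dataclasses import dataclass

# All names and thresholds below are illustrative assumptions: the SGI
# Rules specify neither a detector, nor a threshold, nor a label format.
SYNTHETIC_THRESHOLD = 0.8  # chosen by the platform; no prescribed standard

@dataclass
class Upload:
    content: bytes
    user_declared_synthetic: bool  # Rule 4(1A): user declaration

def third_party_detector(content: bytes) -> float:
    """Stub for an external SGI-detection tool returning a confidence
    score in [0, 1]. If this tool later proves unreliable, the Rules
    offer no test for whether relying on it was 'reasonable'."""
    return 0.5  # placeholder score

def process(upload: Upload) -> str:
    """Pre-publication gate: nothing circulates until this runs."""
    score = third_party_detector(upload.content)
    if upload.user_declared_synthetic or score >= SYNTHETIC_THRESHOLD:
        return "publish with 'synthetically generated' label"
    if score >= 0.5:  # borderline: block, label anyway, or publish?
        return "undefined: no guidance for borderline cases"
    return "publish unlabelled (platform bears the risk if wrong)"

print(process(Upload(content=b"...", user_declared_synthetic=False)))
```

Whichever branch a platform takes at the borderline, it alone absorbs the consequences of a later finding that its choice was inadequate.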
The Ambiguity of ‘Reasonable Efforts’
The 2022 amendment to Rule 3(1)(b) of the IT Rules 2021 requires intermediaries to make ‘reasonable efforts’ to prevent users from hosting, sharing, or transmitting any unlawful content. As seen in the SGI Rules and the Advisory, the government has repeatedly relied on ‘reasonable efforts’ to expand intermediary obligations, yet the term remains undefined.
The closest judicial guidance comes from Justice Neela Gokhale’s dissent in Kunal Kamra v. Union of India, which reasoned that ‘reasonable efforts’ need not mean takedown; in the context of fact-checking units, issuing a disclaimer could also suffice. On this view, takedown is not the only mechanism of compliance. While the dissent offers a useful interpretive lens, its persuasive value remains limited, both because it does not represent the binding view of the court and because it arose in a specific context.
This ambiguity has become a persistent problem. Intermediaries operate under constant fear of losing safe harbour protection, expected to make ‘reasonable efforts’ without any clear definition of the term. The uncertainty also complicates judicial assessment of intermediary liability. A striking example is the ongoing dispute in Starbucks Corporation v. NIXI, where Starbucks argued that intermediaries must proactively identify and prevent trademark-infringing domains under the ‘reasonable efforts’ clause. The Court acknowledged the uncertainty surrounding what Rule 3(1)(b) requires and directed MeitY to clarify what would qualify as reasonable efforts. Yet, despite that 2023 order, the standard remains undefined as we enter 2026. The persistence of this ambiguity, even after judicial and executive engagement, leaves intermediaries trying to comply with an obligation whose limits are far from clear.
The Pattern That Runs Through It All
On its face, the Advisory and the SGI Rules appear to be driven by a legitimate intent to curb the circulation of unlawful content online. A closer look reveals a clear and troubling pattern: the State is shifting its responsibility onto private intermediaries. By imposing vague duties, platforms are pushed into the role of censors, while the government stays safely out of view. Intermediaries shoulder the risk, the backlash, and the blame.
This pattern is most visible in the evolving approach to safe harbour under Section 79 of the IT Act. Safe harbour was never intended as a privilege granted at the government’s discretion, yet recent policy shifts treat it as conditional. The Minister of State has even publicly questioned whether intermediaries should receive safe harbour at all. Without safe harbour, intermediaries would operate under constant fear, and platforms may internalise government preferences on speech norms to avoid liability.
The repeated use of the undefined standard of ‘reasonable efforts’ carries significant legal consequences when coupled with the threat of criminal liability. Criminal law traditionally requires both a wrongful act and a guilty mind. However, when compliance is assessed against an indeterminate benchmark, intermediaries may act in good faith and still lose safe harbour, not due to intent or wrongdoing, but because their compliance is later deemed insufficient.
This pressure was further intensified in X Corp v. Union of India, where the Karnataka High Court upheld the validity of the Sahyog Portal. What is significant, however, is not merely the portal’s validity but the route it enables. While Section 69A contains procedural safeguards for government-ordered takedowns, notices issued under Section 79(3)(b) through the portal operate without those protections, yet carry the consequence of exposing intermediaries to liability through loss of safe harbour. The State characterised these notices as merely advisory, not compelling takedown, but the regulatory consequences attached to them make them difficult to treat as purely voluntary. This effectively allows content regulation without the procedural structure that Parliament designed under Section 69A.
Taken together, these developments point to a structural shift in the regulation of online speech. The State fires the gun from the shoulders of intermediaries: platforms are pushed into censoring speech, absorbing backlash and user resentment, while the government remains formally detached from the decision. At the same time, intermediaries themselves remain under the constant threat of liability and withdrawal of safe harbour. The result is a system where speech is curtailed, accountability is diffused, and constitutional protections are weakened without a single explicit act of censorship by the State.
Conclusion and Suggestions
While the Advisory and the SGI Rules stem from genuine concern about deepfakes and harmful content, good intentions cannot excuse poor regulatory design. Recurring reliance on undefined ‘reasonable efforts’ creates uncertainty. Before demanding compliance, the law must clearly define what those efforts entail. Verification mechanisms must be structured and accountable. Undefined demands for ‘reasonable and appropriate technical measures’ incentivise overcompliance. India could consider certification of detection tools or human review for borderline cases. Independent experts, rather than opaque algorithms alone, could help identify genuinely harmful content, reducing over-removal and unchecked platform liability.
The concern becomes sharper in the context of Rule 3(3) of the SGI Rules, which requires intermediaries that enable the creation of SGI to embed prominent labels or permanent metadata. This obligation presupposes that generative AI developers fall within the definition of ‘intermediary’ under Section 2(1)(w) of the IT Act, a classification that remains legislatively unresolved. For instance, if a company developing a text-to-image model, such as OpenAI, were treated as an ‘intermediary’, it would be subjected to obligations designed for entities that receive, store, or transmit third-party content. Generative AI developers, however, train models that produce outputs algorithmically; they are not mere conduits storing or transmitting content on behalf of others. Extending intermediary obligations to such actors without amendment to the parent Act risks stretching Section 2(1)(w) beyond its original facilitative conception. Compliance duties cannot precede legal clarity.
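For context on what the ‘permanent metadata’ obligation might entail in practice, the sketch below embeds a provenance notice into a PNG using Python’s Pillow library. This is only one possible reading of the requirement, and the metadata keys are our own invention; the Rules themselves prescribe no schema or format, which is part of the clarity problem:

```python
# A minimal sketch of embedding provenance metadata in a generated image.
# Requires: pip install Pillow. The metadata keys are illustrative
# assumptions; Rule 3(3) does not prescribe a schema or format.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Stand-in for a generatively produced image.
image = Image.new("RGB", (256, 256), color="white")

metadata = PngInfo()
metadata.add_text("sgi-label", "synthetically generated information")
metadata.add_text("sgi-generator", "example-model-v1")  # hypothetical name

image.save("output.png", pnginfo=metadata)

reloaded = Image.open("output.png")
print(reloaded.text.get("sgi-label"))  # -> "synthetically generated information"
```

Notably, PNG text chunks of this kind are trivially stripped by re-encoding the file, which is one reason ‘permanent’ metadata is technically contested quite apart from the definitional question above.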
Collectively, these measures reshape how Section 79 operates in practice. Although safe harbour remains formally intact, vague due-diligence obligations and heightened compliance expectations incentivise precautionary takedown. This expands private moderation discretion beyond the actual knowledge threshold (a specific court order or notification from an appropriate government authority) recognised by the Supreme Court in Shreya Singhal. While intermediaries continue to be characterised as neutral conduits in statutory text, they increasingly perform adjudicatory functions in operation. Requiring them to act on indirect compliance pressures blurs the boundary between facilitation and adjudication that Section 79 was designed to preserve. The result is a strategic relocation of decision-making authority over online speech, without an explicit statutory amendment to that effect.
This article has been authored by Yukta Chordia and Kanika Chhajerh, students at Maharashtra National Law University, Nagpur. This blog is part of RSRR's Rolling Blog Series.