The Architecture of Harm: Intermediaries and the Constitution
- Dr. Nupur Chowdhury
- Apr 17
- 8 min read
Introduction
The harms arising from intermediary behaviour are often public in nature, yet remain insufficiently recognised. This is largely because such harms are frequently understood merely as aggregations of private harms. However, this framing is both inadequate and limiting, as it obscures the need for remedies that go beyond conventional privacy protection. The concern can be better understood through an analogy.
The ocean exists as a natural entity populated by living and non-living things. Humans have fashioned mechanisms such as boats, fishing nets and other technological applications to access it, and they also introduce substances into it, many of which may be described as pollutants. In several respects, the internet is analogous to the ocean. Users venture into the internet and produce content that, much like pollutants, may be harmful, though it is often entirely harmless. Intermediaries function as the mechanisms without which we cannot access and navigate this space. Unlike the mechanisms used to access the ocean, however, many intermediaries derive profit not from facilitating access but from user traffic, which is repurposed for advertising revenue. Another category of intermediaries, such as telecom service providers like Airtel or Jio, provides the infrastructure that enables access to the internet; they are analogous to the boats that allow us to access the ocean. Further, much like the ocean, the internet operates beyond territorially bounded spaces, often complicating regulation by States.
Activities on the internet may therefore result in both private and public harms. Private harm may arise from the actions of individuals, while intermediaries may enable such activities through inaction and also profit from them because of the multisided markets that characterise the internet. At the same time, public harms, such as the undermining of democracy, may result from activities on the internet. Just as environmental pollution cannot be effectively addressed through private law remedies alone, public harms on the internet cannot be addressed simply by aggregating individual privacy claims. Long-term structural harms often go unnoticed because existing legal frameworks lack the tools to identify or deal with them. This article therefore asks how such emerging public harms, caused by intermediary behaviour, should be addressed. It does so by classifying different types of harms, proposing a category of harms termed “constitutional harms”, and assessing whether the Information Technology Act, 2000 (“IT Act”) and the accompanying rules are adequate to address them.
Spectrum of Intermediary Harms
The first category of harm, and the most apparent, comprises activities commonly described as cybercrimes. These include child pornography, human trafficking, the sale of banned substances such as narcotics, and identity theft. Although such crimes are well established and laws have been enacted across jurisdictions to address them, enforcement continues to face challenges because of the scale of online activity. Investigations usually focus on internet users. Nevertheless, intermediaries (such as Google) that function as gatekeepers and provide access to such users also play a role in enabling these activities. Because Google profits from the traffic generated by users, including those who may be engaging in cybercrimes, arguments have been made for shared responsibility. An analogy may be drawn with environmental regulation of automobile manufacturers: cars generate pollution, and manufacturers are therefore subject to emission standards in addition to the obligations imposed on drivers. Accepting such a responsibility would require reconsidering the safe harbour architecture under which intermediaries are excluded from liability on the ground that they cannot regulate content before publication. Historically, this architecture has supported self-regulation by intermediaries exercising ex post control. However, this is slowly changing. An Ad Hoc Committee of the Rajya Sabha reported on the rising prevalence of pornography on social media and recommended amendments to the IT Act, including provisions imposing punitive obligations on intermediaries to address child sexual abuse material.
The second category of harm arises directly from the activities of intermediaries themselves and may be described as privacy and discrimination harms. Many online services operate through the collection of personal information that allows the creation of granular profiles of users, resulting in a violation of their right to privacy. The expansion of artificial intelligence has intensified this process because Artificial Intelligence (“AI”) systems rely on large datasets generated from such information. AI systems are architecturally opaque, in the sense that the logical processes through which they perform tasks remain masked, even from those who develop or deploy them. Because these systems rely on data derived from large populations, they may replicate biases embedded in that data. Such biases may become visible only when their effects emerge on a sufficiently large scale, and almost always after the harm has occurred. For instance, in August 2020, the use of an algorithmic system by the United Kingdom examinations regulator to predict student grades during the COVID-19 pandemic produced outcomes that favoured students from private schools over those from disadvantaged backgrounds. Such instances illustrate how algorithmic systems may generate discriminatory outcomes, affecting groups of individuals and undermining equality in society; they are therefore in the nature of public harm.
The third category of harm concerns the rule of law. The structure of the internet gives intermediaries extraordinary power to shape human action in pursuit of their private objectives. This may occur through unilateral contracts that influence user behaviour and through technological design that enables or restricts particular actions. Under such circumstances, user actions may be shaped to such an extent that individuals act less as independent actors and more as agents within the architecture designed by intermediaries. Concerns have also arisen regarding the spread of deliberate online falsehoods intended to misguide, misinform and incite harmful actions, including violence. Platforms that rely on advertising are often pushed to promote such content because sensational or false information spreads faster and attracts more user attention, raising concerns about the limits of freedom of speech and expression. While the State traditionally regulates speech within constitutional limits, digital intermediaries like Google or Facebook often exercise greater practical control over online content. This makes it necessary to rethink how the constitutional guarantee of speech under Article 19(1)(a), and the permissible restrictions under Article 19(2), apply to private intermediaries.
The fourth and final category of harm relates to anticompetitive practices arising from the gatekeeping role of intermediaries. In the present digital ecosystem, platforms often control access to digital markets. When intermediaries act both as gatekeepers and content providers, conflicts of interest can arise, as they may use this power to block or disadvantage competitors, leading to monopolistic behaviour that affects both a healthy competitive environment and consumer choice (see, for example, Matrimony.com Ltd. v. Google LLC & Ors., where Google was found to favour its own services in search results). The Competition Act, 2002 establishes the Competition Commission of India to address such issues. However, the law restricts intervention by the Commission to cases of abuse of a dominant position, while dominance per se may not be questioned. Digital intermediaries like Google hold significant market power owing to their dominance in areas such as mobile operating systems, search engines and browsers, particularly through Android and Google Chrome. This allows them to charge high commissions from app developers, favour their own services, and exclude competitors through practices such as vertical integration. Countering this requires a better appreciation of the effects of vertical integration, as observed in the operations of global intermediaries.
Constitutional Harms in Intermediary Governance
Among the harms discussed above, those relating to privacy, discrimination and the rule of law have direct constitutional implications, given that they are in the nature of public harms. These harms affect fundamental rights and democratic processes and may therefore be described as constitutional harms. The Supreme Court in Justice K.S. Puttaswamy v. Union of India (“Puttaswamy”) recognised the right to privacy as a fundamental right. The right to equality includes protection against horizontal discrimination between private actors, as reflected in Article 15(2), which prohibits discrimination in access to public spaces. If the internet is treated as such a space, discriminatory practices by intermediaries or AI systems may violate this right and produce wider societal harm. Given that access to the internet has been recognised as a fundamental right in India, such discrimination affects not just individuals but the community as a whole. The third category of harm affects the rule of law by enabling powerful intermediaries to shape access to (mis)information, potentially influencing electoral outcomes and, through possible collusion with State actors, undermining democratic processes and dissent.
I suggest that these harms should be categorised as a distinct species called “constitutional harms”, as they directly undermine fundamental rights, democracy, and the balance of power between citizens, the State and intermediaries. Such harms may arise either from singular events or from a cumulative set of actions, but often go unrecognised owing to the absence of clear conceptual categories. A distinction must be drawn between private harms and constitutional harms. As discussed above, cybercrimes and anticompetitive practices are primarily private harms, affecting specific individuals and addressed through existing statutory pathways. In contrast, constitutional harms are a species of public harm: they affect the public at large and core constitutional principles, and their likelihood increases with the scale of intermediaries. An exclusive focus on privacy and individual harm overlooks these broader, long-term public harms, making it important to recognise constitutional harms as a category in order to understand and address them properly.
Statutory Responses and their Constitutional Limits
In this context, it becomes necessary to examine whether current laws are equipped to address such constitutional harms. The principal statute governing intermediary behaviour in India is the IT Act, whose interpretation has been shaped by judicial decisions. In Shreya Singhal v. Union of India, the constitutionality of Section 66A of the IT Act was challenged. The provision was contested on the grounds that it violated Article 19(1)(a), was not saved by Article 19(2), and that its vague and undefined terms enabled arbitrary enforcement, leading to censorship and a chilling effect on speech. The Court held that Section 66A was “unconstitutionally vague” as it lacked clear standards, was overbroad and did not fall within Article 19(2); since no part could be severed, it was struck down in its entirety. In contrast, Section 69A and the Information Technology (Procedure and Safeguards for Blocking for Access of Information by Public) Rules, 2009 were upheld because, unlike Section 66A, the former was narrowly drawn and contained adequate procedural safeguards. On intermediary liability under Section 79 (which provides a safe harbour exempting intermediaries from liability for user-generated content under certain conditions), the Court read the provision down to require action only upon actual knowledge through a court or government order, subject to Article 19(2). While welcomed by civil rights activists, this reading was argued to disincentivise proactive action by intermediaries.
In Sabu Mathew George v. Union of India, the Court expanded due diligence by requiring intermediaries to develop protocols for the auto-blocking of certain words and phrases, overseen by an “In-House Expert Body”, in order to detect violations of the PNDT Act. In effect, this grants intermediaries interpretative powers, raising concerns of over-delegation and misuse.
A disjuncture emerges between the Court’s reluctance to treat intermediaries as publishers (see Google v. Visakha Industries, where the Supreme Court held that intermediaries, as facilitators of information exchanges and sales, stand on a different footing and cannot be considered publishers of content) and its directions requiring them to perform content regulation. Yet intermediaries profit from user-generated content and exercise contractual control over it. The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 further expand due diligence, allowing intermediaries to regulate content on vague grounds and deploy automated tools for proactive monitoring. Simultaneously, State powers have expanded, particularly through traceability requirements that undermine the encryption and privacy protections recognised in Puttaswamy. Overall, the framework focuses mainly on cybercrimes and pays little attention to wider structural harms, while the increasing powers of intermediaries and the State may restrict free speech and weaken democracy.
Conclusion
Like the ocean, the internet generates both private and public harms. Yet long-term structural harms arising from intermediary conduct have received limited recognition. These harms, described here as constitutional harms, affect fundamental rights, democratic institutions and the relationship between citizens, the State and private actors. Recognising constitutional harms allows greater attention to the broader social consequences of intermediary conduct. Intermediaries possess significant structural power in shaping the digital environment, and their conduct therefore plays a critical role in ensuring that constitutional values are protected within cyberspace. Recognising intermediary responsibility for constitutional harms provides a categorical framework for addressing the long-term public harms that may arise from the operations of digital platforms.
This article has been authored by Dr Nupur Chowdhury, Assistant Professor of Law at the Centre for the Study of Law and Governance, Jawaharlal Nehru University, Delhi. It is part of RSRR's Excerpts from Experts Series and offers a condensed version of the arguments set out by the author in the paper published in the Journal of Information Policy (2023). She was assisted by Mehul Sharma, Junior Editor at RSRR.