
The Impartiality Paradox: Reimagining the Rule Against Bias in India’s Digital Regulatory Era

By Amrit Raj Barnwal

Introduction

The rule against bias, nemo judex in causa sua (no one may be a judge in their own cause), was forged in the crucible of English common law. Today, it confronts an existential 21st-century challenge. As Indian regulatory bodies like the Securities and Exchange Board of India (‘SEBI’), the Reserve Bank of India (‘RBI’), and the newly minted Data Protection Board under the Digital Personal Data Protection (DPDP) Act, 2023 increasingly rely on a mix of industry experts and algorithmic systems, a paradox emerges: the very expertise that qualifies a modern regulator often triggers reasonable apprehensions of bias.


Yet, a much deeper structural transformation is underway beneath the surface. The legal question of bias is no longer exclusively about human decision-makers. Algorithmic infrastructures, automated eligibility systems, Artificial Intelligence (‘AI’) credit-scoring tools, and predictive policing algorithms now mediate the daily interactions between the citizen and the state. Crucially, these systems operate without any human “mind” for a court to scrutinize.


Traditional impartiality tests, particularly reliance on the “fair-minded and informed observer”, are fundamentally ill-suited to govern both modern human experts and algorithmic systems. To prevent a collapse in administrative accountability, Indian courts must extend the bias doctrine from evaluating human intent to interrogating algorithmic design, adopting what we might call “auditable neutrality” as the new legal standard.


The Foundation: From Kraipak to the Fiction of the Observer

Proving actual bias, that is, demonstrating that an adjudicator possessed a definitively closed mind, is an evidentiary near-impossibility. Recognizing this, modern administrative law pivoted to the concept of apprehended bias, deeply rooted in Lord Hewart’s famous maxim in R v. Sussex Justices (1924) that “justice should not only be done but be seen to be done”.


India firmly embedded this “reasonable apprehension” test into its jurisprudence through landmark rulings like A.K. Kraipak v. Union of India (1969) and Ranjit Thakur v. Union of India (1987). But relying heavily on appearances raises a difficult question: through whose eyes do we decide what appears biased?


To objectify this assessment, courts invented the “fair-minded and informed observer”. Over the decades, however, this fictional figure has been imputed with ever more insider legal knowledge. As administrative law scholars such as Matthew Groves in his paper The Rule against Bias (2009) have pointed out, the observer now knows so much about the practical realities of administration that they effectively rubber-stamp professional practices rather than represent the genuine apprehensions of the lay public. In the Indian context, this legal fiction has morphed into a structural shield for expert regulators, excusing conflicts of interest in the name of administrative efficiency.


This problem of the unobservable mind intensifies drastically when the “decision-maker” is not human at all.


The Algorithmic Adjudicator: When There is No Mind to Observe

The impartiality of a digital regulator represents a new frontier in public law. When a welfare department deploys algorithmic eligibility tools to filter applicants, or a data protection authority uses automated systems to triage citizen complaints, the algorithm itself functionally becomes the decision-maker.


Traditional natural justice operates on a basic assumption: there is a human mind that can be either biased or impartial. Algorithmic bias, however, does not reside in malicious intent. It hides in the training data, the weighting variables, and the source code.
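
To see how, consider a minimal, purely hypothetical scoring sketch in Python. No protected attribute appears anywhere in the model, yet a weighted “digital footprint” feature acts as a proxy for affluence, so the disparity lives in a single weighting choice rather than in anyone’s intent. All feature names and weights here are invented for illustration.

```python
# Hypothetical credit-scoring sketch: bias hides in the weights and data.
# "digital_footprint" (share of transactions made online) never names a
# protected attribute, yet it proxies for urban affluence.

WEIGHTS = {
    "repayment_history": 0.5,   # legitimate signal
    "digital_footprint": 0.4,   # proxy: poorer applicants transact offline
    "account_age": 0.1,
}

def credit_score(applicant: dict) -> float:
    """Weighted sum of normalized features (each value in [0, 1])."""
    return sum(WEIGHTS[name] * applicant[name] for name in WEIGHTS)

# Two applicants with identical repayment behaviour:
urban = {"repayment_history": 0.9, "digital_footprint": 0.8, "account_age": 0.5}
rural = {"repayment_history": 0.9, "digital_footprint": 0.1, "account_age": 0.5}

print(round(credit_score(urban), 2))  # 0.82 -> approved
print(round(credit_score(rural), 2))  # 0.54 -> flagged as "high-risk"
```

Two applicants with identical repayment behaviour receive sharply different scores, and no official ever “decided” anything. That is exactly the kind of disparity an intent-focused observer test cannot see.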


A traditionalist might argue that algorithms are merely administrative tools and that bias liability should therefore attach only to the human officials who deploy them. But this objection misses the real problem: opacity. Without a direct audit of the mathematical model, systemic discrimination remains invisible to even the most fair-minded observer.


Consider the March 2025 Non-Banking Financial Company (NBFC) incident reported in the Reserve Bank’s systemic risk bulletin. An AI credit-scoring tool quietly miscategorized over 17,000 low-income applicants as high-risk. The systemic bias, which heavily favoured applicants with extensive, affluent digital footprints, was exposed and corrected only after sustained human intervention.


But who bears the liability under the traditional rule against bias? The third-party developer who wrote the code? The bank that deployed it? The regulator who approved its sector-wide use? Under current administrative law doctrine, the answer is a troubling vacuum. The fair-minded observer test offers no guidance at all when there is no human adjudicator left to observe.


To bridge this gap, Indian courts must develop a standard of “auditable neutrality”. This would require any algorithmic system used in public or regulatory adjudication to undergo mandatory, independent third-party bias audits, with the results made public and legally contestable. This paradigm shift forces us to stop searching for a non-existent “mind” inside the machine and instead to focus the law directly on hard, verifiable evidence of how the system is built and what it actually produces.
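
What would such an audit actually check? A minimal sketch, assuming access to the system’s decision logs: compute the approval rate for each demographic group and flag any group falling below the widely used “four-fifths” (80%) disparate-impact threshold. The group labels, the log format, and the threshold are illustrative assumptions, not statutory requirements.

```python
from collections import Counter

def disparate_impact_audit(decision_log, threshold=0.8):
    """Flag any group whose approval rate falls below `threshold` times the
    best-treated group's rate (the "four-fifths rule").

    decision_log: iterable of (group_label, approved: bool) pairs,
    as they might appear in a regulator's audit extract.
    """
    approved, total = Counter(), Counter()
    for group, ok in decision_log:
        total[group] += 1
        approved[group] += ok          # bool counts as 0 or 1
    rates = {g: approved[g] / total[g] for g in total}
    best = max(rates.values())
    return {g: {"rate": round(r, 2), "flagged": r < threshold * best}
            for g, r in rates.items()}

# Toy decision log; group labels are purely illustrative.
log = ([("urban", True)] * 80 + [("urban", False)] * 20
       + [("rural", True)] * 45 + [("rural", False)] * 55)
print(disparate_impact_audit(log))
# {'urban': {'rate': 0.8, 'flagged': False}, 'rural': {'rate': 0.45, 'flagged': True}}
```

Because the audit operates on logged outputs rather than on anyone’s state of mind, its results are exactly the kind of hard, verifiable, and contestable evidence that auditable neutrality contemplates.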


Automatic Disqualification: A Victorian Relic in a Digital Economy

Algorithmic bias is not the only area where traditional administrative law lags; our existing rules also fail to grasp the financial realities of modern expert regulators. The strict rule of automatic disqualification for any pecuniary interest, a relic of the 1852 English case Dimes v. Grand Junction Canal, is unworkable in today’s highly interconnected financial ecosystem. If a SEBI member holds a negligible number of mutual fund units, it makes no sense to automatically bar them from participating in a sector-wide policy discussion.


The High Court of Australia recognized this reality early in Ebner v. Official Trustee (2000). It rightly abandoned the rigid automatic disqualification rule in favour of a far more practical “logical connection” test. Under this standard, a party alleging bias must articulate how the financial interest logically connects to a realistic apprehension that the decision-maker will not be impartial.

In India, the statutory framework remains dangerously inconsistent on this front. Section 7(3) of the SEBI Act, 1992 essentially forces recusal for disclosed interests, risking a paralyzed regulatory body. The new DPDP Act, 2023, by contrast, emphasizes the Data Protection Board’s “independent” functioning, leaving enough room to apply a flexible, Ebner-style approach. To stop expert tribunals from grinding to a halt over technicalities, Indian courts should harmonize these statutes by adopting the logical-connection test across the board.


From Apprehended Bias to Design Bias

Bringing automated systems into the state’s administrative functions is driving a structural shift: we are moving away from solidarity-based welfare and toward conditionality-driven governance. These algorithmic infrastructures are not just crunching numbers; they embed subjective, political assumptions about who is “deserving” directly into their code.


When an automated system excludes a citizen from a welfare roll or flags their tax return for an audit, that bias is not a minor procedural defect. It is deeply substantive. It fundamentally redefines who receives state resources and who attracts state scrutiny.


This tension is playing out in Kerala’s ongoing e-governance initiatives. While the government argues that going digital eliminates the petty, localized corruption of lower-level bureaucrats, critics rightly point out that these systems often hardcode and amplify existing social biases, pushing citizens without digital literacy further into the margins. These are deep fissures in the postcolonial state’s traditional model of administrative control.


For these reasons, the traditional rule against bias has to evolve. The law must pivot from obsessing over “apprehended bias” to actively interrogating “design bias”. The new legal test should not ask whether an adjudicator looks superficially impartial to a fictional observer. Instead, courts need to ask whether the system’s underlying architecture systematically prejudices specific demographic groups. Under this standard, judges would weigh concrete factors: how representative the training data actually is, whether the decision-making logic is transparent, and whether there is a genuine mechanism for a human to override the machine.
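
The first of those factors, representativeness of the training data, is mechanically checkable. A minimal sketch in the same vein, assuming census-style population shares are available for comparison (all figures invented for illustration):

```python
def representativeness_gap(training_shares, population_shares, tolerance=0.05):
    """Compare each group's share of the training data against its share of
    the affected population; report groups under-represented beyond `tolerance`."""
    report = {}
    for group, pop_share in population_shares.items():
        train_share = training_shares.get(group, 0.0)
        report[group] = {
            "training_share": train_share,
            "population_share": pop_share,
            "under_represented": (pop_share - train_share) > tolerance,
        }
    return report

# Invented shares, for illustration only:
training = {"urban": 0.85, "rural": 0.15}
population = {"urban": 0.35, "rural": 0.65}
print(representativeness_gap(training, population))
# rural: 15% of the training data but 65% of the population -> under_represented
```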


The Right to Explanation as the New Impartiality Guarantee

If algorithms are the new adjudicators, transparency is the new impartiality. Measured against that standard, the DPDP Act, 2023 is a significant missed opportunity. By stripping out the explicit protections against solely automated decision-making that were included in earlier drafts of the bill, the final legislation was left toothless: it cannot force genuine algorithmic transparency. The statute turns a blind eye to the biggest administrative threats of our generation: black-box algorithms, baked-in discrimination, and automated profiling without a single human in the loop.


For a regulatory blueprint, India should look to the European Union’s Artificial Intelligence Act (Regulation (EU) 2024/1689). The EU Act classifies AI systems deployed in essential public services as “high-risk” and legally mandates human-in-the-loop oversight.
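
Translated into system design, human-in-the-loop oversight is essentially a routing rule: adverse or low-confidence automated outcomes are escalated to a human officer instead of being finalized by the machine. The sketch below illustrates one such gate; the decision labels and confidence cutoff are assumptions for illustration, not terms of the EU Act.

```python
from dataclasses import dataclass

@dataclass
class ModelOutput:
    decision: str       # e.g. "grant" or "deny" (labels assumed for illustration)
    confidence: float   # model's self-reported confidence, in [0, 1]

def route(output: ModelOutput, min_confidence: float = 0.9) -> str:
    """Auto-finalize only confident, favourable outcomes; everything adverse
    or uncertain is escalated to a human officer for review."""
    if output.decision == "deny" or output.confidence < min_confidence:
        return "escalate_to_human_officer"
    return "auto_finalize"

print(route(ModelOutput("grant", 0.97)))  # auto_finalize
print(route(ModelOutput("deny", 0.99)))   # escalate_to_human_officer
```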


To genuinely future-proof our administrative law, lawmakers need to urgently amend the DPDP Act, 2023 or draft standalone AI regulations to guarantee three non-negotiable protections:

  1. A hard statutory right ensuring that citizens cannot be subjected solely to machine-made decisions when their fundamental rights or welfare are on the line.

  2. Mandatory algorithmic impact assessments before any AI system is ever unleashed in the public sector.

  3. The creation of an independent AI Governance Board that has the authority to audit, penalize, and pull the plug on biased algorithmic systems.


Putting these safeguards in place would stop the rule against bias from degrading into a hollow procedural checklist, transforming it into a genuine, substantive shield for the digital era.


Conclusion

Ultimately, the rule against bias is the bedrock of our administrative social contract. But its analog-era doctrines are visibly buckling under the weight of modern digital governance.


The “fair-minded observer” has devolved into a judicial loophole, and the Victorian rules of automatic disqualification threaten to paralyze expert regulatory bodies.


Most urgently, the rapid rise of algorithmic governance demands that we aggressively extend the bias doctrine from human minds to machine design. India must formally adopt the logical-connection test, embrace the reality that fair decision-makers are not blank slates, and legally enshrine “auditable neutrality” for all automated adjudication. As the nation aggressively builds out its digital regulatory architecture, the legal protections against bias must become as sophisticated and adaptive as the technologies they seek to govern. The ultimate administrative question is no longer whether a judge appears biased, but whether the code itself is designed to be fair.

This article has been authored by Amrit Raj Barnwal, a student at Chanakya National Law University, Patna. This blog is part of RSRR's Rolling Blog Series.
