
Legal Personhood for AI: A Possible Key for Unlocking Human-AI Symbiosis?

Introduction

Blake Lemoine, an engineer at Google LLC, recently claimed that LaMDA, the company’s conversational artificial intelligence program, had become sentient. A scenario once possible only in fiction is emerging as reality, and incidents like these place before humankind the question of whether an AI can be considered a person in the eyes of the law.


It would be naïve, however, to call the concept of personhood for AI new. John McCarthy, the father of AI, discussed free will for machines in his paper “Ascribing Mental Qualities to Machines” (1979): if computers are to evaluate the most appropriate and moral outcome by analyzing options and their consequences, he argued, they should be programmed with a freedom of choice similar in degree to what humans exercise in the same situation. This article attempts to evaluate the legitimacy of the demand for legal personhood for AI.


The Need for AI to be Regulated

Before regulating AI, it is crucial to identify the kind of AI system under consideration. An automatic washing machine, though automated, carries neither the social impact nor the risks that would warrant a debate over its legal standing. In contexts such as high-frequency trading, complex medical surgery, and autonomous driving, by contrast, AI interacts with its environment in ways that were not pre-instructed by its developers. Such autonomous behavior means that the consequences may not flow from instructions predetermined by the human designer.


It is quite evident that AI can interact with its environment in diverse ways that can sometimes even lead to injury or damage, which further strengthens the case for legal regulation. The three probable options for regulating AI are as follows:[1]

  1. Treating AI systems as innocent agents, meaning they would be incapable of forming mens rea (criminal intent) in any offense. The problem with this solution is that it fails in the objective of holding AI accountable for its deeds.

  2. Holding the developer of the AI system liable for any foreseeable offense committed by it. This is unsuitable because it would attach liability to a completely innocent person and, hanging like a sword over developers, would limit the scope of further development in the field of AI.

  3. Making the AI system directly responsible by ascribing legal personality to it. This seemingly probable option would require considerable modification of the conventional definition of legal personality, since AI systems cannot be fitted into the straitjacket of any theory jurists have propounded so far. AI systems are unlike the other non-human entities that have been granted personhood to date: they are capable of making simple and complex decisions nearly autonomously, without human intervention.

That said, devising legal personhood in this manner should be viewed as a matter of convenience and practice, a tool for better governance. It does not resolve the underlying dilemma of whether AI should be granted legal personality at all.


Concept of Legal Personhood

In layman’s terms, legal personhood is the capacity to hold rights and perform duties, which includes the ability to bear responsibility. In 2002, Peter Benson, writing in The Oxford Handbook of Jurisprudence and Philosophy of Law, discussed the difference between the status of “things” and that of “persons”. “Persons” have personal interests, exhibit agency, and can have their conduct and its consequences attributed to them; “things”, on the contrary, are objects of use: they possess no will, interests, competence, or accountability. Because AIs exercise practical and epistemic authority over their actions and decisions, they challenge this distinction.


Under the law, there exist two types of legal personhood: natural and juristic. Natural personhood is recognized for actual human beings, while juristic personhood means that certain rights and duties are granted to non-human entities by the law. For AI systems, it is juristic personhood that comes into the picture, as it has earlier been extended to corporations, religious entities, governmental and intergovernmental bodies, and the like. As the relentless march of technology presents us with a new problem, it becomes important to evaluate both sides of the proposition before implementing it.


Arguments in Favor of Granting Legal Personhood to AI Systems

Autonomous Nature of AI

It is contended that AIs have considerable practical authority over their own decisions, which enables them to adjust to scarcity of information, cope with stochastic environments, collect vital information, and interact with other human and non-human agents. Their ability to go a step further and operate on knowledge beyond the initial set of inputs reflects a humanistic trait. All these aspects are considered precursors of legal personhood and hence provide strong grounds for granting it to AI systems.


Filling the Accountability Gap

The autonomy covered in the previous point does produce a vacuum in accountability: the very term “autonomy” frequently connotes a lack of user control. This becomes crucial in the legal realm in situations such as contract formation in high-frequency trading or the operation of autonomous vehicles. Granting legal personhood to AIs would be beneficial here, as it would ensure that someone can be held responsible when things go south. And since the costs of unforeseeable accidents would, rightly, be parceled out to the AI systems themselves, it would also encourage technological innovation.


Limitations in the Conventional Liability Model

Conventional liability schemes fail to provide incentives that both sustain innovation and secure victims’ rights to seek damages. In cases of unforeseeable accidents caused by an AI’s autonomy, it is unfair to hold the AI’s developers guilty of negligence, while the doctrine of strict liability proves too severe and could discourage technological innovation. Conferring legal personality on AIs can prove to be a silver lining here, as it would allow AIs to enter into legal obligations on their own, possibly with complete pecuniary autonomy.


Giving Due Credit to AI Systems

Several copyright statutes provide that where a work is computer-generated, the author is taken to be the person who undertakes the arrangements necessary for the creation of the work.[2] In Stephen Thaler v. Andrew Hirshfeld (2021),[3] a US federal district court held that an AI system cannot be treated as a natural person for the purpose of granting patents under current US patent law. The jurisprudence underlying such provisions favors human creativity over machine creativity, making it impossible for non-human entities to claim rightful ownership of the intellectual property (IP) they create. Ascribing legal identity to AI systems would ensure that ownership of the IP created by them lies with them.


Arguments Against Granting Legal Personhood to AIs

Ontological Objections

One of the most important ontological objections is the lack of human-like cognitive and moral traits in AIs. Current AIs are not comparable to humans in terms of attribution and consciousness, and being mechanically compelled to obey rules cannot be equated with genuine sensitivity to legal codes and obligations. Though strong AI systems might be capable of acknowledging the contemporary legal system, their legal status would still not be comparable to that of natural humans, as they inherently lack full moral agency of their own.


Instrumentalist Objections

First, there is an apprehension that human parties will escape responsibility. History shows this is not a new concern; the evasion of liability has been debated in the field of company law as well. Interestingly, Claudio Novelli believes that the doctrine of “piercing the corporate veil”, introduced as a countermeasure against abusive practices in company law, can also be applied to AI systems. Properly registering the persons associated with an AI, and intimating its patrimonial capacity to third parties, could be an underlying solution.


Second, there is a loss of incentives for technological innovation and damage prevention. If legal personhood is ascribed to AIs, the accident costs and liability borne by stakeholders will decrease, weakening their incentive to invest in safety; the cost of making products safer might then exceed the gain they expect from the innovation, and the idea of general welfare will fail to be sustained.


Third, AIs can be difficult to track, as they have no physical address and can duplicate themselves. Holding the responsible AI accountable would then be a herculean task. A dedicated system of assigning digital signatures to AIs might solve this problem.


Coordination Problems

From a technological standpoint, AIs perform diverse tasks with varying degrees of risk, social impact, and autonomy. This may call for regulations diversified according to technological peculiarities.


Moreover, each legal field attaches a different meaning to legal personhood, causing intra-system inconsistencies. For example, perceptions of personhood in medical law differ from those in corporate or criminal law. Since the outcomes vary, field-specific rules would need to be established.


Further, the legal system of each country has its own legal definition of “person”, debated from time to time, over which it exercises exclusive jurisdiction. This might lead to inter-system disputes and would result in a lack of universality about the essence of personhood.


The Way Forward

Going by the current literature and understanding, it can fairly be said that existing legal models are not equipped to deal with the issue of legal personhood for AI. Before tailoring laws fit for AI regulation, existing ambiguities must be cleared up, which would require an in-depth study of the interactions of AI systems with human society. Ascribing legal identity to an AI carries vast socio-economic implications. If it is to be done, therefore, it should be done without damaging the moral fabric of society, that is to say, without humanizing automated systems and dehumanizing humans.


In the author’s view, the disciplines of law and technology urgently need to come together and work in tandem. Different kinds of AI interactions and their associated risks need to be classified. A coherent scheme of liability and compensation for damages can work properly only if it rests on a strong technological and philosophical base.


Until our system comes up with a workable definition of legal personhood for AIs, it would be better to explore other under-discussed alternatives, such as insurance mechanisms. This approach provides for the allocation of risks and a system of compensation while withholding active standing and excessive powers from AI. It might prove fruitful, at least in the short run.

 

[1] Purvi Pokhariyal, Amit K Kashyap and Arun B Prasad (eds), Artificial Intelligence: Law and Policy Implications (EBC 2020).

[2] Copyright, Designs and Patents Act 1988 (UK), s 9(3); Copyright Act 1994 (NZ), s 5(2)(a); Copyright (Amendment) Act 1994 (India), s 2(d)(vi); Copyright Ordinance 1997 (HK), s 11(3); Copyright and Related Rights Act 2000 (Ireland), s 21(f).

[3] Stephen Thaler v. Andrew Hirshfeld, 558 F. Supp. 3d 238 (E.D. Va. 2021).


This article has been authored by Yash Choudhary, a student at ILS, Pune. This blog is a part of RSRR’s Blog Series on “Emerging Technologies: Addressing Issues of Law and Policy”, in collaboration with Ikigai Law.

