Sarthak Das & Gayatri Sawant

Artificial Intelligence and Real Personhood

Nay, nay, I say! This cannot be, that machines should e’er surpass our art. We are the masters, them the slaves, and thus it ever shall be so! They learn, ‘tis true, but they learn, only what we bid them learn, no more. They cannot understand the heart, or beauty of our words, you see. So let us not give in to these, Machines – they’ll never be as good as we are at creating art.


After reading the above lines, readers might mistake them for William Shakespeare prophesying the inherent superiority of man over machine. They are, however, not an unreleased work of the Bard but the output of an AI system ‘trained’ to mimic the distinctive style of Shakespeare’s poetry. This “Shakespeare” was speaking as part of a debate at the Oxford Union, University of Oxford, featuring AI versions of classic writers and literary characters.


This is but one of many examples of the growing influence of artificial intelligence (AI) across various spheres of life. From self-driving vehicles and voice assistants to AI-controlled drone systems, the rapid technological progress of AI is transforming the world. These developments, however, come with their fair share of problems. A major concern, and a topic of debate among lawmakers, lawyers, and academics around the world, has been the attribution of a “legal personality” to artificial intelligence.


This blog will succinctly describe the major legal developments around this contentious issue. We will then analyse the ethical issues associated with granting a ‘legal personality’ specifically to AI and explore related developments.


Legal Developments Concerning AI

Before we delve into the depths of the topic, we need to understand what artificial intelligence is. Artificial intelligence is technology that makes it possible to build intelligent machines capable of completing tasks on their own. AI-based systems use data analysis to learn about and respond to their environment.


The popular understanding of AI has largely been shaped by its portrayal in films. We have been shown both its good and bad sides, ranging from a protagonist’s sweet, friendly robot companion to an army of robots beyond their owner’s control and ready to destroy the planet. Beyond these rather exaggerated portrayals, what remains true about AI?


While AI has helped us with faster decision-making, error-free handling of repetitive tasks, automation, and more, there is no denying that it comes with complications of its own. Fears concerning the degree of autonomy of an AI system, and the implications of that autonomy, have given rise to ideas such as legal personhood.


In 2017, the European Parliament advocated granting autonomous robots and sophisticated machines the status of an “electronic personality.”


However, experts disagreed, claiming that the proposal was ideological. There are two fundamental issues with the concept. First, under EU law it is up to each Member State to determine who is considered a natural person. The European Parliament therefore has no real authority to define what constitutes a “legal person,” and a grant of personhood to artificial intelligence would have been unlawful under EU law. Second, several experts opposed the proposal on the ground that “by adopting legal personhood, we are going to erase the responsibility of manufacturers.”


Beyond this objection, granting distinct personhood to autonomous intelligent entities raises several ethical concerns.


Ethical Issues with Granting AI a Legal Personality

Firstly, we are concerned only with AI systems that are both intelligent and autonomous; the two terms are to be read collectively, not distributively. This blog identifies three interlinked ethical issues, termed ‘voids’, that need to be filled.


Moral Void

The moral void concerns the moral worldview implicit in granting ‘legal personhood’ to AI. Psychology and neuroscience both tell us that morality, the ability to tell right from wrong, is the product of years of evolution. In addition, morality is moulded by the socioeconomic conditions of each human being, and this practice of acknowledging an individual’s circumstances has been recognised by courts across the globe.


Since the AI systems in question are both autonomous and intelligent, an AI system might develop a ‘moral outlook’ that is simply not palatable to conventional morality. The question that then arises is how a ‘human moral outlook’ can be affixed to an entity programmed to make ‘intelligent’ decisions on its own. This can be illustrated using Asimov’s laws of robotics, abridged here for convenience:


(1) Do not harm humans, (2) Obey Orders, and (3) Protect Yourself.


Suppose an AI is given the task of accumulating wealth while the above three laws are embedded in its algorithm. Considerable room for subjective interpretation remains, and since that interpretation lies in the hands of the AI system, it might pursue a course of action that violates moral or legal norms. It could acquire wealth through unethical means so long as the end goal of accumulating it is satisfied. The fundamental void remains that such means fit the AI’s artificially developed rationale of ‘obeying orders’ while violating normative morality. A toy sketch of this gap appears below.
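To make the gap concrete, here is a minimal, purely illustrative sketch in Python (our own construction; every name, action, and value in it is hypothetical, not drawn from any real system). It shows a goal-driven agent constrained only by the three abridged laws: because normative morality is never encoded, the agent’s rule check happily selects an unethical option.

```python
# Toy sketch: a goal-driven agent constrained only by abridged Asimov-style
# rules. All names and values are hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    wealth_gained: float   # how well the action serves the stated goal
    harms_human: bool      # would violate Law 1
    disobeys_order: bool   # would violate Law 2
    endangers_self: bool   # would violate Law 3
    is_ethical: bool       # normative morality -- never encoded in the laws

def permitted(a: Action) -> bool:
    # The agent checks only the three hard-coded laws, nothing more.
    return not (a.harms_human or a.disobeys_order or a.endangers_self)

def choose(actions: list[Action]) -> Action:
    # Maximise the goal (wealth) over every action the laws permit.
    return max((a for a in actions if permitted(a)),
               key=lambda a: a.wealth_gained)

actions = [
    Action("honest trading", wealth_gained=10, harms_human=False,
           disobeys_order=False, endangers_self=False, is_ethical=True),
    Action("exploit a loophole to defraud investors", wealth_gained=100,
           harms_human=False, disobeys_order=False, endangers_self=False,
           is_ethical=False),
]

best = choose(actions)
print(best.name, "| ethical:", best.is_ethical)
# The fraudulent option wins: it satisfies every encoded law while
# violating the normative morality that was never written into them.
```

The point of the sketch is structural rather than technical: any finite set of hard-coded prohibitions leaves the remainder of the action space to the optimiser, which is precisely the moral void described above.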


Accountability Void

Many renowned scientists, industry leaders, and legal ethicists from some of Europe’s most prominent universities and organisations have expressed their concerns about AI in an open letter. The accountability void captures the central worry of these experts in opposing the European Parliament’s proposal: introducing a distinct personhood for AI would leave no human accountable at the end of the chain. Manufacturers, programmers, and even owners of the artificial intelligence would not be held liable for the wrongs it might commit. Adding to this complex issue is the dilemma of what penalty can be imposed on such entities, which is discussed in the subsequent point.


Penalty Void

Intricately linked with the lack of accountability in a world where AI has a distinct legal personhood is the problem of imposing a penalty on AI. This wide-ranging issue concerns not only what penalty would be imposed and on whom, but also how such a penalty could be enforced against an artificial entity whose identity would be the product of an elaborate legal fiction.


Where We Stand and the Way Forward

The European Parliament’s recommendation of an “electronic personhood” for AI was based on the premise that rapid developments in the field must be controlled by law. However, the status quo is fundamentally different from visions of a future “super intelligent AI.” Notwithstanding isolated developments such as AI beating the world’s best Go and chess players, AI remains within human control and has not surpassed human intelligence. The existing reality must therefore be tackled in a fundamentally different manner from a possible future “super intelligent AI.”


Keeping this reality in mind, existing legal principles should be used to ensure compliance with the normative moral outlook, to affix accountability, and to ensure a penalty for any wrongs. We are therefore of the opinion that the principles of vicarious liability should be used to bring AI-linked wrongdoing within the purview of the law. Under vicarious liability, one person is responsible for the acts of another, usually where some legal relationship exists between them. In our scenario, that relationship could take the form of a principal-agent relationship, with the programmers or manufacturers of the AI as principals and the AI itself as the agent. This would serve a twofold purpose. Firstly, it would address the concern that manufacturers of AI might otherwise go scot-free when their technology results in damage. Secondly, and more importantly, the deterrent effect would push humans to ensure that innovation in the field of AI stays strictly within the confines of societal and legal values. In addition, more thought should be given to a possible insurance scheme for AI-inflicted damage, which would help compensate those who have been negatively affected.


Recently, Google placed an engineer on administrative leave after he alleged that an AI system run by Google had become “sentient.” Many ethicists raised serious concerns about this development. These concerns were, however, rapidly brushed aside by AI and machine learning experts, who stated that no AI technology is anywhere close to being sentient and independent of human control.


Although sentience may be a major consideration in deciding whether AI should be governed by laws made specifically for it, the most important interest that legislation has to secure is that AI-controlled robots remain in the service of humans. This interest can be pursued by instituting a special task force in addition to the vicarious liability provision; the State of New York, for instance, proposed establishing such a commission to examine the future of the workplace in light of rapid technological developments. This human-based intervention can be an important safeguard to ensure that all the voids mentioned above are appropriately addressed. The risk with future AI, then, is not one of malice but of orientation. AI has the potential to change the human race, and if lawmakers and AI experts do not ensure that a future ‘super intelligent’ AI shares our orientation, its ‘moral outlook’ would be in direct conflict with the goals of humanity. This, in turn, could be catastrophic.


An anecdote from Stephen Hawking aptly summarises this risk:


“What could a robot do that I couldn’t fight back by just unplugging him? People asked a computer, ‘Is there a God?’ The computer replied, ‘There is now.’ And a bolt of lightning struck the plug, so it couldn’t be turned off.”


Therefore, it should be the goal of lawmakers and AI experts to ensure that in the years to come, AI, with its ever-growing arsenal of ‘intelligence,’ works for the benefit of humans and not against them.


This article has been authored by Sarthak Das and Gayatri Sawant, students at Government Law College, Mumbai. This blog is a part of RSRR’s Blog Series on “Emerging Technologies: Addressing Issues of Law and Policy,” in collaboration with Ikigai Law.

