R2-D(irector)2? The Ethics of AI in the Boardroom
Artificial intelligence (“AI”) has been described as “the science and engineering of making intelligent machines, especially intelligent computer programs”. While the term was once largely confined to the sciences, it has gained traction in non-conventional and commercial areas; and with “Siri” and “Alexa” becoming household fixtures, AI has achieved global proliferation, moving from the realm of science fiction into our everyday lives at breakneck speed. A 2015 survey by the World Economic Forum suggested that AI would make its way into boardrooms in a decision-making capacity by 2025. While this seemed a startling proposition at the time, five years down the line in 2020 the possibility seems more real than ever. AI is not only being used in different spheres to enable better healthcare, faster transportation, and better communication; it has also transformed businesses, which use cognitive technologies and algorithms to generate faster, purportedly bias-free and rational results from different data sets. It is also emerging as an attractive prospect for businesses seeking a competitive edge over their contemporaries. It is no surprise, therefore, that AI is touted to have several benefits if used in the boardroom.
Leading examples of AI entering boardrooms in a decision-making capacity include Vital (an acronym for Validating Investment Tool for Advancing Life Sciences), an algorithm used by a Hong Kong-based venture capital firm to make investment decisions, with voting rights equivalent to those of the human directors on certain matters. A Nordic company, Tieto, appointed an AI, Alicia T, to the leadership team of a data-driven business unit, with the power to cast votes. Salesforce CEO Marc Benioff uses another tailor-made AI, Einstein, to assist in dispute resolution and in making unbiased decisions in running the company.
The COVID-19 crisis has accelerated the growth of AI in business manifold, with most businesses now operating on remote working models. As new and old businesses embrace the culture of “work from home”, AI is playing a greater role in shaping the future of global business by allowing for greater mobility and more efficient performance. These developments, however, have been accompanied by a heated debate about the ethics of letting AI take over human functions, given problems such as corrupt data sets, data protection, legal personality, and the assignment of liability.
Understanding Indian Regulation of AI
The Indian government has taken steps such as launching the MCA-21 database (of companies and partnerships), based on AI and machine learning processes, to ensure compliance and to simplify the processes involved in creating and running companies. AI is also being used to tackle money laundering and terror financing. To increase the role, and ensure appropriate regulation, of Artificial Intelligence and Machine Learning (“AI” & “ML”) systems, the Securities and Exchange Board of India (“SEBI”) released three circulars on 4 January, 31 January, and 9 May 2019. The circulars require quarterly reporting by market intermediaries, market infrastructure institutions, and mutual funds in respect of the AI & ML powered systems and products they use and offer. The definitions of AI & ML technologies in these circulars are worded broadly to cast a wide regulatory net, and the annexures list systems, such as natural language processing, that are ‘deemed’ to be based on AI & ML technologies. SEBI’s initiative is intended to keep pace with the growing use of AI & ML technologies and to build understanding of, and preparedness for, future AI & ML policies. It is also in line with international best practices such as the Directive on Markets in Financial Instruments brought about by the European Union (“EU”) in 2014.
At present, the Companies Act, 2013 framework has no provision dealing with AI directors in boardrooms or the use of AI in a decision-making capacity. Human directors are held to a standard of reasonable care, skill and diligence (S. 166 of the Companies Act, 2013), and must avoid negligence. They can also be held personally liable, on both civil and criminal grounds, to the company or even to third parties. This framework, however, cannot be used to attach liability to AI-empowered directors for bad faith, fraud and the like, which involve elements of “human” emotion. Further, the Personal Data Protection Bill is grossly inadequate to protect against automated decisions, making data protection and privacy a major concern.
Ethical Use of AI: Regulation Across the World
“What’s wrong, what’s right, and what is just creepy?” These are questions raised by an insight article of the United Kingdom’s Financial Conduct Authority, in the context of ethical quandaries arising out of the use of AI for decision-making. It is important to remember that this remains a murky area, as AI is yet to develop a nuanced, human understanding of right and wrong. The question of the ethical use of AI is still open to debate, and several jurisdictions have struggled to reach a consensus on the extent to which AI can be used and how liability can be assigned to it.
The EU has emerged as a frontrunner in developing a legislative framework for the regulation and use of AI. It relies on a “European Approach to AI”, hinging on three pillars, one of which is ensuring an appropriate ethical and legal framework. It also recently brought out a White Paper (“Paper”) on the regulation of AI, with a heavy focus on trustworthiness, privacy and human rights. Notably, the Paper classifies certain sectors, including healthcare, transport and government, as “high-risk”, and proposes strict regulation of the use of AI within them. Low-risk AI, on the other hand, is exempt from rigorous regulation. This simplistic binary classification, however, fails to take into account the nuanced aspects of risk in AI. It is likely that AI in the boardroom would fall somewhere in the middle of this spectrum, given its ability to significantly affect human rights through its decision-making. The extent of regulatory compliance that would be needed is thus unclear.
The Paper calls for a greater degree of human oversight to reduce risk and increase trustworthiness, implying that while AI may complement corporate decision-making, it cannot, at this stage, replace human directors. It proposes to address data protection concerns via the GDPR. It also recognises the difficulty of assigning liability, and proposes that liability be imposed on the person best placed to have addressed the risk of harm, with strict liability for issues arising from defective software and other malfunctioning digital features.
The EU’s Ethics Guidelines for Trustworthy AI (“Guidelines”) stress the importance of trustworthy AI, which can be achieved only when AI is lawful, ethical and robust. The principles for the ethical use of AI are grounded in the fundamental rights contained in the Charter of Fundamental Rights of the EU. The Guidelines stipulate four ethical principles, deeply rooted in human rights, to ensure that AI is used in the most ethical manner possible: respect for “human autonomy, prevention of harm, fairness, and explicability.” Further, AI must satisfy both substantive and procedural fairness. The substantive dimension implies a commitment to the equal and just distribution of both benefits and costs, and to keeping individuals and groups free from unfair bias, discrimination, and stigmatisation. The Guidelines also embody the principle of “proportionality between means and ends,” which has several dimensions: balancing the rights of companies against those of users, and limiting the use and retention of data to what is strictly necessary. It also implies that the measures adopted should have the least negative impact on human rights and the greatest compliance with ethical principles. These baseline considerations must be accounted for before decision-making is delegated to AI.
The procedural dimension of fairness entails the ability to contest, and seek effective redress against, decisions made by AI systems and by the humans operating them. For this to be possible, the entity accountable for a decision must be identifiable, and the decision-making processes must be explicable. To ensure explicability, the processes must be transparent, the purpose and capabilities of AI systems must be openly communicated, and decisions should be explainable to those affected, to the extent possible; without such information, a decision cannot be duly contested. Explicability and transparency are important ideals in corporate governance, but they need to be implemented even more conscientiously in the case of AI-based governance. Given the apprehensions about the use of AI, particularly in a decision-making capacity, it is crucial that both dimensions of fairness are fulfilled.
The Guidelines also provide comprehensive technical and non-technical methods to promote the use of trustworthy AI, as well as guidance on the effective governance of AI systems within an organisation. The proposed model involves personnel at both the operational and top-management levels, ranging from the management and board to HR, the legal department, the product development department, and day-to-day operators. Roles are prescribed to ensure that each level of governance fulfils its respective duties, and a pilot checklist is provided for ease of governance. Additionally, the Guidelines include an indicative checklist of requirements that must be fulfilled to ensure that AI is used safely and ethically, based on systemic, individual, and societal needs. These include human oversight, technical safety, privacy and data protection, transparency, fairness, and accountability.
However, a vital detail to keep in mind is that these Guidelines are neither binding nor mandatory. They are merely best practices and principles, which must be developed in greater detail. Nevertheless, they remain significant as the first government-led framework to regulate and promote the use of AI, and an important reference point for all jurisdictions working on the regulation of AI.
Both Singapore and the UAE have broadly followed the EU approach to AI; this section discusses additional steps taken in these jurisdictions that could usefully guide Indian legislators. Singapore has followed these broad principles in its Model AI Governance Framework, which provides a baseline set of considerations and measures for organisations to adopt. A significant amendment to the Framework has clarified the importance of human involvement in the use of AI, focusing on the “human-over-the-loop” approach, and the Framework also emphasises stakeholder involvement to enhance the governance of AI. It sets out a series of steps businesses can take to choose a decision-making model for AI that best suits their objectives and corporate values, while also factoring in societal norms and values and risks to individuals. Businesses following the Framework are advised to determine the level of human oversight in AI-assisted decision-making after classifying the probability and severity of the harm that a decision made by an organisation about an individual could cause to that individual. Similarly, in the UAE, SmartDubai has released AI Ethics Guidelines prioritising fairness, transparency and accountability, and its Ethical AI Toolkit includes a Self-Assessment Tool that allows AI users to evaluate the ethical level of their AI systems.
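The Singapore Framework's probability-and-severity classification can be pictured as a simple decision rule. The sketch below is purely illustrative and is not drawn from the Framework itself: the oversight labels follow the Framework's commonly cited terminology ("human-in-the-loop", "human-over-the-loop", "human-out-of-the-loop"), but the mapping and thresholds are hypothetical assumptions for exposition.

```python
# Illustrative sketch only: mapping the assessed probability and severity
# of harm from an AI-assisted decision to a human-oversight model.
# The mapping below is a hypothetical example, not the Framework's own rule.

def oversight_model(severity: str, probability: str) -> str:
    """Suggest a human-oversight model given harm severity and probability.

    severity, probability: "low" or "high" (coarse illustrative scale).
    """
    if severity == "high" and probability == "high":
        # Humans retain full decision-making control; AI only recommends.
        return "human-in-the-loop"
    if severity == "low" and probability == "low":
        # AI may decide autonomously within pre-set bounds.
        return "human-out-of-the-loop"
    # Mixed cases: humans supervise and can intervene or override.
    return "human-over-the-loop"

print(oversight_model("high", "high"))  # human-in-the-loop
print(oversight_model("low", "low"))    # human-out-of-the-loop
print(oversight_model("high", "low"))   # human-over-the-loop
```

A boardroom decision with significant consequences for an individual would, on this kind of matrix, sit in the high-severity rows, pointing towards retaining human directors in or over the loop rather than delegating fully.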
Through its tangible benefits in quick decision-making, identification of risks and opportunities, and risk management, AI is likely to become a fixture in boardrooms. However, the ethical quandaries surrounding it remain a significant barrier to realising the benefits it can offer. As with other Indian legislation in areas such as bankruptcy law and competition law, which began with nascent measures to regulate multifarious legal aspects of niche areas, there is a need for a comprehensive law on AI. With AI becoming increasingly important, the Information Technology Act, 2000 and the SEBI circulars, the only concrete legal frameworks in India relating to AI, are grossly inadequate.
There is a pressing need for a comprehensive legislative framework dealing exclusively with issues such as the ethical use of AI, the establishment and imposition of liability, and the creation of a standard of liability, among other matters. This need for comprehensive regulation, coupled with stakeholder involvement, has been echoed in government reports. A major hurdle to drafting such legislation, however, is the lack of awareness and specialisation in this subject in India. While drafting it, it is imperative that questions of trustworthiness and human rights be kept at the forefront of any AI decision-making.
For some of these aspects, such as the assignment of liability, the EU Guidelines provide a vague but valuable starting point. This would mean imposing liability on the person “best suited to have prevented harm”, or the person responsible for overseeing the use of the AI, since the use of AI, especially in its nascent stages in India, would involve supervision through a human interface. Further, liability may also be imposed on software developers where defective software causes harm. These measures would ensure that legal issues arising at different levels of operating and using AI in the boardroom are addressed and that liability is assigned accordingly. While setting standards of liability is a tricky question, one possible approach is to mete out penalties based on the seriousness of the infraction, as has been done in Canada, with the severity of the consequences determining liability on a case-by-case basis.
It must be noted that the law in this area is at a nascent stage all over the world, and there is little certainty about how these regulations will play out. Further, many of the standards from which Indian law can take guidance are broad and will have to be specified and amended with practice and greater experience. This will remain a significant but unavoidable limitation even after the relevant legislation is framed.
Authored by Gautami Govindrajan and Divya Kumar, students at National Law University, Jodhpur. This blog is a part of the RSRR Blog Series on Artificial Intelligence, in collaboration with Mishi Choudhary & Associates.