Swarna Yati

Critical Analysis of European Union's AI Draft Policy: A Step Towards Restricting Existential Threat

Artificial intelligence (A.I.) has emerged as one of the most promising technologies of recent years: it can identify irregularities and offer affordable alternatives to outdated, expensive methods, yet it has also brought new risks and imminent threats. The proposed E.U. Artificial Intelligence Act (A.I. Act), developed over a protracted passage through the European institutions that began in 2019, has grown more explicit about the dangers that may arise from using A.I. in sensitive situations and about how to monitor and reduce those risks. The Act is based on the interdependence and relationship between applications of A.I. and various spheres of society, and it provides India with a model framework to prepare its own draft for regulating and restricting artificial intelligence technology. India's archaic Information Technology Act, 2000 has proved ineffective in dealing with robust A.I. systems. The Government of India can employ a regulatory sandbox approach, using experimentation to provide a controlled environment in which AI-based business models can be tested and scaled up, quickening the path from development to deployment and commercialization; it also intends to regulate A.I. and big I.T. through a comprehensive digital act that could help establish international regulatory guidelines. The present paper focuses on developing India's regulatory laws and their implementation. MEITY has already solicited review and stakeholder feedback on the proposed Information Technology [Intermediaries Guidelines (Amendment) Rules], 2018.


Introduction

In its recent attempt to revolutionize the digital space, the European Parliament approved its draft of the A.I. Act. Rapid advances in A.I. technology have brought unprecedented risks and impending challenges.


This law was first suggested by Ursula von der Leyen, President of the European Commission, in the summer of 2019, shortly before a pandemic, a war, and an energy crisis. It also predates ChatGPT, which has since made an existential threat from A.I. a frequent topic of discussion among lawmakers and in the media.


Unlike traditional laws regulating a new technology, the proposed E.U. Artificial Intelligence Act (A.I. Act) is based on the interdependence and relationship between applications of A.I. and various spheres of society. The effects of artificial intelligence on numerous facets of our society, including the job market, the economy, and daily life, are profound and far-reaching, and to guarantee a seamless transition and leverage A.I.'s advantages, it is crucial to foresee and prepare for these possible disruptions. What is being controlled is not A.I. itself but its application in particular societal domains, each with its own set of potential issues. The Act tries to strike a delicate balance: upholding rights and values while fostering innovation, and addressing both risks and remedies. Although far from ideal, it prescribes concrete measures.


Concerns regarding uncontrolled A.I. have been mounting in recent years. Uncontrolled A.I. refers to the development and deployment of A.I. systems without sufficient oversight, regulation, or control mechanisms, leading to potential risks and negative consequences.


Many experts, including Geoffrey Hinton, Yoshua Bengio, and Elon Musk, have voiced concerns regarding uncontrolled A.I., echoing worries Alan Turing raised decades ago about machines eventually escaping human control. According to Geoffrey Hinton, deep learning and A.I. are transforming the economic, digital, and societal space; although they hold exponential power and room to grow, such unchecked growth poses a serious existential threat to the human species.


The AI Act

The technology has also shown signs of sophisticated intelligence, raising concerns that "Artificial General Intelligence" (A.G.I.), a form of artificial intelligence that can perform on par with or better than humans across a wide range of tasks, may not be far off.


The legislation puts forward a 'future-proof' definition of A.I. It will come into force in two to three years, and any company doing business inside the E.U. must abide by it. Because we cannot yet know how A.I. will develop or what the world will look like in 2027, this lengthy timescale raises problems of its own. The Act's language, nevertheless, is broad enough that it may remain applicable for some time. Beyond Europe, it may shape how corporations and researchers approach A.I.


Notably, of all the possible approaches to A.I. regulation, this law is built solely on the idea of risk. It defines four risk categories, unacceptable, high, limited, and low, each subject to a separate set of regulatory requirements.


Systems judged to pose a danger to fundamental rights or E.U. values will be classified as an 'Unacceptable Risk' and will be forbidden. Under the Act, real-time predictive policing, profiling, and unwarranted use of facial recognition technology have been added to this category; such technology would be permitted only following the commission of a crime and with court approval.


The policy next classifies 'High Risk' systems, which will be subject to disclosure requirements and must be registered in a dedicated database. Various monitoring and auditing standards will also apply to them. This category includes applications that access data in essential sectors such as finance, healthcare, education, and employment. Although the use of A.I. in these fields is not viewed as inherently harmful, oversight is necessary because such systems can adversely affect safety or fundamental rights.


High-risk A.I. can impact various rights, including the right to privacy, non-discrimination, freedom of expression, due process, autonomy, security, and access to services. High-risk A.I. systems often require access to large amounts of personal and sensitive data, and if not properly managed, this can result in privacy breaches and violations of individuals' right to privacy. A.I. systems that make decisions in areas such as hiring, lending, or criminal justice can perpetuate biases and discriminate against specific individuals or groups, infringing upon the right to equal treatment and non-discrimination.


For 'Limited Risk' systems, the obligations are lighter and centre on transparency. In that vein, operators of generative A.I. systems, such as bots that produce text or graphics, or deepfakes, will need to make clear to users that they are engaging with a machine.
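Purely as an illustration, the Act's tiered scheme can be sketched as a simple mapping from risk tier to the kind of obligation attached. The example applications below are hypothetical placements chosen for this sketch, not formal classifications under the Act.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "database registration, disclosure, monitoring and audit duties"
    LIMITED = "transparency duties (e.g., disclose that a machine is involved)"
    LOW = "no additional obligations"

# Hypothetical placements, for illustration only: real classification under
# the Act turns on detailed legal criteria, not a lookup table.
examples = {
    "real-time predictive policing": RiskTier.UNACCEPTABLE,
    "A.I.-assisted hiring screening": RiskTier.HIGH,
    "text-generating chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.LOW,
}

for application, tier in examples.items():
    print(f"{application}: {tier.name} -> {tier.value}")
```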


The legislation has evolved over its protracted passage through the European institutions, which began in 2019, becoming more explicit about the dangers that may arise from deploying artificial intelligence in sensitive situations and about how to monitor and mitigate those risks. The message is clear: we must be specific if we are to accomplish anything; nonetheless, much work remains.


The Act exempts military and defence-based A.I. applications from its scope. It also provides stringent penalties for non-compliance, with a maximum fine of 30 million euros or up to 6% of a company's entire annual worldwide revenue for the preceding financial year, whichever is higher.
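The 'whichever is higher' ceiling is straightforward arithmetic; a minimal sketch, assuming the 30 million euro and 6% figures quoted above:

```python
def maximum_fine_eur(annual_worldwide_revenue_eur: float) -> float:
    """Ceiling for the most serious breaches under the draft Act: the higher
    of EUR 30 million or 6% of the preceding financial year's worldwide
    annual revenue."""
    return max(30_000_000.0, 0.06 * annual_worldwide_revenue_eur)

# A firm with EUR 2 billion in revenue: 6% (EUR 120 million) exceeds the
# EUR 30 million floor, so the percentage prong sets the ceiling.
print(maximum_fine_eur(2_000_000_000))  # 120000000.0
```

Since 6% of 500 million euros is exactly 30 million euros, the percentage prong governs any firm whose worldwide revenue exceeds 500 million euros; below that threshold, the flat 30 million euro floor applies.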


What does this mean for India?

The development of the E.U. Artificial Intelligence Act provides India with a model framework for preparing its own draft law to regulate and restrict A.I. India has a booming population brimming with technological possibility. To establish a firm footing in this sphere, India must enact strong laws so that other nations can confidently exchange technologies with it. With the advent of technology that can identify irregularities and offer affordable alternatives to outdated, expensive methods, India has seen growth in both the fintech and health sectors.


The future of A.I. should be shaped by developing countries, which stand to be affected most by the exploitation of A.I. in the absence of proper privacy safeguards or laws. The rise of A.G.I. would substantially affect employment and economic conditions in India, a labour-abundant market. One proposed way to manage the rise of A.G.I. is an "AI Nanny": a system of superhuman intelligence designed to stop an A.G.I./A.S.I. from emerging too soon.


The archaic Information Technology Act, 2000 has proved ineffectual in dealing with robust A.I. Given the lacunae in the Digital Personal Data Protection Bill, 2022, a strong A.I. regulation Act would give the Indian government a chance to redeem itself.


The primary issue with the current acts is that they still do not recognize artificial intelligence, with its autonomous decision-making and complex algorithms, as a distinct entity that poses complex ethical and moral considerations and necessarily requires regulation. This has led to regulatory gaps and uncertainties in the operation of A.I.


The Ministry of Electronics and Information Technology (MEITY) recently established several committees and published a plan for introducing, applying, and integrating A.I. into society. MEITY has also solicited review and stakeholder feedback on the proposed Information Technology [Intermediaries Guidelines (Amendment) Rules], 2018. Though this marks a new approach, the Rules still contain some serious flaws, including data privacy issues.


Future Aspects

The Government of India can employ a regulatory sandbox approach, using experimentation to provide a controlled environment in which A.I. systems can be tested and scaled up. The sandbox is intended to ease the testing of cutting-edge technologies like blockchain, A.I., and application programming interface services. The OECD AI Principles emphasize the importance of experimentation: it creates controlled, open environments for testing A.I., enables the growth of AI-based business models that could support solutions to global challenges, and quickens the path from development to deployment and commercialization.


India plans to regulate A.I. and big I.T. through a comprehensive digital act, with the goal of helping establish international A.I. regulatory guidelines. The draft of the Digital India Bill is expected to be released in June, marking the most significant overhaul of the laws governing the Internet since the Information Technology Act, 2000. The draft can take inspiration from the Artificial Intelligence Task Force report.


The Artificial Intelligence Task Force, in its 2018 report, provided valuable insights into India's A.I. framework, a roadmap for its development, and an assessment of its potential impact on various sectors. The report also highlighted the potential of A.I. to address societal challenges and improve the quality of life of Indian citizens. One of the Task Force's key recommendations was to establish a National Artificial Intelligence Mission (NAIM) to drive the adoption and development of A.I. technologies in India. The NAIM would serve as a platform for collaboration between academia, industry, and government and facilitate the creation of AI-focused research and development centres.


Conclusion

While it is heartening to see global leaders weigh both A.I.'s economic and strategic benefits and its possible harms, we must keep in mind that not all risks are equal. The skyrocketing growth of A.I. presents risks of different kinds: adversarial attacks and data poisoning that lead to incorrect decision-making, data privacy lapses that invite misuse of sensitive information, and bias that produces discriminatory outcomes. The hazards that accompany every technology are undeniable, though, and institutions in academia and politics are working to anticipate these risks rather than waiting for them to materialize.


The G20 Sherpa, Amitabh Kant, has stated that nations should approach artificial intelligence in a balanced manner and refrain from enacting legislation that hinders innovation. The question, however, is at what cost this development will be allowed. The existential risks emanating from the growth and development of A.I. cannot be overlooked; they must be addressed with all seriousness and given as much importance as other significant global challenges, such as pandemics and nuclear war.


It is crucial to develop a regulatory framework that promotes responsible A.I. development and deployment while safeguarding the interests of individuals and society as a whole. This may involve creating specific A.I. regulations, fostering international collaborations, enhancing technical expertise within regulatory bodies, and incorporating ethical considerations into the regulatory framework.

 

This article has been authored by Swarna Yati, a student at Dr. Ram Manohar Lohiya National Law University, Lucknow. It is a part of RSRR's Rolling Blog Series.
