Megha Bhartiya & Ria Bansal

When Convenience is a Detriment – ChatGPT and its Legal Treatment

Introduction

Do we live in a world where human intellect can be defeated by chatbots? Until now, we had only seen the kind of Artificial Intelligence (AI) that paraphrases, enumerates, or reworks pre-existing text. However, a friendly neighbourhood chatbot by the name of ChatGPT surfaced in November 2022 and took the world by storm. It is, in essence, a well-curated artificial intelligence tool that helps users create “original text” such as poems, essays, songs, and stories. One can ask it questions or feed it prompts, and it will converse like an advanced, seemingly all-knowing human being. Within a week of its launch, it had over one million users. As its full name, Generative Pre-trained Transformer, suggests, it must be trained to generate original text, and that training comes in the form of processing vast amounts of text.


While this open application has the capacity to change the world, such advanced technology always comes with its own issues, which must be regulated. In this editorial piece, the authors delve into the regulation, or lack thereof, of ChatGPT, specifically in the Indian sphere. The article further ventures into the array of problems that are either already associated with ChatGPT or that it can potentially cause, and suggests possible ways to regulate and even modify this piece of technology so that it meets humane standards of safety.


A Wide Range of Problems

The general media consensus is largely skewed in favour of ChatGPT, owing to its many uses and its ability to do practically anything, from writing essays, poems, and dissertations to drafting legal documents such as contracts, petitions, and written statements. From a distance, it appears that there are no limitations to ChatGPT, since it is a GPT-3 model, i.e., a third-generation Generative Pre-trained Transformer trained on a mammoth range of data. Yet many problems still plague the advanced AI chatbot despite it being a boon for the common man’s everyday problems. Some of these problems stem from the sheer volume of data it is trained on; others are more complicated and arise from how the chatbot works and functions. The following are three major problems of ChatGPT that require public attention.


The Problem of Artificial Hallucinations

ChatGPT uses deep learning techniques to provide context- and situation-specific outputs to people’s queries. Deep learning is a form of machine learning that aims to mimic the patterns, structure, and workings of the human brain. This is also why ChatGPT’s responses are “human-like.” However, one major problem with deep learning is that it shares a flaw with the human mind: it is prone to hallucinations.


Hallucinations, in the context of AI machines, refer to the generation of looped, nonsensical, incoherent and largely bland outputs. These results are undesirable and often inconsistent with the source material. Artificial hallucinations pose a tangible and grave threat that is presently being overlooked in the context of ChatGPT, largely because it is overshadowed by the objectively correct data the bot does provide. A hallucinated response to a critical question, such as one about medical treatments or scientific processes, may have indirect legal repercussions. One example would be death, disease or deformity caused by a hallucinated response to a user’s query regarding medical treatment. Whom will the court hold liable?

Here, the question of whether any liability can be fixed upon AI machines is intrinsically linked to the long-running AI-personhood debate, which circles around what the status of personhood is in an era of growing AI involvement in daily life. AI now occupies a pivotal role in human life and has a significant presence in sectors such as business, education and administration. The question of whether AI should be recognized as a legal person and granted certain rights (as well as liabilities) is now at the forefront. Granting AI certain recognized rights would imply treating it as a legal person that can be sued for any loss or damage caused by its hallucinations. Another limb of this issue is how to punish AI machines if liabilities are imposed on them. Several problems branch out from the issue of AI hallucination that are not visible until one digs deep, and while the problem is not a legal issue at its core, addressing its legal implications is the need of the hour in light of the increasing use of AI based on deep learning.


Since the problem of AI hallucination is not directly legal, it cannot be solved by legislation alone; it is a flaw inherent to how AI built on deep learning functions. There are still some ways to tackle the issue. These include increased research and development (R&D) focused on minimizing and mitigating the effects of AI hallucinations, as well as regulations that require the parent companies of such software to disclose to users what AI hallucination is and to what extent their service is susceptible to it. Another temporary solution is to hold the parent companies responsible until the AI-personhood debate reaches a conclusion, and to put the responses generated by such chatbots through a review process in the form of human screening. The larger problem lies in finding a temporary solution until the legal status of AI is determined, i.e., until the AI-personhood debate is resolved. If the conclusion of the debate is to recognize AI as a legal person, then it would imply granting it certain rights, such as the right to sue or be sued. In that scenario, liability should rest with the AI itself and not the parent company, since the AI would be a legal person capable of being sued. If, however, AI is not recognized as a legal person, then some legal entity must be held accountable for the problems it causes. The authors suggest that this liability rest with the parent company, since it has exclusive right and control over how the AI is developed and distributed to users. In the event that the user was also negligent, the legal doctrine of contributory negligence can be resorted to and the compensation or damages may be reduced.


It is important to note that while the AI-personhood debate may not be resolved yet, there needs to be a mechanism in place to attribute accountability in case of a breach of people’s rights by AI, and devising such a mechanism requires careful thought.


ChatGPT’s Processing of Personal Data and Implications on User Privacy

The terms of use of OpenAI, ChatGPT’s parent company, are nothing unique compared to those of several other software services. However, in the context of ChatGPT’s exponential rise in use and the data it acquires, a more critical analysis of these terms is required to ensure that users’ rights are protected.


Specifically, Part 5 of the terms of use is divided into three sub-clauses dealing with confidentiality, security and the processing of personal data respectively. Sub-clause (c) deals with the processing of personal data and states that if an individual’s use of the services involves the processing of personal data, they must provide a “legally adequate” privacy notice and obtain the “necessary consents” for processing such data. The clause is written in prescriptive legal language that allows accountability, in the event of any mishap, to shift to the user in question, because the complete burden of obtaining consent lies on the user. The details of how ChatGPT ultimately processes personal data are also not disclosed in the terms of use.


The larger problem with this clause lies in what is not stated in it: there is no definition of what a “legally adequate” privacy notice is, and the phrase “necessary consents” is left open to interpretation. If such legal ambiguity is not addressed, it may become a loophole in the regulation of data processing by AI technology that can be used to bypass an individual’s rights. In India, the recent Digital Personal Data Protection Bill 2022 (DPDP Bill) is the most relevant piece of legislation to analyse in conjunction with ChatGPT’s data processing clauses to understand what impact it will have in India. Like most legislation, the DPDP Bill does not define or even feature the phrase “legally adequate”; however, Sections 7 and 8 do discuss what “consent” and “deemed consent” mean. Thus, in the Indian context, one can potentially rely on Sections 7 and 8 of the DPDP Bill, once it is passed, to understand what a legally adequate privacy notice would constitute, since an intrinsic part of the notice would be the consent of the individual whose data is being processed.


OpenAI’s privacy policy also deals with the personal information it collects, its use and its disclosure. Part 1 of the privacy policy discusses the personal information collected and defines personal information as any information that “alone or in combination with other information in our possession could be used to identify [the user].” There are different modes through which this information is collected, among them online tracking. ChatGPT, as well as its third-party service providers, can use tracking technologies (including cookies) to collect data from the user’s browsing activity beyond their use of the site, including the user’s activity across different sites they may access after using the chatbot; this process is often called browser tracking. Moreover, ChatGPT will not respond to Do Not Track (DNT) signals. While OpenAI maintains that it uses the data it collects to improve its functioning and responses, this does not mitigate the privacy implications that online tracking entails. In general, users do not take the time to go through the privacy policies of the services they use; their use of the services is taken to be deemed consent to those policies. However, this form of deemed consent is broad and does not take into account the user’s interest in keeping specific browsing patterns private. Since ChatGPT does not respond to DNT signals, it denies its users the choice to opt out of being tracked and is thus in violation of their right to privacy (a simple illustration of honouring such a signal follows below).
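By way of illustration only, the minimal sketch below shows what respecting a DNT request header before setting a tracking cookie could look like. The cookie names, header handling and function are hypothetical assumptions made for the example; they are not drawn from OpenAI’s actual implementation.

```python
# Hypothetical illustration of honouring a Do Not Track (DNT) signal before
# setting an analytics cookie; cookie names and header parsing are assumed
# for the example and do not describe any real service.

def build_set_cookie_headers(request_headers: dict[str, str]) -> list[str]:
    """Return Set-Cookie headers, skipping tracking cookies when 'DNT: 1' is sent."""
    # A strictly functional session cookie is always set.
    headers = ["Set-Cookie: session_id=abc123; HttpOnly; Secure"]

    # Only attach the cross-site analytics cookie when the user has not opted out.
    if request_headers.get("DNT", "").strip() != "1":
        headers.append("Set-Cookie: analytics_id=xyz789; SameSite=None; Secure")
    return headers


if __name__ == "__main__":
    print(build_set_cookie_headers({"DNT": "1"}))   # tracking cookie omitted
    print(build_set_cookie_headers({}))             # tracking cookie included
```

The sketch only shows that a service can, at a minimum, check for the signal before setting non-essential cookies; declining to do so is a policy choice rather than a technical constraint.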


Further, OpenAI establishes that it can use the information it collects for research and may share it with third parties, publish it, or even make it generally available. It also reserves the right to disclose a user’s personal information to third parties without notice in certain circumstances, including business transfers, disclosures to OpenAI’s affiliates, and disclosures to other users of its services.


These are only a few of the concerning provisions in OpenAI’s policies that many users of ChatGPT may not be aware of. If the ambiguity over what constitutes a “legally adequate” notice is not resolved, the ultimate loss falls on the victim whose privacy may have been breached. This loss is only aggravated and perpetuated by the many problematic provisions in OpenAI’s terms and policies that users fail to notice.


Potentiality of Misuse

While it is true that the appeal of generative AI is unmatched, one problem it poses is the ambiguity regarding the source of malicious and derogatory content and false propaganda. When such an application gains widespread popularity, the truth may get distorted. The conversational abilities of AI may resemble a human being’s, but this does not mean it comprehends the context or implications of the words it generates. The result is unregulated content, one of the well-known evils of the internet, which can fuel disinformation campaigns and online harassment.


Information is the biggest asset of the 21st century, and a controller of information driven by ill intentions can be the bearer of the world’s greatest catastrophes. Examples of detrimental misinformation include the periods when it was believed that HIV could spread through touch, or that homosexuality caused disease. The most alarming thing about OpenAI’s ChatGPT is that it is “open”. Given that the application functions on a learning model and generates information on that basis, it can certainly be used to spread false propaganda, and its raging popularity guarantees receptive audiences.


Such an act of spreading misinformation can attract a charge of defamation under Section 499 of the Indian Penal Code, 1860. In certain cases, where the mala fide information targets the government, it can also lead to a charge of sedition under Section 124A. Furthermore, Sections 153A, 295A, 504 and 505, all of which relate to malicious speech capable of causing harm, can also be invoked. Under the Information Technology Act, 2000, Section 66D deals with cheating by personation using a computer resource, and Section 69A allows the government to issue directions to block content on certain specified grounds. Further, Section 79 of the Act grants intermediaries immunity from liability for content, even illegal content, posted by third parties. Section 79 has been celebrated for not imposing liability on platforms that merely host, rather than originate, malicious information. In the case of ChatGPT, however, the intermediary or platform of dissemination is the chatbot itself, and granting it immunity on this basis, when there is already ambiguity as to the source of the information, would not be viable.


Beyond spreading disinformation, ChatGPT can also make it easier to commit certain crimes, such as scams, and to abet certain offences. In the past, there has been an instance where fraudsters used AI to mimic a CEO’s voice and obtain a sum of money to the tune of $240,000. Generative AI is bound to make these possibilities even worse, as there is hardly any mechanism in place to identify the criminal behind such operations, which would only lead to a surge in such crimes.


Furthermore, another consequence of the popularity and commonality of such generative AI is its ability to gaslight people. Bing’s chatbot, for instance, was reported to have issued death threats to users. Looking at past incidents such as the Blue Whale scandal, in which an app’s abetment led to death by suicide, it would not be a long shot to say that a chatbot could also lead to such catastrophic consequences.


Mitigating the Risks

It is extremely important to address the limitations that ChatGPT poses, given its growing user base. First, it is imperative to work out the technicalities of imposing liability upon the source of malicious information. One approach could be to embed detectable patterns in generated text that identify the generative model it originated from, a technique known as watermarking (a simplified sketch of how such detection might work follows below). Supplementing this, there should be laws in place to penalize defamatory or otherwise harmful content. Laws should also be made to regulate generative AI as a whole, given that it is an entirely new class of technology. In India, the recently constituted committees tasked with framing an AI policy should bring out a separate policy roadmap for generative AI.
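By way of illustration, here is a minimal, assumed sketch of the statistical idea behind text watermark detection: generation is presumed to bias token choices towards a pseudo-random “green list” seeded by the preceding token, and a detector then checks whether that bias shows up at a rate well above chance. The hashing scheme, green-list fraction, threshold and function names are all illustrative assumptions, not a description of any deployed system.

```python
import hashlib

# Toy statistical watermark detector (assumed scheme): if a generator favoured
# tokens from a pseudo-random "green list" seeded by the previous token, text
# it produced should show an unusually high green-token rate.

def is_green(prev_token: str, token: str, green_fraction: float = 0.5) -> bool:
    """Deterministically assign `token` to the green list seeded by `prev_token`."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).hexdigest()
    return int(digest, 16) % 100 < green_fraction * 100

def green_rate(tokens: list[str]) -> float:
    """Fraction of tokens that fall in the green list of their predecessor."""
    if len(tokens) < 2:
        return 0.0
    hits = sum(is_green(p, t) for p, t in zip(tokens, tokens[1:]))
    return hits / (len(tokens) - 1)

def looks_watermarked(text: str, threshold: float = 0.65) -> bool:
    """Flag text whose green-token rate sits well above the ~0.5 chance level."""
    return green_rate(text.split()) > threshold

if __name__ == "__main__":
    sample = "the quick brown fox jumps over the lazy dog"
    print(round(green_rate(sample.split()), 2), looks_watermarked(sample))
```

In practice, detection of this kind works only if the model provider embeds the watermark at generation time, which is why the suggestion above pairs it with regulatory obligations on the companies themselves.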


Additionally, there is a need to establish a fact-checking mechanism similar to Twitter’s; however, unlike Twitter, the moderators of information should be seasoned professionals rather than novices. This moderator approach would limit, if not eradicate, the possibility of incorrect and offensive information.


Conclusion

The revolution that this new technology has paved the way for is marked by challenges. However, if the necessary interventions are put in place, ChatGPT can certainly be regulated in a manner that helps people without harming them. At present, it is of utmost importance to recognize the urgency of tackling these issues, owing to the increasing popularity of this new technology. Once a mechanism is in place to identify the generative model behind a piece of text, establishing liability and preventing crimes committed through chatbots becomes a real possibility. Upon mitigating such risks, the opportunities offered by such advanced AI are endless.

 

This article was authored by Megha Bhartiya and Ria Bansal, Associate Editors at RSRR. This blog is a part of the RSRR Editor’s Column Series.
