Introduction: The Realm of Generative AI
“Prompt” was one of the candidates for Oxford’s word of the year 2023, with prompt books and digital colleagues for harnessing the potential of Generative Artificial Intelligence (“GenAI”) coming into the picture. Generative AI refers to a category of algorithms designed to produce diverse content spanning audio, images, and text. Trained on vast datasets, these algorithms produce content that is utilised across various domains, yielding both positive and negative outcomes. The accessibility of GenAI has undergone a remarkable transformation: using it no longer requires a high level of technical expertise. Today, it has become an accessible and tangible facet of our daily lives, with benefits that can be reaped by persons with disabilities as well.
The influx of GenAI prompts a critical examination of how we navigate these disruptive technologies, which are capable not only of predicting speech but also of posing potential existential threats. Prominent stakeholders previously signed an open letter in favour of developing responsible AI while simultaneously mitigating its risks.
Prominent large language models such as ChatGPT and Bard have been perceived as both a boon and a bane, highlighting the need to safeguard the labour force, primarily in terms of their intellectual creations. These creations often serve as training data for these models, and there is a pressing need for proper disclosure and ethical consideration in this intricate interplay between human creativity and AI development.
Generative AI: The Biggest Competitor to Low-Cost Labour?
GenAI was first perceived as a potential disruptor of existing labour dynamics, facilitating lower wages and increased workplace competition owing to its efficiency in mimicking employees’ delivery of quality work. However, GenAI in the workplace is increasingly viewed as a tool to enhance employees’ productivity and skill sets, particularly during the ideation and brainstorming phases. This shift entails handing repetitive, everyday tasks that can be automated by GenAI, such as summarising and error-checking, to the machine, allowing workers to focus on problem-solving and engage in more intellectually stimulating tasks. The result is a more efficient and productive work environment in which humans and machines collaborate seamlessly.
GenAI is redefining what we consider ‘work’ and job designations altogether, especially in media, communication, advertising, marketing, and data-intensive tasks. It is worth noting that new designations, such as prompt engineers, are on the rise. After all, GenAI’s successful integration into the workplace as a facilitator hinges on the ability of human employees to effectively guide and prompt the system. Key aspects of the job, such as applying human creativity, emotion, and experiential learning, elude replication by GenAI. Further, human vetting and review of GenAI’s contributions remain mandatory at its current stage. GenAI can also augment the capabilities of creative professionals and level the playing field, since more filmmakers would have access to affordable, AI-powered VFX.
While GenAI offers a cost-effective alternative to human labour, its reliance on vast amounts of training data presents a double-edged sword. Operating at a vast scale of machine learning, it mirrors the data it is exposed to, raising concerns about data scraping. Data scraping can be understood as the automated extraction of vast chunks of data from publicly available sources to develop and train AI. It becomes problematic when done illegally, by infringing copyright-protected content, or unethically, for instance when a user’s personal data available across the internet is accessed and misused, or when the model mimics an individual’s style, posing a potential risk of identity theft.
The ability of GenAI to mimic creative styles poses a significant threat to creative professionals. Deepfakes, voice-cloned song covers of prominent pop artists, and AI-generated images have sparked conversations about the blurred boundaries between work created by autonomous machines trained on human-made datasets and work created by human beings emanating from their emotions, experiential learning, and intellect. From content on social media to products driven by AI-facilitated targeted advertising and the influencing of elections, it is becoming increasingly apparent that our entire lives are substantially influenced and, in a way, being dictated by AI. These shifts have challenged the contours of creativity, originality, personality rights, and ownership (particularly in the music industry), with a significant impact on aesthetics and cultural diversity as we know it today.
In India, the Delhi High Court recently held that an artist’s consent must be obtained before elements of their persona are used for commercial purposes. The case involved the concerns of the actor Anil Kapoor about GenAI being used to replicate his iconic dialogues for commercial gain without his consent. While the court initially expressed reservations, it ultimately acknowledged the potential for other celebrities to face similar issues if their personas were tarnished or exploited, prompting a call for an amended copyright law that encompasses GenAI challenges. The judgment serves as a first step towards safeguarding the creative contributions of artists in today’s era of generative technology, where artists can be exploited posthumously as well.
Lack of Protection in Labour Laws in light of SAG-AFTRA
In an era dominated by OTT platforms and the enduring popularity of shows such as F.R.I.E.N.D.S., where actors earn over USD 20 million from royalties and reruns, a stark reality persists: screenwriters, the very foundation of any production, face outdated and inequitable pay structures. Their compensation models fail to reflect their crucial role, often offering insulting sums compared to other stakeholders in the project.
Recently, the entertainment industry in the USA was in an uproar over the inadequacy of current legal frameworks to protect workers in creative units, people whose voice, image, prose style, storytelling, and aesthetics were being exploited; the movement received solidarity worldwide. These workers fear being replaced unjustly by big corporations utilising GenAI to produce cost-effective and time-efficient results. There is also the risk of uncredited work, where AI contributes to a script but screenwriters do not receive adequate recognition or compensation.
The SAG-AFTRA strike initiated detailed discussions on the vulnerabilities in entertainment labour laws. The strike ensued after failed negotiations between the Writers Guild of America (“WGA”) and the Alliance of Motion Picture and Television Producers (“AMPTP”), leading to collective action involving the Screen Actors Guild (“SAG”) and the American Federation of Television and Radio Artists (“AFTRA”). At the heart of the negotiations was the need to safeguard screenwriters against the unauthorised replication of their writing styles using GenAI, an area currently lacking legal protection and thus an exploitative blind spot. Another aspect was the impact of the streaming ecosystem on artists’ earnings.
The turn of events also led to a round-table discussion in October 2023 between the Federal Trade Commission of the United States and members of the entertainment industry, including artists of all kinds. In that discussion, the unions’ stance was examined extensively, along with their fears, demands, and recurring requests for protective policies. Tim Friedlander, President and Founder of the National Association of Voice Actors (“NAVA”), formulated the concept of the three Cs, i.e., Consent, Control, and Compensation, which policies should evolve to protect.
The resulting WGA-SAG-AFTRA collaboration successfully crafted a Minimum Basic Agreement (“MBA”), affording contractual protection to its members. The WGA’s tentative agreement secured a substantial deal estimated at USD 233 million per year, a significant increase from the USD 83 million per year offered by the AMPTP. Additionally, it granted screenwriters rights over their literary material, mandated disclosure in cases involving GenAI, and expressly prohibited the exploitation of writers’ materials as data for training AI. SAG-AFTRA, in its tentative agreement, secured a deal worth over USD 1 billion, valid until June 2026, while ensuring mandatory consent and proper compensation for the use of digital images.
The current agreement owes much of its success to collective bargaining and robust global support for the movement. The Writers’ Guild of Great Britain (“WGGB”) also voiced the importance of the movement and what it could mean for the fraternity at large. The negotiation was a breath of fresh air for the global entertainment industry. Still, it needs to be reiterated that this win does not cover everyone in the creative workforce and does not protect against exploitation by anyone outside the AMPTP. The statutory void remains open to exploitation, raising the question of how legislators will deal with the issue. Industry members have also raised concerns about the agreements’ potential adverse impact on freelancers and non-union workers, particularly those without significant recognition.
Post-negotiation discussions reveal the complexities faced by SAG-AFTRA. Duncan Crabtree-Ireland, the union's chief negotiator, expressed surprise at the industry's outright dismissal of their proposals for AI regulation at the World Economic Forum. This highlights a crucial issue: a lack of awareness and education about AI's impact on the entertainment industry. Mr. Crabtree-Ireland emphasizes the need for unions to educate themselves and their members about AI. This knowledge is critical for securing future contractual protections in an environment where companies might exploit artists by replicating their styles with AI. Such exploitation could leave creators without credit, royalties, or even continued employment if their work becomes easily replicable.
The Screenwriters Association of India (SWA) stands in solidarity with the WGA SAG-AFTRA movement. However, significant hurdles remain domestically. Unlike the US, which has its AMPTP, India lacks a unified producer association for effective negotiations. Additionally, outdated copyright laws hinder progress.
The SWA is reportedly working on a Minimum Basic Contract for its members, aiming to guarantee minimum wages and protect against unfair treatment. SWA reports an exacerbated scenario in India due to low scope and undignified payments to newcomers and unjust agreement clauses authorising one-sided termination, lack of credit guarantee, and no indemnification, to name just a few.
The Right Draft: 2023, a report produced by Ormax, a media consulting agency, and Tulsea, a talent agency, sheds light on the sentiments of screenwriters and identifies six facets: right pay, right credit, right feedback, right nurturing, right value, and right environment, all of which are missing from current frameworks. Over half of the screenwriters surveyed voiced dissatisfaction, citing unfair compensation, a lack of due credit for their contributions, and diminished growth prospects owing to a failure to nurture new minds. Notably, only 31% of screenwriters reported having secured formal contracts incorporating a hybrid pay model, including basic pay and performance-based incentives.
The Indian entertainment industry is just beginning to confront the potential exploitation posed by GenAI. While publishers have sought copyright protection against AI-generated content, the government has downplayed concerns about the current IPR framework. These factors make securing a minimum basic contract, crucial for safeguarding creators against AI exploitation, seem like a distant dream at present.
Proposed Regulatory Framework and Ethical Considerations
AI was not envisioned when most applicable labour legislation was conceived. In the absence of explicit laws protecting aesthetic sense, prose style, voice biometrics, or image, GenAI presents companies with a means of plagiarising and replicating the wit of any person, capturing the timbre of any voice, and even crafting 3-D models of individuals. This exploitation occurs because no framework labels such actions illegal. Creators, especially those whose works feed the underlying training datasets, must receive due acknowledgement for their contributions.
Big names such as Levi’s, Louis Vuitton, Vogue, and Hyundai have tried to sidestep diversity considerations using Shudu, a dark-complexioned CGI model generated with GenAI by Cameron-James Wilson, a white man of British origin. There have been formal contracts for “employing” Shudu as a model, the payment for which goes to The Diigitals. The act has been called a classic “digital blackface”: a tactic used to create the illusion that a Black model has been employed, while no opportunity or compensation is provided to an actual model of dark complexion.
The US Blueprint for an AI Bill of Rights seeks to address labour concerns and cultivate an AI-ready workforce while preventing misuse of data by ensuring data privacy, confidentiality, and cybersecurity. The framework prioritizes worker well-being by mandating transparency in AI outputs. Watermarking ensures a clear distinction between human- and machine-generated work, mitigating concerns about job displacement and unfair competition. Additionally, continuous testing for bias and harmful content safeguards against AI perpetuating inequalities or producing detrimental outputs. These measures promote a more equitable and ethical work environment where humans and AI collaborate effectively. Furthermore, it mandates thorough risk assessments before AI systems are deployed.
The newly agreed-upon EU AI Act sets out obligations calibrated to the risks posed by AI, prominently including sufficient human oversight and transparency about the training data of large language models such as ChatGPT. It sets conditions for using copyright-protected content, requiring authorisation unless specific exceptions apply. Notably, providers of general-purpose AI models (including GenAI) engaging in text and data mining need authorisation from rightsholders who have expressly opted out of the mining exception, unless the mining is done for the purposes of scientific research. This protects creators and ensures fair compensation for their works, potentially impacting industries heavily reliant on content creation.
The Act introduces criteria for general-purpose AI models with systemic risks, proposing adjustable thresholds and benchmarks to reflect technological advancements. It emphasizes the importance of reliable and interoperable content provenance and authenticity techniques, such as watermarks and cryptographic methods. Furthermore, deployers creating 'deepfakes' are obligated to transparently disclose the artificial origin of the content in an attempt to balance freedom of expression and artistic rights while safeguarding against potential misuse.
The UK ICO’s consultation series takes an approach complementary to the EU AI Act, highlighting three principles: lawful basis, accuracy, and purpose limitation. The ICO fosters trust in AI systems by defining clear purposes for each stage of the AI lifecycle and ensuring data protection by design. This creates a more collaborative work environment in which human workers feel comfortable interacting with AI tools. Further, it emphasizes understanding the impact of training data on AI outputs and communicating these findings transparently. In China, by contrast, regulations have focused on ensuring that GenAI aligns with core socialist values and on holding AI service providers accountable for outputs and the handling of personal information within GenAI services.
In India, the Digital India Act, with its principles of openness, safety, trust, and accountability, is expected to regulate GenAI's increasing role in the professional sphere. While specifics on addressing labour concerns remain unclear, it is anticipated that the Act will ensure AI deployment upholds labour rights and fosters fair workplaces.
As different frameworks emerge across the globe, it is imperative that public policy evolves to recognise the effects and use of GenAI, adopting a co-regulatory model to balance innovation with the imperative of mitigating technological risks. Given GenAI's impact on workforce dynamics, openly discussing its implications on workforce composition is crucial. Upskilling and reskilling programs are vital to equip employees with the necessary skills for adaptation. Organizations must prioritize safe and ethical GenAI use and foster a culture of experimentation through incentives and support programs.
Implementing guardrails such as those governing DALL-E is crucial. These guardrails emphasize factors like diversity, continuous monitoring, and the prevention of harmful content, laying the foundation for the safe and ethical use of GenAI. Imperative steps towards accountability include transparent data practices guided by intellectual property considerations; robust human oversight mechanisms that establish checks on the accuracy and inherent prejudices of GenAI; and contracts that mandate disclosure of AI usage and ensure transparency in the training data of AI models. Such clauses would address anxieties about job displacement by ensuring that human labour remains integral to the creative process.
Conclusion: Towards a Harmonious Coexistence
The discourse on the lack of protection for workers’ rights in the data-driven workplace began well before ChatGPT came into the picture. In an environment where regulation remains a mere demand, the already vulnerable workforce has faced further exploitation by corporations. In the corporate race for low-cost labour, the work of talented and skilled individuals is being appropriated. Labour laws must be revised and amended to combat this exploitation within the capitalist setup and to prevent GenAI from becoming a weapon against creativity.
While GenAI holds the potential to increase worker productivity and even pave the way for a potential three-day work week, it should not be used to mask the extent of exploitation within the creative arena. It should supplement, and not supplant, the human force. The formulation of policies to regulate this advanced technology must prioritise the three Cs of consent, control, and compensation, in addition to fostering human oversight, privacy, transparency, safety, diversity, and accountability.
This article has been authored by Ria Verma and Aastha Maurya, fourth-year students at Symbiosis Law School, Noida. This blog is a part of RSRR’s Blog Series on 'Traversing the Intersectionality of the Entertainment Industry and Generative Artificial Intelligence', in collaboration with The Dialogue.