THE RISING TIDE OF DEEPFAKES: NAVIGATING NEW LEGAL CHALLENGES
Introduction
A ‘Deepfake’ is an artificial image, video (a series of images) or audio clip generated by a special kind of machine learning called “deep” learning (“Deepfake(s)”). It is the artificial production, manipulation and modification of data to create a false representation of a person, object, place, or entity. While we are not alien to the concepts of “face swapping” or “photo morphing”, these photo-altering applications are primarily designed for amusement, and it is easy to distinguish the real from the fake in such cases. What sets deepfakes apart, and what makes them so dangerous, is the use of deep learning to produce fake images, videos and audio that we often cannot assess or distinguish from the real.
Deepfakes, a technology-driven phenomenon, have raised significant concerns globally due to their potential misuse for deceptive purposes. In a world which consumes most of its information and content online and formulates opinions based on such information, technologies like deepfakes can have a huge impact and influence on the way the population thinks and behaves. Deepfakes can spread misinformation and create issues at both micro and macro levels. The rising use of deepfake and artificial intelligence technology has raised the need for stronger and more comprehensive legal frameworks to address issues such as privacy, data protection and cybercrime.
While Deepfake content has been making its rounds globally, there have been several cases of Deepfakes creating havoc in India, where the technology has a significant presence, with notable applications in politics, the entertainment industry, pornography, and instances of defamation. The prevalence of Deepfake technology has given rise to many concerning cases. For instance, recently a video of actress Rashmika Mandanna was made available on social media which was in reality a manipulated video of a British influencer. In another case, ahead of the 2020 Legislative Assembly elections in Delhi, politician Manoj Tiwari’s speech, delivered in English, was manipulated to appear in the Haryanvi dialect, and the video was disseminated via WhatsApp, Twitter and other social media networking websites. Similarly, there is also a case involving journalist Rana Ayyub where a manipulated video was used maliciously, highlighting the gaps in legal protection against revenge porn[1].
Further, Alia Bhatt has fallen prey to the misuse of deepfake technology, sparking concerns regarding the inappropriate application of artificial intelligence. The disconcerting video depicts an individual dressed in a blue floral co-ord set, featuring Alia’s facial features, engaging in explicit gestures. Similarly, another deepfake video involving Kajol has emerged, portraying her seemingly undergoing wardrobe changes. It has come to light that the Kajol video is a manipulated creation, wherein her face has been superimposed onto footage originally posted by a social media influencer on TikTok as part of the “Get Ready with Me” (GRWM) trend. Adding to the above, the internet was recently stormed with AI-generated, sexually explicit deepfakes of Taylor Swift, including depictions of her attending football games in the United States of America. Taylor Swift’s likeness was used in deepfake videos in which her face had been digitally manipulated to appear in situations she was not actually present in. These deepfake videos have raised concerns about the potential for misuse of this technology, including spreading misinformation or creating non-consensual adult content.
Such incidents as highlighted above underscore the immediate necessity to address the inherent risks associated with deepfakes in the digital landscape.
Legal issues concerning Deepfakes in India:
A deepfake is essentially manipulated multimedia content, primarily images and videos, created using artificial intelligence (AI) techniques. These techniques enable the synthesis of realistic-looking content that can deceive individuals into believing false information. As deepfake technology advances, it becomes imperative to evaluate the legal frameworks in place to counter its misuse.
Artificial intelligence technology of this kind implicates various areas of law, such as copyright, data protection, privacy, defamation, and freedom of speech and expression:
1. Personality Rights and Intellectual Property Rights:
Celebrities and public figures are particularly vulnerable to the misuse of deepfake technology, given the abundance of available data and the ease with which their attributes can be manipulated at no cost. Deepfakes infringe upon personality rights, encompassing control over identity, including name, likeness, and voice. This violation extends to deceptive trade practices and unfair competition, where deepfakes may misrepresent public figures promoting products or services.
Deepfakes often manipulate copyrighted materials, automatically violating copyright protection by altering existing photos and videos. This raises concerns about moral rights, as modifying copyrighted content may be considered distortion or mutilation, depending on the nature of its use. Despite these concerns, there is currently no judicial precedent protecting deepfakes as copyrightable works, potentially leading to legal disputes over ownership and control, further blurring the lines between human and machine creation.
Deepfakes and Copyright Infringement:
Deepfakes present a complex challenge when it comes to copyright infringement as it involves the use of copyrighted material, such as images, videos, or audio recordings of individuals, without their consent. Using such material without permission from the copyright holder may constitute copyright infringement. Deepfakes are considered derivative works since they are created by altering or combining existing copyrighted material. Copyright law typically grants the original copyright holder the exclusive right to create derivative works. Therefore, creating a deepfake without permission could infringe upon this right.
It is pertinent to note that in the USA, some deepfake content has been exempted under the doctrine of fair use. The concept of fair dealing has frequently faced criticism for its perceived rigidity, especially when compared to the broader doctrine of fair use in the USA. This rigidity is evident in the current stance of Indian Copyright law, which provides limited flexibility in addressing emerging issues like deepfakes. Under the current legal framework, all deepfakes, regardless of intent or purpose, may be deemed to infringe copyright under the Indian doctrine of fair dealing. However, there is a growing recognition that this approach may not adequately accommodate deepfakes created for legitimate purposes, such as artistic expression or entertainment. Therefore, there is a need for Indian Copyright law to evolve to better accommodate deepfakes created for bona fide purposes while still protecting the rights of copyright holders.
Recently, Indian food aggregator Zomato made use of Deepfake technology in one of its advertisements starring Hrithik Roshan[2]. With the use of this technology, Zomato took the original advertisement and manipulated Hrithik Roshan’s voice and facial movements in a manner such that in each new ad Hrithik Roshan mentions a different local restaurant and its famous dishes. However, this is clever and authorized usage of Deepfake technology, as Zomato owns the rights in the advertisement and has the exclusive right to modify it using any technology it desires. In addition, Hrithik Roshan, being the celebrity featured in the advertisement, also licensed the usage of his attributes for these purposes. On the flip side, there are videos on social media sites such as Instagram and Facebook featuring Shahrukh Khan, where he appears to be advertising a betting platform[3], and Sachin Tendulkar, where he appears to be promoting a gaming app[4]. Similarly, there are videos featuring Virat Kohli, who appears to be making claims of his big gains from a betting app. These videos, although evidently doctored, not only use the attributes of these celebrities to make false claims but also use them to advertise services/products which are prohibited in India.
2. Defamation: Since Deepfakes are not limited to the manipulation of video or static images and also include manipulation of audio and audio-visual content, deepfakes can be created as fake audio or audio-visual clips that harm an individual’s reputation and may amount to defamation. However, the existing laws on defamation and obscenity provide limited recourse in such cases, emphasizing the need for specialized legislation.
3. Data Protection and Privacy: Creation of Deepfakes requires extensive usage of personal data, since deepfakes are generated through Generative Adversarial Networks, i.e., models that constantly need to be fed large amounts of data which they study and then replicate. This in turn may include unauthorized disclosure, modification, substitution or use of a person’s sensitive data, and further increases the liabilities associated with data breaches. Pictures and images are sensitive personal data capable of identifying an individual, as contemplated under the Digital Personal Data Protection Act, 2023, and the usage of any sensitive personal data of an individual requires the consent of that individual[5]. When such an image is modified using Deepfake technology, ideally, the consent is twofold: it should be obtained not only from the person in the original photo/video, but also from the person in the fabricated photo/video.
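To make the mechanism concrete, the adversarial training described above can be sketched in miniature. The following is a hypothetical, heavily simplified toy example (not an actual deepfake system): a one-dimensional “generator” learns to imitate a pool of “personal data” samples by competing against a “discriminator”, which illustrates why GAN-based deepfakes depend on large volumes of a person’s data.

```python
# Toy illustration of a Generative Adversarial Network (GAN) in 1-D.
# The "real" samples stand in for a corpus of personal data (e.g. face
# images); the generator learns to replicate their distribution.
import math
import random

random.seed(0)

# "Personal data": 2,000 samples from the target distribution (mean 4.0).
real = [random.gauss(4.0, 1.0) for _ in range(2000)]

def sigmoid(x):
    x = max(-30.0, min(30.0, x))  # clamp to avoid math.exp overflow
    return 1.0 / (1.0 + math.exp(-x))

# Generator G(z) = a*z + b tries to mimic the real data;
# Discriminator D(x) = sigmoid(w*x + c) tries to tell real from fake.
a, b = 1.0, 0.0
w, c = 0.1, 0.0
lr, batch = 0.05, 64

for _ in range(3000):
    xr = random.sample(real, batch)
    z = [random.gauss(0.0, 1.0) for _ in range(batch)]
    xf = [a * zi + b for zi in z]

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    dr = [sigmoid(w * x + c) for x in xr]
    df = [sigmoid(w * x + c) for x in xf]
    gw = (sum(-(1 - d) * x for d, x in zip(dr, xr))
          + sum(d * x for d, x in zip(df, xf))) / batch
    gc = (sum(-(1 - d) for d in dr) + sum(df)) / batch
    w -= lr * gw
    c -= lr * gc

    # Generator step: push D(fake) toward 1 (non-saturating GAN loss).
    z = [random.gauss(0.0, 1.0) for _ in range(batch)]
    xf = [a * zi + b for zi in z]
    df = [sigmoid(w * x + c) for x in xf]
    ga = sum(-(1 - d) * w * zi for d, zi in zip(df, z)) / batch
    gb = sum(-(1 - d) * w for d in df) / batch
    a -= lr * ga
    b -= lr * gb

# After training, generated samples cluster near the real mean (~4.0):
# the generator has "studied and replicated" the personal data.
```

The more samples of a person the generator is trained on, the more faithfully it can replicate them, which is precisely why the consent analysis above matters.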
Laws relating to Deepfakes in India:
In India, the legal response to deepfakes is still evolving. Current laws, such as the Information Technology Act, 2000 and the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (“IT Rules”), primarily focus on issues related to cybercrime, but they lack specific provisions targeting deepfakes. Certain provisions applicable in the current scenario are as follows:
- Section 66E of the IT Act provides the punishment for violation of the privacy of an individual by transmission of images of her private area without her consent, and Sections 67, 67A and 67B of the IT Act penalize transmitting, in electronic form, any material containing sexually explicit acts or depicting children in sexually explicit acts.
- Section 66D of the IT Act provides punishment of imprisonment up to three years and a fine up to one lakh rupees for anyone who, by means of any communication device or computer resource, cheats by impersonation.
- Section 72 of the IT Act provides a penalty for breach of confidentiality and privacy, with imprisonment for a term which may extend to two years, or a fine which may extend to one lakh rupees, or both.
- Section 79(1) of the IT Act exempts online intermediaries from liability for any third-party information, data, or communication link made available by them, subject to: (a) the intermediary not having conspired, abetted, aided or induced, whether by threats or promise or otherwise, the commission of the unlawful act; and (b) the intermediary expeditiously removing or disabling access to the material upon receiving actual knowledge, or on being notified by the appropriate Government or its agency, that it is being used to commit an unlawful act. Rule 7 of the IT Rules, in turn, allows aggrieved individuals to take intermediaries to court under the provisions of the Indian Penal Code.
On November 7, 2023, the Ministry of Electronics and Information Technology issued an advisory[6] to all ‘social media intermediaries’ as defined under the IT Rules to:
- Exercise reasonable due diligence to identify misinformation and deepfakes, in particular any information which violates the provisions of the IT Rules and the IT Act and/or the intermediary’s user agreements;
- Ensure that any cases related to deepfakes are actioned as per the provisions and timelines provided under the IT Rules, and that all such content is taken down within 36 (thirty-six) hours of being reported;
- Caution all users not to host and/or disseminate information/content related to deepfakes and promptly disable access to such content;
- Note that any failure to act in accordance with the IT Rules and the IT Act would cause an organisation to lose the protection available to it under Section 79(1) of the IT Act; and
- Encourage people affected by such material to file First Information Reports (FIRs) and avail the remedies provided under the IT Rules.
Moreover, on December 26, 2023, the Ministry of Electronics and Information Technology issued another advisory[7] requiring intermediaries to clearly communicate prohibited content to users, particularly the content specified under Rule 3(1)(b) of the IT Rules. The advisory stated that any content not permitted under the IT Rules, in particular that listed under Rule 3(1)(b), must be communicated to users in clear and precise language, including through the intermediary’s terms of service and user agreements, and must be expressly brought to the user’s attention at the time of first registration, with regular reminders at every login and while uploading or sharing information on the platform. The advisory emphasizes that digital intermediaries must ensure users are informed about the applicable penal provisions, including those in the IPC and the IT Act, 2000, in case of Rule 3(1)(b) violations. The intermediary shall inform users of its rules and regulations, privacy policy and user agreement in English or any language, and shall make reasonable efforts to NOT host, display, upload, modify, publish, transmit, store, update or share any information that: (i) belongs to another person and to which the user does not have any right; (ii) is obscene, pornographic, paedophilic, or invasive of another’s privacy, including bodily privacy; (iii) is harmful to a child; (iv) infringes any intellectual property rights; (v) deceives or misleads the addressee about the origin of the message or knowingly and intentionally communicates any misinformation; (vi) impersonates another person; (vii) threatens the unity, integrity, defence, security or sovereignty of India, friendly relations with foreign States, or public order; (viii) contains a software virus or any other computer code, file or program designed to interrupt, destroy or limit the functionality of any computer resource; (ix) is in the nature of an online game that is not verified as a permissible online game; (x) is in the nature of an advertisement or surrogate advertisement or promotion of an online game that is not a permissible online game; or (xi) violates any law for the time being in force.
Concerning violation of personality rights, including the right to publicity, the Delhi High Court, in Anil Kapoor vs. Simply Life India & Ors[8], restrained the defendants (and anyone acting on their behalf) from utilizing the plaintiff Anil Kapoor’s name, likeness, image, voice, personality or any other aspects of his persona to create any merchandise, ringtones or ring back tones, or from otherwise misusing the said attributes using technological tools such as Artificial Intelligence, Machine Learning, deepfakes, face morphing or GIFs, whether for monetary gain or otherwise, to create any videos, photographs, etc. for commercial purposes, so as to result in a violation of the plaintiff’s rights[9]. The plaintiff had approached the court seeking an injunction against 19 defendants who were, in some manner or another, utilizing various features of his persona and misusing them in malicious ways, including the creation of deepfakes using Artificial Intelligence that represented the plaintiff in a derogatory manner and picturized him in songs or photographs wearing the clothes worn by actresses including Katrina Kaif, Madhuri Dixit and the late Sridevi. The Delhi High Court observed that freely available technological tools make it possible for any illegal and unauthorised user to use, produce or imitate any celebrity’s persona, including by using Artificial Intelligence, and that the plaintiff’s image was being morphed along with other actresses in videos and images in a manner that was not merely offensive or derogatory to the plaintiff, but also to such other third-party celebrities and actresses.
Moreover, in November 2023, the Delhi High Court entertained a public interest litigation (PIL) filed by Chaitanya Rohilla, a lawyer, against the unregulated use of artificial intelligence (AI) and deepfakes in India[10]. Through the PIL, the petitioner sought directions to the central government to identify and block websites providing access to deepfakes and to regulate artificial intelligence to protect the fundamental rights of citizens. The PIL, in essence, stated that while technological development is happening rapidly, Indian law is lagging behind. The Delhi High Court observed that Artificial Intelligence and Deepfake technology are fairly new and complex topics, that the issues raised by the petition required deliberations that only the government could undertake, and listed the case for further hearing on January 8, 2024.
Meta’s Initiative against Deepfakes: Meta’s forthcoming initiative to label deepfake or AI-generated images on its Facebook, Instagram, and Threads platforms as “Imagined with AI” marks a significant step in enhancing user awareness and content authenticity. By distinguishing between AI-generated and human-created content, Meta aims to provide users with crucial information about the content they encounter. The company’s efforts include developing classifiers capable of automatically detecting AI-generated content, even in the absence of visible markers.[11] This proactive approach not only addresses the growing concern surrounding deepfakes but also empowers audiences to discern between original and manipulated content, thereby mitigating the virality of such misleading videos.
To combat the challenges posed by deepfakes, India needs specific legislation that criminalizes their creation, distribution, and malicious use. Such legislation should outline penalties for offenders, establish mechanisms for rapid takedown of deepfake content, and ensure protection for victims.
Deepfakes and Global Regulations:
Amidst the proliferation of deepfake technology, a consortium of United States senators proposed the “Disrupt Explicit Forged Images and Non-Consensual Edits Act of 2024,” commonly referred to as the “Defiance Act.” This legislative initiative empowers victims to pursue civil penalties against individuals who produce or possess manipulated content with the intent to disseminate it, as well as against those who knowingly receive such content without the consent of the victim. Furthermore, the United States has advocated for the establishment of a task force dedicated to addressing the challenges posed by deepfakes.
Internationally, numerous countries are taking steps to mitigate the risks associated with deepfakes through legislative measures. Notably, 28 countries, including the United States, Canada, Australia, China, Germany, India, and members of the European Union, have endorsed “The Bletchley Declaration.” This declaration aims to enhance global cooperation and collaboration in addressing artificial intelligence issues and promoting safety. It underscores the shared commitment of nations to assess the risks, opportunities, and future directions for international collaboration in AI research and safety.[12]
In January 2023, the Cyberspace Administration of China (CAC) introduced the “Deep Synthesis Provisions,”[13] a comprehensive legislative framework aimed at regulating providers of deepfake content. These provisions govern the creation, dissemination, and consumption of deepfake technology and services, encompassing text, images, audio, and video generated using AI-based models. The regulations impose obligations on platforms offering content generation services, requiring them to watermark AI-generated content and regulate the use of personal data by such platforms and services.
Similarly, the United Kingdom is taking proactive measures to combat the dissemination of intimate deepfake content by amending its Online Safety Bill. The proposed amendment seeks to criminalize the sharing of digitally manipulated explicit images or videos.[14]
International Efforts: There have been discussions at the international level regarding the regulation of deepfakes. Organizations like the United Nations and the OECD have examined the implications of deepfake technology and discussed potential regulatory frameworks. However, reaching consensus on global regulations for deepfakes remains challenging due to differences in legal systems, cultural norms, and political priorities among countries.
These legislative initiatives underscore the global recognition of the challenges posed by deepfake technology and the imperative to enact regulatory frameworks to safeguard against its harmful effects. Overall, the regulatory landscape concerning deepfakes is complex and continually evolving as policymakers grapple with the multifaceted challenges posed by this technology. While some countries have taken steps to address specific concerns related to deepfakes, achieving comprehensive and effective regulation remains an ongoing endeavour.
Conclusion
The series of recent deepfake incidents involving top Indian film stars and personalities has prompted the government to meet social media platforms, artificial intelligence companies and industry bodies to come up with a “clear, actionable plan” to tackle the issue. With no dedicated law on AI, identifying the originator and first transmitter of deepfakes is a big challenge; as a result, most service providers in India are reluctant to share information about deepfake originators because of the potential impact on their statutory exemption from legal liability.
The increasing incidents involving deepfakes in India, from celebrity misuse to political manipulation, highlight the urgent need for a robust legal framework. To effectively safeguard against the evolving threats posed by deepfakes, a collaborative effort among individuals, organizations, and governments is essential. Primarily, the emphasis should remain on the protection of data privacy, personal information and personality rights, ensuring that sensitive information remains protected from manipulation and unauthorized usage. Google has already shown its support and stated that it will work with the Indian government to address the safety and security risks posed by deepfakes and disinformation campaigns, as this requires a collaborative effort involving open communication, risk assessment and proactive mitigation strategies.
Given the potential harm caused by deepfakes, legal intervention is necessary to prevent their misuse and protect individuals from malicious intent. Laws must strike a balance between safeguarding freedom of expression and curbing the spread of harmful content. A comprehensive legal framework is essential to deter the creation and dissemination of deepfakes.
Simultaneously, it is also necessary to bolster cybersecurity defences, particularly in areas vulnerable to deepfake attacks, to prevent potential breaches and malicious exploitation. Furthermore, fostering widespread awareness and knowledge regarding the nuances of AI-generated content is important for navigating these transformative times and allowing individuals to identify and confront these threats confidently.
Footnotes
1 https://www.livemint.com/opinion/online-views/the-proliferation-of-deepfakes-has-shrouded-india-s-2024-polls-in-uncertainty-11702208306588.html
2 https://www.socialsamosa.com/2022/07/zomato-uses-deepfake-and-ai-in-a-personalized-ad-starring-hrithik-roshan/
3 https://indianexpress.com/article/india/amid-rising-deepfake-videos-on-social-media-platforms-govt-to-hold-meeting-today-9038920/
4 https://economictimes.indiatimes.com/tech/technology/sachin-tendulkar-falls-victim-to-deepfake-video/articleshow/106869289.cms
5 https://www.mondaq.com/india/social-media/1395304/deepfakes-and-breach-of-personal-data–a-bigger-picture
6 https://pib.gov.in/PressReleaseIframePage.aspx?PRID=1975445
7 https://pib.gov.in/PressReleaseIframePage.aspx?PRID=1990542
8 CS (COMM) 652/2023.
9 https://www.livelaw.in/top-stories/delhi-high-court-anil-kapoor-voice-image-misuse-personality-rights-238217
10 https://www.deccanherald.com/india/delhi-hc-seeks-centres-stand-on-pil-against-deepfakes-ai-2796412
11 https://economictimes.indiatimes.com/tech/technology/meta-to-start-labelling-ai-generated-deepfake-images-hopes-move-will-pressure-industry-to-follow-suit/articleshow/107462481.cms.
12 https://www.reuters.com/technology/britain-publishes-bletchley-declaration-ai-safety-2023-11-01/
13 https://thediplomat.com/2023/03/chinas-new-legislation-on-deepfakes-should-the-rest-of-asia-follow-suit/
14 https://www.theguardian.com/society/2023/jun/27/sharing-deepfake-intimate-images-to-be-criminalised-in-england-and-wales