
Alliance Center for
Intellectual Property Rights



DECODING AND DEMYSTIFYING DEEPFAKE TECHNOLOGY UNDER COPYRIGHT LAW

November 1, 2025

*Mr. Sachit Roy and Mr. Manas Khushikant Tank


INTRODUCTION

In a world of rapidly advancing technology, one question lingers: can Artificial Intelligence be used as a two-faced tool, blurring the line between what is real and what is not? The digital age has brought forth Artificial Intelligence (AI) and, most importantly, the creation and rise of deepfake technologies, where AI software is used to manipulate media so convincingly that one cannot distinguish the truth of the real world from the fabricated truth of the manipulator. A question therefore arises: should deepfakes be recognized as intellectual property, at the risk of reality being surrendered to an illusioned truth? To demystify this question, we will examine how establishing authorship and originality is linked with the allocation of rights and liabilities of those employing deepfake technologies, and how the lack of protection of moral rights challenges copyright law in relation to personality rights.

AUTHORSHIP & ORIGINALITY

a. Who Can Be Said to be an Author of a Deepfake?

Copyright law traditionally anchors authorship in human creativity. Section 2(d) of the Indian Copyright Act, 1957 defines the term “author” based on the type of work, such as the creator of a literary work, the photographer of a photograph, or the producer of a cinematograph film. But when an AI algorithm generates a deepfake, the question of who qualifies as the legitimate author becomes complicated, and three possibilities emerge. One possible claim is that the programmer who designs the Generative Adversarial Network (GAN) or other deep learning model should be treated as the author, since it is their technical framework that makes the manipulation possible in the first place. Another view is that the user, who actually feeds images, prompts, or data into the system, plays the more active role and should therefore be considered the author. The third and most debated possibility is whether the AI itself could be seen as the author. However, this remains uncharted territory because existing copyright laws across jurisdictions, including India, do not recognize non-human entities as authors of creative works.

U.S. courts have, on multiple occasions, declined to recognize AI as an “author”. For instance, the U.S. Copyright Office refused protection to works generated by algorithms without human input, emphasizing that copyright is inherently tied to human creativity and requires at least a modicum of independent creation. Indian courts adopt a similar human-centric stance, particularly after Eastern Book Company v. D.B. Modak, a case concerning the copyrightability of copy-edited judgments, which likewise highlighted the requirement of a modicum of creativity.

b. Originality and the Deepfake Dilemma

Deepfakes usually do not qualify as original works. They are typically made by taking existing videos, pictures, or even voices and altering them. Instead of creating something completely new, deepfakes reuse and manipulate what already exists. In India, this could make them derivative works, meaning that making or sharing them without permission could amount to copyright infringement. Deepfakes also raise problems under Section 57 of the Copyright Act, better known as the moral rights of the author. Even if a deepfake does not copy a work exactly, it can still distort or harm a person’s image or reputation, violating the rights of integrity and attribution. This is where copyright and personality rights converge: a single deepfake could end up breaching copyright law, moral rights, and privacy all at the same time.

Looking at other countries, such as the U.S., the law is still uncertain. Courts there apply the test of “transformative use”, which tries to balance freedom of expression with protecting a person’s image, as in Comedy III Productions v. Saderup. However, deepfakes are so realistic that the line is hard to draw: they may look like free expression, yet they can still harm someone’s reputation. Overall, it remains unsettled whether deepfakes can be considered original. The path will stay hazy until the legislature provides clarity through amended laws that specifically address deepfakes; until then, courts are more likely to protect the people whose identities are being misused.

LIABILITY & ENFORCEMENT

Who Should Bear Responsibility?

The biggest challenge with deepfakes is figuring out who should be held responsible for harmful ones. Deepfakes pose complications and risks of their own, and with AI becoming more advanced every passing day, curbing their misuse has become increasingly difficult. From a bird’s-eye view, three main parties are involved. First, the creator, the person who actually makes the deepfake, is usually the primary wrongdoer. However, many creators hide behind fake names or operate across borders, which makes it very difficult to identify or punish them. Second, platforms such as social media and content-hosting sites often act as distributors of deepfakes. In India, the IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 require platforms to respond to takedown notices. Yet Section 79 of the IT Act provides safe-harbour protection, meaning that platforms are exempt from liability unless they have actual knowledge of the illegal content. Third, the AI developers, the companies that create the AI tools, raise a trickier question of liability. Most experts argue that developers should be held responsible only if they intentionally design their AI systems to create illegal deepfakes.

Even where liability is theoretically clear, enforcing it is a challenge. Deepfake creators often remain anonymous, making them hard to trace. Jurisdictional barriers also arise, as deepfakes can spread across countries while legal remedies are usually limited to a single jurisdiction. Additionally, the speed at which deepfakes go viral can cause irreversible reputational harm before any legal action takes effect. Some countries have started experimenting with new solutions. China, for instance, requires that deepfakes be labelled, while the EU’s Digital Services Act imposes proactive monitoring duties on large platforms.

India, on the other hand, has no deepfake-specific laws and relies instead on general copyright, IT, and privacy legislation. This fragmented framework leaves victims with limited protection against the harms caused by deepfakes. Even the Digital Personal Data Protection Act, 2023, which is yet to come into force, only governs personal data and requires consent for its processing; it does not specifically cover deepfakes. The growing distrust in digital media, along with the difficulty of tracing and holding accountable those who misuse deepfakes, shows that current laws are not enough.

PROTECTION OF IDENTITY AND PROTECTION OF AUTHORSHIP: SHOULD DEEPFAKES BE ALLOWED TO ERODE MORAL RIGHTS AND PERSONALITY RIGHTS?

The Copyright Act, 1957 protects moral rights as a special right: the author of a work may claim authorship and ensure that the work is not disseminated or modified in a manner that would prejudice the author’s honour or reputation. However, even though the author’s work enjoys statutory protection under Section 57 of the Copyright Act, 1957, the question remains whether the moral rights and personality rights of the author, including their dignity and identity, are truly secured. Despite being formally recognized today, the author’s persona may not be adequately protected owing to gaps in remedies and limited judicial interpretation.

To understand how moral rights are protected under Section 57, we can look at Amar Nath Sehgal v. Union of India, where the Delhi High Court held that Section 57, which includes the right of attribution and the right of integrity, must be given the “widest sense”. The court upheld both the right of attribution, recognizing the sculptor’s authorship of the mural, and the right of integrity, condemning its mutilation by the Government as prejudicial to his honour and reputation, thereby affirming the moral rights protections of Section 57 under Indian copyright law.

However, the issue remains that moral rights protect only the authorship of the content, not the personhood of the individual subjected to the deepfake. A person whose image has been used without prior consent, eroding their personality and harming their reputation, will therefore have no claim under Section 57 and must look instead to personality rights. Although personality rights, that is, a person’s rights in the commercial exploitation of their name, likeness, voice, or other aspects of personality, are not codified in the Copyright Act, Indian courts have increasingly recognized them through case law. For example, D.M. Entertainment v. Baby Gift House (Delhi HC, 2010) held that a person could restrain third parties from exploiting the “likeness or some other recognizable attribute of such a person’s personality without express permission”, even in the absence of consumer confusion. Very recently, the Delhi High Court explicitly applied personality rights to deepfakes. Dr. Naresh Trehan, a world-renowned heart surgeon, was granted an injunction against the circulation of computer-generated videos falsely portraying him as giving medical opinions (Global Health Ltd. v. John Doe & Ors.). The court noted that the videos misappropriated Dr. Trehan’s “identity” and “traits” (including his name, likeness, and voice) in a deceptive way, and passed an ex parte John Doe order directing their takedown within 24 hours.

These instances show the distinction between moral rights and personality rights. Moral rights safeguard the integrity and attribution of a creator’s own work; they do not confer authority to control an individual’s autonomous identity. A director or photographer possesses rights under Section 57 in relation to their film or image, yet the individual depicted in it (such as an actor or doctor) must rely on principles of personality rights that exist beyond copyright protection. Consequently, the limitations of moral rights protection for non-authors necessitate a reliance on personality rights. This situation can pose challenges for everyday individuals. As one commentary highlights, intellectual property laws in India appear to “focus only on the commercial value of celebrities”, thereby rendering “ordinary citizens” susceptible to exploitation through deepfake technology.

Back home, the flexibility of Indian law may eventually coalesce into a clearer regime. Some commentators urge a reinterpretation of the constitutional right to dignity and identity under Article 21 to underpin personality rights. Still, creators are not powerless: authors retain Section 57 rights to injunctions and damages where their copyrighted material is used in a deepfake that distorts the work or erases their name, and celebrities can invoke personality-rights remedies (breach of confidence, consumer law, trademark infringement, and the like) as courts increasingly recognize them.

CONCLUSION

Deepfakes pose a major challenge to copyright and personality rights. Indian law does not recognize AI as an author, and since deepfakes are usually derivative and harmful, they should not receive copyright protection. The law should focus on protecting people whose images or likenesses are misused. Creators bear primary responsibility, but platforms also share liability, and AI developers could be held accountable where negligence or intent is shown. The legal system is at a crossroads: recognizing “digital twins” under copyright could legitimize manipulation, but ignoring them leaves victims unprotected. There remain grey areas on which our current legal system is silent. A balanced approach combining copyright, IT, and personality rights is needed to protect identity while allowing technological innovation.

REFERENCES

  1. Deepfakes in Elections: Challenges and Mitigation, Drishti IAS (May 14, 2024), https://www.drishtiias.com/daily-updates/daily-news-editorials/deepfakes-in-elections-challenges-and-mitigation.
  2. Bobby Allyn, Deepfake Video of Zelenskyy Could Be ‘Tip of the Iceberg’ in Info War, Experts Warn, NPR (Mar. 16, 2022), https://www.npr.org/2022/03/16/1087062648/deepfake-video-zelenskyy-experts-war-manipulation-ukraine-russia.
  3. The Copyright Act, 1957, § 2, No. 14, Acts of Parliament, 1957 (India).
  4. U.S. Copyright Office, Copyright and Artificial Intelligence, Part 1: Digital Replicas Report (July 26, 2024), https://www.copyright.gov/ai/Copyright-and-Artificial-Intelligence-Part-1-Digital-Replicas-Report.pdf.
  5. Feist Publications, Inc. v. Rural Telephone Service Co., 499 U.S. 340 (1991).
  6. Eastern Book Company v. D.B. Modak, (2008) 1 SCC 1.
  7. Comedy III Productions, Inc. v. Gary Saderup, Inc., 25 Cal. 4th 387 (2001).
  8. AI-Generated Deepfakes and Personality Rights: A New Frontier for Intellectual Property Law, International Journal of Artificial Intelligence (June 9, 2025), https://www.academicpublishers.org/journals/index.php/ijai/article/view/5040.
  9. The Information Technology Act, 2000, § 79, No. 21, Acts of Parliament, 2000 (India).
  10. P. Jeevetha, Navigating the Legal Framework for Deepfake Technology in the Era of Intellectual Property and Personal Rights, 7 International Journal of Future Management Research (2025), www.ijfmr.com, E-ISSN: 2582-2160.
  11. Amar Nath Sehgal v. Union of India, 2005 (30) PTC 253 (Del).
  12. D.M. Entertainment v. Baby Gift House, 2010 SCC OnLine Del 4790.
  13. Bisman Kaur, Delhi High Court Takes Strict Approach to Personality Rights Violation in Healthcare Industry amid Spike in Use of AI and Deepfakes, World Trademark Review (Feb. 13, 2025), https://www.worldtrademarkreview.com/article/delhi-high-court-takes-strict-approach-personality-rights-violation-in-healthcare-industry-amid-spike-in-use-of-ai-and-deepfakes.

Authors:

Mr. Sachit Roy,
4th Year B. Com., LL.B. (Hons.) Student, Institute of Law, Nirma University.

Mr. Manas Khushikant Tank,
4th Year B. Com., LL.B. (Hons.) Student, Institute of Law, Nirma University.

Disclaimer: The opinions expressed in the article are the personal opinions of the author. The facts and opinions appearing in the article do not reflect the views of the Alliance Centre for Intellectual Property Rights (ACIPR) and the Centre does not assume any responsibility or liability for the same.