
In recent months, artificial intelligence products such as Midjourney, Stability AI's Stable Diffusion and ChatGPT have taken the forefront of innovation in technology. These "AI Generative Tools" are designed to respond to text-based inputs and generate images or text, producing works that can be classified as artistic works or literary works.

The issue here lies in the challenges these tools face under copyright law: do AI Generative Tools such as Midjourney use copies of the works present in their training data to create the resultant image, and thereby infringe any copyright vested in the authors of such works?

A petition on these lines has already been filed in the District Court of California against Midjourney Inc., Stability AI and DeviantArt Inc., the developers of such AI Generative Tools, by a few artists representing a class of potentially millions of other artists whose works were used by the AI Generative Tools as training data.

  1. The Complaint

The lawsuit, Andersen v. Stability AI[1], filed on January 13, 2023 by Sarah Andersen, Kelly McKernan, and Karla Ortiz, along with a proposed class of potentially millions, is the first copyright infringement lawsuit against the developers of popular AI art generation tools. Defendants include Stability AI (which has developed DreamStudio), Midjourney and DeviantArt.

The essence of the Plaintiffs' argument is that AI image generators remix the copyrighted works of millions of artists used as training data. They have described the AI image generators as "a 21st-century collage tool that remixes the copyright works of millions of artists whose work was used as training data." The training data on which the AI Generative Tools are trained is provided by LAION (Large-scale Artificial Intelligence Open Network), a non-profit organization in Hamburg, Germany, and includes images taken from image-hosting sites such as Pinterest, Getty Images, Shutterstock and Wix.

According to Paragraph 95 of the complaint, "every output image from the system is derived exclusively from the latent images, which are copies of copyrighted images. For these reasons, every hybrid image is necessarily a derivative work." The plaintiffs allege that the generated images are exclusively a result of the training data, which consists of millions of copyrighted images; therefore, any new image generated is essentially a derivative work of that training data.

However, contrary to the plaintiffs' complaint, in our opinion the model does not work by creating collages of images and thereby directly infringing the Plaintiffs' copyright[2]. Every output image is a wholly new work[3].

  2. The Technology

The first element of the technology is the Diffusion Model (DM), a generative model that takes an image and gradually adds noise over time until it is unrecognizable, then reconstructs the image to its original form, thereby learning how to generate pictures.[4] DMs are used to train the AI Generative Models on the LAION training data.
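The "gradually add noise" step can be illustrated with a toy sketch. This is not the actual Stable Diffusion implementation; `forward_diffuse` and its noise-schedule parameters are illustrative assumptions, showing only the closed-form forward noising process that diffusion models are trained to reverse.

```python
import numpy as np

def forward_diffuse(x0, t, T=1000, beta_start=1e-4, beta_end=0.02, rng=None):
    """Add Gaussian noise to an image x0 at timestep t (forward diffusion).

    Uses the closed form q(x_t | x_0) = N(sqrt(a_bar_t) * x0, (1 - a_bar_t) * I),
    where a_bar_t is the cumulative product of (1 - beta) up to step t.
    The schedule values here are common illustrative defaults, not the
    settings of any particular production model.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    betas = np.linspace(beta_start, beta_end, T)   # linearly increasing noise schedule
    alpha_bar = np.cumprod(1.0 - betas)[t]         # how much signal survives at step t
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * noise

# A toy 8x8 "image" of all ones: at early steps the original dominates,
# at the final step almost pure noise remains.
image = np.ones((8, 8))
slightly_noised = forward_diffuse(image, t=10)
heavily_noised = forward_diffuse(image, t=999)
```

A model trained to undo this corruption, step by step, ends up able to start from pure noise and produce a new image, which is the generative half of the process described above.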

The second element is a technology known as CLIP, which is used to train the AI to understand the relationship between language and images. This allows the model to recognise different images based on the captions and text that a user provides.[5]
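The core idea of CLIP, matching text and images in a shared embedding space, can be sketched with toy vectors. The embeddings below are made-up stand-ins, not the output of a real CLIP model, which would produce them with learned neural encoders; only the comparison step is shown.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors (1.0 = same direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical pre-computed embeddings. A real CLIP model maps captions and
# images into the same vector space; these toy 3-d vectors stand in for that.
text_cat  = np.array([0.9, 0.1, 0.0])   # embedding of the caption "cat"
image_cat = np.array([0.8, 0.2, 0.1])   # embedding of a cat photo
image_car = np.array([0.0, 0.1, 0.9])   # embedding of a car photo

score_cat = cosine_similarity(text_cat, image_cat)
score_car = cosine_similarity(text_cat, image_car)
```

Because the caption "cat" scores higher against the cat image than the car image, a generator guided by such scores can steer its output toward whatever the text prompt describes.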

Due to the large size of the LAION training data, which according to the LAION website comprises around 5.85 billion image-text pairs, it is difficult to train the AI with the limited computing resources available today. The third element, therefore, is the latent space. The latent space can be understood as a cluster of points in a compressed space that represents a particular concept, such as cats. Rather than storing and rendering every image involving cats, the model works with a cluster that represents how a cat looks in an image.
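The cluster intuition can be sketched as follows. The 2-d "latent vectors" below are invented for illustration; real latent spaces are high-dimensional and produced by a learned encoder, but the clustering idea is the same: nearby points stand for the same concept, so the model need not keep every training image.

```python
import numpy as np

# Hypothetical low-dimensional latent vectors for a few cat and dog images.
# Instead of storing every training image, the model works in this
# compressed space, where a cluster of nearby points stands for a concept.
cat_latents = np.array([
    [0.90, 0.10],
    [0.85, 0.15],
    [0.95, 0.05],
])
dog_latents = np.array([
    [0.10, 0.90],
    [0.20, 0.80],
])

cat_centroid = cat_latents.mean(axis=0)   # one point representing "cat"
dog_centroid = dog_latents.mean(axis=0)   # one point representing "dog"

# A newly generated latent near the cat cluster reads as a cat,
# without matching any single stored training image.
new_point = np.array([0.88, 0.12])
dist_cat = np.linalg.norm(new_point - cat_centroid)
dist_dog = np.linalg.norm(new_point - dog_centroid)
```

This compression is what makes training over billions of image-text pairs tractable, and it is also why, as argued below, the output need not be a copy of any individual training image.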

The AI Generative Tool therefore uses all three elements to identify the image from the text input (for example, a table would mean one or more legs with a flat top) and, after analysing the whole text and adding aesthetic or stylistic additions, it creates wholly new images that have never existed before.[6]


The complaint alleges that the plaintiffs' derivative rights are infringed. Two possible types of infringement can occur as a result of the technology used by the AI Generative Tools to generate images: infringement of reproduction rights and infringement of derivative rights.

  • Reproduction Rights

The right of reproduction is the right of copyright owners to prevent others from making copies of their works without permission. This is the most basic right protected by copyright legislation and one that is relevant for AI Generative Tools.

The Calcutta High Court in Mohendra Chandra Nath Ghosh v. Emperor[7] ruled that a copy may be defined as that which comes so near to the original as to suggest the original to the mind of the spectator. In deciding whether there is an infringement of copyright in pictures, the question is whether the offending pictures are copies of a substantial portion of the copyrighted pictures.

As the model initially uses the training database provided by LAION, images are reproduced for the purpose of training the AI model. But this can be covered by the fair use exception, as no reproduction of the images is made available to the public, similar to the exception given to Google's Book Search tool in Authors Guild v. Google, Inc.[8] The AI Generative Tools do not even show the scanned images to end users, unlike Google's Book Search; therefore, there can be no argument regarding infringement of reproduction rights.

Nor can it be argued that the training database provided by LAION infringes reproduction rights, as the database is covered by the text and data mining exception contained in Article 4 of the EU's Digital Single Market Directive.[9]

There is no reproduction of any work, as the image is generated through the technology explained above rather than copied, and in most cases it bears no resemblance to any previous work used in the training data. The argument that it is a "21st-century collage tool" therefore falls apart.

However, the complaint does not focus on infringement of reproduction rights, either during the training stage or in the generated image; rather, it focuses on the claim that such creations may be derivative works of the original images.

  • Derivative Rights

In the Indian context, a derivative work, or an "adaptation" under the Indian Copyright Act, 1957, is defined as "(v) in relation to any work, any use of such work involving its re-arrangement or alteration;".

In R.G. Anand v. M/s Delux Films[10], the Supreme Court held that copyright is violated when the reader, spectator, or viewer, after having read or seen both works, is clearly of the opinion and gets the unmistakable impression that the subsequent work appears to be a copy of the original. It clarified that copyright is confined to the expression and does not extend to the idea, subject matter, themes, plots or historical or legendary facts.

Taking the example of an artwork that I generated with the text prompt "kanye west album cover" through Midjourney, we can see that the plaintiffs' case does not hold true. A viewer, after seeing existing album covers by Kanye West, would not get the impression that this work is a copy of any of those covers. The idea of an album cover is not protected under copyright law; only the expression is, and therefore there is no infringement of derivative rights.

  3. Conclusion

In conclusion, it seems unlikely that there is any infringement of either the reproduction rights or the derivative rights of the plaintiffs. The AI Generative Tools are able to create entirely new works that do not infringe any other artwork, which is possible through the technology on which such tools operate. This is why the class action lawsuit brought by the plaintiffs might fail, though it remains to be seen what the court will decide.

There can be challenges with regard to infringement of trademarks, characters, logos, and any other protectable elements included in the images. An image distribution company, Getty Images, recently filed a lawsuit against Stability AI for generating images that included the Getty Images watermark, arguing that such use creates customer confusion and therefore infringes its trademarks.[11]

There can also be infringement of the name or likeness of celebrities, such as Kanye West in the example above, thereby infringing their publicity or privacy rights. This can arise from the use of well-known personalities either in a commercial or a derogatory manner. Such challenges will have to be kept in mind, and the courts' decisions on these issues will have to be looked to for clarity.

About Author: Savan Dhameliya is currently in his final year and is pursuing a career in the Media and Entertainment industry. He has written various articles and papers on Music Law and Copyright Law.

End notes:

[1] Andersen v. Stability AI, Case 3:23-cv-00201 (filed January 13, 2023)

[2] Aaron Moss, Artists Attack AI: Why The New Lawsuit Goes Too Far, Copyright Lately, (January 23, 2023)

[3] Andres Guadamuz, Copyright infringement in artificial intelligence art, TechnoLlama (January 15, 2023)

[4] Arham Islam, How Do DALL·E 2, Stable Diffusion, and Midjourney Work?, Marktechpost (November 14, 2022)

[5] Andres Guadamuz, Artists file class-action lawsuit against Stability AI, DeviantArt, and Midjourney, TechnoLlama (January 15, 2023)

[6] Aaron Moss, Using AI Artwork to Avoid Copyright Infringement, Copyright Lately, (October 24, 2022)

[7] AIR 1928 Calcutta 359

[8] 804 F.3d 202 (2d Cir. 2015)

[9] Directive on copyright and related rights in the Digital Single Market, Article 4, Title III, DIRECTIVE (EU) 2019/790, (April 17, 2019)

[10] AIR 1978 SC 1613

[11] Reuters, Getty Images lawsuit says Stability AI misused photos to train AI, The Indian Express (February 7, 2023, 09:44 IST)
