Indian Copyright Law and Generative AI: Part 4: Who is liable for infringing outputs?

(This post is co-authored with Sneha Jain, Partner, Saikrishna & Associates)

In Part 3 of the Series, we explored the output side of things – showing that if the output generated by a Generative AI model is substantially similar to a work, a trivial alteration of it, or an adaptation of it in a different format, it may infringe the rights of the copyright owner. But who will be liable? The user or the Model developer? Or both? Or neither? These questions are essential to consider given that the Model itself lacks legal personhood to which intent, liability, or damages could be imputed – responsibility for infringement has to be attributed to a human or a corporation.

Three distinct questions arise here:

  • Would liability for direct infringement fall on the model developer/service provider, since direct copyright infringement is a strict liability offence?
  • Would liability for direct infringement fall on the user of the model, for inputting prompts that yield infringing outputs, with liability for indirect infringement on the model developer/service provider?
  • Would both the model developer and the user be jointly liable for direct infringement?

The answer to each of the queries rests on a host of factual considerations, some of which are as follows:

  • (i) A situation where the developer of the model has taken all objectively plausible, practically feasible, and technologically available steps to ensure that the model does not regurgitate memorized content or output that is substantially similar to any of its input training data.
  • (ii) A situation where the user/prompter has used prompt injections, or used the model as a copy-machine, to provoke prohibited regurgitations – exploiting the inherent vulnerabilities/fallibilities of LLM systems at the current state of technology[i] and violating the user terms and conditions.
  • (iii) A situation where, even upon normal/generic prompts, the model regurgitates its memorized training data, memorization having narrowed down the range of possible outputs.
  • (iv) A situation where the model is specifically built to produce an inputted work in a different medium/format of expression (for instance, a literary work into a dramatic work) and allows prompters to upload attachments, including a copyrighted script or work.

A. When Developer may be liable for Direct Infringement

Section 51(a)(i) provides for direct copyright infringement: copyright is deemed to be infringed when any person, without a licence from the copyright owner, does anything the exclusive right to do which is conferred upon the copyright owner by the Copyright Act. The provision is one of strict liability; however, like other strict liability provisions, it requires “causation”, i.e., a “but-for” cause pointing towards volition.[ii] Around the world as well, volitional conduct has been deemed an essential component of direct copyright infringement, as distinct from indirect infringement – for instance, a copy shop that lets customers operate photocopiers is not a direct infringer, but a copy shop that makes infringing photocopies is.[iii] Thus, direct infringement would imply a degree of “authorization” or “control” over the output produced in the hands of the developer.

Consider the various factual situations above. Generative AI is a self-evolving tool[iv]: it continues to learn and to expand its training data set through user prompts, beyond the control of the developer, who merely trains it to understand “how to learn”. The volitional requirement is therefore unlikely to be met, especially where the developer has taken all objectively plausible, practically feasible, and technologically available steps to ensure that the model does not regurgitate memorized content or output substantially similar to its training data. However, if the model is specifically trained to produce adaptations of inputted content (where copyrighted content is often allowed to be attached), or produces memorized regurgitations even on generic prompts, the volitional requirement is more likely to be deemed fulfilled, attracting direct liability.

A contrary view, however, could be that the inherent vulnerability of LLMs – the possibility that they may be manipulated into producing infringing content – makes them an inherently dangerous avenue for copyrighted works; hence, all responsibility ought to lie with the developer choosing to run the platform, to ensure that no infringing content springs from its usage. However, absent direct causation, or any risk to life and limb of the sort that underpins strict product liability, this argument is arguably thin.

B. When Developer may be liable for Indirect Infringement

When direct infringement claims fail for lack of causation/volition, the focus often shifts to the indirect infringement standard. Section 51(a)(ii) of the Copyright Act provides that permitting, for profit, any place to be used for a communication of a work to the public that infringes copyright constitutes indirect infringement, unless the person was not aware and had no reasonable ground for believing that such communication would be an infringement.

“Reasonable ground for believing” has been interpreted by Indian Courts, in the context of the Copyright Act, to mean “consciousness or awareness and not mere possibility or suspicion of something likely.”[v] Moreover, in respect of automated platforms, it has been held that authorization/approval from a person or authority is essential to connote knowledge.[vi] In other contexts as well, the mere possibility of harm, absent facts showing knowledge, has been held insufficient to connote knowledge for the purposes of imputing liability.[vii] In the analogous context of banking, it has been held that, in determining reasonable belief, officers of banks are not required to be amateur detectives; they may, however, be attributed the degree of intelligence ordinarily required of a person in their position while evaluating cheques under the Negotiable Instruments Act.[viii]

Applying this in the context of Gen AI developers, in factual situations (i) and (ii) above, where the developer has taken reasonably possible steps to eradicate the regurgitation of substantially similar output and disallowed prompts that exploit the inevitable and inherent vulnerability associated with LLM technology at its current state, it is improbable that developers would be held liable for infringing outputs. In other words, the mere possibility of generating substantially similar outputs, due to the inherent vulnerability of LLMs in spite of the developer taking reasonable care in designing the product, would not connote knowledge or reasonable belief for indirect infringement.

To determine “reasonableness”, the classic “risk-utility” test from product-design liability could be adopted – weighing whether the burden on the product developer of eradicating the harm outweighs, or is outweighed by, the gravity and probability of the harm as well as the utility of an available alternative design that reduces it. The factors considered in such weighing include: (i) usefulness and desirability of the product; (ii) utility to the public as a whole; (iii) gravity of the danger posed; (iv) scientific, mechanical, and economic feasibility of a safer alternative design; (v) the user’s ability to avoid harm by exercising care in use of the product, including the effect of instructions or warnings; (vi) the developer’s ability to eliminate the danger without impairing the usefulness of the product or making it unduly expensive; and (vii) the feasibility of an alternative design.[ix]

Thus, so long as the developer has taken care to ensure that infringing outputs are not produced in the reasonable course of the model’s use, and are produced only when the user exploits the model’s inherent vulnerabilities, the risk-utility test above suggests that the product developer cannot reasonably be expected to eradicate harms driven by such uses – shifting the focus of liability to the user. In other words, so long as the model/product is not built willfully blind to its inherent fallibilities, and an attempt to remedy the possibility of copyright infringement at the output stage is evident to the extent technologically and economically feasible, it is unlikely that the developer of the model would be held liable.

C. When User may be liable for Direct Infringement

It is the user’s specific interaction with the model that leads to the output generated. The user’s instructions act as a filter that steers the model towards the output. Without the user’s specific directives, the potential of the model to generate infringing content remains just that – a potential.[x] The questions leading to the answer become as important as the answer itself. This is particularly relevant at a time when prompt engineering has been considered equivalent to a form of creative practice deserving copyright protection, owing to its potential to induce the model to create specific outputs.[xi] Therefore, where infringing outputs result from very specific prompts, or from prompt injections (that is, manipulating the model through prompt engineering into answering questions it initially refuses to answer, capitalizing on the inherent fallibility of LLM systems in contravention of the user terms and conditions of these products), liability for direct infringement would most probably shift towards the user. It is thus important to account for the user’s actions on the product when imputing liability, especially given the developer’s inability to control the outputs produced by the self-evolving tools of Generative AI and the inherent fallibilities of such tools at this stage of technological development.

Part 5 of this series shall look at issues concerning Moral Rights, and Digital Rights Management provisions under the Copyright Act.

End Notes:

[i] Ido Kilovaty, “Hacking Generative AI”, 58 LOY. L.A. L. REV. __ (forthcoming), available at <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4788909>.

[ii] Municipal Corporation of Delhi v. Uphaar Tragedy Victims Association and Ors., (2011) 14 SCC 481.

[iii] CoStar Grp., Inc. v. LoopNet, Inc., 373 F.3d 544 (4th Cir. 2004); Religious Technology Center v. Netcom On-Line Communication Services, Inc., 907 F. Supp. 1361, 1370 (N.D. Cal. 1995); Cartoon Network LP, LLLP v. CSC Holdings, Inc., 536 F.3d 121, 131 (2d Cir. 2008); Perfect 10, Inc. v. Giganews, Inc., 847 F.3d 657 (9th Cir. 2017); Basic Books, Inc. v. Kinko’s Graphics Corp., 758 F. Supp. 1522 (S.D.N.Y. 1991); Princeton Univ. Press v. Mich. Document Servs., 99 F.3d 1381 (6th Cir. 1996).

[iv] <https://medium.com/@jitu028/the-evolution-of-ai-navigating-the-future-of-self-evolving-systems-ef0ca84b838a>

[v] MySpace Inc. v. Super Cassettes Industries Ltd., 2016 SCC OnLine Del 6382.

[vi] Ibid., paras 38 and 39.

[vii] India Telecomp Limited v. Adino Telecom Limited, 1993 SCC OnLine Del 127; Collector Of Customs, New Delhi v. Ahmadelieva Nodira, (2004) 3 SCC 549.

[viii] Pradeep Kumar and Anr. v. Postmaster General and Ors., (2022) 6 SCC 351.

[ix] Barker v. Lull Engineering Co., 20 Cal. 3d 413, 143 Cal. Rptr. 225, 573 P.2d 443, 96 A.L.R.3d 1 (1978); Knitz v. Minster Mach. Co., 69 Ohio St. 2d 460, 23 Ohio Op. 3d 403, 432 N.E.2d 814 (1982).

[x] Giancarlo Frosio, “Generative AI in Court”, in Nikol Koitras and Niloufer Selvadurai (eds.), Recreating Creativity, Reinventing Inventiveness: International Perspectives on AI and IP Governance (Routledge, 2023).

[xi] Mark Lemley, “How Generative AI Turns Copyright Upside Down”, 25 Columbia Science and Technology Review 190 (2024).

Image generated using DALL-E