Guest Post: Generative AI & Copyright Exhaustion: Evaluating India’s Proposed Centralised Licensing Model

About the Author: Sminal Badge is a 4th-year student at Maharashtra National Law University.

The rise of generative AI, from ChatGPT and Stable Diffusion to today’s large language and image models, has collided with traditional copyright law. AI systems rely on massive datasets scraped from books, articles, art and code, much of it copyrighted. Content creators worry that their works are being reproduced without permission or payment, while AI companies insist that training on publicly available data is fair use. This clash of interests is now playing out globally; in India, news agency ANI has sued OpenAI over ChatGPT’s use of its content. As India’s Department for Promotion of Industry and Internal Trade (DPIIT) observed, although “Generative AI has immense potential”, the fact that models are trained “often using copyrighted materials without authorisation” has “sparked an important debate around copyright law”. In December 2025, the DPIIT published a working paper titled “One Nation One Licence One Payment”, proposing a bold compulsory licensing scheme to resolve this tension. This post critically examines the proposal, situates it within comparative international approaches, and advances a more balanced framework that protects creators without undermining technological progress.

India’s “One Nation One Licence One Payment” Proposal

The DPIIT paper, subtitled “Balancing AI Innovation and Copyright”, centres on a mandatory blanket licence: all copyright-protected content that can be lawfully accessed (e.g. online, without paywalls) would be available for AI training without individual permission. In return, AI companies would pay fixed royalties into a centralised collecting agency (referred to in media reports as “CRCAT”), run by a board of rights-holders. The key features of the proposal are:

  1. Blanket Licence: AI developers could use any legally accessed copyrighted work for model training without seeking separate licences.
  2. Statutory Payments: A new statutory remuneration right would require AI firms to pay a set percentage of their revenue (or a fixed fee) to a government-appointed body, which would distribute funds to creators.
  3. No Opt-Out: Creators would not have the right to exclude their works from this regime. By default, their content is deemed included in the licence pool.
  4. Central Agency: A centralised non-profit tribunal (the proposed Copyright Royalty Collection and Allocation Tribunal) would handle licensing and royalty distribution. It would be governed by representatives of authors, publishers and collective management organisations (CMOs), one from each class of work.
  5. Revenue Sharing: The licence would be cleared through a single window, with royalties set by a government committee. AI firms would pay a fixed percentage of AI-generated revenue as royalties, possibly even retroactively for past uses (a purely hypothetical arithmetic sketch of such a levy follows this list).
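Since the working paper fixes neither a royalty rate nor the size of the rights pool, the back-of-the-envelope sketch below uses entirely invented figures. It is meant only to make the mechanics of a percentage-of-revenue levy concrete, and to show how thinly a fixed pool dilutes across a large body of works.

```python
# Hypothetical back-of-the-envelope for a percentage-of-revenue levy.
# All figures are invented; the DPIIT paper specifies neither rates
# nor the size of the licence pool.

ai_revenue_inr = 5_000_000_000   # Rs. 500 crore of AI-attributable revenue (hypothetical)
levy_rate = 0.02                 # 2% statutory levy (hypothetical)
registered_works = 20_000_000    # works in the licence pool (hypothetical)

royalty_pool = ai_revenue_inr * levy_rate
per_work = royalty_pool / registered_works

print(f"Total royalty pool: Rs. {royalty_pool:,.0f}")   # Rs. 100,000,000 (Rs. 10 crore)
print(f"Average payout per work: Rs. {per_work:.2f}")   # Rs. 5.00
```

Under these assumptions each registered work earns roughly ₹5 a year, and even doubling the levy or halving the pool changes that figure only marginally. The dilution problem is structural, a point that matters for the critique of payouts below.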

By treating all lawfully accessible content as automatically licensable, the proposal effectively collapses the distinction between access and use, a distinction that lies at the heart of copyright law. Under the Copyright Act, 1957, lawful access to a work does not translate into a right to reproduce it, whether manually or through automated systems. The DPIIT proposal redefines this principle without clearly amending the statutory framework, creating uncertainty about its legal foundation.

Doctrinal & Practical Concerns

While aimed at balance, the compulsory licence model raises serious legal and practical concerns. First, it clashes doctrinally with core copyright principles. Under current law, reproducing a copyrighted work (even by a computer) requires the author’s permission or a statutory exception, and the proposal conflicts with the fundamental premise of copyright as a negative right, i.e. the right to exclude others from reproducing one’s work. The absence of any opt-out mechanism strips creators of meaningful control over how their works are used in AI systems. This is particularly troubling in light of the moral rights under Section 57 of the Copyright Act, which recognise the personal and reputational connection between authors and their works. Nor does copyright normally distinguish between free and paywalled content: any work, even if freely accessible, is protected. If the DPIIT’s rule stands, the burden shifts to rights-holders to police the Internet; in practice, platforms that do not want their content scraped “must introduce robust access controls, such as paywalls [or] anti-scraping technologies”.

Second, as a practical matter, the plan is almost unworkable. Contemporary AI training is a black box: models ingest and abstract billions of data points, and expecting an AI company to track exactly which lines or images it “learned” from each source is technically infeasible. How could a startup prove it scraped only lawfully accessed content, or calculate exact royalties for each work? Such granular attribution is far beyond present AI and data-engineering practice. The DPIIT model also requires annual reporting of “global revenue percentages” and expensive audits, creating vast compliance overhead. Even the central-licensor concept is problematic: India has seen compulsory licensing before (e.g. under patent law and in music), and such regimes often benefit only big players, with small creators seeing negligible payouts, as the rough arithmetic sketched above suggests. The centralised licensing system and its audits would also disproportionately burden nascent AI startups and academic researchers, who lack the resources to navigate them. Industry groups like NASSCOM have already called the proposal a “tax or levy on innovation”. In sum, the DPIIT model risks creating more uncertainty and litigation, not less, while overriding creators’ basic rights and chilling innovation.

Why the Mandatory Licence Model Is Flawed

India’s compulsory licensing model overshoots, risking both creativity and innovation. Simply put, it places all the weight on one side of the scale. The proposal brushes aside well-established rights: it forces creators to forfeit control of their works, in effect treating copyright as a collective commodity rather than an individual right. One Nation One Licence renders authors powerless; their art is submitted to AI training the moment it is lawfully seen. This is a radical departure from the very purpose of copyright, which is to give creators exclusive control over the copying of their work. Moreover, the DPIIT model misunderstands how AI works. Modern models are not like humans reading one book at a time; they train on massive, amorphous datasets. Could even Google or OpenAI honestly list every URL and image used in training? Probably not.

The scheme’s treatment of unorganised creators is equally concerning. It relies on CMOs to register works and claim royalties, yet in practice many Indian writers, musicians and artists are not members of any CMO. Will they ever see a rupee from this system? And if a creator is not registered, how do they prove their work was used? The risk is that a handful of representative bodies will control the purse while the vast creative community remains on the sidelines. From a policy perspective, a draconian levy on AI revenues may also stifle the technology sector. India has big ambitions for AI (the IndiaAI Mission, among others), and a punitive regime could deter investment and research. Rights-holders should be fairly compensated, but that goal should be pursued by enabling voluntary markets, not by imposing a one-size-fits-all tax.

Comparative Approaches

  1. United States (Fair Use) – The U.S. has no statutory licence for AI training. Instead, AI firms rely on the fair use doctrine under 17 USC §107, which requires a case-by-case analysis of purpose, nature, amount used, and market effect. In theory, transformative training might qualify, but outcomes are unpredictable. In May 2025 the U.S. Copyright Office warned that using copyrighted works to train models “may constitute an infringement” if outputs compete with originals. Recent court cases underline the uncertainty: some courts have found fair use (e.g. Authors Guild v. Google for search), while others have denied it (e.g. Thomson Reuters v. ROSS, where an AI tool copied legal content). The U.S. system thus tolerates AI training under broad fair use but provides little legal certainty and no special protection for rights-holders.
  2. Japan and Singapore (Statutory TDM Exceptions) – Japan and Singapore have adopted relatively permissive statutory approaches to AI training by recognising it as a form of data or computational analysis. In 2019, Article 30-4 of Japan’s Copyright Act came into force, allowing copyrighted works to be used for data analysis, including AI training, provided such use does not unreasonably prejudice the interests of copyright owners. Singapore followed in 2021 with a “computational data analysis” exception, permitting AI training on copyrighted material where the user has lawful access. In both jurisdictions, AI developers may use copyrighted content for training without paying royalties, but only for non-expressive, analytical purposes. Neither country treats these exceptions as replacing licensing: both frameworks encourage voluntary licensing, industry safeguards and responsible use of copyrighted works, reflecting a balance between enabling AI innovation and protecting creators’ interests.
  3. European Union (TDM with Opt-Out) – The European Union has adopted a balanced approach through the 2019 Copyright Directive, which introduced mandatory exceptions for text and data mining (TDM). Under this framework, automated analysis of copyrighted material is permitted where the user has lawful access, unless the rightholder has expressly opted out. Rights-holders may reserve their works through contractual or machine-readable notices, which AI developers are required to respect (a minimal technical sketch of such a machine-readable check follows this list). The EU AI Act reinforces this model by imposing transparency obligations on general-purpose AI providers, including disclosure of training data sources and compliance with opt-out reservations. While offering developers a statutory safe harbour for AI training, the EU regime also imposes greater compliance and accountability obligations than the largely fair use–based U.S. approach.
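To make the idea of a machine-readable reservation concrete, here is a minimal sketch of how a training-data crawler might honour such a signal before fetching a page. It is an illustration only: the Directive prescribes no single format, and robots.txt (parsed below with Python’s standard urllib.robotparser) is just one widely used convention. The crawler name and URL are hypothetical.

```python
# Minimal sketch: honouring a machine-readable opt-out before crawling.
# robots.txt is one common reservation signal; EU law mandates no single
# format, so treat this purely as an illustrative convention.
from urllib import robotparser
from urllib.parse import urlparse

AI_CRAWLER_AGENT = "ExampleAITrainingBot"  # hypothetical crawler name


def may_fetch_for_training(url: str) -> bool:
    """Return True only if the site's robots.txt permits this crawler."""
    parsed = urlparse(url)
    rp = robotparser.RobotFileParser()
    rp.set_url(f"{parsed.scheme}://{parsed.netloc}/robots.txt")
    try:
        rp.read()  # download and parse the site's robots.txt
    except OSError:
        return False  # robots.txt unreachable: err on the side of caution
    return rp.can_fetch(AI_CRAWLER_AGENT, url)


if __name__ == "__main__":
    # Hypothetical page; a compliant pipeline would skip it if disallowed.
    print(may_fetch_for_training("https://example.com/articles/1"))
```

A real pipeline would layer further signals on top of this (HTTP reservation headers, licence metadata, a rights-registry lookup), but even this simple check shows how an opt-out can be enforced automatically at crawl time.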

A Path Forward

Rather than adopting a compulsory licensing regime, India should pursue a more balanced framework that combines statutory exceptions, creator autonomy, and market-based solutions:

  • Introduce a Statutory TDM Exception (with safeguards):

Amend the Copyright Act to expressly permit the reproduction of works for AI training under a text and data mining exception, limited to analytical and non-expressive uses. The exception should apply only where the user has lawful access, and should include a remuneration trigger for commercial AI use, ensuring clarity while protecting creators’ economic interests.

  • Preserve Authorial Autonomy through Opt-Out Mechanism:

Allow creators to reserve their works from AI training through technical measures or a national rights registry. Inclusion should ultimately rest on consent: authors willing to participate can allow their works to remain in the licence pool and receive compensation, while others retain full control by reserving theirs.

  • Encourage Voluntary Licensing:

Support the development of optional collective AI licensing frameworks managed by CMOs, where registered works are pooled and licensed to AI developers. This enables efficient licensing and fair remuneration without statutory compulsion, leaving creators free to participate based on perceived value.

  • Mandate Transparency & Compliance Standards:

Require AI developers to disclose high-level categories of training data, respect anti-scraping signals, and comply with copyright notices. Transparency obligations can build trust without forcing disclosure of trade secrets (a hypothetical sketch of such a category-level disclosure appears after this list).

  • Ensure Reasonable and Proportionate Remuneration:

Where remuneration applies, royalty rates should be realistic, differentiated by type of work, and set through representative governance mechanisms to ensure meaningful participation of smaller creators and prevent excessive compliance burdens.
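Returning to the transparency point above, the snippet below sketches what a high-level, trade-secret-preserving disclosure might look like: dataset categories and approximate shares rather than URL-level inventories. Every field name and figure is invented for illustration; no Indian or EU template is implied.

```python
# Hypothetical training-data transparency summary: categories, not URLs.
# All field names and figures are invented; no official template implied.
import json

disclosure = {
    "model": "example-model-v1",  # hypothetical model name
    "data_categories": [
        {"category": "news articles",     "share_percent": 20, "licensed": True},
        {"category": "public web text",   "share_percent": 65, "licensed": False},
        {"category": "code repositories", "share_percent": 15, "licensed": False},
    ],
    "opt_out_signals_honoured": ["robots.txt"],  # cf. the crawler sketch above
}

print(json.dumps(disclosure, indent=2))
```

A disclosure at this granularity lets rights-holders and regulators see what kinds of works a model was trained on, and whether reservations were honoured, without exposing the proprietary composition of the dataset.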

Conclusion

Generative AI must develop within a legal framework that continues to respect creators’ rights. While India’s One Nation One Licence proposal reflects a genuine attempt to protect artists and journalists, its compulsory, uniform approach risks undermining both copyright principles and AI innovation. A more balanced framework is achievable: introduce limited statutory exceptions for AI training with fair remuneration, preserve creators’ ability to opt out or reserve rights, and encourage voluntary collective licensing models. Combined with transparency obligations on AI developers, such as dataset disclosures and respect for access controls, this hybrid approach can support innovation while safeguarding creative interests, ensuring that technology and creativity progress together under clear and equitable legal norms.

Image generated on Gemini