Advisory by MeitY on Artificial Intelligence Deployment

On 15 March 2024, the Ministry of Electronics and Information Technology (MeitY) issued a revised advisory concerning the deployment of artificial intelligence (AI) models by Intermediaries.[i]

This revised advisory supersedes a previous advisory issued on 1 March 2024, which, inter alia, required prior and explicit permission from the Government before deploying any AI Technologies that were under testing or unreliable.[ii]

The 1 March advisory drew criticism from various quarters, with concerns raised by startup founders such as Aravind Srinivas, CEO of Perplexity, who called it a “bad move by India” in a post on X.[iii]

The revised advisory addresses some of these concerns by removing the obligation to obtain explicit government permission before deploying AI Technologies.


Key points of the revised advisory include:

  1. Adherence to the IT Rules: The advisory requires Intermediaries to ensure that their AI models, including large language models (LLMs, such as ChatGPT) and Generative AI (such as DALL·E) (“AI Technologies”), do not permit their Users to upload, host, or publish any unlawful content that violates Rule 3(1)(b) of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (“IT Rules”), the Information Technology Act, 2000, or other laws in force.

View: Does this mean that when a User enters a text prompt that violates the IT Rules (thereby uploading unlawful content), the AI Platform becomes liable for violating the advisory? An answer to this is pertinent, particularly because an AI Platform can never fully control the text prompts entered by its Users. It is also worth noting that although the conversation is mostly private, between the User and the AI, AI Platforms cannot stop Users from taking screenshots and posting them online.

  2. Electoral Integrity: Intermediaries have been advised to ensure that they, whether by themselves or through their AI Technologies, do not permit any bias or discrimination and do not threaten the integrity of the electoral process.

View: The ambit and definition of the term “integrity of the electoral process” need to be clarified to assess what sorts of output the advisory would restrict. For instance, if a piece of factual information about a political leader is generated, would it threaten the electoral process?

  3. Consent Popup: AI Technologies that are under testing or unreliable should be made available to Users only after the possible unreliability of their output is appropriately labelled. This may be done through a “consent popup” or an equivalent mechanism.

View: It is unclear whether the current mechanism used by platforms such as ChatGPT, which note the unreliability of their output below the search bar, would qualify as an “equivalent mechanism”, or whether a popup-like mechanism would be required. Moreover, no AI Platform currently appears able to guarantee that its technology is fully reliable, especially with respect to factual accuracy. The restriction therefore seems to apply to all AI Technologies, increasing compliance costs and resource use.

  4. Informing Users about Consequences: Every Intermediary has to inform Users, through its terms of service or user agreements (T&C), of the consequences of dealing with unlawful information, which may include: (1) disabling access to the information; (2) removal of the information; (3) suspension or termination of access to the user account or of user rights, as the case may be; and (4) any punishment under applicable laws.

View: These guidelines are in line with the obligations placed on Intermediaries under Rule 3(1)(c) of the IT Rules.

  5. Identification of Deepfakes and Misinformation: Where an Intermediary permits the synthetic creation of text, audio, or visual information that could be considered misinformation or a deepfake, such information should be labelled or embedded with permanent, unique metadata or an identifier. This is meant to help identify the computer resource through which the information was created. If any change is made to the information, the metadata should be updated so that the user or computer resource that made the change can be identified.

View: This is a useful tool to help identify and punish creators of deepfake videos who commit illegal activities, make harmful statements, target celebrities or other individuals, spread misinformation, or, in some cases, commit fraud.
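As an illustrative sketch only (the advisory does not prescribe any particular format, and all names below are hypothetical), provenance metadata of this kind could attach a permanent unique identifier to newly created synthetic content and append a revision record whenever the content is modified, so that the originating computer resource and any subsequent editor remain identifiable:

```python
import hashlib
import json
import uuid
from datetime import datetime, timezone

def label_content(content: bytes, creator_resource: str) -> dict:
    """Attach a permanent unique identifier to newly created synthetic content."""
    return {
        "content_id": str(uuid.uuid4()),                 # permanent unique identifier
        "sha256": hashlib.sha256(content).hexdigest(),   # fingerprint of the content
        "created_by": creator_resource,                  # computer resource that created it
        "created_at": datetime.now(timezone.utc).isoformat(),
        "revisions": [],                                 # appended to on every change
    }

def record_change(metadata: dict, new_content: bytes, editor_resource: str) -> dict:
    """Update the metadata when the content changes, so the editor is identifiable."""
    metadata["revisions"].append({
        "sha256": hashlib.sha256(new_content).hexdigest(),
        "edited_by": editor_resource,
        "edited_at": datetime.now(timezone.utc).isoformat(),
    })
    return metadata

if __name__ == "__main__":
    meta = label_content(b"synthetic image bytes", "render-node-01")
    meta = record_change(meta, b"edited image bytes", "user:alice")
    print(json.dumps(meta, indent=2))
```

In practice, industry proposals for such provenance trails (for example, cryptographically signed manifests embedded in the media file itself) are considerably more elaborate, but the principle is the same: a tamper-evident record linking each version of the content to whoever produced it.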


It was clarified that all Intermediaries are required to follow the guidelines and compliances mentioned in this advisory.

However, regarding the legal enforceability of the advisory, MeitY officials have clarified that it serves as guidance rather than a regulatory framework, emphasizing the need for caution and responsibility in AI deployment.

The effectiveness of such measures, especially given the evolving nature of AI technology, remains to be tested, including their application in the digital landscape.

The guidelines in this advisory are in addition to those in the advisory dated 26 December 2023, which mandated Intermediaries to communicate clearly and precisely to Users the content that is prohibited, particularly content specified under Rule 3(1)(b) of the IT Rules.

End Notes:




Image created on DALL·E

