EU’s AI Act: Navigating the Dangers of Artificial Intelligence


In December 2023, the European Union (EU) took a significant step forward in addressing concerns about artificial intelligence (AI) and machine learning technologies by reaching a provisional agreement on the draft Artificial Intelligence Act (“AI Act”), the text of which was endorsed on 2nd February, 2024. You can access the draft of the AI Act here.

The AI Act is anticipated to come into effect between May and July 2024,[i] and is designed to establish ethical guidelines and transparency standards for AI systems. Overseen by the newly created EU AI Office, the legislation carries significant penalties for noncompliance, ranging from €7.5 million or 1.5 percent of global revenue to €35 million or 7 percent of global revenue, depending on the infringement and the size of the company.

With these stringent measures in place, it’s essential for providers, developers, and implementers of AI models or AI-based systems to grasp the implications of the forthcoming AI Act for their businesses.

We will delve into a few key points below.

Applicability (Article 2)

The proposed AI Act applies to, amongst others, providers and developers of AI systems used within the EU, regardless of their location. Therefore, similar to the General Data Protection Regulation (GDPR), Indian companies or other international companies offering AI technology within the EU may also face penalties for noncompliance.

A “deployer” is any person or entity using an AI system, excluding personal, non-professional use. A “provider” is any person or entity that develops an AI system, or that places one on the market or puts it into service under its own name or trademark.

Four Tiers of Regulations

The AI Act adopts a risk-based approach and categorizes AI systems into four tiers of risk: 1) Unacceptable Risk, 2) High Risk, 3) Limited Risk, and 4) Minimal Risk. The AI Act provides different levels of regulation and control depending on the tier under which an AI model falls. The majority of the AI Act focuses on regulating high-risk AI systems, while a smaller section deals with limited-risk AI systems.

However, general purpose Artificial Intelligence, such as Generative AI (DALL·E, ChatGPT, etc.), is subject to additional obligations, as detailed in point 5 below.

  1. Unacceptable Risk (Prohibited)

Under the AI Act, practices posing “unacceptable risk” are strictly prohibited. These include:

  • Using manipulative, deceptive, or subliminal techniques to influence decisions causing significant harm.
  • Exploiting vulnerabilities based on age, disability, or specific social/economic situations to cause harm.
  • Using biometric data to categorize individuals based on their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation.
  • Creating or expanding facial recognition databases through untargeted scraping of facial images from the internet or closed-circuit television (CCTV) footage.

AI practices posing such unacceptable risk would attract severe penalties, with fines of up to €35 million (approximately ₹300 crore), or 7 percent of a company’s annual revenue, whichever is greater.

  2. High Risk (Heavily regulated)

High-risk systems under the AI Act encompass a broad range of AI applications, potentially including biometric identification, educational or vocational training, employment evaluation, and financial or insurance-related systems, such as AI used in hiring and law enforcement.

While the exact boundaries of high-risk AI technologies remain uncertain, the AI Act stipulates that systems posing no significant risk to the health, safety, or fundamental rights of individuals generally won’t be classified as high-risk.

Among other compliances, deployers and providers of high-risk AI technology should prepare to adhere to the following requirements under the AI Act:

  • Register with the centralized EU database.
  • Implement a compliant quality management system.
  • Maintain adequate documentation and logs.
  • Undergo relevant conformity assessments.
  • Comply with restrictions on the use of high-risk AI.
  • Ensure ongoing regulatory compliance and be prepared to demonstrate such compliance upon request.
  3. Limited Risk (Transparency obligations)

A smaller section of the AI Act addresses limited-risk AI systems, which are subject to lighter transparency obligations: developers and deployers must ensure that end-users are aware that they are interacting with AI (for example, chatbots and deepfakes).

  4. Minimal Risk (Unregulated)

Minimal-risk AI systems are left unregulated (for example, AI-enabled video games and spam filters).

  5. Additional Obligations for General Purpose AI Models

Policymakers have chosen to regulate powerful general-purpose models, such as the generative models that create images, code, and video (DALL·E, ChatGPT, etc.), in their own two-tier category,[ii] the first tier being GPAI and the second tier being systemic GPAI.


A General Purpose Artificial Intelligence (GPAI) model is an AI model that displays significant generality, is capable of competently performing a wide range of tasks across different domains, and can be integrated into a variety of downstream systems or applications. This definition does not cover AI models used solely for research, development, and prototyping activities before release on the market.

Providers of GPAI models must fulfill certain obligations under the AI Act, including:

  • Creating technical documentation that outlines the training and testing processes, as well as evaluation results.
  • Providing information and documentation to downstream providers intending to integrate the GPAI model into their own AI system, ensuring they understand its capabilities and limitations.
  • Establishing a policy to comply with the Copyright Directive.
  • Publishing a sufficiently detailed summary about the content used for training the GPAI model.

The first tier covers all general-purpose models. Models used only for research are exempt, and models published under an open-source licence need comply only with the latter two obligations (the copyright policy and the training-content summary).

Systemic GPAI

GPAI models are categorized as “systemic” if the cumulative compute used for their training exceeds 10^25 floating-point operations (FLOPs). Providers must notify the Commission within two weeks if their model meets this criterion.
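To make the 10^25 FLOPs threshold concrete, the back-of-the-envelope sketch below estimates cumulative training compute as accelerators × per-chip throughput × training time × utilization. The hardware figures, utilization rate, and helper-function name are illustrative assumptions for the sake of the example, not values taken from the AI Act.

```python
# The AI Act's threshold for classifying a GPAI model as "systemic":
# cumulative training compute above 1e25 floating-point operations.
SYSTEMIC_THRESHOLD_FLOPS = 1e25

def cumulative_training_flops(num_chips, flops_per_chip_per_s, seconds, utilization):
    """Rough estimate: chips x per-chip throughput x wall-clock time x utilization."""
    return num_chips * flops_per_chip_per_s * seconds * utilization

# Hypothetical training run (assumed figures): 10,000 accelerators at
# 3e14 FLOP/s each, running for 90 days at 40% utilization.
total = cumulative_training_flops(
    num_chips=10_000,
    flops_per_chip_per_s=3e14,
    seconds=90 * 24 * 3600,
    utilization=0.4,
)

print(f"Estimated training compute: {total:.4e} FLOPs")
print("Systemic GPAI?", total > SYSTEMIC_THRESHOLD_FLOPS)
```

For these assumed figures the estimate lands just below the threshold, which illustrates how sensitive the classification can be to training duration and utilization.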

In addition to the four obligations outlined above, providers of systemic GPAI models must also:

  • Conduct model evaluations, including adversarial testing, to identify and mitigate systemic risk, documenting these processes.
  • Assess and mitigate potential systemic risks and their sources.
  • Record and report serious incidents and possible corrective measures to the AI Office and relevant national competent authorities promptly.
  • Ensure adequate cybersecurity protection.


After entry into force, the AI Act will apply and compliance will be required:

  • within 6 months for prohibited AI systems,
  • within 12 months for GPAI (general purpose AI),
  • within 24-36 months for high-risk AI systems.

Concluding Remarks

The European Union’s approach represents a welcome change, serving as one of the first steps towards regulating AI at the international level, and it may encourage lawmakers in other jurisdictions to put their own regulations in place.

India’s current intellectual property laws are adequate to cover prevalent AI technologies, according to the Union Minister of State for Commerce and Industry’s statement on February 9, 2024.[iii] However, as AI continues to advance and its capabilities expand, there may be a need for a more comprehensive regulatory framework to address emerging risks effectively.

The Indian Government has started taking steps in this direction by releasing an advisory dated 15th March, 2024, concerning the deployment of artificial intelligence (AI) models by intermediaries. MeitY officials have clarified that the advisory serves as guidance rather than a regulatory framework. You can access the summary of this advisory here.

End notes:




Image generated with DALL·E