
EU AI Act: Navigating the World's First Comprehensive AI Law

Updated: Sep 14, 2024


In a landmark move, the European Union (EU) has proposed the world's first comprehensive AI law: the EU AI Act. This blog post breaks down the essentials of this groundbreaking regulation, offering a quick guide to its key provisions.

The Basics

  • Definition of AI: Aligned with the recently updated OECD definition, providing a common understanding of AI technologies.

  • Extraterritorial Reach: Applies to organizations beyond the EU, reflecting the global impact of AI technologies.

  • Exemptions: Recognizes exemptions for national security, military, defense, research and development (R&D), and partial exemptions for open source initiatives.

  • Compliance Grace Periods: Organizations are granted grace periods ranging from 6 to 24 months for full compliance, acknowledging the need for adjustment.

  • Risk-Based Approach: Categorizes AI into Prohibited AI, High-Risk AI, Limited Risk AI, and Minimal Risk AI, each subject to specific regulatory requirements (see the sketch after this list).

  • Generative AI: Imposes specific transparency and disclosure requirements for Generative AI, ensuring openness in AI development.

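To make the risk-based approach concrete, here is a minimal Python sketch that maps the four risk tiers to a one-line summary of the obligations the Act attaches to each. The tier names come from the Act; the summaries and all identifiers (RiskTier, OBLIGATIONS, obligations_for) are illustrative assumptions, not language from the regulation.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"      # banned outright (e.g., social scoring)
    HIGH_RISK = "high_risk"        # permitted, but heavily regulated
    LIMITED_RISK = "limited_risk"  # transparency obligations (e.g., chatbots)
    MINIMAL_RISK = "minimal_risk"  # no new obligations (e.g., spam filters)

# Illustrative one-line summaries of each tier's obligations; not legal text.
OBLIGATIONS = {
    RiskTier.PROHIBITED: "May not be placed on the EU market at all.",
    RiskTier.HIGH_RISK: "Conformity assessment, risk management, registration, human oversight.",
    RiskTier.LIMITED_RISK: "Disclose AI use to individuals; label AI-generated content.",
    RiskTier.MINIMAL_RISK: "No additional obligations under the Act.",
}

def obligations_for(tier: RiskTier) -> str:
    """Return the illustrative obligation summary for a given risk tier."""
    return OBLIGATIONS[tier]

print(obligations_for(RiskTier.HIGH_RISK))
```

In practice, classifying a real system requires legal analysis of its intended purpose and context of use; the mapping above only mirrors the structure of the tiers.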

Prohibited AI: Drawing the Line

The EU AI Act identifies specific AI applications deemed detrimental to societal values and human rights. Prohibited AI includes:

  • Social credit scoring systems

  • Emotion recognition systems at work and in education

  • AI used to exploit people's vulnerabilities (e.g., age, disability)

  • Behavioral manipulation and circumvention of free will

  • Untargeted scraping of facial images for facial recognition

  • Biometric categorization systems using sensitive characteristics

  • Specific predictive policing applications

  • Law enforcement use of real-time biometric identification in public (except in limited, pre-authorized situations)


High-Risk AI: Navigating Sensitive Terrain

High-Risk AI applications are those with the potential for significant societal impact. These include:

  • Medical devices

  • Vehicles

  • Recruitment, HR, and worker management

  • Education and vocational training

  • Influencing elections and voters

  • Access to services (e.g., insurance, banking, credit, benefits)

  • Critical infrastructure management (e.g., water, gas, electricity)

  • Emotion recognition systems

  • Biometric identification

  • Law enforcement, border control, migration, and asylum

  • Administration of justice

  • Certain products and safety components of such products


Key Requirements for High-Risk AI

High-Risk AI comes with a set of key requirements to ensure responsible deployment:

  • Fundamental rights impact assessment and conformity assessment

  • Registration in a public EU database for high-risk AI systems

  • Implementation of risk management and quality management systems

  • Data governance, including bias mitigation and representative training data

  • Transparency, including instructions for use and technical documentation

  • Human oversight, such as explainability, auditable logs, and human-in-the-loop

  • Accuracy, robustness, and cybersecurity, including testing and monitoring


General Purpose AI: Ensuring Transparency and Accountability

Distinct requirements exist for General Purpose AI (GPAI) and Foundation Models:

  • Transparency for all GPAI, including technical documentation, training data summaries, copyright and IP safeguards, etc.

  • Additional requirements for high-impact models with systemic risk, such as model evaluations, risk assessments, adversarial testing, and incident reporting.

  • Generative AI: Individuals must be informed when interacting with AI (e.g., chatbots), and AI content must be labeled and detectable (e.g., deepfakes); a minimal illustrative sketch follows this list.

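As a rough illustration of those transparency duties, the hypothetical Python sketch below wraps a chatbot reply with a human-readable disclosure and a machine-readable label. The function name (wrap_with_disclosure) and the label schema are assumptions made up for this example; the Act does not prescribe any particular format.

```python
import json
from datetime import datetime, timezone

def wrap_with_disclosure(model_output: str, model_name: str) -> dict:
    """Attach a human-readable disclosure and a machine-readable label
    to AI-generated text. The schema is purely illustrative."""
    return {
        "content": model_output,
        "disclosure": "You are interacting with an AI system.",
        "label": {
            "ai_generated": True,  # machine-detectable marker
            "model": model_name,
            "generated_at": datetime.now(timezone.utc).isoformat(),
        },
    }

reply = wrap_with_disclosure("Here is the summary you asked for...", "example-model")
print(json.dumps(reply, indent=2))
```

Real deployments would pair a disclosure like this with content provenance or watermarking techniques so that labels survive copying and editing.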

Penalties & Enforcement: Striking a Balance

The EU AI Act brings a robust enforcement framework to ensure compliance:

  • Up to 7% of global annual turnover or €35m for prohibited AI violations (see the worked example after this list)

  • Up to 3% of global annual turnover or €15m for most other violations

  • Up to 1.5% of global annual turnover or €7.5m for supplying incorrect information

  • Caps on fines for SMEs and startups

  • Establishment of a European 'AI Office' and 'AI Board' at the EU level

  • Market surveillance authorities in EU countries to enforce the AI Act

  • Any individual can lodge a complaint about non-compliance

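These fine ceilings are commonly read as the higher of the fixed amount and the percentage of worldwide annual turnover. The short calculation below, with made-up numbers and an illustrative applicable_fine helper, shows how that reading plays out for a large company; it is not an official calculation method.

```python
def applicable_fine(annual_turnover_eur: float, pct_cap: float, fixed_cap_eur: float) -> float:
    """Illustrative reading of a fine tier: the ceiling is the higher of the
    fixed amount and the given percentage of worldwide annual turnover."""
    return max(annual_turnover_eur * pct_cap, fixed_cap_eur)

# Example: a company with EUR 2bn turnover facing the prohibited-AI tier
# (up to 7% or EUR 35m): 7% of EUR 2bn = EUR 140m, which exceeds EUR 35m.
print(f"Maximum fine: EUR {applicable_fine(2_000_000_000, 0.07, 35_000_000):,.0f}")
```

For SMEs and startups, the Act caps fines differently, as noted in the list above.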

Note: The EU AI Act has not yet been enacted; a political agreement was reached on December 8, 2023.


Written By: Jaanvi Sharma

