
EU AI Act Update: European Parliament Approves It Overwhelmingly…What You Need To Know Now

By Chris Hackney, Founder and Board Member - AI Guardian

Fact Checked by Robin Hackney 




[Image: EU symbol tied to circuitry, emblematic of EU oversight of technology regulation]

With the European Parliament's passage of the EU Artificial Intelligence Act (EU AI Act) on March 13th, the European Union took a historic step toward regulating artificial intelligence. This groundbreaking regulation aims to ensure the ethical development and deployment of AI systems, prioritizing safety, fairness, and transparency.


For businesses operating in the EU or developing AI systems targeting European markets, understanding the intricacies of the EU AI Act is crucial. This blog post covers the latest key aspects of the legislation, its implementation timeline, and practical steps companies can take to prepare for it this year.


Risk-Based Governance Approach: Not All AI is Created Equal

The EU AI Act categorizes AI systems based on their potential risk. This tiered approach ensures proportionate regulation, focusing oversight on the most critical applications. The act defines four risk categories (a short code sketch modeling these tiers follows the list):


  • Unacceptable Risk (Prohibited): Systems deemed a threat to fundamental rights, safety, or livelihoods. The act explicitly prohibits the development, placement on the market, and use of such systems within the EU. Examples include:

    - Social scoring systems with real-world consequences: Imagine a system that assigns citizens a social credit score based on factors like online behavior or political affiliations, with direct consequences for employment, access to loans, or social services.

    - Real-time remote biometric identification for law enforcement in public spaces: This could involve widespread facial recognition technology used for surveillance purposes without proper safeguards.

  • High Risk (Strict Compliance Requirements): Systems posing significant risks to fundamental rights or safety (e.g., facial recognition and other biometric identification systems). These systems require strict compliance measures, including conformity assessments and detailed risk management procedures.

  • Limited Risk (Self-Declaration): Systems posing limited risks (e.g., chatbots and other systems that interact directly with people). While AI Act oversight remains, the focus here is on self-declaration by developers. Companies creating limited-risk AI will need to:

    - Conduct risk assessments: Evaluate their AI systems for potential risks and implement a risk management system.

    - Develop technical documentation: Maintain technical documentation outlining the AI system's development process, the data used, and its functionalities.

    - Self-declare compliance: Declare that their AI system adheres to the act's requirements for limited-risk systems.

  • Minimal Risk (Minimal Requirements): Systems deemed unlikely to cause harm (e.g., simple image or video filters). These systems face minimal regulatory requirements, though AI governance best practices are still encouraged. Examples include:

    - Simple games with AI elements: Mobile games that use basic AI for character behavior fall under this category.

    - Spam detection algorithms: Basic algorithms designed to identify spam emails with limited user interaction.
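
To make these tiers concrete for an internal inventory, here is a minimal Python sketch of the four categories and their headline obligations. The enum, the OBLIGATIONS mapping, and obligations_for are hypothetical names for illustration; the act itself defines no such schema:

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # strict compliance requirements
    LIMITED = "limited"            # self-declaration and transparency duties
    MINIMAL = "minimal"            # best practices encouraged

# Hypothetical mapping from tier to the headline obligations described above.
OBLIGATIONS: dict[RiskTier, list[str]] = {
    RiskTier.UNACCEPTABLE: ["prohibited: may not be developed, marketed, or used in the EU"],
    RiskTier.HIGH: ["conformity assessment", "risk management system",
                    "transparency to users", "human oversight"],
    RiskTier.LIMITED: ["risk assessment", "technical documentation",
                       "self-declaration of compliance"],
    RiskTier.MINIMAL: ["no mandatory requirements; governance best practices encouraged"],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Look up the headline obligations for a risk tier."""
    return OBLIGATIONS[tier]

for duty in obligations_for(RiskTier.HIGH):
    print("-", duty)
```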


[Image: Pyramid depicting the four levels of risk defined by the EU AI Act: unacceptable, high, limited, and minimal. Source: European Commission report on the AI Act]

Strict Compliance Requirements for High Risk AI Systems

High-risk AI systems are considered to pose significant risks to fundamental rights or safety. As such, the EU AI Act requires companies developing or deploying high-risk AI to meet a series of compliance requirements before these systems can be placed on the EU market.


In particular, the Artificial Intelligence Act requires companies to provide:

  • Conformity Assessments: A conformity assessment evaluates and verifies that the AI system meets the AI Act's technical and non-technical standards, promoting responsible development and deployment. capAI provides a robust framework for conducting conformity assessments based on AI Act guidance. The act offers companies two distinct pathways: internal assessments and external assessments by notified bodies. Which pathway is appropriate depends on several factors (see the decision sketch after this list), including:

    - Complexity of the AI system: Highly complex systems with novel functionalities might benefit from the specialized expertise of notified bodies.

    - Internal expertise: Companies with a strong track record in AI development and a robust quality management system might be well positioned for internal assessments.

    - Resource availability: Internal assessments can be more resource-intensive, requiring dedicated personnel and documentation efforts.

  • Risk Management Systems: Companies must establish comprehensive risk management procedures to identify, assess, mitigate, and monitor risks throughout the AI lifecycle.

  • High Transparency Standards: Clear and accessible information must be provided to users about the system's capabilities, limitations, and potential risks, including how the system makes decisions and the type of data it uses.

  • Human Oversight: High-risk AI systems should be designed and operated with meaningful human oversight to ensure accountability and prevent unintended consequences.
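
As a rough illustration of how those pathway factors might be weighed, here is a sketch of a decision heuristic. AssessmentContext and suggest_pathway are hypothetical names, and the actual choice for any given high-risk system is governed by the act and notified-body guidance, not by code like this:

```python
from dataclasses import dataclass

@dataclass
class AssessmentContext:
    """Hypothetical factors weighed when choosing an assessment pathway."""
    novel_functionality: bool  # highly complex system with novel features?
    mature_internal_qms: bool  # strong AI track record and quality management?
    dedicated_resources: bool  # personnel and documentation capacity in-house?

def suggest_pathway(ctx: AssessmentContext) -> str:
    """Illustrative heuristic only; the act and notified-body guidance
    govern the actual choice for any given high-risk system."""
    if ctx.novel_functionality:
        return "external assessment (notified body)"
    if ctx.mature_internal_qms and ctx.dedicated_resources:
        return "internal assessment"
    return "external assessment (notified body)"

print(suggest_pathway(AssessmentContext(False, True, True)))  # internal assessment
```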



[Image: The four steps required for high-risk AI systems under the EU AI Act. Source: European Commission report on the AI Act]

A Phased Approach to EU AI Act Enforcement

With the European Parliament's endorsement of the EU AI Act, the stage is set for the act's official publication in the Official Journal of the European Union (OJEU), which will start the clock on the Artificial Intelligence Act's enforcement period. The AI Act adopts a phased approach to enforcement, with deadlines set by the risk category of the AI model and its application.


  • Banned AI practices: 6 months from enactment (Q4 2024)

  • General-purpose AI (GPAI): 1 year from enactment (Q2 2025)

  • Overall compliance, including high-risk AI systems: 2 years from enactment (Q1 2026)


In addition, EU member states have 18 months to transpose the act into their national laws. This translates to a deadline of September 2025 based on the current pace of passage.
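
A compliance team can derive its internal deadlines mechanically from the enactment date. The sketch below assumes a hypothetical enactment date purely for illustration, since the real clock only starts once the act is published in the OJEU:

```python
import calendar
from datetime import date

def add_months(d: date, months: int) -> date:
    """Add whole calendar months to a date, clamping to month end."""
    total = d.month - 1 + months
    year, month = d.year + total // 12, total % 12 + 1
    day = min(d.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)

# Hypothetical enactment date, for illustration only.
enactment = date(2024, 6, 1)

milestones = {
    "Banned AI practices": add_months(enactment, 6),
    "General-purpose AI (GPAI)": add_months(enactment, 12),
    "Overall & high-risk compliance": add_months(enactment, 24),
}

for name, deadline in milestones.items():
    print(f"{name}: {deadline.isoformat()}")
```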


The Sting of Non-Compliance: EU AI Act Penalties and Infractions

Just like the GDPR before it, the EU AI Act has teeth. Failure to comply can result in significant financial penalties, ranging from 1.5% to a staggering 7% of a firm's global annual turnover, depending on the severity of the offense.

  • Supply of incorrect, incomplete, or misleading information: €7.5 million or 1.5% of global annual turnover (whichever is higher)

  • Violation of AI Act obligations: €15 million or 3% of global annual turnover (whichever is higher)

  • Violation of banned AI applications or non-compliance with requirements on data: €35 million or 7% of global annual turnover (whichever is higher)
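
Because each tier caps the fine at the higher of a fixed amount and a percentage of global annual turnover, the theoretical ceiling is straightforward to compute. A minimal sketch, illustrative only and not legal advice:

```python
def fine_ceiling(fixed_cap_eur: float, turnover_pct: float,
                 global_turnover_eur: float) -> float:
    """Maximum possible fine: the higher of the fixed cap and the
    percentage of global annual turnover, per the table above."""
    return max(fixed_cap_eur, turnover_pct * global_turnover_eur)

# Hypothetical firm with €2 billion global annual turnover that violates
# a banned-AI provision: the ceiling is 7% of turnover, i.e. €140 million.
print(fine_ceiling(35_000_000, 0.07, 2_000_000_000))  # 140000000.0
```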


Penalties will be determined by the severity of the infringement, as judged by the European AI Office, a new governing body within the European Commission. In the context of the EU AI Act, the distinction between minor and significant infractions turns on the extent to which the violation impacts fundamental rights, public safety, or the integrity of AI systems. Examples of what might fall under each category:


Minor Infractions:

- Failing to provide adequate documentation or transparency reports as required.

- Minor non-compliance with technical standards for AI systems.

- Inadequate implementation of user data protection measures, not leading to severe consequences.


Significant Infractions:

- Deploying high-risk AI applications without conducting a Fundamental Rights Impact Assessment (FRIA).

- Using banned AI practices, such as untargeted scraping of facial images or emotion recognition in inappropriate contexts.

- Significant breaches of data privacy or security, particularly involving sensitive data.

- Non-compliance leading to direct harm to users' fundamental rights or public safety.


Beyond financial penalties, the act also empowers national authorities to take corrective actions, such as ordering the withdrawal of non-compliant AI systems from the market.


Preparing for Compliance: What Companies Need to do Now

“The EU AI Act will have far-reaching implications not only for the European market, but also for the global AI landscape. U.S. companies must ensure they have the right guardrails in place to comply with the EU AI Act and forthcoming regulation, without hitting the brakes on the path to value with generative AI.” - Steve Chase, vice chair of AI and digital innovation at KPMG

The enforcement window before full implementation provides a valuable opportunity for companies to prepare. Here are some key actions to consider:


  1. Assessing AI Systems: Determine which AI systems and models fall within the act's scope and conduct a gap analysis against key requirements (a minimal inventory sketch follows this list).

  2. Strengthening Data Governance: Enhance methods for data collection, storage, and lawful use, focusing on transparency and risk mitigation.

  3. Aligning with Technical Standards: Keep abreast of emerging technical standards that will define compliance specifics.

  4. Integrating with Existing Regulations: Understand the interaction between the EU AI Act and other regulations like GDPR and DSA.

  5. Conducting FRIAs and Conformity Assessments: Develop expertise to conduct Fundamental Rights Impact Assessments (FRIAs), which weigh AI benefits against fundamental rights considerations, and conformity assessments. The conformity assessment process is multi-layered, involving technical and non-technical evaluations; companies developing high-risk AI systems should begin exploring notified bodies and familiarizing themselves with the process well before the early 2026 deadline. FRIAs involve a systematic examination of potential risks to fundamental rights, such as privacy, non-discrimination, and freedom of expression, and their results should inform design, development, and deployment decisions, mitigating potential biases and harms. For more detail on conformity assessments, visit our Essential Conformity Assessment Components blog post.

  6. Investing in AI Governance Solutions: Consider implementing AI governance and compliance software. These tools can streamline risk assessments, document development processes, and help ensure ongoing compliance.
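
As a starting point for step 1, here is a minimal sketch of an internal inventory record and gap check. The record fields and the control names are hypothetical simplifications of the act's high-risk requirements, not an official checklist:

```python
from dataclasses import dataclass, field

# Illustrative headline controls for high-risk systems, condensed from the
# requirements discussed above; not exhaustive.
HIGH_RISK_CONTROLS = {"conformity assessment", "risk management system",
                      "transparency documentation", "human oversight"}

@dataclass
class AISystemRecord:
    """One row in a hypothetical internal AI system inventory (step 1)."""
    name: str
    risk_tier: str  # "unacceptable" | "high" | "limited" | "minimal"
    serves_eu_market: bool
    completed_controls: set[str] = field(default_factory=set)

def gap_analysis(record: AISystemRecord) -> set[str]:
    """Return the high-risk controls a system still lacks (empty if out of scope)."""
    if not record.serves_eu_market or record.risk_tier != "high":
        return set()
    return HIGH_RISK_CONTROLS - record.completed_controls

screener = AISystemRecord("resume-screener", "high", True,
                          completed_controls={"risk management system"})
print(sorted(gap_analysis(screener)))
```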



A Fast Developing Compliance Road Ahead

The EU AI Act represents a transformative approach to AI governance, with worldwide impact on how artificial intelligence, generative and otherwise, will be integrated into business operations. Emphasizing risk management, data integrity, and human rights, it resonates with the growing emphasis on ethical AI and data protection. Early preparation allows companies not only to meet compliance requirements but also to emerge as frontrunners in ethical AI practices, a strategic advantage that will be invaluable in a global market increasingly driven by artificial intelligence.


