
EU AI Act Explained - Deep Dive Series: Everything You Need to Know About the Act's Risk-Based Approach & AI System Risk Categories

By Chris Hackney, Founder and Board Member - AI Guardian

Fact Checked by Robin Hackney 



Map of Europe depicted as a circuit board, overlaid with the European Union stars

The European Union (EU) has taken an important step towards shaping the future of artificial intelligence (AI) with the landmark EU AI Act. This groundbreaking legislation, passed into EU law in early 2024, establishes the world's first comprehensive framework for regulating AI development and deployment.

At the core of the act lies a risk-based approach, which tailors regulations to the potential harm an AI system might pose. This ensures proportionate oversight, focusing stricter scrutiny on high-risk applications like facial recognition, while allowing for lighter touch regulation for less risky systems like spam filters.


But what exactly constitutes a high-risk AI system? And how does the EU AI Act categorize and regulate different AI applications? Let’s unpack the intricacies of the risk-based approach, providing a clear roadmap for companies navigating this evolving regulatory landscape.


The Four Pillars of AI Risk: A Breakdown of AI System Categories

The EU AI Act classifies AI systems into four distinct risk categories, each with its own set of regulations and compliance requirements.


Pyramid showing the AI risk levels as defined by the EU AI Act (unacceptable, high, limited and minimal risk)
EU AI Act Risk Levels Source: European Commission report on the AI Act
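
As an illustration of how a governance team might encode these four tiers in internal tooling, here is a minimal, hypothetical sketch. The tier names mirror the act, but the keyword-based triage logic is invented for illustration only and is no substitute for a legal assessment.

```python
from enum import Enum

class RiskTier(Enum):
    """The four EU AI Act risk tiers, as described in this article."""
    UNACCEPTABLE = "unacceptable"   # prohibited outright (Title II, Article 5)
    HIGH = "high"                   # strict compliance requirements (Title III, Annex III)
    LIMITED = "limited"             # transparency / self-declaration obligations
    MINIMAL = "minimal"             # minimal requirements; best practices encouraged

# Hypothetical triage helper: a real classification requires legal review of the
# system's intended purpose against Article 5 and Annex III, not keyword checks.
def triage_risk_tier(use_case: str) -> RiskTier:
    prohibited = {"social scoring", "real-time public biometric identification"}
    high_risk = {"credit scoring", "resume screening", "exam scoring"}
    if use_case in prohibited:
        return RiskTier.UNACCEPTABLE
    if use_case in high_risk:
        return RiskTier.HIGH
    # Everything else defaults downward pending a fuller assessment.
    return RiskTier.MINIMAL

print(triage_risk_tier("resume screening"))  # RiskTier.HIGH
```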


AI Systems with Unacceptable Risk (Prohibited):  

These AI systems are deemed a clear threat to fundamental rights, safety, or livelihoods and are explicitly banned within the EU.  Prohibited AI systems (as defined in Title II, Article 5) include:

  • Social Scoring Systems with Real-World Consequences: Systems that assign social credit scores based on online behavior or political affiliations, with direct consequences for employment, access to loans, or social services.

  • Real-Time Remote Biometric Identification (RBI) for Law Enforcement in Public Spaces: For example, widespread facial recognition technology used for surveillance purposes without proper safeguards. Three exceptions to this prohibition are allowed, subject to authorization by a judicial or other independent body and to appropriate limits on duration, geographic reach, and the databases searched:

    • Searching for missing persons, abduction victims, and victims of human trafficking or sexual exploitation

    • Preventing a substantial and imminent threat to life, or a foreseeable terrorist attack

    • Identifying suspects in serious crimes (e.g., murder, rape, armed robbery, narcotics and illegal weapons trafficking, organized crime, and environmental crime)

  • Deceptive Techniques to Distort Behavior: Deploying subliminal, manipulative, or deceptive techniques to distort behavior and impair informed decision-making, causing significant harm.

  • Exploiting Vulnerable Communities: Exploiting vulnerabilities related to age, disability, or socio-economic circumstances to distort behavior, causing significant harm.

  • Biometric Categorization Systems: Systems that infer sensitive attributes (race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation), except for the labeling or filtering of lawfully acquired biometric datasets or when law enforcement categorizes biometric data.

  • Criminal Profiling at Individual Level: Assessing the risk of an individual committing criminal offenses solely based on profiling or personality traits, except when used to augment human assessments based on objective, verifiable facts directly linked to criminal activity.

  • Emotional Inference in Sensitive Environments: Inferring emotions in workplaces or educational institutions, except for medical or safety reasons.


High-Risk AI Systems (Strict Compliance Requirements): 

High-risk AI systems pose significant risks to fundamental rights or safety. AI systems identified as high-risk (as defined in Title III, Article 6 and Annex III) include AI technology used in:

  • Critical infrastructure (e.g., transport, utilities), where failure could put the life and health of citizens at risk

  • Educational or vocational training, which may determine access to education and the professional course of someone’s life (e.g., scoring of exams)

  • Safety components of products (e.g. AI system applications in robot-assisted surgery)

  • Employment, management of workers, and access to self-employment (e.g., resume-screening software for recruitment procedures)

  • Essential private and public services (e.g., credit scoring that denies loan applications)

  • Law enforcement uses that may interfere with people’s fundamental rights (e.g., evaluating the reliability of evidence)

  • Migration, asylum and border control management (e.g. automated examination of visa applications)

  • Administration of justice and democratic processes (e.g. AI solutions to search for court rulings)


Because of their potential to have a material negative impact on broad communities, these high-risk AI systems face the most stringent compliance requirements before they are allowed on the market.


Four-step compliance process for high-risk AI systems: development, conformity assessment, registration, and declaration of conformity.
Steps for high-risk AI systems to comply with the EU AI Act. Source: European Commission report on the AI Act

In particular, the Artificial Intelligence Act requires companies to provide the following (an illustrative tracking sketch follows this list):

  • Conformity Assessments: A conformity assessment evaluates and verifies that the AI system meets the AI Act's technical and non-technical standards, promoting responsible development and deployment. capAI provides a robust framework for conducting conformity assessments based on AI Act guidance. The act offers companies two distinct pathways: internal assessments and external assessments by notified bodies. Which pathway is appropriate depends on several factors, including:

    • Complexity of the AI System: Highly complex systems with novel functionalities might benefit from the specialized expertise of notified bodies.

    • Internal Expertise: Companies with a strong track record in AI development and a robust quality management system might be well-positioned for internal assessments.

    • Resource Availability: Internal assessments can be more resource-intensive, requiring dedicated personnel and documentation efforts.

  • Risk Management Systems: Companies must establish comprehensive risk management procedures to identify, assess, mitigate, and monitor risks throughout the AI lifecycle.

  • High Transparency Standards: Clear and accessible information needs to be provided to users about the system's capabilities, limitations, and potential risks. This includes information on how the system makes decisions and the type of data it uses.

  • Human Oversight: High-risk AI systems should be designed and operated with meaningful human oversight to ensure accountability and prevent unintended consequences.
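
To make these obligations concrete, here is a minimal, hypothetical sketch of how a compliance team might track them for a single high-risk system. The record structure and field names are illustrative assumptions, not terminology mandated by the act.

```python
from dataclasses import dataclass, field

@dataclass
class HighRiskComplianceRecord:
    """Illustrative internal tracking record for one high-risk AI system."""
    system_name: str
    conformity_assessment: str = "pending"      # "internal" or "notified body" once completed
    risk_management_in_place: bool = False      # identify, assess, mitigate, monitor across the lifecycle
    transparency_docs_published: bool = False   # capabilities, limitations, data used, decision logic
    human_oversight_defined: bool = False       # named reviewers and an escalation path
    open_items: list[str] = field(default_factory=list)

    def ready_for_market(self) -> bool:
        # All obligations must be satisfied before the system is placed on the market.
        return (
            self.conformity_assessment != "pending"
            and self.risk_management_in_place
            and self.transparency_docs_published
            and self.human_oversight_defined
            and not self.open_items
        )

record = HighRiskComplianceRecord("resume-screening-model")
record.open_items.append("Complete conformity assessment")
print(record.ready_for_market())  # False
```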


Certain AI systems that would otherwise be considered high-risk may be exempted from the act's high-risk compliance requirements in limited use cases, including if the AI system:

  • Performs a narrow procedural task

  • Improves the result of a previously completed human activity

  • Detects decision-making patterns or deviations from prior decision-making patterns and is not meant to replace or influence the previously completed human assessment without proper human review

  • Performs a preparatory task to an assessment relevant for the purpose of the use cases listed in Annex III.



AI Systems with Limited Risk (Self-Declaration): 

General-purpose AI (GPAI) systems with limited risk (as defined in Title IV, Article 52) pose only limited risks to fundamental rights or safety.


Examples of what are considered limited risk AI systems:

  • Spam Filters: Systems that help manage email inboxes.

  • Customer Service Chatbots: Tools that improve customer service through automated exchanges with customers.

  • Simple Image or Video Filters: Simple filters that adjust brightness, contrast, or color in photos without advanced facial recognition or emotional analysis features.


While some oversight remains, the focus with these types of AI systems is on self-declaration by developers. Companies creating limited-risk AI systems will need to:

  • Conduct Risk Assessments: Evaluate their AI systems for potential risks and implement a risk management system.

  • Develop Technical Documentation: Maintain technical documentation outlining the system's development process, data used, and functionalities.

  • Self-Declaration Compliance: Declare that their AI system adheres to the act's requirements for limited-risk systems.


AI Systems with Minimal Risk (Minimal Requirements): 

General-purpose AI (GPAI) systems with minimal risk (as defined in Title IX, Article 69) are deemed unlikely to cause harm. These systems face minimal regulatory requirements, but AI governance best practices are still encouraged. Examples include:

  • Simple Games with AI Elements: Mobile games that utilize basic AI for character behavior fall under this category.

  • Spam Detection Algorithms: Basic algorithms designed to identify spam emails with limited user interaction.

  • Educational Tools with Limited Personalization: AI-powered language learning applications that tailor basic exercises based on user performance but don't collect sensitive data.

  • Basic Recommendation Systems: Recommendation algorithms for online shopping or music streaming that rely on non-invasive data like purchase history or listening habits.

  • AI-powered Image Enhancement Tools: Simple filters that adjust brightness, contrast, or color in photos without advanced facial recognition or emotional analysis features qualify as minimal risk.


Even minimal-risk AI systems should be designed with transparency in mind. Users should have a basic understanding of how the AI works and what data it uses. This could be achieved through clear user interfaces, disclaimers, or readily available information about the AI system's capabilities.
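
For example, even a minimal-risk feature can surface a short, plain-language notice at the point of use. The function and wording below are purely illustrative of that idea, not language prescribed by the act.

```python
def ai_feature_notice(feature: str, data_used: str) -> str:
    """Return a plain-language disclosure shown next to an AI-powered feature."""
    return (
        f"This {feature} uses AI. It relies on {data_used} "
        "and does not make decisions about you."
    )

print(ai_feature_notice("photo enhancement filter", "the image you selected"))
```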


AI Governance - The Path Ahead

The EU AI Act is a big step forward in the governance of artificial intelligence. Its risk-based approach offers a clear framework for responsible AI development and deployment, fostering trust and promoting innovation. While navigating the act's intricacies might seem daunting, it also presents a valuable opportunity.  By proactively preparing for compliance and adopting responsible AI practices, companies can position themselves as leaders in the ethical development of AI, gaining a significant competitive advantage in the global marketplace.


EU AI Act Explained

This article provides a host of detail on the EU AI Act’s risk-based approach, but the act is a comprehensive piece of legislation with many facets. We encourage you to read AI Guardian’s other articles in the EU AI Act Deep Dive Series that cover other critical aspects of the act, such as the role of Conformity Assessments and the importance of Fundamental Rights Impact Assessments (FRIA). In the meantime, if you're looking for a solution to streamline your AI governance journey, our AI governance and compliance software can be your trusted partner.  Visit our solutions page to learn more about how we can help your organization navigate the EU AI Act and embrace a future of responsible AI.
