
Understanding EU AI Act Compliance Requirements: Essential Conformity Assessment Components

By: Chris Hackney, Founder and Board Member - AI Guardian

Fact Checked by Robin Hackney 




Image: A wall of dartboards, representing conformity both through the boards themselves and through the cluster of green darts.

In an era where Artificial Intelligence (AI) shapes many aspects of life, the European Union (EU) has taken a significant step in AI regulation with the recent passage of the EU AI Act. This landmark legislation establishes a comprehensive legal framework for AI, designed to ensure responsible and safe AI deployment across the EU. A central aspect of compliance with this act is the Conformity Assessment mandate for High-Risk AI Systems (HRAIS). This blog post delves into the intricacies of these assessments, underscoring their importance in the broader context of AI governance and Responsible AI policies.


The EU AI Act: A Risk-Based Regulatory Framework


The EU AI Act categorizes AI systems into four risk levels: low or minimal risk, limited risk, high risk, and unacceptable risk. Each level carries obligations proportional to its risk, with AI systems posing unacceptable risks being prohibited. High-Risk AI Systems (HRAIS), however, are subject to stringent obligations, especially regarding transparency. The Act's text emphasizes the significance of protecting individuals' health, safety, and fundamental rights from potential risks posed by these systems and includes Conformity Assessment requirements as part of this mission.
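
To make the tiered structure concrete, here is a minimal Python sketch that models the four tiers as a lookup from risk level to headline obligations. The tier names follow the Act as summarized here; the obligation summaries and the mapping itself are illustrative, not an official schema.

```python
# A minimal sketch of the Act's four-tier risk taxonomy as a data structure.
# Tier names mirror the Act; the obligation strings are illustrative summaries.
from enum import Enum

class RiskTier(Enum):
    MINIMAL = "low or minimal risk"
    LIMITED = "limited risk"
    HIGH = "high risk"
    UNACCEPTABLE = "unacceptable risk"

OBLIGATIONS = {
    RiskTier.MINIMAL: ["no mandatory obligations; voluntary codes of conduct"],
    RiskTier.LIMITED: ["transparency obligations (e.g., disclose AI interaction)"],
    RiskTier.HIGH: ["conformity assessment", "Title III, Chapter 2 requirements"],
    RiskTier.UNACCEPTABLE: ["prohibited from the EU market"],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the headline obligations attached to a risk tier."""
    return OBLIGATIONS[tier]

if __name__ == "__main__":
    for tier in RiskTier:
        print(f"{tier.value}: {'; '.join(obligations_for(tier))}")
```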



Conformity Assessments: Ensuring Accountability


Conformity Assessments (CAs) are mandatory for HRAIS to comply with the EU AI Act. These assessments are a regulatory tool to ensure accountability in AI development and deployment. They involve demonstrating compliance with the requirements set out in Title III, Chapter 2 of the legislation, which addresses various facets of AI system operation and impact.



When and Who Performs Conformity Assessments?


CAs must be performed before an HRAIS is placed on the EU market or put into use in the EU for the first time. While typically undertaken by the provider of the HRAIS, in certain cases the responsibility falls to the product manufacturer or to distributors/importers. The Act highlights two scenarios for this alternate responsibility: when the HRAIS is introduced under the manufacturer's name, or when distributors/importers change its intended purpose or make substantial modifications.


How are Conformity Assessments Conducted?

There are two methods for conducting CAs: internally by the AI system provider (or the responsible actor), or externally through a notified third-party body. The choice depends on adherence to harmonized standards: if full compliance with these standards can be demonstrated, providers may opt for internal assessment. Where full internal compliance is not possible, or external verification is deemed necessary, a third-party assessment is required.



Determining Internal vs Third-Party Assessment Pathway

For internal assessments, entities must ensure that their quality management system aligns with Article 17's provisions, confirm the adequacy of the technical documentation, and verify consistency between the documentation and the system's design and development. Successful completion requires drafting an EU Declaration of Conformity and affixing the Conformité Européenne (CE) marking to the product.


In contrast, third-party assessments are mandated for HRAIS involving remote biometric identification or inferences about personal characteristics based on biometric data. Notified bodies, designated by national notifying authorities, conduct these assessments, verifying the conformity of the AI system's technical documentation and quality management system. A successful assessment results in an EU Technical Documentation Certificate, valid for four years and renewable upon re-assessment.


For additional detail, consult Article 43 of the EU AI Act, which provides explicit guidance on which cases permit an internal conformity assessment and which require a third-party process.
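
As a rough illustration of that Article 43 logic, the following Python sketch encodes the pathway decision described above. The input fields and rule ordering are assumptions distilled from this summary, not the Act's authoritative criteria.

```python
# A hedged sketch of the assessment-pathway decision described above.
# Field names and rule ordering paraphrase this post's summary of Article 43;
# consult the Act's text for the authoritative rules.
from dataclasses import dataclass

@dataclass
class HRAIS:
    name: str
    uses_remote_biometric_id: bool       # remote biometric identification
    infers_traits_from_biometrics: bool  # personal characteristics from biometric data
    meets_harmonised_standards: bool     # full compliance demonstrated

def assessment_pathway(system: HRAIS) -> str:
    # Biometric use cases mandate a notified-body (third-party) assessment.
    if system.uses_remote_biometric_id or system.infers_traits_from_biometrics:
        return "third-party (notified body)"
    # Full adherence to harmonised standards opens the internal route.
    if system.meets_harmonised_standards:
        return "internal (provider self-assessment)"
    # Otherwise external verification is required.
    return "third-party (notified body)"

print(assessment_pathway(HRAIS("hiring screener", False, False, True)))
# -> internal (provider self-assessment)
```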



Conformity Assessment Core Components


Compliance with the EU AI Act requires that a conformity assessment cover seven key components:


1. Risk Management: Evaluating and addressing potential risks associated with the AI system.


2. Data Governance: Ensuring proper handling, quality, and integrity of data used by the AI system.


3. Technical Documentation: Maintaining detailed documentation about the AI system’s design, development, and functionalities.


4. Record Keeping: Keeping thorough records of the AI system’s operations and compliance measures.


5. Transparency and Provision of Information: Ensuring transparency in the AI system's operations and providing clear information to users.


6. Human Oversight: Implementing mechanisms for human oversight to prevent or minimize risks.


7. Accuracy, Robustness, and Cybersecurity: Ensuring the AI system's accuracy, resilience to attack and errors, and robust cybersecurity measures.


These requirements must be met before the AI system is put into use or enters the market, and the provider must continuously ensure compliance throughout the system's lifecycle.
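
For teams tracking readiness, the seven components lend themselves to a simple pre-market checklist. The sketch below is a hypothetical tracking structure, assuming each component is marked complete only once supporting evidence exists; the component names come straight from the list above.

```python
# An illustrative pre-market checklist built from the seven core components.
# The component names come from the Act's requirements; the tracking
# structure itself is hypothetical.
CORE_COMPONENTS = [
    "Risk Management",
    "Data Governance",
    "Technical Documentation",
    "Record Keeping",
    "Transparency and Provision of Information",
    "Human Oversight",
    "Accuracy, Robustness, and Cybersecurity",
]

def outstanding_items(evidence: dict[str, bool]) -> list[str]:
    """List components not yet evidenced; all must pass before market entry."""
    return [c for c in CORE_COMPONENTS if not evidence.get(c, False)]

evidence = {c: True for c in CORE_COMPONENTS}
evidence["Human Oversight"] = False  # e.g., oversight mechanism still in design
print(outstanding_items(evidence))   # -> ['Human Oversight']
```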



Future Outlook and Technical Refinements

The EU Commission, under Article 73 of the Act, is empowered to adopt delegated acts to update the conformity assessment provisions and Annexes VI and VII in light of technical progress. This process will involve consultation with the proposed AI Office and relevant stakeholders.



Broader Implications and Compliance Requirements

The Act's provisions affect providers and deployers of AI systems used in or impacting the EU, irrespective of where they are established. This global reach means entities outside the EU must also comply if their AI systems' output is used within the EU. The Act categorizes AI systems by risk level, with high-risk systems requiring a fundamental rights impact assessment and transparency in decision-making processes that affect citizens' rights. Examples include chatbots and deepfakes (limited risk) and AI used in sensitive domains such as welfare, employment, education, and transport (high risk).


Penalties for Non-Compliance

Non-compliance with the Act's obligations, including Conformity Assessments, carries significant financial penalties, calculated as a percentage of the liable party's global annual revenue or as a fixed sum, with proportionate caps for SMEs and start-ups. This structure mirrors fines under the EU's General Data Protection Regulation (GDPR).
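
As a back-of-the-envelope illustration of the GDPR-style arithmetic this resembles, the sketch below computes a fine ceiling as the higher of a revenue percentage and a fixed sum, flipping to the lower of the two as a proportionate cap for SMEs. The rate and fixed sum are placeholders, not figures from the Act.

```python
# A back-of-the-envelope sketch of GDPR-style fine arithmetic: the greater
# of a revenue percentage and a fixed sum, with a lower-of-the-two cap
# assumed for SMEs. The rate and cap below are placeholders, NOT the
# Act's actual figures.
def fine_ceiling(global_annual_revenue_eur: float,
                 revenue_pct: float = 0.03,          # hypothetical rate
                 fixed_sum_eur: float = 15_000_000,  # hypothetical fixed sum
                 is_sme: bool = False) -> float:
    """Illustrative ceiling: higher of the two; lower of the two for SMEs."""
    pct_based = global_annual_revenue_eur * revenue_pct
    return min(pct_based, fixed_sum_eur) if is_sme else max(pct_based, fixed_sum_eur)

print(f"{fine_ceiling(2_000_000_000):,.0f}")            # large firm -> 60,000,000
print(f"{fine_ceiling(10_000_000, is_sme=True):,.0f}")  # SME       -> 300,000
```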
