
EU AI Act Enters into Force: What Businesses and Developers Need to Know

By Bart Layton, CEO - AI Guardian

Fact Checked by Robin Hackney


The countdown has officially begun! The European Union (EU) AI Act entered into force on August 1, 2024, and the clock is now ticking for parties with obligations set forth within the Act. In this post, I will explore how the Act impacts the two broadest groups it affects – businesses using AI systems (referred to in the AI Act as “Deployers”) and the developers that make those systems (“Providers”). In particular, I will focus on the next steps businesses and developers should take to begin their journey toward compliance with the AI Act. With the first provisions taking effect within 6 months, the time to start is now.



Executive Summary: Next Steps for Businesses and Developers to Comply with the EU AI Act (0-6 months)


Infographic: timeline of next steps for EU AI Act compliance

How long do I have to comply with the EU AI Act?

The first provisions of the AI Act apply in February 2025 (6 months from the publication of this post). While the prohibition of “Unacceptable Risk” AI uses consumes most of the media attention, it is largely irrelevant to most lawful businesses outside the military, defense, and law enforcement sectors. The AI literacy provision (Article 4), however, is broadly applicable, is also due in February, and warrants proper attention.


As we look further out, most of the remaining provisions apply 18 months later, in August 2026. Two years may seem like a long time at first, but veterans of the earliest GDPR implementations will confirm that those two years can fill up fast and will go by quickly.


What should I start doing now to comply with the EU AI Act?

Here are 5 key steps to begin now, so you can satisfy the AI literacy provision in February and determine which other EU AI Act requirements will apply to your business over the next two years:

1. Inventory your current/intended uses of AI – 2-6 weeks

2. Evaluate the risk inherent in your AI use cases – 1-2 weeks

3. Refresh your AI policy (or publish one if you haven’t already) – 1-3 weeks

4. Measure organizational AI literacy when you roll out that AI policy update – 2-4 weeks

5. Train and educate your teams in the capabilities and risks of AI – 4-10 weeks


Timing will vary by organization, but large enterprises should expect these steps to take nearly the full extent of their 6-month window. More detail below.


Does the EU AI Act apply to me if I’m not based in the EU?

Your company doesn’t have to be EU-based for the EU AI Act to apply. If your company uses AI to produce an output used in the EU, the Act most likely applies. Similarly, the Act applies to companies that offer an AI system to the EU market, regardless of the country in which that company is based.


Even if the EU AI Act doesn’t presently apply to you, and you have no intent of doing business in the EU or with EU businesses/data in the future, the 5 steps above are essential for any organization using or considering AI.


Does the EU AI Act apply to me if I’m not using AI (and have no plans to)?

Even those with no near-term plans to use AI would be wise to challenge the assumption that they won’t be using AI in the future or aren’t using it already. Many common tools include AI components, and more are being added every day. Moreover, a recent report from Microsoft and LinkedIn revealed that many employees and teams (nearly 60%) are using AI more than their employers realize [1]. And if we’ve learned anything from past regulations, what you don’t know can still hurt you. Don’t get caught off guard – take steps to uncover shadow AI within your organization.



The EU AI Act: An Overview


Definition and Purpose

The AI Act is a regulatory framework designed to ensure that AI systems impacting the EU meet high standards of safety and ethics. The core goal of the AI Act is to protect public safety and uphold fundamental rights while enabling and fostering innovation. The AI Act’s authors seek to accomplish this by establishing guidelines and outlining AI governance measures to ensure that AI technologies are transparent, accountable, and secure.


Historical Context

A regulatory surge is at hand, driven and shaped by the rapid advancement of AI – particularly Generative AI technologies like OpenAI’s ChatGPT and Google’s DeepMind suite – and the global recognition of, and fear of, the potential risks that come with it. At this point, over 60 countries around the world have announced some level of national AI policy, though only a small subset of those have developed substantive proposals that extend beyond political marketing. With the establishment of the EU AI Act, the European Union reprises its role as global vanguard of technology regulation (a few short years after establishing the GDPR). Meanwhile, the AI Advisory Body of the United Nations has expressed an intent to establish a global AI Office to “fill gaps and bring coherence to the fast-emerging ecosystem of international AI governance responses” [2], which we’ll be following closely. As AI regulatory proposals continue to come forth, the EU AI Act establishes a baseline against which others will be measured.



Key Provisions of the EU AI Act


Risk Categorization of AI Systems


Infographic: EU AI Act risk levels (source: European Commission)

We’ve covered this before, but one of the core elements of the EU AI Act is its risk-based approach to regulation. AI systems are categorized into four levels of risk: unacceptable risk, high risk, limited risk, and minimal risk (the last of which carries no new obligations under the Act).

  • High-Risk AI Systems: High-risk AI systems have useful applications but carry significant potential for harm to the health, safety, or fundamental rights of a person. As such, many systems that serve healthcare, transportation, law enforcement, and similar sectors fall under this category. Because of the risks involved, these systems must comply with rigorous data governance, transparency in AI, and human oversight requirements.

  • Limited Risk AI Systems: This category includes systems with less potential for harm. While still subject to some AI regulations, these systems face less stringent requirements compared to high-risk AI systems. For example, AI used in customer service chatbots may be classified as limited risk and would therefore be subject to transparency requirements. To be compliant, such a system must disclose to its users that it is an AI system and modify its outputs to include a machine-readable indication that the content is AI-generated (see the sketch following this list).

  • Unacceptable Risk AI Systems: This category is reserved for AI systems with the greatest potential for harm to the health, safety, or fundamental rights of people, such that they are viewed by the EU as essentially having no place in our society. Some of the cited uses are clearly and exclusively problematic – notably, where AI is used for subliminal manipulation, deception, or exploitation of a vulnerable person or group in a way that is likely to cause harm. Others within this category are more conditional. Facial recognition, emotional recognition, and biometric identification/categorization all have potential for beneficial application, but could cause great harm in certain contexts. It is the context of use that determines the classification. Companies producing or using these technologies should therefore evaluate the setting, subject, and intent of use against the definitions in the AI Act. Furthermore, they should consider the potential for misuse and build in safeguards to ensure that their AI system cannot be intentionally or unintentionally applied in any way that falls under the Unacceptable Risk classification. Many systems with the potential to fall under this category if used improperly will have intended uses that land within the High-Risk category.
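
To make the limited-risk transparency duty concrete, here is a minimal Python sketch of a chatbot response that carries both the user-facing disclosure and a machine-readable marker. The field names, disclosure text, and generator identifier are illustrative assumptions on my part, not wording prescribed by the Act.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ChatbotReply:
    """A limited-risk chatbot response carrying both transparency signals."""
    text: str
    # Human-readable disclosure shown to the user.
    disclosure: str = "You are interacting with an AI system."
    # Machine-readable indication that the content is AI-generated.
    metadata: dict = field(default_factory=lambda: {
        "ai_generated": True,
        "generator": "example-chatbot-v1",  # illustrative identifier
        "generated_at": datetime.now(timezone.utc).isoformat(),
    })

reply = ChatbotReply(text="Your order shipped yesterday.")
assert reply.metadata["ai_generated"]  # downstream systems can check this flag
```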


Obligations for Providers and Users

At a high level, providers/developers (those who make AI systems available to EU markets) must ensure that their products comply with the Act’s requirements in order to enter or remain on the market beyond certain dates (which vary based on the risk classification of the AI system or use). This involves conducting risk assessments, implementing data governance measures, and ensuring transparency in AI.


AI deployers/users (businesses and persons who use AI systems in a professional context) must operate the systems in compliance with the Act’s requirements, including maintaining transparency in AI and ensuring that AI systems are used in a manner consistent with their intended purpose. We’ll explore this more deeply in a future continuation of our EU AI Act Deep Dive series.



Implications for Businesses and Developers


Compliance Challenges

Navigating the EU AI Act presents several challenges for businesses and developers. The Act’s comprehensive requirements can be daunting: businesses must carefully assess their AI systems, ensure they meet the compliance standards, and document their practices accordingly. Ensuring compliance may require significant investment of time and resources – businesses will need to allocate funds for compliance audits, data management improvements, and staff training. The Act’s provisions may also require detailed interpretation and adaptation, particularly for emerging AI technologies, so businesses will need to stay informed about regulatory updates and seek expert advice when necessary.


Overcoming Common Compliance Challenges

Begin by assessing the current state of your AI systems in relation to the Act’s requirements. Identify areas that need improvement and develop an AI Compliance Roadmap. Your roadmap should outline the steps needed to achieve and maintain compliance, including timelines, dependencies, and responsible parties. Consult with legal and compliance experts to ensure that your interpretation of the Act’s provisions is accurate and that your compliance strategies are effective.



Next Steps to Ensure Compliance with the EU AI Act


Here I lay out 5 concrete actions any organization can and should complete over the next 6 months to comply with the EU AI Act. These steps also serve as an effective guide for any organization looking to implement Responsible AI Governance or comply with AI-relevant regulations. In future posts, I will expand on this with recommended steps for Deployers and Providers to comply with requirements for High-Risk and Limited-Risk AI.


For now, to align with the EU AI Act Literacy Requirement in February, and determine which Risk Classifications apply to you…


1. AI Inventory: First, you need to inventory how you are using and planning to use AI. This should include in-house AI development, use of third-party AI platforms, use of third-party tools with AI features, and use of General Purpose AI models (like OpenAI’s ChatGPT).

Note: take action to uncover “shadow AI” – AI use that is associated with your organization but not otherwise known to or approved by the leaders accountable for compliance and governance. Compliant use is still expected for (1) AI features within tools your organization uses, (2) use of AI by individuals within your organization, and (3) use of AI by contractors and third parties – even if you weren’t previously aware of that AI use. Remember – 58% of the workforce is using AI tools outside of their employer’s oversight/awareness.
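
For teams starting from scratch, a lightweight register is often enough at this stage. Below is a minimal Python sketch of one inventory entry; the fields and category labels are my own illustrative choices, not a format mandated by the Act.

```python
from dataclasses import dataclass

@dataclass
class AIUseRecord:
    """One row in an AI inventory register (fields are illustrative)."""
    name: str      # e.g., "Support chatbot"
    category: str  # "in-house" | "third-party platform" | "tool with AI features" | "general-purpose model"
    owner: str     # team or person accountable for this use
    purpose: str   # what the system is used for
    status: str    # "in use" | "planned" | "shadow (newly discovered)"

inventory = [
    AIUseRecord("Support chatbot", "third-party platform", "Customer Care",
                "answers routine order questions", "in use"),
    AIUseRecord("ChatGPT", "general-purpose model", "Marketing",
                "drafts campaign copy", "shadow (newly discovered)"),
]
```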


2. Risk Classification: Next, you should assess the risk level of your AI use. The EU AI Act classifies AI use into 4 Risk Levels. While it may be unlikely that any of your uses will fall under the Unacceptable Risk category, uses aligned to the High-Risk and Limited-Risk categories are not uncommon and come with important obligations.
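
As a rough illustration of how the inventory feeds into triage, the sketch below maps two screening questions onto the Act’s four levels. This is a first-pass sort only – actual classification requires checking each use against the Act’s definitions and annexes, ideally with qualified counsel.

```python
from enum import Enum

class RiskLevel(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Prohibited (unacceptable) uses should be screened out separately
# against the Act's list of banned practices before this triage runs.
def triage(affects_health_safety_or_rights: bool, interacts_with_people: bool) -> RiskLevel:
    """Illustrative first-pass triage; not a legal determination."""
    if affects_health_safety_or_rights:
        return RiskLevel.HIGH     # flag for full assessment against the Act
    if interacts_with_people:
        return RiskLevel.LIMITED  # e.g., chatbots: transparency duties apply
    return RiskLevel.MINIMAL

print(triage(affects_health_safety_or_rights=False, interacts_with_people=True))
# RiskLevel.LIMITED
```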


3. Policy Refresh: Now it’s time to refresh your AI policy (or publish one, if you haven’t already) so that it reflects what compliant AI use looks like for your organization. With the above understanding of your use of AI – including the associated Risk Levels and corresponding provisions – you can tailor your policy to support your business and compliance needs. A good AI policy fosters innovation within guardrails that keep you (and your customers) protected.


4. Policy Rollout + AI Literacy Baseline: But a policy is just words on a page until it is adopted. It is important that all employees and contractors understand, accept, and adopt your AI policy – regardless of whether they use AI in their work today. While rolling out your AI policy, take the opportunity to solicit further input from individuals, including: (1) whether they are using or expect to use AI in their roles, (2) whether they are aware of any AI use beyond the known inventory of AI uses recognized by leadership, and (3) what training, education, or experience they have that aligns with and supports their expected professional interactions with AI.
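
Capturing those three answers in a consistent shape makes the literacy baseline easy to aggregate later. A minimal sketch, with assumed field names:

```python
from dataclasses import dataclass

@dataclass
class BaselineResponse:
    """One person's answers to the three rollout questions (illustrative fields)."""
    respondent: str
    uses_or_expects_ai: bool       # (1) uses or expects to use AI in their role
    knows_unlisted_ai_use: bool    # (2) aware of AI use beyond the known inventory
    prior_ai_training: str         # (3) relevant training, education, or experience

responses = [
    BaselineResponse("j.doe", True, False, "vendor webinar on prompt basics"),
    BaselineResponse("a.lee", True, True, ""),
]

# Simple aggregates for the literacy baseline and follow-up:
needs_training = [r.respondent for r in responses
                  if r.uses_or_expects_ai and not r.prior_ai_training]
follow_up_on_shadow_ai = [r.respondent for r in responses if r.knows_unlisted_ai_use]
```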


5. AI Education: Expect that members of your team will have gaps in their education and experience as they relate to their potential use of AI – this is common. Remember – 46% of AI users are novices with only months of very limited prior experience. Plan for this in advance and align training resources accordingly. To meet the Feb 2nd, 2025 date established in the AI literacy provision of the EU AI Act, we recommend making training available to teams with AI exposure by mid-November of 2024. This allows a 60-day window for completion of the necessary coursework and just over two weeks for any follow-up and other administrative tasks.
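
Working backward from the deadline, the arithmetic checks out as follows (a quick sketch; the 17-day administrative buffer is my reading of “just over two weeks”):

```python
from datetime import date, timedelta

deadline = date(2025, 2, 2)          # AI literacy provision applies
admin_buffer = timedelta(days=17)    # "just over two weeks" for follow-up
coursework = timedelta(days=60)      # completion window for training

start_by = deadline - admin_buffer - coursework
print(start_by)  # 2024-11-17 -- make training available by mid-November
```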



Case Studies: Navigating the EU AI Act


Examples of Businesses Adapting to the Act

Companies like Siemens proactively engaged with leaders in responsible AI early in the Act’s lifecycle to better understand the regulatory changes and their implications. They have also explored parallels to prior EU directives that provide reference examples of how high-level requirements have translated into industry-specific guidance in the past. Their efforts and experiences provide valuable insights into best practices for compliance.


In terms of leading practices, the most effective strategies we’ve seen include proactive engagement with regulators, effective internal communications (consistent delivery of simple, digestible messages, distributed over time across multiple channels), and investment in relevant education for staff and contractors. For the communications piece, check out tools like Cyrano Video for easy recording and distribution of video content that gets 10x the engagement of traditional corporate communications and email newsletters (not a paid referral).


Challenges Faced by Early Adopters

Proactive compliance is not without its challenges. Early adopters can incur unnecessary costs attempting to interpret and align with complex requirements themselves without the right guidance. Companies can avoid substantial throw-away work and re-work by leveraging qualified experts and engaging with regulators early and often. Organizations newly aiming to comply with the EU AI Act should learn from early adopters by investing time in a compliance plan that defines (1) a triage process to identify areas of regulatory ambiguity, (2) a discovery approach to address that ambiguity, and (3) a roadmap of key milestones that includes when that ambiguity must be identified and clarified so that the corresponding compliance measures can be implemented in time.



Future of AI Regulation in the EU


Predicted Changes and Updates

What does the future hold for AI regulation in Europe and beyond? The EU AI Act is likely to evolve as AI technologies and their applications continue to advance. Expect future updates to address emerging risks and refine existing provisions. Businesses should stay informed about potential amendments and be prepared to adapt their compliance strategies accordingly.


Global Impact

The EU AI Act sets a precedent for AI regulation worldwide. Expect its influence to drive other regions to adopt similar frameworks and shape the global landscape of AI governance. International businesses should be prepared to navigate varying regulatory environments and ensure compliance across jurisdictions.



Conclusion: Embracing the EU AI Act


The EU AI Act represents a significant step towards comprehensive and balanced AI regulation. And with many nations seeking to address the swelling tide of AI concerns, it establishes a blueprint for others to leverage as they establish their own AI regulations. Expect copycat guidance to roll out across nations seeking to bolster their image as responsible technology leaders by borrowing from European Union AI laws. Complying with the EU AI Act is mandatory for businesses using AI systems in EU markets and the developers providing those systems, but when similar AI regulations inevitably arise throughout global markets, any business that has taken action to understand and align with the EU AI Act now will stand far ahead of its competitors.


Start your AI compliance journey today by conducting a thorough assessment of your AI systems and developing a clear AI Compliance Roadmap to meet the EU AI Act’s requirements.


For more information on how to comply with the EU AI Act and to schedule a consultation, please contact us at info@aiguardian.com.

References:

1. Microsoft and LinkedIn, 2024 Work Trend Index Annual Report.

2. United Nations AI Advisory Body, Governing AI for Humanity (interim report).


