20th August 2024
“Welcome to the second of our snapshot pieces on the landmark European Union AI Act. Billed as the world’s first comprehensive legal framework on AI, the Act entered into force on 1 August 2024 and most of its provisions will apply in 2 years’ time (although note the AI literacy requirement which applies from 2 February 2025). Crucially, UK businesses may be subject to the Act if they deploy AI in the EU.”
In our first article in this series we gave an overview of the key elements of the Act, summarising the headline points. In this article our Technology & Digital and Commercial experts Sally Mewies and James Crayton look at the fundamental question of who has to comply.
Will your business be caught? Read on to find out more.
As we explained in our first article, the EU AI Act focuses regulation of AI systems on 4 categories of risk: unacceptable risk (prohibited AI practices), high risk, limited risk and minimal risk.
Obligations are placed on various operators in the AI value chain: providers (developers) and their authorised representatives; deployers (users); importers; distributors; and product manufacturers. We discuss these below.
The nature and extent of the obligations imposed depends on the category of operator and the level of risk which applies to the AI system in question.
Most obligations fall on the providers, and then deployers, of high-risk systems. We’ll be delving into high-risk systems and their associated obligations in the next article in this series.
Regardless of the risk category, providers and deployers of all AI systems must take measures to ensure a sufficient level of AI literacy of their staff and others dealing with the operation and use of AI systems on their behalf (Article 4 of the Act). This obligation applies from 2 February 2025. In practice, this means implementing appropriate training programmes and policies and procedures to ensure compliance.
Note that both providers and deployers will be caught if they're located or established outside the EU but the output produced by the AI system is used in the EU.
The product manufacturer role is particularly relevant in relation to high-risk AI systems: where such systems are safety components of products covered by the EU product safety legislation listed in the Act, the product manufacturer will be considered the provider.
Note that any deployer, importer, distributor, or other third party will be considered a provider of a high-risk AI system if:
- they put their name or trademark on a high-risk AI system already placed on the market or put into service;
- they make a substantial modification to a high-risk AI system already placed on the market or put into service, in such a way that it remains high-risk; or
- they modify the intended purpose of an AI system (including a general-purpose AI system) which has not been classified as high-risk and is already on the market or in service, in such a way that it becomes high-risk.
Identify the role the business plays in relation to any AI systems and general-purpose AI models[1] so that you can start to assess any potential exposure to the Act. Does it fall within the list of operator types set out above? Assessing the business's use of or interaction with AI systems against the various risk categories in the Act will further help to isolate the areas you need to be looking at to comply.
If your business is caught by the Act, start thinking now about the steps you’re going to put in place to make sure your staff and others are sufficiently AI literate.
The Act is long and complex and we’ll be helping to break it down bit by bit over the next few months and beyond. Next time we’ll be focusing on what comprises a high-risk AI system and the obligations that follow from that. If you have any queries in the meantime, and/or need advice on complying with the AI literacy requirement, please get in touch with Sally or James.
Confused about what AI is and how to introduce it into your business? Click here to access our guide to demystifying AI.
[1] ‘General-purpose AI models’ include large generative AI models which are capable of carrying out a wide range of distinct tasks and may be integrated into a wide variety of downstream AI systems. They are specifically defined in the Act.