
EU AI Act: An introduction to high-risk AI systems

“Welcome to the fourth of our snapshot pieces on the landmark European Union AI Act. In this article we focus on high-risk AI systems, which attract strict obligations under the Act. Read on to find out if your business is affected, the steps to take now and how we can help.”

- Sally Mewies, Partner and Head of Technology & Digital

The story so far…

In our first article in this series we ran through the key elements of the Act, summarising the headline points including the definition of ‘AI system’. Our second article looked at the fundamental question of who has to comply, while the third article focused on the Act’s ‘AI literacy’ requirement.

High-risk AI systems: The key message

The burden on the whole supply chain for high-risk AI systems is considerable because they attract strict obligations under the Act. As we’ll come on to see, the most onerous obligations fall on the ‘providers’ (suppliers) of these systems.

The relevant provisions don’t take effect until 2 August 2026. But if you’re going to be caught, now is the time to get to grips with what’s expected of you. In-scope organisations need to start building a compliance programme which can be adapted as more clarity emerges around these obligations. We’re waiting on future guidance and harmonised standards to shed further light. Here are the key points to be aware of now.


What is a high-risk AI system?

High-risk AI systems fall into two categories:

  • AI systems that are products or safety components of products covered by existing legislation listed in Annex I of the Act and which are required to undergo third-party conformity assessments. This includes products such as medical devices.
  • AI systems listed in Annex III of the Act. They include certain use cases in biometrics, critical infrastructure, education and vocational training, employment, and the delivery of essential services and benefits such as evaluating creditworthiness.

An AI system listed in Annex III won’t be considered high-risk if it doesn’t pose a significant risk of harm to people’s health, safety or fundamental rights, including by not materially influencing the outcome of decision-making. What does this mean? Examples include AI systems intended to perform a narrow procedural task or improve the result of a previously completed human activity.

Note the exception won’t apply if the system is used to profile individuals. The UK’s data regulator, the Information Commissioner’s Office, describes ‘profiling’ under the UK GDPR as analysing aspects of an individual’s personality, behaviour, interests and habits to make predictions or decisions about them.

Due to the strict obligations associated with high-risk AI systems, it will be particularly important for organisations to be able to assess whether or not a system is in fact high-risk. We’re currently waiting for European Commission guidance that will set out practical examples. Watch this space.

Obligations associated with high-risk AI systems

All high-risk AI systems must be compliant with the following requirements:

Risk management system. A high-risk AI system must have a risk management system with the capability, as far as technically feasible, to identify, analyse, manage and mitigate the known and foreseeable risks associated with using the AI system. It also needs to be able to estimate and evaluate risks that may emerge over time, particularly those arising from improper use.

Data governance. A high-risk AI system involving training, validating and testing data needs a data governance and management policy. This is to make sure data is collected fairly in line with data protection laws and steps are taken to eliminate bias. Training data should be, to the best extent possible, free from errors and complete in view of the intended purpose. It’s possible to use special category data for training systems to eliminate bias, but you must follow the strict process in the Act.

Technical documentation. This must be drawn up before the system is put on the market. The requirement is extensive. The documentation must contain sufficient detail to show compliance. It must include information specified in the Act, such as a description of the system’s purpose, how it interacts with other software and hardware, instructions for the user, and how it’s been developed.

Record-keeping. A high-risk AI system has to have the capability to log events over its lifetime. The log should provide an element of traceability in the functioning of the system. The extent will vary depending on the system’s purpose.

Transparency. High-risk AI systems must be supplied with instructions for use. These need to be clear enough for users to understand and use the system appropriately.

Human oversight. High-risk AI systems must be subject to human oversight through tools that allow a human to monitor performance and prevent or minimise risks to health, safety or fundamental rights. The Act contemplates this either being a mechanism built into the system or one the user can deploy.

Accuracy and robustness. High-risk AI systems must be designed and developed so they achieve an appropriate level of accuracy, robustness and cybersecurity. The European Commission will produce guidance on this. The key aspects here are protection against bias where a system continues to learn, protection against third-party attacks, and operational resilience.

The supply chain

Crucially, on top of the requirements set out above, the various operators in the supply chain for high-risk AI systems – providers (suppliers), deployers (users/customers), importers and distributors – have their own obligations, with the most onerous falling on the providers.

In addition to making sure the system is compliant with the above requirements, the provider’s obligations – which are extensive – include putting in place a wide-ranging quality management system, retaining specified documents, taking immediate corrective action when needed, post-market monitoring and serious incident reporting.

The quality management system is, in a sense, overarching. It needs to be documented in written policies, procedures and instructions and includes, among others: a strategy for regulatory compliance; the risk management system referred to above; systems and/or procedures for data management, quality control and testing, record-keeping, post-market monitoring and serious incident reporting; and an accountability framework setting out the responsibilities of management and other staff.

Additional deployer obligations include following provider instructions and ensuring human oversight of the system by an individual with the necessary skills, training and authority within the organisation.

Note that the product manufacturer will be considered the provider where the system is a safety component of a product covered by EU product safety legislation listed in the Act.

Any deployer, importer, distributor, or other third party will be considered a provider of a high-risk AI system if:

  • in respect of such a system already placed on the market or put into service, they put their name or trademark on it or substantially modify it; or
  • in respect of a non-high-risk AI system (including a general-purpose one) already placed on the market or put into service, they modify its intended purpose to make it high-risk.

High-risk AI systems: How we can help

The EU AI Act is a complex piece of legislation. Our Technology & Digital experts are here to help break down these complexities and provide you with practical, commercial advice to help you meet your compliance obligations. This includes advising on appropriate contractual wording across the supply chain. Please get in touch with Sally Mewies or any member of our Technology & Digital team.
