10th February 2025
“Digital and technology is one of 8 growth-driving sectors central to the government’s Modern Industrial Strategy. So far this year we’ve already seen the Prime Minister announce the government’s hugely ambitious plan to ‘unleash’ AI across the UK, a key feature of this growth agenda. There’s no doubt that AI will continue to be a major focus for businesses and the public sector during 2025. Join us as we take a look at what’s in store in the world of tech and digital from a legal and regulatory perspective.”
We publish a regular round-up of legal and non-legal tech-related news stories. To receive this and other similar updates direct to your inbox, click here to register.
If you have queries about any of the topics discussed below, or need further advice or assistance, please get in touch with Sally Mewies, Nick Stubbs, or one of our Technology & Digital experts.
AI is set to dominate the legal and regulatory agenda as the pace of development increases and the first compliance requirements kick in. With AI use touching on many areas including data protection and intellectual property rights, there are numerous challenges for businesses to navigate as they innovate, especially those businesses operating in multiple jurisdictions. As new tools come online and businesses realise their digital transformation projects, implementing effective AI governance frameworks will be key.
2025 is shaping up to be busier than ever, with a host of new laws and guidance expected across AI, data, cyber, and more. Collaboration between regulators and other bodies will intensify, both nationally and internationally. Despite the divergent approaches to regulation that we’re seeing globally, this collective effort will be essential as businesses look for some harmonisation of standards to ease their compliance challenges.
We’ll be looking at:
The Prime Minister has agreed to take forward all 50 recommendations in the AI Opportunities Action Plan to ramp up AI adoption across the country and make it a global hub for AI investment and innovation. There are 3 main pillars to the plan:
AI safety is still an important feature through the work of the UK’s AI Safety Institute, but the focus is very much on growth.
There’s a lingering question mark over how the government will balance its net zero goals with providing for AI’s massive energy consumption. A dedicated AI Energy Council will be established.
The government’s full digital and technology sector plan is expected to be published in spring.
The UK is sticking with its plan to empower existing regulators to manage the risks associated with AI systems in the sectors and areas they regulate, rather than implementing wide-ranging AI-specific legislation similar to the EU AI Act (discussed below). There are, however, plans to introduce targeted legislation around the development of the most powerful AI systems (‘frontier AI’). We should hear more about what this will look like in the coming months.
We expect to see more practical guidance coming from the various regulators over the course of 2025, although there are no specific deadlines by which this needs to be done and some regulators appear to be more advanced than others. Work will continue to deepen the regulators’ understanding of AI in their sectors, with the use of sandboxes and other initiatives.
The use of AI in the workplace and in recruitment is a hot topic that isn’t going away.
Compliance with data protection law is a key consideration. Last year we saw the Information Commissioner’s Office issue a series of recommendations to AI developers and providers on the use of AI tools in recruitment and share key questions organisations should ask when procuring AI tools to help with their employee recruitment. ICO guidance on recruitment and selection is expected this autumn.
A government consultation on workplace surveillance technologies is also expected this year.
In April 2024 the TUC published a draft Artificial Intelligence (Regulation and Employment Rights) Bill to regulate the use of AI systems by employers in relation to workers, employees and jobseekers to protect their rights and interests in the workplace. We await further information on the progress (if any) of this Bill in 2025.
Using the Algorithmic Transparency Recording Standard is now mandatory for all government departments and the intention is to roll out mandatory use to the wider public sector.
In a related development, the Public Authority Algorithmic and Automated Decision-Making Systems Bill has just had its third reading. The Bill aims to regulate the use of algorithmic and automated decision-making systems across the public sector, making their use more transparent and fair, and mitigating risks such as bias and discrimination.
It’s rare for a private member’s bill like this to become law, but it does keep the debate on this topic alive and put some pressure on the government as it progresses with its plans. The government’s position appears to be that the ATRS combined with proposals on automated decision-making in the Data (Use and Access) Bill (discussed below) are sufficient.
The picture in the EU is quite different. The EU AI Act came into force in August 2024 and the first obligations started to kick in on 2 February 2025. This includes a requirement for the providers and deployers of AI systems to take measures to ensure that staff and others dealing with the operation and use of AI systems on their behalf are ‘AI literate’. This will involve putting in place appropriate policies, procedures and training programmes, and reviewing relevant contractual arrangements.
UK businesses will be caught by the Act if the output produced by an AI system is intended for use in the EU, or if they put an AI system into service in the EU or place an AI system or a general-purpose AI model (such as a large generative AI model) on the EU market.
“The EU AI Act focuses regulation of AI systems on 4 categories of risk. Extensive obligations apply from 2 August 2026 in relation to those systems classified as ‘high-risk’. We expect to see businesses using 2025 to assess whether they’re in scope and which risk categories their AI systems fall into. Crucially, we’re waiting for a raft of practical guidance, codes of practice, templates and standards from the European Commission and its AI Office to assist with interpretation and implementation of the Act.”
We recently ran a webinar on the EU AI Act and the impact of AI generally to help you navigate the regulatory landscape and prepare your business. The webinar will be available to view on demand shortly.
After stalling pending adoption of the EU AI Act, an AI Liability Directive could now be back on the cards in 2025.
On that topic, the Master of the Rolls said in a recent speech that “one of the biggest fields of legal activity in years to come is likely to be the claims that will be brought in respect of the negligent or inappropriate use of AI, and also the negligent or inappropriate failure to use AI”. The UK Jurisdiction Taskforce is due to publish a legal statement on issues of AI liability. Watch this space.
Despite the divergent approaches to regulation, there’s no doubt that international collaboration in relation to AI will continue in earnest in 2025. This joint statement on the adoption of AI principles in the telecoms industry is a recent example.
An AI Action Summit is currently being held in France, building on the momentum from the inaugural AI Safety Summit held in the UK in November 2023 and the AI Seoul Summit in 2024.
We’ll continue to see initiatives from bodies including the UN, G7 and the Organisation for Economic Co-operation and Development. With the new president of the United States already tearing up the previous administration’s executive order on AI and issuing his own, it remains to be seen what role the US will play.
Data reform is back on the agenda in 2025 after a Data (Use and Access) Bill was introduced to Parliament. With a focus on wider data reform, such as the introduction of smart data schemes, this proposed legislation doesn’t represent a major overhaul of the current data protection regime.
Proposed changes to data protection law include a new lawful basis for processing personal data where processing is necessary for the purposes of a ‘recognised legitimate interest’, and expanding the lawful basis for solely automated decision-making. Proposed changes to the Privacy and Electronic Communications Regulations 2003 include increasing fines for breaches to align with the UK GDPR.
The Bill isn’t expected to affect the UK’s data adequacy status, which is due to be reviewed by the European Commission in June 2025.
“2024 was a busy year for the ICO and that’s set to continue in 2025 as the regulator furthers its work in the focus areas of AI, online tracking and children’s privacy. We’re also expecting new and updated guidance on a range of topics including the use of storage and access technologies such as cookies, the Internet of Things, cloud computing, encryption, anonymisation and pseudonymisation, profiling and behaviour ID tools for online safety, and handling cyber incidents.”
In his recent response to the government on economic growth, the Information Commissioner recognises that regulatory uncertainty risks being a barrier to businesses investing in and adopting transformational tech.
He says that the ICO will produce a single set of rules for those developing or using AI products, to make it easier for them to innovate and invest responsibly while safeguarding people’s information rights; and that the regulator would support the government in legislating for such rules to become a statutory Code of Practice on AI, to provide further regulatory certainty to businesses wanting to invest in AI in the UK. Watch this space.
The ICO ran a consultation series throughout 2024 on data protection in generative AI and published its outcomes report in December. Its core AI and data protection guidance will be updated in 2025 following the changes to data protection law once the Data (Use and Access) Bill is passed. In the meantime, the ICO has published this blog post debunking data protection myths about AI.
The ICO recently announced its online tracking strategy, with online advertising its focus in 2025. The regulator says that one of its main aims for the year is bringing the top 1,000 UK websites into compliance. It has published guidance for organisations implementing or considering implementing ‘consent or pay’ models and is currently consulting on draft updated guidance on storage and access technologies (formerly its ‘cookies guidance’).
Last but not least, the ICO’s final approach to public sector regulation is due to be published in 2025 following a recent consultation.
Early in 2024 the ICO published the second edition of its Tech horizons report, highlighting a further 8 technologies it believes may have a particularly significant impact on our societies, economies and information rights in the next 2 to 7 years: genomics; the metaverse; neurotech; quantum computing; the commercial use of drones (see our recent article for practical advice on this topic); personalised AI; next-generation search; and central bank digital currencies (such as the digital pound discussed below).
We’ve already seen ICO reports on preparing for the quantum-enabled future (next steps include an update to the ICO’s encryption guidance in line with the transition to post-quantum cryptography) and genomics. We expect to see further reports on the other emerging tech during 2025.
Over in Europe, the Data Act will apply from September 2025. UK businesses could fall within scope if they use, collect or manage data in the EU. Among other things, manufacturers will have to design their products in a way that allows both business and consumer users to take full advantage of the data created while using connected devices.
In relation to international data transfers, the European Commission is due to consult on standard contractual clauses for the transfer of data to third country controllers and processors who are directly subject to the GDPR. And it’s a case of wait and see whether the change of president in the US affects the continued operation of the EU-US Data Privacy Framework and its UK Extension.
As the latest annual review from the UK’s National Cyber Security Centre highlights, cyber underpins every aspect of everyday life and the cyber threat grows more complex each year.
“Here in the UK, 2025 will see publication of a new Cyber Security and Resilience Bill with a long-awaited update to the EU-legacy Network and Information Systems Regulations 2018. The proposed changes include expanding the remit of the existing legislation to protect more digital services and supply chains, and mandating increased incident reporting to give government better data on cyberattacks.”
The government is currently consulting until 8 April 2025 on proposals to introduce legislation to counter ransomware, considered the greatest of all serious and organised cybercrime threats. The proposals include a ransomware incident reporting regime, with the government exploring whether this should be economy-wide or only affect organisations and individuals meeting a certain threshold.
The intention is that UK victims will only be required to report an individual ransomware incident once. The relevant government departments will coordinate so that these reporting regime proposals and those in the new Bill don’t create duplication and confusion.
The NCSC’s annual review showed how AI is transforming the cyber threat. We can expect to see more and more guidance coming from the NCSC and its international partners on this and other topics, including this recent blog post on preserving integrity in the age of generative AI.
In a significant development, the government has just responded to a 2024 call for views on AI cybersecurity and published a final voluntary code of practice and accompanying implementation guide. The code sets out baseline cybersecurity principles to help secure AI systems and the organisations which develop and deploy them. It will be used as the basis for a new global standard.
The government also responded to a call for views on a code of practice for cyber governance. An updated code will be published early this year.
We’re still waiting for the outcome of a call for views on a draft code of practice for software vendors to improve the resilience and security of software.
Last year we saw UK data centres given ‘Critical National Infrastructure’ status, opening up greater support in preventing and recovering from critical incidents such as cyberattacks; and new rules for critical third parties to strengthen the operational resilience of the UK’s financial sector kicked in on 1 January 2025. They’re closely aligned with other similar regimes internationally such as the EU’s Digital Operational Resilience Act, which became fully applicable from 17 January 2025 and whose impact isn’t limited to EU-based businesses.
Meanwhile, in-scope regulated firms in the financial sector have until 31 March 2025 to comply with certain steps under their own operational resilience requirements; and the Financial Conduct Authority is currently consulting until 13 March 2025 on separate proposals for firms to report operational incidents and their material third party arrangements.
A new Product Regulation and Metrology Bill was introduced in September 2024, a couple of months before the government published its long-awaited response to the 2023 UK Product Safety Review. The Bill is currently making its way through the legislative process and is one to watch in 2025.
The Bill is very short on detail because it gives the Secretary of State wide powers to make future regulations on the marketing and use of products, including marketing through online marketplaces. So we won’t have any specific details until those regulations are published.
“The background briefing notes to the King’s Speech say that the Product Regulation and Metrology Bill will respond to new product risks and opportunities to enable the UK to keep pace with technological advances, such as AI. We’ll need to wait and see what emerges on that front once the Bill becomes law.”
Over in Europe, the Council of the EU adopted the Cyber Resilience Act to boost the security of digital products – the equivalent of the UK’s product security regime which came into effect on 29 April 2024. The bulk of the provisions will apply from 11 December 2027. UK businesses selling such products on the EU market will need to prepare for compliance.
And the EU’s longstanding product liability rules were also updated in line with the digital age and circular economy. Member states will need to transpose the new rules into national law by 9 December 2026. A ‘product’ specifically includes software, which includes AI systems. Software developers or producers, including AI system providers under the EU AI Act, will be treated as manufacturers. Products can be standalone or integrated/interconnected with other products as a component. UK businesses could be caught if they place in-scope products on the EU market.
It’s unlikely we’ll see any immediate changes to the UK’s own product liability rules.
“The financial services sector is another of the 8 growth-driving sectors identified in the government’s Modern Industrial Strategy. A call for evidence was published in November 2024 to inform the development of the financial services sector plan expected to be published in spring 2025. Innovation and tech is one of 5 core policy pillars seen as central to the sustainable growth of the sector – ‘enabling and supporting increased digital adoption, including technologies such as AI, which have the potential to increase productivity and open up new products and services’. Fintech is named as one of 5 priority growth opportunities.”
Towards the end of last year there was a flurry of activity around the use of AI in financial services. We expect this to continue in 2025 as the financial regulators deepen their understanding and continue to actively engage with stakeholders.
The FCA launched a new AI Lab and began seeking views about the current and future uses of AI in UK financial services, as well as the financial services regulatory framework. An AI Spotlight (which we attended) and AI Sprint held at the end of January 2025 will help inform the FCA’s regulatory approach to AI and how it can create the right environment for growth and innovation.
We also saw the FCA and Bank of England publish the results of a survey of AI and machine learning in UK financial services, providing insights for the benefit of regulated firms and to further the regulators’ understanding of AI in the sector. According to the survey, 75% of firms are already using AI, with a further 10% planning to use it over the next 3 years. An AI Consortium is being established by the Bank to provide a platform for public-private engagement.
Recent FCA research found that around a third of people thought they could complain to the FCA if something went wrong with their crypto investment. Crypto is high-risk and largely unregulated in the UK, but that’s about to change.
Plans for crypto regulation will progress in 2025, after the government confirmed in November 2024 that it will implement HM Treasury’s detailed policy proposals in full (although it won’t be bringing stablecoin into UK payments regulation at this time). The FCA has already published various discussion papers and this crypto roadmap shows the consultations and other activities to come during 2025 and beyond. The regime will go live in 2026.
HM Treasury published the National Payments Vision in November 2024, with a focus on innovation and the technologies of the future. This includes a push to deliver seamless account-to-account payments through open banking and to enable the use of safe and trustworthy digital identity products. It’s expected that smart data powers and other measures in the Data (Use and Access) Bill will help to progress these initiatives. The FCA and Payment Systems Regulator have just recently set out the next steps for open banking.
We’ve also just seen the Bank of England and HM Treasury publish a progress update on the possibility of a digital pound. After completing the design phase over the next couple of years, the Bank and government will assess the policy case for a digital pound and decide whether to proceed.
Online providers are now legally required to protect their users from illegal harm, including fraud, under the Online Safety Act. Ofcom published its illegal harms statement in December 2024. Providers must complete their risk assessments by 16 March 2025, after which they’ll need to take measures to protect their users.
There are separate rules relating to paid-for fraudulent advertising and a draft code of practice is expected from Ofcom during 2025.
Despite progress being made, the government says that overall fraud rates are still too high and the scale of fraud originating on online platforms and telecoms networks continues to pose a significant risk. It has called on the tech and telecoms sectors to reduce the scale of incidents and losses from fraud taking place on their platforms and networks and expects updates on progress and action taken at the next Joint Fraud Taskforce in March 2025.
And finally, the FCA and PSR are due to publish an update in Q1 2025 following their summer 2024 consultation on big tech and digital wallets.
“The new regulatory regime for digital markets under the Digital Markets, Competition and Consumers Act came into force on 1 January 2025. It allows the Competition and Markets Authority to impose additional conduct requirements on firms with ‘strategic market status’ and to make ‘pro-competitive interventions’ (including divestment of businesses) where it finds that a factor or combination of factors relating to a relevant digital activity is having an adverse effect on competition.”
The CMA has already launched its first SMS designation investigation under the new regime, in relation to Google’s position in search and search advertising services. This was swiftly followed by investigations into Apple and Google in relation to their mobile ecosystems, including operating systems, app stores and mobile browsers.
The CMA’s independent inquiry group has provisionally recommended that the CMA use its powers under the Act to consider whether to designate the 2 largest providers, Amazon Web Services and Microsoft, as having SMS in relation to their respective digital activities in cloud services.
Designated firms will also have to comply with certain merger control changes which also came into force on 1 January 2025.
April 2025 will see changes to the enforcement of consumer protection law and updated rules on unfair commercial practices, including a ban on fake reviews and steps to tackle ‘drip pricing’. The Act significantly expands the CMA’s powers to enforce consumer protection laws, allowing it to impose fines of up to 10% of worldwide group turnover on businesses breaking the rules.
Reforms to the regulation of subscription contracts aren’t expected before spring 2026 at the earliest.
Whatever your technology needs are, we’ve got the expertise to help you. Our multidisciplinary Technology & Digital team offers the full range of services, from dealing with contract drafting and competition issues to regulatory compliance and dispute resolution.
If you have queries about any of the topics discussed above, or need further advice or assistance, please get in touch with Sally, Nick or one of our Technology & Digital experts.
Confused about what AI is and how to introduce it into your business? Click here to access our guide to demystifying AI.
“One major sticking point for unlocking AI is the ongoing dispute over how copyright law applies to the training of AI models. The government says that the status quo cannot continue and is currently consulting on proposals to clarify the law for the creative industries and AI developers. It’s hoped that this will provide some much-needed certainty for the users and suppliers of AI systems. Litigation in the UK and other jurisdictions is also expected to provide important guidance on AI and IP rights during 2025.”
Matthew Lingard, Director, Intellectual Property, Trade Marks & Designs