2nd November 2023
“With AI increasingly dominating the global agenda, the AI Safety Summit being hosted by the UK is a helpful step in promoting understanding of the potential risks of the technology, while creating the right conditions to unleash its potential to deliver great benefits across society. The bringing together of minds from across the globe is no mean feat, and the impetus for international collaboration on this hugely important topic is encouraging. With some divergences of approach at a national level, it remains to be seen how effective these efforts will be in producing a global set of principles for managing risk in AI. We’ll continue to monitor developments with great interest”.
– Sally Mewies, Partner, Technology & Digital
Over the past 2 days the UK has been playing host to representatives from international governments, leading AI companies, civil society groups and research experts, at the first and much-anticipated global AI Safety Summit. Walker Morris Partner Sally Mewies and Director Luke Jackson, from our multidisciplinary Technology & Digital group, take a look at what we learned and what comes next.
First, a bit of background. Particularly over the past year or so, AI has been increasingly dominating the news, with generative AI systems such as ChatGPT bringing the technology into mainstream consciousness. Governments, regulators and other bodies have been grappling for some time with how best to harness the many potential benefits of AI, while putting sufficient guardrails in place to protect against the potential risks (ranging from discrimination and privacy issues, replacement of jobs and exploitation by bad actors, to the extinction of humankind).
We’re already seeing differing approaches to regulation. In the EU, for example, a new AI Act is in the advanced stages. Here in the UK, the government has no plans to introduce specific legislation or a single regulator for AI, preferring instead to empower existing regulators to come up with tailored, context-specific approaches that suit the way AI is being used in their sectors [1].
With concerns rising, including key figures signing an open letter saying that the race to develop AI systems is out of control and calling for a pause, the UK announced that it would host the first global AI Safety Summit to consider the risks of AI, especially at the frontier of development (so-called ‘frontier AI’), and discuss how they can be mitigated through internationally coordinated action.
In the run-up to the Summit, the government published a discussion paper on the capabilities and risks of frontier AI.
Day 1 kicked off with the Bletchley Declaration, with 28 countries from all corners of the globe agreeing to the safe and responsible development of frontier AI. A variety of roundtable discussions were held on both understanding frontier AI risks and improving frontier AI safety. The summaries of the discussions have now been published. These are the key messages we picked out:
In another major development, both the UK and the US announced the establishment of separate AI safety institutes. In a speech ahead of the Summit, the UK Prime Minister announced an institute whose work will be “available to the world”, while the US Commerce Secretary chose Day 1 of the Summit to announce the US’ own version – an indication already of the national approaches we will see emerging underneath this concerted global effort.
The discussions have continued today, with the PM wrapping up the Summit with a press conference and an evening live stream conversation with Elon Musk on X, formerly Twitter.
Developments have been moving at pace, with a flurry of initiatives announced in particular in the run-up to the AI Safety Summit.
On 30 October, the G7 leaders issued a statement on the “Hiroshima AI Process”, with publication of international guiding principles and an international code of conduct for organisations developing advanced AI systems.
The UN Secretary-General launched a high-level advisory body on the risks, opportunities and international governance of AI; while President Biden issued an executive order on safe, secure and trustworthy AI, and the US Vice President announced an array of new initiatives to advance the safe and responsible use of AI.
Here in the UK, we’re: uniting with global partners to accelerate development in the world’s poorest countries using AI; boosting investment in British AI supercomputing; making the country “AI match-fit” with a £118 million skills package; and accelerating the use of AI in life sciences and healthcare with a £100 million investment.
We’ve also seen leading frontier AI companies including DeepMind outline their safety policies following a request from government.
It’s been made very clear that this first AI Safety Summit is just the start of the discussions, and it’s essential that the momentum isn’t lost going forward. The Republic of Korea will co-host a mini virtual summit on AI in the next 6 months, with France agreeing to host the next in-person AI Safety Summit in a year’s time.
The government is expected to publish the widely awaited response to its AI regulation white paper later this year. We’ll be continuing to monitor developments. You can keep up to date by signing up to our regular Technology & Digital round-up here.
*Don’t miss our webinar with Lexology on 16 November on “Unlocking and controlling AI” – you can sign up here.*
Whatever your technology needs are, we’ve got the expertise to help you. Our multidisciplinary Technology & Digital group offers the full range of services, from dealing with contract drafting and competition issues to regulatory compliance and dispute resolution.
Click here to download our recent GC report on digital adoption and the transformative power of in-house legal teams. This content series tackles some of the crucial tech and legal issues our clients encounter in relation to the development, implementation and operation of technically innovative services and products.
We’re here to help, so please get in touch if you need any advice or assistance.
[1] See our earlier briefing