On 13 January 2025, the UK’s prime minister, Sir Keir Starmer, unveiled the UK’s AI Opportunities Action Plan (the Plan), with the aim of positioning the UK as a global leader in artificial intelligence. The Plan highlights the government’s pro-innovation approach to AI regulation and outlines a number of measures the government will take to meet its goals.
The Plan outlines 50 recommendations, split across three sections: (i) investing in the foundations of AI; (ii) driving cross-economy AI adoption; and (iii) fostering homegrown AI. The government has maintained a light-touch approach to regulation in the Plan. This is unsurprising, given the government’s desire to attract technology companies to invest in the UK. The Plan further indicates that the UK will not pursue wholesale statutory regulation of AI technologies.
What is the UK’s approach to AI regulation?
The UK has historically taken a light-touch approach to AI regulation, relying on a patchwork of regulatory principles, sector-specific best practices, and pre-existing legislation (such as the Consumer Rights Act 2015) to govern the commercial use of AI. Some notable recent examples include:
- The ICO’s advice on federated learning: On 28 November 2024, the Information Commissioner’s Office (ICO) provided guidance on data protection considerations when organisations use federated learning (i.e., the machine learning approach in which models are trained across decentralised servers holding local data, so that raw data never leaves each server; see the illustrative sketch following this list). This technique can offer a more data-secure way of training AI models. The ICO recommends conducting a data protection impact assessment (DPIA) to analyse the risks of federated learning more closely, combining federated learning with other privacy-enhancing technologies, and conducting a motivated intruder test.
- Ofcom’s statement on the scope of the Online Safety Act in relation to AI: On 8 November 2024, Ofcom released a statement explaining that the Online Safety Act applies to products and tools incorporating generative AI, such as chatbots that allow users to share generated text or images with other users, sites that allow the sharing of generated content such as deepfakes, and sites that include chatbots capable of generating mature content. Such organisations must comply with various duties, including undertaking risk assessments, implementing appropriate measures based on those assessments, and providing ways for users to report content that is illegal or harmful to children.
- The ICO’s guidance on the use of AI in recruitment: On 6 November 2024, the ICO published its findings and recommendations following an audit into whether AI tools used for recruitment were processing personal data fairly. The ICO found that some AI tools were not processing data fairly and were filtering applicants based on protected characteristics such as gender and ethnicity. The ICO also found that the AI tools were collecting far more data than was necessary and had therefore failed to comply with the principle of data minimisation. The ICO made various recommendations, such as clearly informing candidates how their personal data will be used, conducting DPIAs, ensuring that a lawful basis has been identified for the processing of applicant personal data, and giving the AI provider clear instructions on how applicant personal data should be processed.
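By way of technical background to the ICO’s federated learning guidance, the sketch below illustrates the core idea of the technique (a simplified “federated averaging” scheme) in Python. It is a minimal, hypothetical example: the linear model, client datasets, and function names are our own illustrative assumptions and are not drawn from the ICO guidance.

```python
# Minimal sketch of federated averaging (FedAvg): each client trains
# locally on its own data; only model weights, never raw personal
# data, are sent to the coordinating server for aggregation.
# All names and data here are hypothetical, for illustration only.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training step (simple linear regression)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
        w -= lr * grad
    return w

def federated_round(global_weights, clients):
    """Server round: collect locally trained weights and average them.

    The server never sees X or y -- only the updated weights."""
    updates = [local_update(global_weights, X, y) for X, y in clients]
    return np.mean(updates, axis=0)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
# Three clients, each holding its own local dataset.
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(20):
    w = federated_round(w, clients)
print(w)  # approaches true_w without any client sharing raw data
```

The point on which the ICO’s guidance turns is visible in the structure: the server aggregates only model weights, while each client’s underlying data remains on that client’s own infrastructure.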
The Plan therefore does not represent a shift in the UK’s position on AI regulation, but rather evidences a significant investment by the UK government in the development and integration of AI in the private and public sectors.
How does the UK’s approach compare to the approaches of the EU and the United States?
The UK’s approach to governing AI has been shaped by Brexit and a general inclination towards a more flexible regulatory framework, in contrast to the EU’s more prescriptive model. The UK has sought to balance the need for oversight against the desire to maintain a competitive edge in AI development. This approach was evident in the previous UK government’s 2023 white paper on AI regulation, which proposed an adaptable and non-prescriptive regulatory framework based on guiding principles rather than detailed rules.
The EU, on the other hand, has adopted a more structured and precautionary approach. The EU AI Act, which entered into force in August 2024 and will apply in full from August 2026 (with some provisions taking effect as early as February 2025), introduces a number of obligations on the providers and users of AI systems, graduated according to the risks associated with each system. AI systems posing an unacceptable risk – such as those which conduct social scoring or certain forms of biometric identification – will face an outright ban in the EU. Lower-risk systems, including general purpose AI tools such as ChatGPT, will be permitted but must meet stringent transparency and resilience requirements. Non-compliance with the EU AI Act could result in significant penalties, with fines of up to €35 million or 7% of worldwide annual turnover for the most serious infringements.
The United States has favoured a hands-off approach, with innovation-driven policies and fragmented regulation at the state level. Unlike the EU, the United States does not have a comprehensive federal AI law. At the federal level, various sector-specific guidelines exist, such as the Food and Drug Administration’s regulations for medical AI and general principles issued by the National Institute of Standards and Technology. The U.S. government has also introduced a Blueprint for an AI Bill of Rights to guide the development of AI in the United States, focusing on promoting transparency in AI systems; unlike the EU’s rules, however, the document is not legally binding. This leaves many states implementing their own AI laws, creating a patchwork of regulations across the country.
For instance, the Algorithmic Accountability Act, proposed in 2023, seeks to enforce transparency and accountability for companies using AI in critical decision-making, though as of this writing it has not yet passed. Some states have taken a more proactive approach, with laws such as the California Privacy Rights Act addressing data privacy and Tennessee’s ELVIS Act protecting individuals’ rights over their voice and image, both of which indirectly affect AI.
U.S. federal discussions on AI regulation continue, focusing on issues like bias, safety, and ethical considerations. These efforts suggest an evolving regulatory landscape, with the potential for dedicated AI legislation in the near future.
The differing approaches across the EU, the UK, and the United States will have a significant impact on businesses that use AI systems. With each jurisdiction progressing on its own regulatory timeline, businesses are encouraged to adopt a proactive approach to compliance and begin mapping their obligations in each jurisdiction. Early revisions to company policies and strategies will help businesses mitigate potential liabilities and ensure timely alignment with emerging legal frameworks.
Client Alert 2025-022