Overview and background

The UK government has published its statutory report and economic impact assessment on copyright and AI (together, the Report), meeting its 18 March 2026 deadline under sections 135 and 136 of the Data (Use and Access) Act 2025 (Data Act).

In December 2024, on behalf of the government, the Department for Science, Innovation and Technology, the Intellectual Property Office, and the Department for Culture, Media and Sport jointly launched a consultation proposing four options to resolve unsettled regulatory questions concerning the text and data mining (TDM) of copyright-protected works in developing AI training data sets:

  • Option 0: do nothing;
  • Option 1: strengthen copyright to require licensing in all cases;
  • Option 2: introduce a broad TDM exception without opt-out; and
  • Option 3: introduce a TDM exception with rightsholder opt-out and transparency measures. This was the government’s “preferred choice”. In January 2026, Secretaries of State Kendall and Nandy told the House of Lords that the government had been “wrong” to express a preference.

The response to the consultation was emphatic. Of the 11,520 submissions, 81% of respondents identified Option 1 (mandatory licensing) as their preferred approach. Only 3% supported the government’s “preferred” Option 3. The Report acknowledges that most responses were submitted by rightsholders and representatives of the creative industries, many based on template letters widely distributed by interested stakeholders during the consultation process. The House of Lords Communications and Digital Committee published its own paper shortly before the Report, recommending that the government rule out the opt-out model entirely and adopt a licensing-first approach instead. Nevertheless, the Report states that each response was read individually by officials.

So, what does the Report say?

No copyright exception, no preferred option

The Report’s central conclusion is that the proposed opt-out exception has been abandoned. However, no alternative has been endorsed in its place. Instead, the government proposes to gather further evidence on how copyright law affects AI development and deployment; consider alternative approaches; and monitor international legal, technological, and market developments.

The Report’s language is carefully hedged throughout, but some interesting inferences can be drawn from how certain issues have been phrased. For example, the impact assessment states that “the impact of existing copyright law in relation to Text and Data Mining (TDM) on the UK economy is therefore uncertain, with the current regime potentially limiting growth.” For a Labour government that has very publicly placed growth at the centre of its economic agenda, including in its expectations of regulators, this is a signal worth noting.

There are also implicit admissions about the current legal position. For example, the impact assessment states that “under the status quo, UK copyright law would continue to act as a significant constraint on competitive general-purpose model training in the UK” and that “permission would usually be needed to copy protected works at different stages of AI training and development that take place in the UK.” Taken together, these statements provide the clearest suggestion yet that the government considers it unlikely that unlicensed general-purpose AI training is permitted under current UK copyright law.

The government’s framing of the relationship between the AI sector and the creative industries is also worth noting. The impact assessment states that “the success of the AI sector and the [creative industries] are intertwined” and characterises the creative industries as generating “high-quality content that is needed to train the best AI models.” These statements position creative output, at least in part, as a necessary strategic input to AI development – a “means to an end”. The corollary is that the government is tying future growth of the creative industries to AI adoption, stating that AI has “the potential to transform creators’ workflows, amplifying their productivity and giving them powerful new tools.” For now, at least, the government’s own numbers reveal a rather stark asymmetry between the sectors. The creative industries contributed £146 billion Gross Value Added (GVA) in 2024 (approximately 6% of the UK economy), whereas the AI sector contributed £12 billion. Yet the impact assessment projects that the AI sector could reach £20–90 billion GVA by 2030, and that economy-wide AI adoption could add £55–140 billion in productivity gains. Clearly, the government sees the potential growth trajectory of the AI sector as a goal deserving of significant political weight – even if (as it acknowledges) the creative industries are currently twelve times the size of the AI sector.

“Focused exceptions” as alternatives are on the table

A substantive component of the Report is the discussion of alternative approaches (Section C), particularly on a theoretical “focused exception” that could be targeted at specific types of use. The government suggests an exception could be designed around several potential parameters, including: 

  • scientific and non-commercial research;
  • distinguishing between analytical/non-generative and generative uses;
  • specific public interest purposes (e.g., national security, health care); and
  • task-specific rather than general-purpose AI.

The Report notes that the main beneficiaries of a focused exception would be “UK-based SMEs, and individual developers and users in the UK”, rather than large firms that develop general-purpose foundation models. Read alongside the impact assessment’s separate finding that limited frontier foundation model training currently occurs in the UK, this could suggest that the government is considering the current strengths and capabilities of the UK’s AI ecosystem. A more focused exception may be unlikely to move the needle on attracting frontier model development compared with a more permissive alternative, but it could provide legal clarity for the parts of the value chain where the UK has genuine expertise, for example in fine-tuning, narrow model development, and applied research. A separate alternative – a broad exception with statutory remuneration (a compulsory licence/levy model similar to the approach being explored in India) – was noted, but attracted less respondent support.

Transparency

Over 90% of consultation respondents agreed that AI developers should disclose the sources of their training material. This was the single area of strongest consensus across both the creative and technology sectors, though they differed on the detail. Respondents from the creative industries favoured mandatory, granular disclosure of the individual works used. Conversely, technology firms generally supported high-level, voluntary disclosure.

On output transparency and labelling, the Report notes relative consensus in favour of labelling wholly AI-generated works, with a more nuanced approach for AI-assisted content. Again, the government proposes only to “work with industry to explore good practice”, rather than to legislate, at this time.

Technical tools and standards

A significant constraint on any effective copyright solution for model development is the limited effectiveness of the technical tools and standards currently available to support it. The Report provides a detailed overview of the current landscape for technical protection tools (Section F). It notes significant progress in market-led development, including the Robots Exclusion Protocol and its emerging extensions, Cloudflare’s pay-per-crawl system, content provenance tools such as C2PA, and adversarial tools such as Glaze and Nightshade. One estimate cited in the impact assessment suggests 79% of top news sites already deploy some form of AI-scraping protection.

However, the Report also identifies persistent gaps. Metadata can be stripped by third-party platforms (particularly social media). The Robots Exclusion Protocol operates at the site level, not the individual work level. Rightsholders struggle to distinguish between AI crawlers and search engine crawlers, creating a risk that blocking one may harm discoverability. The result is that there is no single standard that enables identification of a copyright work online, its owner, and the permissions needed to use it. The absence of government-mandated standards means that the effectiveness of any future opt-out or rights reservation mechanism, should one eventually be legislated, remains uncertain.

Licensing

The government also proposes no intervention in the training data market. It notes that the licensing market for AI training data is growing, with approximately 14 major deals reported in 2024 involving UK-based content providers, though these have predominantly been between large rightsholders and large U.S.-based AI developers. The impact assessment observes that it is “often not clear whether these include any means of remunerating individual creators who operate on a freelance basis, or smaller SME creative businesses.” The government will therefore monitor its Creative Content Exchange pilot and keep collective licensing under review.

Computer-generated works

The UK has, since 1988, provided copyright protection for “computer-generated works” – works generated by a computer “in circumstances such that there is no human author” – under section 9(3) of the Copyright, Designs and Patents Act 1988. The provision was introduced to address a prominent concern from the 1980s, namely that valuable works produced by computers may fall outside the remit of traditional copyright protection entirely if no human author could be identified. This is unusual internationally, and the UK has historically been heralded for having such future-facing legislation, particularly before the growth of the internet. However, in its nearly four decades on the statute books, the provision has been applied in only one reported case to a limited degree (Nova Productions Ltd v. Mazooma Games Ltd [2006] EWHC 24 (Ch)).

Accordingly, the government is now proposing to remove this protection. The rationale is that copyright should incentivise and protect human creativity, and a provision granting rights over works with no human author sits uneasily with that principle. The Report further found minimal evidence that the right is actively used or economically significant. The important boundary issue of a work that sits between “wholly AI-generated” and “AI-assisted” remains a critical legal question, but the Report does not attempt to define it.

Digital replicas

Notably, the Report’s consideration of digital replicas is more developed than the consultation’s initial level of discussion. It acknowledges that AI makes it easier to create realistic imitations of individuals’ voices and likenesses, that this could cause harm, and that existing UK law – including passing off, defamation, the Online Safety Act and GDPR – may not cover the full range of risks. In response, the government proposes to explore a range of options, including the potential introduction of a new personality or digital replica right. This would be a significant development in UK intellectual property law, if pursued. Currently, the UK has no equivalent of the U.S. right of publicity or the personality rights found in some civil law jurisdictions. The government is aware of this fact, noting that this “would be a significant step” requiring careful consideration, particularly around balancing protection with innovation and freedom of expression. Given the government’s exploratory tone and the absence of any preferred model, it appears highly unlikely there will be any legislative action on digital replicas in the near term.

Our thoughts

Overall, the findings of the Report, while academically interesting, are of limited practical assistance. The government has not reached any determinative conclusions nor settled on any concrete course of action. While it is encouraging that the government is committed to taking the time to “get this right”, to borrow the words of the Technology Secretary, both rightsholders and AI companies remain in the dark. This lack of certainty is perhaps unsurprising, as the genesis of the Report was a watering down by the House of Commons of the Lords’ far more concrete legislative proposals in the Data Act last year, aimed at bringing an end to the (then) Bill’s continued back-and-forth between the Houses.

However, hidden in the subtext of the Report’s nearly 200 pages are hints of government opinion. The government’s pivot from its previously preferred broad TDM-with-opt-out approach, coupled with suggestive statements on the current status of UK law as it applies to training models on home territory using copyright-protected materials, suggests that it is attempting to take stakeholders’ feedback to the consultation seriously.

What comes next

Section 136 of the Data Act required the Report to “consider and make proposals” on technical standards, transparency, licensing and enforcement. On almost every topic, the Report proposes only to “work with industry”, “monitor”, “keep under review”, or “gather further evidence”. Whether these satisfy the requirement under the Data Act is debatable – there is a credible reading under which the statute contemplated something more directive than the Report’s repeated commitments to further study.

In any event, the Report does not commit to a legislative timetable. The government has committed to commissioning further research and pilots, but gives limited indication of when that work will conclude or how it will feed into policy. For now, there are several key dates to watch out for:

  • King’s Speech (circa May 2026): will a formal AI/copyright Bill be announced?
  • Government response to the House of Lords Communications and Digital Committee (due by early May 2026): the Committee recommended that the government publish a final decision within 12 months.
  • Getty v. Stability AI appeal (due later in 2026): the UK’s leading AI/IP dispute returns to the courts.
  • Creative Content Exchange pilot (targeted for summer 2026).

We will continue to track these developments closely. If you would like to discuss any of the issues raised in this article, please contact any of the team members listed below.

Client Alert 2026-072
