At a Glance
- For companies training or deploying AI models and products in or into the UK, or making use of UK data, there are no immediate changes to the law. Proposed consultations and reviews may result in future legislation.
- Unlicensed reproduction of copyright-protected content during AI training will infringe UK copyright unless a statutory exception applies. AI developers should audit existing training datasets for UK-sourced content and confirm whether licences are required.
- Similarly, rights holders should monitor for content scraping, and proactively enforce their rights using technical tools currently available (e.g., watermarking, metadata) and licensing.
- AI developers should demonstrate transparency by ensuring, for example, that AI outputs are clearly labelled, to get ahead of potential legislative developments in this area.
In a new report on copyright and AI (Report), the UK government has formally abandoned its previous preference for a broad text and data mining (TDM) copyright exception with rights-holder opt-out for AI model training. This follows overwhelming opposition from rights holders and a lack of consensus among stakeholders.
The Report sets out a range of key issues, including transparency, technical tools, digital replicas, and the protection of computer-generated works. These will be covered by further consultation and evidence gathering, leaving open the possibility of legislative developments down the line. However, there are no immediate plans for new legislation, specific regulatory powers, or substantive copyright reform. For now, rights holders must continue to rely on the existing legal framework of licensing and proactive enforcement, while AI developers are advised to document data provenance to improve transparency. The UK government’s “wait and see” AI approach continues to diverge from certain jurisdictions, notably the EU and US, but there will be continued monitoring of international developments, which will likely influence future policymaking.
Background
The UK government is seeking to strike a balance between protecting the valuable intellectual property of the UK’s creative industries and promoting the UK’s AI industry. Both sectors are seen as critically important to the UK economy, with the creative sector contributing £146 billion to UK GDP and supporting 7% of all UK jobs, and the AI sector, which is already the third largest in the world, growing at 23 times the rate of the rest of the UK economy.
The Report and accompanying economic impact assessment were required under the Data (Use and Access) Act 2025, which we previously wrote about. The requirement was a compromise reached after disagreement between the two legislative chambers, during the passage of the bill, over the appropriate legislative options: the legislation obliged the government to produce a report and impact assessment meeting detailed parameters. The impact assessment seeks to evaluate the possible economic impact of the options (rather than providing a full cost-benefit analysis), due to significant gaps in the evidence and uncertainty as to how the market will develop.
Some of the key findings and outcomes from the (more than 120-page) Report are set out below.
Abandonment of Government’s Preferred TDM Option
The Report follows a consultation launched in December 2024, which had set out the following options for copyright and AI policy:
- Maintain the status quo with respect to copyright and related laws (option 0)
- Require licensing in all cases for copyright works (option 1)
- Amend copyright law to permit a broad data mining exception (option 2)
- Amend copyright law to permit a data mining exception with an opt-out for rights holders, coupled with additional transparency measures (option 3)
Each of the options 1 through 3 incorporated a combination of various proposals, including:
- Revising copyright law
- Measures on copyright licensing
- Requirements for AI training data transparency
- Measures relating to technological tools and standards
The government made clear its preferred policy option: a new TDM exception to copyright law to allow AI developers to use copyright-protected works to train AI models, with an opt-out for rights holders enabling them to “reserve their rights” and prevent training using their works (option 3). This would have aligned the UK more closely with the EU, but was strongly rejected by a majority of respondents. Creative-industry stakeholders argued it would allow AI developers to benefit financially from their work without any compensation for rights holders and that opt-out mechanisms would be impractical to implement at scale. Their preference (supported by 81% of all respondents) was to strengthen copyright protection in the UK such that a licence would be required in all cases (option 1). By contrast, some technology sector respondents expressed concerns that even the limited TDM exception proposed as the government’s preferred approach would make the UK uncompetitive compared with other jurisdictions, with their stated preference being for a broad TDM exception with no opt-out (option 2).
The UK government has now stated that it will not move forward with a particular preferred approach, citing the rejection of its preferred policy option in the consultation, gaps in the evidence on the impact of existing copyright law on the development and deployment of AI, and rapid developments in the AI sector and international landscape. Instead, the government will continue to gather evidence and monitor the evolving international regulatory landscape, before taking further steps. Options such as a statutory licensing scheme or levy to compensate rights holders, coupled with a broad TDM exception and focused exceptions for specific purposes (e.g., scientific research) are mooted in the Report.
Enforcement, Transparency, and Getty Images v Stability AI
Copyright infringement is enforced through civil litigation in the UK, primarily in the Intellectual Property Enterprise Court (IPEC) or the High Court. Claimants must establish protectability, ownership, and infringement. Unlike in the US, there is no registration system for copyright in the UK, which can add complexity and cost for rights holders in litigation.
The Report emphasises that the limited requirements under UK law for AI developers to disclose training data sources also add to the difficulty and expense of enforcement. In addition, the Report notes the Getty Images v Stability AI judgment (under appeal), which we wrote about previously. This case raised the possibility of secondary infringement for those deploying AI models in the UK, even where they have been trained outside the UK. In its judgment, the court held that a trained AI model could, in principle, constitute an article capable of being an infringing copy. However, on the facts of the case, Getty (hampered by limited transparency in the model) failed to prove that the model was an infringing copy, because the court found no evidence of retained copies of the relevant Getty works in Stability’s AI model, Stable Diffusion. By contrast, and emphasising the fact-specific nature of the Getty ruling, the Report notes that in a recent judgment in Germany (GEMA v OpenAI), the court held that a large language model in that instance did retain copies of training data (specifically, copies of song lyrics).
While over 90% of respondents to the consultation agreed that AI developers should disclose the sources of their training materials, there were differences of opinion with respect to the granularity and proportionality of disclosure. The general view was that the introduction of transparency requirements should not be contingent on any new copyright exception, but should apply in any case alongside existing copyright law.
The Report highlights the lack of consistent rules across different jurisdictions with respect to transparency and the difficulty that this presents to AI developers and dataset providers operating internationally. The Report confirms that there is no existing plan to introduce a requirement into UK law for AI developers or providers to make information publicly available about the copyright works used for AI training data. The suggestion is that rather than statutory transparency obligations, the government will encourage industry-led best practices and will continue to monitor international rules and leave this subject for review, as with other areas.
Technical Tools and Standards
The Report also discusses the role of technical tools and supporting standards that copyright holders might use to help them control access to and use of their works by others, including with respect to the purpose of developing AI systems. The government recognises the ongoing challenges faced by those in the creative industries with respect to being able to express preferences about the use of their works in AI training and to enforce their rights. While most standards are industry-led and voluntarily applied, the Report proposes continued monitoring of international developments and working with experts to support best practices and adoption of market-led tools. No new regulation is anticipated at this stage, but the government has left open the option to revisit this if evidence of need emerges.
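By way of illustration, one market-led tool already in use is the robots.txt protocol: several AI developers publish the user-agent names of their training crawlers, which rights holders can disallow to signal a reservation against AI training. The sketch below is illustrative only; compliance with robots.txt is voluntary, crawler names change over time, and the tokens shown are those published by the respective operators at the time of writing.

```
# robots.txt — voluntary signal to known AI-training crawlers.
# Honouring these directives is at each crawler operator's discretion.

# OpenAI's training crawler
User-agent: GPTBot
Disallow: /

# Google's AI-training control token
User-agent: Google-Extended
Disallow: /

# Common Crawl (datasets widely used for AI training)
User-agent: CCBot
Disallow: /
```

Tools of this kind express a preference rather than create an enforceable right, which is precisely the gap between voluntary standards and legal protection that the Report identifies.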
Computer-Generated Works (CGW)
The Report, as an ancillary consideration, includes a proposal that the existing copyright protection for computer-generated works created without a human author should be removed. Most respondents argued that the protection afforded to such works departs from the original intent of copyright, i.e., the encouragement and reward of human creativity. By contrast, there remains general support for retaining copyright protection for works created with human assistance (i.e., AI-assisted works).
Digital Replicas or Deepfakes
As noted in the Report, the government’s consultation received mixed responses on how best to protect people’s images and voices from unauthorised use without stifling innovation. In response, the government has stated its intention to explore a range of options, including potentially introducing a new digital replica or personality right in law rather than continuing to rely on the current patchwork of civil and criminal protections. This is an area of particular focus in the UK, with the UK government also currently working with industry stakeholders to develop and implement a deepfake detection evaluation framework, to set industry standards on AI detection and notification.
Government Proposals and Next Steps
The government is trying to achieve a difficult balance between competing interests, and its approach can best be summed up as “wait and see”, with continued consultation on specific proposals, which may yet lead to legislative developments.
It also announced a few specific plans for the next phase of analysing the crossover between copyright protections and AI, including:
- A formal consultation on digital replicas, to be launched this summer
- A taskforce to propose best practice for labelling AI-generated content, with an interim report expected in the autumn
- A review of the mechanisms creators can use to control use of their works online, including best practices on input transparency, industry standards, and technical solutions, with a gap analysis and suggestions for potential government action (e.g., new legislation)
- Facilitating a working group of independent and smaller creative organisations to discuss the potential for a government role to support their licensing of content
- Establishing a Creative Content Exchange (CCE), which is intended to provide a “trusted” marketplace for digitised creative assets and is currently in pilot phase
Implications for Businesses
For companies training or deploying AI models and products in or into the UK, or making use of UK data, there are no immediate changes to the law following this Report. However, the government has clearly stated that it no longer has a preferred approach. All options are open with respect to enforcement and transparency for copyright-protected works used in AI model training. Proposed consultations and reviews may result in future legislation.
Unlicensed reproduction of copyright-protected content during AI training will infringe UK copyright unless a statutory exception applies. There is additionally the possibility of secondary infringement for imported models trained outside the UK (with the Getty Images v Stability AI judgment subject to appeal). AI developers should therefore audit existing training datasets for UK-sourced content and confirm whether licences are required. Similarly, rights holders should monitor for content scraping, and proactively enforce their rights using technical tools currently available (e.g., watermarking, metadata) and licensing.
Those working with AI-generated avatars or synthetic voice representations based on real individuals should be aware of the likelihood of legislation in this area and continue to monitor developments.
AI developers should demonstrate transparency by ensuring, for example, that AI outputs are clearly labelled, to get ahead of potential legislative developments in this area. Irrespective of the legal position in the UK, those seeking access to the EU market must still meet the requirements set out in the EU AI Act, which we have written about, including transparency with respect to training data.