Reed Smith Client Alerts

AI transparency in the UK and EU: What’s the latest?

Authors

Tom Gates,
Camille Pellicano

What’s the latest?

Recent publications and policy developments in the UK and EU have brought renewed attention to labelling AI-generated content. Regulators are increasingly concerned about transparency, trust, and the misuse of AI – particularly deepfakes.

This article looks at the European Commission’s first draft Code of Practice on Transparency of AI-Generated Content (the draft Code), which is intended to operationalise the EU AI Act’s transparency obligations in Article 50, and the UK House of Commons Library research briefing on AI content labelling (the UK Briefing), which surveys techniques, limitations, and emerging market practice but does not create binding UK obligations.

  • What’s the UK position? There is currently no UK legislation specifically requiring AI-generated content to be labelled. On 20 January 2026, the House of Commons Library published the UK Briefing, which highlights the benefits of standardised labelling and acknowledges the technical challenges involved. It remains unclear whether future UK legislation will introduce mandatory labelling requirements.
  • What’s the EU position? In the EU, the position is more developed. Article 50 of the EU AI Act imposes specific transparency obligations for certain AI-generated content. At a high level: (i) providers (those placing an AI system on the market or putting it into service) must ensure certain AI-generated or manipulated outputs are marked in a machine-readable format and detectable as artificially generated or manipulated, as far as technically feasible; and (ii) deployers (those using an AI system) must disclose deepfakes and certain AI-generated content. On 17 December 2025, the European Commission published the draft Code. While voluntary in form, it is designed to set out practical measures, testing, and documentation that can be used to demonstrate (and assess) compliance with Article 50.

Why does this matter?

Synthetic media risks are highest when content features talent (voice, likeness, or performance), is highly shareable, and passes through multiple distribution channels. Even without a formal UK labelling regime, aligning internal practices and vendor contracts with emerging expectations can reduce reputational risk and avoid platform friction (for example, takedowns, labelling disputes, and consumer trust issues).

For legal teams, the issue is not only future compliance, but also reputational exposure and operational readiness. Both the UK and EU positions frame transparency as essential to building trust as deepfakes become more prevalent. Below we summarise key measures in the draft Code and compare them with the UK Briefing.

From a risk perspective, the EU approach is building towards demonstrable compliance (including records that support feasibility assessments, marking choices, and disclosure decisions). The UK position is more indirect: in practice, platform rules, advertising standards, consumer protection, and reputational risk will often drive the “effective” labelling standard (particularly for voice/likeness content and public-facing campaigns that can be reshared out of context).

Provider commitments: Marking techniques and detection

Marking

In the EU, providers of AI systems that generate synthetic audio, image, video, or text must ensure outputs are marked in a machine-readable format and detectable as artificially generated or manipulated, “as far as technically feasible”, considering the content type, implementation costs, and current technology. Article 50 also includes carve-outs (for example, where an AI system performs an assistive function for standard editing or does not substantially alter the input or its semantics, or where use is authorised for certain criminal law purposes). The draft Code is explicit that no single marking technique is sufficient on its own. Instead, it points to a multi‑layered approach, typically combining provenance/metadata and imperceptible watermarking, supplemented where necessary by fingerprinting and other verification methods, especially where downstream processing may strip metadata or transform content.

For content where metadata embedding is tricky (particularly plain text), the draft Code suggests using digitally signed provenance mechanisms (for example, a signed manifest or “provenance certificate”) as an alternative way to verify origin and support an audit trail of creation and modification. For multimodal outputs such as video with audio, the draft Code contemplates synchronising marks across formats so that verification does not fail if one modality is separated or edited. The draft Code also recognises a role for forensic detection methods that do not rely on active marks, which is particularly important where content has been edited, clipped, re-encoded, paraphrased, compressed, or subjected to manipulation.
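The draft Code does not prescribe a format for a signed provenance manifest, but the underlying mechanism can be sketched as follows. This is an illustration only: the field names are invented, and an HMAC stands in for the asymmetric signature a real provenance certificate (for example, a C2PA manifest) would use. The point is that the record of creation and modification is cryptographically bound to the content, so later edits are detectable even where a watermark does not survive.

```python
import hashlib
import hmac
import json

SECRET_KEY = b"demo-signing-key"  # stand-in for a real private signing key


def sign_manifest(content: bytes, generator: str, modifications: list) -> dict:
    """Build a signed provenance manifest for a piece of content.

    The manifest records what produced the content and how it was changed,
    and binds that record to the content via a hash.
    """
    manifest = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "generator": generator,          # e.g. name/version of the AI system
        "modifications": modifications,  # audit trail of edits
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return manifest


def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Check both the signature and that the manifest matches this content."""
    claimed = dict(manifest)
    signature = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(signature, expected)
        and claimed["content_sha256"] == hashlib.sha256(content).hexdigest()
    )


text = b"An AI-generated press summary."
m = sign_manifest(text, "example-model-1.0", ["generated", "human copy-edit"])
assert verify_manifest(text, m)            # untouched content verifies
assert not verify_manifest(b"edited.", m)  # any edit breaks verification
```

In practice, production schemes use public-key signatures so that anyone can verify provenance without holding the signing key; the HMAC here is purely to keep the sketch self-contained.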

The UK, unsurprisingly, takes a different approach: the UK Briefing acknowledges that providers may choose to add watermarks or content credentials, but notes that there is no legal obligation to do so and no consensus on the “best” marking approach. The UK Briefing is useful context for why the draft Code adopts a layered approach. It highlights that there is no industry-wide standard for watermarking/credentials, and even widely used marks may not be readable across systems. It also states that both visible and invisible marks can be removed, mimicked, or degraded.

Labelling as a default

To help deployers meet their own disclosure duties under Article 50, the draft Code proposes that providers should build tools that enable perceptible labelling (for example, an icon/label that can be displayed to end users) and that this functionality should, in principle, be enabled by default so that deployers are not relying solely on bespoke, manual workarounds. The UK Briefing, by contrast, describes provider-led provenance features (such as content credentials) as uneven and often opt-in, reflecting the absence of a statutory baseline and the lack of standardisation.

The draft Code also proposes that providers should preserve existing marks and include anti-tampering provisions in their terms of use. However, the UK Briefing recognises that removing both visible and invisible marks may remain technically possible – so contractual protections should be backed up by practical controls and detection measures.

Detection tools

The draft Code proposes that providers should offer detection and verification tools (for example, via a free interface or API) so users and third parties can check whether content is AI-generated/manipulated and, where relevant, retrieve provenance information. It also contemplates continuity arrangements so verification remains available even if a provider ceases operations – this would be operationally challenging in practice and is likely to be an area where the final Code clarifies expectations. The UK Briefing similarly points to detection as essential in a fragmented ecosystem, but frames it as a matter of market practice. Platforms often combine automated detection with user disclosures, and detection outcomes can vary materially across tools and standards.

More broadly, the draft Code expects marking and detection solutions to be effective, reliable, and robust against common transformations and adversarial attacks, with an emphasis on real‑world testing, monitoring, and proportionate compliance frameworks (including staff training and documentation that can be assessed by regulators). A recurring theme is interoperability, requiring measures that work across services and formats rather than only within a single provider’s ecosystem.

Deployer commitments: Disclosure and transparency

Approaches to labelling

Article 50 requires deployers to disclose deepfakes (image, audio, and video) and certain AI-generated or manipulated text published to inform the public on matters of public interest. Importantly, disclosure is not required for public interest text that has undergone human review or editorial control, provided a person or entity holds editorial responsibility. This exception is not a “free pass” – it implies a need for documented sign-off if an organisation intends to rely on it. The draft Code provides practical guidance on how deployers can implement this.

To standardise disclosure, the draft Code proposes a common taxonomy (distinguishing “fully AI-generated” from “AI-assisted” content) and a common icon (an EU-wide version is in development). Disclosures should be accessible – for example, with suitable contrast, screen-reader compatibility, alt text, and audio descriptions where relevant. Operationally, the draft Code envisages deployers (i) documenting labelling practices with examples, (ii) establishing secure channels to flag mislabelled or unlabelled content, and (iii) correcting issues without undue delay (which, in practice, may require clear internal ownership and playbooks for takedown/relabelling across channels).

The UK Briefing explains why “labelling” is not a single design problem. It distinguishes between “process-based” labels (confirming content is AI-generated or AI-assisted) and “impact-based” labels (alerting users to potentially deceptive content, such as deepfakes). It also highlights that label wording and prominence can materially change user interpretation, as users may not understand whether “AI-generated” means fully synthetic or partly edited, and labelling can reduce perceived accuracy and trust even where content is true. For legal teams, a recurring practical risk is misclassification (for example, over-labelling routine AI assistance or under-labelling synthetic voice/likeness content in a way that increases deception risk).

Disclosure of deepfakes

The draft Code takes a prescriptive approach to deepfake disclosure and is unusually specific by content modality:

  • Audio-only deepfakes: for content under 30 seconds, a short audible disclaimer should be included at the start; for longer audio, disclaimers should also be repeated during and at the end.
  • Video deepfakes: disclosures may need to persist throughout playback (where technically feasible) to reduce the risk that clipped or reshared excerpts mislead audiences.
  • Where a screen is available: the icon should be displayed at first exposure.

The practical impact will depend on how the final Code defines “deepfake” and the extent to which it captures lower‑risk uses (for example, automated announcements or routine dubbing).
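The modality-specific expectations above can be expressed as a simple rules sketch. This is an illustration only: the thresholds and output strings paraphrase the draft Code’s proposals, the function name is invented, and the final Code may change the duration cut-off or the measures themselves.

```python
def required_disclosures(modality, duration_seconds=None, has_screen=False):
    """Sketch of the draft Code's modality-specific deepfake disclosure rules.

    Returns the disclosure measures the draft Code contemplates for a given
    deepfake. The 30-second threshold and wording paraphrase the draft.
    """
    measures = []
    if modality == "audio":
        # all audio deepfakes: short audible disclaimer at the start
        measures.append("audible disclaimer at start")
        if duration_seconds is not None and duration_seconds >= 30:
            # longer audio: repeat the disclaimer during and at the end
            measures.append("disclaimer repeated during playback and at end")
    elif modality == "video":
        # persistence reduces the risk that clipped excerpts mislead
        measures.append("persistent disclosure throughout playback")
    if has_screen:
        measures.append("icon displayed at first exposure")
    return measures


# A 20-second audio clip needs only the opening disclaimer.
print(required_disclosures("audio", duration_seconds=20))
# A video shown on screen combines persistent disclosure with the icon.
print(required_disclosures("video", has_screen=True))
```

Encoding disclosure rules this way is one practical route to the draft Code’s expectation of documented, consistent labelling practices, since the same logic can drive both publishing workflows and audit records.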

Good news for creative industries: where a deepfake forms part of an “evidently artistic, creative, satirical, fictional or analogous work”, the draft Code’s disclosure expectations are lighter and more flexible.

The UK Briefing does not prescribe what impact-based labels should look like, but it references Ofcom’s July 2025 discussion paper and the Deepfake Defences toolkit. Ofcom’s emphasis is consistent with the draft Code’s direction of travel – combine watermarking, provenance metadata, AI labels, and context annotations as complementary measures, rather than relying on any single technique. The UK Briefing’s main point, however, is that there is no consensus on label design and that the “best” approach turns on the objective (process transparency vs harm signalling) and the distribution context (news, entertainment, paid ads, influencer content, etc.).

Beyond the draft Code: New standards in audio content generation

The draft Code describes a multi-layered approach to marking and labelling audio content but does not mandate a single technical standard. In practice, provenance frameworks like Content Credentials and C2PA (Coalition for Content Provenance and Authenticity) indicate the likely direction of travel, because they can carry cryptographically signed provenance information that is portable across services, although adoption is not uniform and the framework does not solve all manipulation/removal risks on its own.

The UK Briefing highlights that without an industry-wide watermarking standard, no single detection system can read all labels, and a 100% detection rate is unrealistic in the near term. For legal teams, the practical consequence is that transparency programmes should not be built solely around a technical control; they need to blend (i) technical measures where available, (ii) clear internal disclosure rules and sign-off, (iii) contractual flow-down to agencies/creators, and (iv) platform policy compliance and monitoring.

Key steps for legal teams to take

  • Map your role in the chain. Identify where you are a deployer (publishing content) versus where you are also acting as a provider (for example, if you offer a consumer-facing tool that generates content). This mapping drives whether Article 50 duties fall directly on you or need to be managed contractually through vendors.
  • Set disclosure rules by risk. Use a two-track approach that distinguishes routine “AI-assisted” content from content that constitutes a deepfake or high-risk synthetic media (voice/likeness, public-interest contexts, election/health/finance themes).
  • Document editorial control. If you publish AI-assisted public-interest text in the EU and want to rely on the editorial control carve-out, ensure there is meaningful human review, a named editor/owner, and a retained audit trail showing the review performed and the basis for sign-off.
  • Contract for provenance and preservation. Update agreements to require preservation of provenance marks and watermarks where used, prohibit tampering, mandate cooperation on investigations, and require prompt remediation (relabelling/takedown) where content is flagged as mislabelled or deceptive.

Looking ahead

The draft Code signals that clear disclosure of AI-generated and manipulated content is becoming a baseline expectation in the EU, with an emerging set of implementation measures (marking “stacks”, detectors, accessibility requirements, and documentation) that organisations can begin aligning to now. While the UK does not yet impose equivalent statutory obligations, the UK Briefing indicates that practical expectations will continue to be shaped by platform policies, publisher standards, and user perception. For legal teams, the strategic challenge here is to operationalise transparency in an environment where the technology remains imperfect (marks can be stripped or degraded and standards are still converging). With UK regulation on AI labelling still uncertain, the draft Code is likely to become a practical standard to work towards. The final version, expected in Q2 2026, should clarify open questions (including definitions, feasibility thresholds, and how prescriptive modality-specific disclosures will be) and will be an important milestone for in-house compliance roadmaps.

Client Alert 2026-037