The last month has seen an uptick in regulatory activity concerning agentic AI, with UK and EU regulators each setting out how existing frameworks (including consumer protection, competition, data protection and cybersecurity) may apply to AI systems that can plan, decide and act autonomously.
CMA maps the consumer protection landscape
On 9 March 2026, the Competition and Markets Authority (“CMA”) published a research paper examining how agentic AI may affect consumers and how traders should mitigate risk. The paper recognises the potential benefits of AI agents (e.g. convenience, improved decision-making, time savings) but flags several risks that the CMA considers heightened by the autonomous nature of these systems. These include:
- the use of ‘dark patterns’ that optimise engagement, conversion or other commercial objectives in a manner that leads to “worse personal outcomes” for consumers;
- the erosion of consumer agency, where individuals delegate purchasing decisions in a manner they can no longer easily scrutinise; and
- the possibility of ‘agentic collusion’, where multiple businesses deploy autonomous agents that optimise pricing or commercial strategies in a manner that dampens the competitive pressures typically present in a market.
The paper makes clear that existing UK consumer protection law, including the CMA's direct enforcement powers under the Digital Markets, Competition and Consumers Act 2024, will apply regardless of whether the relevant conduct is carried out by a human or an (agentic) AI system. That means potential fines of up to 10% of global annual turnover for breaches, including where misleading practices have been delivered through an AI agent.
Shortly before the paper was published, on 4 March 2026, the CMA also published a short piece on “AI and Collusion: Frontiers, Opportunities and Challenges”, discussing how AI systems (including pricing and optimisation tools) may increase the risk of anti-competitive coordination and what steps businesses can take to mitigate that risk.
CMA compliance guidance for businesses
Alongside the research paper, the CMA issued separate guidance for traders deploying AI agents in consumer-facing contexts. The guidance is structured around four key obligations:
- Disclose AI agent use to customers. Traders should inform consumers that they are interacting with an AI agent rather than a person where that fact might affect their transactional decisions. Traders must also not overstate a system’s capabilities.
- Design agents to comply with consumer law. AI agents must be trained with applicable statutory rights in mind, including but not limited to the Consumer Rights Act 2015 and the Consumer Contracts (Information, Cancellation and Additional Charges) Regulations 2013.
- Monitor performance. Traders are expected to undertake regular human-led reviews of agents’ performance, including their marketing materials, pricing information and handling of refunds and consumer complaints.
- Act quickly when problems arise. If an agent is producing non-compliant outcomes, traders must refine the agent’s workflows promptly. The CMA emphasises that this is particularly important where agents may interact with large numbers of people and/or vulnerable consumers.
DRCF foresight paper on cross-regulatory implications
On 31 March 2026, the Digital Regulation Cooperation Forum (“DRCF”, comprising the CMA, FCA, ICO and Ofcom) published a foresight paper titled “The Future of Agentic AI”. The paper provides a cross-regulatory signal of how UK regulators are thinking about agentic AI and its implications. It:
- establishes a five-level autonomy spectrum for agents (ranging from a merely reactive ‘tool’ through to a truly autonomous actor that requires little human input);
- catalogues risks including algorithmic collusion, prompt injection, data minimisation failures and consumer rights challenges; and
- examines cross-regulatory implications across governance, data protection, cybersecurity and competition.
All four regulators agree that most AI agents fall within existing UK regulatory frameworks. The DRCF has committed to further horizon-scanning work through 2026 and 2027 across three areas:
- the future of interfaces between users, firms and digital services;
- consumer robotics and physical AI; and
- the near-term consumer experience of new technology.
In addition, individual regulators continue to develop their own workstreams, including the ICO's forthcoming statutory Code of Practice on AI and automated decision-making.
ICO consultation on automated decision-making and profiling
On 31 March 2026, the ICO launched a consultation on its draft automated decision-making guidance, following the Data (Use and Access) Act 2025. Although not specific to agentic AI, the guidance is relevant to many deployments that personalise, recommend, rank, approve, refuse or otherwise materially influence outcomes for individuals. It also reinforces a core theme of the developments above: organisations should expect accountability, transparency and governance obligations to apply to increasingly autonomous decision systems, including where those systems are procured from third parties.
EU AI Act Service Desk addresses agentic AI
The EU AI Act (“AI Act”) Service Desk has updated its FAQ to address how agentic AI systems are treated under the AI Act. The FAQ confirms that the AI Act's risk-based classification framework broadly applies to agentic systems in the same manner as to other AI systems.
In particular, the prohibitions on harmful manipulation and exploitation of vulnerabilities under Article 5(1)(a) and (b) may be relevant to agent design. From 2 August 2026, agents classified as high-risk AI systems will be subject to Chapter III requirements intended to support safety and trustworthiness for their intended use. More information about proposed amendments to the AI Act under the EU Digital Omnibus (including potential delays to the enforcement of high-risk AI rules) is available here.
Transparency obligations under Article 50 of the AI Act will also apply where systems interact with natural persons or generate content. The European Commission describes its regulatory thinking on agents as ‘preliminary’ and has indicated that the AI Office will continue to monitor developments, including through a recent call for tenders on technical assistance for AI agent safety evaluation.
European Commission Second Draft Code on AI Transparency
On the topic of transparency, the European Commission published its second draft Code of Practice on Transparency of AI-Generated Content in March. The key features of the first draft Code are covered in depth in our February article; however, the second draft introduces several notable changes, including:
- a two-layer marking approach, requiring both digitally signed metadata and imperceptible watermarking techniques embedded within content;
- removal of the 'common taxonomy' distinguishing 'fully AI-generated' from 'AI-assisted' content; and
- several elements being made optional, such as provisions related to fingerprinting/logging, implementation of forensic detection mechanisms, and provenance chain transparency.
The third and final version of the Code is expected by early June, ahead of the transparency rules coming into force on 2 August 2026 (subject to changes proposed under the EU Digital Omnibus).
Key takeaways from recent developments
- Autonomous execution is the regulatory focus. Across the UK and EU, the practical concern is less about generative output in isolation and more about systems that can initiate actions, shape decisions or interact at scale with limited human oversight. For consumer-facing deployments, risk attaches not only to outputs, but also to interaction design and choice architecture, including how consumers can understand, challenge or override agent-led decisions.
- UK and EU approaches may differ, but are converging in substance. The DRCF paper signals where cross-regulatory scrutiny is likely to intensify, particularly where agents interact with end users, handle sensitive data, or affect market outcomes at scale. For multinational businesses, this supports a single cross-border governance approach built around deployment controls, testing and transparency by design.
- Procurement does not displace accountability. Where AI systems are procured from third parties, the compliance focus remains on the deployed use case, including transparency, governance and rights-related safeguards.
- Consumer law compliance is moving earlier in the product lifecycle. For consumer-facing agents, CMA guidance makes clear that compliance should be built in from the beginning, not treated as a post-launch exercise. Scrutiny is also likely to extend beyond what the agent says to how the interaction is designed, including disclosures, choice architecture and complaint flows.
- Transparency is becoming operational. The EU’s work on marking and labelling points towards compliance that depends on technical architecture and vendor capability (metadata, watermarking, logging and detection), rather than disclosure language alone.