Key takeaways
- Ofcom finalises new codes under the OSA, setting out child safety duties for online services.
- Services must assess risks by age group and address new harm types, including body stigma and depression content.
- Key deadlines are approaching, with risk assessments due by 24 July 2025 and mitigation measures applying from 25 July 2025.
On 24 April 2025, Ofcom published its updated Protection of Children Codes of Practice (the Codes) and related guidance under the Online Safety Act 2023 (OSA), marking a major milestone in the UK’s regulatory landscape for the protection of children online. The Codes, accompanied by comprehensive risk assessment guidance and supporting materials, establish a detailed set of expectations for online user-to-user and search service providers to safeguard young users from a spectrum of online harms.
See our previous materials for more details on who the rules apply to, the key obligations and the consequences of non-compliance. Also see our summary of the consultation version of the Codes and our OSA tracker.
Below, we summarise the key changes from the consultation version and outline the associated timeframes.
How do the Codes fit with the wider OSA, and what is the significance of this step?
The OSA contains codes of practice on tackling illegal harms, which are already in force. See our summary of the illegal harms codes.
This latest publication sets out the key next steps for in-scope services accessible to children (defined as individuals under the age of 18), providing a separate set of codes focused on the child safety duties. It contains the measures Ofcom recommends for complying with those duties, as well as guidance on the separate children’s risk assessments that must be completed.
While following the Codes is not strictly mandatory, services that adopt the prescribed measures will be deemed compliant with their OSA obligations regarding children’s safety. Those opting for alternative measures must keep detailed records and be prepared to demonstrate that their systems meet the same high standard.
Not only is the publication significant for specifying what steps services need to take and prepare for, but it also establishes key deadlines. See below for further details.
What has changed in what Ofcom requires in relation to children’s risk assessments?
There are various changes in Ofcom’s guidance on completing children’s risk assessments, many of which are minor, format-related or aimed at consistency with the existing illegal harms assessment guidance. However, several key changes will require companies that have already been working on their assessments to make further revisions, as follows:
New categories of non-designated content (NDC)
Ofcom identifies two specific kinds of harm that meet the criteria for NDC:
(i) Body stigma content: content that shames or otherwise stigmatises certain body types or physical features.
(ii) Depression content: content that promotes depression, hopelessness or despair.
These harms must now be separately identified and assessed in providers’ risk assessments, alongside any other NDC relevant to the service. Ofcom has updated its register of children’s risks and risk assessment guidance to reflect these new categories, and providers will need to revisit their risk assessment templates and processes accordingly. This is likely to be one of the biggest changes for companies, given that this assessment is now required and affects many of the measures, including those related to age assurance, as set out below.
Assessment by age group
One size no longer fits all when it comes to children’s risk assessments. Service providers must specifically consider the different age groups of children who may use the service and assess the relevant risks and appropriate mitigations accordingly.
Updated risk level table and evidence requirements
Ofcom has made targeted tweaks to its risk level table, offering some clarity on how providers should assess the risk level of their service with respect to content harmful to children.
The updated table places emphasis on the amount of harmful content on a service and the associated harm to children. Importantly, the occasional appearance of harmful content will not automatically bump a service into the medium or high-risk categories, so long as the scope for harmful content to impact children remains limited.
Ofcom has reiterated that risk assessments must be evidence-led, with emphasis on two key areas:
(i) the presence of a material amount of harmful content, and
(ii) the strength and effectiveness of existing controls.
In short, if harmful content is meaningfully present, providers can expect the bar for justification to be higher, and the likelihood of a medium or high-risk designation to increase accordingly. Providers should ensure their assessments are not only well-considered but also well-documented.
Providers are expected to consider risks as they exist at the time of the provider’s risk assessment cycle, taking into account the effectiveness of any existing controls in place. Providers might also consider insights from content moderation systems, consultations with technical or independent experts and other evidence as inputs to the risk assessment.
Monitoring the effectiveness of safety measures
Step 4 of the risk assessment guidance and methodology has been revised to make clear that providers must monitor and evaluate the effectiveness of their safety measures. This includes assessing the remaining level of risk after those measures are applied, as well as the actual impact of any controls in place.
Reviewing and updating the assessment
The guidance provides clearer direction on when a service update should trigger a review. Routine tweaks are not the focus, but changes that impact user experience or safety – such as introducing direct messaging, encryption or live streaming – are. Providers should consider how any changes might affect child users and overall risk.
Additional resources
Ofcom has added a helpful suite of additional materials, including case studies for the risk level table and new tables illustrating the proportion of risk factors that might be considered significant across different harm types. These resources are a welcome addition to help providers fine-tune their final assessments.
What specific measures have changed in the Codes for user-to-user services?
Below we focus on the material changes in the Codes for user-to-user services. Note that the separate draft code for search services, which includes different measures, is not covered here.
Governance
While most changes are clarificatory – such as confirming that annual reviews of risk management must cover all identified risks (PCU A1) and that responsibility for certain duties should rest with a named individual (PCU A2) – there are some notable additions.
Ofcom has introduced a new concept: “content harmful to children proxy”. This refers to harmful content – whether primary priority content (PPC), priority content (PC) or NDC – that is identified in the provider’s terms as subject to moderation action. Providers must now monitor this proxy content as part of their harm-tracking obligations.
Services are also now expected to track any increases in NDC identified during assessments, reflecting broader updates to the NDC framework throughout the Codes.
Age assurance
Ofcom has widened the scope of highly effective age assurance (HEAA) measures, particularly in relation to services disseminating harmful content. In particular:
- For services where the primary purpose is the dissemination of PC, the previous requirement that such content be medium or high risk has been removed.
- Where dissemination of PPC/PC is not the principal purpose, Ofcom has introduced a requirement for any service that chooses to prohibit PPC/PC in its terms, but cannot technically remove such content in practice, to apply HEAA. Ofcom has said that it expects this to apply to only a very small number of organisations.
- In general, it remains the case that services which allow PPC or PC and are medium to high risk for PC harms will likely need to take HEAA steps.
- Where services have content recommender systems, the requirements around HEAA have been updated to also cover NDC that is medium or high risk.
There is a new right for adult users to appeal against an age decision which results in them being restricted from a service or part of a service.
Content moderation
The changes are generally clarificatory or aim to bring the Codes in line with the illegal harms codes. Key changes include: the need to take age group considerations into account when designing moderation processes (which aligns with the new emphasis on assessments discussed above); a focus on extending certain measures to small but risky services; and the inclusion of proxy content in the moderation requirements, which clarifies the obligations for services that moderate against their own terms rather than the specific defined harms.
Given the inclusion of the specific types of NDC listed by Ofcom, providers will also need to reassess how they moderate such harms, even though NDC in general terms was always covered by the measures.
Ofcom confirms that it does not currently require providers to use proactive technologies for moderation but notes it is actively reviewing this position and plans a further consultation on potential future measures. Providers should watch this space.
Content recommendation
Ofcom has introduced several updates to how recommender systems must handle content harmful to children. These changes emphasise both design obligations and the treatment of specific content types once risk thresholds are met.
PPC: Recommender systems must be designed to exclude PPC from children’s user journeys where there is medium or high risk – even where such content is already prohibited by the provider’s terms. The Codes now refer to excluding “content indicated potentially [emphasis added] to be PPC” rather than “content likely to be PPC”, broadening the net. Providers can apply this filter to all users or limit it to children identified using HEAA, depending on implementation.
PC and NDC: For PC and NDC, services must either exclude the content from children’s recommender systems (a newly introduced option) or reduce its prominence. This applies where content is assessed as medium or high risk. Guidance clarifies that once a service is in scope, these measures apply to all types of PC, regardless of which types specifically triggered the risk classification.
Expanded scope of PCU E3 (enabling children to give negative feedback): The updated Codes confirm that PCU E3 now applies to services assessed as medium or high risk across two or more kinds of content harmful to children – including PPC, PC (excluding bullying) and NDC. The obligation itself has also been tightened: instead of simply lowering the prominence of content a child has flagged negatively, providers must now exclude that content altogether from recommender feeds.
Complaints
Some updates have been made to the complaints handling requirements, including:
Simplified access requirements: The requirement for complaints mechanisms to operate “in as few steps as reasonably practicable” has been softened. Providers must now ensure access via “no more steps than reasonably necessary” – a subtle but helpful clarification that allows for a more flexible approach to designing user pathways.
Lighter touch for smaller services: The obligation to inform complainants whether their information will be shared, provide an indicative resolution timeline, and explain next steps no longer applies to smaller or lower-risk services, easing the compliance burden for providers with limited scope or scale.
Disregarding manifestly unfounded complaints: In line with the illegal harms regime, providers may now disregard manifestly unfounded complaints, provided they have an appropriate internal policy in place. This measure aims to support efficient handling of bad-faith or abusive complaints without undermining genuine user protections.
User support
Content restriction notifications: The requirement to provide information to children when they restrict content or interactions has been narrowed. It now applies only to tools that allow children to indicate why they are restricting content – specifically where they can flag content as harmful.
Support signposting split and clarified: The previous combined obligations (relating to signposting children to support services) have now been split into three distinct measures in PCU F3, F4 and F5, with more precise requirements. Providers must review third-party policies, including whether consent is required, before directing users to external support. The updated guidance also clarifies when and how quickly signposting must occur in different scenarios.
Is Ofcom also planning further changes to the illegal harms codes?
Ofcom has launched a new consultation (open for responses until 5pm on 22 July 2025) proposing to extend key user safety controls – currently limited to large services – to smaller platforms under the illegal harms code, which already contains provisions specifically related to children.
The proposals target user-to-user services likely to be accessed by children, or those where no highly effective age assurance is in place to restrict certain features to adults. If adopted, the following measures would apply more broadly:
Blocking and muting
These controls would be required where a service offers user connections, content posting or user communication functionalities, and:
(i) The service has fewer than seven million monthly UK users and is high risk for at least one of the following: coercive or controlling behaviour; harassment, stalking, threats and abuse; hate; grooming; or encouraging or assisting suicide; or
(ii) It has between 700,000 and seven million monthly UK users and is medium risk for any of the harms listed above.
Disabling comments
This control would be required where a service includes commenting functionality, and:
(i) The service has fewer than seven million monthly UK users and is high risk for at least one of the following: harassment, stalking, threats and abuse; hate; grooming; or encouraging or assisting suicide; or
(ii) It has between 700,000 and seven million monthly UK users and is medium risk for any of the harms listed above.
If implemented, this would represent a significant shift in obligations for smaller services that were previously outside the scope of these requirements. Stakeholders may wish to engage with the consultation process and assess potential compliance impacts early.
What are the key dates and next steps?
- Children’s risk assessments must be completed by 24 July 2025.
- Risk mitigation measures must be operational from 25 July 2025 (subject to parliamentary approval of the Codes).
- Ofcom’s consultation on expanding certain measures to smaller services closes on 22 July 2025.