The UK's online safety regulatory framework reached a significant milestone in 2025 with the first wave of risk assessments submitted under the Online Safety Act (OSA). Ofcom has now published its Year 1 Online Safety Risk Assessments Report, providing crucial insight for online platforms into what it expects from service providers and where improvements are urgently needed. This article summarises the key findings and the lessons all in-scope platforms should take on board as they prepare for the next assessment cycle. Ofcom's Year 2 risk assessment records are due between May and July 2026, and assessments must be refreshed annually and whenever significant changes are made to a service.
Compliance Landscape: What Ofcom Found
Ofcom received 104 risk assessment records from supervised service providers, comprising 72 illegal content risk assessments and 32 children's risk assessments. Specialist teams systematically reviewed over 10,000 pages of information to assess compliance with the new regime. Almost all providers met their procedural duties and shared records on time. Many records were high quality, and most identified expected risk factors. However, Ofcom identified notable gaps across many records: 11 providers were asked to revisit their records, five were asked to reconsider risk levels, and enforcement action was taken against two providers following the review of their assessments.
Five Areas of Improvement
Ofcom has outlined five key areas where platforms must improve their approach for Year 2 submissions.
- Assess All Kinds of Illegal and Harmful Content
Providers must assess each relevant kind of illegal content and content harmful to children individually, assigning a risk level and providing an evidence-based explanation for it. Many records did not clearly show that providers had separately assessed all 17 kinds of priority illegal content (soon to be 18, subject to secondary legislation designating cyber-flashing as a priority offence). Providers also often presented risks according to their own terms of service rather than the categories specified in Ofcom's guidance.
All records should also capture a risk level for "other illegal content." Weak justifications for low risk levels must be addressed with robust evidence, as a "negligible" risk level should only be assigned where evidence shows it is extremely unlikely that harm could occur on the service.
- Improve Analysis of Features, Functionalities, and Other Characteristics
Providers should thoroughly assess inherent risks from service design choices, functionalities, and user base characteristics. While most records identified expected risk factors from Ofcom's Risk Profiles, few providers made thorough use of these factors to understand how their service design choices present higher risks.
Encrypted messaging, for example, is a risk factor linked to increased risks from certain kinds of illegal content, but many providers did not thoroughly investigate how this might increase risks when assigning risk levels for certain illegal offences. Similarly, content recommender systems are linked to increased risks from certain kinds of harmful content, such as self-harm content, but several providers failed to explain how they mitigate these increased risks.
Where evidence of risk levels is not conclusive, providers should err on the side of caution and select the higher risk level. Providers must also give greater consideration to the impact of illegal and harmful content on UK users, particularly how the size of their user base could affect the number of users encountering such content.
- Demonstrate Confidence in Own Controls and Systems
Providers must explain how their controls mitigate risks and use evidence to verify their effectiveness. Records should include detailed information on controls and their impact on risk levels. Where a control applies to multiple kinds of illegal or harmful content, the record should explain how it specifically reduces risk for each kind of harm.
Information on the effectiveness of controls was surprisingly sparse across the records reviewed. Some providers relied on assumptions about the intended use of their service rather than demonstrating that their controls actually work. Ofcom's guidance makes clear that providers should consider both intended and unintended uses of their service.
- Base Decisions on Relevant Evidence
For larger services, Ofcom expects records to draw on external evidence inputs for particular kinds of illegal and harmful content. Evidence use varied widely across providers, with many not showing how those inputs had affected the risk levels they assigned.
There was too little engagement with user reports and complaints, despite these being core evidence inputs. Among larger services, Ofcom noted limited reference to the role of independent experts, externally commissioned research, or engagement with relevant groups such as law enforcement agencies, which it found surprising given publicly available information showing that some providers do engage with these groups.
Many assessments showed that providers have evidence gaps around certain harms not covered explicitly by their terms of service or content policies, including controlling or coercive behaviour, foreign interference, and proceeds of crime.
- Implement Appropriate Risk Governance
Providers must set out how the risk assessment was carried out in practice and name a person responsible for it. Records were often missing critical governance information, and 69 out of 104 assessments did not name a person responsible for the risk assessment.
Risk assessments must also be kept up to date, including by carrying out assessments ahead of significant changes to service design or operation. Many records did not indicate how providers intend to keep their risk assessment current, including how they plan to assess new features, functionalities, and controls. Providers have a duty to assess the impact of proposed changes to their service before making them, which is vital to ensuring services are safe by design.
Year 2 Enforcement and Expectations
Ofcom will keep its risk assessment enforcement programmes open and will request Year 2 illegal content and children's risk assessment records between 1 May and 31 July 2026. Records must be contemporaneous and intelligible, and must capture all aspects of each assessment, including assessments performed before any significant changes made during the period.
Providers of Category 1 and Category 2A services face additional obligations, including the duty to publicly summarise their risk assessment findings by October 2026.
- Category 1 captures user-to-user services that either (i) use a content recommender system and have more than 34 million monthly UK users, or (ii) allow users to forward or reshare content, use a content recommender system, and have more than 7 million monthly UK users;
- Category 2A captures search services that have more than 7 million monthly UK users.
Ofcom expects to publish the register of categorised services in July 2026 and will set out its expectations for providers on meeting their public summary duty in the first half of 2026. We will comment on those expectations once Ofcom publishes them.
Practical Checklist for Providers
Key actions for providers:
- Complete scope: Separate risk levels for all 17 (soon to be 18) kinds of priority illegal content plus "other illegal content"; CSEA sub-levels for grooming, image-based CSAM, and CSAM URLs.
- Strengthen rationales: Evidence-based explanations for low ratings; err upward where evidence is inconclusive.
- Deepen design analysis: Trace how features shape inherent and residual risk.
- Evidence control efficacy: Link controls to specific harms with metrics.
- Broaden evidence inputs: Include user reports, independent expertise, and law enforcement input.
- Formalise governance: Name the responsible person; document escalation processes.
- Plan for currency: Annual and event-triggered reviews with contemporaneous records.
- Meet Year 2 deadlines: Submit records between 1 May and 31 July 2026.
The Broader Regulatory Context
It is notable that, of an estimated 100,000 services predicted to be in scope of the OSA, only around 0.1% provided risk assessment records. The regulator has faced significant challenges in its first year of enforcement, including high-profile opposition from platforms such as 4chan (see our previous article here). The online safety landscape is also a highly charged political area, requiring careful consideration before launching investigations, as illustrated by recent debates around platforms such as Grok. To meet these expanding responsibilities, Ofcom has substantially increased its workforce, with staff numbers rising from 937 to 1,557 between 2019 and 2025. This represents a 66% increase, which Ofcom's annual reports attribute directly to the implementation of the Online Safety Act.
Ofcom has made clear that where improvements are not made, enforcement action will follow rapidly. The message remains the same: early engagement with Ofcom can help avoid or mitigate sanctions.
--
Katten will be able to advise on your OSA requirements, assist you in conducting risk assessments and advise on the implementation of measures to ensure you are OSA compliant. If you would like more information, please contact Terry Green (terry.green@katten.co.uk) and Larry Wong (larry.wong@katten.co.uk).

