Since the Online Safety Act (OSA) came into force less than a year ago, Ofcom has launched investigations into more than 90 platforms and issued six fines for non-compliance, including penalties against an AI nudification site for failing to have robust age checks in place. This month, Ofcom has opened two new investigations into generative AI services: X (in relation to its Grok AI chatbot) and Novi Ltd (in relation to its Joi.com service). Both investigations concern alleged failures to comply with duties under the OSA, and together they illustrate a notable shift in the regulator's enforcement focus towards AI-powered platforms.
How AI Chatbots Fall Under the Online Safety Act
As use of chatbot technology increases, so does the potential for harm. Under the OSA, a chatbot that meets the Act's definitions of a regulated service, or forms part of one, is covered by the Act's rules. Crucially, any AI-generated content shared by users on a user-to-user service is classed as user-generated content and is regulated in the same way as content created by humans. This means, for example, that a social media post containing harmful imagery produced by AI is regulated identically to similar content created by a person.
However, some chatbots are not covered by the Act. Chatbots fall outside regulation if they only allow users to interact with the chatbot itself (and no other users), do not search multiple websites or databases when responding to users, and cannot generate pornographic content.
The Investigation into X and Grok
On 12 January 2026, Ofcom opened a formal investigation into X Internet Unlimited Company following reports that the Grok AI chatbot was being used to generate and share deeply concerning content, including alleged non-consensual intimate images and child sexual abuse material. The investigation into X focuses on several core provisions under the OSA:
- Illegal Content Risk Assessments (Sections 9 and 10): Regulated services must carry out a suitable and sufficient illegal content risk assessment and must conduct an updated assessment before making any significant changes to their service. Ofcom is examining whether X failed to assess the risk of users encountering illegal content before introducing or modifying the Grok feature.
- Illegal Content Safety Duties (Section 11): Services must take or use proportionate measures to prevent individuals from encountering priority illegal content, including intimate image abuse and child sexual abuse material, and must implement systems designed to minimise the length of time such content is present and to take it down swiftly once made aware of it. Ofcom is examining whether X's systems and processes met these requirements in relation to content generated by the Grok feature.
- Protection of Children (Sections 12, 20, and 21): Where a service is likely to be accessed by children, providers must carry out a suitable and sufficient children's risk assessment and use proportionate systems, including highly effective age assurance, to prevent children from encountering primary priority content such as pornography. Ofcom is examining whether X failed to implement adequate age assurance measures.
- Duties about Freedom of Expression and Privacy (Section 22): When deciding on and implementing safety measures and policies, regulated services must have particular regard to protecting users from a breach of any statutory provision or rule of law concerning privacy. Ofcom is examining whether X had regard to protecting users from breaches of privacy laws, given the nature of the content allegedly generated by the Grok chatbot.
Ofcom has confirmed that X has since implemented measures to prevent the Grok account from being used to create intimate images of people. However, the investigation remains ongoing to determine what went wrong and what further remedial steps are being taken.
The Investigation into Novi Ltd
On 15 January 2026, Ofcom announced a separate investigation into Novi Ltd in relation to its generative AI service, Joi.com. This investigation forms part of Ofcom's broader enforcement programme into age assurance measures across the adult content sector, and focuses on two duties under the OSA:
- Children's Access Assessments (Section 36): Providers must carry out and retain a written record of a children's access assessment to determine whether the service is likely to be accessed by children. The investigation into Novi Ltd is examining potential failures to comply with this duty.
- Protection of Children (Section 12): The investigation is also examining whether Novi Ltd has failed to implement highly effective age assurance measures to prevent children from encountering pornographic content on its service.
A Turning Point?
The Grok incident has prompted widespread calls for stronger legal protections, with members of the UK government describing themselves as "deeply alarmed" and victims criticising governments for moving too slowly. Courts have already begun to recognise the severity of such harms: in a landmark 2023 ruling, a judge held that the impact of image-based abuse on victims is akin to that of other forms of abuse, and the Judicial College Guidelines were subsequently amended in April 2024 to include image-based abuse within the definition of "abuse" for the first time.
Many argue, however, that a more fundamental change in approach is needed: if regulation focuses only on cleaning up harm after it has occurred, it will always lag behind the technology. Preventing AI-enabled abuse requires acting earlier on system design, company responsibility, and structural safeguards. This raises a critical question: is the Online Safety Act, designed primarily with traditional user-generated content in mind, truly fit for purpose in addressing the distinct challenges of AI-generated abuse, or is bespoke legislation now required?
Looking Ahead
These investigations demonstrate that, for now, Ofcom is applying the OSA to AI-powered services with the same rigour as to traditional platforms, and it has stated it will not hesitate to investigate where it suspects companies are failing in their duties, particularly where children are at risk of harm. Providers of generative AI services operating in the UK should therefore ensure that their risk assessments, content moderation systems, and age assurance measures meet the standards required under the Act. However, as the calls for systemic change grow louder, both regulators and industry should be prepared for the possibility that more targeted, AI-specific legislation may follow.

