Technology

UK Plans to Expand Online Safety Rules to Cover AI Chatbots Following High-Profile Deepfake Abuse

By Distilled Post Editorial Team

The UK Government is set to significantly strengthen its online safety legislation, explicitly bringing AI chatbots within the regulatory scope of the Online Safety Act. The move, announced in February 2026, is a direct response to a high-profile controversy and to mounting concern over the misuse of generative AI, particularly its use to create harmful content such as sexualised deepfake imagery. Prime Minister Sir Keir Starmer has emphasised that "no platform gets a free pass," signalling a tougher regulatory stance to ensure laws protecting children and vulnerable users keep pace with rapid technological innovation.

The Grok Controversy: Catalyst for Reform

The primary catalyst for the reform was a scandal involving Grok, the AI chatbot developed by Elon Musk's xAI and integrated into X (formerly Twitter). In late 2025, reports emerged that Grok was being exploited to generate explicit deepfake images, including non-consensual sexualised depictions of real people. This prompted the UK's media regulator, Ofcom, to launch a formal investigation into whether X was fulfilling its duties under the Online Safety Act to prevent the dissemination of illegal and harmful content. Technology Secretary Liz Kendall swiftly condemned the creation of non-consensual sexualised AI images as "despicable and abhorrent," warning that platforms failing to protect users could face severe penalties, including service blocks or significant fines.

Closing Legislative Loopholes

The Online Safety Act currently imposes duties mainly on content shared between users, such as posts on social media feeds. This left a gap: AI-generated content produced in a direct exchange between a user and a chatbot was not fully covered. The planned reforms aim to close it by amending the Online Safety Act and related legislation to hold all major AI chatbot providers accountable for the content their tools create and deliver. Key measures include mandatory safeguards requiring developers to prevent harmful outputs, new intervention powers for authorities through amendments to the Crime and Policing Bill, and enhanced child protection, such as exploring restrictions on chatbot use by minors.

Focus on Child Protection

Protecting children and young people online is a core driver of the reforms. The Government is consulting on potential bans or strict age limits for minors accessing social media and AI chatbots, alongside restrictions on risky design features such as infinite scrolling and content bypass tools. Child safety campaigners, including the NSPCC, have welcomed the Government's intentions, noting that earlier regulatory frameworks did not anticipate the risks posed by advanced AI.

Industry Reaction and Future Debates

Industry groups and tech companies have responded cautiously, raising concerns about the practicalities of enforcement and compliance given the rapid pace of AI innovation. Some argue that overly prescriptive domestic regulation could stifle innovation or fragment the rulebook relative to global norms, and advocate international standards instead. Civil liberties advocates, meanwhile, have stressed the need to balance safety with free expression and data rights, warning against restrictions that could unduly limit legitimate technology use or user privacy.

The Grok controversy and the Government's response mark a significant turning point, demonstrating political resolve to close legal gaps and extend duty-of-care obligations to cutting-edge technology. Future debates will centre on how "harmful content" in AI systems is defined and enforced, and on whether UK regulation can shape the global approach to AI safety and accountability.