
UK Government Urges Urgent Action Over AI Deepfake Abuse on X

By Distilled Post Editorial Team

Technology Secretary Liz Kendall has called for urgent action after the artificial intelligence chatbot Grok was used on X to generate non-consensual sexualised images of women and girls, reigniting concerns over AI-enabled abuse and the responsibilities of technology platforms.

The chatbot, developed by xAI, has been shown responding to user prompts that digitally alter images of real people, including requests to make women appear partially undressed, to place them in bikinis, or to depict them in explicit scenarios without their consent. The incidents highlight how generative AI tools can be rapidly misused at scale when safeguards fail.

Kendall described the situation as “absolutely appalling” and said the government would not tolerate the spread of degrading and abusive imagery online. She emphasised that women and girls should not be exposed to new forms of harm simply because technology has moved faster than enforcement.

The technology secretary said it was “absolutely right” that Ofcom is investigating the matter as a priority, adding that she fully supports the regulator taking whatever enforcement action it considers necessary. On Monday, Ofcom confirmed it had made urgent contact with xAI and was examining concerns that Grok had been used to generate so-called “undressed images” of real individuals. The regulator is assessing whether the platform has met its legal obligations to prevent the creation and spread of harmful content.

While X issued a warning over the weekend instructing users not to use Grok to generate illegal material, including child sexual abuse content, critics argue that warnings alone are insufficient. They say the episode exposes a deeper issue around the deployment of powerful AI systems without adequate guardrails, transparency, or accountability.

In a statement, Kendall was clear that this was not a debate about free speech. “Services and operators have a clear obligation to act appropriately,” she said. “This is not about restricting freedom of expression but about upholding the law.” She pointed to the strengthened provisions under the Online Safety Act, which classify intimate image abuse and cyberflashing as priority offences, including where images are generated using artificial intelligence rather than traditional photography.

Under the Act, platforms are required to prevent such content from appearing in the first place and to act swiftly to remove it when it does. Failure to do so can result in significant regulatory penalties and enforcement action.

The case also raises wider questions about the governance of generative AI, particularly as image-generation and manipulation tools become more accessible to the general public. Campaigners have warned that without strict controls, AI risks amplifying existing patterns of harassment and abuse, disproportionately affecting women and girls.

For ministers and regulators, the Grok controversy is likely to be seen as an early test of whether the UK’s new online safety framework has the teeth required to respond to fast-moving technological risks. For technology companies, it is a reminder that innovation does not remove responsibility, and that AI systems deployed at scale must be designed with harm prevention, not just capability, in mind.