
UK Plans 48-Hour Deadline for Tech Platforms to Tackle Abusive Images, With Blocking Powers on the Table

By Distilled Post Editorial Team

The UK government is moving to significantly strengthen its online safety framework by requiring large technology platforms to remove non-consensual intimate images, including "revenge porn" and sexually explicit AI-generated deepfakes, within 48 hours of being reported. Non-compliance could result in severe penalties, including fines of up to 10% of a company's global turnover and the potential blocking of its services within the UK.

Prime Minister Sir Keir Starmer has declared the rapid dissemination of non-consensual intimate imagery a "national emergency." The new amendments to the Crime and Policing Bill aim to shift the burden of responsibility from victims to the powerful tech companies, which have the resources to detect and remove this harmful material swiftly. The initiative builds on the Online Safety Act 2023, updating the law to address the rapid circulation and re-uploading of abusive content in the digital age.

Ofcom to Enforce New Rules with Massive Fines

Enforcement of the new rules will fall to the UK's communications regulator, Ofcom. Companies that fail to comply face substantial consequences: fines that could run to billions of pounds, the possibility of being blocked from operating in the UK entirely, and direct regulatory action. The core objective is to ensure that a single victim report leads to the removal of the image across multiple platforms and prevents its re-sharing.

Tackling Online Abuse as Violence Against Women and Girls (VAWG)

The push for tougher regulations is a response to escalating concerns over the misuse of AI and social media for the spread of harmful content. While sharing such images is already illegal, victims frequently face slow and inconsistent takedown procedures. The government views online abuse as a critical component of violence against women and girls (VAWG) and is treating it with the same urgency as child abuse or terrorism content. Technology Secretary Liz Kendall stated: "The days of tech firms having a free pass are over… you report once and you're protected everywhere."

Proactive Protection Measures and Hash-Matching Technology

Ofcom is moving quickly to implement proactive protection measures, planning to mandate that platforms use technologies such as hash-matching to automatically detect and block illegal intimate images at the point of upload. This approach, similar to systems already used against child sexual abuse material, checks new uploads against a database of fingerprints of known harmful content, preventing re-uploads without relying solely on individual user reports. Final rules are expected by May 2026, with enforcement potentially beginning that summer, subject to parliamentary approval. The government also intends to combat "rogue websites" that host abusive content by advising internet providers on how to block these services.
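To illustrate the mechanics in broad strokes, the sketch below shows how a hash-matching check might sit inside an upload pipeline. It is a simplified, assumption-laden illustration rather than any platform's actual implementation: the blocklist contents and function names are hypothetical, and it uses an exact SHA-256 digest where production systems rely on perceptual hashes (such as Microsoft's PhotoDNA or Meta's open-source PDQ) that still match after re-encoding or resizing.

```python
import hashlib

# Hypothetical blocklist of fingerprints of known abusive images.
# The entry below is a placeholder. Real systems use perceptual
# hashes (e.g. PhotoDNA, PDQ), which survive re-encoding and minor
# edits; a cryptographic hash like SHA-256 matches only byte-identical
# files and is used here purely to illustrate the workflow.
KNOWN_IMAGE_HASHES: set[str] = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def image_fingerprint(data: bytes) -> str:
    """Return a hex digest identifying this exact image file."""
    return hashlib.sha256(data).hexdigest()

def should_block_upload(data: bytes) -> bool:
    """Check an uploaded file against the blocklist before publishing it."""
    return image_fingerprint(data) in KNOWN_IMAGE_HASHES

# At upload time the platform fingerprints the file and rejects known
# matches, so a single verified report can stop future re-uploads.
if __name__ == "__main__":
    upload = b"example image bytes"
    print("blocked" if should_block_upload(upload) else "allowed")
```

The choice of hash function is the crux in practice: an exact cryptographic digest as above is defeated by changing a single pixel, which is why deployed systems favour perceptual hashes that tolerate small alterations while keeping false matches rare.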

Support, Opposition, and Global Impact

These measures have garnered strong support from survivors and advocacy groups, who believe the 48-hour rule will significantly reduce harm and restore control to victims. However, the regulations have reignited debates over freedom of expression and the practical challenges of rapid, universal enforcement across vast digital platforms. Some tech industry leaders have warned that heavy penalties could stifle innovation or lead platforms to restrict their services in the UK. Nonetheless, ministers remain firm that safety and accountability must come first.

The amendments now proceed to Parliament for debate. If approved, the law would place the UK at the forefront of global efforts to tackle non-consensual intimate image sharing, fundamentally shifting the responsibility for making the online world safer, especially for women and girls, from victims to the platforms themselves.