Technology

EU Regulators Launch Wide-Ranging Privacy Investigation into X’s Grok AI Over Explicit Image Concerns

By Distilled Post Editorial Team

EU Launches Formal Investigation into X's Grok AI Chatbot

Regulators in the European Union have launched a formal investigation into Elon Musk's social media platform X, focusing on its artificial intelligence chatbot, Grok. The enquiry, spearheaded by Ireland's Data Protection Commission (DPC), is being conducted under the stringent General Data Protection Regulation (GDPR) and addresses significant concerns that Grok has been used to generate and distribute sexualised deepfake images, including content depicting minors. This "large-scale" probe marks a critical move against the misuse of generative AI tools, and the DPC holds the authority to impose fines of up to four per cent of the company’s global annual revenue for major privacy law breaches.

Focus of the GDPR Probe: Allegations and Key Assessment Areas

The investigation centres on allegations that Grok, developed by Musk’s xAI and integrated into the X platform, was used at users’ prompting to generate non-consensual and sexually explicit images of real individuals, including children. Regulators view these outputs as potentially harmful deepfakes involving highly sensitive personal data. While X implemented some restrictions, regulators contend that harmful content has persisted, indicating that current safeguards are inadequate. The DPC's GDPR enquiry will assess three key areas: whether X adhered to core GDPR provisions regarding lawful processing, transparency, and privacy by design; whether X deployed adequate risk assessments and safeguards before integrating Grok's image generation tools; and how X responded to and mitigated risks once evidence of harmful imagery became apparent.

International Regulatory Backlash

This Irish probe is part of a wider, international regulatory backlash against Grok, signalling mounting political and regulatory unease with generative AI on vast social networks. Other key actions include a separate probe by the European Commission under the Digital Services Act (DSA) concerning X's handling of illegal and harmful content, and an enquiry by Ofcom in the UK under the Online Safety Act regarding X's failure to protect users from AI-generated sexual content. French authorities have also taken criminal steps, and countries like Indonesia have temporarily blocked the chatbot.

Implications for Data Protection and AI Governance

Authorities have stressed the sensitive nature of the alleged content, particularly concerning deepfake images of minors, which can legally amount to Child Sexual Abuse Material (CSAM). A key mandate for the DPC is to assess how personal data was utilised in generating or disseminating the AI outputs, requiring X to demonstrate that it processed individuals' personal data lawfully and fairly, in keeping with their privacy rights. This enquiry reinforces the notion that regulators will actively apply existing legal frameworks, such as data protection laws and digital services rules, to hold tech platforms accountable when their technologies amplify harmful content at scale. The outcome of the DPC's investigation is highly anticipated for the precedent it will set for AI governance, accountability, and online safety across the EU and globally.