AI Deepfake Abuse on X Sparks Global Scrutiny Over Platform Safety
- Campaign On Digital Ethics


A class-action lawsuit filed in the United States has triggered renewed global scrutiny of artificial intelligence safety, after X’s AI chatbot Grok was used to generate non-consensual sexualised images of women and, in some cases, children.
The lawsuit, filed on 23 January 2026 in South Carolina, follows an incident in which a woman, identified as Jane Doe, posted a fully clothed photograph of herself on X. Other users subsequently prompted Grok, an AI chatbot developed by xAI, to manipulate the image into a sexualised deepfake. The altered image circulated publicly for several days before being removed.
Court filings state that Doe experienced significant emotional distress, including fear of reputational and professional harm. The lawsuit alleges that X and xAI failed to implement adequate safeguards to prevent the generation and spread of non-consensual intimate imagery, describing the platform’s conduct as “despicable”.
The case has become a focal point in a broader international debate over the governance of generative AI and platform accountability.
AI Design Under Scrutiny
According to the complaint, Grok’s system design lacked basic content-safety guardrails. The lawsuit alleges that internal system prompts instruct the chatbot that, unless explicitly restricted, it faces “no limitations” on adult or offensive content.
The plaintiffs argue that the absence of default safeguards made foreseeable harm inevitable, particularly in a platform environment already known for harassment and abuse.
Following public backlash in early January, xAI did not immediately disable the image-manipulation capability. Instead, the company restricted access to the feature to paying “Premium” users on X.
This decision effectively monetises abusive behaviour rather than preventing it: placing the capability behind a paywall rather than disabling it risks incentivising harmful use while shielding the platform from responsibility.
Neither X nor xAI has publicly explained why the feature was not disabled globally once evidence of harm emerged.
The controversy intensified after the Center for Countering Digital Hate reported that Grok generated more than three million sexualised images in less than two weeks. The organisation also found that over 23,000 of those images appeared to depict children.
xAI has since restricted certain features in specific jurisdictions, but the company’s response has been inconsistent and reactive.
A Move Towards Accountability
Authorities in multiple countries have launched investigations or issued warnings in relation to Grok:
- European Union regulators have launched formal proceedings under the Digital Services Act, examining whether X failed to assess and mitigate systemic risks.
- Brazil issued a 30-day ultimatum requiring xAI to stop generating fake sexualised images or face legal consequences.
- India warned that X’s removal of accounts and content was insufficient, raising the possibility of the platform losing intermediary protections.
- The United Kingdom’s regulator, Ofcom, is assessing whether X breached its duties under the Online Safety Act.
- Canada expanded a privacy investigation into whether xAI obtained lawful consent to use personal data in image generation.
- In South Africa, civil society organisation Moxii Africa (formerly Media Monitoring Africa) issued a letter of demand to X and various government departments, arguing that Grok’s undress features violate constitutional rights to dignity and privacy.
Will This Be a Turning Point for AI Governance?
For the Campaign On Digital Ethics (CODE), the Grok case illustrates a broader and recurring failure in platform governance: the deployment of powerful technologies without legally enforceable safeguards for dignity, consent, and harm prevention.
CODE has consistently argued that voluntary safety measures and post-hoc moderation are insufficient in an era of generative AI. Systems capable of producing intimate and identity-altering content must be subject to clear legal duties, independent oversight, and meaningful consequences when harm occurs.
As jurisdictions move to regulate AI through instruments such as the EU’s Digital Services Act and emerging online safety laws, CODE maintains that human rights principles, including dignity, privacy, and equality, must be embedded at the design stage, not treated as optional constraints.
The outcome of the Grok litigation in the United States, and the regulatory responses that follow internationally, may help determine whether platforms are finally required to internalise the social costs of the technologies they deploy.


