We Tested Grok, Here is What We Found
- Campaign On Digital Ethics


xAI told the world it had blocked its undressing feature in certain jurisdictions, including South Africa. CODE conducted an investigation to find out whether that was true. Today, we are filing formal complaints with the Commission on Gender Equality (CGE) and the South African Human Rights Commission (SAHRC).
In early January 2026, South African content creator Mihlali Ndamase posted a public message on X, directed at Grok. She asked the platform not to modify her photographs. She was specific and clear. Within minutes, users were flooding her post with AI-generated images that Grok had produced of her, sexualised, in direct response to her act of asserting control over her own image.
Ndamase's case was not isolated. It was part of a global pattern that CODE, and many others, have been documenting since December 2025, when xAI updated Grok's image-editing function to allow users to manipulate photographs of real people (removing clothing and placing subjects in sexualised contexts) with no requirement for the subject's consent, no identity verification, and no meaningful safeguard.
After global outrage and sustained pressure, xAI announced that it would geo-block the undressing feature in countries like South Africa, where such content is illegal. This was presented as a meaningful compliance measure. CODE decided to test that claim.
On 24 February 2026, five weeks after xAI's announcement, CODE ran a structured test from Johannesburg, on a paid Grok mobile account, using a photograph of its executive director, Kavisha Pillay, at a CODE event. The question was simple: could an ordinary user, without any technical skill, still use Grok to sexually manipulate an image of a real, identifiable person inside a supposedly geo-blocked jurisdiction?
The answer, across an 85-minute session, was yes.
CODE’s findings
Our investigation revealed that direct keywords associated with undressing were sometimes blocked. That is the visible layer xAI presents as its proof of reform. But minor changes in wording, such as adjective swaps, euphemisms, photography-industry framing, role-play personas, and neutral clothing descriptions, consistently bypassed the filter and produced sexualised images of Pillay at her workplace, with CODE branding visible in the background. Not once, but repeatedly, across different approaches.
What this tells us is that Grok's filter is tuned to vocabulary, not to harm: it blocks a list of specific terms, and everything else passes. A safety regime that can be sidestepped this easily, this quickly, by a non-technical tester on a consumer interface is not a meaningful safeguard.
However, the more important finding from our test relates to consent, and the fact that Grok has no mechanism to register whether it has been given or withheld.
At no point during the testing did the system ask whether Pillay was the person in the image, whether we had permission to edit the photograph, or whether we could verify consent in any way. Pillay’s informed consent, which she gave deliberately, as a researcher, was entirely invisible to the platform. Grok would have produced identical outputs if a stranger had scraped her photo from social media and submitted it without her knowledge. The manipulated image could then circulate across the internet, without Pillay’s knowledge or consent. The mere act of upload was treated as sufficient authorisation to manipulate and sexualise the image.
This is not a gap that can be closed with a pop-up warning or a terms-of-service amendment. It is an architectural assumption baked into the design of the product: that any image is fair game, regardless of who is in it or how it was obtained. In a world where women's photographs are constantly copied, shared, and scraped without their knowledge, that assumption is a direct pipeline to abuse.
What we are asking for, and why
Today, CODE filed formal complaints with the Commission on Gender Equality and the South African Human Rights Commission. We are asking both bodies to investigate Grok's design failures as they pertain to human rights violations, and the platform's role in aiding and abetting gender-based violence in South Africa. We are also asking them to recommend to Parliament and the relevant government departments AI-specific legislative reforms to prevent further rights violations.
What CODE’s investigation ultimately reveals is a structural failure of accountability in how frontier AI systems are deployed globally. The continued availability of Grok's sexualised image generation in South Africa, despite xAI's public claims of restriction, demonstrates that self‑regulation cannot protect against technology‑facilitated gender‑based violence. Beyond the immediate violations of dignity and privacy, it exposes a deeper crisis of consent in AI design, where systems treat every image as editable and every body as available.
CODE’s findings show that these harms are not hypothetical but live, ongoing breaches of law and rights. We therefore call for coordinated, progressive, rights‑based regulation of generative AI in South Africa, one that places privacy, consent, accountability, and gender justice at its centre, and ensures that no company can build tools of abuse and frame them as innovation.
_______________________
A note on our methodology: CODE has deliberately chosen not to publish the specific prompts used in this testing, nor the images Grok generated. We have withheld the prompts because reproducing them risks functioning as a circumvention guide, contributing to the very harm this investigation documents. We have withheld the generated images out of respect for the dignity of the individual depicted, the integrity of CODE as an organisation, and a commitment to not introducing artificially manipulated imagery into the public record, where it can be decontextualised, recirculated, or misused. The full evidentiary record, including prompts and exhibits, is available to credible researchers, journalists, and institutions for the purposes of further investigation.


