404: Source Unknown
- Campaign On Digital Ethics

- May 4

By Kavisha Pillay
South Africa’s draft National Artificial Intelligence Policy, the document designed to govern how this country manages one of the most consequential technologies of our time, was withdrawn last week after journalists at News24 discovered that several of the academic citations were fiction. Before the week was out, the Department of Home Affairs (DHA) had announced the suspension of two officials linked to a revised white paper on citizenship, immigration, and refugee protection – where the same problem had surfaced. The DHA is now reviewing every policy document since 30 November 2022, the date ChatGPT became publicly available, which suggests that they understand that this did not begin with the documents that got caught.
The Campaign on Digital Ethics (CODE) exists precisely because questions like these – who is accountable when algorithms shape public decisions? What rights do the public have when institutions deploy AI without transparency? – do not yet have settled answers in South Africa. CODE consistently argues that we need progressive governance frameworks before the technology outpaces us. Watching the country’s AI governance framework collapse because the people drafting it apparently could not be bothered to open the journals they cited gives me no satisfaction whatsoever. It actually clarifies the problem with considerable force.
The problem, at its core, was the decision, somewhere in the drafting process, to treat the machine’s output as sufficient – to not read the sources, or to check whether the arguments being made rested on anything real. That is a human decision, enabled by a tool that makes intellectual shortcuts feel like efficiency.
This is where I think that the public conversation has not gone far enough. We are asking who is responsible, and that matters. But we should also be asking what was paid for. Government departments routinely contract consultants and policy advisors precisely because the work of producing credible, evidence-based outputs requires expertise, time and rigour. If the output of that contracted work is a document whose reference list was generated by an AI that nobody verified, then what exactly did the public pay for along the value chain? The embarrassment here belongs not only to the officials facing suspension but to every node in the process that signed off, reviewed, approved and gazetted a document without applying the basic scrutiny the work required.
There is also a transparency argument that must become part of the public debate. Section 195 of the Constitution holds that public administration must be transparent and accountable. When government departments use AI tools in drafting, researching, developing, or deploying public policy and other public outputs, we, the people, have a right to know. Not because AI assistance is inherently shameful – it is clearly being used, across sectors, at scale – but because disclosure is what makes oversight possible. We cannot hold institutions accountable for how they use a technology when they are not even required to acknowledge using it. A simple requirement that public sector documents declare where and how AI was used in their development would not eliminate the risk of another scandal like this one, but it would at least create the conditions for someone to ask the right questions before a flawed document is gazetted.
What concerns me beyond the immediate scandal is the trajectory that we are on. Policy work, at its best, is an act of sustained attention. It requires reading difficult material, sitting with contradictions, arguing positions out, and changing your mind when the evidence demands it. These are not tasks that can be delegated without loss. When we hand the research to a system that generates plausible-looking text from patterns rather than understanding, we are not just risking inaccurate citations. We are evacuating the thinking itself. What does it mean for the quality of South Africa's public institutions when the people responsible for our most consequential decisions stop developing the muscle of grappling with complexity, because the machine will grapple for them, fluently, at speed, and without ever admitting uncertainty?
Minister Malatsi, in withdrawing the policy, said the incident proved why vigilant human oversight over AI is critical. He is right. But oversight requires something to oversee, which means the humans in the room need to have read enough and thought enough to recognise when something is wrong in AI-generated work. That capacity does not survive indefinite outsourcing.
Human oversight is non-negotiable. Transparency about where and how AI tools are used in public processes must become a democratic requirement, not a courtesy. Accountability, when that transparency is absent and harm results, is what makes the first two principles real.
A policy that advocates for all three of these things, and was built without any of them, is not a lesson in AI's limitations; it is a lesson in ours.