
The Infrastructure of Influence


By Kavisha Pillay


Now that the 2026 Local Government Elections have been announced, street poles across the country will be decorated with big ideas and false promises. This is perhaps one of the oldest technologies of democratic politics – ink, paper, and a surface to stick it to. In the analogue world, this is standard canvassing. In the digital age, however, the real battle has moved from the pavements to the processor.


Electoral decisions are increasingly being shaped inside recommendation systems that most South Africans never see, run by companies that believe they are not bound by our laws, and governed by rules that were not designed with South Africa’s constitutional democracy in mind. This is a central finding of the Campaign on Digital Ethics’ (CODE) latest report, The Algorithmic Ballot: Political Advertising and Voter Protection in the Age of Big Tech.



The four platforms covered in the report – Google, Meta, TikTok, and X – do not agree on what political advertising is, let alone how to govern it. That foundational disagreement shapes everything downstream. What does connect all four, however, is indifference to the social, political, and economic realities of our country. South African voters are, in the language of the platforms' own architecture, a second-tier market.


The rules that were not written for us

When you open Facebook, TikTok, YouTube or X in Johannesburg, you are using the same platforms as someone in Berlin or New York – but not under the same rulebook. In the European Union (EU), the Digital Services Act (DSA) has forced platforms to build transparency tools, limit targeting, and open their data to researchers. South Africa has not yet adopted an equivalent framework, and the gaps are showing up in our feeds.


If you are a researcher in Brussels, you can access granular, real-time data on political advertising through the EU’s DSA and track disinformation campaigns as they happen. If you are a researcher at a South African university or civil society organisation, you have to work with what platforms choose to give you – aggregated ranges, archived datasets, and partial ad libraries. TikTok's full transparency tools cover the European Economic Area, Switzerland, and the UK, but in South Africa, we are not afforded equivalent access. Similarly, X provides an ad database to users in the EU and US, but no equivalent is available to us in South Africa.


The question is why those in the Global North are afforded protections and safety while we remain vulnerable to opaque algorithmic recommendation systems, disinformation campaigns, and shadow online advertising. We are living in a state of regulatory inequality.


The architecture of manufactured fear

In 2023, the Legal Resources Centre and Global Witness ran an experiment. They submitted 38 advertisements to Facebook, YouTube, and TikTok for verification. The ads contained hate speech directed at migrants. YouTube and TikTok approved every single one. Meta rejected the English and Afrikaans versions of the most explicit ad but approved the isiZulu and isiXhosa versions, and approved 36 of the remaining 37 ads across the board.


What this tells us is that the safety systems are calibrated for some languages and not others.


Christopher Wylie, the whistleblower who exposed Cambridge Analytica, described the system that firm built as a "psychological warfare tool." The harvested data from 50 million Facebook profiles was used to identify personality traits linked to specific voting behaviours – who responds to fear, who to hope, who to outrage. Once identified, those voters could be served a steady, reinforcing loop of content that hardened their existing views and made those dispositions feel like conclusions.


In the South African context, fears about property ownership, economic hardship, affirmative action policies, or physical safety are real anxieties with real histories. An algorithm that can identify a user likely to respond to those pressures can serve them content that amplifies those anxieties in a continuous loop. The user is not being persuaded by a manifesto they can interrogate. They are being managed by a system they cannot see, in ways they cannot verify.


Whose ad is it anyway?

Verification is supposed to answer one question: who is really behind this ad? In practice, it is doing less than promised. During the 2024 elections, CODE's research into the Meta Ad Library revealed that shadow organisations – entities that are not registered political parties – were among the biggest spenders on election-related advertising, sometimes outspending the actual parties contesting the ballot. None of their ads directly told you who to vote for. But from the messaging used by organisations like Ask South Africa, Constitutional Hill, and Pledge To Vote SA, the intent was clear enough.


The challenge is not that these groups exist, but that voters have no way of knowing who is funding them, or the nature of the relationship between those organisations and registered parties. When the money trail is opaque, democratic participation becomes something closer to a performance.


AI has entered the chat

All four platforms now say they have rules for AI-generated content in political advertising. Most of those rules rely on the same weak starting point: advertiser honesty. You tick a box saying your content is AI-generated, or you don't.


In practice, that means a convincing deepfake, such as a fabricated video of a candidate saying something they never said, timed for maximum impact, can travel across WhatsApp, TikTok, X, and Meta before a single fact-checker has had the chance to see it. The democratic risk is not just that one person is fooled, but that entire communities lose faith in the possibility of knowing what is real.


In South Africa's main languages, enforcement is thinner still.


What South Africa is owed

CODE's recommendations in The Algorithmic Ballot are directed at the platforms, at Parliament, at the IEC, and at civil society. The full roadmap is in the report. But the underlying demand is simple: South Africa must be treated as a democracy worthy of the same protections afforded to voters in Washington, Brussels, and London.


That means DSA-level transparency tools extended to South African researchers. It means moderation teams with real capacity in South African languages, able to detect the hate speech and disinformation that circulate in them. It means local domicile requirements for political advertisers, mandatory disclosure of ultimate funders, and a dedicated digital platforms and elections framework that does not depend on the goodwill of companies whose primary obligation is to their shareholders.


Digital political advertising has become the infrastructure for influence. That infrastructure should protect and deepen democratic participation, not destabilise it. When the underlying rules are weak, unevenly enforced, or shielded from scrutiny, they signal to powerful actors that there is room to experiment without consequence.


The choice facing platforms and public institutions is not a technical one, but a political one. Either South Africa is treated as a democracy worthy of the same protections afforded to voters in the Global North, or we continue to be governed by a two-tier system in which the Global South remains a testing ground for weaker rules.


The recommendations in this report are a roadmap out of that inequality.


What happens next will show whether those who profit from, and preside over, our digital infrastructure are prepared to share responsibility for the integrity of South Africa's elections.

