Pockets of Progress: We are Entering the Era of Tech Accountability


For much of the past decade, legal and regulatory action against major technology platforms has moved slowly relative to the scale of documented harm. That is beginning to change. We are now in a phase where governments, courts, and regulators are willing to name design, data, and deployment choices as sources of harm, and to impose real costs on tech firms for them. The cluster of recent decisions and enforcement actions discussed below shows that the “move fast and break things” era is giving way to a contested but tangible era of tech accountability.


Since the 2000s, the dominant story about major social media platforms has been that they are neutral intermediaries, shielded from responsibility by laws like Section 230 in the United States and by narrow readings of liability elsewhere. The recent New Mexico and California cases against Meta and Google rupture that narrative by treating engagement-maximising interfaces and recommendation engines as product design choices that can be unfair, deceptive or defective, especially when aimed at children.


In New Mexico, a jury found that Meta had violated the state’s Unfair Practices Act by turning Instagram into a space that exposed children to exploitation and mental-health harms while misleading families about safety. The verdict imposed USD 375 million in civil penalties, calculated across thousands of individual violations, and explicitly linked the harms to platform design and the failure to protect minors. Within a day, a Los Angeles jury, concluding that YouTube and Instagram were negligently designed to be addictive and that the companies had failed to warn of the resulting risks, awarded USD 6 million in damages to a plaintiff who began using both platforms as a child.


These cases are legally significant because they separate content from design: the core allegation is not simply that harmful material exists on social media, but that built-in mechanisms such as infinite scroll, autoplay, algorithmic amplification and notification loops were engineered in ways that foreseeably harmed children.


That framing makes it harder for companies to rely on traditional speech or intermediary-liability defences, and invites courts to treat feeds and recommender systems like other consumer products subject to defect and failure-to-warn standards.


The EU’s Digital Services Act Is Beginning to Produce Enforcement

In parallel, the EU’s Digital Services Act (DSA) is operationalising a systemic duty of care for online platforms, including adult-content sites and social media services used by minors. The European Commission’s recent preliminary findings against several adult-content sites conclude that they failed to implement adequate age verification and risk-mitigation measures to stop children accessing explicit material, potentially breaching specific DSA obligations on the protection of minors.


Additionally, the Commission’s move to open formal proceedings against Snapchat, focused on age verification and child-safety risks, signals that youth-oriented platforms will be judged on the adequacy of their controls, not just their content rules on paper.


Courts Are Now Policing Generative AI and Non-Consensual Imagery

This new accountability era is not limited to social media; it is extending to generative AI and the circulation of non-consensual intimate images (NCII). A Dutch court recently ordered X (formerly Twitter) to stop generating and distributing non-consensual intimate images, including child pornography, via its Grok generative AI system. The court imposed a penalty of €100,000 per day on each defendant for non-compliance.


The Dutch ruling is part of a broader pattern of enforcement. Regulators in Ireland, the European Commission, and the UK have each opened separate proceedings against X in recent months, under the GDPR, the DSA, and the Online Safety Act respectively. Civil suits have also been filed in the United States this month.


Back home in South Africa, the Campaign on Digital Ethics (CODE) filed a complaint in February 2026 with the South African Human Rights Commission (SAHRC) concerning the creation of NCII by Grok. Our complaint included evidence from a circumvention test conducted by CODE, which demonstrated that the geo-blocking restrictions announced by xAI were ineffective. On 17 March 2026, CODE received formal correspondence from the SAHRC confirming that our complaint had been accepted on the grounds that it disclosed a prima facie violation of a human right. The SAHRC noted that it is in the process of determining how best to engage with xAI, and will provide an update in due course.


Implications for the Global Accountability Agenda

These developments collectively mark a shift from viewing digital harms as unfortunate side-effects to treating them as predictable outcomes of specific governance and design choices. Courts are starting to accept theories that link youth mental-health harms to engagement-driven architectures, and to award damages and penalties that make those harms legible in financial terms. Regulators like the European Commission are deploying structural tools, such as risk assessments, audits, and formal proceedings, to interrogate how platforms operationalise their responsibilities to children and other vulnerable groups.


For organisations like CODE, this creates both an opportunity and a mandate. There is new space to demand age-appropriate design, algorithmic transparency, and independent oversight of recommender systems, drawing on concrete legal precedents rather than abstract ethics. At the same time, cases like the Dutch Grok ruling remind us that accountability must extend into the AI layer, where synthetic content can reproduce and amplify existing patterns of sexual and gender-based violence if left unchecked.


The emerging era of tech accountability in South Africa will be defined by contestation: between profit and safety, between private power and public regulation, and between calls to ban children from platforms and the constitutional imperative to make digital spaces genuinely safe and inclusive. For CODE, the challenge ahead is to help turn these early cases and policy debates into durable norms: norms in which child safety, consent, and democratic oversight are built into digital systems from the start rather than bolted on after harm has occurred, and in which African legal and advocacy voices shape what accountability looks like in a deeply unequal, data-hungry digital economy.

