South Africa’s Draft AI Policy Allegedly Contains AI Hallucinations, Citing Research Papers That Don’t Exist

The country's draft AI policy is riddled with citations to research papers that don't exist

Minister of Communications and Digital Technologies of South Africa, Solly Malatsi

When South Africa’s Department of Communications and Digital Technologies released its Draft National Artificial Intelligence Policy for public comment in April 2026, it was supposed to mark a turning point — a signal that Africa’s most industrialized economy was ready to take the governance of artificial intelligence seriously.

Instead, it set off a different kind of conversation entirely.

According to a report by News24, several of the academic citations embedded in the policy pointed to papers that don’t exist.

Authors credited with foundational research had never written on those topics. The references looked authoritative. They just weren’t real.

The diagnosis was almost immediate: AI hallucinations. The drafters, it appeared, had fed prompts into a generative AI tool — the same category of technology the policy was meant to govern — and published the output without verifying a single citation.

The Irony Is the Story

It would be difficult to script a more pointed irony. A government department attempting to regulate AI apparently deployed AI carelessly to write the regulation, then failed to check the AI’s work.

The result is a draft policy containing what critics are calling fabricated research dressed up in the grammar of credibility. The citation problem compounds what was already a substantive policy critique.

What the Government Says

The Department of Communications and Digital Technologies has not publicly disputed the findings.

Officials have characterized the draft as a “work in progress” and a “point of departure” — language designed to manage expectations while the document remains in public consultation, with comments open through approximately June 2026.

First drafts of major policy documents are rarely final. But that framing doesn’t fully address the core problem: a government department responsible for digital and communications governance appears to have outsourced core research functions to an unverified AI tool, then presented the output as a credible basis for national legislation.

It hasn’t been confirmed whether officials will rebuild the document’s evidence base using human-verified sources before it progresses further through the legislative pipeline.


Joseph-Albert Kuuire is the creator, editor, and journalist at Tech Labari. Email: joseph@techlabari.com Twitter: @jakuuire