South Africa’s government has finally published its draft national AI policy, and it is nothing if not ambitious.
Cabinet approved the policy in late March, and communications minister Solly Malatsi gazetted the 86-page document last Thursday, opening a 60-day public comment window. The country now has until 10 June to tell the government what it thinks.
What the government is thinking, apparently, is that AI needs a lot of oversight — specifically, seven brand-new institutions’ worth of it.
Seven Bodies, Zero Budget Lines
The headline proposal in the draft is an entirely new institutional architecture for AI governance. The policy calls for six new bodies: a National AI Commission (or National AI Office), an AI Ethics Board, an AI Regulatory Authority, an AI Ombudsperson Office, a National AI Safety Institute, and an Integrated AI-Powered Monitoring Centre.

The seventh, and most unusual, proposal is an AI Insurance Superfund, modelled on the country’s Road Accident Fund and designed to compensate people harmed by AI systems when liability is hard to pin down.
On top of that, the policy would expand the mandate of existing telecoms regulator Icasa and create a National AI Regulatory Forum pulling together agencies including the Competition Commission, the South African Reserve Bank, the Financial Sector Conduct Authority, and the CSIR.
The ambition is breathtaking. The funding plan is not.
The 86-page document does not attach specific budget figures to any of the proposed bodies — only a vague commitment to secure funding in year two of a three-year implementation roadmap. For a government that has struggled to properly resource the institutions it already has, that gap is hard to ignore.
A Policy That Admits It Isn’t Finished
In an unusual move, the communications department included an explanatory note alongside the draft acknowledging that it is “a work in progress.” The document is described as “a point of departure and indication of government’s current thinking” rather than a firm plan.
That raises the question of why a work in progress was sent to Cabinet for approval and gazetted for public comment in its current state.
The regulatory approach itself remains unsettled. The draft presents four broad options — an ethics-first model, a flexible iterative approach using regulatory sandboxes, an economy-focused strategy, and alignment with global standards.

It then floats several additional frameworks on top of those: principles-based regulation, a guardrails approach, a “just AI” framework focused on redressing inequality, and sector-specific AI legislation.
The policy’s own position is that some combination of all of these would be ideal, calibrated to different sectors. That may be true. But it makes it genuinely difficult to know what the government is actually committing to.
The Substance Underneath the Structure
Beneath the institutional sprawl, the draft does contain meaningful proposals. It identifies education, healthcare, and agriculture as priority sectors for AI deployment and calls for AI to be woven into school curricula from primary level upward.
It proposes community-based AI education centres for underserved areas and a labour market transition strategy to address job displacement.
On infrastructure, the policy pushes for investment in supercomputing capacity, 5G and future 6G networks, high-speed fibre, and last-mile satellite connectivity. It proposes that universal internet access be framed as a socioeconomic right, and calls for “regional AI factories” — decentralised compute hubs meant to keep data local and stimulate regional economies.
There are sensible ethical commitments too: mandatory human rights and gender impact assessments for high-risk AI systems, human-in-the-loop requirements for critical decisions, and transparency obligations for public sector AI. The policy also proposes protections for children against manipulative AI, including exploitative advertising and features designed to maximise screen time.
The draft draws on the African philosophy of ubuntu — with its emphasis on community and shared responsibility — as a guiding framework. Whether that translates into enforceable standards is another matter.
The Real Question
South Africa is not wrong to want a coherent AI governance framework. The technology is reshaping industries and labour markets faster than most governments can keep up, and a country with the inequality and informality of South Africa has legitimate reasons to think carefully about who benefits and who gets left behind.
But a framework that proposes seven new oversight bodies without a funding plan, that defers key regulatory choices to future consultations, and that the department itself describes as a work in progress, is not yet that framework.
The 60-day comment window is the government’s best opportunity to hear from the people who will actually build, deploy, and be affected by AI systems in South Africa.
The question is whether the final policy will reflect what they say — or whether it will arrive looking much the same as it does now: a bureaucrat’s dream, and everyone else’s headache.