
On April 2, South Africa’s Department of Communications and Digital Technologies (DCDT) published a draft version of its AI policy for public comment.
South Africa’s proposal decentralises AI oversight, assigning different agencies to monitor the technology’s development in their sectors. The draft classifies AI technologies as “unacceptable,” “high,” “limited,” and “minimal” risk, signalling the government’s risk tolerance and which technologies can be safely applied to finance and other critical systems that affect the public.
Failing its own AI test: Yet, in a somewhat dramatic twist, critics of the proposed AI policy have found that at least six of the citations in the draft appear to be fabricated. According to local publication News24, the draft references articles that were never published, could not be traced to existing academic journals, or appear to be outright AI hallucinations.
It’s a bit of a head-scratcher, and an embarrassing one: a country’s policy on AI risk and the ethical use of the technology appears to have been partly written by AI.
Political critics of Solly Malatsi, the country’s Minister of Communications and Digital Technologies, including Khusela Diko, Chairperson of the Portfolio Committee on Communications, have asked South African regulators to withdraw the policy.
Malatsi responded on Saturday, saying he asked the DCDT Director General to “investigate and take action against anyone found to be responsible for any wrongdoing,” suggesting that regulators are now looking inward to find where the lapses occurred.
In another post on Sunday, Malatsi confirmed the claims to be true and withdrew the draft policy.
“The most plausible explanation is that AI-generated citations were included without proper verification. This should not have happened,” he wrote on X, adding that there will be consequences for those “responsible for drafting and quality assurance.”
Zoomout: South Africa’s cabinet will spend the next few days trying to save face in what could become a major public embarrassment caused by a lack of attention to detail. Whether or not somebody at the DCDT used AI, the draft policy did provide a framework for tackling AI risks as the technology gains more prominence in public systems.
In Nigeria, the Central Bank is urging banks to use AI in anti-money-laundering systems to combat fraud. South Africa’s policy showed that same awareness, but stood apart: where most other countries’ frameworks, including Nigeria’s and Kenya’s, focus on centralising AI oversight, South Africa’s decentralises it. The intention is good, but the delivery was less than perfect. Right now, South Africa’s cabinet is scrambling, and it is peak theatre.












