- South Africa withdrew draft AI policy due to fabricated citations.
- At least six of 67 cited academic references were AI-generated.
- The incident highlights growing concerns about AI hallucinations in research.
South Africa has withdrawn its draft national AI policy after it was discovered that several academic citations in the document were fabricated by artificial intelligence. Communications minister Solly Malatsi pulled the draft after at least six of its 67 academic references turned out to be AI-generated hallucinations, citing journal articles that simply do not exist.
The minister called it a failure that went beyond a technical error, saying it had directly compromised the integrity of the policy.
How Did The Fake Citations Come To Light?
South Africa’s News24 first flagged the issue after finding that at least six of the document’s 67 academic citations did not exist, even though the journals they referenced were real. Editors of the journals, including the South African Journal of Philosophy, AI & Society, and the Journal of Ethics and Social Philosophy, independently confirmed that the cited articles were fake.
Malatsi did not hold back in his response. “The most plausible explanation is that AI-generated citations were included without proper verification. This should not have happened,” he said. He added: “This failure is not a mere technical issue but has compromised the integrity and credibility of the draft policy.”
The draft had been opened for public comments and was aimed at positioning South Africa as a leader in AI innovation. It also proposed setting up a national AI commission, an AI ethics board, and an AI regulatory authority, along with tax breaks, grants, and subsidies to support private-sector involvement in AI infrastructure. The document is expected to be revised before it is reissued.
Why AI Hallucinations In Research Are A Growing Problem
The incident reflects a wider and growing concern around the use of generative AI in academic and official work. A study published in the journal Nature found that over 2.5% of academic papers published in 2025 contained at least one potentially hallucinated citation, compared to just 0.3% in 2024. That translates to over 110,000 papers published in 2025 carrying invalid references.
AI models like OpenAI’s ChatGPT and Google’s Gemini are built to predict the next likely word in a sequence, not to verify facts. When their training data on a topic is sparse, these models fill the gaps with text that sounds plausible but is false.
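The mechanism can be illustrated with a toy next-word predictor. This is a deliberately simplified sketch, not how production models work: the training corpus below is invented for illustration, and a real model predicts tokens with a neural network rather than a lookup table. But the failure mode is the same in miniature: the model stitches together fragments it has seen into fluent sentences that never existed.

```python
import random
from collections import defaultdict

# Toy bigram "language model": learn which word tends to follow which,
# then generate text by repeatedly predicting the next word.
# The training text below is invented purely for illustration.
corpus = (
    "the study was published in the journal of ethics "
    "the article was published in the journal of philosophy "
    "the paper appeared in the journal of ai research"
).split()

# Count word -> possible-next-word transitions.
transitions = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current].append(nxt)

def generate(start, length, seed=0):
    """Produce a fluent-sounding phrase one predicted word at a time."""
    random.seed(seed)
    words = [start]
    for _ in range(length):
        options = transitions.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the", 8))
```

Every step picks a word that genuinely followed the previous one somewhere in the corpus, yet the combined phrase can name a "journal" that appears nowhere in the training text. Scaled up to billions of parameters, the same recombination yields citations to articles that do not exist.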
In the case of citations, the model draws on its training data to predict what a reference looks like and produces one that appears credible, even when it does not exist.
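This is why references generated with AI assistance need mechanical checking before publication. As a minimal sketch of one such safeguard (the sample references and the `flag_missing_doi` helper are invented for illustration), a script can flag entries that carry no DOI-shaped identifier, which at least narrows down what a human reviewer must verify by hand:

```python
import re

# A DOI looks like "10.NNNN/suffix" (prefix "10." plus a registrant code,
# then a slash and a suffix). This loose pattern catches that shape.
DOI_PATTERN = re.compile(r"\b10\.\d{4,9}/\S+")

# Invented example references: the first includes a DOI-shaped string,
# the second has none.
references = [
    "Smith, J. (2023). AI governance. Example Journal, 12(3). doi:10.1234/exjo.2023.045",
    "Doe, A. (2024). Ethics of automation. Another Journal, 8(1).",
]

def flag_missing_doi(refs):
    """Return references with no DOI, which need manual verification first."""
    return [r for r in refs if not DOI_PATTERN.search(r)]

for ref in flag_missing_doi(references):
    print("needs verification:", ref)
```

A DOI-shaped string is not proof the article exists: a hallucinated citation can include a fabricated DOI, so every identifier still has to be resolved against the registrar (for example via doi.org) before the reference is trusted. The check above only separates "obviously unverifiable" from "verifiable in principle".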
Malatsi said there would be consequences for those responsible, and acknowledged the broader lesson at stake: “This unacceptable lapse proves why vigilant human oversight over the use of artificial intelligence is critical. It’s a lesson we take with humility.”