South Africa's AI Policy Imploded Because Nobody Checked the Footnotes
On April 26, 2026, South Africa's Communications and Digital Technologies Minister Solly Malatsi pulled the country's draft National AI Policy from public comment — less than three weeks after it was published in the Government Gazette. The reason: an unknown number of the document's academic citations were fabricated, most plausibly hallucinated by an AI tool that someone on the drafting team used without verifying the output [1][2].
The irony is difficult to overstate. A policy designed to govern artificial intelligence was undone by artificial intelligence. But the incident is more than an embarrassment. It exposes systemic failures in how governments produce policy documents, raises questions about institutional capacity across the continent, and lands in a global context where AI-generated fiction is infiltrating courts, consulting reports, and academic conferences at an accelerating pace.
What Happened
The 86-page draft National AI Policy was approved by Cabinet on March 25, 2026, and published in the Government Gazette on April 10 for a 60-day public comment period, with a deadline of June 10 [3][4]. News24 broke the story on April 24, reporting that the document contained academic references that could not be verified. Editors of several journals confirmed that articles attributed to their publications were never published [1].
Of the 67 references listed in the draft's bibliography, at least six were identified as outright fabrications — citations to papers, authors, and in some cases journals that do not exist [5]. Others pointed to non-peer-reviewed material. News24 reported that "several authors credited with foundational research had never written on the topics attributed to them" [1].
Malatsi's response came two days later. "This failure is not a mere technical issue but has compromised the integrity and credibility of the draft policy," the minister said in a statement [3][6]. He described AI-generated citations included without verification as "the most plausible explanation" and pledged "consequence management for those responsible for drafting and quality assurance" [2][3].
The Department of Communications and Digital Technologies (DCDT) initially attempted to minimize the damage. Before the withdrawal, department officials argued that "technical referencing matters do not affect the substance, integrity or policy direction" of the draft [7]. That position did not hold.
Who Drafted It and Who Failed to Check
No individual officials have been publicly named as responsible for the drafting. Malatsi instructed the Director-General of the DCDT to investigate and "act against anyone found to have done wrong" [8]. But public reporting has not clarified key questions: Was the document produced entirely in-house? Were external consultants or AI tools contracted? At what stage were the fabricated references introduced?
What is clear is that the document passed through multiple institutional checkpoints — drafting, departmental review, Cabinet approval — without anyone verifying whether the cited sources existed [9]. Khusela Diko, chair of the Parliamentary Portfolio Committee on Communications, called the withdrawal necessary and urged a rigorous review "without using ChatGPT this time" [7].
The absence of a fact-checking step is not unusual in government policy production. Internal review processes typically focus on policy substance and legal alignment, not bibliographic verification. But when AI tools are used in drafting, the failure mode changes. A human researcher who fabricates a citation is committing deliberate fraud. A large language model that generates a plausible-sounding reference is doing what it was statistically trained to do. The question is whether anyone in the chain understood that distinction.
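The verification step that was missing is mechanically cheap to add. As an illustration only — this is a minimal sketch assuming the use of Crossref's public bibliographic API, not a description of any DCDT process — a drafting workflow could flag references whose titles cannot be matched against an open scholarly index. The helper names below are hypothetical, and a failed lookup should be treated as "needs human review," not as proof of fabrication:

```python
import json
import re
import urllib.parse
import urllib.request


def normalize(title: str) -> str:
    """Lowercase a title and strip punctuation for fuzzy comparison."""
    return re.sub(r"[^a-z0-9 ]", "", title.lower()).strip()


def title_matches(cited_title: str, candidate_titles: list[str]) -> bool:
    """True if any candidate title is a near-exact match for the cited one."""
    target = normalize(cited_title)
    return any(normalize(c) == target for c in candidate_titles)


def lookup_crossref(cited_title: str, rows: int = 5) -> list[str]:
    """Query Crossref's public /works endpoint for candidate titles.

    Requires network access. An empty or non-matching result means
    'unverified', which a human reviewer must then resolve.
    """
    url = ("https://api.crossref.org/works?rows=%d&query.bibliographic=%s"
           % (rows, urllib.parse.quote(cited_title)))
    with urllib.request.urlopen(url, timeout=10) as resp:
        items = json.load(resp)["message"]["items"]
    return [t for item in items for t in item.get("title", [])]


# Offline demonstration of the matching logic (no network needed):
candidates = ["Attention Is All You Need", "Attention and Memory in Deep Learning"]
print(title_matches("attention is all you need", candidates))          # True
print(title_matches("A Fabricated Paper That Never Existed", candidates))  # False
```

A check like this would not catch every failure mode (real papers misattributed to the wrong authors, for instance), but it would have flagged citations to papers that do not exist at all — the category at the center of this incident.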
What the Policy Actually Proposed
The draft was not a trivial document. It proposed establishing an AI Ethics Board, an AI Safety Institute, and an AI Insurance Superfund to compensate individuals affected by "AI-driven outcomes" [10]. It addressed data sovereignty, compute infrastructure, and sectoral applications in agriculture, healthcare, and public services.
Critics had already raised substantive concerns before the citations scandal broke. Technology investor Stafford Masie argued the policy risked "regulating away" South Africa's participation in the global AI economy [2]. A separate analysis from TechPolicy.Press found the draft left critical governance decisions unresolved, with multiple provisions marked "OPTION" — effectively deferring choices on minimum terms for foreign compute infrastructure investment, data sovereignty requirements, and technology transfer conditions [10].
The TechPolicy.Press analysis argued that South Africa was treating itself "as a consumer of AI systems rather than a stakeholder in their governance," despite controlling 88% of global platinum-group metal reserves essential to semiconductor manufacturing [10]. The citations scandal has now overshadowed these substantive policy debates.
The Cost of the Withdrawal
No timeline has been provided for a replacement draft. The original policy had been in development since at least 2024 [7]. Its withdrawal means South Africa remains without a formal AI governance framework at a moment when other African nations are moving ahead.
The concrete costs include:
- Regulatory limbo: Companies and public sector projects that were waiting on the framework to guide AI procurement, deployment, and compliance decisions have no policy anchor.
- Credibility damage: The DCDT's standing as the institution "entrusted with the role to lead South Africa's digital policy environment" — Malatsi's own framing — has been undermined [6].
- Lost consultation input: Any public comments submitted before the withdrawal are effectively voided.
- Opportunity cost: Months of bureaucratic and political capital spent moving the draft through Cabinet and the Gazette process have been wasted.
How South Africa Compares to Its Continental Peers
South Africa ranks second in Sub-Saharan Africa on Oxford Insights' 2025 Government AI Readiness Index, with a score of 52.91, just behind Mauritius at 53.27 [11]. It leads the region in data infrastructure and technology sector capacity. But the index measures readiness, not execution.
Among peer economies, the comparison is instructive. Egypt adopted a National AI Strategy in 2021 and established a centralized National Council for Artificial Intelligence, scoring 81 on the Africa AI Readiness Index [12]. Kenya launched its National AI Strategy 2025–2035 in March 2025, targeting specific economic sectors [12]. Nigeria introduced a National Digital Economy and E-Governance Bill requiring high-risk AI systems to obtain licenses and submit annual impact assessments, and jumped 31 places to 72nd globally in the 2025 index [12].
South Africa's incident does not necessarily reflect negligence or a lack of institutional capacity, but it does reflect a gap between ambition and process maturity. The country has the technical infrastructure and policy expertise to lead on the continent. What it apparently lacked was a quality assurance process calibrated for the specific risks of AI-assisted document production.
The Global Epidemic of Hallucinated Citations
South Africa's case sits within a rapidly growing pattern. A database tracking AI-generated fabrications in court filings has cataloged 1,227 cases globally as of early 2026, up from 200 a year earlier, with roughly five to six new cases documented daily [13]. Of these, 1,022 involved fabricated case citations, 323 involved false quotes from real cases, and 492 involved misrepresented holdings [13].
The problem extends well beyond courtrooms. In October 2025, an A$440,000 report prepared by Deloitte for the Australian government was found to contain hallucinated academic sources and a fabricated quote from a federal court judgment. The firm issued a partial refund [14]. The following month, a separate CA$1.6 million Deloitte report for Newfoundland and Labrador's government was found to contain at least four false citations [15]. At the 2026 International Conference on Learning Representations, 20% of sampled papers contained at least one AI hallucination [16].
Academic research on AI policy in Africa has surged — from under 3,000 papers in 2018 to nearly 37,000 in 2025, according to OpenAlex data. This explosion of literature makes manual citation verification increasingly difficult, while simultaneously creating more plausible-seeming source material for AI tools to hallucinate.
The Steelman Case for the Officials
The instinct to blame individual civil servants deserves scrutiny. Generative AI tools are now embedded in standard government productivity software — Microsoft 365 Copilot, Google Workspace's Gemini integration — and are available by default to employees who may have received no training on their limitations [17]. AI hallucination — the generation of plausible but fabricated information — is not a bug that users are warned about at the point of use. It is an inherent characteristic of large language models that requires specific literacy to recognize.
If a civil servant used an AI writing assistant to help draft sections of the policy document, the fabricated citations would have appeared in the same format and with the same apparent authority as legitimate ones. Without explicit training on the hallucination problem, and without institutional protocols requiring independent verification of AI-generated content, expecting individual officials to catch the error is arguably unreasonable.
This does not absolve the institution. It shifts the question from individual blame to systemic responsibility: Who authorized the deployment of AI tools in the policy drafting workflow without safeguards? Does the DCDT have guidelines on AI use in document production? If not, the failure lies at the institutional level, not with the individual drafter.
What Verification Standards Exist — and What's Missing
No country currently mandates citation verification for AI-assisted government policy documents as a general requirement. The closest analogues come from the judiciary. In the United States, over 300 federal judges have adopted standing orders requiring attorneys to disclose AI use and certify that citations have been verified by a human [17][18]. Judge Brantley Starr of the Northern District of Texas pioneered this approach in 2023, warning explicitly about hallucinated citations [17].
The EU AI Act, which entered into force in August 2024 and becomes fully applicable in August 2026, classifies AI systems by risk level and imposes transparency obligations, but does not specifically address citation verification in government documents [19]. The OECD's AI Principles, updated in 2024, call for transparency and accountability but operate as recommendations, not enforceable standards [20].
The gap is structural. Courts have moved faster than legislatures because judges experienced the consequences directly — fabricated case law in filings before them. Governments producing policy documents face the same risk but have not yet developed comparable safeguards. South Africa's incident may accelerate that process.
Legal and Constitutional Implications
The legal consequences of submitting a policy document with fabricated citations are less clear-cut than in a court filing, where sanctions for fraudulent citations are well established. South Africa's Constitution requires administrative action to be lawful, reasonable, and procedurally fair, a right given effect by the Promotion of Administrative Justice Act (PAJA) [10].
Several questions remain open:
- Validity of prior consultations: If stakeholders submitted comments on the draft in reliance on the cited research, those consultations may need to be restarted entirely rather than simply incorporated into a revised draft.
- Official misconduct: Malatsi's promise of "consequence management" suggests disciplinary proceedings are possible, but the legal basis depends on whether officials knowingly submitted unverified content or failed to follow existing quality assurance protocols [3].
- Liability: If any entity — a company, university, or government department — made decisions in reliance on the draft policy's cited research, there is at least a theoretical basis for a claim of detrimental reliance, though proving damages would be difficult given the policy's draft status.
No formal legal proceedings have been announced. The Director-General's investigation is ongoing [8].
What Comes Next
South Africa now faces a choice about how to rebuild. The substantive policy questions — how to govern AI deployment, whether to establish a safety institute, how to negotiate with hyperscaler companies seeking to build compute infrastructure on the continent — remain unresolved and urgent.
The DCDT could treat this as a narrow quality-assurance failure, fix the citations, and re-publish. Or it could use the episode to fundamentally rethink how AI tools are used within government, establishing verification protocols that could serve as a model for other countries facing the same challenge.
The broader lesson is not specific to South Africa. Every government that uses AI tools in policy production — and that is increasingly all of them — faces the same risk. The question is whether they will learn from South Africa's public embarrassment or wait for their own.
Sources (20)
- [1] Govt used fake, made-up research for SA's AI policy (news24.com)
  News24 investigation revealing that the 86-page draft AI policy contains academic references that could not be verified, with journal editors confirming articles attributed to them were never published.
- [2] Malatsi withdraws AI policy after fictitious sources scandal (techcentral.co.za)
  Minister Malatsi withdraws draft National AI Policy after internal confirmation that it contains fictitious sources, pledging consequence management for responsible officials.
- [3] Minister announces withdrawal of draft AI Policy (sanews.gov.za)
  Official South African government news agency report on Malatsi's withdrawal statement, including Cabinet approval timeline and the June 10 comment deadline.
- [4] South Africa Has Withdrawn Its AI Policy Because It Was Full of Fake Citations (techlabari.com)
  Analysis of the withdrawal noting that AI-generated citations passed through the drafting and quality assurance process without verification.
- [5] Malatsi withdraws AI policy tainted by fictitious references (news24.com)
  Follow-up News24 report confirming at least six fabricated references among the 67 listed in the draft policy's bibliography.
- [6] Malatsi withdraws national artificial intelligence policy over AI errors (capetownetc.com)
  Report on the withdrawal including Malatsi's statement that the DCDT did not deliver on acceptable standards for an institution leading digital policy.
- [7] Minister Malatsi takes the heat after reports of using fake, made-up research for SA's AI policy (citizen.co.za)
  Coverage including parliamentary reaction from Khusela Diko and the DCDT's initial attempt to downplay the issue as a 'technical referencing matter.'
- [8] South Africa's draft AI policy embroiled in controversy over fake citations (noah-news.com)
  Report on Malatsi instructing the Director-General to investigate and act against anyone found to have done wrong in the policy compilation.
- [9] South Africa's AI policy draft retracted over fictitious references, raising public-sector credibility concerns (noah-news.com)
  Analysis describing the incident as a warning to every government ministry across the region quietly using AI tools for drafting without verification.
- [10] South Africa Has AI Leverage. Its Draft Policy Leaves It Unused. (techpolicy.press)
  Policy analysis finding the draft left critical governance decisions unresolved, with South Africa controlling 88% of global platinum-group metals yet treating itself as a consumer of AI systems.
- [11] Government AI Readiness Index 2025 (oxfordinsights.com)
  South Africa ranks second in Sub-Saharan Africa with a score of 52.91, leading the region in data infrastructure and technology sector capacity.
- [12] AI Regulation in Africa 2026: New Laws, Compliance Risks, and Startup Opportunities (techinafrica.com)
  Overview of African AI regulation including Egypt's 2021 strategy, Kenya's 2025-2035 AI Strategy, and Nigeria's digital economy bill requiring high-risk AI licensing.
- [13] 1,227 Fabricated Citations and Counting: Inside the AI Hallucination Crisis Hitting Courts Worldwide (blog.platinumids.com)
  Database tracking 1,227 global cases of AI-fabricated content in court filings as of early 2026, up from 200 a year prior, with 5-6 new cases daily.
- [14] Deloitte was caught using AI in $290,000 report to help the Australian government (fortune.com)
  Deloitte issued a partial refund for an Australian government report containing fabricated academic references and an invented court quote generated by Azure OpenAI.
- [15] Not again: Deloitte's $1.6 million report contains AI hallucinations (cybernews.com)
  Second Deloitte incident involving CA$1.6 million Newfoundland health workforce report with at least four false AI-generated citations.
- [16] GPTZero uncovers 50+ Hallucinations in ICLR 2026 (gptzero.me)
  Investigation finding 20% of sampled papers submitted to ICLR 2026 contained at least one AI hallucination.
- [17] What You Need to Know: AI Disclosure Rules in Legal Filings (eve.legal)
  Over 300 US federal judges have adopted standing orders requiring AI disclosure and citation verification in court filings.
- [18] New panic over old mistakes: judicial sanctions and hallucinated citations (slaw.ca)
  Analysis of judicial sanctions for hallucinated citations including the Sixth Circuit's $15,000 per-attorney fine in Whiting v. City of Athens.
- [19] EU Artificial Intelligence Act (artificialintelligenceact.eu)
  The EU AI Act entered into force August 2024, becoming fully applicable August 2026, classifying AI systems by risk level with transparency obligations.
- [20] AI Principles Overview - OECD.AI (oecd.ai)
  OECD AI Principles adopted 2019, updated 2024, providing values-based guidance for policymakers and AI actors across member nations.