24/07/25
When legal shortcuts go too far, AI becomes a liability rather than a tool. A high-profile case in South Africa has reignited urgent calls for ethical clarity in the use of generative AI in legal proceedings, after lawyers were found to have cited non-existent court decisions generated by ChatGPT.
In the KwaZulu-Natal Division of the High Court, Pietermaritzburg, lawyers acting for a former municipal mayor submitted an appeal citing case law that appeared to have been generated by ChatGPT. The judge, suspicious of several citations, conducted a review and found many of the referenced cases to be fictitious. The court criticised the lawyers' "laziness" in failing to verify their sources and ordered their law firm to pay additional court costs. The case has since been referred to the KwaZulu-Natal Legal Practice Council (LPC) for possible disciplinary action, illustrating the rising risks of unverified AI use in legal proceedings.
Regulatory and Professional Standards
Under the existing ethical and procedural frameworks governing legal professionals in South Africa, including the Legal Practice Act and the Rules of the Legal Practice Council, legal representatives must act with integrity, diligence and professionalism. The submission of fabricated or unverified information, even if sourced from AI, constitutes a breach of these duties.
Globally, the misuse of AI in legal filings has become a recurring issue. In June 2023, US lawyers were sanctioned in Mata v. Avianca for filing a brief containing ChatGPT-generated citations to non-existent cases. Although generative AI tools are not banned, the expectation remains that lawyers retain full responsibility for verifying any content presented to a court.
The Legal Practice Council has stated it does not currently see the need for new AI-specific rules, arguing that existing codes and professional conduct standards are sufficient to cover such misconduct. However, internal discussions are ongoing, and growing pressure from the legal community may accelerate regulatory reform.
Meanwhile, legal practitioners are encouraged to use freely accessible tools such as the LPC's online legal library and to attend continuing legal education webinars designed to reinforce ethical awareness in the digital age.
Comparative Analysis and Ethical Considerations
This South African case brings into sharp focus the tension between legal innovation and professional accountability. AI tools can streamline research, but when used irresponsibly, they become vectors of misinformation. The duty to the court and to one's client remains with the legal practitioner, not with the tool.
Comparatively, the European Union is taking proactive steps through the AI Act, which classifies AI applications by risk level. Although litigation tools are not currently listed as high-risk, the use of AI in court proceedings is likely to attract greater scrutiny as part of wider digital justice reforms. In the United Kingdom, the Solicitors Regulation Authority has issued guidance on AI use, emphasising the non-delegable nature of professional obligations.
Advocate Tayla Pinto, a South African expert in AI law, argues that while regulation need not be technology-specific, legal professionals must be educated and held accountable. Her counterpart, Mbekezeli Benjamin of Judges Matter, insists that existing codes should be amended to impose explicit AI-related duties and penalties. Both agree on one principle: ethical lawyering cannot be outsourced.
Evidence and Practical Lessons
The facts in the Mavundla case are stark. A legal team used ChatGPT to prepare court documents, including fictitious case law. Upon review, the court dismissed the matter for lack of merit and criticised the submission as unprofessional. The costs of the misuse were not only financial but also reputational.
This incident was not isolated. The same issue has surfaced in at least two other known cases in South Africa, including a matter involving a state mining regulator. Globally, lawyers are beginning to face fines, suspensions, and formal investigations for misusing AI-generated legal content.
The warning is clear: reliance on AI-generated materials without human verification will be treated as negligence or misconduct. It may constitute misleading the court and can be penalised under existing disciplinary frameworks. AI may assist with legal drafting, but it does not relieve lawyers from their duty to verify facts, check citations, and present arguments with integrity.
Law firms and in-house legal teams must consider internal policies on AI use, including mandatory checks, digital literacy training, and disclosure requirements when AI tools are used in legal submissions.
Conclusion
This case from South Africa illustrates that artificial intelligence, while transformative, introduces significant legal and ethical risks when used without verification. Courts and regulatory bodies are making it clear: lawyers remain fully responsible for the content they file, whether human-written or machine-assisted.
The legal profession must balance innovation with diligence. As AI tools evolve, so too must the standards by which legal services are rendered. NUR Legal advises law firms, legal departments, and AI solution providers on how to align with regulatory requirements and mitigate the legal risks of AI adoption.
Contact NUR Legal for guidance on AI policies, legal compliance, and ethical practice in digital legal services.
#AIinLaw #LegalEthics #ProfessionalMisconduct #ChatGPT #SouthAfricaLaw #LegalPracticeCouncil #AICompliance #FakeCases #GenerativeAI #NURLegal
Kätrin Särap
