The integration of artificial intelligence (AI) into legal systems is transforming the practice of law, offering unprecedented efficiency, accessibility and innovation. However, the recent case Mavundla v MEC: Department of Co-Operative Government and Traditional Affairs KwaZulu-Natal and Others – in which false legal citations were generated by AI – demonstrated that these advancements come with substantial ethical concerns and socio-economic implications.
“The adoption of AI in legal settings introduces a range of ethical concerns,” says Lucinda Kok, a lecturer in the Department of Private Law at the University of Pretoria (UP). “AI-powered legal tools process vast amounts of data, often including sensitive personal information. Without robust security measures and strict compliance with data protection regulations, there is a heightened risk of breaches that could compromise the legal rights of individuals. Institutions must prioritise data protection through continuous monitoring, policy updates and stringent cybersecurity protocols to safeguard user information.”
Additionally, AI algorithms trained on historical legal data may inadvertently reinforce existing biases, leading to discriminatory outcomes in legal proceedings.
“If left unchecked, these biases could perpetuate systemic inequities rather than promote fairness,” Kok says. “Ensuring AI impartiality requires diverse and representative training data, ongoing audits and transparent AI development practices. Legal professionals must critically evaluate AI-generated outputs to prevent undue reliance on potentially flawed recommendations.”
Transparency is another ethical consideration.
“AI tools should be designed with clear, explainable mechanisms that allow legal practitioners and the public to understand how decisions are reached,” Kok explains. “This fosters trust in AI applications and ensures accountability in legal proceedings.”
Furthermore, integrating AI literacy and digital education into legal training can equip practitioners with the skills to navigate and assess AI-driven tools effectively, ensuring ethical and informed decision-making.
One of the most pressing challenges in the AI revolution within law is ensuring equitable access to AI-driven legal resources. Socio-economic disparities create barriers to technology adoption, particularly for individuals in rural or disadvantaged communities who may lack digital literacy, reliable internet access or technological infrastructure. These factors can limit their ability to benefit from AI-enhanced legal services, exacerbating existing inequalities in access to justice.
“To bridge this digital divide, policymakers and legal institutions must prioritise inclusivity and digital equity,” Kok says. “This includes investing in accessible legal technology, expanding digital literacy initiatives and developing AI-driven tools that accommodate users with varying levels of technological proficiency. AI should serve as a means to enhance access to justice rather than deepen socio-economic disparities.”
Despite these challenges, AI presents immense potential to revolutionise legal systems. From automating administrative tasks and streamlining case analysis to enhancing legal research and expanding access to justice, AI can significantly improve efficiency in legal practice. AI-driven tools can provide cost-effective legal assistance to marginalised populations, offering insights into legal procedures, document preparation and rights awareness.
However, rather than replacing human judgement, AI should complement it. Legal reasoning requires critical thinking, ethical deliberation and contextual understanding – elements that AI cannot fully replicate. Thus, AI should function as a supportive tool that enhances, rather than undermines, the role of legal professionals in interpreting and applying the law.
“Ultimately, the success of AI in law depends on a balanced approach that prioritises innovation while preserving fundamental legal and constitutional principles,” Kok says. “By fostering collaboration between legal professionals, policymakers and society, AI can be harnessed responsibly to enhance justice, fairness and accessibility in legal practice.”
Lucinda Kok
July 2, 2025