The global rise of artificial intelligence holds both promise and danger, and Africa needs its own experts to balance the two

Posted on March 18, 2020

A question of fairness

Financial consultancy PwC says AI could contribute up to US$15.7 trillion to the global economy by 2030. But the technology is not inherently benign. While it can be a tool to achieve transformation, it also has the potential to reinforce structural inequalities and biases, and to perpetuate gender and racial imbalances. One of the problems with AI is that its ‘intelligence’ depends entirely on the data from which it learns. That data could be biased for a variety of reasons—for instance, most medicines have been tested on white men, meaning there is less medical data on women or people of colour.

AI algorithms trained on biased data inherit that bias. For instance, historical gender bias means that loan-agreement data from 30 to 40 years ago show that far more men than women were granted loans, says Vukosi Marivate, who holds a chair in data science at the University of Pretoria in South Africa. "If we were to use that data—without trying to correct it—to build an automated system to grant loans, it would be biased," he says.
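The mechanism Marivate describes can be illustrated with a small sketch. The data below is entirely synthetic and hypothetical (it is not the loan data he refers to): a naive rule "learned" from a historical record in which men were approved far more often than women simply reproduces that imbalance.

```python
import random

random.seed(0)

# Hypothetical historical loan records (synthetic, for illustration only):
# in this sample, men were approved ~70% of the time, women ~30%.
history = [("M", random.random() < 0.7) for _ in range(1000)] + \
          [("F", random.random() < 0.3) for _ in range(1000)]

def naive_model(records):
    """'Learn' an approval rule by copying each group's historical rate."""
    decisions = {}
    for group in ("M", "F"):
        outcomes = [approved for g, approved in records if g == group]
        rate = sum(outcomes) / len(outcomes)
        # Approve applicants from a group if its historical rate exceeds 50%.
        decisions[group] = rate > 0.5
    return decisions

decisions = naive_model(history)
print(decisions)  # the uncorrected data makes the rule favour men
```

Running this yields `{'M': True, 'F': False}`: the model has not discovered anything about creditworthiness, only inherited the bias baked into its training data, which is exactly why Marivate stresses correcting the data first.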

There are examples of AI discriminating against black people from other parts of the world. When the head of Facebook's AI unit, Lade Obamehinti, tested its Portal Smart Camera (which uses algorithms to identify multiple subjects during video calls), the camera kept focusing on her male colleagues—not on her. Upon examining the datasets used to train the camera's AI, she found that skin tones and genders were unevenly represented.
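The kind of audit Obamehinti performed can be sketched as a simple representation check. The labels and the 15% threshold below are hypothetical assumptions, not Facebook's actual data or methodology; the point is only that counting group shares surfaces the imbalance before a model is trained.

```python
from collections import Counter

# Hypothetical training-set labels (illustrative only): each entry records
# the skin tone and gender annotated for one training image.
samples = (
    [("light", "male")] * 500 + [("light", "female")] * 400 +
    [("dark", "male")] * 80 + [("dark", "female")] * 20
)

def underrepresented(labels, threshold=0.15):
    """Return the share of each group whose share of the data falls below threshold."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()
            if n / total < threshold}

flagged = underrepresented(samples)
print(flagged)  # both darker-skin groups fall below the 15% threshold
```

In this synthetic sample the two darker-skin groups make up only 8% and 2% of the data, so they are flagged; a camera trained on such a set would, predictably, perform worst on exactly those people.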

In another example, a viral 45-second video shared in 2017 showed an automatic no-touch soap dispenser that dispensed soap onto a white person's hand but not a black person's. The dispenser's light sensor had not been designed and tested to detect darker skin.

Marivate says he doesn’t know about similar examples from Africa, but that doesn’t mean they have not happened or will not occur. Because we usually trust machines to be right—how many of us have followed our GPS even though we suspected it might be leading us astray?—we might not immediately see the problems, he says. One area of concern, he adds, is the rise of AI-driven closed-circuit TV camera applications for security on the continent. If the algorithms driving these applications don’t recognise black faces properly, people might be wrongly profiled. “Do we understand the shortcomings of the facial recognition systems in these AI deployments?” he asks. 

Avoiding nefarious uses

There are other dangers that come with the growing influence of AI. The flood of personal data coming out of social media applications like Facebook, combined with ever-smarter algorithms, has the potential to stoke political tensions, and can even turn the tide of government elections. In 2018, British consulting firm Cambridge Analytica was exposed for having used personal data from Facebook users—ostensibly collected for research purposes—to build an algorithm that influenced voting patterns in the 2016 US presidential election.

Developing countries, with their “overstretched and unconsolidated democracies”, are particularly at risk of such nefarious uses, says Clayton Besaw, a research associate at the United States-based One Earth Future foundation, which promotes peace through good governance. He says African countries should introduce regulation to make sure the technology is not abused. “Obviously each country has its own complexities and nuance,” says Besaw. But some, he says, like Kenya or Nigeria, may want to be mindful of politicians or non-state actors who want to use technology to stoke sectarian or ideological flames, promote political violence, or sow distrust in the democratic system.

But, even then, such regulation may not stop states themselves from using the technology against their people. Besaw points to the deal that the government of Zimbabwe struck in March 2018 with Chinese tech firm Cloudwalk to import facial recognition technology, as a case in point. He says such deals are examples of top-down development of AI applications “mainly situated around social control and repression”. Ultimately, he says, African countries will have to find a balance between regulation and freedom that mitigates abuse, while not restricting non-state actors who want to develop beneficial uses. 

Building capacity in Africa

Apart from regulating the sector, another way that African nations can protect themselves against bad AI applications is through training. Hila Azadzoy, who heads global health at Ada Health, says that one of the solutions is to make sure that the teams working on AI applications are diverse in terms of gender, ethnicity, training, and background. This will increase the likelihood of unconscious biases being recognised and addressed, she says. Ada's Swahili-language app, for example, was developed in partnership with the Muhimbili University of Health and Allied Sciences in Tanzania. "We have also expanded our medical content team to ensure representation of physicians from around the world, and invested in additional language skills, ensuring that we have qualified doctors who are also native speakers of our target languages to work on our medical content," says Azadzoy.

However, in order to contribute to the desired diversity, Africa needs its own AI experts. Both Marivate and Ongere are training young Africans in AI and raising awareness about both the opportunities and the challenges the technology brings. AI Kenya produces an AI podcast and offers thought leadership in AI ethics in Kenya and beyond. And Marivate is a co-founder of the Deep Learning Indaba, an organisation that runs continental meetings and hands out awards, with the aim of making Africans shapers of AI advances rather than just observers and receivers of foreign technology.

Ongere, in Kenya, says it is "a big myth" that Africans are not doing AI, and that it always has to be imported. However, Africa remains under-resourced in terms of advanced research, he admits. Still, he says, global tech giants need to collaborate, on an equal footing, with local communities to build AI-based solutions.

In South Africa, Marivate agrees. Research on AI is vital if Africans want a voice in the development of the new technology, he says. “We cannot have AI research on Africa that has no Africans participating.” 

- Author: Eunice Kilonzo

Copyright © University of Pretoria 2024. All rights reserved.
