UP professor elected chair of UNESCO expert group crafting global standard on ethics of artificial intelligence

Posted on June 05, 2020

The group, which comprises 24 renowned specialists with multidisciplinary and pluralistic expertise on the ethics of artificial intelligence, has been appointed by UNESCO for a 24-month period.

The Ad Hoc Expert Group (AHEG) will formulate a recommendation for the first global standard-setting instrument on the ethics of artificial intelligence, following the decision of UNESCO’s General Conference at its 40th session in November 2019.

The group is made up of four members each from six regions, namely: Western Europe and North America, Eastern Europe, Latin America and the Caribbean, Asia and the Pacific, the Arab States, and Africa. The four representatives in the Africa group are from South Africa, Ghana, Rwanda and Cameroon.

Professor Ruttkamp-Bloem is also a member of UNESCO’s World Commission on the Ethics of Scientific Knowledge and Technology (COMEST) and the AU High Level Panel on Emerging Technologies. Her work on these platforms focuses, among other things, on developing artificial intelligence for the growth and benefit of humanity.

The work of the AHEG began in March.

She explained that the reason there isn’t a global instrument for the ethics of AI yet is not so much that it is uncharted territory. “It is more the nature of AI as a disruptive technology, the complexity of its impact on core sectors such as civil society, the future of work, security and surveillance, the financial sector, and education, and the difference in AI readiness of countries across the globe, that have contributed to the difficulty around formulating a global instrument,” she said.

Professor Ruttkamp-Bloem added that the absence of a global standard for AI does not, however, mean that there are no efforts across the world to research AI and how it can be used to benefit humanity. Indeed, there has been an explosion of work on the ethics of AI.

These efforts include the work of the Council of Europe’s Ad Hoc Committee on AI (CAHAI); the work of the European Commission’s High-Level Expert Group on AI, including the Ethics Guidelines for Trustworthy AI; the work of the OECD Expert Group on AI (AIGO); the OECD’s Recommendation of the Council on AI; the G20 AI Principles; and many others.

There are also many documents related to the ethics of AI developed by the private sector and by professional organisations, such as the IEEE’s Global Initiative on Ethics of Autonomous and Intelligent Systems and its work on Ethically Aligned Design; the World Economic Forum’s Global Technology Governance: A Multistakeholder Approach; the Montreal Declaration for a Responsible Development of AI; and many more.

While she is not yet at liberty to say specifically what issues the AHEG is considering, she cautioned that the general issues facing the development of AI are complex and include real threats ranging from the transgression of the right to privacy to security threats posed by the possible deployment of lethal autonomous weapon systems. Other threats relate to concerns around bias, transparency and accountability in the context of automated decision-making systems.

“Fairness usually refers to structural bias present in data, which is sometimes inadvertently, and sometimes advertently, exacerbated by machine learning processes. Think here of gender-, race-, ethnicity- or age-related bias as examples. Transparency refers to making evident the processes of the system and links closely to issues of explainability. One – very simplified – way to think about it is that transparency relates to understanding how machine learning systems are designed, developed and deployed, while explainability relates to understanding the outcomes of these systems. Accountability relates to ascribing ultimate human responsibility for the outcomes produced by machine learning systems, and the auditability and traceability of the workings of such systems, as well as the consideration of ethical questions such as whether blanket disclaimers should be allowed in this kind of technology. As mentioned already, a lot is being done on a wide array of platforms to mitigate these threats and challenges,” she said.
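To make the idea of structural bias in data a little more concrete, the sketch below is a purely illustrative example (not part of the article or of the AHEG’s work, and using made-up data) of one common fairness check, the demographic parity gap, which compares the rate of favourable outcomes across groups defined by a protected attribute. The function name and toy dataset are hypothetical.

```python
# Illustrative sketch only: a minimal demographic parity check on toy data.

def demographic_parity_gap(outcomes, groups):
    """Absolute difference in positive-outcome rates between two groups (0 and 1)."""
    rates = {}
    for g in (0, 1):
        group_outcomes = [o for o, grp in zip(outcomes, groups) if grp == g]
        rates[g] = sum(group_outcomes) / len(group_outcomes)
    return abs(rates[0] - rates[1])

# Made-up example: 1 = favourable decision (e.g. loan approved), 0 = unfavourable;
# groups encode a protected attribute such as gender or age band.
outcomes = [1, 1, 0, 1, 0, 0, 0, 1, 1, 0]
groups   = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]

print(f"Demographic parity gap: {demographic_parity_gap(outcomes, groups):.2f}")
# Prints 0.20: group 0 receives favourable outcomes 60% of the time, group 1 only 40%.
```

A large gap of this kind in historical data can be reproduced, or amplified, by a model trained on that data, which is the sense in which machine learning processes can exacerbate existing structural bias.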

Professor Ruttkamp-Bloem said her vision for the trailblazing group is that it will contribute to a global instrument that ensures humans remain at the centre of interactions with AI technologies – while stressing that humans should take up this opportunity with integrity and responsibility rather than take it for granted. “The challenge is to become the best humans we can be very fast.”

“Above all, AI technologies should enhance human flourishing and peace and harmony and protect human rights. I see the most important objective of the work of the Bureau as continuously striving to find the most efficient ways in which to include and reflect the expertise and contribution of every member of the AHEG throughout this process we have embarked on, in order to ensure the content of the Recommendation is as rich and balanced as possible,” she said.

 

- Author Masego Panyane
