Artificial intelligence (AI) was once the stuff of science fiction.

But it's becoming widespread.

It is used in mobile phone technology and motor vehicles.


It powers tools foragricultureandhealthcare.

But concerns have emerged about the accountability of AI and related technologies like machine learning.

In December 2020, a computer scientist, Timnit Gebru, was fired from Google's Ethical AI team.


She had previously raised the alarm about the social effects of bias in AI technologies.

Biases in training data can have far-reaching and unintended effects.

There is already a substantial body of research about ethics in AI.


In recent years, many frameworks and guidelines have been created that identify objectives and priorities for ethical AI.

This is certainly a step in the right direction.

But it's also critical to look beyond technical solutions when addressing issues of bias or inclusivity.

Biases can enter at the level of who frames the objectives and balances the priorities.

This is especially pertinent when considering the growth of AI research and machine learning across the African continent.


Context

Research and development of AI and machine learning technologies are growing in African countries.

This might not be a problem if the principles and values in those frameworks applied universally.

But it's not clear that they do.

For instance, the European AI4People framework offers a synthesis of six other ethical frameworks.

It identifies respect for autonomy as one of its key principles.

This principle has been criticized within the applied ethical field of bioethics.

It is seen as failing to do justice to the communitarian values common across Africa.

For machine learning to be effective at making useful predictions, any learning system needs access to training data, typically structured as features (measured inputs) and labels (the outcomes to be predicted).

In most cases, both the features and the labels require human knowledge of the problem.
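To make the idea concrete, here is a minimal sketch (not from the article; the crop example and its values are hypothetical) of what features and labels look like in a supervised-learning dataset, and why both depend on human knowledge:

```python
# Hypothetical crop-yield dataset: features are measurements someone
# chose to collect; labels are human judgments of the outcome.
features = [
    [120, 34.5],  # rainfall (mm), mean temperature (°C)
    [80, 29.0],
    [200, 31.2],
]
labels = ["good", "poor", "good"]  # human-assigned harvest quality

# A learning system trains on feature-label pairs; whatever local
# context the annotators missed is invisible to the model.
training_data = list(zip(features, labels))
print(len(training_data))  # → 3
```

Both which features are measured and how labels are assigned encode human choices, which is where contextual bias can enter.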

But a failure to correctly account for the local context could result in underperforming systems.

For example, mobile phone call records have been used to estimate population sizes before and after disasters.

However, vulnerable populations are less likely to have access to mobile devices.

So, this kind of approach could yield results that aren't useful.
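The undercounting effect can be sketched in a few lines (the group names and access rates below are invented for illustration, not drawn from any real disaster study):

```python
# Hypothetical sketch: estimating population from mobile-phone records
# systematically undercounts groups with low phone access.
population = {"group_a": 1000, "group_b": 1000}      # true sizes
phone_access = {"group_a": 0.9, "group_b": 0.3}      # assumed access rates

# A phone-record estimate only "sees" people with devices.
observed = {g: round(population[g] * phone_access[g]) for g in population}

print(sum(population.values()), sum(observed.values()))  # → 2000 1200
```

The estimate misses 800 of 2,000 people, and the error is concentrated in the group with the least phone access, which is exactly the vulnerable population the estimate was meant to help.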

Going forward

AI technologies must not simply worsen or incorporate the problematic aspects of current human societies.

Being sensitive to and inclusive of different contexts is vital for designing effective technical solutions.

It is equally important not to assume that values are universal.
