In 2016, a ProPublica investigation revealed that a recidivism assessment tool that used machine learning was biased against black defendants.

And Google refrained from renewing its AI contract with the Department of Defense after employees raised ethical concerns.

Those are just a few of the many ethical controversies surrounding artificial intelligence algorithms in the past few years.


There's a six-decade history behind AI research.


AI systems can invisibly threaten the autonomy of humans who interact with them by influencing their behavior.


The companies that run these systems give users very few controls over their AI algorithms.

Meanwhile, various studies have shown that search results can have a dramatic influence on the behavior of users.

One of the greatest concerns of current artificial intelligence technologies is the threat of adversarial examples.

This happens mainly because AI algorithms work in ways that are fundamentally different from the human brain.

Adversarial examples can happen by accident, such as an AI system that mistakes sand dunes for nudes.

But they can also be weaponized into harmful adversarial attacks against critical AI systems.

There have already been several efforts to build robust AI systems that are resilient to adversarial attacks.

AutoZOOM, a method developed by researchers at the MIT-IBM Watson AI Lab, helps detect adversarial vulnerabilities in AI systems.
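To make the idea concrete, here is a minimal sketch of the fast gradient sign method (FGSM), one of the simplest and best-known adversarial attacks. Note that FGSM is a different technique from AutoZOOM, which probes models without access to their gradients; and `model`, `image`, and `label` are placeholders for any trained PyTorch classifier and its inputs.

```python
# Minimal FGSM sketch: nudge each pixel slightly in the direction
# that increases the model's loss, producing an image that looks
# unchanged to humans but can flip the model's prediction.
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """Return an adversarially perturbed copy of `image`."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel by epsilon along the sign of the gradient.
    perturbed = image + epsilon * image.grad.sign()
    # Keep pixel values in the valid [0, 1] range.
    return perturbed.clamp(0, 1).detach()
```

Defenses such as adversarial training work by feeding examples like these back into the training loop so the model learns to resist them.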

Since machine learning models are based on statistics, it should be made clear to users how accurate a system is.
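One way to communicate that statistical nature is to surface the model's confidence and abstain when it is low. Here is a minimal sketch of that pattern; the logits and threshold are made-up placeholders, not values from any particular system.

```python
# Minimal sketch: report a classifier's confidence alongside its
# prediction, and abstain when the confidence falls below a threshold,
# so users aren't shown guesses as if they were certain answers.
import numpy as np

def predict_with_confidence(logits: np.ndarray, threshold: float = 0.8):
    # Numerically stable softmax over the raw class scores.
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    best = int(probs.argmax())
    if probs[best] < threshold:
        return None, float(probs[best])  # abstain: not confident enough
    return best, float(probs[best])

label, conf = predict_with_confidence(np.array([2.0, 0.5, 0.1]))
print(label, f"{conf:.2f}")  # None 0.73 -> the system declines to answer
```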

Privacy and data governance

AI systems must guarantee privacy and data protection throughout their entire lifecycle.

Machine learning systems are data-hungry.

The more quality data they have, the more accurate they become.

That's why companies have a tendency to collect more and more data from their users.

Companies like Facebook and Google have built economic empires by building and monetizing comprehensive digital profiles of their users.

But how responsible are these companies in maintaining the security and privacy of this data? Not very.

They're also not very explicit about the amount of data they collect and the ways they use it.

However, more needs to be done.

Many companies share sensitive user information with their employees or third-party contractors to label data and train their AI algorithms.

This does not sit well with users, who expect to enjoy privacy in their homes.

Transparency

The European Commission's experts define AI transparency in terms of three components: traceability, explainability, and communication.

AI systems based on machine learning and deep learning are highly complex.

They develop their behavior based on correlations and patterns found in thousands or millions of training examples.

Often, the creators of these algorithms don't know the logical steps behind the decisions their AI models make.

This makes it very hard to find the reasons behind the errors these algorithms make.

Explainable AI has become the focus of several initiatives by the private and public sectors.
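As an illustration of what such tooling looks like, here is a minimal sketch (not from the original article) that uses scikit-learn's permutation importance to estimate which input features drive a model's predictions; the dataset and model are arbitrary placeholders.

```python
# Minimal sketch of a model-agnostic explanation technique:
# permutation importance shuffles each feature and measures how much
# the model's test score drops, revealing which inputs matter most.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature 10 times and record the average score drop.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# Print the five most influential features.
ranked = sorted(zip(X.columns, result.importances_mean),
                key=lambda t: -t[1])
for name, mean in ranked[:5]:
    print(f"{name}: {mean:.3f}")
```

Techniques like this don't open the black box itself, but they give developers and auditors a way to check that a model's decisions rest on sensible signals.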

Another important point raised in the EC document is communication.

Users should know when they're interacting with an AI system rather than a human. Google faced this problem when it demoed Duplex, an AI agent that places phone calls on behalf of users: the calls initially gave no indication that the caller was a machine. The company later updated the service to present itself as Google Assistant.

Diversity, non-discrimination, and fairness

Algorithmic bias is one of the well-known controversies of contemporary AI technology.

For a long time, we believed that AI would not make subjective decisions based on bias. But machine learning models reflect, and can even amplify, the biases embedded in their training data.

One consideration to note, however, is that fairness and discrimination often depend on the domain.

For instance, in hiring, organizations must check that their AI systems don't make decisions based on protected attributes such as gender or ethnicity.
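To show what such a check might look like in practice, here is a minimal sketch of one common fairness test, demographic parity, which compares the rate of positive decisions across groups. The data, column names, and protected attribute are made-up placeholders.

```python
# Minimal demographic parity check for a hiring model: compare the
# fraction of positive decisions (e.g., shortlisting) across groups
# defined by a protected attribute.
import pandas as pd

def selection_rates(decisions: pd.Series, groups: pd.Series) -> pd.Series:
    """Fraction of positive decisions per group."""
    return decisions.groupby(groups).mean()

# Example: model decisions (1 = shortlist) and a hypothetical attribute.
df = pd.DataFrame({
    "decision": [1, 0, 1, 1, 0, 0, 1, 0],
    "group":    ["a", "a", "a", "a", "b", "b", "b", "b"],
})

rates = selection_rates(df["decision"], df["group"])
print(rates)
# Parity gap: difference between the most- and least-favored groups.
print("parity gap:", rates.max() - rates.min())
```

Demographic parity is only one of several competing fairness definitions, which is exactly why the appropriate check depends on the domain.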

The social aspect of AI has been deeply studied.

Some companies have started to acknowledge this and correct the situation with changes aimed at making the experience more social.

The environmental impact of AI is less discussed, but it's equally important.

Training and running AI systems in the cloud consumes a lot of electricity and leaves a huge carbon footprint.

This is a problem that will grow worse as more and more companies use AI algorithms in their applications.

One remedy is to use lightweight edge AI, which requires very little power and can run on renewable energy.

Another solution is to use AI itself to help improve the environment.

For instance, machine learning algorithms can help manage traffic and public transport to reduce congestion and carbon emissions.

In most cases, ethical guidelines are not in line with the business model and interests of tech companies.

That's why there should be oversight and accountability.

"When unjust adverse impact occurs, accessible mechanisms should be foreseen that ensure adequate redress. Knowing that redress is possible when things go wrong is key to ensure trust," the EC document states.

