Events over the past few years have revealed several human rights violations associated with increasing advances in artificial intelligence (AI).

Algorithms created to regulate speech online have censored speech ranging from religious content to sexual diversity.

AI systems created to monitor illegal activities have been used to track and target human rights defenders.


The list goes on.

Our conclusion is that existing safeguards leave much to be desired.

Ethics and values

Some companies voluntarily adopt ethical frameworks that are difficult to implement and have little concrete effect.


A side event at the 76th Session of the UN General Assembly on New Tech and Human Rights.

The reason is twofold.

First, ethics are founded on values, not rights, and ethical values tend to differ across the spectrum.



Even mandatory frameworks, like Canada's Algorithmic Impact Assessment Tool, act merely as guidelines supporting best practices.

Ultimately, self-regulatory approaches do little more than delay the development and implementation of laws to regulate AI's uses.

And as illustrated with the European Union's recently proposed AI regulation, even attempts towards developing such laws have drawbacks.

It permits companies to adopt AI technologies so long as their operational risks are low.

Just because operational risks are minimal doesn't mean that human rights risks are non-existent.

At its core, this approach is anchored in inequality.

It stems from an attitude that conceives of fundamental freedoms as negotiable.

So the question remains: why is it that such human rights violations are permitted by law?

Although many countries possess charters that protect citizens' individual liberties, those rights are protected against governmental intrusions alone.

Companies developing AI systems aren't obliged to respect our fundamental freedoms.

But even laws that are anchored in human rights often lead to similar results.

Although an important step towards more acute data protection in cyberspace, this law hasn't had its desired effect.

The reason is twofold.

First, the solutions favored don't always permit users to concretely mobilize their human rights.

Second, lawmakers attempt to protect human rights while ensuring that the laws adopted don't impede technological progress.

But this balancing act often results in merely illusory protection, without offering concrete safeguards to citizens' fundamental freedoms.

Any solution must also include citizen participation.

Legislative approaches seek only to regulate technology's negative side effects rather than address their ideological and societal biases.

But addressing human rights violations triggered by technology after the fact isn't enough.

Technological solutions must primarily be based on principles of social justice and human dignity rather than technological risks.

They must be developed with an eye to human rights in order to ensure adequate protection.

One approach gaining traction is known as Human Rights By Design.

Here, companies do not permit abuse or exploitation as part of their business model.

Rather, they commit to designing tools, technologies, and services to respect human rights by default.

This approach aims to encourage AI developers to categorically consider human rights at every stage of development.

It ensures that algorithms deployed in society will remedy rather than exacerbate societal inequalities.

It takes the steps necessary to allow us to shape AI, and not the other way around.
