Artificial intelligence is already making decisions in the fields of business, health care and manufacturing.

But AI algorithms generally still get help from people applying checks and making the final call.

Pop culture has long portrayed our general distrust of AI.

In the 2004 film I, Robot, Detective Del Spooner resents the robot that rescued him from a sinking car instead of a 12-year-old girl, Sarah. He says:

I was the logical choice.

It calculated that I had a 45% chance of survival.

Sarah only had an 11% chance.

That was somebody’s baby. 11% is more than enough.

A human being would’ve known that.

Unlike humans, robots lack a moral conscience and follow the ethics programmed into them.

At the same time, human morality is highly variable.

The right thing to do in any situation will depend on who you ask.

For machines to help us to their full potential, we need to make sure they behave ethically.

The self-driving future

Imagine a future with self-driving cars that are fully autonomous.

But what if things go wrong?

As dramatic as this may seem, we’re only a few years away from potentially facing such dilemmas.

Tesla does not yet produce fully autonomous cars, although it plans to.

Instead, if the car detects a potential collision, it sends alerts to the driver to take action.

In other words, the driver’s actions are not overridden even if they themselves are causing the collision.

In autopilot mode, however, the car should automatically brake for pedestrians.

But would we want an autonomous car to make this decision?
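
To make the gap between driver assistance and full autonomy concrete, here is a minimal sketch of the two behaviours. The mode names, the 0.5 alert threshold and the respond_to_obstacle function are illustrative assumptions, not Tesla’s actual control logic.

```python
# Hypothetical sketch contrasting driver-assist alerts with autonomous braking.
# Modes, threshold and function names are assumptions, not Tesla's real logic.

DRIVER_ASSIST = "driver_assist"
AUTOPILOT = "autopilot"

def respond_to_obstacle(mode: str, collision_probability: float) -> str:
    """Decide how the car reacts when a potential collision is detected."""
    if collision_probability < 0.5:  # assumed alert threshold
        return "continue"
    if mode == DRIVER_ASSIST:
        # The car only warns; the driver's actions are never overridden.
        return "alert_driver"
    # Under full autonomy, the car itself must act, e.g. brake for a pedestrian.
    return "emergency_brake"

print(respond_to_obstacle(DRIVER_ASSIST, 0.9))  # alert_driver
print(respond_to_obstacle(AUTOPILOT, 0.9))      # emergency_brake
```

The design question the article raises sits in that last branch: once no human is in the loop, the choice of when and for whom to brake is made entirely by whoever wrote the rule.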

What’s a life worth?

If the car’s decision considered a monetary value assigned to each human life, technically it would just be making a cost-benefit analysis.

This may sound alarming, but there are already technologies being developed that could allow for this to happen.
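
To see what such a cost-benefit analysis looks like, here is a deliberately crude sketch using the survival probabilities quoted from I, Robot. The expected_value function and the equal VALUE_OF_LIFE weighting are hypothetical assumptions, not any real system’s policy.

```python
# Purely illustrative sketch of a cost-benefit rescue decision,
# using the survival probabilities quoted from I, Robot.
# The "value of a life" here is a hypothetical placeholder.

VALUE_OF_LIFE = 1.0  # assume every life is weighted equally

def expected_value(survival_probability: float) -> float:
    """Expected 'benefit' of a rescue attempt under a cost-benefit rule."""
    return survival_probability * VALUE_OF_LIFE

candidates = {"Spooner": 0.45, "Sarah": 0.11}

# A cost-benefit machine simply picks the highest expected value...
choice = max(candidates, key=lambda name: expected_value(candidates[name]))
print(choice)  # Spooner -- "11% is more than enough" never enters the calculation
```

The arithmetic is trivial; the controversy is that anything like Spooner’s objection has no place in it unless a developer deliberately puts it there.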

The health-care industry is another area where we will see AI making decisions that could save or harm humans.

Experts are increasingly developing AI to spot anomalies in medical imaging, and to help physicians in prioritizing medical care.

Another example is an automated medicine reminder system.

How should the system react if a patient refuses to take their medication?

And how does that affect the patient’s autonomy, and the overall accountability of the system?
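
As a thought experiment, here is a minimal sketch of one possible escalation policy for such a reminder system. Every step and name here is an assumed design choice, not a real product; each escalation trades a little more of the patient’s autonomy for safety.

```python
# Hypothetical escalation policy for an automated medicine reminder.
# The steps and their order are design assumptions, not a real product's API;
# each step trades a little more of the patient's autonomy for safety.

ESCALATION_STEPS = [
    "remind_patient",    # gentle nudge: autonomy fully preserved
    "remind_patient",    # repeat once
    "notify_caregiver",  # involves a third party: autonomy reduced
    "alert_physician",   # clinical intervention: accountability shifts
]

def next_action(refusals: int) -> str:
    """Choose an action based on how many times the patient has refused."""
    step = min(refusals, len(ESCALATION_STEPS) - 1)
    return ESCALATION_STEPS[step]

for refusals in range(5):
    print(refusals, "->", next_action(refusals))
```

Where the developer draws the line between "remind" and "notify someone else" is itself an ethical decision, made long before any patient refuses a pill.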

AI-powered drones and weaponry are also ethically concerning, as they can make the decision to kill.

There are conflicting views on whether such technologies should be completely banned or regulated.

For example, the use of autonomous drones can be limited to surveillance.

Some have called for military robots to be programmed with ethics.

But this raises issues about the programmer’s accountability if a drone kills civilians by mistake.

Philosophical dilemmas

There have been many philosophical debates regarding the ethical decisions AI will have to make.

The classic example of this is the trolley problem.

People often struggle to make decisions that could have a life-changing outcome.

Examples of failures and bias in technology implementation have included a racist soap dispenser and inappropriate automatic image labelling.

AI is not good or evil.

The effects it has on people will depend on the ethics of its developers.

So to make the most of it, we’ll need to reach a consensus on what we consider ethical.
