Is my car hallucinating?

Is the algorithm that runs the police surveillance system in my city paranoid?

Is that how my toaster feels?

We created near-sentient algorithms — but now they’re devolving into bigots

This all sounds ludicrous until we realize that our algorithms are increasingly being made in our own image.

As we've learned more about our own brains, we've enlisted that knowledge to create algorithmic versions of ourselves.

Algorithms are becoming the near-sentient backdrop of our lives.

The most popular algorithms currently being put into the workforce are deep learning algorithms.

These algorithms mirror the architecture of human brains by building complex representations of information.

Being like our brains, these algorithms are increasingly at risk of mental health problems.
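
To make that comparison a little more concrete, here is a minimal sketch, in PyTorch, of the kind of layered "deep" network the term refers to. The layer sizes are arbitrary choices for illustration, not a description of any particular system.

```python
# A minimal sketch (illustrative only): a stack of layers, each building a
# more abstract representation of the output of the layer before it.
import torch.nn as nn

deep_net = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),   # raw pixels -> low-level features
    nn.Linear(256, 64), nn.ReLU(),    # low-level features -> more abstract ones
    nn.Linear(64, 10),                # abstract features -> a decision
)
```

Each layer transforms the output of the one before it, and it is these stacked, learned representations that give the analogy with brains its force.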

Deep Blue, the chess program that beat Garry Kasparov, won through brute-force search: anyone could understand how it worked even if they couldn't do it themselves.

AlphaGo learned by watching others and by playing itself.

Computer scientists and Go players alike are befuddled by AlphaGo's unorthodox play.

Its strategy seems at first to be awkward.

Only in retrospect do we understand what AlphaGo was thinking, and even then it's not all that clear.

To give you a better understanding of what I mean by thinking, consider this.

Programs such as Deep Blue can have a bug in their programming.

They can crash from memory overload.

Algorithms such as AlphaGo are entirely different.

Their problems cannot be seen by looking at their source code.

They are embedded in the way that they represent information.

That representation is an ever-changing high-dimensional space, much like walking around in a dream.

Solving problems there requires nothing less than a psychotherapist for algorithms.

Take the case of driverless cars.

Under most normal conditions, the driverless car will recognize a stop sign for what it is.

But not all conditions are normal.

Subjected to something frighteningly similar to the high-contrast shade of a tree, the algorithm hallucinates.

How many different ways can the algorithm hallucinate?

To find out, we would have to provide the algorithm with all possible combinations of input stimuli.

This means that there are potentially infinite ways in which it can go wrong.

Crackerjack programmers already know this, and take advantage of it by creating what are called adversarial examples.

In the algorithmic world, this is called overfitting.

When this happens in a brain, we call it superstition.
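
To make "adversarial examples" concrete, here is a minimal sketch of one standard recipe, the fast gradient sign method. The tiny classifier and the "stop sign" input below are made-up stand-ins, not anything taken from a real driverless car.

```python
# Fast gradient sign method, sketched on a toy model (assumed for illustration).
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

model = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 2))
model.eval()

x = torch.randn(1, 32, requires_grad=True)   # a made-up "stop sign" feature vector
label = torch.tensor([0])                    # the class the input really belongs to

# Ask how the loss changes with respect to the input itself.
loss = F.cross_entropy(model(x), label)
loss.backward()

# Nudge every input dimension a small step in whichever direction raises the loss.
epsilon = 0.25
x_adv = x + epsilon * x.grad.sign()

print("original prediction:   ", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```

Against a large image classifier, a perturbation this small can be invisible to a human eye yet still flip the prediction, which is exactly the kind of hallucination described above.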

The biggest algorithmic failure due to superstition that we know of so far is called the parable of Google Flu.

Google Flu used what people typed into Google to predict the location and intensity of influenza outbreaks.

Its predictions eventually went badly wrong, overestimating flu levels. Like an algorithmic witchdoctor, Google Flu was simply paying attention to the wrong things.
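
A toy regression with invented numbers shows the shape of that mistake. This is not Google's actual model, just a sketch of how spurious query terms can creep into a forecast.

```python
# Toy illustration (made-up data): predict weekly flu activity from the volume
# of a handful of search queries.
import numpy as np

rng = np.random.default_rng(0)

query_volume = rng.poisson(lam=100.0, size=(52, 5)).astype(float)  # 52 weeks, 5 terms

# Pretend only the first query term genuinely tracks flu; the rest are noise.
flu_activity = 0.3 * query_volume[:, 0] + rng.normal(0.0, 5.0, size=52)

# Ordinary least squares: find weights that map query counts to flu activity.
weights, *_ = np.linalg.lstsq(query_volume, flu_activity, rcond=None)
print(np.round(weights, 3))

# The superstition: terms that merely co-varied with flu season in the training
# data also pick up weight, and the model keeps trusting them after the
# coincidence breaks down.
```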

Algorithmic pathologies might be fixable.

But in practice, algorithms are often proprietary black boxes whose updating is commercially protected.

The algorithmic faultline that separates the wealthy from the poor is particularly compelling.

But if we can't detect bias in ourselves, why would we expect to detect it in our algorithms?

When algorithms are trained on human data, they learn our biases.

As Caliskan noted: "Many people think machines are not biased.

But machines are trained on human data.

And humans are biased."

Social media is a writhing nest of human bias and hatred.

Algorithms that spend time on social media sites rapidly become bigots.

These algorithms are biased against male nurses and female engineers.

They will view issues such as immigration and minority rights in ways that don't stand up to investigation.
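
A stripped-down sketch, in the spirit of Caliskan and colleagues' word-embedding association test, shows how such a bias is measured: as a difference in how close a target word sits to two sets of attribute words. The vectors below are random placeholders; in the real test they come from embeddings trained on enormous amounts of human text, which is exactly where the bias gets in.

```python
# Sketch of an embedding-association bias measurement (placeholder vectors).
import numpy as np

rng = np.random.default_rng(1)
words = ["nurse", "engineer", "he", "him", "man", "she", "her", "woman"]
vec = {w: rng.normal(size=50) for w in words}   # stand-ins for trained embeddings

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def gender_lean(target, male=("he", "him", "man"), female=("she", "her", "woman")):
    # Positive: the target sits closer to the male words; negative: to the female words.
    to_male = np.mean([cosine(vec[target], vec[w]) for w in male])
    to_female = np.mean([cosine(vec[target], vec[w]) for w in female])
    return to_male - to_female

print("nurse:   ", round(gender_lean("nurse"), 3))
print("engineer:", round(gender_lean("engineer"), 3))
```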

Given half a chance, we should expect algorithms to treat people as unfairly as people treat each other.

But algorithms are by construction overconfident, with no sense of their own fallibility.

Algorithms can also have mental-health problems that stem from the way they are built.

They can forget older things when they learn new information.

Imagine learning a new co-worker's name and suddenly forgetting where you live.
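
Machine-learning researchers call this catastrophic forgetting, and a minimal sketch of it looks like the following: train a small network on one task, then on a second, and watch its performance on the first collapse. The two tasks here are made-up random rules, purely for illustration.

```python
# Catastrophic forgetting, sketched on two made-up tasks.
import torch
import torch.nn as nn

torch.manual_seed(0)

net = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.SGD(net.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

def make_task(seed):
    # Each task is a different random linear rule over the same kind of input.
    g = torch.Generator().manual_seed(seed)
    x = torch.randn(200, 10, generator=g)
    w = torch.randn(10, generator=g)
    return x, (x @ w > 0).long()

def train(x, y, steps=300):
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(net(x), y).backward()
        opt.step()

def accuracy(x, y):
    with torch.no_grad():
        return (net(x).argmax(dim=1) == y).float().mean().item()

task_a, task_b = make_task(1), make_task(2)

train(*task_a)
print("task A accuracy after learning A:", accuracy(*task_a))

train(*task_b)   # learning the new co-worker's name
print("task A accuracy after learning B:", accuracy(*task_a))   # the old knowledge fades
```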

When things become pathological is often a matter of opinion.

Evidence based on Ronald Reagan's speech patterns now suggests that he probably had dementia while in office as US president.

In many cases, it takes repeated malfunctioning to detect a problem.

Diagnosis of schizophrenia requires at least one month of fairly debilitating symptoms.

The problem is not visible in our hardware.

It's in our software.

The many ways our minds go wrong make each mental-health problem unique unto itself.

There is a lot that can go wrong in minds such as ours.

Carl Jung once suggested that in every sane man hides a lunatic.

As our algorithms become more like us, it is getting easier for that lunatic to hide.

This article was originally published at Aeon by Thomas T Hills and has been republished under Creative Commons.
