Whether either or both of those threats are real is hotly debated among scientists and thought leaders.
So far, so good.

What can go wrong?
But contrary to what the media portrays, not all AI algorithms are opaque.
They are composed of hand-coded rules, meticulously put together by software developers and domain experts.
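To make that concrete, here is a rough, hypothetical sketch of such a rule-based system; the rules and thresholds are invented for illustration and are not taken from any real product:

```python
# Hypothetical rule-based loan screener. Every rule is hand-written, so
# developers and domain experts can read exactly why it decides as it does.

def screen_loan(income, debt, years_employed):
    """Return a decision and the explicit rule that produced it."""
    if income <= 0:
        return False, "no verifiable income"
    if debt / income > 0.4:
        return False, "debt-to-income ratio above 40%"
    if years_employed < 2:
        return False, "less than 2 years of employment history"
    return True, "all hand-coded criteria satisfied"

print(screen_loan(income=50_000, debt=30_000, years_employed=5))
# (False, 'debt-to-income ratio above 40%')
```

Because every rule is written out by hand, any decision can be traced back to the exact line that produced it.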

Machine learning algorithms, in contrast, derive their behavior from their training data rather than from hand-written rules. This means that the developers don't necessarily have the final say on how the AI algorithms behave.
But again, not all machine learning models are opaque.
This provides developers with a chance to discover potentially problematic factors and correct the model.
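As a minimal sketch of what that inspection can look like (assuming scikit-learn is available and using purely synthetic data with hypothetical feature names), a developer can read the learned weights of an interpretable model directly:

```python
# Minimal sketch with synthetic data (scikit-learn assumed installed): with an
# interpretable model such as logistic regression, developers can inspect the
# learned weights and spot factors that should not be driving decisions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
feature_names = ["income", "debt_ratio", "zip_code_group"]  # hypothetical features
X = rng.normal(size=(500, 3))
# The synthetic labels deliberately leak the zip-code feature into the outcome.
y = (X[:, 0] - X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)
for name, weight in zip(feature_names, model.coef_[0]):
    print(f"{name:>15}: {weight:+.2f}")
# A large weight on zip_code_group flags a potential proxy for protected
# attributes -- an opaque model would not surface this so directly.
```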

A deep learning model learns its own rules from the training data, and it might even spot relevant patterns that go unnoticed by human experts. But there's no feature-by-feature breakdown of how the AI algorithm is making its decisions. And a deep learning system doesn't need to make errors before its opacity becomes problematic.
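For contrast, a minimal sketch (again assuming scikit-learn and synthetic data) of why such a breakdown is hard to get from a neural network: its "reasoning" is spread across thousands of numeric weights, none of which corresponds to a readable rule:

```python
# Minimal sketch (scikit-learn, synthetic data): even a small neural network
# spreads its decision logic across thousands of weights with no per-feature meaning.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))
y = (X.sum(axis=1) > 0).astype(int)

model = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=1000).fit(X, y)
n_params = sum(w.size for w in model.coefs_) + sum(b.size for b in model.intercepts_)
print(n_params)  # thousands of parameters, none of which maps to a readable rule
```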

Suppose an angry customer wants to know why an AI application has turned down their loan application.
When you have an opaque system, you could just shrug and say, "The computer said so," a response O'Neil questions in Weapons of Math Destruction.

Some of this secrecy is justified.
Algorithms are, after all, mindless machines that play by their own rules.
They don't use common-sense judgment to identify bad actors who twist the rules for devious ends.

This only shows the fine line that organizations walk when they use AI algorithms.
When AI systems are not transparent, they don't even need to make errors to wreak havoc.
Even the shadow of a doubt about a system's performance can be enough to sow mistrust.
On the other hand, too much transparency can also backfire and lead to other disastrous results.
O'Neil wrote Weapons of Math Destruction in 2016, before rules like GDPR and CCPA came into effect.
Other developments, such as the ethical AI rules of the European Commission, also incentivize transparency.
Who bears the damage of AI algorithms?
In her book, O'Neil explores many cases where algorithms cause damage to people's lives.
For example, O'Neil says, "The new recidivism models are complicated and mathematical."
There are two more factors that make dangerous AI algorithms even more harmful.
First, the data.
Machine learning algorithms rely on quality data for training and accuracy; when the historical data they learn from is flawed or biased, the models reproduce those flaws in their decisions.
The second problem is the feedback loop.
On the topic of policing, O'Neil argues that prejudiced crime prediction causes more police presence in impoverished neighborhoods.
"This creates a pernicious feedback loop," she writes. "The policing itself spawns new data, which justifies more policing."
And our prisons fill up with hundreds of thousands of people found guilty of victimless crimes.
It sends more police to arrest them, and when they're convicted it sentences them to longer terms.
This drives their credit rating down further, creating nothing less than a death spiral of modeling.
Being poor in a world of WMDs is getting more and more dangerous and expensive.
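A toy simulation can illustrate the feedback loop O'Neil describes. The numbers below are entirely hypothetical and are not drawn from the book, but they show how a small skew in historical records, fed back into a predictive policing model, keeps reinforcing itself even when the underlying crime rates are identical:

```python
# Toy feedback-loop simulation; all numbers are hypothetical. Both areas have the
# same underlying crime rate, but one starts with a slightly worse record.

TRUE_CRIME_RATE = 10
PATROLS = 10
recorded = {"area_a": 12, "area_b": 8}  # skewed historical records

for year in range(1, 6):
    # The "predictive" model concentrates patrols in the area with the worse record.
    hotspot = max(recorded, key=recorded.get)
    for area in recorded:
        share = 0.7 if area == hotspot else 0.3
        # More patrols record a larger share of actual incidents, and those new
        # records feed the next year's predictions.
        recorded[area] += TRUE_CRIME_RATE * 0.1 * (PATROLS * share)
    print(f"year {year}:", {a: round(v, 1) for a, v in recorded.items()})

# The gap between the areas widens every year even though the underlying crime
# never differed: the policing itself spawns the data that justifies more policing.
```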
"As a statistician would put it, can it scale?" O'Neil writes in Weapons of Math Destruction.
Consider the Google Search example we discussed earlier.
A tiny mistake in Google's AI algorithm can have a massive impact on public opinion.
Likewise, Facebook's ranking algorithms decide the news that hundreds of millions of people see every day.
If those algorithms are faulty, malicious actors can game them to spread fake, sensational news.
Even when there's no direct malicious intent, they can still cause harm.
So, what should be done about this?
We need to acknowledge the limits of the AI algorithms that we deploy.
"Big Data processes codify the past. They do not invent the future. Doing that requires moral imagination, and that's something only humans can provide. Sometimes that will mean putting fairness ahead of profit," O'Neil writes.
You can read the original article here.