For decades, we've been trying to develop artificial intelligence in our own image.
He also discusses what's missing in different approaches to creating AI.

Here are some key takeaways from the book.
In The Alignment Problem, Christian goes through many examples where machine learning algorithms have caused embarrassing and damaging failures.
A popular example is a Google Photos classification algorithm that tagged dark-skinned people as gorillas.

The problem was not with the AI algorithm but with the training data.
What's worse is that machine learning models can't tell right from wrong and make moral decisions.
Obviously, none of the AI's creators wanted the model to select candidates based on their gender.

Modeling the world as it is is one thing.
But as soon as you begin using that model, you are changing the world, in ways large and small.
There is a broad assumption underlying many machine-learning models that the model itself will not change the reality it's modeling.

"In almost all cases, this is false," Christian writes.
"Often the ground truth is not the ground truth," Christian warns.
The model is then left to explore the space for itself and find ways to maximize its rewards.

It has also found many uses in robotics.
But each of those achievements also proves that purely pursuing external rewards is not exactly how intelligence works.
For one thing, reinforcement learning models require massive numbers of training cycles to obtain even simple results.
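To make the reward-maximization idea concrete, here is a minimal Q-learning sketch. It is an illustrative toy of my own, not an example from the book: an agent in a five-state corridor, given no instructions, explores on its own and learns that moving right earns the reward.

```python
import random

# Illustrative toy (not from the book): tabular Q-learning on a
# five-state corridor where only the rightmost state gives reward.
N_STATES = 5
ACTIONS = (-1, +1)  # move left or right

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    random.seed(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            if random.random() < epsilon:  # explore
                action = random.choice(ACTIONS)
            else:  # exploit the current estimates, breaking ties randomly
                best = max(q[(state, a)] for a in ACTIONS)
                action = random.choice([a for a in ACTIONS if q[(state, a)] == best])
            nxt, reward, done = step(state, action)
            target = reward + gamma * max(q[(nxt, a)] for a in ACTIONS)
            q[(state, action)] += alpha * (target - q[(state, action)])
            state = nxt
    return q

q = train()
# The learned greedy policy moves right from every non-terminal state.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)}
```

Even this trivial environment needs hundreds of episodes of trial and error to learn a policy a human would see at a glance, which is exactly the sample-efficiency complaint above.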
Reinforcement learning systems are also very rigid.
Should AI imitate humans?
An example is self-driving cars that learn by observing how humans drive.
Imitation can do wonders, especially in problems where the rules and labels are not clear-cut.
But again, imitation paints an incomplete picture of the intelligence puzzle.
We humans learn a lot through imitation and rote learning, especially at a young age.
But imitation is but one of several mechanisms we use to develop intelligent behavior.
Indeed, it may be catastrophic.
You'll do what you would do if you were them.
But you're not them.
And what you do is not what they would do if they were you.
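A minimal sketch of the imitation idea, again my own toy rather than anything from the book: behavioral cloning turns "learn to act" into supervised learning on expert demonstrations, here reduced to a majority vote over the actions an expert took in each state.

```python
from collections import Counter, defaultdict

# Illustrative toy (not from the book): behavioral cloning in a
# five-state corridor, imitating an expert who always moves right.
N_STATES = 5

def expert(state):
    return +1  # the demonstrator always moves right toward the goal

def collect_demos(episodes=10):
    demos = []  # (state, expert_action) pairs
    for _ in range(episodes):
        state = 0
        while state < N_STATES - 1:
            action = expert(state)
            demos.append((state, action))
            state += action
    return demos

def clone(demos):
    # The simplest possible imitation: majority vote per state.
    votes = defaultdict(Counter)
    for state, action in demos:
        votes[state][action] += 1
    return {s: c.most_common(1)[0][0] for s, c in votes.items()}

cloned_policy = clone(collect_demos())
```

The cloned policy is only defined for states the expert actually visited; the moment the learner drifts somewhere the demonstrator never went, it has nothing to imitate. That is a toy version of the problem in the quote: what you do is not what they would do if they were you.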
But this too presents a challenge.
And they're becoming pervasive in every aspect of our lives.
"Our digital butlers are watching closely," Christian writes.
What comes next?
Advances in machine learning show how far we've come toward the goal of creating thinking machines.
You can read the original article here.