The late Stephen Hawking called artificial intelligence the biggest threat to humanity.
But Hawking, though a revered physicist, was not a computer scientist.
Elon Musk compared AI adoption to summoning the devil.

But Elon is, well, Elon.
And there are dozens of movies that depict a future in which robots and artificial intelligence go berserk.
But they are just a reminder of how bad humans are at predicting the future.

It's very easy to dismiss warnings of the robot apocalypse.
As for the AI that we have today, it can best be described as an idiot savant.
Our algorithms can perform remarkably well at narrow tasks but fail miserably when faced with situations that require general problem-solving skills.

Russell certainly knows what he's talking about.
If super-intelligent AI takes us by surprise, it will be too late.

In the first few chapters of Human Compatible, Russell elaborates on the shortcomings of current approaches to developing AI.
Focusing on raw computing power misses the point entirely.
"Speed alone won't give us AI," Russell writes.

Running flawed algorithms on a faster computer does have a bright side, however: you get the wrong answer more quickly.
It's not hardware that is holding AI back; it's software.
What is general AI?
That is something that, after six decades, is still being debated among scientists.
This definition is in line with observations made by other leading AI researchers.
We have not been able to create such systems yet.
These systems break as soon as they face problems and situations that fall outside their rules or training examples.
Consider a robot that is supposed to learn to stand up.
I believe this capability is the most important step needed to reach human-level AI.
Probably not, given how little attention is currently being given to the AI control problem.
One apparent option would be to ban the development of general-purpose, human-level AI systems.
So banning AI research is not a solution.
But a solution is needed nonetheless.
Some scientists have likened these concerns to worrying about overpopulation on Mars.
Others claim that raising concerns about the threat of AGI will cast doubt over the benefits of AI.
Therefore, they conclude, the best thing to do is nothing, and to keep quiet about the risks.
Again, Russell rejects these claims.
We simply wouldn't be having this discussion at all.
"Second, if the risks are not successfully mitigated, there will be no benefits," he writes.
Of course, highlighting the problem is not the same as solving it.
So how do we prevent AI from going haywire?
At this stage, it's really hard to see how the super-intelligent AI story will unfold.
But a good place to start is to rethink our approach to defining and creating artificial intelligence.
Russell addresses this at the beginning of Human Compatible.
Instead, Russell suggests, we should insist on AI that is focused on understanding and achieving human objectives.
A machine that assumes it knows the true objective perfectly will pursue it single-mindedly.
And this last point is very important, because it is exactly what current AI systems lack: they pursue their specified objectives regardless of the harm their functionality brings to humans.
These problems are likely to grow as AI algorithms become more efficient at performing their tasks.
Finally, Russell suggests that the source of information about human preferences is human behavior and choices.
The AI will continue to learn and evolve as human choices evolve.
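To make this idea concrete, here is a minimal sketch (not from the book) of how a machine could learn human preferences from observed choices. It assumes a hypothetical set of candidate objectives and a "Boltzmann-rational" human who is more likely to pick options with higher utility; the machine keeps a probabilistic belief over the candidates and updates it with each observed choice, rather than ever assuming it knows the true objective perfectly.

```python
import math

# Hypothetical candidate objectives: the utility each assigns to two options.
candidates = {
    "prefers_A": {"A": 1.0, "B": 0.0},
    "prefers_B": {"A": 0.0, "B": 1.0},
}

# Uniform prior: the machine starts out uncertain about the human's objective.
belief = {name: 0.5 for name in candidates}

def update(belief, chosen, rejected, rationality=2.0):
    """Bayesian update assuming a Boltzmann-rational human: the probability
    of choosing an option grows with the utility that option has under the
    candidate objective."""
    posterior = {}
    for name, util in candidates.items():
        # Likelihood of the observed choice if this candidate is the true objective.
        p_choice = math.exp(rationality * util[chosen]) / (
            math.exp(rationality * util[chosen])
            + math.exp(rationality * util[rejected])
        )
        posterior[name] = belief[name] * p_choice
    total = sum(posterior.values())
    return {name: p / total for name, p in posterior.items()}

# Each observed human choice shifts the belief toward the matching objective.
belief = update(belief, chosen="A", rejected="B")
print(belief)
```

After one observation of the human choosing A over B, the belief in "prefers_A" rises above one half, but never to certainty, so the machine would remain open to correction as further choices come in.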