Welcome to AI book reviews, a series of posts that explore the latest literature on artificial intelligence.
It’s free, every week, in your inbox.
In his book The Myth of Artificial Intelligence, Erik Larson argues that the belief that we are on a sure path to human-level AI is a myth, and that this myth discourages scientists from thinking about new ways to tackle the challenge of intelligence.

Imagine you step out of your house and see that the road is wet. Your first thought is that it must have been raining.
But it's sunny and the sidewalk is dry, so you immediately rule out the possibility of rain.
As you look to the side, you see a road-washing tanker parked down the street.

You conclude that the road is wet because the tanker washed it.
We're constantly inferring things based on what we know and what we perceive.
Most of it happens subconsciously, in the background of our minds, without focus or direct attention.

AI researchers base their systems on two main types of inference: deductive and inductive.
Deductive inference uses prior knowledge to reason about the world.
This is the basis of symbolic artificial intelligence, the main focus of researchers in the early decades of AI.
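As a rough, hypothetical sketch (not taken from the book), deduction can be pictured as applying hand-written rules to known facts until nothing new follows. The facts and rules below are invented purely for illustration:

```python
# Deductive inference: conclusions follow necessarily from stored rules and facts.
facts = {"it_is_sunny", "tanker_parked_nearby", "tanker_washed_road"}

# Each rule: if all premises are known, the conclusion is known too.
rules = [
    ({"it_is_raining"}, "road_is_wet"),
    ({"tanker_washed_road"}, "road_is_wet"),
    ({"it_is_sunny"}, "not_raining"),
]

def forward_chain(facts, rules):
    """Repeatedly apply rules until no new conclusions can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain(facts, rules))
# Adds 'road_is_wet' and 'not_raining' to the known facts.
```

Everything such a system knows has to be encoded by hand, which is both the strength and the weakness of the approach.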

Inductive inference, on the other hand, generalizes from observations: an ML model trained on relevant examples will find patterns that map inputs to outputs.
Abductive inference is what many refer to as common sense: forming the most plausible explanation for what you observe, as in the wet-road example above.
The problem is that the AI community hasn't paid enough attention to abductive inference.
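To make the inductive, pattern-fitting style concrete before returning to abduction, here is a small hypothetical sketch; the data and the linear model are invented for illustration and are not from the book:

```python
# Inductive inference: learn a mapping from inputs to outputs from labeled examples.
# Toy data (invented): hours of rain -> how wet the road is, on a 0-10 scale.
examples = [(0.0, 0.0), (1.0, 3.0), (2.0, 6.1), (3.0, 8.9)]

# Least-squares fit of wetness = a * hours + b, done by hand to avoid dependencies.
n = len(examples)
sx = sum(x for x, _ in examples)
sy = sum(y for _, y in examples)
sxx = sum(x * x for x, _ in examples)
sxy = sum(x * y for x, y in examples)
a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = (sy - a * sx) / n

print(f"learned rule: wetness ~= {a:.2f} * hours_of_rain + {b:.2f}")
print(f"prediction for 1.5 hours of rain: {a * 1.5 + b:.2f}")
```

The model captures a statistical regularity in the examples it has seen, but it has no notion of why the road is wet, which is exactly the gap abduction is meant to fill.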

Early attempts at implementing abduction were reformulations of logic programming, which is a variant of deduction, Larson told TechTalks.
Abduction got another chance in the 2010s with Bayesian networks, inference engines that venture to compute causality.
In The Myth of Artificial Intelligence, he refers to them as "abduction in name only."
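To see what such causal-scoring systems compute, here is a toy, hypothetical sketch of abduction as picking the explanation with the highest posterior plausibility. All priors and likelihoods are invented numbers, and Larson's point is precisely that real common sense is far richer than this:

```python
# Abduction (toy version): pick the hypothesis that best explains the observations.
# Observation: the road is wet, the sky is sunny, a tanker is parked nearby.

# Prior plausibility of each candidate explanation (invented numbers).
prior = {"it_rained": 0.30, "tanker_washed_road": 0.05, "water_main_burst": 0.02}

# How well each explanation predicts what we actually see (invented numbers).
likelihood = {"it_rained": 0.05, "tanker_washed_road": 0.90, "water_main_burst": 0.30}

# Unnormalized posterior: prior * likelihood, then normalize over the candidates.
unnormalized = {h: prior[h] * likelihood[h] for h in prior}
total = sum(unnormalized.values())
posterior = {h: p / total for h, p in unnormalized.items()}

for hypothesis, p in sorted(posterior.items(), key=lambda kv: -kv[1]):
    print(f"{hypothesis}: {p:.2f}")
# 'tanker_washed_road' wins once the sunny sky and the parked tanker are factored in.
```

The calculation only works because someone already listed the candidate hypotheses; generating plausible hypotheses in the first place is the hard, unsolved part of abduction.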

For the most part, the history of AI has been dominated by deduction and induction.
But pure symbolic AI has failed for various reasons.
Symbolic systems can't acquire and add new knowledge on their own, which makes them rigid.

"It's curious here that no one really explicitly stopped and said, 'Wait. This is not going to work!'" Larson writes.
"That would have shifted research directly towards abduction or hypothesis generation or, say, context-sensitive inference."
Deep learning technology has unlocked many applications that were previously beyond the limits of computers.
And it has attracted interest and money from some of the wealthiest companies in the world.
Larson dismisses the scaling up of data-driven AI as fundamentally flawed as a model for intelligence.
"While both search and learning can provide useful applications, they are based on non-abductive inference," he reiterates.
One example is IBM Watson, which became famous when it beat world champions at Jeopardy!
More recent proof-of-concept hybrid models have shown promising results in applications where symbolic AI and deep learning alone perform poorly.
Larson believes that hybrid systems can fill in the gaps in machine learning-only or rules-based-only approaches.
In The Myth of Artificial Intelligence, Larson describes attempts to circumvent abduction as the "inference trap."
In open-ended scenarios requiring knowledge about the world, like language understanding, abduction is central and irreplaceable.
Because of this, attempts at combining deductive and inductive strategies are always doomed to fail…
The field needs a fundamental theory of abduction.
In the meantime, we are stuck in traps.
This shift toward data- and compute-hungry AI has made it very difficult for non-profit labs and small companies to become involved in AI research.
The monopolization of AI is in turn hampering scientific research.
"The illusion of progress on artificial general intelligence can lead to another AI winter," he writes.
Larson hopes scientists start looking beyond existing methods.