But designing systems that can solve single problems does not necessarily get us closer to solving more complicated problems.
Mitchell describes the first fallacy as "Narrow intelligence is on a continuum with general intelligence."
At the same time, we have deep learning systems that can convert voice data to text in real time.

It is the easy tasks, the things that we take for granted, that are hard to automate.
Mitchell describes the second fallacy as "Easy things are easy and hard things are hard."
Consider vision, for example.

Over billions of years, organisms have developed complex apparatuses for processing light signals.
We humans have inherited all those capabilities from our ancestors and use them without conscious thought.
Case in point: We still don't have computer vision systems that are nearly as versatile as human vision.

Think of how you handle objects, walk, run, and jump. These are tasks that you perform without conscious thought. But these kinds of skills remain a large and expensive challenge for current AI systems.
Mitchell describes the third fallacy as "The lure of wishful mnemonics": we use terms such as "learn," "understand," "read," and "think" to describe how AI algorithms work. The wishful mnemonics fallacy has also led the AI community to name algorithm-evaluation benchmarks in ways that are misleading.
AI without a body
Can intelligence exist in isolation from a rich physical experience of the world?
This is a question that scientists and philosophers have puzzled over for centuries.
Mitchell calls it the "Intelligence is all in the brain" fallacy.
Meanwhile, there's growing evidence that treating intelligence as a purely disembodied computation is doomed to fail.
Our intelligence is tightly linked to the limits and capabilities of our bodies.
Mitchell supports the idea that emotions, feelings, subconscious biases, and physical experience are inseparable from intelligence.
It's not at all clear that these attributes can be separated.
Common sense in AI
Developing general AI requires an adjustment to our understanding of intelligence itself.
We are still struggling to define what intelligence is and how to measure it in artificial and natural beings.
Common sense includes the knowledge that we acquire about the world and apply every day without much effort.
As children, we learn a lot without being explicitly instructed, simply by exploring the world. This knowledge includes concepts such as space, time, gravity, and the physical properties of objects.
This kind of knowledge is missing in today's AI systems, which makes them unpredictable and data-hungry.
This kind of knowledge is crucial to areas such as natural language processing.
No one yet knows how to capture such knowledge or abilities in machines.
You can read the original article here.