Consider the animal in the following image.
Your biological neural network is reprocessing your past experience to deal with a novel situation.
It should be placed in its own separate category (viverrids).

The brain, they believe, contains many wonders that go beyond the mere connection of biological neurons.

A paper recently published in the peer-reviewed journal Neuron challenges the conventional view of the functions of the human brain.
That's the kind of description usually given to deep neural networks.
And it certainly isn't a fish.

It's probably a mammal, given the furry coat.
(In my defense, it is a very distant relative of felines, if you insist.)
Artificial neural networks, however, are often dismissed as uninterpretable black boxes.

They do not provide rich explanations of their decision process.
During training, they tune millions of numerical parameters; these tuned parameters then allow them to determine which class a new image belongs to.
They do not learn explicit rules about the objects in those images, and only look for consistency between the pixels of an image.
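To make that concrete, here's a minimal, hypothetical sketch of the idea (not the authors' code): synthetic data and a single linear layer stand in for a deep network, and classifying a new image is nothing more than applying the tuned parameters to its pixels.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained image classifier: synthetic 8x8 "images"
# from two made-up classes, and a single linear layer instead of a deep net.
n_pixels, n_classes = 64, 2
X_train = rng.normal(size=(200, n_pixels))
y_train = rng.integers(0, n_classes, size=200)

# "Training" here is a least-squares fit of the weights to one-hot labels;
# a real deep network would tune millions of parameters by gradient descent.
Y_onehot = np.eye(n_classes)[y_train]
W, *_ = np.linalg.lstsq(X_train, Y_onehot, rcond=None)

# Classifying a new image is just applying the tuned parameters to its pixels.
new_image = rng.normal(size=n_pixels)
scores = new_image @ W               # one score per class
print("predicted class:", int(np.argmax(scores)))
```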

Another problem that the artificial intelligence community faces is the tradeoff between interpretability and generalization.
Scientists and researchers are constantly searching for new techniques and architectures that can generalize AI capabilities across broader domains.
And experience has shown that, when it comes to artificial neural networks, scale improves generalization.

And these networks have proven to be remarkably better at performing complex tasks such as computer vision and natural language processing.
But simpler models are also less capable of dealing with the complex and messy data found in nature.
This is one of the long-sought goals of artificial intelligence: creating models that can extrapolate well.

The popular belief is that artificial neural networks, on the other hand, do not have such capabilities.
This is the belief that the authors of Direct Fit to Nature challenge.
Extrapolation (left) tries to extract rules from big data and apply them to the entire problem space. Interpolation (right) relies on rich sampling of the problem space to fill in the gaps between samples.
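Here's a small sketch of the difference, using a toy problem of my own (a noisy sine function and a high-degree polynomial as the "direct-fit" model); it illustrates the concept, not an example from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    """The underlying 'nature' we only ever observe through samples."""
    return np.sin(x)

# Dense sampling of a limited region of the problem space.
x_train = np.linspace(0.0, 2 * np.pi, 50)
y_train = f(x_train) + rng.normal(scale=0.05, size=x_train.size)

# Direct fit: a flexible model (degree-9 polynomial) fit to the samples.
model = np.poly1d(np.polyfit(x_train, y_train, deg=9))

# Interpolation: predictions inside the sampled range stay close to f.
x_in = 1.7
print("error inside the sampled range: ", abs(model(x_in) - f(x_in)))

# Extrapolation: predictions outside the sampled range can diverge wildly.
x_out = 4 * np.pi
print("error outside the sampled range:", abs(model(x_out) - f(x_out)))
```

Inside the sampled interval the fit is accurate; a short distance outside it, the error explodes, which is exactly the interpolation/extrapolation gap described above.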
The internet is rich with all sorts of data from various domains.
Scientists create vast deep learning data sets from Wikipedia, social media networks, image repositories, and more.

One argument against this approach is the long-tail problem, often framed in terms of edge cases.
The authors counter that the long-tail phenomenon is, in part, a sampling deficiency.
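A quick simulation shows what "sampling deficiency" means in practice. The numbers here (1,000 categories, Zipf-like frequencies) are my own arbitrary choices, not figures from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# A hypothetical world of 1,000 categories with heavy-tailed (Zipf-like)
# frequencies: a handful of common classes and a long tail of rare ones.
n_categories = 1_000
probs = 1.0 / np.arange(1, n_categories + 1)
probs /= probs.sum()

# A modest "training set" misses much of the tail...
small = rng.choice(n_categories, size=10_000, p=probs)
print("categories seen with 10k samples:", np.unique(small).size, "/", n_categories)

# ...while a much richer sample covers nearly all of it.
large = rng.choice(n_categories, size=1_000_000, p=probs)
print("categories seen with 1M samples: ", np.unique(large).size, "/", n_categories)
```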
In fact, the need for sampling from the long tail also applies to the human brain.
We often overestimate the generalization capacity of biological neural networks, including humans.
The brain is a three-pound mass of matter that uses a little over 10 watts of power.
Deep neural networks, however, often require very large servers that can consume megawatts of power.
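A quick back-of-the-envelope calculation using those two figures (and treating "megawatts" as roughly one megawatt, which is my assumption) puts the gap at several orders of magnitude:

```python
# Rough energy comparison based on the figures above.
brain_power_watts = 10       # "a little over 10 watts"
server_power_watts = 1e6     # assume roughly one megawatt for the servers

print(f"The servers draw about {server_power_watts / brain_power_watts:,.0f}x "
      f"more power than the brain.")
```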
But hardware aside, comparing the components of the brain to artificial neural networks paints a different picture.
The largest deep neural networks are composed of a few billion parameters.
This makes it possible for human brains to learn new tasks without learning the underlying rules.
To be fair, calculating the amount of input entering the brain is complicated.
But we often underestimate the huge amount of data that we process.
Calculus is perhaps the best example of learning to apply rules across different contexts.
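As a toy illustration of what rule-based generalization buys you (my example, not the paper's): once the power rule of differentiation is learned as a rule, it applies to exponents you have never encountered, with no need for training samples anywhere near them.

```python
def d_power(coefficient, exponent):
    """Apply the power rule d/dx (c * x^n) = (c * n) * x^(n - 1) symbolically."""
    return coefficient * exponent, exponent - 1

# The rule transfers to any context, including exponents never "sampled" before.
for c, n in [(3, 2), (1, 100), (2.5, -7), (1, 0.5)]:
    new_c, new_n = d_power(c, n)
    print(f"d/dx ({c}*x^{n}) = {new_c}*x^{new_n}")
```

A direct-fit model, by contrast, is only reliable near the examples it was trained on.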
This is one area where direct-fit models fall short, Hasson and Nastase acknowledge.
In cognitive science, this distinction is known as System 1 and System 2 thinking.
So what do we need to develop AI algorithms that have System 2 capabilities?
This is one area where there's much debate in the research community.
In Direct Fit to Nature, the authors support the pure neural network-based approach.
There is, they argue, no other substrate from which System 2 could arise.
An alternative view is the creation of hybrid systems that combine classic symbolic AI with neural networks.
Counterintuitively, imposing these sorts of limitations (e.g., a body) on a neural network can force the network to learn more useful representations.
You can read the original article here.