What is the master algorithm that allows humans to be so efficient at learning things?


Artificial neural networks are inspired by their biological counterparts and venture to emulate the learning behavior of organic brains.

What is the difference between artificial neural networks and biological brains?

But as Zador explains, learning in ANNs is very different from what happens in the brain.

Each layer of the neural network extracts specific features from the input image.
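As a rough illustration (not code from Zador's paper), the sketch below stacks a few convolutional layers in PyTorch and passes an image-sized tensor through them; each layer's output is the set of feature maps it extracts. The layer sizes and the comments about what each layer captures are purely illustrative.

```python
# Illustrative sketch: successive layers of a small convolutional network
# extract increasingly abstract features from an input image.
import torch
import torch.nn as nn

layers = nn.ModuleList([
    nn.Sequential(nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU()),   # e.g., edges and colors
    nn.Sequential(nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU()),  # e.g., textures and corners
    nn.Sequential(nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU()),  # e.g., object parts
])

image = torch.randn(1, 3, 64, 64)  # a stand-in for an input image
features = image
for i, layer in enumerate(layers):
    features = layer(features)
    print(f"layer {i}: feature maps of shape {tuple(features.shape)}")
```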

But when it comes to humans and animals, learning finds a different meaning.

Visualization of a neural network’s features

The differences between artificial and natural learning are not just a matter of definition.

Clearly, children do not rely mainly on supervised algorithms to learn to categorize objects, Zador writes.

There's ongoing research on unsupervised or self-supervised AI algorithms that can learn representations with little or no guidance from humans.

But the results are still rudimentary and fall short of what supervised learning has achieved.
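To make the idea concrete, here is a minimal sketch of one self-supervised setup: a simple autoencoder that learns a representation by reconstructing its own input, with no human-provided labels. The architecture and training loop are illustrative assumptions, not a description of any specific system from the research mentioned above.

```python
# Illustrative self-supervised learning: the "label" is the input itself.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(784, 64), nn.ReLU())  # compress the input into a representation
decoder = nn.Sequential(nn.Linear(64, 784))              # try to reconstruct the input from it
optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3
)

x = torch.rand(32, 784)  # a batch of unlabeled inputs (e.g., flattened images)
for step in range(100):
    code = encoder(x)                                   # the learned representation
    reconstruction = decoder(code)
    loss = nn.functional.mse_loss(reconstruction, x)    # no human labels anywhere
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```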

But even such a hypothetical unsupervised learning algorithm is unlikely to be the whole story, Zador writes.

At the same time, innate abilities alone do not enable animals to adapt to their ever-changing environments.

That's why they all have the capacity to learn and adapt to their environments.

And there's a tradeoff between the two.

Innate and learning abilities complement each other.

For instance, the brains of human children come wired to distinguish faces from other objects.

Then, throughout their lives, they learn to associate specific faces with the people they know.

Specifically, the genome encodes blueprints for wiring up their nervous system, Zador writes in his paper.

So, what exactly does the genome contain?

The answer varies across different organisms.

But for a complex system such as the human brain, which has approximately 100 billion neurons and on the order of 100 trillion synaptic connections, the genome is far too small to spell out every connection; at best it can encode general rules for wiring the brain.
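A rough back-of-envelope comparison, using commonly cited ballpark figures rather than exact numbers, shows the scale of the mismatch:

```python
# Back-of-envelope comparison with rough, commonly cited figures (not exact values).
import math

neurons = 1e11   # ~100 billion neurons
synapses = 1e14  # ~100 trillion synapses

# Naively listing the target of every synapse needs ~log2(neurons) bits per connection.
bits_to_list_wiring = synapses * math.log2(neurons)

# The human genome has roughly 3 billion base pairs, about 2 bits each.
genome_bits = 3e9 * 2

print(f"explicit wiring diagram: ~{bits_to_list_wiring:.1e} bits")
print(f"genome capacity:         ~{genome_bits:.1e} bits")
print(f"shortfall factor:        ~{bits_to_list_wiring / genome_bits:.0e}x")
```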

Evolution vs learning

So, biological brains have two mechanisms for optimizing behavior: evolution, which shapes the genome across generations, and learning, which adapts the individual brain within its lifetime.

The genome doesn't encode representations or behaviors directly; it encodes wiring rules and connection motifs, Zador writes.
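As a toy illustration of that idea (not Zador's actual model), the snippet below generates tens of thousands of connections from just a few "wiring rule" parameters, instead of listing each connection explicitly; the cell types and probability are invented for the example.

```python
# Toy illustration: a handful of wiring-rule parameters generate a large
# connectivity pattern, rather than an explicit list of every connection.
import numpy as np

rng = np.random.default_rng(seed=0)

# The "genome" here is just three numbers: two cell-type sizes and a connection probability.
wiring_rules = {"type_A": 600, "type_B": 400, "p_A_to_B": 0.05}

# Developmental "program": grow connections according to the rules.
n_a, n_b = wiring_rules["type_A"], wiring_rules["type_B"]
mask = rng.random((n_a, n_b)) < wiring_rules["p_A_to_B"]
connections = np.argwhere(mask)  # thousands of synapses from three parameters

print(f"rule parameters:        {len(wiring_rules)}")
print(f"connections generated:  {len(connections)}")
```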

The genome itself is not constant either.

It undergoes small transformations and mutations as it is passed from one generation to the next.

Artificial neural networks, on the other hand, have a single optimization mechanism.

They start with a blank slate and must learn everything from scratch.

And this is why it takes them huge amounts of training time and examples to learn even the simplest things.
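Here is a minimal sketch of what that blank slate looks like in practice, with illustrative sizes and synthetic data: the weights start as random noise and are shaped only by gradient descent over many labeled examples.

```python
# Illustrative "blank slate": random initial weights, tuned purely by
# gradient descent on thousands of labeled examples.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))  # random initial weights
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

# Thousands of (input, label) pairs stand in for the large datasets ANNs typically need.
inputs = torch.randn(5000, 20)
labels = (inputs.sum(dim=1) > 0).long()  # a simple synthetic labeling rule

for epoch in range(10):
    logits = model(inputs)
    loss = loss_fn(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```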

In this view, supervised learning in ANNs should not be viewed as the analog of learning in animals.

Another thing that is currently missing from artificial neural networks is architectural optimization.

ANNs, on the other hand, are limited to optimizing their parameters.

They have no recursive self-improvement mechanism that can enable them to create better algorithms.

Specialized architectures designed by human researchers have helped create networks that efficiently solve different problems.

But they're not exactly equivalent to what the genome does.

Other scientists have suggested combining neural networks with other AI techniques, such as symbolic reasoning systems.

You can read the original article here.
