This is one thing that both the pioneers and critics of deep learning agree on.
Reducing the data-dependency of deep learning is currently among the top priorities of AI researchers.
Self-supervised learning is one of several plans to create data-efficient artificial intelligence systems.

But here's what we know about LeCun's master plan.
A clarification on the limits of deep learning
Supervised learning is the category of machine learning algorithms that require annotated training data.
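To make that concrete, here is a minimal sketch with toy data and a toy model (purely illustrative, not any particular production system): the labels tensor is the human annotation that supervised learning cannot do without.

```python
# A minimal sketch of supervised learning: every training example
# comes paired with a human-provided label. Data here is synthetic.
import torch
import torch.nn as nn

inputs = torch.randn(100, 4)             # 100 examples, 4 features each
labels = torch.randint(0, 3, (100,))     # the annotations: one class per example

model = nn.Linear(4, 3)                  # tiny 3-class classifier
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

for _ in range(10):                      # a few gradient steps
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), labels)
    loss.backward()                      # the error signal comes from the labels
    optimizer.step()
```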

[Deep learning] is not supervised learning.
It's not just neural networks.
You don't directly program the system.

You define the architecture and you adjust those parameters.
There can be billions.
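A toy illustration of that division of labor (my example, not LeCun's): the code below fixes the architecture, and training is what adjusts the parameter values.

```python
# Sketch: a human specifies the architecture; gradient descent,
# not hand-written rules, sets the parameter values.
import torch.nn as nn

model = nn.Sequential(       # the part a human designs
    nn.Linear(784, 512),
    nn.ReLU(),
    nn.Linear(512, 10),
)

# The parameters are what training adjusts. Modern language models
# scale this count into the billions.
n_params = sum(p.numel() for p in model.parameters())
print(n_params)  # 407,050 for this toy network
```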
But the confusion surrounding deep learning and supervised learning is not without reason.

Where does deep learning stand today?
Deep learning is playing an increasingly important role in sensitive applications, such as cancer detection.
These applications are completely built around it.

But the way these AI programs learn to solve problems is drastically different from that of humans.
In most cases, reinforcement learning agents take an insane number of sessions to master games.
The huge costs have limited reinforcement learning research to research labs owned or funded by wealthy tech companies.

Reinforcement learning systems are very bad at transfer learning.
In fact, even small changes to the StarCraft game environment can immensely degrade the performance of the AI.
What if you want to train a car to drive itself?
The three challenges of deep learning
LeCun breaks down the challenges of deep learning into three areas.
First, we need to develop AI systems that learn with fewer samples or fewer trials.
Basically, it's the idea of learning to represent the world before learning a task.
This is what babies and animals do.
We run about the world, we learn how it works before we learn any task.
Once we have good representations of the world, learning a task requires few trials and few samples.
Babies develop concepts of gravity, dimensions, and object permanence in the first few months after their birth.
The second challenge is creating deep learning systems that can reason.
The question is, how do we go beyond feed-forward computation and System 1?
How do we make reasoning compatible with gradient-based learning?
How do we make reasoning differentiable?
That's the bottom line, LeCun said.
But LeCun doesn't suggest returning to symbolic AI or to hybrid artificial intelligence systems, as other scientists have proposed.
There have been advances in creating learning-based AI systems that can decompose images, speech, and text.
Capsule networks, invented by Geoffrey Hinton, address some of these challenges.
But learning to reason about complex tasks is beyond today's AI.
We have no idea how to do this, LeCun admits.
LeCun's proposed remedy is self-supervised learning, in which the system learns to predict the hidden parts of its input from the parts it can observe. It could be the future of a video or the words missing in a text, LeCun says.
Transformers don't require labeled data.
They are trained on large corpora of unstructured text such as Wikipedia articles.
(But they are still very far from really understanding human language.)
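As a rough illustration of that training signal, assuming the Hugging Face transformers package is available (the model weights download on first use): the model fills in a masked word, and the "label" is simply the word that was hidden, with no human annotation involved.

```python
# Sketch of the self-supervised objective behind masked language models:
# predict a hidden word from its context. Requires the Hugging Face
# `transformers` package.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")
for guess in fill_mask("Paris is the [MASK] of France."):
    print(guess["token_str"], round(guess["score"], 3))
# The training target is just the word that was masked out of the text.
```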
So far, Transformers have proven their worth in dealing with discrete data such as words and mathematical symbols.
But the success of Transformers has not transferred to the domain of visual data.
We can produce distributions over all the words in the dictionary.
We don't know how to represent distributions over all possible video frames, LeCun says.
For each video segment, there are countless possible futures.
The neural network ends up calculating the average of all possible outcomes, which results in blurry output.
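A toy numerical illustration of that averaging effect, using synthetic one-dimensional "frames" rather than LeCun's formulation: when two sharp futures are equally likely, the squared-error-optimal prediction is their smeared-out mean.

```python
# Why regressing onto frames blurs: with several plausible futures,
# the prediction that minimizes mean squared error is their average.
import numpy as np

future_a = np.array([0.0, 0.0, 1.0, 0.0, 0.0])  # sharp edge in one place
future_b = np.array([0.0, 0.0, 0.0, 0.0, 1.0])  # sharp edge in another

# MSE is minimized by the mean of the possible outcomes:
prediction = (future_a + future_b) / 2
print(prediction)  # [0. 0. 0.5 0. 0.5] -- matches neither future, just a blur
```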
LeCun's favored method for approaching self-supervised learning is what he calls latent variable energy-based models.
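Very loosely, and only as an assumption-laden sketch of the idea rather than LeCun's actual models: a latent variable selects among plausible outcomes, and inference minimizes an energy over that variable instead of averaging the outcomes together.

```python
# Toy sketch of the latent-variable energy-based idea (illustrative only):
# a latent variable z picks one of several hypothetical futures, and
# inference minimizes the energy over z rather than blending the futures.
import numpy as np

futures = {0: np.array([0.0, 0.0, 1.0, 0.0, 0.0]),   # hypothetical mode A
           1: np.array([0.0, 0.0, 0.0, 0.0, 1.0])}   # hypothetical mode B

def energy(y, z):
    # Low energy when y matches the future selected by latent z.
    return np.sum((y - futures[z]) ** 2)

observed = np.array([0.0, 0.0, 0.9, 0.0, 0.1])
best_z = min(futures, key=lambda z: energy(observed, z))
print(best_z, energy(observed, best_z))  # picks mode A: one sharp outcome, no blur
```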
In his speech, LeCun further elaborates on energy-based models and other approaches to self-supervised learning.
The future of deep learning is not supervised
I think self-supervised learning is the future.
In supervised learning, the AI system predicts a category or a numerical value for each input.
In self-supervised learning, the output is a whole image or a set of images.
It's a lot more information.
To learn the same amount of knowledge about the world, you will require fewer samples, LeCun says.
If artificial intelligence is a cake, self-supervised learning is the bulk of the cake, LeCun says.
The next revolution in AI will not be supervised, nor purely reinforced.