This is a luxury that many can't afford.
The concepts and science behind deep learning and neural networks are as old as the term artificial intelligence itself.
But until recent years, they were largely dismissed by the AI community for being inefficient.

To train a deep learning model, you basically must feed a neural network with lots of annotated examples.
Public datasets such as ImageNet provide such labeled examples, and AI engineers can use these sources to train their deep learning models.
However, training deep learning models also requires access to powerful computing resources.

The costs of purchasing or renting such resources can be beyond the budget of individual developers or small organizations.
Also, for many problems, there aren't enough examples to train robust AI models.
This is where transfer learning comes into play.

Transfer learning is the process of creating new AI models by fine-tuning previously trained neural networks.
There are many pretrained base models to choose from.
Popular examples include AlexNet, Google's Inception-v3, and Microsoft's ResNet-50.
These neural networks have already been trained on the ImageNet dataset.
AI engineers only need to enhance them by further training them with their own domain-specific examples.
Because most of the network's training has already been done, transfer learning doesn't require huge compute resources.
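As a concrete illustration, here is a minimal sketch of that workflow in PyTorch (not from the article): it loads ResNet-50 pretrained on ImageNet through torchvision, swaps in a new classification layer, and fine-tunes it on domain-specific examples. The ./my_dataset/train folder and the five target classes are hypothetical placeholders, and a recent torchvision version is assumed.

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Load ResNet-50 with weights pretrained on ImageNet.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)

# Replace the 1000-class ImageNet head with a new classifier for the
# domain-specific problem (5 classes assumed here for illustration).
model.fc = nn.Linear(model.fc.in_features, 5)

# Standard ImageNet-style preprocessing.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical folder of domain-specific examples, one subfolder per class.
train_data = datasets.ImageFolder("./my_dataset/train", transform=preprocess)
loader = torch.utils.data.DataLoader(train_data, batch_size=32, shuffle=True)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Fine-tune on the domain-specific examples for a few epochs.
model.train()
for epoch in range(3):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```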
How does transfer learning work?
Interestingly, neural networks develop their behavior in a hierarchical way.
Every neural network is composed of multiple layers.
After training, each layer becomes tuned to detect specific features in the input data.
The first layers detect general features, such as edges and corners, that are common across all domains, while the deeper layers detect features that are specific to the problem the network was trained on.
The pretrained and fine-tuned AI models are also called the teacher and student models, respectively.
The number of frozen and fine-tuned layers depends on how similar the source and destination AI models are.
When the source and destination domains are closely related, engineers can freeze all the pretrained layers and train only a new classification layer on top. This is called deep-layer feature extraction.
Deep-layer feature extraction is also preferable when there's very little training data for the destination domain.
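A minimal sketch of what that could look like, reusing the hypothetical five-class ResNet-50 setup from the earlier example: every pretrained layer is frozen so the network acts as a fixed feature extractor, and only the new classification layer is trained.

```python
import torch
import torch.nn as nn
from torchvision import models

# Deep-layer feature extraction: use the pretrained network as a fixed
# feature extractor and train only a new classification layer.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
for param in model.parameters():
    param.requires_grad = False                  # freeze every pretrained layer

model.fc = nn.Linear(model.fc.in_features, 5)    # new head, trainable by default

# Only the new head's parameters are handed to the optimizer.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```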
When the domains are less similar, engineers freeze only the first layers of the pretrained network. Then they add the new classification layer and fine-tune the unfrozen layers with the new examples.
This is called mid-layer feature extraction.
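A corresponding sketch follows; which layers to freeze is an illustrative assumption, since the article doesn't prescribe a specific split.

```python
import torch
import torch.nn as nn
from torchvision import models

# Mid-layer feature extraction: freeze only the early, general-purpose
# layers and fine-tune the deeper layers together with the new head.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
for name, param in model.named_parameters():
    # Freezing the stem and the first two residual stages is an arbitrary
    # choice for illustration; the right split depends on the problem.
    if name.startswith(("conv1", "bn1", "layer1", "layer2")):
        param.requires_grad = False

model.fc = nn.Linear(model.fc.in_features, 5)    # new classification layer

# Optimize only the parameters that remain trainable.
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.Adam(trainable, lr=1e-4)
```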
In a third approach, engineers retrain all the layers of the pretrained network on the new examples. Called full model fine-tuning, this kind of transfer learning requires a lot of training examples.
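In code, the only difference from the sketches above is that nothing is frozen: every layer starts from the pretrained weights and remains trainable, typically with a small learning rate (again using the hypothetical five-class setup).

```python
import torch
import torch.nn as nn
from torchvision import models

# Full model fine-tuning: start from the pretrained weights but leave
# every layer trainable.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 5)    # new classification layer

# All parameters are updated during training (small learning rate assumed).
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)
```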
But in practice, even full fine-tuning still saves time and compute resources compared to training a network from scratch.
However, transfer learning also has its tradeoffs.
In reinforcement learning, for instance, most new problems are unique and require their own AI model and training process.