One of the key challenges of machine learning is the need for large amounts of training data, which often must be collected from users and raises privacy concerns. Federated learning is one technique that can help address these challenges. For a long time, sending user data to a central server seemed inevitable.

Fortunately, advances in edge AI have made it possible to avoid sending sensitive user data to cloud servers. These models make it possible to perform inference directly on the device.

Large tech companies are trying to bring some of their machine learning applications to users' devices to improve privacy.
On-device machine learning has several added benefits.
These applications can continue to work even when the device is not connected to the internet.

They also save bandwidth when users are on metered connections.
And in many applications, on-device inference is more energy-efficient than sending data to the cloud.
Training on-device machine learning models
On-device inference is an important privacy upgrade for machine learning applications.

But one challenge remains: developers still need data to train the models they will push to users' devices.
This is the problem federated learning addresses.
Federated learning starts with a base machine learning model in the cloud server.
In the next stage, several user devices volunteer to train the model.
These devices hold user data that is relevant to the model's application, such as chat logs and keystrokes.
They then train the model on their local data.
After training, they return the trained model to the server.
What makes this possible is that popular machine learning algorithms such as deep neural networks and support vector machines are parametric: what each device sends back is a set of trained parameter values, not the user's raw data. The server aggregates the parameters it receives from participating devices, for example by averaging them, to update the base model.
Once the final model is ready, it can be distributed to all users for on-device inference.
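To make the loop concrete, here is a minimal sketch of one federated training round in Python. The weighted-average aggregation (often called federated averaging) is one common choice; the function names and the use of plain NumPy arrays as model parameters are illustrative assumptions, not any specific framework's API:

```python
import numpy as np

def federated_round(base_params, clients, local_train):
    """One hypothetical round of federated averaging.

    base_params: list of NumPy arrays (the server's current model weights).
    clients: iterable of local datasets held on user devices.
    local_train: function that trains a copy of the model on one device's
        data and returns (updated_params, num_examples).
    """
    updates, weights = [], []
    for local_data in clients:
        # Each device trains on its own data; raw data never leaves it.
        params, n = local_train([p.copy() for p in base_params], local_data)
        updates.append(params)
        weights.append(n)

    # The server aggregates only the trained parameters, weighting each
    # device's contribution by how many examples it trained on.
    total = sum(weights)
    return [
        sum(w * u[i] for w, u in zip(weights, updates)) / total
        for i in range(len(base_params))
    ]

# Toy usage: the "model" is a single weight vector, and the fake local
# training step nudges it toward the mean of the device's data.
def toy_local_train(params, local_data):
    params[0] += 0.1 * (local_data.mean() - params[0])
    return params, len(local_data)

base = [np.zeros(3)]
device_data = [np.random.rand(20, 3), np.random.rand(50, 3)]
base = federated_round(base, device_data, toy_local_train)
```

Weighting each update by the device's example count keeps devices with more data from being drowned out, though other aggregation schemes exist.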
The limits of federated learning
Federated learning does not apply to all machine learning applications.
Training machine learning models on irrelevant or noisy data can do more harm than good, and data that sits on user devices can't be cleaned and curated the way a centralized dataset can.
For this reason, federated learning must be limited to applications where the user data does not need preprocessing.
Another limit of federated machine learning is data labeling.
Most machine learning models are supervised, which means they require training examples that are manually labeled by human annotators.
For example, the ImageNet dataset is a crowdsourced repository that contains millions of images and their corresponding classes.
Federated learning is better suited for unsupervised or self-supervised applications such as language modeling, where the training labels can be derived from the data itself.
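To see why labeling is less of an obstacle there: in language modeling, every training example can be generated mechanically from raw text, with each word acting as the label for the words that precede it. A rough sketch (the function and sentence are made up for illustration):

```python
def next_word_pairs(text):
    """Derive (context, target) training pairs from raw text.

    No human annotator is needed: each word serves as the label
    for the words that came before it.
    """
    words = text.split()
    return [(words[:i], words[i]) for i in range(1, len(words))]

# A user's typed sentence yields labeled examples automatically.
print(next_word_pairs("federated learning protects user data"))
# [(['federated'], 'learning'),
#  (['federated', 'learning'], 'protects'), ...]
```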
Federated learning raises privacy questions of its own, since the trained parameters that devices send back can still carry traces of the underlying data. One mitigating factor is that the cloud server doesn't need to store individual models once it has updated its base model.
Another measure that can help is to increase the pool of model trainers.
This way, the system doesn't collect trained parameters from any single user constantly.
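One way to picture that measure: instead of polling the same devices every round, the server can sample a small random subset from a large pool, so no single user's parameters are collected continuously. A hypothetical sketch, with made-up pool size and sampling rate:

```python
import random

def sample_clients(pool, fraction=0.01, seed=None):
    """Pick a small random subset of the device pool for one round."""
    rng = random.Random(seed)
    k = max(1, int(len(pool) * fraction))
    return rng.sample(pool, k)

# With a pool of 100,000 devices and a 1% sampling rate, each round
# touches 1,000 devices, so any given device participates rarely.
pool = list(range(100_000))
round_participants = sample_clients(pool, fraction=0.01, seed=42)
print(len(round_participants))  # 1000
```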
Federated learning is gaining popularity as it addresses some of the fundamental problems of modern artificial intelligence.
Researchers are constantly looking for new ways to apply federated learning to new AI applications and overcome its limits.
It will be interesting to see how the field evolves in the future.
You can read the original article here.