The technology he used was deepfake, a type of application that uses artificial intelligence algorithms to manipulate videos.

Deepfakes are mostly known for their capability to swap the faces of actors from one video to another.
Deep learning algorithms roughly mimic the experience-based learning capabilities of humans and animals.

Autoencoders
Deep learning algorithms come in different varieties.
Deepfakes are often associated with generative adversarial networks (GANs), and it is true that some variations of GANs can create deepfakes.
But the main type of neural network used in deepfakes is the autoencoder.

An autoencoder is a special type of deep learning algorithm that performs two tasks.
First, it encodes an input image into a small set of numerical values.
These values are stored in the network's "bottleneck" layer, which contains the target number of variables.

Next, the neural network decodes the data in the bottleneck layer and recreates the original image.
Autoencoder neural network architecture
During training, the autoencoder is provided with a series of images and adjusts its parameters until the recreated images closely match the originals.
The narrower the problem domain, the more accurate the results of the autoencoder become.
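To make this concrete, here is a minimal sketch of an autoencoder in PyTorch; the fully connected layers, the 64x64 image size, and the mean-squared-error loss are illustrative assumptions, not the design of any particular deepfake tool.

```python
import torch
from torch import nn

class Autoencoder(nn.Module):
    def __init__(self, bottleneck_size=128):
        super().__init__()
        # Encoder: compresses a 64x64 RGB image into `bottleneck_size` values.
        self.encoder = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 64 * 3, 1024),
            nn.ReLU(),
            nn.Linear(1024, bottleneck_size),
        )
        # Decoder: rebuilds the image from the bottleneck values.
        self.decoder = nn.Sequential(
            nn.Linear(bottleneck_size, 1024),
            nn.ReLU(),
            nn.Linear(1024, 64 * 64 * 3),
            nn.Sigmoid(),
            nn.Unflatten(1, (3, 64, 64)),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Training pushes the reconstruction to match the input image.
model = Autoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train_step(batch):
    """batch: tensor of shape (N, 3, 64, 64) with values in [0, 1]."""
    optimizer.zero_grad()
    loss = loss_fn(model(batch), batch)
    loss.backward()
    optimizer.step()
    return loss.item()
```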

For example, an autoencoder trained on pictures of your face would learn to capture your key facial features in a small set of numerical values and use them to recreate your image with high accuracy.
In a way, you can think of an autoencoder as a super-smart compression-decompression algorithm.
But there are other things that the autoencoder can do.

For instance, you can use it for noise reduction or for generating new images.
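As a small illustration of the noise-reduction idea, the sketch below trains an autoencoder on artificially corrupted images while asking it to reproduce the clean originals; the model, optimizer, and loss function are passed in (for example, the ones from the sketch above), and the noise level is an arbitrary assumption.

```python
import torch

def denoising_train_step(clean_batch, model, optimizer, loss_fn, noise_level=0.1):
    # Corrupt the input with Gaussian noise, but ask the network to
    # reproduce the clean image. Over time, it learns to remove the noise.
    noisy = (clean_batch + noise_level * torch.randn_like(clean_batch)).clamp(0.0, 1.0)
    optimizer.zero_grad()
    loss = loss_fn(model(noisy), clean_batch)
    loss.backward()
    optimizer.step()
    return loss.item()
```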
Deepfake autoencoders
Deepfake applications use a special configuration of autoencoders: two networks that share the same encoder but have separate decoders, one trained on the face of the actor and the other on the face of the target.
After the autoencoders are trained, you switch their outputs, and something interesting happens.
The face of the target is run through the encoder, which compresses it into the bottleneck values.
Then, those values are fed to the decoder layers of the actor autoencoder.
What comes out is the face of the actor with the facial expression of the target.
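Here is a rough sketch of that configuration, assuming a simple fully connected design like the one above; the shared encoder, the two decoders, and the layer sizes are illustrative simplifications rather than the architecture of any particular deepfake tool.

```python
import torch
from torch import nn

def make_encoder(bottleneck=128):
    # Shared encoder: compresses a 64x64 face into `bottleneck` values.
    return nn.Sequential(
        nn.Flatten(),
        nn.Linear(64 * 64 * 3, 1024), nn.ReLU(),
        nn.Linear(1024, bottleneck),
    )

def make_decoder(bottleneck=128):
    # Per-person decoder: rebuilds a face from the bottleneck values.
    return nn.Sequential(
        nn.Linear(bottleneck, 1024), nn.ReLU(),
        nn.Linear(1024, 64 * 64 * 3), nn.Sigmoid(),
        nn.Unflatten(1, (3, 64, 64)),
    )

encoder = make_encoder()         # shared between both autoencoders
decoder_actor = make_decoder()   # trained only on the actor's face
decoder_target = make_decoder()  # trained only on the target's face

# Generation: the outputs are swapped. A frame of the target's face is
# encoded, then decoded with the actor's decoder, producing the actor's
# face with the target's expression.
def swap(target_frame):
    with torch.no_grad():
        return decoder_actor(encoder(target_frame))
```

In practice, the swapped face still has to be blended back into the original video frame, which deepfake tools handle with additional post-processing.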
Training the deepfake autoencoder
The concept of deepfake is very simple.
But training it requires considerable effort.
Say you want to create a deepfake version of Forrest Gump that stars John Travolta instead of Tom Hanks.
This means gathering thousands of video frames of each person and cropping them to only show the face.
The neural networks need to see each face from different angles, under different lighting conditions, and with different expressions.
So, you can't just take one video of each person and crop the video frames.
You'll have to use multiple videos.
There are tools that automate the cropping process, but they're not perfect and still require manual effort.
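As an illustration of what such a tool might do, the sketch below uses OpenCV's bundled Haar cascade face detector to crop faces out of a video; the file names and the 256x256 crop size are hypothetical choices, and real tools typically use more robust detectors and alignment steps.

```python
import cv2

# OpenCV ships a pretrained Haar cascade face detector.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def extract_faces(video_path, out_dir, size=256):
    """Read a video, detect faces in every frame, save cropped face images."""
    cap = cv2.VideoCapture(video_path)
    count = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in detector.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5):
            face = cv2.resize(frame[y:y + h, x:x + w], (size, size))
            cv2.imwrite(f"{out_dir}/face_{count:06d}.jpg", face)
            count += 1
    cap.release()
    return count

# Hypothetical usage: extract_faces("travolta_interview.mp4", "dataset/actor")
```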
The need for large datasets is why most deepfake videos you see target celebrities.
You can't create a deepfake of your neighbor unless you have hours of video of them in different settings.
After gathering the datasets, you'll have to train the neural networks.
Once the process is over, you'll have your deepfake video.
Sometimes the result will not be optimal, and even extending the training process won't improve the quality.
This can be due to bad training data or choosing the wrong configuration of your deep learning models.
In this case, you'll need to readjust the settings and restart the training from scratch.
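For illustration, here is a rough sketch of such a training loop, reusing the shared-encoder and per-person-decoder layout sketched earlier; the learning rate, the mean-squared-error loss, and the epoch count are arbitrary assumptions.

```python
import itertools
import torch
from torch import nn

def train_deepfake(encoder, decoder_actor, decoder_target,
                   actor_faces_loader, target_faces_loader, epochs=100):
    # Optimize the shared encoder and both decoders together.
    params = itertools.chain(encoder.parameters(),
                             decoder_actor.parameters(),
                             decoder_target.parameters())
    optimizer = torch.optim.Adam(params, lr=5e-5)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for actor_faces, target_faces in zip(actor_faces_loader, target_faces_loader):
            optimizer.zero_grad()
            # Each person's faces go through the shared encoder and
            # that person's own decoder.
            loss = (loss_fn(decoder_actor(encoder(actor_faces)), actor_faces) +
                    loss_fn(decoder_target(encoder(target_faces)), target_faces))
            loss.backward()
            optimizer.step()
```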
In any case, at their current stage, deepfakes are not a one-click process.
They've become a lot better, but they still require a good deal of manual effort.
Detecting deepfakes
Manipulated videos are nothing new.
Movie studios have been using them in the cinema for decades.
But previously, they required tremendous effort from experts and access to expensive studio gear.
Although not yet trivial, deepfakes put video manipulation at everyone's disposal.
Naturally, deepfakes have become a source of worry and are perceived as a threat to public trust.
Facebook is looking into deepfake detection to prevent the spread of fake news on its social network.
And Microsoft has recently launched a deepfake detection tool ahead of the U.S. presidential elections.
AI researchers have already developed various tools to detect deepfakes.
For instance, earlier deepfakes contained visual artifacts such as unblinking eyes and unnatural skin color variations.
One tool flagged videos in which people didn't blink or blinked at abnormal intervals.
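As a simplified illustration of that idea (not the actual tool's code), the sketch below computes an "eye aspect ratio" from six landmark points around each eye and flags clips whose blink rate is abnormally low; the thresholds and the assumption of precomputed landmarks are illustrative.

```python
import numpy as np

def eye_aspect_ratio(eye):
    """eye: array of six (x, y) landmark points around one eye."""
    vertical = (np.linalg.norm(eye[1] - eye[5]) +
                np.linalg.norm(eye[2] - eye[4]))
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return vertical / (2.0 * horizontal)

def blink_rate_looks_abnormal(ear_per_frame, fps, closed_threshold=0.2,
                              min_blinks_per_minute=5):
    # A blink is a transition from open eyes to closed eyes.
    closed = np.asarray(ear_per_frame) < closed_threshold
    blinks = np.count_nonzero(np.diff(closed.astype(int)) == 1)
    minutes = len(ear_per_frame) / fps / 60.0
    # Humans typically blink around 15-20 times per minute; far fewer is suspicious.
    return (blinks / minutes) < min_blinks_per_minute
```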
But the fight against deepfakes has effectively turned into a cat-and-mouse chase.
As deepfakes constantly get better, many of these tools lose their efficiency.
As one computer vision professor told me last year: "I think deepfakes are almost like an arms race. Because people are producing increasingly convincing deepfakes, and someday it might become impossible to detect them."
You can read the original article here.