Deepfake technology uses deep learning and high-powered computing to manipulate video, producing very realistic-looking footage of an event that never happened. Here’s a well-known example:
In this clip, you can see former US President Barack Obama speaking. Except it’s not Obama speaking; it’s comedian Jordan Peele, with old video footage of the former president doctored so that Obama’s mouth matches what Peele is saying.
How Did Deepfakes Start, and Who Created Them?
People started becoming aware of deepfake technology when a Reddit user named “Deepfakes” posted that they had developed a machine learning algorithm that could transpose celebrity faces seamlessly onto porn videos. They supplied samples, and the thread soon became so popular that it spawned its own subreddit. Reddit’s administrators eventually shut it down, but by then the technique was well known and widely available. Soon people were using it to create fake videos, mostly starring politicians and actors.
However, the idea of manipulating videos is not new. Back in the 1990s, several universities were already conducting significant academic research in computer vision. Much of that effort centered on using AI and machine learning to modify existing footage of a person speaking so that it matched a different audio track. The 1997 Video Rewrite program showcased this technique.
So How Do Deepfakes Work?
A deepfake video is produced by pitting two machine learning (ML) models against each other. One model generates the forgeries from a data set of sample videos, while the other tries to detect whether a given video is a fraud. When the second model can no longer tell the forgeries apart from genuine footage, the deepfake is probably convincing enough to fool a human viewer as well. This technique is called a generative adversarial network (GAN).
A GAN performs better when it has a large data set to work with. That’s why much of the early deepfake footage tends to feature politicians and showbiz celebrities: there are many videos of them available, which the GAN can use to create very realistic deepfakes.
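The adversarial loop described above can be sketched in miniature. This is a toy, illustrative example only, not how real deepfake systems are built: a one-parameter-pair “generator” learns to mimic samples from a target distribution, while a logistic “discriminator” is trained to tell real samples from fakes. All names and hyperparameters here are assumptions chosen for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data: samples from a normal distribution centered at 4.
def real_samples(n):
    return rng.normal(4.0, 1.0, n)

gw, gb = 1.0, 0.0   # generator parameters: fake = gw * z + gb
dw, db = 0.0, 0.0   # discriminator parameters: D(x) = sigmoid(dw * x + db)
lr = 0.05

for step in range(2000):
    z = rng.normal(0.0, 1.0, 64)
    fake = gw * z + gb
    x_real = real_samples(64)

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    for x, label in ((x_real, 1.0), (fake, 0.0)):
        p = sigmoid(dw * x + db)
        grad = p - label                  # dLoss/dLogit for cross-entropy
        dw -= lr * np.mean(grad * x)
        db -= lr * np.mean(grad)

    # Generator step: try to fool the discriminator, push D(fake) toward 1.
    p = sigmoid(dw * fake + db)
    grad = (p - 1.0) * dw                 # chain rule through D into the fake
    gw -= lr * np.mean(grad * z)
    gb -= lr * np.mean(grad)

# After training, the generator's output should have drifted
# toward the real data's mean (around 4).
print(round(gb, 1))
```

The key dynamic this sketch shows is the one the article describes: each time the discriminator gets better at flagging fakes, the generator’s gradient pushes its output closer to the real distribution, until the two are hard to tell apart.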
What are the Dangers of Deepfake Technology?
For now, the novelty of deepfake videos makes them fascinating and fun to watch. But lurking beneath the surface of this seemingly amusing technology is a danger that could get out of hand.
Deepfake technology is evolving to a point where it will likely be difficult to tell fake videos apart from real ones. This could have disastrous consequences, especially for public figures and celebrities, whose careers and lives could be compromised or even ruined outright by malicious deepfakes. People with malicious intentions could use deepfakes to impersonate others and exploit their friends, families, and colleagues. Fake videos of world leaders could even be used to spark international incidents or wars.
Is It Possible to Detect Deepfakes?
At present, it may still be possible to spot a badly generated deepfake with the naked eye. Missing human nuances, such as blinking, and details that are off, such as wrongly angled shadows, are dead giveaways that are usually easy to spot.
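The blinking cue mentioned above can be turned into a simple heuristic. The sketch below is a hypothetical illustration, not a production detector: it assumes you already have a per-frame “eye aspect ratio” (EAR) series from some facial-landmark tool, and it flags clips whose subject blinks implausibly rarely. The function names and thresholds are assumptions made for this example.

```python
import numpy as np

def count_blinks(ear, closed_thresh=0.2):
    """Count dips of the eye-aspect-ratio series below closed_thresh.

    Each transition from open (EAR above threshold) to closed
    (EAR below threshold) is counted as one blink.
    """
    closed = ear < closed_thresh
    starts = closed & ~np.concatenate(([False], closed[:-1]))
    return int(starts.sum())

def looks_suspicious(ear, fps=30, min_blinks_per_min=5):
    """Flag a clip whose blink rate falls below a plausibility floor."""
    minutes = len(ear) / fps / 60.0
    return count_blinks(ear) / minutes < min_blinks_per_min

# Synthetic example: 60 seconds of mostly open eyes (EAR about 0.3),
# with a short 3-frame blink every 4 seconds (15 blinks total).
rng = np.random.default_rng(1)
ear = np.full(1800, 0.3) + rng.normal(0, 0.01, 1800)
for start in range(0, 1800, 120):
    ear[start:start + 3] = 0.05

print(looks_suspicious(ear))                  # natural blink rate
print(looks_suspicious(np.full(1800, 0.3)))   # no blinks at all
```

A real detector would combine many such cues, and, as the next paragraph notes, any single hand-crafted giveaway stops working once generators learn to reproduce it.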
But as the technology advances and GAN techniques improve, it may soon be impossible to tell whether a video is authentic. The first GAN component, the one that creates the forgeries, will continue to improve over time; that is precisely what the ML training process does, continuously refining the model so that its output gets better and better. At some point, it may overtake our human capacity to distinguish what is real from what is fake. In fact, some experts believe that perfectly realistic digitally manipulated videos are anywhere from six months to a year away.
That is why initiatives to create AI-based countermeasures to deepfakes are ongoing. But as the technology continues to evolve, these countermeasures will need to keep pace. Just recently, Facebook and Microsoft, along with other companies and several prominent US universities, formed a consortium behind the Deepfake Detection Challenge (DFDC). This initiative seeks to motivate researchers to develop technologies that can detect whether AI has been used to alter a video.