Deepfake technology uses high-powered computers and deep learning to manipulate videos. The result is a very realistic-looking video of an event that never happened. Here’s a well-known example.
In this clip, you can see former U.S. President Barack Obama speaking. Except it’s not Obama speaking; it’s comedian Jordan Peele, superimposed onto doctored old footage of the former president to match what Peele is saying.
What Is a Deepfake?
A deepfake is a digitally forged image or video of a person that makes them appear to be someone else. It is the next level of fake content creation that takes advantage of artificial intelligence (AI).
The impersonated individual could be a famous personality, such as a celebrity, politician, or business owner, who is targeted by a misinformation campaign. However, deepfakes can also be used to spread false information about anyone.
How Did Deepfakes Start and Who Created Them?
People started becoming aware of deepfake technology when a Reddit user named “Deepfakes” claimed to have developed a machine learning (ML) algorithm that could transpose celebrity faces seamlessly onto adult content videos. The user supplied samples, and the thread soon became very popular, spawning its own subreddit. The site administrators eventually shut it down, but by then the technology had already become well known and widely available. Soon, people started using it to create fake videos, mostly starring politicians and actors.
However, the idea of manipulating videos is not new. Back in the 1990s, some universities were already conducting significant academic research in computer vision. Much of the effort during this time centered on using AI and ML to modify existing video footage of a person speaking and combining it with a different audio track. The Video Rewrite program of 1997 showcased this technology.
How Do Deepfakes Work?
A deepfake video exploits two ML models. One model creates the forgeries from a data set of sample videos, while the other tries to detect whether the video is a fraud. When the second model can no longer tell that the video is counterfeit, the deepfake is probably believable enough to fool a human viewer as well. This technique is called a “generative adversarial network” (GAN).
A GAN performs better when the data set it can work with is large. That’s why many of the early deepfake videos tended to feature politicians and showbiz personalities: there is abundant footage of them that a GAN can use to create very realistic deepfakes.
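The adversarial loop described above can be sketched at toy scale. The snippet below is a minimal, hypothetical illustration, not a real deepfake system: instead of video frames, the “data” are just numbers drawn from a target distribution. A tiny generator (an affine map of noise) and a tiny discriminator (logistic regression) take turns updating, exactly mirroring the forger-versus-detector dynamic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D "real" data standing in for genuine video frames:
# samples from a Gaussian the generator must learn to imitate.
def real_batch(n):
    return rng.normal(4.0, 1.25, n)

a, b = 1.0, 0.0   # generator: g(z) = a*z + b
w, c = 0.1, 0.0   # discriminator: D(x) = sigmoid(w*x + c)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

lr = 0.01
for step in range(3000):
    x_real = real_batch(64)
    z = rng.normal(0.0, 1.0, 64)
    x_fake = a * z + b

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    s_r = sigmoid(w * x_real + c)
    s_f = sigmoid(w * x_fake + c)
    w += lr * (np.mean((1 - s_r) * x_real) - np.mean(s_f * x_fake))
    c += lr * (np.mean(1 - s_r) - np.mean(s_f))

    # Generator update: push D(fake) toward 1 (fool the discriminator).
    z = rng.normal(0.0, 1.0, 64)
    x_fake = a * z + b
    s_f = sigmoid(w * x_fake + c)
    a += lr * np.mean((1 - s_f) * w * z)
    b += lr * np.mean((1 - s_f) * w)

# After training, the generator's output distribution should have
# drifted toward the real data (mean near 4.0).
fakes = a * rng.normal(0.0, 1.0, 500) + b
print(f"real mean ~4.0, generated mean: {fakes.mean():.2f}")
```

Real deepfake GANs follow the same alternating scheme, only with deep convolutional networks over images and far larger data sets.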
What Are the Dangers of Deepfake Technology?
For now, the novelty of deepfake videos makes them fascinating and fun to watch. But lurking beneath the surface of this seemingly amusing technology is a danger that could get out of hand.
Deepfake technology has evolved to a point where it will most likely be difficult to tell fake videos apart from real ones. That could have disastrous consequences, especially for public figures and celebrities. Careers and lives may be compromised and even ruined outright by malicious deepfakes. People with malicious intentions could use these to impersonate others and exploit their friends, families, and colleagues. They could even use fake videos of world leaders to spark international incidents or wars.
What Are the Different Deepfake Technology Types?
There are three main types of deepfake technology:
- Face swapping deepfakes: This is the most common type of deepfake that involves swapping one person’s face with another’s in a video or an image.
- Audio deepfakes: These deepfakes replace someone’s voice in a recording with another person’s.
- Textual deepfakes: These deepfakes generate text that appears to have been written by someone else.
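At toy scale, the idea behind textual deepfakes can be illustrated with a first-order Markov chain: learn which words a target author tends to put after which, then generate new text from those statistics. This is a deliberately simple, hypothetical sketch; real textual deepfakes rely on large language models, but the style-mimicry principle is the same.

```python
import random
from collections import defaultdict

# Tiny corpus standing in for a target author's writing (illustrative only).
corpus = ("the market will rebound and the market will grow "
          "investors believe the economy will grow and rebound").split()

# Build the model: each word maps to the list of words that followed it.
model = defaultdict(list)
for cur, nxt in zip(corpus, corpus[1:]):
    model[cur].append(nxt)

def generate(start, length, seed=0):
    """Emit `length` words that follow the corpus's word-to-word patterns."""
    random.seed(seed)
    words = [start]
    for _ in range(length - 1):
        followers = model.get(words[-1])
        if not followers:      # dead end: no observed successor
            break
        words.append(random.choice(followers))
    return " ".join(words)

print(generate("the", 8))
```

Every word pair in the output is one the “author” actually used, which is why even this crude method can produce text with a superficially familiar cadence.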
How Can Cybercriminals Use Deepfakes?
Cybercriminals can use deepfakes in many ways to deceive and defraud people. Some of the most common uses of deepfakes by cybercriminals include:
- Impersonating high-profile individuals: Cybercriminals can use deepfakes to create videos or audio recordings that make it appear that high-profile individuals are saying or doing something they never said or did. These deepfakes can be used to spread misinformation, damage reputations, or even commit financial fraud.
- Stealing personal information: Attackers can use deepfakes to create realistic-looking fake IDs or passports to steal money or commit identity theft.
- Infiltrating organizations: Cybercriminals can use deepfakes to create videos or audio recordings in which they appear to be a trusted member of a legitimate organization. These can then be used to access sensitive information or trick people into making payments.
Deepfake technology is still in the early stages but continues to rapidly evolve. It’s essential to be aware of its potential dangers and critical of information we see online.
Is It Possible to Detect Deepfakes?
At present, it may still be possible to spot badly generated deepfakes with the naked eye. The lack of human nuances (such as blinking) and details that are off (like wrongly angled shadows) are dead giveaways that are usually easy to spot.
But as the technology becomes more advanced and GAN processes improve, it will soon be impossible to tell whether a video is authentic. The first GAN component, the one that creates forgeries, will continue to improve over time. That’s what ML is for: to continuously train the model so it gets better and better.
At some point, it will surpass the human capacity to recognize what is real and what is fake. In fact, some experts believe that perfectly realistic digitally manipulated videos are anywhere from six months to a year away. That is why initiatives to create AI-based countermeasures to deepfakes are ongoing. But as the technology continues to evolve, these countermeasures need to keep pace.
Recently, Facebook and Microsoft, along with other companies and several prominent U.S. universities, formed a consortium behind the Deepfake Detection Challenge (DFDC). This initiative seeks to motivate researchers to develop technologies that can detect whether AI has been used to alter a video.
What Is a Shallowfake?
Shallowfakes are videos manipulated with basic editing tools, such as speed effects, to misrepresent what happened. Some shallowfake videos make their subjects seem impaired when slowed down or overly aggressive when sped up. A popular example is the Nancy Pelosi video that was slowed down to make her look drunk.
An example of a sped-up shallowfake, meanwhile, is the video that Sarah Sanders tweeted of CNN reporter Jim Acosta, which made him look more aggressive than he really was while talking to an intern.
A shallowfake is sometimes also called a “dumbfake.” Other videos in this category are mislabeled to make them look like they were shot somewhere other than where they actually took place. This kind of fake content can lead to disastrous consequences, such as the violence that broke out in Myanmar.
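The speed effects described above amount to nothing more than frame retiming. Treating a video as a list of frames, a minimal sketch of both tricks looks like this (a toy illustration, not any real editor’s implementation):

```python
# A video is just a sequence of frames. A "shallowfake" speed effect
# can be produced by duplicating or dropping frames; no AI is involved.
def slow_down(frames, factor=2):
    """Show each frame `factor` times (half speed when factor=2)."""
    return [f for f in frames for _ in range(factor)]

def speed_up(frames, factor=2):
    """Keep only every `factor`-th frame (double speed when factor=2)."""
    return frames[::factor]

clip = list(range(10))          # stand-in for 10 video frames
half_speed = slow_down(clip)    # 20 frames: plays twice as long
double_speed = speed_up(clip)   # 5 frames: plays twice as fast
print(len(half_speed), len(double_speed))
```

In practice, tools like FFmpeg achieve the same result by rescaling frame timestamps (for example, with its `setpts` video filter), which is part of why shallowfakes are so cheap and easy to produce.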
What Is the Difference between a Deepfake and a Shallowfake?
Creating shallowfakes does not require deep learning systems. But even though shallowfake videos don’t use AI, they can be just as numerous and damaging as deepfakes. The name merely indicates how the video was produced, that is, without deep learning.
Are Shallowfakes Easily Identifiable?
While it’s easier to tell that a video is a shallowfake because it is more crudely made than a deepfake, politicians, academics, and other experts believe it can still do a lot of damage to its subject. And even if the real video (i.e., the one before alteration) is easy to locate on the Internet, less discerning viewers could still fall for and spread the fake content without thinking twice.
Are Deepfakes and Shallowfakes Covered by Existing Cybercrime Laws?
California made deepfake distribution illegal in 2019. However, legislators have admitted that the law (AB 730), which makes it illegal to circulate doctored videos, images, or audio files of politicians within 60 days of an election, is hard to enforce.
If the misinformation from both deepfakes and shallowfakes is not handled properly, many of their subjects’ reputations could suffer.