Deepfake technology uses high-powered computing and deep learning to manipulate videos. The result is a very realistic-looking video of an event that never happened. Here's a very good example:
In this clip, you can see former U.S. President Barack Obama speaking. Except it's not Obama speaking; it's comedian Jordan Peele, with old footage of the former president doctored so that it matches what Peele is saying.
What Is a Deepfake?
A deepfake is a digitally forged image or video that makes a person appear to be someone else or to say and do things they never did. It is the next level of fake content creation, one that takes advantage of artificial intelligence (AI).
The impersonated individual is often a famous personality, such as a celebrity, politician, or business owner, targeted by a misinformation campaign. However, deepfakes can also be used to spread false information about anyone.
How Did Deepfakes Start and Who Created Them?
People started becoming aware of deepfake technology when a Reddit user named "Deepfakes" claimed to have developed a machine learning (ML) algorithm that could seamlessly transpose celebrity faces onto pornographic videos. He supplied samples, and the thread soon became so popular that it spawned its own subreddit. Reddit's administrators eventually shut it down, but by that time the technology had become well known and widely available. Soon, people were using it to create fake videos, mostly starring politicians and actors.
However, the idea of manipulating videos is not new. Back in the 1990s, universities were already conducting significant academic research in computer vision. Much of that effort centered on using artificial intelligence (AI) and ML to modify existing footage of a person speaking and combine it with a different audio track. The Video Rewrite program of 1997 showcased this technique.
How Do Deepfakes Work?
A deepfake video exploits two competing machine learning (ML) models. One model creates the forgeries from a data set of sample videos, while the other tries to detect whether a given video is a fraud. When the second model can no longer tell that the video is counterfeit, the deepfake is probably believable enough to fool a human viewer as well. This technique is called a generative adversarial network (GAN).
A GAN works better when it has a large data set to learn from. That's why much of the early deepfake footage tends to feature politicians and show business celebrities: there are many videos of them that a GAN can use to create very realistic deepfakes.
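To make the adversarial setup concrete, here is a minimal GAN sketch in Python using PyTorch. It is purely illustrative: the toy dimensions, data, and hyperparameters are assumptions, and a real deepfake pipeline builds far larger networks around face images rather than random vectors.

```python
# A minimal GAN training loop (PyTorch): a generator learns to produce
# fakes that a discriminator can no longer distinguish from real samples.
# Network sizes, data, and hyperparameters are illustrative placeholders.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64  # assumed toy dimensions

generator = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim)
)
discriminator = nn.Sequential(
    nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1), nn.Sigmoid()
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(1000):
    real = torch.randn(32, data_dim) + 3.0  # stand-in for real samples
    fake = generator(torch.randn(32, latent_dim))

    # 1. Train the discriminator to tell real from fake.
    d_opt.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
    d_loss.backward()
    d_opt.step()

    # 2. Train the generator to fool the discriminator.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_loss.backward()
    g_opt.step()
```

Once the discriminator's guesses hover around 50/50, it can no longer tell real from fake, which is exactly the point at which the generator's forgeries become convincing.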
What Are the Dangers of Deepfake Technology?
For now, the novelty of deepfake videos makes them fascinating and fun to watch. But lurking beneath the surface of this seemingly amusing technology is a danger that could get out of hand.
Deepfake technology is evolving to a point where it will likely be difficult to tell fake videos apart from real ones. This could have disastrous consequences, especially for public figures and celebrities. Careers and lives may be compromised, even ruined outright, by malicious deepfakes. People with malicious intentions could use deepfakes to impersonate others and exploit their friends, families, and colleagues. Fake videos of world leaders could even be used to spark international incidents or wars.
Is It Possible to Detect Deepfakes?
At present, it may still be possible to spot badly generated deepfakes with the naked eye. Missing human nuances, such as blinking, and details that are off, such as wrongly angled shadows, are usually dead giveaways.
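One of those nuances, blinking, can even be checked programmatically. Below is a minimal sketch of the eye aspect ratio (EAR) approach used by some early detectors; the facial-landmark extraction step (e.g., with a library such as dlib or MediaPipe) is assumed, and the 0.2 threshold is a common but illustrative choice.

```python
# Counting blinks via the eye aspect ratio (EAR). Landmarks are assumed
# to come from an external face-landmark detector (e.g., dlib or
# MediaPipe); each eye is six (x, y) points ordered p1..p6.
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: 6x2 array of landmarks around one eye (p1..p6)."""
    vertical = (np.linalg.norm(eye[1] - eye[5]) +
                np.linalg.norm(eye[2] - eye[4]))
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return vertical / (2.0 * horizontal)

def count_blinks(frames_landmarks, threshold=0.2):
    """frames_landmarks: per-frame 6x2 eye landmarks. A blink is a dip
    of the EAR below the threshold followed by a recovery above it."""
    blinks, closed = 0, False
    for eye in frames_landmarks:
        ear = eye_aspect_ratio(np.asarray(eye, dtype=float))
        if ear < threshold and not closed:
            closed = True
        elif ear >= threshold and closed:
            blinks += 1
            closed = False
    return blinks
```

People on camera typically blink every few seconds, so a long clip whose EAR never dips below the threshold is the kind of anomaly this check is meant to flag.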
But as the technology advances and GAN processes improve, it will soon be impossible to tell whether a video is authentic. The first GAN component, the one that creates the forgeries, will continue to improve over time. That is what ML is for: to keep training the model so that it gets better and better. At some point, it will overtake our human capacity to recognize what is real and what is fake. In fact, some experts believe that perfectly realistic digitally manipulated videos are anywhere from six months to a year away.
That is why initiatives to create AI-based countermeasures against deepfakes are ongoing. But as the technology continues to evolve, these countermeasures need to keep pace. Just recently, Facebook and Microsoft, along with other companies and several prominent U.S. universities, formed a consortium behind the Deepfake Detection Challenge (DFDC). The initiative seeks to motivate researchers to develop technologies that can detect whether AI has been used to alter a video.
What Is a Shallowfake?
Shallowfakes are videos manipulated with basic editing tools, such as speed effects, to show something fake. Slowing a video down can make its subject seem impaired, while speeding it up can make them look overly aggressive. A popular example of a shallowfake is the Nancy Pelosi video that was slowed down to make her look drunk.
An example of a sped-up shallowfake, meanwhile, is the video of CNN reporter Jim Acosta that Sarah Sanders tweeted, which made him look more aggressive than he really was while talking to an intern.
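Neither of these edits involves AI; they can be reproduced with everyday tools. As a minimal sketch, the Python snippet below drives ffmpeg (assumed to be installed, with hypothetical file names) to change a clip's speed using the standard setpts and atempo filters.

```python
# Changing a clip's speed with plain editing tools -- no deep learning
# involved. Assumes ffmpeg is installed; file names and the speed
# factor are placeholders for illustration.
import subprocess

def change_speed(src: str, dst: str, factor: float) -> None:
    """factor < 1.0 slows the clip down; factor > 1.0 speeds it up.
    setpts rescales video timestamps, atempo rescales the audio
    (atempo only accepts factors between 0.5 and 2.0 per pass)."""
    subprocess.run(
        [
            "ffmpeg", "-y", "-i", src,
            "-filter:v", f"setpts={1.0 / factor}*PTS",
            "-filter:a", f"atempo={factor}",
            dst,
        ],
        check=True,
    )

# e.g., a 75%-speed, "impaired"-looking version of a hypothetical clip:
change_speed("speech.mp4", "speech_slow.mp4", 0.75)
```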
A shallowfake is sometimes also called a "dumbfake." Other videos that fall under this category are mislabeled to make them look like they happened somewhere other than where they were actually filmed. This kind of fake content can lead to disastrous consequences, such as the violence that broke out in Myanmar.
What Is the Difference between a Deepfake and a Shallowfake?
Creating shallowfakes does not require deep learning systems. But just because shallowfake videos don't use AI doesn't mean they differ much from deepfakes in quality or quantity. The name merely indicates how the video was produced and which technology (i.e., deep learning) was not used to create it.
Are Shallowfakes Easily Identifiable?
While it is easier to tell that a video is a shallowfake, since it is more crudely made than a deepfake, politicians, academics, and other experts believe it can still cause a lot of damage to its subject. And even if the real video (i.e., the one before alteration) is easy to find on the Internet, less discerning viewers could still fall for the fake content and spread it without thinking twice.
Are Deepfakes and Shallowfakes Covered by Existing Cybercrime Laws?
California made deepfake distribution illegal in 2019. But politicians admit that the law (AB 730), which makes it illegal to circulate doctored videos, images, or audio files of politicians within 60 days of an election, is hard to enforce.
If the misinformation that deepfakes and shallowfakes spread is not handled properly, many of their subjects' reputations could suffer.
