They are the talk of the town: fake videos that show people doing and saying things they never did or said. But the deepfakes coming out now are little more than stunts that show off the creative potential of deep learning and artificial intelligence (AI). Some people are not amused, though. They think these videos could eventually be used for more sinister purposes.
Check out this video of comedian Bill Hader and pay attention to how his face morphs when he does impressions of Tom Cruise and Seth Rogen:
The transformations are subtle but remarkable. There were surely moments when you wondered whether the interviewee was really Tom Cruise or Seth Rogen rather than Bill Hader. That's the startling and dangerous power of deepfakes: they can make you believe that what you are seeing is the real deal.
A Dangerous Form of Amusement
For now, deepfakes appear harmless, but some believe they can be abused.
You see, deepfakes are getting better. The tools that let people create them as easily as they would a spreadsheet are slowly but surely being built. And the people who dabble in this novel art form are building up their skills to become masters of the craft. Very soon, we can expect fake videos that are impossible to distinguish from real ones. All anyone would need then is a potentially volatile situation and enough motivation to exploit it, and they would have every ingredient for the perfect deepfake storm.
Here's a simple situation to illustrate. Say that tension between two neighboring countries continues to escalate. Thankfully, the leaders of both nations remain calm and call on their citizens to temper their aggression. But then a video appears of one of the leaders declaring war on the other country. Even if it is later found to be a deepfake, people may already have been driven to fight. The damage has been done.
A situation like this is certainly not improbable, and it may well justify taking preemptive action now. Just recently, the U.S. Congress held a hearing to explore the dangers that AI and deepfakes pose, which shows how seriously lawmakers are taking the issue. California, for one, has already banned the distribution of political deepfakes. But is there enough reason to outlaw the creation and distribution of fake videos altogether?
Overblown Fears
Others, however, say that much of the reaction to deepfakes is overblown. They argue that the problem already exists even without doctored videos. We are in the information age, after all, and people long ago mastered the craft of using information technology to spread misinformation. Take the problems healthcare organizations are facing in fighting the current Ebola outbreak in the Democratic Republic of Congo. Healthcare workers are finding it hard to reach infected people and provide them with early treatment because of politically motivated misinformation spread through text messages, not deepfakes.
Seen this way, deepfakes are simply the latest technology to expose deep-seated problems. Some people view them as weapons to further their own interests, and we, the targets, often have no way to confirm whether the information we see is real. So banning deepfake distribution probably makes little sense, because malicious actors will simply use whatever medium, channel, or technology is available anyway. We might as well ban every other communication channel, from pen and paper to the Internet.
The Crux of the Matter
All in all, while deepfakes do pose a threat and can be used by malicious people to sow misinformation, banning their creation or distribution will probably do little to solve the problem. What's needed is to give the general public better ways to verify facts and to educate people to be critical and analytical about the information they receive.
