Deepfakes are currently amusing. Soon, however, they could become a nuisance. And if allowed to persist unchecked, they may evolve into the menace some believe they already are. So it’s clear deepfakes need to be managed before things escalate. But we currently have no solutions, and the technology keeps improving as more tools become available and more people grow proficient at creating these videos.

Perhaps the answer lies in two currently celebrated technologies. The first is blockchain, the darling of the financial world. The other is the technology that helped create deepfakes in the first place: artificial intelligence (AI). Let’s take a look at how each can help mitigate the dangers posed by fake videos.

Blocking Deepfakes with Blockchain

People have high expectations of blockchain because of its vaunted security features. After all, if it’s good enough to secure cryptocurrency transactions, then perhaps it holds the key to protecting us from a deluge of bogus videos.

Blockchain’s most compelling feature is its immutable ledger, an append-only database that records all transactions. From a regulatory perspective, this makes for a convenient registry of all videos produced, or at least the ones whose authenticity is of great importance, such as news videos. Anyone creating a video would be required to register it in this database. Anyone checking the video’s authenticity simply needs to consult the registry to see if the details match.
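To make the idea concrete, here is a minimal Python sketch of such an append-only registry. The `VideoRegistry` class and its methods are hypothetical, a toy stand-in for a real blockchain ledger: each video is identified by the SHA-256 hash of its content, and entries can be added but never changed or removed.

```python
import hashlib

class VideoRegistry:
    """Toy append-only registry mapping video content hashes to metadata.
    A stand-in for a blockchain ledger: entries can be added, never edited."""

    def __init__(self):
        self._entries = {}

    def register(self, video_bytes, metadata):
        """Record a video's content hash; refuse duplicates (append-only)."""
        digest = hashlib.sha256(video_bytes).hexdigest()
        if digest in self._entries:
            raise ValueError("video already registered")
        self._entries[digest] = metadata
        return digest

    def verify(self, video_bytes):
        """Return the registered metadata if this exact content is on
        record, or None if it is unknown (possibly altered)."""
        return self._entries.get(hashlib.sha256(video_bytes).hexdigest())

registry = VideoRegistry()
original = b"...raw video bytes..."
registry.register(original, {"source": "Newsroom A", "date": "2019-08-01"})

print(registry.verify(original) is not None)                  # True: on the ledger
print(registry.verify(b"...altered bytes...") is not None)    # False: no record
```

Because the hash covers every byte of the file, even a one-pixel edit produces a different digest, so a doctored copy simply fails to match any registered entry.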

The second good thing about the blockchain registry is that it is distributed across a network. That is, authentic copies exist on a number of designated servers. Should one server go down for whatever reason, a video’s authenticity can still be confirmed by any of the other servers. The registry will always be available for people to verify if a video is legit.
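That redundancy can be sketched in a few lines, with each replica modeled as a set of registered hashes (or `None` for a server that is down). The quorum mechanic here is my own illustration, not any specific blockchain protocol:

```python
def verify_across_replicas(digest, replicas, quorum=1):
    """Confirm a video's content hash against several replica copies of
    the registry. A downed server is represented as None; as long as
    `quorum` live replicas hold the digest, verification succeeds."""
    confirmations = sum(1 for r in replicas if r is not None and digest in r)
    return confirmations >= quorum

# Three replicas, one of which is offline.
replicas = [{"abc123"}, None, {"abc123"}]
print(verify_across_replicas("abc123", replicas))   # True: two replicas confirm
print(verify_across_replicas("zzz999", replicas))   # False: no replica has it
```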

The third thing going for blockchain is an internal protocol called a “smart contract,” which governs the execution of a transaction and binds all parties to its terms.

These features make blockchain well suited to recording a video’s provenance from the get-go. When questions arise about a video’s authenticity, the ledger provides a means of verification. But blockchain isn’t the only option people are currently looking at; there’s also AI.

The (Artificially) Intelligent Response to Deepfakes

What AI has done, we assume it can undo. Deepfakes were developed using AI and deep learning. A technique called a “generative adversarial network” (GAN) pits two AI systems against each other. One system generates the fake video and tries to fool the other, which tries to detect whether the video is fraudulent. When the detector can no longer tell that the video is fake, the deepfake is complete.
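That adversarial loop can be shown in miniature. The toy below is an illustrative simplification I wrote from scratch, not a deepfake generator: “video frames” are single numbers, the generator is a two-parameter map g(z) = a·z + b, the discriminator is a one-input logistic classifier, and the gradients are derived by hand. The structure, however, is exactly the GAN tug-of-war described above.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# "Real" data: a stand-in for authentic video frames, here just
# numbers clustered around 4.0.
def real_sample():
    return random.gauss(4.0, 0.5)

# Generator: maps noise z to a fake sample via g(z) = a*z + b.
a, b = 1.0, 0.0
def generate():
    z = random.gauss(0.0, 1.0)
    return a * z + b, z

# Discriminator: d(x) = sigmoid(w*x + c), the probability that x is real.
w, c = 0.1, 0.0
def discriminate(x):
    return sigmoid(w * x + c)

lr = 0.05
for step in range(2000):
    # Discriminator step: push d(real) toward 1 and d(fake) toward 0.
    xr = real_sample()
    xf, _ = generate()
    dr, df = discriminate(xr), discriminate(xf)
    w += lr * ((1 - dr) * xr - df * xf)
    c += lr * ((1 - dr) - df)

    # Generator step: nudge (a, b) so the discriminator mistakes the
    # fake for real (push d(fake) toward 1).
    xf, z = generate()
    df = discriminate(xf)
    grad = (1 - df) * w      # gradient of log d(xf) with respect to xf
    a += lr * grad * z
    b += lr * grad

# As the two systems compete, the generator's outputs drift from their
# starting mean (0) toward the real data's neighborhood.
fakes = [generate()[0] for _ in range(1000)]
print(sum(fakes) / len(fakes))
```

Scaled up to millions of parameters and real video frames, the same back-and-forth is what makes each generation of deepfakes harder to spot than the last.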

A detection solution can take on the role of the second AI system in the GAN to spot deepfakes. But its abilities can be significantly bolstered by exposing it to large amounts of both doctored and authentic video. Deep learning can then help it tell the difference between the two until it reaches a level where even the best deepfakes can’t get past it.

A group from UC Berkeley is attempting to sniff out deepfakes by training their AI to recognize the facial quirks of individuals. The approach is based on people’s tendency to make unique facial and body movements when they speak. Of course, the downside is that you need to build an extensive archive of people’s videos. And when a subject isn’t in the archive, you have to track down their videos so you can train the AI to recognize their facial quirks.

Other approaches under development include recognizing characteristic eye movements and investigating inconsistencies in the video’s pixels to tell if it is legitimate or fake.
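To give a flavor of the pixel-inconsistency idea, here is a deliberately crude sketch, my own toy rather than any published detector. It treats each region of a frame as a flat list of pixel values and flags regions whose noise level departs sharply from the rest of the frame, since spliced or synthesized patches often carry noise statistics that don’t match their surroundings.

```python
import statistics

def block_noise(block):
    """Crude noise proxy: spread of adjacent-pixel differences."""
    diffs = [abs(p - q) for p, q in zip(block, block[1:])]
    return statistics.pstdev(diffs)

def flag_inconsistent_blocks(blocks, tolerance=3.0):
    """Flag blocks whose noise level deviates from the frame's median
    by more than `tolerance` times the median absolute deviation."""
    levels = [block_noise(b) for b in blocks]
    med = statistics.median(levels)
    mad = statistics.median(abs(lv - med) for lv in levels)
    mad = max(mad, 1e-9)   # guard against a zero spread
    return [i for i, lv in enumerate(levels)
            if abs(lv - med) > tolerance * mad]

# Four smooth regions and one whose pixel values jump wildly.
frame = [
    [10, 11, 10, 12, 11, 10],
    [10, 11, 10, 12, 11, 10],
    [10, 11, 10, 12, 11, 10],
    [10, 11, 10, 12, 11, 10],
    [10, 50, 5, 60, 0, 70],   # noise profile unlike its neighbors
]
print(flag_inconsistent_blocks(frame))   # [4]
```

Real detectors of this kind work on two-dimensional images with far subtler statistics, but the underlying intuition is the same: manipulated pixels rarely match the noise fingerprint of the camera that shot the rest of the frame.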

It’s Never as Easy as It Seems

As with any solution to a formidable problem, things sound much simpler than they really are. AI and blockchain clearly have features that can help stave off the threat that deepfakes pose. But a multitude of details needs to be ironed out in the process.

For AI, some have raised concerns that countering deepfakes by training software could cause even more problems down the line. Privacy advocates, for example, worry that the need to train AI systems to spot false videos will require companies to build repositories of online activity, and there’s no telling how these will be used in the future.

Blockchain, by virtue of its distributed nature, poses a more complex puzzle for defenders of truth in information. It offers more entry points for bad actors to try to exploit, so keeping the network safe and stable adds another layer of concern.

So, AI and blockchain are far from perfect solutions. Still, if people can figure out adoption and application issues, both technologies working together can create a formidable defense against the threat of deepfakes.
