Deepfake Technology: Pros, Cons, and the Ethical Implications

A deepfake is a video, photo, or audio recording that looks real but has been manipulated or generated by artificial intelligence. The term has been in the news because of the technology's potential for abuse: politicians have been spoofed in embarrassing ways, celebrities have been unwittingly cast in pornography and, worst of all, it is possible to create videos that show people doing or saying things they never did. These fears have prompted companies and government agencies to develop countermeasures, including programs that can identify deepfakes and stop them from spreading.

Many of the tools that make deepfakes possible are built on a special kind of artificial intelligence called neural networks. Neural networks, which are loosely modeled on the way neurons in a human brain process information, use multiple "hidden layers" of learned transformations that turn raw input signals into meaningful output signals.
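The hidden-layer idea described above can be sketched in a few lines. This is a minimal illustration, not a deepfake model: the layer sizes and random weights are arbitrary assumptions, and in a real network the weights would be learned from data rather than drawn at random.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # A common nonlinearity; without it, stacked layers collapse into one.
    return np.maximum(0.0, x)

# Random weights stand in for learned ones (layer sizes are arbitrary here).
W1 = rng.normal(size=(4, 8))   # raw input (4 features) -> hidden layer (8 units)
W2 = rng.normal(size=(8, 2))   # hidden layer -> output (2 scores)

def forward(x):
    hidden = relu(x @ W1)      # the "hidden layer": a learned transformation
    return hidden @ W2         # meaningful output signal (e.g., class scores)

x = rng.normal(size=(1, 4))    # a raw input signal
print(forward(x).shape)        # (1, 2)
```

Each hidden layer re-represents its input; stacking several of them is what lets a network go from raw pixels to something as abstract as "this is a face."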
The most popular tool for creating deepfakes is a generative adversarial network, or GAN. A GAN pits two artificial intelligence algorithms against each other. The first algorithm, the generator, is fed random noise and tries to turn it into an image of, say, a celebrity. The second algorithm, the discriminator, looks at these images and tries to tell whether they are real or generated. Over time, the generator gets better at turning noise into convincing faces and the discriminator becomes more accurate, until the discriminator can no longer reliably tell the generator's output from the real thing.
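The adversarial loop above can be shown in miniature. This toy sketch trains a one-parameter-pair generator to mimic a 1-D distribution rather than faces (real GANs use deep convolutional networks), and the gradients are derived by hand; everything here, including the target distribution N(4, 1) and the learning rate, is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-np.clip(x, -30.0, 30.0)))

# "Real" data: samples from N(4, 1). The generator must learn to mimic them.
def real_batch(n):
    return rng.normal(4.0, 1.0, n)

a, b = 1.0, 0.0   # generator: turns noise z into a sample, x = a*z + b
w, c = 0.1, 0.0   # discriminator: d(x) = sigmoid(w*x + c), real -> 1, fake -> 0

lr, n = 0.02, 64
for step in range(2000):
    # --- discriminator update: get better at telling real from fake ---
    z = rng.normal(0.0, 1.0, n)
    xr, xf = real_batch(n), a * z + b
    dr, df = sigmoid(w * xr + c), sigmoid(w * xf + c)
    # gradients of -[log d(real) + log(1 - d(fake))]
    gw = (-(1 - dr) * xr + df * xf).mean()
    gc = (-(1 - dr) + df).mean()
    w, c = w - lr * gw, c - lr * gc

    # --- generator update: get better at fooling the discriminator ---
    z = rng.normal(0.0, 1.0, n)
    xf = a * z + b
    df = sigmoid(w * xf + c)
    # gradients of -log d(fake) w.r.t. the generator's parameters
    ga = (-(1 - df) * w * z).mean()
    gb = (-(1 - df) * w).mean()
    a, b = a - lr * ga, b - lr * gb

fake = a * rng.normal(0.0, 1.0, 1000) + b
print(round(float(fake.mean()), 1))  # should drift toward the real mean of 4
```

The structure is the point: each round, the discriminator trains against the generator's current output, then the generator trains against the discriminator's current judgment, and neither network ever sees the "answer" directly.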
Some tools, like Adobe’s ImageForensics and Intel’s FakeCatcher, look for signs of manipulation in photos and videos. Other programs, such as the recently launched Reality Defender and Deeptrace, aim to keep deepfakes out of your life entirely by acting like a hybrid antivirus/spam filter. They’ll screen incoming media and divert any obvious manipulations to a quarantine zone, much as Gmail diverts spam.
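The screen-and-quarantine workflow described above boils down to scoring incoming media and diverting anything over a threshold. In this sketch, `detector_score` is a hypothetical stand-in for a real detection model (the products named above do not expose this API); the file names, metadata fields, and threshold are all made up for illustration.

```python
# Hypothetical stand-in for a real deepfake detector: returns a score in
# [0, 1], where higher means "more likely manipulated". A real detector
# would analyze pixels and audio, not this made-up metadata flag.
def detector_score(media):
    return 0.9 if media.get("synthetic_markers") else 0.1

def screen(incoming, threshold=0.5):
    # Divert likely manipulations to quarantine, like a spam filter.
    inbox, quarantine = [], []
    for media in incoming:
        (quarantine if detector_score(media) >= threshold else inbox).append(media)
    return inbox, quarantine

incoming = [
    {"name": "press_briefing.mp4", "synthetic_markers": False},
    {"name": "celebrity_clip.mp4", "synthetic_markers": True},
]
inbox, quarantine = screen(incoming)
print([m["name"] for m in quarantine])  # ['celebrity_clip.mp4']
```

As with spam filtering, the hard part is not the plumbing but the detector itself, and the threshold trades false alarms against missed fakes.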