Even the AI Behind Deepfakes Can’t Save Us From Being Duped

“The dozen or so that I looked at have glaring artifacts that more modern face-swap techniques have eliminated,” says Hany Farid, a digital forensics expert at UC Berkeley who works on detecting deepfakes. “Videos like this with visual artifacts are not what we should be training and testing our forensic techniques on. We need significantly higher quality content.”


Google says it created videos spanning a range of quality in order to improve the training of detection algorithms. Henry Ajder, a researcher at Deeptrace Labs, a UK company that is collecting deepfakes and building its own detection technology, agrees that it is useful to have both good and poor deepfakes for training. In the blog post announcing the dataset, Google also said it would add new deepfakes over time to account for advances in the technology.

The effort going into deepfake detectors might seem to signal that a solution is on the way. Researchers are developing automated techniques for spotting videos forged by hand as well as those generated with AI. Like deepfakes themselves, these detection tools increasingly rely on machine learning and large amounts of training data. Darpa, the Defense Department's research arm, runs a program that funds work on automated forgery detection; it is increasingly focused on deepfakes.
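
To make the machine-learning framing concrete, here is a minimal sketch, not any lab's actual system, of how such a detector is commonly trained: a binary classifier fine-tuned on frames extracted from real and fake videos. The `frames/` directory layout is a hypothetical stand-in for a dataset like Google's.

```python
# Minimal sketch of a frame-level deepfake detector: fine-tune a
# pretrained image model to classify frames as real or fake.
# The frames/real and frames/fake directories are hypothetical.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406],   # ImageNet statistics,
                         [0.229, 0.224, 0.225]),  # to match pretraining
])
data = datasets.ImageFolder("frames", transform=preprocess)
loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=True)

model = models.resnet18(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: real, fake

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for frames, labels in loader:              # one pass over the frames
    optimizer.zero_grad()
    loss = loss_fn(model(frames), labels)  # logits vs. ground truth
    loss.backward()
    optimizer.step()
```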

Much more deepfake training data should soon be available. Facebook and Microsoft are building another, larger dataset of deepfake videos, which the companies plan to release to AI researchers at a conference in December.

Sam Gregory, program director of Witness, a project that trains activists to use video evidence to expose wrongdoing, says the new deepfake videos will be useful to academic researchers. But he warns that deepfakes shared in the wild are likely to remain harder to spot automatically, because they may be compressed or remixed in ways that trick even a well-trained detector.
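
Gregory's point about compression can be illustrated with a small sketch. One common hedge is to augment the training frames by re-compressing them, so a detector also sees the degraded copies that circulate on social platforms. The JPEG round trip below, written with the Pillow imaging library, is a rough stand-in for platform transcoding, not a reconstruction of any production pipeline; the quality range is an assumption.

```python
# Sketch of compression augmentation: round-trip a training frame
# through lossy JPEG at a random quality so a detector learns to
# cope with re-encoded copies. The quality range is an assumption.
import io
import random
from PIL import Image

def recompress(frame: Image.Image) -> Image.Image:
    """Simulate platform re-encoding with a lossy JPEG round trip."""
    buf = io.BytesIO()
    frame.convert("RGB").save(buf, format="JPEG",
                              quality=random.randint(30, 90))
    buf.seek(0)
    return Image.open(buf).convert("RGB")
```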

As deepfakes improve, Gregory and others say, humans will need to investigate a video's origins and look for inconsistencies that may be imperceptible to an algorithm, such as a shadow out of place or weather that is wrong for the supposed location.

“There is a future for [automated] detection as a partial solution,” Gregory says. He believes that technical solutions could help alert users and the media to deepfakes, but adds that people need to become more savvy about new possibilities for deception.

Videos can, of course, also be manipulated to deceive without the use of AI. A report published last month by Data & Society, a nonprofit research group, notes that video manipulation already goes well beyond deepfakery. Simple modifications and edits can be just as effective at misleading people, and they are harder to spot with automated tools. A recent example is the video clip of Nancy Pelosi that was slowed down to make it appear as if she were slurring her words.
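
The Pelosi clip also shows how little tooling a crude edit requires. As a rough sketch, slowing a video to 75 percent speed, the approximate effect in that case, takes a single call to the widely used ffmpeg tool; the file names here are placeholders.

```python
# Slow a clip to 75 percent speed with ffmpeg, roughly the edit made
# to the Pelosi clip. setpts stretches video timestamps; atempo slows
# the audio without lowering its pitch. File names are placeholders.
import subprocess

subprocess.run([
    "ffmpeg", "-i", "speech.mp4",
    "-filter:v", "setpts=PTS/0.75",  # video at 75% speed
    "-filter:a", "atempo=0.75",      # audio at 75% speed, pitch kept
    "slowed.mp4",
], check=True)
```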
