How Deepfakes Scramble Our Sense of True and False

“Are you in a precarious situation? … You sound like you can’t talk.” Karah Preiss’ cousin Leslie accused her of being sleepy and distracted and eventually hung up, but didn’t guess the truth. Preiss had placed the call using a software clone of her voice made to demonstrate artificial intelligence’s ability to deceive.

Preiss relates her family experiment in the fifth installment of the Sleepwalkers podcast, a guide to the recent boom in artificial intelligence. The episode examines how AI technology is reshaping perceptions of reality in phone pranks, on Facebook, in Hollywood, and in politics.

Fake videos known as deepfakes are powerful examples of how AI can upend our usual sense of true and false. The term originates from a Reddit account of the same name that in late 2017 posted pornographic video clips with the faces of Hollywood actresses swapped in.

The homemade machine-learning tool used to create those initial deepfakes was soon posted publicly. Deepfake clips are now a staple of both porn sites and YouTube, where one popular meme involves swapping Nicolas Cage into TV shows and movies he didn’t appear in.


Danielle Citron, a law professor at Boston University, tells Sleepwalkers that deepfakes are being used to harass women, both in private and in public. Last year, a porn video edited to depict Indian investigative journalist Rana Ayyub appeared after she criticized a Hindu nationalist political party. Citron says similar targeted attacks could be used against politicians or CEOs.

The potential for such harm has inspired some people to work on technology to detect deepfakes and other AI spoofs, whether of videos, faces, or voices. Sleepwalkers discusses how cameras that cryptographically sign every image at the moment of capture could verify the provenance of video or photos. Hany Farid, a prominent expert in detecting faked photos, discusses how creating “fingerprints” of the characteristic body language of politicians like Elizabeth Warren could make it easier to detect fake clips of those people.
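The signing idea can be sketched in a few lines: the device signs a hash of the image bytes at capture, so any later edit breaks verification. The snippet below is a minimal, stdlib-only illustration using an HMAC with a hypothetical shared device key; real provenance systems would use public-key signatures so verifiers never hold the signing secret.

```python
import hashlib
import hmac

# Illustrative only: a camera holding a secret key signs each image's
# hash at capture time; a verifier with the same key can later confirm
# the bytes are unmodified. (Hypothetical key; real systems use
# public-key signatures rather than a shared secret.)
CAMERA_KEY = b"hypothetical-device-secret"

def sign_image(image_bytes: bytes) -> str:
    """Return a hex signature binding the device key to the image contents."""
    return hmac.new(CAMERA_KEY, image_bytes, hashlib.sha256).hexdigest()

def verify_image(image_bytes: bytes, signature: str) -> bool:
    """Check the signature; any change to the bytes invalidates it."""
    expected = sign_image(image_bytes)
    return hmac.compare_digest(expected, signature)

original = b"\x89PNG...raw image data..."  # stand-in for real image bytes
tag = sign_image(original)
print(verify_image(original, tag))         # True: untouched image
print(verify_image(original + b"x", tag))  # False: tampered image
```

The key design point is that the signature is bound to the exact pixels: a deepfake edit produces different bytes, so the original signature no longer verifies.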

Despite such work, it’s far from clear that the truth can always win out over AI fakes, which are rapidly improving. Citron warns that the mere existence of high-quality AI fakery may erode our shared sense of truth. “When nothing is believable, the mischief doer can say ‘Well, you can’t believe anything,’” she says.

