Even the AI Behind Deepfakes Can’t Save Us From Being Duped

Posted 2nd October 2019 - Technology

Last week Google released several thousand deepfake videos to help researchers build tools that use artificial intelligence to spot altered videos that could spawn political misinformation, corporate sabotage, or cyberbullying.

Google’s videos could be used to create technology that offers hope of catching deepfakes in much the way spam filters catch email spam. In reality, though, technology will only be part of the solution. That’s because deepfakes will most likely improve faster than detection methods, and because human intelligence and expertise will be needed to identify deceptive videos for the foreseeable future.

Deepfakes have captured the imagination of politicians, the media, and the public. Video manipulation and deception have long been possible, but advances in machine learning have made it easy to automatically capture a person’s likeness and stitch it onto someone else. That’s made it relatively simple to create fake porn, surreal movie mashups, and demos that point to the potential for political sabotage.

There is growing concern that deepfakes could be used to sway voters in the 2020 presidential election. A report published this month by researchers at NYU identified deepfakes as one of eight factors that may contribute to disinformation during next year’s race. A recent survey of legislation found that federal and state lawmakers are mulling around a dozen bills to tackle deepfakes. Virginia has already made it illegal to share nonconsensual deepfake porn; Texas has outlawed deepfakes that interfere with elections.

Tech companies have promoted the idea that machine learning and AI will head off such trouble, starting with simpler forms of misinformation. In his testimony to Congress last October, Mark Zuckerberg promised that AI would help Facebook identify fake news stories. This would involve algorithms trained to distinguish between accurate and misleading text and images in posts.
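
To make that idea concrete, here is a minimal sketch of the kind of text classifier such systems build on. Everything in it is illustrative: the posts, labels, and model choice are hypothetical, and production systems use far larger models and many more signals.

```python
# Minimal sketch of a text classifier for misleading posts.
# The corpus and labels below are toy placeholders, not real data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "Scientists confirm vaccine trial results in peer-reviewed study",
    "SHOCKING: miracle cure THEY don't want you to know about",
    "City council approves new budget after public hearing",
    "Leaked photo PROVES the election was decided in advance",
]
labels = [0, 1, 0, 1]  # 0 = accurate, 1 = misleading (toy labels)

# TF-IDF features feed a logistic-regression classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

# Score a new post: the output is the probability it is misleading.
print(model.predict_proba(["Doctors HATE this one weird trick"])[0][1])
```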

The clips released last week, created in collaboration with Jigsaw, an Alphabet subsidiary focused on technology and politics, feature paid actors who agreed to have their faces swapped. The idea is that researchers will use the videos to train software to spot deepfake videos in the wild, and to benchmark the performance of their tools.
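
In practice, detectors of this kind are often trained on individual frames labeled real or fake. The sketch below shows one plausible setup, assuming frames extracted from the clips have been sorted into folders; the paths, backbone, and hyperparameters are illustrative and not Google's or Jigsaw's actual method.

```python
# Sketch: fine-tune a small CNN to label video frames real vs. fake.
# Assumes frames have been extracted into data/train/real and
# data/train/fake (hypothetical paths).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("data/train", transform=transform)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

# ResNet-18 backbone with a two-class head. ImageFolder assigns
# labels alphabetically, so fake = 0 and real = 1 here.
net = models.resnet18(weights=None)
net.fc = nn.Linear(net.fc.in_features, 2)
optimizer = torch.optim.Adam(net.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

net.train()
for epoch in range(3):  # a real run would train far longer
    for frames, targets in loader:
        optimizer.zero_grad()
        loss = loss_fn(net(frames), targets)
        loss.backward()
        optimizer.step()
```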

The clips show people doing mundane tasks: laughing or scowling into the camera; walking aimlessly down corridors; hugging awkwardly. The face-swapping ranges from convincing to easy-to-spot. Many of the faces in the clips seem ill-fitting, or melt or glitch in ways that betray digital trickery.
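
Those glitches hint at crude heuristic checks. As a toy illustration only, not a production forensic technique: blending tends to smooth fine detail, so one could track per-frame sharpness and flag clips where it swings abruptly. The file name and threshold below are placeholders.

```python
# Toy heuristic for "melting" artifacts: measure per-frame sharpness
# (variance of the Laplacian) and flag abrupt frame-to-frame swings.
# Illustrative only; real forensics uses far more robust signals.
import cv2
import numpy as np

def sharpness_series(path):
    cap = cv2.VideoCapture(path)
    values = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        values.append(cv2.Laplacian(gray, cv2.CV_64F).var())
    cap.release()
    return np.array(values)

s = sharpness_series("clip.mp4")  # hypothetical file
jumps = np.abs(np.diff(s)) / (s[:-1] + 1e-6)
print("suspicious" if jumps.max() > 0.5 else "unremarkable")  # toy threshold
```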

WIRED downloaded many of the clips and shared them with several experts. Some said that deepfake techniques have already advanced beyond those Google used to make some of the videos.

“The dozen or so that I looked at have glaring artifacts that more modern face-swap techniques have eliminated,” says Hany Farid, a digital forensics expert at UC Berkeley who is working on deepfakes. “Videos like this with visual artifacts are not what we should be training and testing our forensic techniques on. We need significantly higher quality content.”

Google says it created videos that range in quality to improve training of detection algorithms. Henry Ajder, a researcher at a UK company called Deeptrace Lab, which is collecting deepfakes and building its own detection technology, agrees that it is useful to have both good and poor deepfakes for training. Google also said in the blog post announcing the video dataset that it would add deepfakes over time to account for advances in the technology.

The amount of effort being put into the development of deepfake detectors might seem to signal that a solution is on the way. Researchers are working on automated techniques for spotting videos forged by hand as well as using AI. These detection tools increasingly rely, like deepfakes themselves, on machine learning and large amounts of training data. Darpa, the research arm of the Defense Department, runs a program that funds researchers working on automated forgery detection tools; it is increasingly focused on deepfakes.
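
When a detector is benchmarked against a dataset like Google's, the usual summary statistic is ROC AUC over held-out videos, which captures detection accuracy across all decision thresholds. A minimal sketch, with placeholder scores and labels standing in for a trained model's output:

```python
# Sketch: benchmarking a detector on held-out videos. Assumes a
# trained model emits a fake-probability per video; the scores and
# labels below are placeholders, not real results.
from sklearn.metrics import roc_auc_score, roc_curve

labels = [1, 0, 1, 1, 0, 0, 1, 0]  # 1 = deepfake, 0 = genuine
scores = [0.91, 0.30, 0.67, 0.88, 0.12, 0.45, 0.75, 0.20]

# AUC summarizes accuracy across thresholds; a detector that lags
# behind newer deepfakes shows up as a falling AUC over time.
print("AUC:", roc_auc_score(labels, scores))

# The ROC curve helps pick an operating threshold for deployment.
fpr, tpr, thresholds = roc_curve(labels, scores)
```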

Read more at https://www.wired.com/latest. By Will Knight.
