“Deep Fakes”: Researchers Develop Forensic Techniques to Identify Tampered Videos

by Amy Blumenthal

Published on June 20th, 2019

This week, computer scientists from the USC Viterbi School of Engineering presented two separate papers on detecting “deep fakes”: manipulated and doctored videos that make it appear as if a person did or said something when, in fact, no such action occurred.

The issue was brought to the public’s attention last week during congressional testimony, and was also showcased by a doctored video of Facebook’s Mark Zuckerberg appearing to talk about the potential use of data. The real concern for many is the forthcoming 2020 election: the potential manipulation of presidential candidates’ statements, and the possibility that false utterances could cause conflict on a global stage.

Given this pressing context, USC researchers are working on ways to detect manipulated content. Their papers were presented at the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) in Long Beach, California.

An Automatic Detection Tool for Fake News and Facebook Videos?

In the first paper, “Recurrent Convolutional Strategies for Face Manipulation Detection in Videos,” computer scientists from the USC Information Sciences Institute (USC ISI), including Ekraam Sabir, Jiaxin Cheng, Ayush Jaiswal, Wael Abd-Almageed, Iacopo Masi, and Prem Natarajan, developed a method that identifies deep fakes with 96 percent accuracy when evaluated on a large-scale deep fake dataset. It works on several types of content manipulation, including Deepfakes, FaceSwap, and Face2Face. At the time of publishing, the authors said their detection method was ahead of the content manipulators, who quickly adapt as new detection methods arise.

Previous methods of detecting deep fakes often relied on frame-by-frame analysis of various aspects of a video; these prior methods, the USC ISI authors contend, are computationally heavy, take more time, and leave greater room for error. The newer tool developed by ISI, which was tested on more than 1,000 videos, is less computationally intensive. It thus has the potential to scale and to automatically detect, in near real time, fakes among the millions of videos uploaded to Facebook and other social media platforms.

This effort, led by principal investigator Wael Abd-Almageed, an expert in computer vision, facial recognition, and biometrics, looks at a piece of video content as a whole. The researchers used artificial intelligence to look for inconsistencies in the images through time, not just on a frame-by-frame basis. This is a key distinction, says Abd-Almageed: sometimes a manipulation cannot be detected at the frame level, but it reveals itself through inconsistencies in facial motion over time.
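To make the idea concrete, here is a minimal sketch of a recurrent-convolutional detector, written in Python with PyTorch. It is an illustration of the general approach, not the authors’ actual implementation: the backbone choice, layer sizes, and clip length below are all assumptions.

```python
# Illustrative sketch only: a CNN extracts features from each face frame,
# and a recurrent layer looks for inconsistencies across time.
import torch
import torch.nn as nn
from torchvision import models

class RecurrentFaceManipulationDetector(nn.Module):
    def __init__(self, hidden_size=256):
        super().__init__()
        # CNN backbone turns each frame into a 512-dim feature vector
        # (ResNet-18 is an arbitrary stand-in choice here).
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Identity()  # drop the classification head
        self.cnn = backbone
        # Recurrent layer models how features evolve over time, so
        # temporal artifacts (e.g., jittery facial motion) can surface.
        self.rnn = nn.GRU(input_size=512, hidden_size=hidden_size,
                          batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden_size, 2)  # real vs. fake

    def forward(self, frames):
        # frames: (batch, time, 3, H, W) -- cropped, aligned face frames
        b, t, c, h, w = frames.shape
        feats = self.cnn(frames.view(b * t, c, h, w)).view(b, t, -1)
        _, hidden = self.rnn(feats)
        # Concatenate the final forward and backward hidden states.
        video_repr = torch.cat([hidden[-2], hidden[-1]], dim=1)
        return self.classifier(video_repr)

# Usage: a batch of two 16-frame clips of 224x224 face crops.
clips = torch.randn(2, 16, 3, 224, 224)
logits = RecurrentFaceManipulationDetector()(clips)  # shape (2, 2)
```

The design point the sketch captures is the one Abd-Almageed describes: the recurrent layer sees the whole clip, so evidence can accumulate across frames even when no single frame looks wrong.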

To develop this first forensic tool, the USC ISI researchers used a two-step process. First, they input hundreds of examples of verified videos of a person, layering each video on top of the others. Then, using a deep learning algorithm known as a convolutional neural network, the researchers identified features and patterns in a person’s face, with specific attention to how the eyes close or how the mouth moves. Once they had a model of an individual’s face and its characteristic movements, they could build a tool that compares a newly input video against the parameters of the previous models to determine whether the content falls outside the norm and is therefore not authentic. One can imagine this working in the same way a biometric reader recognizes a face, retina scan, or fingerprint.
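As a hypothetical illustration of that final comparison step (not the team’s actual code), a new video’s embedding could be checked against embeddings drawn from a person’s verified videos, flagging anything that strays too far from the norm. The `is_authentic` helper and its threshold below are invented for illustration.

```python
import numpy as np

def is_authentic(new_embedding, reference_embeddings, threshold=0.35):
    """Hypothetical verification step: compare a video's face-motion
    embedding against embeddings from verified videos of the same
    person. The threshold value is illustrative, not from the paper."""
    refs = np.asarray(reference_embeddings)
    # Cosine similarity between the new embedding and each reference.
    sims = refs @ new_embedding / (
        np.linalg.norm(refs, axis=1) * np.linalg.norm(new_embedding))
    # Authentic if the video sits close to the person's normal profile.
    return (1.0 - sims.max()) < threshold
```

This mirrors the biometric-reader analogy in the article: a match against a stored template, except the “template” here describes how a face moves rather than what it looks like.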

“If you think deep fakes as they are now is a problem – think again. Deep fakes as they are now are just the tip of the iceberg and manipulated video using artificial intelligence methods will become a major source of misinformation,” Abd-Almageed says. One can imagine a world where everyone guards their video assets as closely as they guard their bank PIN.

A preprint of the paper, “Recurrent Convolutional Strategies for Face Manipulation Detection in Videos,” can be found below. The project is funded by the Defense Advanced Research Projects Agency (DARPA) MediFor program.

Full paper: https://arxiv.org/abs/1905.00582

Protecting World Leaders from Deep Fakes

Computer scientists at USC worked in conjunction with researchers at UC Berkeley and Dartmouth to detect deep fakes.

Half of the researchers generated “deep fake” videos of public figures for the other researchers on the team to detect. What were their sources? Publicly available videos on YouTube and the insights of comedians.

Comedians, the authors say, provided important insights into the signatures of a person’s speech and the mannerisms that accompany it, including facial expressions and gestures. Their impersonations, while exaggerated, highlight that certain characteristics are hallmarks of an individual’s speech, and they alerted the team to track speech-related movements as clues to authenticity. These gestures, the authors say, were more useful for identifying fake content than pixels and colors.

The researchers generated nearly undetectable fakes so that the other half of the team could try to distinguish fake videos from authentic ones. They applied best-in-class technologies to test the limits of detection.

The team of researchers, which included Shruti Agarwal, Hany Farid, Yuming Gu, Mingming He, Koki Nagano, and Hao Li, operated on the premise that within a few years, fakes will no longer be flawed; they will be nearly perfect. The goal is to provide a tool that recognizes motion signatures unique to a person, known as soft biometrics.
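As a rough sketch of the soft-biometric idea, and not the paper’s exact pipeline: one can track facial-motion features over time for a single person, summarize each clip by how those features co-vary, and train a one-class model on authentic clips so that fakes show up as outliers. The feature layout, the stand-in random data, and the choice of scikit-learn’s OneClassSVM below are assumptions for illustration.

```python
import numpy as np
from sklearn.svm import OneClassSVM

def correlation_features(motion_tracks):
    """motion_tracks: (time, n_features) array of facial-motion
    measurements for one clip (e.g., from a facial-behavior tracker).
    Returns the upper-triangle pairwise correlations as a feature vector,
    capturing how this person's facial movements co-occur."""
    corr = np.corrcoef(motion_tracks, rowvar=False)
    iu = np.triu_indices_from(corr, k=1)
    return corr[iu]

# Train a one-class model on authentic clips of a single person, so any
# video whose motion signature deviates from theirs scores as an outlier.
rng = np.random.default_rng(0)
authentic_clips = [rng.standard_normal((300, 8)) for _ in range(50)]  # stand-in data
X = np.stack([correlation_features(clip) for clip in authentic_clips])
model = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(X)

suspect = correlation_features(rng.standard_normal((300, 8)))
print("authentic" if model.predict([suspect])[0] == 1 else "possible fake")
```

The appeal of this framing is that it needs no fake videos at training time: the model only learns what one person’s genuine mannerisms look like, which is why it could remain useful even as fakes become visually flawless.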

This research was funded by Google, Microsoft, and the Defense Advanced Research Projects Agency. The full paper is available below.

http://openaccess.thecvf.com/content_CVPRW_2019/html/Media_Forensics/Agarwal_Protecting_World_Leaders_Against_Deep_Fakes_CVPRW_2019_paper.html
