The now-infamous Nancy Pelosi video, altered in a way that makes Pelosi appear intoxicated, is the latest example of the disruptive potential of photo and video manipulation.
As an edited version of actual video footage, the Pelosi video does not meet the definition of a deepfake: an entirely fabricated video of someone doing or saying something they never did or said. Early deepfake experimentation took place largely in pornography, where the technology evolved from clunky, obvious shams into fakes that looked far more authentic.
Artificial intelligence has now entered the game, and those computers are clever. Just over a week ago, you could see the Mona Lisa speak: researchers from the Samsung AI Center and the Skolkovo Institute of Science and Technology in Moscow released a paper describing an algorithm that creates a deepfake from a single photographic image of a person.
The Pelosi video was obviously manipulated, yet it was still viewed millions of times and promoted by many people and media companies in a way that suggested it was authentic. Given the 2016 election and actions since then, there are obvious concerns about deepfake technology and its use as political propaganda. However, the potential for nefarious use stretches well beyond elections.
Trying to spot fakes that grow more realistic all the time is a little akin to Sisyphus and his boulder. DARPA’s Media Forensics (MediFor) program is working on technology that would constantly monitor online visual media, test it, and identify fakes.
Scale is an obvious complicating factor. A Wired article looks at researchers trying to make image manipulation detectable by building detection capabilities in at the camera level, technology that could eventually be used for video forensics as well. Blockchain, the ledger technology that underpins cryptocurrencies such as Bitcoin, may also be a valuable tool in authenticating video. (See Security Management’s basic description of blockchain.)
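The core of any such authentication scheme is a cryptographic fingerprint of the media file, recorded at publication time; a blockchain would simply store that fingerprint in an immutable ledger. A minimal sketch of the fingerprint-and-verify step, with the function names and workflow as illustrative assumptions rather than any real system's API:

```python
import hashlib

def media_fingerprint(path: str, chunk_size: int = 1 << 16) -> str:
    """Compute a SHA-256 digest of a media file, reading it in chunks
    so that even large video files need little memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

def is_unaltered(path: str, published_digest: str) -> bool:
    """Compare a file's current digest to the one recorded at publication.

    In a blockchain-backed scheme, published_digest would be retrieved
    from a ledger entry written when the video was first released; any
    edit to the file changes the digest and the comparison fails.
    """
    return media_fingerprint(path) == published_digest
```

This only proves a file matches what was originally registered; it cannot, by itself, say whether the registered original was genuine in the first place, which is why camera-level signing is the complementary piece.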