It’s getting harder to spot fake videos, thanks to widely available artificial intelligence. You may have seen a video of a celebrity or politician that shocked or amused you and believed it to be real at first blush. Sometimes created just for fun, sometimes maliciously damaging, deepfakes are getting everyone’s attention. What exactly are deepfakes, and how are they made?
This growing trend uses technology capable of superimposing a voice and mimicking facial movements to appear authentic. Deepfakes range from silly to sinister. At one end of the spectrum, people have edited actor Nicolas Cage into movies he was never in. At the other end, celebrities’ likenesses have been cut into pornographic videos.
Lawmakers are taking notice, citing concerns about privacy and the possibility of videos influencing elections. Artificial intelligence will likely soon make it nearly impossible to tell visually what is fake and what is real in videos. The U.S. government is developing tools to identify deepfakes as a preventive measure.
How it Works
The AI software uses machine learning to study the movements of a person in a real video and then superimposes that person’s face onto another video. As the technology advances, it can render an entirely new video of a person from an image or a short clip. Even a single image can be enough to produce a deepfake, though the more source material the creator has at their disposal, the more realistic the AI can make the result.
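The superimposition step can be illustrated with a toy sketch. In a real deepfake pipeline, a trained neural network generates the replacement face and aligns it to the target’s pose; the hypothetical `paste_face` helper below skips all of that and simply alpha-blends a face patch into a frame, just to show what "superimposing one face onto another video" means at the pixel level.

```python
import numpy as np

def paste_face(target, face, top, left, alpha=0.8):
    """Blend a source face patch into a target frame.

    Real deepfake tools use trained autoencoders to generate and align
    the face patch; here the patch is simply alpha-blended in place to
    illustrate the superimposition step. This is a simplified sketch,
    not an actual deepfake method.
    """
    h, w = face.shape[:2]
    region = target[top:top + h, left:left + w].astype(float)
    blended = alpha * face.astype(float) + (1 - alpha) * region
    out = target.copy()
    out[top:top + h, left:left + w] = blended.astype(target.dtype)
    return out

# Toy data: a 6x6 black "video frame" and a bright 2x2 "face" patch.
frame = np.zeros((6, 6, 3), dtype=np.uint8)
face = np.full((2, 2, 3), 200, dtype=np.uint8)
swapped = paste_face(frame, face, top=2, left=2)
```

A real system would repeat this for every frame, which is why consistent lighting and alignment across frames are the hard part, and why artifacts tend to show up around the blended edges.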
Where it All Began
Deepfakes build on video-synthesis techniques that have been around for years but have become far more advanced recently. The movie industry embraced them first. A high-profile demonstration of the tech was the 2016 film Rogue One: A Star Wars Story, in which filmmakers recreated the likeness of actor Peter Cushing as Grand Moff Tarkin through video synthesis. Many viewers were momentarily confused, since Cushing died in 1994!
In a world of fake news and viral videos, deepfakes are causing real problems. With AI audio synthesis, voices can be replicated accurately and paired with video and imagery. Scammers have used voice replication in elaborate ransom schemes and other cons on the general public.
Scams and Conspiracy Theories
Scammers have recently delivered deepfakes by email or planted them after hacking people’s computers. It would be naive to underplay the repercussions of these videos in an age of conspiracy theories such as #PizzaGate, which ended with a man firing rounds inside a restaurant during his rogue "investigation."
Machine learning gathers data from publicly available videos, which explains why politicians and celebrities are commonly used in these early years of deepfakes. There are so many videos available of these people that machine learning has an in-depth database of gestures and movements to utilize and replicate.
The Fight Against Deepfakes
UC Berkeley and DARPA are cataloging thousands of videos of politicians to create digital “fingerprints” of their body and facial movements. The goal is to allow the public to vet and debunk deepfake videos.
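The fingerprinting idea can be sketched in a few lines. The actual research tracks facial action units and head movements; the hypothetical helpers below stand in for that by reducing a clip’s tracked facial landmarks to simple motion statistics and comparing them against a reference built from known-authentic footage. The function names, the distance threshold, and the random toy data are all illustrative assumptions, not the researchers’ method.

```python
import numpy as np

def movement_fingerprint(landmarks):
    """Summarize a speaker's mannerisms as a vector of statistics.

    `landmarks` is a (frames, points, 2) array of tracked facial points,
    a simplified stand-in for the action-unit features used in the real
    research. The fingerprint is the mean and standard deviation of
    frame-to-frame motion for each tracked point.
    """
    deltas = np.diff(landmarks, axis=0)       # per-frame motion vectors
    speed = np.linalg.norm(deltas, axis=2)    # shape: (frames-1, points)
    return np.concatenate([speed.mean(axis=0), speed.std(axis=0)])

def matches_reference(clip, reference, tol=0.5):
    """Flag a clip whose motion signature strays far from the reference."""
    dist = np.linalg.norm(
        movement_fingerprint(clip) - movement_fingerprint(reference)
    )
    return dist <= tol

# Toy clips: "real" footage with small, smooth motion versus a "fake"
# with much jerkier landmark motion.
rng = np.random.default_rng(0)
real = rng.normal(0, 0.1, size=(100, 5, 2)).cumsum(axis=0)
fake = rng.normal(0, 0.5, size=(100, 5, 2)).cumsum(axis=0)
```

The design point is that a forger can copy a face but has a much harder time copying years of a person’s recorded mannerisms, which is what makes a large catalog of authentic footage valuable for detection.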
Deepfake software such as DeepFaceLab is relatively easy to use and available to anybody. Much of its use may seem juvenile, but imagine a hoax emergency alert, or a fake sex video affecting your marriage. A sophisticated video of a politician released just before voting day could sway voters as it sweeps the web.
How to Spot a Deepfake
Following the final season of the HBO drama Game of Thrones, deepfakes circulated using the character Jon Snow, in which he apologizes for the show’s perceived missteps. Viewers easily identified the videos as fake because his jaw and mouth looked disjointed (not to mention the shirtless fella in the background). Others have been more convincing and damaging, such as the Nancy Pelosi video in which the Democratic politician appears to slur her words. (That example is technically a “shallowfake,” since it was made with only basic video editing rather than AI.)
The AI Foundation has made a browser plugin called Reality Defender to help users identify fake content online. Also, SurfSafe and websites such as Snopes can help the public know fact from fiction on the web.
Government agencies and private websites are working to remove damaging videos used for harassment, bullying, or even foreign political interference. However, the lines blur when the First Amendment’s freedom-of-speech protections come into play.
Before you jump to conclusions when you see your favorite celebrity or a politician saying or doing something unethical or unlikely, consider the possibility that it’s a deepfake. The wild wild web has always been a place of elaborate scams, threats, and nefarious technologies. The more informed we all are, the safer we are from being manipulated by internet trolls near and far.