Deepfakes Explained: What, Why and How to Spot Them

As the 2020 elections loom, AI-generated deepfakes are hitting the news cycle. But what are deepfakes? What are the implications of this technology? And what can you do to spot them?

Generally speaking, a deepfake is a fake photo, video or story generated by neural networks. Deepfake creators use artificial intelligence and machine learning algorithms to imitate the work, voice and likeness of real people.

Deepfakes differ from traditional fake media by being extremely hard to identify. Deepfake videos, speeches, and audio clips have the potential to cause enormous damage. Lawmakers and tech companies are on the case, but deepfake-fighting technology has a long way to go.

Deepfakes are already causing problems

In May 2018, the Belgian social-democratic political party Socialistische Partij Anders (sp.a) posted a video that appeared to show President Trump offering advice to the Belgian people:

“Dear people of Belgium, as you know, I had the balls to withdraw from the Paris Climate Agreement, and so should you.”

[Image: still frame from the sp.a deepfake video of Donald Trump. Source: Facebook]
Quite naturally, the video went viral and sparked outrage. But, of course, the video wasn’t real. It was a deepfake.

First, the creators altered audio clips to match Trump’s voice. Then they manipulated previous video footage to match Trump’s mouth movements with the new audio. The result is convincing (if somewhat low-quality).

Sp.a later said that it meant the video as an attention-grabbing stunt. In fact, the creators thought that the poor quality of their deepfake would be enough to alert viewers to its inauthenticity. But despite its subpar quality, the video fooled a lot of people.

The fact is, people aren’t used to this kind of trickery.

Deepfakes are getting better as AI gets better

In April 2018, BuzzFeed showcased how far deepfake video technology had come by combining Jordan Peele’s voice with video of Barack Obama.

In the video, Jordan Peele (as Obama) warns, “We’re entering an era in which our enemies can make it look like anyone is saying anything at any point in time. Even if they would never say those things.”

Video source: BuzzFeedVideo

Clearly, deepfake technology is getting more sophisticated (and more dangerous). This is partly due to the nature of artificial intelligence.

Where “traditional” technology requires human time and energy to improve, AI can learn from itself. But AI’s ability to develop itself is a double-edged sword. If an AI is created to do something benevolent, great! But when an AI is designed for something malicious (like deepfakes), the danger is unprecedented.

Even a benevolent algorithm, given enough time and additional content, can learn enough to be dangerous. Stanford University professor and national security expert Andy Grotto remarks that deepfake AI content “could be video, it could even be audio, and you feed it enough of that content and over time the algorithm learns how to mimic that content.”

How to spot deepfakes

As deepfake production methods get better, spotting forged videos will become more and more challenging. But here are a few things to look for:

  • Lower-quality sections in the same video
  • Box-like shapes and cropped effects around the mouth, eyes and neck
  • Irregular blinking (a cue identified by University at Albany researchers; see the sketch after this list)
  • Inconsistent skin tone
  • Movements that aren’t natural
  • Changes in the background and/or lighting
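
The blinking cue is concrete enough to automate. Below is a minimal sketch of the idea, not a production detector: track the “eye aspect ratio” (EAR) of a detected face frame by frame and count blinks, since subjects in early deepfakes often blinked far less than real people. It assumes the opencv-python, dlib and scipy packages plus dlib’s freely downloadable 68-point facial landmark model; the input filename is illustrative, and the 0.2 threshold is a common rule of thumb rather than a calibrated value.

    # Count blinks in a video via the eye aspect ratio (EAR).
    # Assumes: pip-installed opencv-python, dlib, scipy, and the
    # shape_predictor_68_face_landmarks.dat model file from dlib.
    import cv2
    import dlib
    from scipy.spatial import distance as dist

    def eye_aspect_ratio(eye):
        # EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|); drops sharply when the eye closes.
        a = dist.euclidean(eye[1], eye[5])
        b = dist.euclidean(eye[2], eye[4])
        c = dist.euclidean(eye[0], eye[3])
        return (a + b) / (2.0 * c)

    detector = dlib.get_frontal_face_detector()
    predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

    EAR_THRESHOLD = 0.2      # below this, treat the eye as closed (rule of thumb)
    blinks, closed_frames = 0, 0

    cap = cv2.VideoCapture("suspect_video.mp4")   # hypothetical input file
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for face in detector(gray):
            shape = predictor(gray, face)
            pts = [(shape.part(i).x, shape.part(i).y) for i in range(68)]
            # Landmarks 36-41 and 42-47 outline the two eyes.
            ear = (eye_aspect_ratio(pts[36:42]) + eye_aspect_ratio(pts[42:48])) / 2.0
            if ear < EAR_THRESHOLD:
                closed_frames += 1
            else:
                if closed_frames >= 2:   # a real blink spans a few frames
                    blinks += 1
                closed_frames = 0
    cap.release()
    print(f"Blinks detected: {blinks}")  # an unusually low count is a red flag

A healthy adult blinks roughly every two to ten seconds, so a talking head that goes a full minute without blinking deserves a second look.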

And most importantly: use common sense.

Is the person in the video saying something you’d never expect them to say? Does this quote or article advance someone else’s agenda?

Ask yourself: who benefits?

For more tips and tricks on spotting deepfake videos and other fake content, BuzzFeed’s got you covered: How To Spot A Deepfake Like The Barack Obama–Jordan Peele Video

Deepfakes aren’t limited to audio and video

Unfortunately, deepfake videos are just one piece of this nightmare puzzle. Deepfake AI can now write content that mimics the voice and style of specific humans. And this technology comes from none other than OpenAI, the research company co-founded by Elon Musk and Sam Altman.

Where a company like Lexalytics, an InMoment company, uses AI to analyze natural language, this system from OpenAI (a language model called GPT-2) uses artificial intelligence to create natural language. As OpenAI writes on their blog:

“We’ve trained a large-scale unsupervised language model which generates coherent paragraphs of text, achieves state-of-the-art performance on many language modeling benchmarks, and performs rudimentary reading comprehension, machine translation, question answering, and summarization — all without task-specific training.”
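
To make that concrete, here is a minimal sketch of what “generating coherent paragraphs” looks like in practice. Everything in it is an assumption for illustration: it uses the small, publicly released GPT-2 weights through the Hugging Face transformers library (not whatever tooling OpenAI or Axios used), and the prompt is invented for the example.

    # Seed a pretrained language model with two sentences and let it
    # continue the "story." Assumes: pip-installed transformers (plus
    # PyTorch) and the small, publicly released GPT-2 weights.
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")

    # Illustrative prompt, not the text Axios actually used.
    prompt = ("The Pentagon released a new report on artificial intelligence "
              "today. Defense officials testified before the Senate Armed "
              "Services Committee.")
    inputs = tokenizer(prompt, return_tensors="pt")

    # Sampling (rather than greedy decoding) yields the fluent, varied
    # prose that makes this kind of output so believable.
    outputs = model.generate(
        **inputs,
        max_length=120,
        do_sample=True,
        top_k=50,
        pad_token_id=tokenizer.eos_token_id,
    )
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Run it a few times and the continuation changes every time, yet each version reads like a plausible news paragraph. That is exactly the kind of experiment Axios ran, as described below.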

Naturally, Musk and Co. don’t call their AI writer a “deepfake creator” or anything like that. In fact, OpenAI initially declined to release the full version of its text-generating AI to the public because of how it could be abused. And Musk himself has long warned of the dangers of AI.

Axios uses OpenAI’s text-generating AI to create fake news

To demonstrate the capabilities of this text-generating AI, news site Axios fed OpenAI’s creation two factual, human-written sentences. The AI used those sentences to create a compelling, yet entirely false, news article about current world affairs.

Here are a few highlights:

“[The Pentagon] warns of a new arms race in AI and says the United states will not sit idly by.”

“The President has directed me to undertake a study of strategy toward a world of artificial intelligence.” (attributed to Defense Secretary James Mattis, addressing the Senate Armed Services Committee)

“China uses new and innovative methods to enable its advanced military technology to proliferate around the world, particularly to countries with which we have strategic partnerships.” (attributed to the Pentagon)

Remember: These statements are fake. They were created by the aforementioned text-generating AI. But somehow they’re still believable.

Governments recognize the gravity of deepfake AI

Thankfully, some lawmakers are taking deepfakes very seriously. In a September 2018 letter to the Director of National Intelligence, U.S. Representatives Adam Schiff, Stephanie Murphy and Carlos Curbelo requested that the intelligence community assess and report on the implications of deepfake technology.

In their letter, the three congresspeople warned:

“Hyper-realistic digital forgeries — popularly referred to as ‘deep fakes’ — use sophisticated machine learning techniques to produce convincing depictions of individuals doing or saying things they never did, without their consent or knowledge. By blurring the line between fact and fiction, deep fake technology could undermine public trust in recorded images and videos as objective depictions of reality.”

Deepfakes have enormous potential as political tools. In fact, compared to the threat of deepfake AI, the misinformation campaigns surrounding the 2016 elections seem downright primitive.

As Representative Adam Schiff, chair of the House Intelligence Committee, said in a February 2019 statement to CNN,

“During the 2016 election, my gravest fear was that the Russians would dump forged documents among the real, or worse still, add fake paragraphs into real emails. This is still a major concern for the 2020 election, as is the possibility of using deep fakes, and either would represent yet another dangerous escalation of cyber interference in our democracy.”

Technology and regulatory counters to deepfakes

Researchers hope to combine natural language processing, advanced image processing and audio analysis to recognize incongruences and flag suspected deepfakes for manual review. These technologies face massive hurdles before they’ll be considered ready for production. Still, private companies (and the DoD) are throwing a lot of money around.
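
What might that flag-for-review step look like? Here is a toy sketch in which every detector is a hypothetical stand-in, not a real forensics library: each modality reports a suspicion score, and a weighted combination decides whether a human analyst gets involved. The names, weights and threshold are all invented for illustration.

    # Toy multimodal fusion: combine per-modality suspicion scores and
    # flag the clip for manual review if the weighted average is high.
    from dataclasses import dataclass

    @dataclass
    class ModalityScore:
        name: str
        score: float    # 0.0 = looks authentic, 1.0 = almost certainly manipulated
        weight: float   # how much we trust this detector

    def flag_for_review(scores: list[ModalityScore], threshold: float = 0.6) -> bool:
        total = sum(s.weight for s in scores)
        combined = sum(s.score * s.weight for s in scores) / total
        return combined >= threshold

    # Pretend these scores came from real image, video and audio models.
    scores = [
        ModalityScore("face_artifacts", 0.72, weight=0.5),  # boxy edges near the mouth
        ModalityScore("blink_rate", 0.80, weight=0.3),      # far too few blinks
        ModalityScore("audio_sync", 0.35, weight=0.2),      # lips roughly match audio
    ]
    if flag_for_review(scores):
        print("Suspected deepfake: route to manual review")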

In September 2018, the DoD’s Defense Advanced Research Projects Agency (DARPA) announced a $2 billion campaign, called “AI Next,” to develop the next wave of AI technologies. Agency director Dr. Steven Walker said that they “want to explore how machines can acquire human-like communication and reasoning capabilities, with the ability to recognize new situations and environments and adapt to them.”

One of these initiatives is the Media Forensics (MediFor) program. DARPA says that, if successful, the MediFor platform will automatically detect manipulations and provide detailed information about how those manipulations were performed.

DARPA also awarded contracts to SRI International, a nonprofit research institute, to develop next-generation AI detection systems.

And in 2016, DARPA announced $400,000 in funding to the University at Albany’s Computer Vision and Machine Learning Lab to support research into “technologies to identify and recover forged digital images and videos”. So far, the researchers have determined that analyzing blink patterns (the cue behind the sketch earlier in this article) could be one way to identify deepfake videos.

Wrapping up and further reading

Can technology “solve” fake news? No, not yet, anyway. Some private companies (and the U.S. Department of Defense) are trying. But in the meantime? As we’ve written before:

“AI won’t solve fake news, at least not yet. In fact, artificial intelligence will make things even worse.”

CTV News – Deepfakes explained: How technology is masking reality

BuzzFeed – How To Spot A Deepfake Like The Barack Obama–Jordan Peele Video