Deepfake Definition

A deepfake is a convincing hoax image, sound, or video created with artificial intelligence (AI). The term "deepfake" combines "deep learning," the AI technique involved, with "fake."

Deepfake software stitches together hoaxed images and sounds using machine learning algorithms, fabricating people who do not exist or events that never actually happened.

Deepfake technology is most notably used for nefarious purposes, such as misleading the public by spreading false information or propaganda. For example, a deepfake video could show a world leader or celebrity saying something they never said, a form of "fake news" that can shift public opinion.

What Are Deepfakes Used For?

Deepfake technology can be used for a wide variety of malicious purposes, including:

Scams and Hoaxes

Cyber criminals can use deepfake technology to create scams, false claims, and hoaxes that undermine and destabilize organizations. 

For example, an attacker could create a false video of a senior executive admitting to criminal activity, such as financial crimes, or making false claims about the organization’s activity. Aside from costing time and money to disprove, this could have a major impact on the business’s brand, public reputation, and share price.

Celebrity Pornography

A major threat that deepfake poses is nonconsensual pornography, which accounts for up to 96% of deepfakes on the internet. Most of this targets celebrities. Deepfake technology is also used to create hoax instances of revenge porn.

Election Manipulation

Deepfake videos of world leaders like Donald Trump and Barack Obama have circulated widely, raising concerns that the technology could be used for election manipulation. For example, there were widespread concerns that deepfake videos would affect the 2020 U.S. election campaign.

Social Engineering

Deepfake technology has been used in social engineering scams, with audio deepfakes fooling people into believing trusted individuals have said something they did not. For example, the CEO of a U.K. energy firm was tricked into believing he was speaking with the chief executive of the firm's German parent company. The deepfake voice impersonating the chief executive convinced him to transfer €220,000 to a supposed Hungarian supplier's bank account.

Automated Disinformation Attacks

Deepfakes can also power automated disinformation attacks, spreading conspiracy theories and false claims about political and social issues. A well-known example is a fake video of Facebook founder Mark Zuckerberg claiming to have "total control of billions of people's data," thanks to Spectre, the fictional organization in the James Bond novels and movies.

Identity Theft and Financial Fraud

Deepfake technology can be used to create new identities and steal the identities of real people. Attackers use the technology to create false documents or fake their victim’s voice, which enables them to create accounts or purchase products by pretending to be that person. 

How Was Deepfake Technology Created?

The term "deepfake" first came into the public domain in 2017, when a Reddit user with the username “deepfakes” shared doctored pornographic videos on the site. He did so by using Google’s open-source, deep-learning technology to swap celebrities’ faces onto the bodies of pornographic performers. Modern deepfakes are descended from the original codes that were used to create these videos.

How Are Deepfakes Made?

There are several methods for creating deepfakes. One of the most popular uses a generative adversarial network (GAN), in which two neural networks compete: a generator produces fake images, while a discriminator tries to distinguish them from real ones. As each network improves against the other, the generated fakes become increasingly convincing.
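To make the adversarial idea concrete, here is a minimal sketch of a GAN training loop using only NumPy, shrunk to a toy problem: the "generator" is a single linear function learning to imitate samples from a one-dimensional distribution, and the "discriminator" is a logistic classifier. The parameters, learning rate, and toy data are all illustrative; real deepfake GANs use deep convolutional networks over images, but the alternating push-pull of the two updates is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data the generator must learn to imitate: samples near 4.0.
def real_batch(n):
    return rng.normal(4.0, 1.0, n)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-np.clip(x, -30, 30)))

# Generator g(z) = a*z + b and discriminator d(x) = sigmoid(w*x + c):
# each is just a pair of scalars, the smallest possible adversarial pair.
a, b = 1.0, 0.0          # generator parameters
w, c = 0.1, 0.0          # discriminator parameters
lr = 0.01

for step in range(2000):
    z = rng.normal(0.0, 1.0, 32)
    x = real_batch(32)
    fake = a * z + b

    # Discriminator update: push d(real) toward 1 and d(fake) toward 0.
    p_real = sigmoid(w * x + c)
    p_fake = sigmoid(w * fake + c)
    w += lr * (np.mean((1 - p_real) * x) - np.mean(p_fake * fake))
    c += lr * (np.mean(1 - p_real) - np.mean(p_fake))

    # Generator update: push d(fake) toward 1, i.e. fool the discriminator.
    p_fake = sigmoid(w * fake + c)
    upstream = (1 - p_fake) * w          # gradient of log d(fake) w.r.t. fake
    a += lr * np.mean(upstream * z)
    b += lr * np.mean(upstream)

samples = a * rng.normal(0.0, 1.0, 1000) + b
print(f"generated mean={samples.mean():.2f}, std={samples.std():.2f}")
```

After training, the generated samples drift toward the real distribution even though the generator never sees real data directly; it only sees the discriminator's reaction to its fakes.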

Another method relies on autoencoders, the AI models behind face-replacement and face-swapping technology. An autoencoder pairs an encoder, which compresses a face image into a compact representation, with a decoder, which reconstructs the face from that representation. Autoencoders go beyond the simple compression and decompression of classic encoders, enabling cyber criminals to create completely new images. Deepfake applications use two autoencoders, which enable one person's face and movement to be transferred onto a completely different body.
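The architecture behind the swap can be sketched in a few lines. In the common face-swap setup, one encoder is shared across both identities (so it captures pose and expression), while each identity gets its own decoder (which restores that person's appearance). The sketch below uses untrained random weights and flattened vectors in place of real images, so it illustrates only the structure, not a working face swap; all names and dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
D_IMG, D_CODE = 64 * 64, 128   # flattened face size, latent code size

# Shared encoder plus one decoder per identity (random, untrained weights).
W_enc = rng.normal(0, 0.01, (D_CODE, D_IMG))
W_dec_a = rng.normal(0, 0.01, (D_IMG, D_CODE))   # reconstructs person A
W_dec_b = rng.normal(0, 0.01, (D_IMG, D_CODE))   # reconstructs person B

def encode(face):
    # Face -> shared latent code capturing pose and expression.
    return np.tanh(W_enc @ face)

def decode(code, W_dec):
    # Latent code -> reconstructed face for one specific identity.
    return np.tanh(W_dec @ code)

face_a = rng.random(D_IMG)     # stand-in for a photo of person A

# The swap: encode A's pose and expression with the shared encoder,
# then decode with B's decoder, yielding B's face in A's pose.
swapped = decode(encode(face_a), W_dec_b)
```

In a trained system, each decoder is optimized to reconstruct only its own person's face, which is why routing person A's code through person B's decoder produces B's appearance performing A's expression.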

How To Spot Deepfakes

Deepfakes can be spotted by recognizing unusual activity or unnatural movement, including:

Unnatural Eye Movement

A lack of eye movement is a telltale sign of a deepfake. Natural eye movement is challenging to replicate because people's eyes usually follow and react to the person they are speaking with.

A Lack of Blinking

A lack of blinking is another common flaw in deepfake videos. Replicating the natural, human action of regular blinking is difficult with deepfake technology.
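One way researchers have turned the blink cue into a measurable signal is the eye aspect ratio (EAR): the ratio of an eye's vertical opening to its width, computed from facial landmarks, which drops sharply when the eye closes. The sketch below assumes six eye landmarks in the ordering used by common 68-point facial landmark models; the sample coordinates and the 0.2 threshold are illustrative, not calibrated values.

```python
import numpy as np

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmarks ordered corner, two top points,
    corner, two bottom points. Returns vertical/horizontal ratio."""
    eye = np.asarray(eye, dtype=float)
    vertical = (np.linalg.norm(eye[1] - eye[5]) +
                np.linalg.norm(eye[2] - eye[4]))
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return vertical / (2.0 * horizontal)

def blink_count(ear_series, threshold=0.2):
    """Count closures in a per-frame EAR series: a suspiciously low
    blink count over a long clip can flag a possible deepfake."""
    blinks, was_open = 0, True
    for ear in ear_series:
        if was_open and ear < threshold:
            blinks += 1
            was_open = False
        elif ear >= threshold:
            was_open = True
    return blinks

# Illustrative landmark sets for a wide-open and a nearly closed eye.
open_eye = [(0, 0), (2, 3), (4, 3), (6, 0), (4, -3), (2, -3)]
closed_eye = [(0, 0), (2, 0.3), (4, 0.3), (6, 0), (4, -0.3), (2, -0.3)]

print(eye_aspect_ratio(open_eye), eye_aspect_ratio(closed_eye))
print(blink_count([1.0, 1.0, 0.1, 0.1, 1.0, 0.1, 1.0]))
```

In practice, a landmark detector would supply the per-frame eye coordinates, and a clip whose blink rate falls far below the human norm of roughly 15 to 20 blinks per minute would be flagged for closer inspection.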

Unnatural Facial Expressions and Facial Morphing

Deepfake technology involves morphing facial images, with faces simply being stitched from one image over another. This typically results in unusual or unnatural facial expressions.

Unnatural Body Shape

If a person's body does not appear to have a natural shape, the video is likely fake. Deepfake technology largely focuses on faces rather than the entire body, which can leave the rest of the body looking unnatural.

Unnatural Hair

Deepfake generators struggle to render realistic individual characteristics, such as frizzy or messed-up hair.

Abnormal Skin Colors

Deepfakes often fail to replicate the natural color tones of authentic images and videos, resulting in abnormal skin colors.

Awkward Head and Body Positioning

Deepfake images will often feature inconsistent or awkward-looking head and body positioning. Examples of this include jerky movements and distorted images when people move or turn their heads.

Odd Lighting or Discoloration

Similar to the reasons for unnatural skin tones, deepfake images are also prone to discoloration, misplaced shadows, and unusual lighting.

Bad Lip-syncing

Deepfake videos will likely feature lip-syncing that does not align with the words being spoken by the people in the video.

Deepfake vs. Shallowfake

Shallowfakes are videos presented out of context or edited using simpler tools. A good example of a shallowfake is a speech by Nancy Pelosi, the U.S. Speaker of the House, edited to make her voice sound slurred, implying she was drunk.

How To Combat Deepfakes

Steps have already been taken to combat deepfakes and prevent the images and videos from being shared online.

Social Media Rules

Facebook has hired researchers from universities to help it build a deepfake detector, which enforces its ban on deepfakes. Twitter has policies in place to prevent fake content and is working to tag deepfake images that are not immediately removed. YouTube also vowed to block any deepfake content related to the 2020 U.S. election and census.

Research Lab Technologies

Researchers have been working on data science solutions that detect deepfakes. Many of these have quickly become ineffective as the attackers’ technology evolves and creates more convincing results.

Filtering Programs

Filtering programs also help prevent deepfakes from spreading. AI firm DeepTrace's program acts in the same way as an antivirus or spam filter and diverts fake content into a quarantine zone, while Reality Defender, from AI Foundation, aims to tag manipulated content before it can do any damage.

Corporate Best Practices

One of the best ways to prevent deepfakes is for employees to understand the signs of fake images and videos. Corporate best practices include advising users on the telltale signs of cyberattacks and fraudulent online activity.

U.S. Legislation

Laws have already been passed in several U.S. states to criminalize deepfake pornography and prevent the technology's use around elections. Deepfake legislation was also incorporated into the National Defense Authorization Act (NDAA) in December 2019.

How Fortinet Can Help

Fortinet security solutions use AI and ML to prevent advanced threats from reaching organizations’ networks and resources. For example, the FortiWeb web application firewall uses advanced features that defend every application from known and zero-day threats and blocks malicious activity. 

Further, the FortiGate next-generation firewalls (NGFWs) filter network traffic and offer deeper content inspection to prevent and block internal and external threats. They also evolve in line with the cybersecurity threat landscape, which ensures organizations are always protected against the latest security risks.