A deconstruction of the major concerns surrounding deepfake tech.
Deepfake technology is an evolving form of artificial intelligence (AI) that can doctor images and videos so effectively that it becomes nearly impossible to tell they have been edited. A deepfake creator can manipulate media to replace a real person’s appearance, voice or both with convincing artificial replicas.
Think of it this way – today, you could be edited into your favorite film so seamlessly that it would look like you fit right in. However, deepfake technology can do far more than this. It can create a fake video of former US President Barack Obama saying “President Donald Trump is a complete dips***.”
Such fake images and videos can be used for nefarious purposes like scamming and hoaxing. This makes it important for you and your organization to understand the threat of deepfakes so that you can effectively combat it.
How are deepfakes made?
Deepfakes were born in 2017 when a Redditor posted doctored images onto the platform. These images showed the faces of Gal Gadot, Taylor Swift and Scarlett Johansson edited over the bodies of adult film actresses. Deepfakes can be sub-divided into two broad categories – video deepfakes and audio deepfakes. Let’s take a closer look at both of them separately.
To make a video deepfake, you must feed thousands of images of two people (let’s call them A and B) from various angles into a single AI encoder. The encoder studies the similarities and differences between A’s and B’s faces, and compresses the images into a shared representation as it does so. After the compression is complete, the images are passed to AI decoders.
The process requires two decoders (decoder A and decoder B), one for each person involved. To swap A’s and B’s faces, you simply feed B’s compressed images into decoder A. The decoder then reconstructs A’s face with the expressions and orientation of B. This must be repeated for every frame of the video to make the deepfake look as authentic as possible.
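The shared-encoder, two-decoder setup described above can be sketched as a toy, untrained model. Everything here is an illustrative assumption – the layer sizes, the random weights and the linear layers stand in for the deep networks a real deepfake tool would train:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: a flattened 64x64 grayscale face -> 128-dim latent.
FACE_DIM, LATENT_DIM = 64 * 64, 128

# One shared encoder learns a common compressed representation of both faces...
W_enc = rng.standard_normal((LATENT_DIM, FACE_DIM)) * 0.01

# ...while each person gets their own decoder, which only learns how to
# reconstruct *that* person's face.
W_dec_a = rng.standard_normal((FACE_DIM, LATENT_DIM)) * 0.01
W_dec_b = rng.standard_normal((FACE_DIM, LATENT_DIM)) * 0.01

def encode(face: np.ndarray) -> np.ndarray:
    """Compress a face image into the shared latent space."""
    return W_enc @ face

def decode(latent: np.ndarray, W_dec: np.ndarray) -> np.ndarray:
    """Reconstruct a face from a latent code with one person's decoder."""
    return W_dec @ latent

# The swap: encode a frame of person B, then decode it with A's decoder.
# After training, this would yield A's face wearing B's expression and pose.
frame_of_b = rng.standard_normal(FACE_DIM)
swapped = decode(encode(frame_of_b), W_dec_a)
```

The key design point is that the encoder is shared: because it must describe both faces in the same latent space, either decoder can interpret a latent code produced from the other person’s frames.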
To create an audio deepfake, you need clear recordings of the voice that you want to emulate. These recordings should not have any interruptions or ambient noise. The process generally requires 2.5-3 hours of voice recordings.
The process takes a lot of time, and even then the resulting audio tends to sound robotic. To make it sound authentic, the audio deepfake is processed through a neural vocoder – a computer program that uses deep learning networks to fill in any frequency gaps present in the audio.
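Real neural vocoders rely on deep learning networks, but the core idea of “filling frequency gaps” can be illustrated on a toy magnitude spectrogram with simple interpolation. The array values below are invented purely for illustration; this is not how an actual vocoder works internally:

```python
import numpy as np

# Toy magnitude "spectrogram": 6 frequency bins x 4 time frames, with one
# band (bins 2-3) zeroed out to mimic missing frequency content.
spec = np.array([
    [1.0, 1.0, 1.0, 1.0],
    [0.8, 0.8, 0.8, 0.8],
    [0.0, 0.0, 0.0, 0.0],   # gap
    [0.0, 0.0, 0.0, 0.0],   # gap
    [0.4, 0.4, 0.4, 0.4],
    [0.2, 0.2, 0.2, 0.2],
])

def fill_gaps(spec: np.ndarray) -> np.ndarray:
    """Fill zeroed frequency bins by interpolating from neighbouring bins."""
    filled = spec.copy()
    freqs = np.arange(spec.shape[0])
    for t in range(spec.shape[1]):
        col = spec[:, t]
        known = col > 0                      # bins that still carry energy
        filled[:, t] = np.interp(freqs, freqs[known], col[known])
    return filled

filled = fill_gaps(spec)
```

A neural vocoder learns this kind of reconstruction from data rather than interpolating linearly, which is what lets it produce natural-sounding speech instead of a smoothed spectrum.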
Risks posed by deepfake technology
The types of risks posed by deepfake technology fall under two broad categories: social engineering and public opinion.
The first category, social engineering, involves manipulating people into sharing money or private information. Deepfake technology can replicate the voice of high-ranking company personnel to ask an employee to send money or important documents.
In March 2019, cyber-criminals scammed the CEO of a UK-based energy firm out of €220,000 (US$243,000). The criminals used AI voice-cloning software to imitate, over the phone, the voice of the chief executive of the firm’s Germany-based parent company. The caller made the request seem urgent, pushing the CEO to act without giving it a second thought.
These voice clones don’t always sound identical to the original. To cover the clone’s shortcomings, criminals pair it with behavioral manipulation tactics: the caller may sound like an angry boss or an anxious family member to stop you from thinking rationally.
Threats to public opinion of a company may also arise from deepfake technology, such as fake videos of influential people made to spread fake news or disinformation. A fake video of a company’s CEO could adversely influence consumer behavior and potentially affect the stock price. Again, think of the Obama example above!
In the long run, this could damage the company’s reputation. A study conducted by the credit rating agency Moody’s in mid-2019 found that deepfakes could even threaten a company’s credit quality.
How to defend yourself
As a business, you need to know precisely how to defend yourself and your employees against the growing threat of deepfakes. You can divide your defense strategy into three main categories:
Prevention and Awareness
Your employees can be your first line of defense against deepfakes. The company must organize training sessions that cover all aspects of deepfake technology. The sessions must cover what deepfakes are and how they can be used to extort money and information.
The company must also keep its employees up to date on its official communication channels, and keep those channels as limited as possible. Clearly defined channels ensure that employees are alert when information arrives from outside those sources.
Detection
A 2018 study by US-based researchers found that deepfake faces don’t blink normally. Closely watching how the people in a video blink can therefore be a telltale test of its authenticity. Other key tells of a deepfake are odd lighting, unnatural skin tones and blurry or misaligned visuals.
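The blink check described above could be sketched as a simple heuristic. The eye-aspect-ratio (EAR) values and the threshold below are invented for illustration – a real pipeline would compute EAR from detected facial landmarks:

```python
# Hypothetical eye-aspect-ratio values per video frame: low values mean the
# eye is closed. This series is hand-made for illustration only.
ear_series = [0.30, 0.31, 0.12, 0.11, 0.30, 0.32, 0.31, 0.30, 0.13, 0.29]

BLINK_THRESHOLD = 0.2  # assumed cutoff between an open and a closed eye

def count_blinks(ear: list[float], threshold: float = BLINK_THRESHOLD) -> int:
    """Count transitions from open (above threshold) to closed (below it)."""
    blinks = 0
    was_open = True
    for value in ear:
        if was_open and value < threshold:
            blinks += 1
        was_open = value >= threshold
    return blinks

blinks = count_blinks(ear_series)  # two open-to-closed transitions above
```

A detector built on this idea would convert the count into a blink rate and flag videos whose rate falls far outside the normal human range of roughly 15–20 blinks per minute.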
Response
You must ensure that your organization is ready to deal with a deepfake threat. Every department must know exactly how to act when one is encountered. Having a defined deepfake protocol can reduce response time and prevent damage to the company’s image.
There is no definitive technological way to combat the threat of deepfakes at present, but attempts to address the problem are underway. Researchers from the University of Southern California and the University of California, Berkeley have been using machine learning as a detection mechanism. Their system studies soft biometrics, such as a person’s facial quirks, to identify a video’s authenticity, and it succeeds 92–96% of the time.
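The researchers’ actual models are far more sophisticated, but the underlying idea – comparing a clip’s soft-biometric “mannerism profile” against known genuine footage of the same person – can be sketched with a nearest-centroid classifier. All feature vectors below are invented for illustration:

```python
import numpy as np

# Hypothetical per-video feature vectors describing a speaker's mannerisms
# (e.g. correlations between head tilts and eyebrow movements).
real_videos = np.array([[0.90, 0.80, 0.70],
                        [0.85, 0.82, 0.75],
                        [0.88, 0.79, 0.72]])
fake_videos = np.array([[0.30, 0.20, 0.40],
                        [0.35, 0.25, 0.30],
                        [0.28, 0.22, 0.38]])

# "Training": learn the average mannerism profile of each class.
real_centroid = real_videos.mean(axis=0)
fake_centroid = fake_videos.mean(axis=0)

def classify(features: np.ndarray) -> str:
    """Label a clip by whichever centroid its mannerism profile is closer to."""
    d_real = np.linalg.norm(features - real_centroid)
    d_fake = np.linalg.norm(features - fake_centroid)
    return "real" if d_real < d_fake else "fake"

label = classify(np.array([0.87, 0.80, 0.71]))
```

The design intuition is that a deepfake reproduces a face frame by frame but struggles to reproduce the subject’s habitual movement patterns, so those patterns separate genuine and forged clips in feature space.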
Being prepared for the threat is half the battle won. Keeping yourself and your employees up to date with the advancements in deepfake technology can help you deal with deepfake scams.
Header image courtesy of Unsplash