AI and Identity: What Is Being Done to Prevent Deepfakes?

In the digital age, deepfakes are a stark reminder that not everything on screen reflects the truth.

Every technology has its share of pros and cons, and deepfake technology is no exception. Deepfakes, known for their capacity to manipulate and fabricate audiovisual content with uncanny realism, have evolved from harmless entertainment tools into potent weapons of misinformation, propaganda and identity theft. This technological advancement has left no one immune, affecting public figures and private individuals alike.

Take, for instance, the incident involving Hollywood star Scarlett Johansson, who faced the darker side of deepfakes when she took legal action against an AI app for using an AI-generated version of her voice in an online advertisement without her permission. Although the ad included a disclaimer that the AI-generated content had nothing to do with the actual person, the potential for confusion and reputational damage was palpable. The incident brings to light the legal complexities and ethical dilemmas surrounding deepfake technology, underscoring the need for stringent measures to protect individuals’ likenesses and identities.

The evolution of deepfakes

Deepfakes use artificial intelligence (AI) technology to fabricate content that’s startlingly lifelike. This digital wizardry alters photos, videos or audio recordings to produce convincing depictions of individuals doing or saying things they never actually did. 

The inception of deepfakes dates back to 2017, when a Reddit user swapped the faces of celebrities like Gal Gadot, Taylor Swift and Scarlett Johansson into pornographic videos. Fast-forward to 2023: the State of Deepfakes report by U.S.-based cybersecurity firm Home Security Heroes found a jaw-dropping 550% increase in deepfake videos circulating online since 2019, a surge that signals a worrying proliferation of manipulated media.

The surge of deepfakes: How companies are fighting back

Major tech giants have started to step up and confront the deepfake challenge. In November 2022, Intel unveiled FakeCatcher, a tool that boasts 96% detection accuracy and returns results in mere milliseconds. It works by analyzing the “blood flow” visible in a video, a technique based on photoplethysmography (PPG), which monitors changes in blood circulation. FakeCatcher scrutinizes these blood flow signals across the subject’s face to ascertain whether a video is authentic.
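Intel has not published FakeCatcher’s internals, but the underlying principle, remote photoplethysmography (rPPG), can be illustrated with a minimal sketch. The toy function below is a simplification, not Intel’s implementation, and `pulse_signal_strength` is a hypothetical name: it uses OpenCV’s stock face detector and checks whether the face region’s green channel pulses at a plausible human heart rate.

```python
import cv2
import numpy as np

def pulse_signal_strength(video_path: str, fps: float = 30.0) -> float:
    """Toy rPPG check: estimate how much of the face's green-channel
    fluctuation sits in the human heartbeat band (0.7-4 Hz)."""
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(video_path)
    samples = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = detector.detectMultiScale(gray, 1.3, 5)
        if len(faces):
            x, y, w, h = faces[0]
            # Mean green intensity over the face approximates the PPG signal.
            samples.append(frame[y:y + h, x:x + w, 1].mean())
    cap.release()
    if len(samples) < 2 * fps:  # need at least ~2 seconds of face frames
        return 0.0
    signal = np.asarray(samples) - np.mean(samples)
    power = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)
    return float(power[band].sum() / (power.sum() + 1e-9))
```

A genuine recording tends to concentrate spectral energy in that heartbeat band, so a low score would merely flag a clip for closer inspection; production systems like FakeCatcher operate on far richer spatial and temporal PPG features.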

Political circles, too, are wary of deepfakes, especially with the looming threat of misinformation. Microsoft has stepped into the arena with its Content Credentials as a Service, specifically targeting U.S. politicians and campaign organizations to counter deepfakes ahead of the 2024 presidential election. This service embeds watermark credentials into digital content, enabling verification and tracking of its origins and alterations, as explained by Microsoft Vice Chair and President Brad Smith. 
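Content Credentials are based on the open C2PA provenance standard, and while the sketch below is neither Microsoft’s implementation nor the full specification, it shows the core mechanism in miniature: a manifest that binds a content hash to an issuer and is signed so that any later edit becomes detectable. `make_credential` and `verify_credential` are hypothetical helper names, and the example assumes the `cryptography` package for Ed25519 signatures.

```python
import hashlib
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def make_credential(media_bytes: bytes, issuer: str,
                    key: Ed25519PrivateKey) -> dict:
    """Bind a content hash and issuer claim into a signed manifest."""
    manifest = {
        "issuer": issuer,
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    return {"manifest": manifest, "signature": key.sign(payload).hex()}

def verify_credential(media_bytes: bytes, credential: dict,
                      public_key) -> bool:
    """Re-hash the media and check the signature; any edit breaks both."""
    manifest = credential["manifest"]
    if manifest["sha256"] != hashlib.sha256(media_bytes).hexdigest():
        return False  # content was altered after signing
    payload = json.dumps(manifest, sort_keys=True).encode()
    try:
        public_key.verify(bytes.fromhex(credential["signature"]), payload)
        return True
    except Exception:
        return False
```

Any pixel-level edit changes the hash and invalidates the credential, which is how platforms could later verify that a campaign video still matches what was originally published.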

Startups leading the battle

Startups, in tandem with major tech companies, are also actively combating deepfakes. NYC-based Reality Defender, for instance, offers an AI-powered platform designed to identify manipulated and AI-generated content across audio, video, images and text, with detection models accessible through a web app and an API. Recently, the startup enhanced its AI-generated text detection tool with an “Explainable AI” feature, which offers users an intuitive, color-coded way to spot AI-generated content.

Images by Reality Defender
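Reality Defender has not disclosed how the feature is built, but the general pattern behind such color-coded explanations is straightforward to sketch: score each span of text with a detector, then render the scores as a heat map. In the illustrative snippet below, `score_sentence` is a hypothetical stand-in for a real detection model.

```python
import html
import re

def score_sentence(sentence: str) -> float:
    """Hypothetical stand-in: a real detector would return P(AI-generated)."""
    return min(1.0, len(sentence) / 200)  # toy heuristic for illustration

def highlight(text: str) -> str:
    """Render each sentence as an HTML span tinted by its AI-likelihood."""
    spans = []
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        p = score_sentence(sentence)
        tint = int(255 * (1 - p))  # white (human-like) -> red (AI-like)
        spans.append(
            f'<span style="background: rgb(255,{tint},{tint})" '
            f'title="AI-likelihood {p:.0%}">{html.escape(sentence)}</span>')
    return " ".join(spans)
```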

Another notable example is the Dutch startup Sensity AI, which has developed a detection platform that spots deepfakes using deep learning methodologies similar to those used to create them. The platform alerts users via email when it encounters AI-generated content, such as synthetic human faces on social media and realistic face swaps in videos. Sensity AI claims its platform can detect content generated by popular AI tools like GPT-3, Stable Diffusion, Midjourney and DALL-E.

A screenshot of Sensity AI’s video on YouTube
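Sensity AI’s models are proprietary, but detectors of this kind are typically supervised classifiers trained on labeled real and synthetic media, mirroring the discriminator in the generative networks that produce deepfakes. A minimal PyTorch sketch, assuming a dataset of labeled face crops, might look like this:

```python
import torch
import torch.nn as nn

class DeepfakeClassifier(nn.Module):
    """Tiny CNN sketch (illustrative only): real-vs-fake logit from a face crop."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(128, 1)  # logit > 0 means "likely fake"

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))

# Training would minimize binary cross-entropy over the labeled crops:
model = DeepfakeClassifier()
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
```

In practice, classifiers like this face exactly the arms-race pressure described below: each new generator produces artifacts the detector must be retrained to catch.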

Shaping regulations and policies

On the regulatory front, governments and policymakers are exploring ways to curb the misuse of deepfake technology. The UK government is looking to introduce national standards for the AI industry that would mandate explicit labeling of AI-generated photos and videos. Similarly, the European Union’s Digital Services Act imposes labeling requirements on social media platforms, enhancing transparency and helping users discern the authenticity of digital media.

Google is also playing its part, developing policies to guide responsible use of synthetic content on platforms like YouTube. These policies, which focus on mandatory disclosures and content labeling, aim to promote ethical content creation in line with Google’s broader commitment to digital responsibility. While the guidance does not specify punitive actions for non-compliance, it falls under the umbrella of Google’s existing policies, which carry consequences such as account suspension or content removal for violations.

Navigating challenges and looking forward 

Despite advancements in addressing deepfakes, the battle is far from over. The rapid evolution of AI technologies has led to an ongoing arms race between creators of deepfakes and developers of countermeasures. Furthermore, the tension between security and privacy remains a critical concern, particularly where detection methods might infringe upon personal data.

It is imperative to educate the public about deepfakes and their implications. Fostering media literacy and teaching critical evaluation of information can equip people to recognize deepfake content. To surmount these challenges, robust institutional measures are essential to prevent the misuse of deepfake technology.

As we look into the future, collaboration among various stakeholders—tech firms, governments, researchers and the public—will be crucial in staying ahead of deepfake advancements. Investing in research, international cooperation and continual enhancement of detection techniques are vital in mitigating the adverse impacts of deepfake technology.

Header image courtesy of Pexels
