Introduction
It’s all anyone can talk about. In classrooms, boardrooms, on the nightly news, and around the dinner table, artificial intelligence (AI) is dominating conversations. With all this attention, you’d think it was a completely new technology, but AI has been around in various forms for decades. The difference now is that it’s accessible to nearly everyone. AI can assist in writing better emails, spark our creativity, enhance our productivity, or even lead us on delightful adventures in meme-making. Just because something is AI-generated doesn’t mean it’s harmful.
Deepfakes are AI-generated media that represent someone’s likeness or voice, or even an entirely fictitious person.
The technology can be used for positive purposes, like turning your voice into podcast narration or creating videos that bring history to life.
But what happens when it falls into the wrong hands? That’s when deepfakes become hazardous. Deepfakes can easily be used to manipulate people into believing false information.
This guide is for anyone looking to understand the deepfake AI landscape and how to protect yourself and your loved ones from getting duped.
What is a deepfake?
In the age of social media and viral videos, it's getting harder to trust what we see online. Enter deepfakes: videos or images that look incredibly real, but are actually cleverly manipulated by AI.
Imagine your favorite celebrity starring in a movie they never actually filmed, or a politician giving a speech they never actually delivered. That’s the power of deepfakes. Deepfakes are made from existing videos or photos and, with the help of AI, can swap faces, change voices, or even make people appear to say or do things they never did.
While some deepfakes are harmless fun (like that silly video of your cat talking), others have a darker side. They can be used to spread misinformation, manipulate elections, or even ruin reputations. This makes it all the more important to be critical consumers of media and think twice before sharing anything that seems too good to be true.
Generative AI tools such as ChatGPT, Google’s Bard, and Sora show how AI can craft text, art, and video, and even mimic human speech. But while the technology has advanced, the ethics of how deepfake AI should be used are still taking shape.
Here are some examples of the impact of harmful deepfakes:
- A deepfake video of a celebrity endorsing a product without the celebrity’s knowledge may encourage you to make a purchase or back an item, losing money as a result.
- A deepfake audio clip altering a politician’s speech could result in you changing your opinion entirely about supporting a certain politician.
- Misleading video clips might easily go viral, and result in spreading misinformation, even though that content isn’t real at all.
- AI voice simulations are popular on social media and are often the punchline of jokes at the expense of celebrities, politicians, and other public figures.
- Other AI voice scams use phone calls or audio messages to target individuals by cloning the voices of loved ones who claim to urgently need money to get out of a sticky situation.
To fight against these scams, it’s important to understand how they work. You'll get better at spotting fake media and lower the chances of being tricked if you stay informed.
Understanding how deepfakes work
Deepfakes, a portmanteau of "deep learning" and "fake," are sophisticated digital creations where a person in an existing image or video is replaced with someone else's likeness. This technology leverages powerful artificial intelligence and machine learning techniques to produce or alter video content so that it presents something that didn't actually occur. Here’s a closer look at how deepfakes are created and the technology behind them.
- Data Collection
The first step in creating a deepfake is gathering enough visual and audio data of the person you want to mimic. This usually involves collecting numerous images and videos that include different angles, lighting conditions, and expressions. This data helps in training the AI models to understand and replicate the nuances of the target’s facial and voice characteristics.
- Machine Learning Models
At the heart of deepfake technology are machine learning models known as autoencoders and generative adversarial networks (GANs). An autoencoder learns to compress data into a smaller, dense representation and then decompress it back to its original form. In the context of deepfakes, it helps in swapping faces by reducing the high-dimensional data (like images) into lower-dimensional code which can then be modified and expanded back to an image.
GANs consist of two parts: a generator and a discriminator. The generator creates images, while the discriminator evaluates them. Over time, the generator learns to make more accurate forgeries, and the discriminator grows better at spotting the fakes. This adversarial process enhances the quality of the generated outputs, making the deepfakes more realistic.
- Training the Model
The collected data is used to train the AI models. This process involves adjusting the parameters of the neural networks until they can convincingly replicate the target’s facial and vocal characteristics. Training a deepfake model requires significant computational power and can take days or even weeks, depending on the complexity of the data and the desired quality of the output.
- Video Creation
Once the model is adequately trained, it can be used to create the deepfake video. This involves feeding the model input in the form of videos or images where you want to replace someone’s face or voice. The AI then processes the input, swapping in the learned characteristics of the other person’s face or voice into the video, creating a seamless fake that appears real.
- Refinement
The initial outputs may still have imperfections such as unnatural blinking, poor lip-syncing, or jittery facial movements. Additional post-processing and refinement using video editing tools can help correct these flaws to make the final video look more believable.
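The compress-and-reconstruct idea at the heart of face swapping can be sketched in a few lines of code. Below is a toy linear autoencoder in Python using only NumPy: it learns to squeeze 2-D points into a 1-D code and reconstruct them, the same compress-then-decompress principle that deepfake tools apply, at vastly larger scale, to face images. The data, dimensions, and learning rate here are illustrative assumptions, not taken from any real deepfake tool.

```python
import numpy as np

# Toy data: 200 points that mostly lie along one direction,
# so a 1-D "code" can capture most of the structure.
rng = np.random.default_rng(0)
t = rng.normal(size=(200, 1))
X = np.hstack([t, 3 * t]) + 0.05 * rng.normal(size=(200, 2))

# Encoder and decoder are single linear maps (weights only, no bias).
W_enc = rng.normal(scale=0.1, size=(2, 1))  # 2-D input -> 1-D code
W_dec = rng.normal(scale=0.1, size=(1, 2))  # 1-D code  -> 2-D reconstruction

def reconstruction_error(X, W_enc, W_dec):
    code = X @ W_enc           # compress
    X_hat = code @ W_dec       # decompress
    return float(np.mean((X - X_hat) ** 2))

lr = 0.01
initial_error = reconstruction_error(X, W_enc, W_dec)
for _ in range(1000):
    code = X @ W_enc
    X_hat = code @ W_enc @ W_dec if False else code @ W_dec
    err = X_hat - X                        # residual drives the updates
    grad_dec = code.T @ err / len(X)       # gradient w.r.t. decoder weights
    grad_enc = X.T @ (err @ W_dec.T) / len(X)  # gradient w.r.t. encoder weights
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

final_error = reconstruction_error(X, W_enc, W_dec)
print(initial_error, final_error)  # error should drop sharply
```

After training, the reconstruction error falls close to the noise floor, showing that a small code can capture most of the structure in the data. The classic deepfake face-swap exploits exactly this: one shared encoder is trained with a separate decoder per identity, then a face is encoded and decoded through the other person’s decoder.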
Common uses of deepfakes
Deepfakes are increasingly being utilized across various sectors for both beneficial and malicious purposes. They are commonly used in entertainment to create realistic special effects, in education to generate engaging content, and by cybercriminals to perpetrate scams and misinformation, posing significant ethical and security challenges. Here’s a look at some of the most common and impactful uses of deepfakes today.
- Entertainment and Media
In the world of entertainment, deepfakes have been used to impressive effect. Filmmakers and video producers use deepfake technology to de-age actors, resurrect performances from actors who have passed away, or enhance the visual effects in movies and television shows. This technology allows for creative flexibility, enabling storytellers to push the boundaries of traditional filmmaking.
- Content Personalization
Deepfakes are being explored for their potential to personalize content in marketing and advertising. For example, a commercial could feature a deepfake version of a celebrity speaking different languages, making the content more relatable to diverse audiences. Similarly, personalized video messages using deepfakes can create unique experiences for customers, potentially increasing engagement and customer loyalty.
- Education and Training
In educational contexts, deepfakes can be used to create interactive learning experiences. Imagine historical figures brought to life, delivering lectures about their lives and times, or scientists explaining complex theories in first person. This can make learning more engaging and accessible, especially for visual learners.
- Art and Social Commentary
Artists and activists are using deepfakes to produce thought-provoking art and commentary on social issues. By altering or reimagining events, deepfakes can challenge viewers’ perceptions and prompt discussions about reality, technology, and ethics. These projects often aim to highlight the potential dangers of deepfake technology while also exploring its artistic possibilities.
- Research and Development
Researchers are employing deepfakes in fields like psychology and facial recognition technology. By generating various facial expressions, movements, and scenarios, they can study human behavior, emotional responses, and the effectiveness of facial recognition systems under diverse conditions.
Creation of deepfake technology
Deepfake technology emerged from advancements in machine learning and artificial intelligence, particularly in the area of deep neural networks. Initially developed for improving image recognition and language processing, these techniques were repurposed to create realistic synthetic media. The results are highly convincing videos and audio that can mimic real people.
Deepfake technology entered the public consciousness around 2017 when a developer using the pseudonym "DeepFakes" began posting realistic-looking fake adult videos of celebrities on Reddit. This not only sparked widespread media attention but also led to the development of user-friendly deepfake creation software. Tools like FakeApp and later, more advanced applications, made it possible for everyday users to create deepfakes without extensive technical knowledge, significantly lowering the barrier to entry.
Why are deepfakes risky?
One of the most significant dangers posed by deepfakes is the steady corrosion of trust. As technology surges ahead, it becomes more difficult to know what’s real and what’s fake.
False information
Deepfakes sow seeds of skepticism and make it difficult to believe what we see and hear online. This growing skepticism has significant implications for how we consume information and interact with media. As deepfakes become more sophisticated, distinguishing between real and manipulated content becomes increasingly challenging, not just for individuals but also for organizations that rely on digital media for communication and marketing. The consequences can be extreme. People may unknowingly share or respond to falsehoods, sparking chaos and damaging their own and others’ good names.
Moreover, the proliferation of deepfakes can erode public trust in legitimate news sources. As people become more wary of falling victim to fake content, they may also grow distrustful of real, factual information, leading to a broader crisis in information credibility.
To combat these challenges, it is crucial for technology developers, policymakers, and media organizations to collaborate on solutions that can detect and flag deepfake content. Public awareness campaigns can also educate people about the existence and dangers of deepfakes, empowering them to critically evaluate the media they consume.
Privacy violations
In today’s digital age, the advent of deepfake technology has introduced a new frontier in the realm of privacy violations, presenting unique challenges that stretch the boundaries of ethics and legality. Deepfakes, synthetic media in which a person’s likeness is replaced with someone else’s, are becoming increasingly sophisticated and accessible, raising serious concerns about individual privacy rights.
Imagine waking up one day to find a video circulating online that shows you saying or doing things you never actually said or did. This scenario is no longer confined to the realms of science fiction. For many, it’s a jarring reality. Victims of deepfakes often experience a profound violation of their privacy, and the psychological impact can be severe, leading to distress and a sense of powerlessness.
Political upset
In the realm of politics, where trust and credibility are paramount, deepfakes have emerged as a potent tool for causing significant upheaval. These hyper-realistic manipulations of audiovisual content can create false perceptions, stir controversy, and even destabilize electoral processes. The implications are profound, affecting not just individual politicians but entire political landscapes.
Deepfakes can severely damage the reputation of political figures by portraying them in situations or saying things that are out of character or outright false. This erosion of trust can lead to a loss of public confidence not only in the targeted individuals but in the political system as a whole. When voters cannot trust what they see and hear, the foundational trust necessary for democratic governance begins to crumble.
Financial fraud
Deepfake technology can be used in financial deception and fraud. With AI voice cloning in the mix, phone-based phishing attacks are now more believable than ever. Scammers can leverage deepfake technology to mimic the voices of loved ones or respected figures, duping individuals into revealing sensitive information or sending them money. The fallout from scams like this can be both financial and emotional.
Deepfakes in advertising are less threatening to personal well-being but can still take a financial toll. By impersonating respected individuals in manipulated endorsements, fraudsters stand to make money at the expense of unsuspecting consumers and the reputations of those they impersonate.
How can I protect myself from deepfakes?
Deepfake AI threats aren’t uncommon, but they also aren’t impossible to prevent. Protecting yourself against the risks associated with AI-generated video content is paramount in today’s digital landscape. Here are 6 key strategies to stay safe:
- Educate yourself: Stay informed about the capabilities of AI technology, particularly in video generation. Understanding how deepfakes and other AI-generated content are created can help you recognize and mitigate their potential impact.
- Verify sources: Always scrutinize the source of video content. If something seems suspicious or too good to be true, take extra precautions before believing or sharing it.
- Use trusted platforms: Whenever possible, consume video content from reputable sources and platforms that prioritize authenticity and credibility. Be cautious when viewing videos shared on social media or lesser-known websites.
- Protect personal information: Be cautious about sharing personal information or engaging in sensitive conversations over video calls or messaging platforms. Verify the identity of individuals before divulging sensitive information.
- Stay up to date on new technologies: As AI scams become more prevalent, so does the technology to combat them.
- Keep up with laws and regulations: While specific laws around the use of deepfakes are still in flux, some countries — China, for example — require that people give their explicit consent before their faces are used in deepfakes. Similar efforts are being made around the world to establish guidelines for the responsible use of AI and deepfake technologies.
Deepfake examples
Malicious deepfakes are popping up at a rapid pace — so much so that we’ve started a page devoted to AI news, deepfakes, and scams to keep you up to date.
Let’s dive into some real-world examples of deepfakes that showcase the impact of deepfake scams across the political, entertainment, and financial sectors.
In 2020, a deepfake video of then-Speaker of the United States House of Representatives Nancy Pelosi went viral. The video was altered to make it appear as if Pelosi were slurring her words and sparked concerns about the potential misuse of this technology for political manipulation. This incident highlighted the need for individuals to be skeptical of videos that seem too good to be true and to pay attention to video quality and the source when evaluating the authenticity of online content.
Deepfakes have also been used to spread misinformation. In January 2024, just ahead of New Hampshire’s first-in-the-nation presidential primary, a deepfake robocall imitating President Biden went out to thousands of the state’s voters, telling them to stay home and “save” their votes for the November general election. Such activity, if left unchecked, has the potential to disrupt election outcomes.
Deepfake scams also have hit the entertainment industry, using AI to clone celebrities to sell products. In early 2024, a video showed a representation of Taylor Swift targeting her fans with the promise of free cookware. But when viewers provided their info, they fell victim to identity theft. Similar scams involved celebrities Kelly Clarkson, Tom Hanks, and Hugh Jackman.
Deepfakes also have been used to create realistic visual effects and even bring deceased actors back to life. For example, in the 2016 film "Rogue One: A Star Wars Story," a deepfake of the late actor Peter Cushing was used to recreate the character of Grand Moff Tarkin. This use of deepfakes blurs the line between reality and fiction and raises questions about the ethical implications of using this technology.
How to spot a deepfake
Spotting deepfakes requires a keen eye and attention to detail. Here are some key indicators that can help you tell real from fake:
- Scrutinize facial features: Deepfake examples often exhibit inconsistencies in the subjects’ eyes. Look for unnatural blinking patterns, mismatched eye movements, or a lack of synchronization between the eyes and the rest of the face. Additionally, pay attention to skin tones and textures; deepfakes may show abrupt transitions or unnatural smoothing of the skin.
- Examine the background: Deepfakes sometimes struggle to seamlessly blend the subject into the background. Check for irregularities in lighting, shadows, and perspective. Notice if the subject appears to be floating or detached from the background, or if there are any abrupt changes in the background as the camera moves.
- Listen for audio discrepancies: Deepfake videos may have audio that doesn't quite match the speaker's lip movements. Listen for any unnatural pauses, pitch changes, or distortions in the audio. The voice may also sound robotic or lack the usual inflections and emotions.
- Consider the overall quality: Deepfakes often have a lower video quality compared to authentic videos. Look for pixelation, blurring, or unnatural movements. Deepfakes may also have a "plastic" or artificial appearance due to the AI-generated nature of the content.
- Utilize deepfake AI detection tools: Several online tools and software applications are specifically designed to detect deepfake examples. These tools employ advanced algorithms to analyze videos and identify potential manipulations. While nothing is foolproof, these tools can provide additional protection against deepfake deception.
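To make the "overall quality" cue above concrete, here is a tiny Python sketch of one classic image-sharpness heuristic: the variance of the Laplacian, which comes out low for overly smooth or blurred regions. Real deepfake detectors rely on far more sophisticated learned signals; this is only an illustration of the unnatural-smoothing cue, run on synthetic patches rather than real video frames.

```python
import numpy as np

def laplacian_variance(gray: np.ndarray) -> float:
    """Variance of a simple 4-neighbour Laplacian; low values mean smooth/blurred."""
    lap = (-4 * gray[1:-1, 1:-1]
           + gray[:-2, 1:-1] + gray[2:, 1:-1]    # pixel above and below
           + gray[1:-1, :-2] + gray[1:-1, 2:])   # pixel left and right
    return float(lap.var())

rng = np.random.default_rng(1)
sharp = rng.random((64, 64))        # noisy "detailed" patch
smooth = np.full((64, 64), 0.5)     # flat, over-smoothed patch

print(laplacian_variance(sharp) > laplacian_variance(smooth))  # True
```

In practice, a check like this would be run over the face region of each frame and compared against the rest of the image; a face that is consistently much smoother than its surroundings is one weak warning sign, which is why real tools combine many such signals.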
By being vigilant and applying these techniques, you can enhance your ability to identify deepfakes and protect yourself from misinformation online.
Remember, if something seems too good to be true, it's worth taking a closer look to determine its authenticity.
Methods for detecting deepfakes
Deepfakes are increasingly sophisticated and harder to identify with the naked eye. To combat this, various methods have been developed, such as analyzing inconsistencies in audio-visual synchronization, deploying AI-based detection algorithms, and scrutinizing digital artifacts. These techniques are crucial for identifying manipulated content and maintaining trust in digital media.
Deepfake safety basics
As deepfakes become more advanced and prevalent, it's crucial to protect yourself from being deceived by them. Here are some practical tips to help you stay vigilant:
- Maintain a critical eye: Be wary of videos that appear exceptionally polished or too good to be true. Deepfakes often exhibit an unnatural smoothness or lack the subtle imperfections found in genuine footage.
- Scrutinize video quality: Pay close attention to the video's quality. Deepfakes may reveal themselves through unnatural movements, inconsistencies in lighting or shadows, or a lack of background detail.
- Verify the source: Always check the source of the video. Is it from a reputable news organization, a trusted website, or a well-known social media account? If the source is unfamiliar or questionable, treat the content with caution.
- Consider the context: AI fakes usually don’t appear by themselves. There’s often text or a larger article around them. Inspect the text for typos, poor grammar, and overall poor composition. Look to see if the text even makes sense. And, like a legitimate news article, does it include identifying information — the date, time, and place of publication, along with the author’s name?
- Evaluate the claim: Does the image seem too bizarre to be real? Too good to be true? Today, “Don’t believe everything you read on the internet” extends to “Don’t believe everything you see on the internet.” If a fake news story is claiming to be real, search for the headline elsewhere. If it’s truly noteworthy, other known and reputable sites will report on the event — and will have done their own fact-checking.
- Check for distortions: The bulk of AI technology still renders fingers and hands poorly. It often creates eyes that might have a soulless or dead look to them — or that show irregularities between them. Also, shadows might appear in places where they look unnatural. Further, the skin tone might look uneven. In deepfake videos, the voice and facial expressions might not exactly line up, making the subject look robotic and stiff.
- Utilize deepfake AI detection tools: Several software tools are available to help detect deepfakes. These tools employ advanced algorithms to analyze videos and identify potential manipulations.
- Stay informed and educated: Keep yourself updated about deepfake technology and its evolving capabilities. Understanding how deepfakes are created and the techniques used to spot them will empower you to make informed judgments about the authenticity of online content.
Related reading:
- https://www.mcafee.com/ai/news/staying-safe-in-the-age-of-ai/
- https://www.mcafee.com/ai/news/safer-ai-four-questions-shaping-our-digital-future/
- https://www.mcafee.com/ai/news/artificial-intelligence-and-winning-the-battle-against-deepfakes-and-malware/
- https://www.mcafee.com/ai/news/10-artificial-intelligence-buzzwords-you-should-know/
- https://www.mcafee.com/ai/news/how-to-protect-your-privacy-from-generative-ai/