It was the AI deepfake of Katy Perry that looked so good, it fooled her own mother.

As deepfake technology fills our feeds with increasingly uncanny phonies, it’s getting easier to get fooled. And that makes right now the right time to talk about deepfakes, not only with our children but with our parents as well.

In the case of singer Katy Perry and her mom, the truth was revealed over a couple of texts. The exchange took place after her mom complimented Perry on the dress she wore to the 2024 Met Gala in New York City. Yet Perry wasn’t even there. The picture her mom saw was a deepfake. Perry posted a screenshot of the mother-daughter chat that followed on her Instagram account.


Perry wasn’t the only one in an AI gown that night. Deepfake images of other stars at the gala flooded social media as well. Lady Gaga, Rihanna, and Dua Lipa also got swept up in it all.

So, yes, anyone can get fooled by a deepfake. Even moms. Even when it’s a deepfake of their own kiddo.

Talking to your parents about deepfakes.

Seeing isn’t believing on the internet today. Not at first blush, anyway. And that’s a good place to start when talking to your parents about deepfakes. A proper definition of what a deepfake is can help show them why deepfakes are so tricky.

Deepfakes are phony computer-generated images, video, and audio that mimic real people. Deepfakes represent a huge leap from the older techniques used to create fakes, like airbrushing and photoshopping. They use artificial intelligence, commonly shortened to “AI,” which makes deepfakes look increasingly real. And increasingly convincing. As AI technology continues to evolve, it can get tough to tell what’s real and what’s fake.

Anyone can get their hands on the tools to create deepfakes. They’re available for purchase online, from the most basic of tools to the highly powerful. It’s all legal. The place where deepfakes cross the line is in how people use them.

As such, deepfakes can range anywhere from lighthearted fun to malicious intent.

·Deepfakes for a laugh: In their most lighthearted form, deepfakes make for good comedy. You’ll find all manner of clearly labeled parody accounts on social media. One example: Arnold Schwarzenegger singing as Mister Rogers in a cardigan.

Courtesy “brianmonarch” on YouTube

·Deepfakes for fraud: In the past year, fraudsters have pumped the internet full of deepfakes that appear to be well-known public figures hawking all kinds of promos and deals. Two noteworthy examples include Elon Musk and Taylor Swift. Fraudsters have mirrored Musk’s likeness to sell potential victims on scam investments and cryptocurrency. In early 2024, a Taylor Swift deepfake pushed a screaming deal on high-end cookware. Of course, the deal was really a scam.[i]

·Deepfakes for damaging reputations: We’ve seen audio deepfakes reportedly used against everyday people as well, not just celebrities. One prominent case occurred in Maryland, where a school’s athletic director allegedly created an audio deepfake of the school’s principal making racist remarks.[ii]

·Deepfakes that spread disinformation: Deepfakes prop up all manner of lies. The bad actors behind these deepfakes are often politically motivated. They want to sway public opinion, whether that’s around an issue, a particular politician, or even polling info. Earlier in 2024, an AI robocall of President Joe Biden urged New Hampshire citizens to “save their vote” in the primary election — falsely implying that voting in the primary would prevent them from voting in the general election later in the year.[iii]

If you share one thing with your parents, it’s this: you can’t take anything on the internet at face value these days. Deepfakes, innocent and malicious alike, can increasingly pass themselves off as the real thing. Everything you see and hear online requires a review.

Helping your parents spot deepfakes.

AI leaves its fingerprints on deepfakes, little signs that what you’re seeing or hearing isn’t real. They can be tough to spot, particularly when more sophisticated AI tools are at play. Yet they're there.

With some help from a fake image of the “puffer-coat Pope” that cropped up in 2023, we can point out several signs of a bogus image.

Credit: CNN

Start with the hands in the image. The right hand isn’t fully formed. Many AI tools have a notoriously tough time rendering fingers properly. Meanwhile, the left hand features some lighting and skin tones that look a bit unnatural. An even closer look shows that the crucifix worn by the Pope only has half a chain. Next, look at the face and the unusual shadows cast by the glasses he wears.

All these subtle hints add up to a deepfake. Yet if you were to see the image while scrolling online, you might not catch any of them and mistake the “puffer-coat Pope” for an actual image.

Deepfake video suffers many of the same issues. Lighting, skin tone, eyes, hands…all of them can get rendered in a way that looks unnatural. On top of that, the voice and facial expressions might not exactly line up, making the subject look robotic and stiff. The lip-synching might even be altogether off.

As for deepfake audio, keep an ear out for edits. Some deepfakes stitch audio together. (Many AI audio tools tend to work better with shorter clips, rather than feeding them one long script.) As a result, a longer clip might sound halting or like it lacks the natural flow of a speaker. Another marker of a possible fake is when the speaker doesn’t appear to breathe. AI tools don’t always account for this natural part of speech. It’s subtle, yet when you know to listen for it, you’ll notice it when a person doesn’t pause for a breath.

However, keep in mind that bit about how sophisticated some deepfakes have become. With improvements in AI tools, many of these flaws have disappeared. Scrutinizing a picture, video, or audio clip on your own might not turn up the truth.

One of the best ways to spot a deepfake: Slow down.

Malicious deepfakes share something in common. They play on emotions. And they play to biases as well. By stirring up excitement about a “guaranteed” investment or outrage at the apparent words of a politician or public figure, deepfakes cloud judgment. That’s by design. It makes deepfakes that much more difficult to spot because people want to believe them on some level.

With that, let your parents know to slow down. Especially when they see something that riles them up. This offers one of the best ways to spot a fake. From there, a technique called SIFT can help root one out. It stands for: Stop, Investigate the source, Find better coverage, and Trace the media to the original context. With the SIFT method, your parents can indeed slow down and determine what’s real.

Where can your parents start SIFT-ing? Fact-checking organizations like Snopes and PolitiFact, along with reputable news wire services like the Associated Press and Reuters, all offer ways of getting at the truth. This requires an extra step, yet that’s what it takes today to separate fact from deepfake.

Have the deepfake conversation with your parents.

If it hasn’t popped up as a topic of conversation already, now’s a good time to have a talk about deepfakes with your parents. The AI tools behind deepfakes continue to improve, washing away some of the tell-tale signs of a deepfake that older tools left behind. They can look … good. Really good.

When talking to your parents about deepfakes, let them know just how clever deepfakes are. And how common they are. It’s not an exaggeration to say that deepfakes have changed the way that we view the internet. So much of what crops up in our feeds calls for scrutiny. Particularly the things that kick our emotions and biases into gear — both of which are potential signs of a deepfake at work.

As new AI detection tools roll out, like our own Project Mockingbird, your parents still have a time-tested tool of their own that they can use right now — their horse sense. By slowing down, checking other reputable sources, and turning to established fact-checking organizations, they can keep from getting duped or suckered by misinformation and scams.

 

 

