A fake image of a burning Eiffel Tower went viral on TikTok. How social media users can spot AI manipulation and deepfakes.

Earlier in January, millions of people engaged with an artificial intelligence (AI)-generated photo of the Eiffel Tower catching fire, giving many people online a firsthand look at how convincing the technology has become.

The manipulated image of the Eiffel Tower, which has since been removed from TikTok, where it first appeared, drew more than 4.5 million likes and thousands of shares and comments.

After the image went viral, several creators expressed disappointment at just how many people thought the scene was real.

“You’re telling me there were over 4.7 million people that believed there was a tower made of steel caught on flames,” Wyatt Hoover (@hoovess) says in a post. “I get it. This looks very real, but this is just horrifying to think about.”

AI manipulation and deepfakes

There are many variations of AI manipulation, including “deepfakes,” a term that combines the words “deep learning” and “fake.” These manipulated images can be “any of various media that has been digitally manipulated to replace one person's likeness convincingly with that of another.” The term was coined in 2017, according to a news post from MIT, and the technology has gained popularity ever since.

Going into the 2024 election season, five states have banned the use of deepfakes in elections. While deepfakes themselves aren’t against the law, what some choose to do with them can be considered illegal.

Some examples of deepfake videos include impersonations of celebrities Tom Cruise, Tom Hanks and Taylor Swift. While many deepfakes are made with malicious intent — to intentionally mislead people into divulging sensitive information, for example — sometimes they are made for entertainment purposes only.

In addition to videos, AI can also recreate someone’s voice, which can be used in scams to collect information or money. In fact, ahead of New Hampshire’s January primary, a fake robocall mimicking Joe Biden’s voice made the rounds, telling people to stay home from the polls.

Mia Blume, an AI expert and technology design coach in San Francisco, spoke with Yahoo News to explain how people can train themselves to spot AI-generated content.

“I’m not surprised. AI is getting really good at generating images that look pretty real,” she said in an interview. “It’s certainly concerning.”

Some social media platforms have taken steps to help users identify what is AI-generated and what’s real. LinkedIn identifies and notifies users when a photo is AI-generated, and X, formerly known as Twitter, offers community notes, allowing users to provide context about different posts. Until users become proficient at telling the difference between real and fake media, Blume says, it’s on the platforms to provide that context.

“I do think platforms have a responsibility to help people understand where their content comes from,” Blume said. “If it’s actually from a news source, we should know that. If it’s AI-generated, we should know that. And we’re just not there yet.”

How to spot a deepfake

While Blume believes that generative AI — a deep-learning model that can generate high-quality text, images and other content — can create incredibly realistic photos, there are certain things she thinks people can do to make the distinction on their own.

“We can look for things like pixelization or extra fingers or [images that look as if] someone’s head has been plopped on,” she said. Some generative AI models have also been criticized for the way they handle skin tone variations and diverse cultural backgrounds, and those shortcomings can also tip viewers off to the technology’s limits.

While spotting a deepfake video isn’t completely foolproof, Matt Groh, an assistant professor at Northwestern University and former research assistant at the MIT Media Lab, told MIT’s Ideas Made to Matter that people should pay attention to the subject’s face (“Is someone’s hair in the wrong spot?”); audio (“Does someone’s voice not match their appearance?”); and lighting (“Deepfakes often fail to fully represent the natural physics of lighting.”).

Without cues such as those, it may be tough for users to determine what’s real and what’s not. In 2023, Milla Sofia, an AI-generated influencer, captured social media’s attention; Sofia currently has over 130,000 followers on Instagram.

Even with hyper-realistic profiles like Sofia’s, there are still ways to find out whether what you see online is real, including websites dedicated to determining whether a picture is AI-generated. According to the website Is It AI?, its tool uses a model trained on a dataset of pictures labeled as real or AI-generated. From that training, the model learned telltale signs that distinguish real images from AI-generated ones.
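That description matches a standard supervised image classifier. As an illustration only, and not the internals of Is It AI? or any other tool, here is a minimal sketch in Python that fine-tunes a small pretrained network on a hypothetical folder of images sorted into “real” and “ai” subdirectories:

```python
# Minimal sketch of a supervised "real vs. AI-generated" image classifier,
# the general approach detection sites describe. Illustrative only; the
# dataset layout and training choices are assumptions, not any tool's
# actual method. Requires: pip install torch torchvision
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Hypothetical dataset layout: data/ai/*.jpg and data/real/*.jpg
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
dataset = datasets.ImageFolder("data", transform=transform)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

# Start from a small pretrained backbone and swap in a 2-class head.
# ImageFolder labels classes alphabetically: 0 = "ai", 1 = "real".
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for epoch in range(3):  # a few passes over the data is enough for a sketch
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```

In practice, detectors trained this way are only as good as their training data, which is one reason no such tool is foolproof.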

Google also updated its image search in 2023 to add an “About this image” feature, which shows details such as when a photo was first indexed and where it first appeared online. Now, when users upload a photo for a reverse image search, which surfaces information about where a picture has appeared, they have more details to help them reach a conclusion.
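Google hasn’t published the internals of “About this image,” but one common building block of reverse image lookup is perceptual hashing, which can match re-posted or lightly edited copies of a picture. A minimal sketch in Python using the Pillow and imagehash libraries, with hypothetical filenames:

```python
# Minimal sketch of perceptual-hash image matching, one common building
# block of reverse image search. Illustrative only, not Google's actual
# method. Requires: pip install pillow imagehash
from PIL import Image
import imagehash

# Hypothetical filenames for a known original and a suspect copy.
original = imagehash.phash(Image.open("eiffel_original.jpg"))
suspect = imagehash.phash(Image.open("eiffel_viral_copy.jpg"))

# phash is robust to resizing and mild re-encoding; the Hamming
# distance between hashes measures how visually similar the images are.
distance = original - suspect
print(f"Hamming distance: {distance}")
if distance <= 8:  # small distance suggests the same underlying image
    print("Likely a re-post or lightly edited copy of the original.")
else:
    print("Images appear visually distinct.")
```

A search index built on hashes like these can flag where a viral picture, or a close variant of it, has appeared before.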

Generative AI is a booming industry: PitchBook reported that tech companies and venture capitalists invested over $10 billion in 2023, and Statista projects the market will grow to upwards of $66 billion in 2024.

“I think actually going to the news and verifying the sources [is key], but it’s not easy,” Blume said. “It takes a lot of work to validate or understand the origin of this content.”
