Is that Tom Hanks promoting dental insurance, or is it AI? How to spot a fake celebrity endorsement.

Tom Hanks took to Instagram to tell his fans that he was not, in fact, promoting dental insurance. (Yahoo News, Getty Images)

Is Tom Hanks really selling dental insurance? Did Gayle King actually endorse a weight-loss drug? Did popular YouTube influencer MrBeast give away free iPhones? Don't believe everything you see on the internet.

Celebrities and well-known personalities have become the target of a rising number of deceptive advertisements on social media in which they appear to endorse certain products or services. In reality, their likenesses or voices — often generated by artificial intelligence programs — have been used without their permission or knowledge to hawk goods.

On Sept. 30, Hanks warned of an AI-generated version of himself in an ad circulating on the internet, in which the fake Hanks promotes dental insurance. The Forrest Gump star clarified in an Instagram post that he had "nothing to do with" the ad.

King called out a similar situation on Oct. 2, sharing a doctored video in which she allegedly endorses a weight-loss drug. The video footage used was from an Aug. 31 clip where the CBS Mornings host was promoting her radio show. "They've manipulated my voice and video to make it seem like I'm promoting [the product]," King wrote on Instagram. "I've never heard of this product or used it! Please don't be fooled by these AI videos."

Gayle King is one of the latest stars to become the target of a rising number of deceptive advertisements on social media in which it appears they're endorsing certain products or services. (Yahoo News, Getty Images)

YouTube creator MrBeast, whose real name is Jimmy Donaldson, rose to fame by offering exorbitant cash prizes through elaborate stunts and challenges. On Oct. 2, Donaldson shared an AI-generated video on X, formerly known as Twitter, where he appeared to give away 10,000 iPhone 15 Pros. "Are social media platforms ready to handle the rise of AI deepfakes? This is a serious problem," he wrote, debunking the fake ad.

Why has there been an increase in fake celebrity endorsements?

The unauthorized use of celebrity likenesses in falsified endorsements or videos isn't new, but the recent increase in the volume of manipulated videos is cause for alarm.

In February, Joe Rogan was the subject of a misleading TikTok ad in which he appeared to promote male enhancement pills. In October 2022, Bruce Willis's representative shot down reports the retired actor had sold his likeness to create a "digital twin" for use in future content. Zelda Williams, daughter of the late Robin Williams, called the noncommissioned use of AI to recreate the actor's voice "personally disturbing."

"It's becoming easier to create things that are indistinguishable from the real thing. It is becoming more challenging [to decipher what's real] when you see a video on social media," Will Knight, a senior writer at Wired magazine covering artificial intelligence, told Yahoo Entertainment.

He believes the growing accessibility of apps and web services capable of generating deceptive videos, audio clips and images featuring recognizable celebrity faces is a contributing factor in the recent AI explosion.

"One big reason is that the technology needed to make a video with somebody's image faked into it has been becoming more accessible thanks to advances in AI," Knight explained. "It has been possible to create deepfakes where you paste somebody's likeness onto someone else fairly easily."

Some celebrities are reportedly choosing to embrace AI technology in an effort to control their image rights.

"In the case of actors, the value is their likeness. Some of them see more benefit in partnering with something they see as an inevitable tide coming," Rutgers University assistant professor Britt Paris, who studies deepfakes and audio/visual manipulation online, told Yahoo Entertainment. "And others are trying to get a contract that seeks guarantees against the use of AI in situations where they would be using their likeness or posthumous uses of their likeness."

Why are actors being targeted?

"The [Screen Actors Guild] strike is quite relevant," Knight said of the timing. "It's drawing a lot of people’s attention to AI within the industry and outside [of it]. You have actors worrying that their likeness can be copied and manipulated and used in different ways."

Paris agreed, citing "outside political economic factors, like the SAG-AFTRA strike happening right now, and then advancements in technology," as likely factors that may explain the increase in Hollywood stars being unwitting players in AI-generated ads.

Artificial intelligence is one of the major sticking points in the ongoing SAG-AFTRA strike, which is nearing the three-month mark. Actors are fighting major Hollywood studios for protections against their likenesses, voices or performances being used without their consent or compensation. The actors union is days into negotiations with studios on a new contract, after writers signed a three-year agreement on Sept. 27 that ended the WGA strike.

Many believe a celebrity's likeness is an easy tool for cybercriminals to mislead the public, scam vulnerable people or spread misinformation. According to the Better Business Bureau, scammers use popular celebrities to "earn your trust first," then leverage that trust to their advantage with AI-generated deepfakes.

AI technology is also gradually becoming democratized, Knight said. "You'll have unscrupulous agents online who will want to get a celebrity to endorse a dental plan, like [what happened with] Tom Hanks."

Whatever comes from the SAG-AFTRA strike could offer a clearer idea of the regulations that may be put in place to combat unauthorized AI use. Legally, there are "levers that actors can engage ... that would set precedent," Paris suggested, such as defamation lawsuits. "There are laws against using people's likenesses." In September, Slumdog Millionaire star Anil Kapoor won a landmark case in an Indian court protecting his personality rights — which include his likeness, image and voice — against misuse on digital media.

How can you spot one of these fake ads?

There are several identifying traits that could help suss out a fake celebrity endorsement or at least raise a red flag.

"They usually are trying to sell products that don't seem reputable, that seem too good to be true — miracle cures, 'get rich quick' schemes. Usually, it smells funny," Paris said.

Another question to ask: Does the product they're purportedly selling actually align with their perceived public persona?

Paris cited the fake Hanks ad as an example, which used a computer-generated image of the actor, as well as a replica of his voice, without his consent.

"Whether what they're selling seems like something that ... doesn't make sense with that particular celebrity persona," Paris explained, "as well as the audiovisual cues — the uncanny valley of strangeness that we've come to expect from AI-generated images and audio — are things to look for."

Technical markers within AI-generated celebrity impersonation videos can also provide clues for consumers who may question the authenticity of the clip they're watching.

"If the voice sounds strange or if the mannerisms are odd or if there's a flickering in the face where it feels stilted, I would start to look out for those," Knight said. Growing skepticism and the "erosion of our understanding of what is true" may be the unintended (or intended) consequence. "There is a whole science to spotting counterfeit imagery and videos."

As time goes on, that may be easier said than done.

With the rising popularity of artificial intelligence, Knight cautioned, "it will still be possible for people to make these fake videos and put them up online. It'll be more a question of policing them on social media."
