Bobbi Althoff was the victim of deepfake AI porn. 'This world is scary. It's getting scarier,' she says.

When Bobbi Althoff saw that she was trending on X (formerly Twitter) in February, she thought it was because of her podcast.

The Really Good Podcast host and social media influencer, 26, who’s known for her deadpan-style interviews with celebrities, had recently interviewed rapper Wiz Khalifa, and she wanted to check whether people were talking about the episode. After all, the two had smoked together, and the rapper had coached her “through getting high.”

She soon found out that the conversation was about something else entirely. It turned out that sexually explicit, AI-generated deepfake videos using Althoff’s likeness were circulating on X.

“I was like, ‘What the f*** is this? That’s not my podcast,’” Althoff told Yahoo Entertainment at an event for Hasbro's new card game Fork, Milk, Kidnap.

Deepfake images and videos have become an increasingly serious problem for celebrities and non-celebrities alike, and X in particular has been called out for lax oversight reminiscent of notoriously toxic, anonymous message boards like 4chan. In just nine hours, the clip received more than 4.5 million views on X, according to the Washington Post.

A rep for X didn't immediately respond to Yahoo Entertainment's request for comment.

Not only have celebrities like Taylor Swift been victims of fake, nonconsensual images spread to the masses, but underage students have had to confront the issue as well.

While some images look noticeably AI-generated (too many fingers on each hand, for example), others are more subtle and can confuse viewers.

“I immediately was like, ‘That looks so fake. There’s no way anyone’s gonna believe that.’ So I brushed it off like no biggie,” Althoff said.

Plenty of online users did believe it, and so did her own team, who messaged her immediately.

“They’re like, ‘Bobbi, when you have a minute, please give us a call,’” she recalled.

It was the serious tone that struck the podcast host and comedian, who called them back to discuss what was happening.

“They were like, ‘We just need to ask: Is what we saw online real?’” she recalled.

“There’s no way you guys thought this was real,” she responded.

For Althoff, hearing that from people she worked closely with helped her understand just how much of an impact these deepfake images had on the public at large.

“That was when it really set in that people believed that, and that was really horrible,” she said. “I was like, ‘There’s no way people believe this about me,’ and they did.”

While there is no federal law that regulates deepfake porn, some states have made moves on their own to combat the growing problem, like Missouri with its proposed “Taylor Swift Act.” Beyond state legislation, a bill known as the DEFIANCE Act of 2024 has been introduced in Congress that would “improve rights to relief for individuals affected by nonconsensual activities involving intimate digital forgeries.”

Despite these efforts to regulate nonconsensual AI-generated imagery, one of the challenges lies in pinpointing who created the images in the first place, since many originate on anonymous message boards.

In February, the day after the deepfake images of Althoff had originally appeared on the platform, independent internet researcher Genevieve Oh told NBC News that she had “tracked more than 40 posts on X containing the Althoff deepfake video or links to the material.” As of that reporting, only one of the posts had been removed for violating X’s rules. Oh did not immediately respond to Yahoo Entertainment’s request for comment.

As a mother of two young daughters who has posted about parts of her parenting journey for her 7.4 million TikTok followers, Althoff said she’s especially concerned.

“AI is scary,” she told Yahoo Entertainment. “I’m with my kids, and I’m like, ‘You guys are going to have a rough time. Good luck.’ It’s gonna suck. This world is scary. It’s getting scarier.”