Social media accounts use AI-generated audio to push 2024 election misinformation

PolitiFact identified several videos that used AI-generated audio to promote false claims, many about the 2024 election, on TikTok and YouTube.

A recent social media post made what appeared to be a surprising announcement about the 2024 presidential race.

"Breaking news: Donald Trump has declared that he is pulling out of the run for president," the narrator said in a Feb. 22 TikTok video, which was viewed more than 161,000 times before TikTok removed it from the platform.

Of course, Trump hasn’t dropped out of the 2024 race. As of March 12, he’d secured enough delegates to clinch the Republican presidential nomination.

Besides the false claim, the video had another distinctive element: Its audio appeared to be made with artificial intelligence.

PolitiFact identified several TikTok accounts spreading false narratives about the 2024 election and Trump through its fact-checking partnership with TikTok.

(Read more about PolitiFact's partnership with TikTok.)

Videos on YouTube made similar false claims. These videos, which also appeared to have AI-generated audio, were reshared with lower engagement on Facebook and X.

Generative AI is a broad term that describes when computers create content, such as text, photos, videos and audio, by identifying patterns in existing data. It is likely to play an outsized role in national elections that the U.S. and more than 50 other countries are holding this year.

ChatGPT is a well-known example of generative artificial intelligence, which refers specifically to AI tools that use algorithms to generate outputs and create content that can include text, audio, code, images and more.

In February in Indonesia, AI-generated cartoons helped rehabilitate the image of a former military general linked to human rights abuses who won the country’s presidential election.

Recent advances in generative AI have made it harder to determine whether online content is real or fake. We talked to experts about how this technology is changing the information landscape and how to spot AI-generated audio, which, unlike video, offers no visual abnormalities that can hint at manipulation.

When contacted for comment, a TikTok spokesperson said, "To apply our harmful misinformation policies, we detect misleading content and send it to fact-checking partners for factual assessment." Once alerted to the content highlighted in this story, TikTok removed it from the platform.

Experts analyze videos’ use of AI-generated audio

We asked generative AI experts to analyze five TikTok videos that made false and misleading claims across multiple accounts to determine whether we’d accurately surmised that the audio was AI-generated.

Hafiz Malik, a University of Michigan-Dearborn electrical and computer engineering professor who studies deepfakes, said his AI detection tool classified four of the videos as having AI-generated audio. Some parts of the fifth video were labeled as "deepfake," while others weren’t.

Siwei Lyu, a University at Buffalo computer science and engineering professor who specializes in digital media forensics, said his AI detection algorithms also classified a majority of the five videos as using AI-generated audio.

TikTok videos make outrageous claims

Trump was a focus of these TikTok accounts; several of the accounts used photos of the former president as their profile photos.

These accounts followed a similar playbook: eye-catching headlines often displayed against red backgrounds; videos featuring famous figures, including Trump and Supreme Court Justice Clarence Thomas; and a disembodied narrator.

Some videos made false claims about the well-being of high-profile people — that Trump had had a heart attack and New York Attorney General Letitia James had been hospitalized for gunshot wounds. James sued Trump in 2022, accusing him of fraudulently inflating his net worth; a judge ruled in February that Trump must pay a $454 million penalty.

Another video displayed text that said, "Supreme Court Justice Clarence Thomas joins other justices to remove 2024 race candidate." This appears to misleadingly refer to the Supreme Court’s case on Trump’s ballot eligibility, in which Thomas and the other justices unanimously ruled that individual states cannot bar presidential candidates, including Trump, from the ballot.

All of these TikTok accounts were created within the past few months. The one that appeared the oldest had videos dating to November; the newest began posting content March 1. Three of the accounts had TikTok’s default username of "user" followed by 13 numbers.

Protests were held outside the U.S. Capitol as the House last week approved a bill that would force TikTok’s parent company to sell the popular social media app or face a practical ban in the U.S.

These accounts collectively posted hundreds of videos and garnered hundreds of thousands of views, likes and followers before TikTok removed the accounts and videos.

We also found YouTube videos making false claims that mimicked the TikTok videos’ format: sensational headlines, Trump photos, audio that sounded AI-generated. Most of them were viewed hundreds or thousands of times.

The two YouTube accounts that posted the videos were created in 2021 and 2022 and amassed tens of thousands of followers before we contacted YouTube for comment and the company removed the accounts from the platform.

Before YouTube removed the videos, they were reshared on other social media platforms, including Facebook and X, where very small numbers of people liked or viewed them.

(Read more about our partnership with Meta, which owns Facebook and Instagram.)

A YouTube spokesperson did not provide comment for this story by our deadline.

How generative AI is contributing to more misinformation online

Misinformation experts say the newest generations of generative AI are making it easier for people to create and share misleading social media content. And AI-generated audio tends to be cheaper than its video counterparts.

"Now, anyone with access to the internet can have the power of thousands of writers, audio technicians and video producers, for free, and at the push of a button," said Jack Brewster, enterprise editor at NewsGuard, a company tracking online misinformation.

"In the right hands, that power can be used for good," Brewster said. "In the wrong hands, that power can be used to pollute our information ecosystem, destabilize democracies and undermine public trust in institutions."

NewsGuard reported in September 2023 that AI voice technology was being used to spread conspiracy theories en masse across TikTok. The report said NewsGuard "identified a network of 17 TikTok accounts using AI text-to-speech software to generate videos advancing false and unsubstantiated claims, with hundreds of millions of views."

A 2023 University of British Columbia study that used a dataset from TikTok found that AI text-to-speech technology simplified content creation, motivating content creators to produce more videos.

Study authors Xiaoke Zhang and Mi Zhou told PolitiFact that increased productivity means generative AI "can be deliberately exploited to generate misinformation at a low cost."

The technology can also help users conceal their identities, which can "diminish their sense of responsibility towards ensuring information accuracy," Zhang and Zhou said.

TikTok requires users to label content that contains AI-generated images, videos or audio "to help viewers contextualize the video and prevent the potential spread of misleading content." TikTok’s community guidelines bar "inaccurate, misleading, or false content that may cause significant harm to individuals or society."

No TikTok videos we reviewed had this generative AI label, although some included labels to learn more about U.S. elections. Brewster said NewsGuard also observed many TikTok users bypassing this policy about identifying AI-generated content.

YouTube’s community guidelines don’t allow "misleading or deceptive content that poses a serious risk of egregious harm." YouTube requires disclosure for election advertising containing "digitally altered or generated materials." The company said in December that it plans to expand this generative AI disclosure to other content.

How to detect AI-generated audio

Experts say existing AI detection tools are imperfect. They add that as detection tools improve, so does generative AI technology.

AI-generated audio lacks the more obvious visual cues of AI-generated images or videos, such as mouth movements that aren’t synced to audio or distorted physical features.

However, there are ways people can identify AI-generated audio.

Malik, the University of Michigan-Dearborn professor, said to listen for abnormalities in vocal tone, articulation or pacing.

"(AI-generated voices) lack emotions. They lack the rise and fall in the audio that you typically have when you talk," Malik said. "They are pretty monotonic."
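Malik's observation suggests a crude, automatable heuristic: natural speech rises and falls in pitch, while a flat synthetic voice varies little over time. The sketch below (our illustration, not a tool used by the researchers) measures per-frame zero-crossing rate, a rough stand-in for pitch on clean audio, and compares how much it varies between a constant-pitch tone and one whose pitch rises and falls.

```python
import numpy as np

def frame_zcr(signal, frame_len=1024):
    """Per-frame zero-crossing rate: fraction of samples where the
    waveform changes sign. For a clean periodic signal this tracks
    the fundamental frequency."""
    n_frames = len(signal) // frame_len
    frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
    return np.mean(np.abs(np.diff(np.sign(frames), axis=1)) > 0, axis=1)

def pitch_variation(signal, frame_len=1024):
    """Standard deviation of per-frame ZCR. A monotone signal
    scores near zero; expressive pitch movement scores higher."""
    return float(np.std(frame_zcr(signal, frame_len)))

sr = 16000
t = np.linspace(0, 2, 2 * sr, endpoint=False)

# Constant 150 Hz tone: a stand-in for a "monotonic" voice
monotone = np.sin(2 * np.pi * 150 * t)

# Tone whose frequency sweeps between ~70 and ~230 Hz:
# a stand-in for natural rise and fall in speech
f_inst = 150 + 80 * np.sin(2 * np.pi * 0.7 * t)
expressive = np.sin(2 * np.pi * np.cumsum(f_inst) / sr)

print(pitch_variation(monotone))    # should be near zero
print(pitch_variation(expressive))  # noticeably larger
```

Real detectors such as Malik's and Lyu's are far more sophisticated, combining many acoustic features with trained models; this only illustrates the "rise and fall" cue Malik describes.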

Brewster also advised that the "old tactics" are still the best way to avoid AI-generated misinformation. Those include cross-checking information with other sites, being attuned to grammatical errors and odd phrasing, and searching for the names of those who posted to see if they have shared false information in the past.


This article originally appeared on Austin American-Statesman: Social accounts use AI-generated audio to push election misinformation
