States Scramble to Block AI Election Meddling


New Mexico Secretary of State Maggie Toulouse Oliver was at a conference on election security last year when she had what she describes as an "oh crap, this is happening" moment. The potential for AI-generated disinformation and deepfakes to disrupt this year’s national elections was the "hottest topic among election administrators" at the event, Toulouse Oliver says, with one terrifying scenario after another unfolding in conversations. As her state’s top election official, she realized time was running out to set up New Mexico’s defenses ahead of this year’s vote. “That was really my wake-up call,” she says.

Thanks in part to that epiphany, New Mexico enacted a new law last month requiring political campaigns to disclose whenever they use AI in their ads, and making it a crime to use AI-generated content to intentionally deceive voters. The focus on “malicious intent” was key, says Toulouse Oliver. "We're cognizant of the First Amendment and we don't want to unfairly penalize folks," she says. The new measures will go hand in hand with a public push to raise awareness of AI content targeting voters, spanning TV, radio, billboards and digital ads.

Toulouse Oliver’s efforts are part of a larger trend. Skeptical that the federal government and social media companies will impose significant guardrails on AI content ahead of November's election, state and local officials have moved quickly to take matters into their own hands. More than 100 bills containing provisions to restrict AI-generated or AI-altered election disinformation have recently been introduced or passed in 39 state legislatures, according to an analysis by the Voting Rights Lab, a group that tracks voting legislation.

"2024 is the first American presidential election year at the intersection of this heightened election-related mis- and disinformation, and the rapid growth of AI-generated content," says Megan Bellamy, vice president of law and policy at Voting Rights Lab. This year's elections, not only in the U.S., but across the world, will serve as a test case of the impact of widely available new generative AI tools, which have made it cheaper, faster and easier than ever before to mass produce altered content. Facing a potential avalanche of voting-related disinformation, state lawmakers have grappled with a variety of approaches to shield voters or penalize those who create and disseminate this content while balancing First Amendment and other legal protections. With regulation of AI content in its early stages, experts say some of the language in the new bills is legally ambiguous and may be hard to enforce.


"AI-generated content...targeted strategically because of our election landscape could still do significant damage," says Bellamy, noting some states may feel the impact more than others. "AI-generated content is created to grab voters’ attention, and that alone could lead to chaos and confusion even if there are efforts to try to mitigate the harm."

Many of these laws, like New Mexico’s, focus on transparency by requiring disclosures about the use of AI in election-related content. The Florida legislature passed a similar bill in March, requiring a disclaimer on political ads noting that they were "created in whole or in part with the use of generative artificial intelligence." The requirement applies to all content, including audio, video, text, images or other graphics. (The bill is still awaiting Gov. Ron DeSantis' signature.)

Many bills also seek to penalize those who use AI tools to intentionally spread misleading content. A bill signed by Wisconsin Gov. Tony Evers last month imposes a $1,000 fine on any group affiliated with a political campaign that fails to add a disclaimer to AI-created or AI-altered content, though the law is narrower than others because it does not cover disinformation spread by groups with no ties to a campaign. In Arizona, lawmakers have been debating several approaches, including a bill that would allow candidates for public office to sue the creators of "digital impersonations" made without their consent.

For many state officials, last year's widespread adoption of ChatGPT and other popular AI tools was a wake-up call to the destructive potential the programs could have on the upcoming elections. "The big leap forward with ChatGPT really got people talking in a way they hadn't before about the many applications of AI and what aspects of our national life are particularly vulnerable," says Minnesota Secretary of State Steve Simon. "AI is not a new and independent threat in and of itself, but it is a means to amplify existing threats in a way we wouldn't even have thought seriously about five years ago."

In the spring of 2023, Minnesota became one of the first states to ban the use of AI-generated content in election materials. The new statute prohibits, within 90 days of an election, the dissemination of such content when it is created with the intent of hurting a candidate. It also criminalizes the dissemination of AI-generated content like deepfakes made without the consent of the person depicted. As part of his efforts, Simon has held election security trainings with officials from all of the state's 87 counties, with a focus on combating AI-generated content and educating voters about it.


"I'm cautiously optimistic about our ability to neutralize its effects," says Simon. Minnesota's new law "is serving notice not just to people who disseminate this, but to the public as well that this is something worth watching and paying attention to."

While the terms "misinformation" and "disinformation" have become heavily politicized since the 2020 election, with many conservative lawmakers reflexively opposing legislation seeking to curb the spread of false information related to voting, state officials say that bills related to AI-generated content have largely been met with bipartisan support. "Very interestingly, there was very little pushback to the legislation," says Simon.

Despite the flurry of new bills, state officials say they know these efforts are a drop in the bucket. Creating AI-generated content that reaches millions can take a few clicks; verifying and tracking it down can take days or weeks. Many officials say that, in tandem with legislative efforts, they have been spending significant resources on public awareness campaigns meant to increase voters' skepticism of and resilience to manipulated content, and to raise the visibility of accurate information about the voting process.

Washington Secretary of State Steve Hobbs says he has watched closely as AI-generated content has drastically improved in recent years: first with fascination, then with deepening alarm. There was the fake but convincing Richard Nixon, created by MIT researchers, announcing a failed moon landing; a fake President Volodymyr Zelensky surrendering at the start of Russia's invasion of Ukraine in 2022; and, more recently, a fake President Joe Biden telling New Hampshire voters not to vote in the primary.

"I was looking at the feds going, 'Man, I really hope they do something,'” Hobbs says. “But they weren't. So we did it at the state level." It took two years, but last May Washington's state legislature passed a law that requires advertisers to disclose when election-related material is created or altered by AI, and allows candidates targeted by AI-altered content to sue for damages. Hobbs sees it as a small but necessary step, but notes that the law does little to protect against election manipulation by state actors or foreign entities: "I wanted to do more."

His counterparts agree the challenges remain considerable. "I'm not going to pretend that regulating this is going to be easy, it's a whole new area," says Toulouse Oliver, the New Mexico Secretary of State, adding that she hopes at least the new measures will help catch the worst violators as the election season progresses. "We're in a brave new world."

Write to Vera Bergengruen at vera.bergengruen@time.com.
