California now has some of the toughest laws in the United States to crack down on election deepfakes ahead of the 2024 election after Gov. Gavin Newsom signed three landmark proposals this week at an artificial intelligence conference in San Francisco.
The state could be among the first to test out such legislation, which bans the use of AI to create and circulate false images and videos in political ads close to Election Day.
But now, two of the three laws, including one that was designed to curb the practice in the 2024 election, are being challenged in court through a lawsuit filed Tuesday in Sacramento.
One of the challenged laws takes effect immediately and allows any individual to sue for damages over election deepfakes; the other requires large online platforms, such as X, to remove the deceptive material starting next year.
The lawsuit, filed by a person who created parody videos featuring altered audio of Vice President and Democratic presidential nominee Kamala Harris, says the laws censor free speech and allow anybody to take legal action over content they dislike. At least one of his videos was shared by Elon Musk, owner of the social media platform X, which prompted Newsom to vow in a post on X to ban such content.
The governor's office said the law doesn't ban satire and parody content. Instead, it requires the disclosure of the use of AI to be displayed within the altered videos or images.
“It’s unclear why this conservative activist is suing California,” Newsom spokesperson Izzy Gardon said in a statement. “This new disclosure law for election misinformation isn’t any more onerous than laws already passed in other states, including Alabama.”
Theodore Frank, an attorney representing the complainant, said the California laws are too far-reaching and are designed to “force social media companies to censor and harass people.”
“I’m not familiar with the Alabama law. On the other hand, the governor of Alabama hasn’t threatened our client the way the governor of California did,” he told The Associated Press.
The lawsuit appears to be among the first legal challenges over such legislation in the U.S. Frank told the AP he is planning to file another lawsuit over similar laws in Minnesota.
State lawmakers in more than a dozen states have advanced similar proposals after the emergence of AI began supercharging the threat of election disinformation worldwide.
Among the three laws signed by Newsom on Tuesday, one takes effect immediately to prevent deepfakes surrounding the 2024 election and is the most sweeping in scope. It targets not only materials that could affect how people vote but also any videos and images that could misrepresent election integrity. The law also covers materials depicting election workers and voting machines, not just political candidates.
The law makes it illegal to create and publish false materials related to elections 120 days before Election Day and 60 days thereafter. It also allows courts to stop the distribution of the materials, and violators could face civil penalties. The law exempts parody and satire.
The goal, Newsom and lawmakers said, is to prevent the erosion of public trust in U.S. elections amid a “fraught political climate.”
But critics, including free speech advocates and Musk, called the new California law unconstitutional and an infringement on the First Amendment. Hours after the measures were signed into law, Musk on Tuesday night elevated a post on X sharing an AI-generated video featuring altered audio of Harris.
“The governor of California just made this parody video illegal in violation of the Constitution of the United States. Would be a shame if it went viral," Musk wrote of the AI-generated video, which has a caption identifying the video as a parody.
It is not clear how effective these laws are in stopping election deepfakes, said Ilana Beller of Public Citizen, a nonprofit consumer advocacy organization that tracks state legislation related to election deepfakes. None of the laws has been tested in a courtroom, Beller said.
The law's effectiveness could be blunted by the slow pace of the courts relative to a technology that can produce fake images for political ads and disseminate them at warp speed. It could take several days for a court to order injunctive relief to stop the distribution of the content, and by then the damage to a candidate or to an election could already be done, Beller said.
“In an ideal world, we’d be able to take the content down the second it goes up,” she said. “Because the sooner you can take down the content, the less people see it, the less people proliferate it through reposts and the like, and the quicker you’re able to dispel it.”
Still, having such a law on the books could serve as a deterrent for potential violations, she said.
Assemblymember Gail Pellerin declined to comment on the lawsuit, but said the law she authored is a simple tool to avoid misinformation.
“What we’re saying is, hey, just mark that video as digitally altered for parody purposes,” Pellerin said. “And so it’s very clear that it’s for satire or for parody.”
Newsom on Tuesday also signed a third law requiring campaigns to disclose AI-generated materials starting next year, after the 2024 election.