California Governor Gavin Newsom has signed eight new AI-related bills into law, with 30 more under review. These laws aim to address concerns around election misinformation, deepfakes, and the rights of actors.
Election deepfakes:
AB 2655: Platforms must remove or label election-related AI deepfakes.
AB 2839: Targets users who post or repost deceptive AI-generated election content.
AB 2355: Requires political ads that use AI-generated or substantially altered content to disclose that fact.
AB 2839 specifically prohibits “knowingly distributing” “deceptive content” “with malice.” Enforcement would therefore require proving three things: that the person knew the content was fake, that the content meets the statute’s definition of deceptive content, and that it was distributed “with malice” as the law defines that term.
SB 942 seems the most interesting to me. The text defines a “covered provider” as one whose AI system has more than 1 million users in California. However, I wonder whether custom LLMs that fall below that threshold could still be capable of generating convincing deepfakes.
I can agree with the last three laws, as they address highly specific issues. However, laws targeting election deepfakes raise fundamental free-speech concerns. I believe it would be better to codify that candidates for public office may not use AI-generated materials at all. I would further argue that any election organization promoting a candidate, such as a political action committee (PAC), should likewise be prohibited from using AI-generated content in any form. This would prevent situations like the AI-generated images Trump shared depicting Taylor Swift endorsing him, only for her to endorse Kamala Harris in real life.