Here’s What’s Now Illegal Under California’s 8 New AI Laws

California Governor Gavin Newsom has signed eight new AI-related bills into law, with 30 more under review. These laws aim to address concerns around election misinformation, deepfakes, and the rights of actors.

Election deepfakes:

  • AB 2655: Platforms must remove or label election-related AI deepfakes.
  • AB 2839: Targets users who post or repost deceptive AI-generated election content.
  • AB 2355: Requires AI-generated political ads to disclose their origins.

Deepfake nudes:

  • SB 926: Criminalizes blackmail involving AI-generated nude images.
  • SB 981: Requires social media platforms to investigate and remove reported deepfake nudes that resemble users.

Watermarks:

  • SB 942: AI-generated content must include disclosure in its metadata.

Actors and AI:

  • AB 2602: Requires consent before creating AI replicas of actors’ voices or likenesses.
  • AB 1836: Prohibits digital replicas of deceased performers without estate permission.

AB 2839 specifically prohibits “knowingly distributing” “deceptive content” “with malice.” Enforcement would therefore require proving three things: that the person knew the content was fake, that the content meets the legal definition of deceptive content, and that it was distributed “with malice” as the law defines it.

That seems difficult to enforce in practice.

SB 942 seems the most interesting to me. The text specifies that a “covered provider” is an AI system with more than 1 million California users. However, I wonder about custom LLMs that fall below that threshold: they could still be capable of generating convincing deepfakes while escaping the disclosure requirement.
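
To make the metadata requirement a bit more concrete, here is a minimal sketch of what a provider-side disclosure could look like, using Pillow to write a provenance note into a PNG text chunk. The field names and wording are my own assumptions for illustration; SB 942 requires a disclosure but does not prescribe this exact format.

```python
# Minimal sketch of embedding an AI-generation disclosure into a PNG's
# metadata with Pillow. The key names and disclosure text are illustrative
# assumptions, not the format the law actually mandates.
from PIL import Image
from PIL.PngImagePlugin import PngInfo


def save_with_disclosure(image: Image.Image, path: str, provider: str) -> None:
    """Save an image with a latent AI-generation disclosure in its metadata."""
    info = PngInfo()
    # Hypothetical keys; the statute requires a disclosure, not these names.
    info.add_text("ai_generated", "true")
    info.add_text("provider", provider)
    info.add_text(
        "disclosure",
        "This image was created or altered by a generative AI system.",
    )
    image.save(path, pnginfo=info)


# Reading the disclosure back out of a saved PNG:
# img = Image.open("output.png")
# print(img.text.get("disclosure"))
```

Metadata like this is easy to strip when a file is re-encoded or screenshotted, which is part of why the enforceability question matters.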

Whether it’s enforceable or not, the first step is to clearly define what’s not allowed. In my opinion, this is a good start.

I can agree with the last three laws, since they address narrow, well-defined harms. The election-deepfake laws, however, raise more fundamental concerns about free speech. I believe it would be better to simply codify that candidates for public office may not use AI-generated materials at all, and that any organization promoting a candidate, such as a political action committee (PAC), should likewise be barred from using AI-generated content in any form. That would prevent situations like Trump sharing AI-generated images of Taylor Swift appearing to endorse him, only for her to endorse Kamala Harris in real life.