2024 Year in Review: Navigating California’s Landmark Deepfake Legislation
California is becoming the new leader in regulating deepfakes in the U.S., taking a stand on issues ranging from election interference to sexual exploitation. California Governor Gavin Newsom signed a number of new AI laws in 2024, including many aimed at curbing the use of artificial intelligence to create deepfakes.
Deepfakes are images, audio recordings, or videos that have been altered or manipulated to misrepresent someone as saying or doing something that the person did not actually say or do. The passage of these deepfake laws comes as deepfakes have been in the media spotlight. A number of deepfakes have grabbed the public’s attention, including the deepfake image of Pope Francis wearing a white puffer coat in March 2023, the sexually explicit deepfake images of Taylor Swift in January 2024, and the deepfake video with AI-altered audio of Vice President Harris in July 2024, which was shared on X and garnered more than 135 million views.
The California laws follow a trend among U.S. states to regulate deepfakes in recent years, creating a patchwork of rules around the use of AI. The new laws have been both celebrated and criticized, sparking backlash from free speech advocates while drawing praise from actors’ unions and other industry groups. Below, we discuss the context surrounding, and some of the implications of, these new laws.
Before diving into a discussion of these laws, here are some key takeaways: California passed eight new laws in September 2024 aimed at addressing some of the different harms that may be caused by AI deepfakes. These laws fall into four broad categories: (i) transparency and disclosure requirements for AI-generated content; (ii) protections against unauthorized digital replicas of an individual’s voice or likeness; (iii) restrictions on deceptive election-related deepfakes; and (iv) prohibitions on sexually explicit deepfakes.
As discussed below, these laws align with concerns expressed, and laws enacted, in other states regarding AI use, but push the boundaries of deepfake regulation in several ways.
Some of the new California laws are designed to ensure that individuals are aware they are interacting with artificial content. While we have not previously seen deepfake-specific disclosure requirements in other states, the new laws follow a California trend to provide consumers with transparency around new technologies. For example, California enacted the Bolstering Online Transparency Act (“BOT Act”) in 2018, allowing businesses to avoid liability for deceptive “bot” usage by posting a clear, conspicuous disclosure designed to inform users that they are interacting with a bot.
California’s AB 2355, effective January 1, 2025, requires that political advertisements using AI-generated or substantially altered content include a disclosure that the material has been altered using AI. California’s SB 942, effective January 1, 2026, requires providers of generative AI (“GenAI”) systems that have over one million monthly visitors or users and are publicly accessible within California to: (i) make available a free AI detection tool that allows users to assess whether content was created or altered by the provider’s GenAI system; and (ii) offer users the option to include a disclosure in image, video, or audio content created or altered by the provider’s GenAI system.
These laws mirror transparency requirements in other jurisdictions designed to ensure that individuals know they are interacting with AI. Under the EU AI Act, deployers of high-risk AI systems that assist in making decisions related to individuals must inform individuals that they are subject to an AI system. Under the Colorado AI Act, a deployer of a high-risk AI system that is a substantial factor in making consequential decisions concerning Colorado residents must provide a description of the AI system, its purpose, and the nature of each consequential decision. Under the Colorado law, deployers and developers of AI systems intended to interact with individuals must also disclose to consumers that they are interacting with an AI system.
Under the Utah AI Act, businesses that use GenAI to interact with an individual must clearly and conspicuously disclose to the individual that they are interacting with AI. In California, it is unlawful to use undeclared bots to communicate with individuals with the intent to mislead the individual about the bot’s artificial identity to incentivize a purchase or sale of goods or services. We expect to see additional states enact similar disclosure requirements as the number of AI laws continues to grow.
Deepfakes may limit an individual’s right to control the use of their own likeness and voice. Several states have passed laws expanding publicity or creative rights to protect against the unsanctioned use of an individual’s persona, including Hawaii, Illinois, Mississippi, and Tennessee. This issue has become increasingly prominent as movie, television, and video game actors have been striking in part over demands to ensure that studios cannot create and use digital replicas without their permission.
California’s AB 2602 makes unenforceable any contract provision for the performance of personal or professional services that allows for the creation and use of a “digital replica” of an individual’s voice or likeness in place of work the individual would otherwise have performed in person, if: (i) the provision does not include a reasonably specific description of the intended uses of the digital replica, and (ii) the individual was not professionally represented in negotiating the contract by legal counsel or a labor union. A “digital replica” is a computer-generated, highly realistic electronic representation that is readily identifiable as the voice or visual likeness of an individual. California’s AB 1836 makes a person who produces or makes available a digital replica of a deceased personality’s voice or likeness in an expressive audiovisual work or sound recording without prior consent liable to any injured party. Both laws become effective on January 1, 2025.
These new California bills follow the recently introduced federal NO FAKES Act, which would establish a federal right in an individual’s voice and likeness and provide legal recourse for people whose digital replicas are created, used, or profited from without consent.
The passage of AB 2602 was celebrated by the performers’ union SAG-AFTRA, which had lobbied for increased protections for actors’ voice and likeness rights. The passage of these laws may also help resolve the ongoing dispute between video game actors, who have been on strike since July 2024, and video game developers. Video game producers have argued that the use of AI is vital to shortening development timelines and minimizing production costs, while video game actors are pushing for protections, such as those offered in the new California laws, against companies’ use of their voice or digital replica without consent or fair compensation. The parties recently announced that they will hold in-person negotiations over the strike for the first time since November 2023.
AI supercharges the threat of election disinformation. To promote an informed electorate and safeguard the integrity of the electoral process, many states have begun to enact laws regulating deepfakes in the political space, including Alabama, Arizona, Colorado, Florida, Hawaii, Idaho, Indiana, Minnesota, Mississippi, New Hampshire, New Mexico, New York, Oregon, Texas, Utah, and Washington. But these laws pose legal questions under the First Amendment, inviting challenges over where to draw the line between parody and defamation.
California passed two laws to combat deepfakes in the election context. California’s AB 2839 became effective immediately and prohibits a person, committee, or other entity from knowingly distributing an advertisement or other election material containing deceptive AI-generated or manipulated content within 120 days before an election and, in specified cases, within 60 days after an election. California’s AB 2655, effective January 1, 2025, requires large online platforms (public-facing internet websites, web applications, or digital applications that had at least one million California users during the preceding 12 months) to identify and remove materially deceptive election-related content during specified periods, provide mechanisms to report such content, and label reported content within 72 hours after a report is made.
Both of these laws have already faced legal challenges. The X user who created the deepfake video of Vice President Harris sued to block AB 2655 and AB 2839 just days after Governor Newsom signed the laws, arguing that they violate the First and Fourteenth Amendments of the U.S. Constitution and Article I, Section 2(a) of the California Constitution. The U.S. District Court for the Eastern District of California (“E.D. Cal.”) granted a preliminary injunction against AB 2839 after determining that the plaintiff is likely to succeed on a First Amendment facial challenge to the statute. The conservative satire outlet The Babylon Bee and a blogging lawyer have also challenged AB 2655 and AB 2839 on the grounds that they violate the First Amendment.
As these cases make their way through the judicial system, courts must decide at what point digitally altered content becomes harmful, and whether the more appropriate remedy for the threats the deepfake laws seek to mitigate is, as the U.S. District Judge in E.D. Cal. put it, “more speech, not enforced silence.” While these cases are grounded in longstanding First Amendment doctrine, they pose novel questions in the age of social media, where speech is easily manipulated.
The creation and dissemination of sexually explicit deepfakes can inflict significant privacy harms. Many new deepfake laws have focused on providing victims of deepfakes with civil causes of action. Numerous states have also passed laws criminalizing sexually explicit deepfakes, particularly those involving minors: Alabama, Arizona, Colorado, Florida, Georgia, Idaho, Illinois, Indiana, Iowa, Kentucky, Louisiana, Massachusetts, Minnesota, Mississippi, New Hampshire, North Carolina, Oklahoma, South Dakota, Tennessee, Utah, Vermont, Virginia, and Washington.
California’s SB 926 criminalizes the creation and distribution of AI-generated sexually explicit deepfake content under circumstances in which the person distributing the image knows or should know that distribution of the image will cause serious emotional distress, and the person depicted suffers that distress. California’s SB 981 requires social media platforms to establish a mechanism for California users to report sexually explicit digital identity theft. Once reported, the content must be temporarily blocked while the platform investigates, and permanently removed if there is a reasonable basis to believe the content is sexually explicit digital identity theft. Both laws become effective on January 1, 2025.
These new laws help close a loophole in California’s revenge porn law, under which victims lacked recourse to combat the nonconsensual distribution of sexually explicit deepfakes. SB 926 expands the revenge porn law to cover AI-generated content, providing victims, as well as law enforcement, with additional tools to combat this activity. These new remedies supplement existing legal tools available to victims, including claims for defamation, false light invasion of privacy, infliction of emotional distress, and right of publicity violations (if the deepfake is used in a commercial context).