From manipulated political videos to AI-generated intimate images, deepfakes—realistic media produced by artificial intelligence—are reshaping the legal landscape. As generative AI tools grow in accessibility and sophistication, states are racing to enact laws that require disclosure of synthetic content. These “deepfake disclosure laws” represent a distinct regulatory strategy: rather than banning the technology outright, they aim to mitigate harm through transparency.
At the time of this writing, over a dozen states have enacted disclosure statutes targeting synthetic media, particularly in political speech and commercial impersonation. This article surveys the current state-level frameworks, explores their constitutional underpinnings, and considers the evolving role of disclosure in synthetic media regulation.
I. Federal Context: Still in Flux
Despite several proposed federal measures—such as the DEEPFAKES Accountability Act and the REAL Political Advertisements Act—Congress has yet to enact comprehensive AI disclosure legislation. These bills typically require watermarks, disclaimers, or metadata tagging for synthetic media, especially in election contexts. However, none have cleared both chambers.
Notably, the TAKE IT DOWN Act, enacted in 2025, addresses the spread of nonconsensual AI-generated intimate images. While it mandates takedown mechanisms on major platforms, it stops short of requiring content labeling at the source.
This legislative void has left states with considerable latitude to experiment with their own disclosure schemes.
II. State-by-State Approaches
California: Election and Personal Rights Protections
California has been a pioneer in regulating deepfakes through disclosure requirements.
- AB 730 (2019) prohibits the distribution of materially deceptive audio or video content that falsely depicts a candidate within 60 days of an election without a disclaimer. Violations are civilly actionable (Cal. Elec. Code § 20010).
- AB 2355 (2024) expanded this framework by requiring political advertisements that use AI-generated or substantially altered content to carry a disclosure, while AB 2655 (2024) obligates large online platforms to label or remove materially deceptive election-related content.
California also enforces protections against commercial misappropriation of likeness under its long-standing right-of-publicity laws, now extended to cover AI-generated impersonations.
Texas: Criminalizing Election Deception
Texas adopted a criminal approach.
- Under Tex. Elec. Code § 255.004, it is a criminal offense to publish a “deep fake video” with the intent to harm a candidate or influence an election within 30 days of voting, unless a clear disclosure is made.
- SB 751 (2019) added the deep fake video provisions to the Election Code, and SB 20 (2025) criminalizes AI-generated child sexual abuse imagery.
Minnesota & Washington: Civil and Criminal Liability
Minnesota prohibits the dissemination of synthetic media without disclosure during the 90 days prior to an election, creating both criminal penalties and civil remedies. Similarly, Washington State mandates disclosures on campaign-related synthetic content and provides a civil cause of action for candidates depicted without the required disclosure (Wash. Rev. Code § 42.17A.337).
Oregon: Emphasis on Transparency
Oregon’s SB 1571 requires candidates and committees to disclose the use of synthetic content in political ads. Unlike other states, Oregon has refrained from imposing criminal penalties, focusing instead on proactive transparency.
Emerging Laws Elsewhere
- New Mexico (HB 182) and Florida (HB 919) have enacted statutes targeting political deepfakes, including criminal penalties and labeling mandates.
- Utah’s AI Policy Act (SB 149) requires disclosures for AI-generated content under consumer protection law and establishes oversight authorities.
- Tennessee’s ELVIS Act provides civil and criminal remedies for the unauthorized AI reproduction of voices and likenesses.
III. Legal Rationale for Disclosure
Disclosure laws are typically designed to withstand First Amendment scrutiny by focusing on informing rather than restricting speech. Courts have historically upheld disclosure regimes in campaign finance and advertising contexts (see Citizens United v. FEC, 558 U.S. 310 (2010)).
By requiring disclaimers or watermarks, these laws preserve expressive freedoms while mitigating deception. They aim to serve a compelling government interest—preserving electoral integrity and protecting individuals from misappropriation—without imposing prior restraints or content bans.
Nonetheless, legal scholars warn that enforcement can raise due process and vagueness concerns, particularly when definitions of “synthetic” or “materially deceptive” remain ambiguous.
IV. Enforcement and Practical Challenges
Even well-crafted disclosure laws face implementation hurdles:
- Technical complexity: Sophisticated AI tools can strip metadata or remove watermarks, making detection difficult.
- Enforcement burdens: Many state agencies lack the technical expertise to evaluate synthetic media or pursue violations promptly.
- Platform cooperation: Laws vary in whether platforms bear responsibility for hosting undeclared deepfakes.
- Temporal limitations: Disclosure requirements often apply only within narrow pre-election windows, potentially missing earlier or more subtle forms of influence.
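The first of these hurdles is easy to make concrete. Disclosure labels embedded as text metadata can be removed with a few lines of code, without touching the image itself. The sketch below (a hypothetical Python illustration, not drawn from any statute or named tool) strips the PNG text chunks where such labels are typically stored:

```python
import struct
import zlib

def png_chunk(ctype: bytes, data: bytes) -> bytes:
    """Assemble one PNG chunk: 4-byte big-endian length, type, data, CRC-32."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data) & 0xFFFFFFFF))

def strip_text_chunks(png: bytes) -> bytes:
    """Drop tEXt/iTXt/zTXt chunks -- the chunks where textual disclosure
    labels are usually stored -- leaving the pixel data untouched."""
    out = [png[:8]]                # keep the 8-byte PNG signature
    pos = 8
    while pos < len(png):
        (length,) = struct.unpack(">I", png[pos:pos + 4])
        ctype = png[pos + 4:pos + 8]
        end = pos + 12 + length    # length + type + data + CRC
        if ctype not in (b"tEXt", b"iTXt", b"zTXt"):
            out.append(png[pos:end])
        pos = end
    return b"".join(out)
```

That a metadata-based label can be erased this trivially is why some proposals favor robust (pixel-embedded) watermarks or platform-side detection over source labeling alone.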
Despite these challenges, public awareness of synthetic media manipulation has increased, pressuring platforms and advertisers to adopt voluntary labeling practices.
V. Looking Ahead: Toward Uniform Standards?
In the absence of federal action, several states—including New York, Colorado, and Hawaii—are pursuing more comprehensive regulatory regimes:
- Colorado’s AI Act mandates risk assessments and transparency obligations for high-risk AI systems, including media-generation tools.
- New York’s “digital replica” law, enacted in 2024, requires written consent for the use of a person’s likeness in AI-generated content and mandates explicit labeling of such material in political contexts.
Industry groups and advocacy organizations have begun calling for model disclosure legislation, similar to the Uniform Law Commission’s efforts in other emerging tech fields. A harmonized approach would ease compliance burdens, protect First Amendment interests, and increase the effectiveness of enforcement across jurisdictions.
Conclusion: Regulation Through Transparency
As generative AI accelerates the creation of persuasive, hyper-realistic content, lawmakers are responding with targeted disclosure mandates. These laws reflect a regulatory philosophy rooted in transparency, not prohibition—a constitutional middle ground in an age of synthetic truth.
For legal practitioners, keeping pace with this fragmented landscape is essential. Whether advising political clients, reviewing ad copy, or addressing reputational risk, understanding the contours of deepfake disclosure law has quickly become part of the modern compliance toolkit.