
Introduction: When the Face Isn’t Real, but the Influence Is

They don’t age, they don’t sleep, and they never miss a content deadline. Synthetic influencers—AI-generated personalities like Lil Miquela, Imma, and Shudu—are transforming the influencer marketing industry. Meanwhile, deepfake technology is evolving from novelty to commercial tool, enabling advertisers to create hyper-realistic videos of celebrities, influencers, or even unknown individuals—without their consent.

As brands increasingly leverage AI-generated personas and deepfake likenesses in advertising and content, a legal reckoning is emerging around the right of publicity—the right to control the commercial use of one’s name, image, and likeness (NIL). This digital frontier raises high-stakes questions:

  • When does an avatar or AI-generated face violate someone’s publicity rights?
  • Can a fictional influencer be protected like a human one?
  • Are current laws equipped to address synthetic likeness exploitation?

This article examines the collision between deepfake branding, synthetic influencers, and right of publicity law, exploring the legal gaps, litigation trends, and regulatory responses now shaping this high-tech identity crisis.

I. The Rise of Synthetic Influencers and AI Likenesses

Synthetic influencers are AI-created digital characters designed to mimic human personalities. They post on social media, endorse products, and engage fans just like real-life influencers. Brands love them for their control, reliability, and cost-efficiency.

Meanwhile, deepfake tools can now produce videos that show celebrities or influencers speaking brand slogans—without setting foot in a studio. Tools like Synthesia, D-ID, and HeyGen can simulate speech, facial expressions, and body language in convincing fashion, often trained on publicly available content.

But with power comes legal peril.

II. Right of Publicity: The Legal Backbone of Identity Protection

The right of publicity gives individuals the legal right to control and profit from commercial uses of their identity. Unlike copyright or trademark law, it’s rooted in privacy and personality rights and is primarily governed at the state level.

Common elements of a right of publicity claim:

  • Use of name, image, voice, or likeness
  • For a commercial purpose
  • Without consent
  • Causing injury or misappropriation

More than 30 U.S. states recognize this right, either via statute or common law. However, state-by-state inconsistency and the lack of a federal standard make enforcement complicated—especially in digital contexts where content travels globally and instantly.

III. When Deepfakes Cross the Legal Line

Deepfake branding—where synthetic video or voice is used to simulate a real person endorsing a product—raises several legal red flags:

1. Unauthorized Commercial Use

Creating a synthetic video of a public figure saying “I love this protein shake” without their consent is a textbook right of publicity violation.

Example: A 2024 class action filed by multiple TikTok creators alleged that an AI marketing firm had cloned their voices and faces into product reviews they never made—sparking public outcry and federal investigation.

2. False Endorsement Under the Lanham Act

If a deepfake suggests a false affiliation or endorsement, it may also violate the Lanham Act, which prohibits false or misleading representations of sponsorship, affiliation, or endorsement in commerce.

3. Emotional and Reputational Harm

Beyond commercial loss, manipulated likenesses can damage a person’s reputation—raising potential claims for defamation, false light, or intentional infliction of emotional distress.

IV. Are Synthetic Influencers Protected Like Humans?

Ironically, while real people struggle to protect their likeness, fictional AI personas may enjoy stronger protections under IP law.

IP Protections for Virtual Influencers:

  • Copyright may apply to their visual design, voice, and narratives.
  • Trademark may protect their name, persona, or signature phrases.
  • Trade dress may apply to their consistent visual branding.

But these protections accrue to the companies or creators—not the “person” being simulated.

This creates a dangerous asymmetry: human influencers may be deepfaked with little recourse, while brands can fully own and defend synthetic personas that compete for the same sponsorships.

V. The Legal Gaps and Emerging Threats

The rapid advancement of generative AI and deepfake tools has outpaced traditional legal frameworks.

Major Legal Gaps:

  • No federal right of publicity law in the U.S.
  • Lack of harmonized global standards, making cross-border enforcement difficult
  • AI-generated likenesses that resemble “composite” identities (e.g., someone who looks like many real people but isn’t exactly any of them)

Risk Scenario:

An AI influencer trained on facial data from real creators bears a striking resemblance to a popular TikToker but is “technically synthetic.” Can the real creator sue? Possibly—but proving appropriation, injury, and intent becomes far more difficult.

VI. Legislative Responses and Industry Trends

Several jurisdictions are attempting to respond.

1. U.S. State-Level Action

  • California’s AB 730 prohibits materially deceptive deepfakes of political candidates within 60 days of an election, absent disclosure.
  • New York Civil Rights Law § 50-f (effective 2021) protects deceased performers’ digital likenesses for 40 years after death.
  • Tennessee’s ELVIS Act (2024) adds voice to the state’s protected attributes and creates liability for unauthorized AI cloning of a person’s likeness.

2. Federal Proposals

Proposed federal legislation, including the NO FAKES Act (introduced in 2023), would create a national right against unauthorized digital replicas of a person’s voice or likeness, particularly targeting AI voice and face replication.

3. Platform-Level Safeguards

YouTube, Meta, and TikTok have started requiring disclosure of AI-altered content and offer takedown processes for unauthorized synthetic media—but enforcement remains inconsistent.

VII. Strategic Guidance for Brands and Legal Teams

To avoid becoming the next cautionary headline, legal teams should:

1. Obtain Explicit NIL Rights

Ensure all influencer and celebrity agreements clearly cover:

  • Use in AI-generated media
  • Voice and likeness training rights
  • Postmortem and perpetual use (where allowed)

2. Vet Synthetic Content for Risk

Use AI audit tools to flag unintended likeness overlap and consult with IP counsel before launching synthetic characters or campaigns.
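In practice, a likeness audit often reduces to comparing embedding vectors produced by a face or voice model against a library of known real people. The article does not name a specific tool, so the sketch below is purely illustrative: the embeddings are placeholder vectors, and the threshold value is an assumption that real systems would tune empirically.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def flag_likeness_overlap(candidate: list[float],
                          known_faces: dict[str, list[float]],
                          threshold: float = 0.9) -> list[str]:
    """Return names of real people whose stored embeddings sit
    suspiciously close to the synthetic candidate's embedding."""
    return [name for name, emb in known_faces.items()
            if cosine_similarity(candidate, emb) >= threshold]

# Placeholder embeddings; real systems use high-dimensional model outputs.
known = {"creator_a": [0.9, 0.1, 0.2], "creator_b": [0.1, 0.8, 0.5]}
synthetic = [0.88, 0.12, 0.21]  # closely tracks creator_a's vector

print(flag_likeness_overlap(synthetic, known))  # → ['creator_a']
```

A flagged match would not establish liability by itself; it simply tells counsel which real individuals to clear rights with (or redesign away from) before a synthetic persona launches.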

3. Monitor Platform and Regulatory Changes

Keep updated on disclosure requirements, takedown policies, and state-level statutes affecting AI media.

4. Educate Marketing and Creative Teams

Develop internal policies on the ethical and legal boundaries of synthetic branding and influencer creation.

Conclusion: Owning Your Face in the Age of AI

In an era where an influencer can be trained—not born—and where your digital twin can sell products while you sleep, the right of publicity has never been more important—or more precarious.

As technology democratizes media creation, the legal system must evolve to protect real identity in an unreal world. Until then, brands, creators, and legal professionals must navigate the gray zones of AI-driven influence with caution, clarity, and consent.
