March 7, 2025
AI Regulatory Frameworks

Amy Swaner
Executive Summary
This article examines the legal implications of deepfake technology—AI-generated synthetic media that can manipulate a person's likeness with unprecedented accessibility and realism. As illustrated by Scarlett Johansson's advocacy following unauthorized use of her image, deepfakes raise critical questions about liability and the adequacy of existing laws.
Existing tort frameworks for fraud and defamation can address many deepfake harms, as the intent and result remain the same regardless of whether traditional or AI methods created the content. Criminal laws apply to deepfakes used for extortion or non-consensual intimate imagery, though comprehensive federal legislation remains lacking. Deepfakes also challenge intellectual property doctrines by blurring lines between original and derivative works.
First Amendment principles extend protection to certain deepfakes, particularly those created for parody, satire, or political commentary. The regulatory landscape is evolving rapidly, with states like California restricting political deepfakes near elections while courts apply traditional constitutional analyses to these novel technologies.
Ultimately, effective regulation will require more nuanced legal tests, technical authentication solutions, and clear platform guidelines to balance technological innovation with individual rights.
Introduction
Actress Scarlett Johansson recently began calling for AI laws to protect people from having their likenesses hijacked by AI users to create fake images, voices, and language. You can catch a news story and video here. Her advocacy is not merely her latest charitable cause, nor is it theoretical. She began speaking out for these laws after others used her image, trading on her name and face recognition, in a self-serving attempt to further their own agenda. Artificial intelligence can create hyper-realistic videos, images, and voice clips of anyone doing or saying anything. As a result, the legal system faces unprecedented questions: Who is liable for the use of fakes and deepfakes? What should the consequences be? Are new laws needed? And finally, how does the First Amendment play into this issue, which has the potential for devastating personal and societal harm?
What is a Deepfake?
Generative AI is full of "fakes." GenAI models ingested vast numbers of images, sounds, and videos during their training and now use that multi-modal training to create new content. Most of these "fakes" are useful or, at the very least, harmless. We use them in pitch decks, educational materials, and even to create images for articles on AI. The same cannot be said for their mischievous cousin, the deepfake.
Deepfakes are synthetic media—often video, audio, or images, but sometimes text—created using generative artificial intelligence (GenAI) to replace or manipulate a person's likeness, voice, or actions. The term derives from "deep learning," the subset of machine learning commonly used to create such media. Deepfakes often rely on generative adversarial networks (GANs) or similar AI techniques to generate highly realistic yet fake content. While deepfakes can be used for legitimate purposes such as entertainment or education, they are also frequently associated with malicious activities like misinformation, political influence, blackmail, identity theft, non-consensual pornography, reputational damage, and economic harm.
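For readers curious about the mechanics behind that term, the adversarial training at the heart of a GAN can be sketched in a few lines. The following is a deliberately toy illustration in PyTorch: random vectors stand in for real photographs, and the tiny networks are placeholders, not anything resembling an actual deepfake system.

```python
import torch
import torch.nn as nn

# Generator: maps random noise to a flat "image" vector.
G = nn.Sequential(nn.Linear(64, 128), nn.ReLU(),
                  nn.Linear(128, 784), nn.Tanh())

# Discriminator: scores how "real" an input looks (1 = real, 0 = fake).
D = nn.Sequential(nn.Linear(784, 128), nn.LeakyReLU(0.2),
                  nn.Linear(128, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

# Stand-in for a batch of real training images, scaled to match Tanh output.
real_images = torch.rand(32, 784) * 2 - 1

for step in range(100):
    # 1) Train the discriminator to separate real from generated samples.
    fake_images = G(torch.randn(32, 64)).detach()
    d_loss = (loss_fn(D(real_images), torch.ones(32, 1)) +
              loss_fn(D(fake_images), torch.zeros(32, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the generator to produce samples the discriminator calls real.
    g_loss = loss_fn(D(G(torch.randn(32, 64))), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

The two networks improve in tandem: each round, the generator gets slightly better at fooling the discriminator, which is why mature GAN outputs can be so difficult to distinguish from authentic media.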
Deepfakes raise a number of legal issues. What qualifies as a deepfake? Are they prohibited under current laws or are new laws needed to govern them? And how can we avoid being fooled by deepfakes, which would render them far less harmful? This article explores the legal implications and considerations surrounding deepfakes.
Are Current Laws Sufficient to Combat Deepfakes?
Deepfakes are used for three (3) specific purposes: to embarrass, mislead, or entertain. Various forms of media have been used for these exact three purposes for centuries. Though deepfakes seem new, they are just the latest iteration of doctored images. Their real malignancy comes from the accessibility, ease, and accuracy of these newest doctored images. Rather than skill and intelligence, a GenAI user needs only imagination and access to a powerful GenAI tool. If these deepfakes are just the latest iteration of a long-standing harm, can existing laws sufficiently combat them? We’ll look at the harms of deepfakes, and consider whether existing laws are sufficient to provide adequate remedies.
TORTS
The accessibility and sophistication of deepfake technology have expanded the landscape of potential tortious conduct. When examining the legal remedies available to victims of harmful deepfakes, several established tort doctrines promise recourse. But are they sufficient? While new technology may present novel factual scenarios, the fundamental legal principles governing intentional misrepresentation, reputational harm, and negligent conduct remain applicable.
Generally, when the analysis focuses on the underlying tortious behavior rather than the technological means used to perpetrate it, tort laws are sufficient. The following tort frameworks are particularly relevant to addressing deepfake harms, with fraud and defamation being the most frequently applicable.
Fraud
Fraud is a deliberate deception intended to secure unfair or unlawful gain or to deprive a victim of a legal right. GenAI deepfakes have already wreaked havoc in the service of fraud; in one case, a deepfake video convinced a worker to release $25 million to a fake CFO. The worker was skeptical of the transaction until he joined a video chat with several people, including the CFO. Every other "person" on the video chat was a deepfake.
Consider the following two scenarios, which show how analogous typical fraud claims and deepfake fraud claims are.
Scenario #1: A startup founder is seeking investment for a new AI software company. To attract funding, the founder:
Falsely claims that the company has secured contracts with major corporations.
Alters financial statements to show inflated revenue and profits.
Provides fake customer testimonials and case studies to mislead investors.
Relying on this information, investors contribute millions to the startup. Later, it is revealed that the contracts were fake, revenue was exaggerated, and customer testimonials were fabricated. The company collapses, and the investors lose their money. This type of harm falls squarely under common law and statutory fraud: knowingly false claims intended to mislead and to change behavior, for the benefit of the speaker at the expense of the one misled.
Scenario #2: Now imagine that, in the same situation, the startup founder uses GenAI to create a deepfake video of a CEO announcing false company information to mislead potential investors. The founder could use a tool such as Runway's Gen-2 to create videos showing how the imaginary new software looks and works, along with testimonials from imaginary customers. The founder could use a free GenAI tool such as ChatGPT, Grok, Gemini, or others to create fake financials. Although GenAI helped create the misleading information, the intent and the result are the same: to manipulate and mislead investors. The same laws used to punish the "analog" perpetrator of fraud can be used to punish the AI perpetrator. The same holds true for defamation.
Defamation
With GenAI readily accessible, cheap, and easy to use, and with such realistic output, it's no wonder that we have seen an explosion of deepfakes, including defamatory deepfakes. Defamation is a false statement presented as fact that causes injury or damage to a person's reputation. Milkovich v. Lorain Journal Co., 497 U.S. 1 (1990). However, even false statements enjoy some level of First Amendment protection, which constrains how defamation may be punished.
Recent examples of defamatory deepfakes include manipulated videos of Donald Trump appearing to make inflammatory statements about minority groups, and fabricated footage of Kamala Harris apparently calling for extreme policy positions she never actually endorsed.
Public figures pursuing defamation claims must prove that the defamer acted with actual malice—knowledge of falsity or reckless disregard of the truth. New York Times v. Sullivan, 376 U.S. 254 (1964). This standard becomes particularly relevant with deepfakes, as the very nature of their creation implies knowledge of falsity.
Scenario #1: For instance, if political Candidate A falsely claims in online forums and on social media that a competitor, Candidate Z, has accepted bribes and engaged in human trafficking, that is actionable defamation if the claims are false and made with actual malice. Candidate Z can sue for defamation and recover damages upon proving reputational harm.
Scenario #2: Now suppose Candidate B uses GenAI to create fake posts on social media sites alleging that Candidate Y received bribes. Candidate B could even use GenAI to fabricate a video of Candidate Y receiving bribes, or a video showing Candidate Y holding people against their will and staging activity that makes it appear they are being trafficked. Creating and distributing such deepfake videos showing a political candidate accepting bribes and engaging in human trafficking would be defamatory. So long as Candidate Y can prove that the videos were false and damaged their reputation, Candidate Y can invoke exactly the same laws as Candidate Z in Scenario #1.
Deepfakes used to perpetrate fraud and defamation can be combated using the same laws that punish their low-tech analogs. For these claims, no new law is needed.
Negligence
Negligence principles provide a beneficial legal framework for addressing deepfake harms that occur without malicious intent but still result from a creator's failure to exercise reasonable care. Under traditional negligence doctrine, deepfake creators and distributors may be held liable if they breach their duty of care to foreseeable victims through actions such as inadequate disclosure of synthetic content, insufficient security measures protecting deepfake technology, or careless distribution that enables harmful misuse.
The standard of care for deepfake creators is evolving alongside the technology, with courts likely to consider industry best practices, available safeguards, and the potential severity of harm when determining liability. For instance, a creator who fails to implement readily available watermarking technology on a realistic deepfake, or fails to identify it as GenAI-created, might be found negligent if that content is subsequently mistaken for authentic material and causes demonstrable harm. As deepfake technology becomes more mainstream, we can expect negligence law to play an increasingly important role in establishing the boundaries of responsible creation and distribution, particularly in cases where intent to harm cannot be established but foreseeable damage nonetheless occurs.
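As a rough illustration of how inexpensive such a safeguard can be, the Python sketch below attaches a machine-readable disclosure to a generated PNG using the Pillow library. The field names are hypothetical, chosen only for illustration; real provenance standards such as C2PA define richer, cryptographically signed manifests, and plain metadata can be stripped, so this represents a floor of diligence rather than a robust watermark.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_as_synthetic(in_path: str, out_path: str, tool_name: str) -> None:
    """Embed a machine-readable disclosure that an image is AI-generated."""
    img = Image.open(in_path)
    meta = PngInfo()
    # Hypothetical disclosure fields for illustration only.
    meta.add_text("ai_generated", "true")
    meta.add_text("generator", tool_name)
    img.save(out_path, pnginfo=meta)

# Example (hypothetical filenames):
# label_as_synthetic("render.png", "render_labeled.png", "example-genai-tool")
```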
CRIMES
Extortion and Blackmail
Deepfakes have created powerful new tools for extortion and blackmail. Perpetrators typically create compromising synthetic media—often of a sexual or embarrassing nature—and threaten to release it unless the victim complies with demands for money, sexual favors, or other concessions. The criminal nature of this conduct is clear under both federal and state statutes prohibiting extortion, blackmail, and coercion, regardless of whether the threatened content is authentic or synthetic. The psychological impact on victims can be severe even when they know the content is falsified, as the potential public humiliation remains a credible threat. Courts have consistently held that such uses of deepfake technology fall outside constitutional protection, as they constitute true threats and criminal solicitation rather than protected expression. Prosecution of these cases presents unique challenges, the most difficult being jurisdictional issues when perpetrators operate across borders and technical hurdles in tracing the origin of anonymous deepfakes.
AI-Generated Non-Consensual Intimate Imagery or “DeepFake Porn”
This is commonly referred to as "AI-generated nonconsensual intimate imagery" or "AI-generated NCII." It's also sometimes called "deepfake pornography" when specifically referring to fabricated sexual content that uses someone's likeness without their consent.
Currently, no comprehensive federal law specifically targets AI-generated nonconsensual intimate imagery, leaving a patchwork of state regulations to address this growing concern. Several states have taken the initiative to fill this regulatory gap. California leads with two significant pieces of legislation: AB 602 and AB 1280, both explicitly designed to combat sexually explicit deepfakes. Similar protective measures have been enacted in Texas, Virginia, New York, and Minnesota. In Illinois, the Biometric Information Privacy Act (BIPA), though not created specifically for deepfakes, has proven applicable in certain cases involving the unauthorized use of biometric identifiers.
At the federal level, a number of legislative approaches have been proposed, but none has been enacted. The DEFIANCE Act (Disrupt Explicit Forged Images and Non-Consensual Edits Act) represented one of the most comprehensive attempts to address deepfake harms. Similarly, the proposed DEEPFAKES Accountability Act (Defending Each and Every Person from False Appearances by Keeping Exploitation Subject to Accountability Act) and the Preventing Harmful Image Manipulation Act aimed to create federal frameworks for combating this technology when used maliciously. For the time being, a comprehensive AI law—even for something as important as preventing NCII—is unlikely to be forthcoming.
In the absence of specific legislation, victims often turn to existing legal frameworks for recourse. Copyright laws offer protection when the original images belong to the victim. Defamation laws, right of publicity claims, and anti-harassment statutes can provide alternative legal avenues depending on the specific circumstances of each case.
The rapidly evolving nature of generative AI technology presents significant challenges for lawmakers and courts alike. These challenges are further complicated by jurisdictional issues, particularly when content is created or hosted internationally, placing it beyond the reach of domestic laws. Legal experts widely acknowledge that despite these various approaches, significant protection gaps remain, leaving many victims without clear or effective legal remedies. As technology continues to advance, the law struggles to keep pace, highlighting the urgent need for more comprehensive legal frameworks.
Revenge porn laws, originally designed to criminalize the nonconsensual distribution of real intimate images, struggle to address the unique challenges posed by AI-generated deepfake pornography. Many existing laws require that the images be authentic, meaning they must depict the actual victim rather than a synthetic creation. This loophole allows perpetrators to claim that no real photo or video was used, evading liability.
In 2019, Virginia became the first state to amend its existing "revenge porn" law to include provisions against nonconsensual sexual deepfakes. Va. Code § 18.2-386.2. The amendment made it unlawful to create a non-consensual sexual image "by any means whatsoever." Virginia later strengthened its stance by enacting Senate Bill 731. Texas also expanded its statutes to cover AI-generated nonconsensual intimate imagery (NCII). California passed two laws to combat deepfake porn (SB 926 and SB 981—both of which became effective January 1, 2025).
To date there is no comprehensive federal law that criminalizes the creation or dissemination of deepfake pornography. As a result, victims must rely on a patchwork of state laws, civil claims for defamation or invasion of privacy, or intellectual property laws, none of which were originally designed for this type of harm. The absence of clear legal recourse leaves victims vulnerable, making it imperative for lawmakers to adopt federal legislation specifically targeting AI-generated NCII, ensuring that perpetrators face appropriate legal consequences and victims have effective remedies.
Section 230 of the Communications Decency Act generally shields online platforms from liability for user-generated content, meaning that websites and social media companies are not legally responsible for hosting or distributing deepfakes, even if they cause significant harm. This legal immunity presents a major challenge in combating malicious deepfakes. I will explore the implications and impacts of Section 230 and potential reforms in a subsequent article.
Copyright and Trademark
Deepfakes pose significant challenges to copyright and trademark law by blurring the lines between original and derivative works. When deepfakes incorporate copyrighted images, videos, or audio without permission, they may constitute direct copyright infringement. At times, however, infringement may be inadvertent. This becomes particularly problematic when AI systems are trained on massive datasets of copyrighted materials without proper licensing or attribution. The transformative nature of deepfakes further complicates matters, as courts must determine whether these AI-generated works qualify as "fair use" or represent substantial copying of protected elements. For a more in-depth discussion of AI and copyright infringement, see this article.
From a trademark perspective, deepfakes can dilute or tarnish valuable marks by associating brands or personalities with unauthorized or potentially damaging content. When a deepfake places a celebrity endorsing a product they never actually promoted, it may constitute trademark infringement or false endorsement. Likewise, deepfakes featuring branded products in inappropriate contexts may damage brand reputation and consumer perception. The Rogers test, which balances trademark rights against artistic expression (Rogers v. Grimaldi, 875 F.2d 994 (2d Cir. 1989)), becomes increasingly difficult to apply when AI-generated content blurs the line between artistic commentary and commercial exploitation.1 As deepfake technology becomes more sophisticated and widespread, both copyright and trademark doctrines face mounting pressure to adapt to these novel forms of potential infringement that were inconceivable when these legal frameworks were established.
Legal Protections for Deepfakes
First Amendment Protections
The First Amendment protects the freedom of speech and expression, among other things. It allows people and entities to express themselves through words and images without governmental interference. The Supreme Court has consistently held that freedom of expression is a fundamental right essential to democratic society. New York Times v. Sullivan, 376 U.S. 254 (1964). All sorts of speech enjoy these protections, including political speech (Brandenburg v. Ohio, 395 U.S. 444 (1969)), and to a limited extent, even defamatory statements. Gertz v. Robert Welch, Inc., 418 U.S. 323 (1974).
The United States Supreme Court has held that technologies that aid human expression, such as the printing press, video recorders, and the internet, receive First Amendment protection. Reno v. ACLU, 521 U.S. 844 (1997). By that logic, to the extent GenAI helps users draft text or create expressive images, its output will be protected. But do these First Amendment protections extend to deliberately falsified images and video created using GenAI? Surprisingly, the answer is yes, to some extent. Such falsified statements and images fall into three categories: parody, satire, and defamation. We considered defamation above, and so will consider parody and satire in turn.
Parodies
Although parodies are not new—think of the movie Spaceballs spoofing the Star Wars films back in the late 1980s—they are easier to create and potentially more realistic using GenAI. Three well-circulated GenAI parodies are:
Better Call Trump—Donald Trump appeared as a Saul Goodman character explaining money laundering to Jared Kushner. Created by YouTubers who were showcasing the abilities of DeepFace GenAI software.
Fortune Telling—Snoop Dogg appeared as “Miss Cleo” reading the futures of other celebrities through tarot cards. Created by BrianMonarch.
Convenience Store Holdup—Donald Trump, Joe Biden, Elon Musk, Barack Obama, Kamala Harris, and other celebrities are shown attempting to hold up a convenience store. Created by AI Video Creations.
Parodies are generally protected as a form of speech under the First Amendment. Courts have recognized parody as entertainment and as a beneficial form of social commentary. Campbell v. Acuff-Rose Music, 510 U.S. 569 (1994). This protection was powerfully affirmed in Hustler Magazine v. Falwell, 485 U.S. 46 (1988), where the Supreme Court held that parodies of public figures, even when intended to cause emotional distress, are protected by the First Amendment. Hustler Magazine involved a parody advertisement suggesting that televangelist Jerry Falwell had engaged in an incestuous relationship: clearly false and potentially emotionally harmful content, but nevertheless protected as parody. This precedent is particularly relevant to deepfakes, as it suggests that even highly manipulated content may receive constitutional protection when presented as parody. To maintain this protection, however, parodies must steer clear of unprotected speech such as obscenity or incitement to lawless action.
In addition, trademark law tolerates the incidental use of identifying images or phrases closely associated with a particular product or service. If an incidental use of a trademark or service mark adds to the deeper meaning of the parody, its use is generally acceptable under trademark law. Rogers v. Grimaldi, 875 F.2d 994 (2d Cir. 1989).
Courts have also generally declined to let trademark owners block a parody that comments on the symbol, phrase, image, or product itself. Louis Vuitton Malletier S.A. v. Haute Diggity Dog, LLC, 507 F.3d 252 (4th Cir. 2007).
Satire
Satire is the use of humor, irony, exaggeration, or ridicule to expose and criticize people's stupidity or vices, particularly in the context of contemporary politics and other topical issues. For example, a deepfake video showing world leaders as kindergarteners arguing over toys to comment on international relations would be considered satire.
Satire is also related to parody but differs in that satire uses a creative work to criticize something else, while parody uses some elements of the original work to criticize or comment on that work itself. Both forms of expression generally receive First Amendment protection, though satire may receive slightly less protection than parody in copyright cases. This distinction was highlighted in Campbell v. Acuff-Rose Music, Inc., 510 U.S. 569 (1994), where the Supreme Court recognized that parody has a stronger claim to fair use because it directly comments on the original work. Similarly, in Fisher v. Dees, 794 F.2d 432 (9th Cir. 1986), the court emphasized that parody, which comments on the original work, is more likely to be considered fair use than satire, which uses the original to comment on something else.
Political Speech
Although political speech doesn't receive absolute protection, it does receive First Amendment protection. The U.S. Supreme Court clarified that political speech deals with matters of public concern and governmental affairs. Connick v. Myers, 461 U.S. 138 (1983). This becomes crucial when considering deepfakes in political contexts.
One of the central reasons political speech enjoys heightened protection is its role in ensuring an informed electorate and holding government officials accountable. Citizens United v. FEC, 558 U.S. 310 (2010), reaffirmed that restrictions on political speech, particularly those aimed at limiting its source or funding, are subject to strict scrutiny to prevent undue government interference in public discourse.
However, deepfakes complicate this protection by blurring the line between legitimate political expression and deceptive misinformation. When AI-generated media is used to falsely attribute statements or actions to candidates or public officials, it raises unique legal and ethical concerns—does such content constitute protected political speech as a satire or parody, or is it a form of fraud, defamation, or election interference? And if deepfakes are part of freedom of expression as a parody, do they undermine public trust in all media? Courts and lawmakers must grapple with this evolving challenge, balancing free expression with the integrity of democratic processes.
Recent Legal Developments
The legal response to deepfakes is evolving rapidly as state and federal lawmakers attempt to address their risks. Several states, including California, Alabama, and Arizona, have enacted legislation specifically targeting harmful deepfake applications. A number of these laws are specific to deepfakes used to sway political elections. For example, Arizona's law prohibits the use of deepfakes of political candidates within 90 days before an election. California's AB 2839 and Colorado's House Bill 24-1147 mandate that certain election-related deepfake content include clear disclosures indicating that the media has been artificially altered. These efforts reflect a growing recognition that deepfakes can be weaponized to deceive the public, necessitating stronger legal safeguards.
Looking Ahead
As deepfake technology becomes more sophisticated and accessible, the legal system must evolve to address new challenges while preserving constitutional protections for legitimate speech. This may require:
Development of more nuanced tests for distinguishing protected from unprotected deepfake content.
Implementation of technical solutions for authenticating and tracking the origin of synthetic media (see the sketch following this list).
Creation of expedited legal remedies for victims of malicious deepfakes.
Establishment of clear guidelines and penalties for platforms hosting or distributing deepfake content.
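To give a flavor of the second item above, the following Python sketch shows the core idea behind content provenance systems such as C2PA: bind a media file to a record of its origin with a cryptographic hash, then sign the record so any tampering is detectable. The key handling and field names here are simplified assumptions; production systems use public-key signatures and standardized manifests rather than a shared secret.

```python
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"publisher-secret-key"  # hypothetical; real systems use PKI, not a shared secret

def make_provenance_record(media_bytes: bytes, creator: str) -> dict:
    """Bind a media file to its claimed origin with a hash and a keyed signature."""
    record = {
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
        "creator": creator,
        "timestamp": int(time.time()),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify(media_bytes: bytes, record: dict) -> bool:
    """Check that the media matches the record and the record is untampered."""
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(record["signature"], expected)
            and hashlib.sha256(media_bytes).hexdigest() == record["sha256"])

# A verified record tells a platform or court who published the file and proves
# the content has not changed since; any edit to the media breaks verification.
```

The design choice matters legally as well as technically: authenticating genuine media at creation scales better than trying to detect every fake after the fact, a point the Conclusion returns to.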
Conclusion
The emergence of deepfake technology is straining current legal frameworks. As we've explored throughout this article, deepfakes present unique challenges that test the boundaries of existing law and reveal gaps in our regulatory approach. While many deepfake harms can be addressed through traditional legal doctrines—from fraud and defamation to copyright and right of publicity—the unprecedented accessibility, scalability, and accuracy of this technology demand thoughtful reconsideration of how we balance competing interests.
The protection of legitimate expression, including parody, satire, and political commentary, must be carefully weighed against the profound threats that malicious deepfakes pose to individual dignity, public discourse, and democratic processes. As courts and legislators navigate this complex terrain, they must resist the temptation of knee-jerk regulation that might inadvertently interfere with protected speech. Instead, our legal system must develop nuanced frameworks that distinguish between creative expression and harmful manipulation, between innocent entertainment and intentional deception.
Looking forward, the most effective approach will likely combine legal innovation with technological solutions, platform accountability, and digital literacy. It will be easier to certify authentic images and videos than to root out every fake. Watermarking and content provenance systems may help authenticate genuine media, while expedited legal remedies can provide swift recourse for victims. Perhaps most importantly, the legal profession must take the lead in shaping this discourse—not merely reacting to technological developments but proactively designing frameworks that protect fundamental rights while addressing novel harms. Only through such thoughtful evolution can our legal system fulfill its essential purpose: safeguarding individual rights while preserving the shared foundations of truth upon which our democratic society depends.
1 However, see Jack Daniel’s Properties, Inc. v. VIP Products LLC, 599 U.S. 140, 143 S. Ct. 1578 (2023), challenging the Rogers Test.
NOTE -- nothing in this article is to be construed as legal advice. This article is for informational purposes only, and consists purely of the author's personal opinion. If you have legal questions or concerns, contact a legal professional as soon as possible.
© 2025 Amy Swaner. All Rights Reserved. May use with attribution and link to article.