May 9, 2025
X Corp. Takes On Minnesota: Deepfakes, Free Speech, and the Legacy of Moody v. NetChoice

Amy Swaner
Executive Summary
In X Corp. v. Ellison, a recent lawsuit against the state of Minnesota, the social media platform challenges Minnesota Statute § 609.771, which criminalizes the knowing distribution of materially deceptive deepfake media intended to influence elections. X Corp. argues the law is overbroad, vague, and a content-based restriction that infringes on its First Amendment right to editorial discretion. It also claims the statute functions as a prior restraint and imposes constitutionally impermissible burdens on protected political speech.
The lawsuit follows the Supreme Court’s decision in Moody v. NetChoice, which held that platforms' curation of user content is itself protected speech. That precedent strengthens X Corp.’s argument that Minnesota cannot compel or penalize its content moderation decisions—even in pursuit of election integrity.
While Minnesota will likely argue the law is narrowly tailored to combat fraud and deception, Moody makes clear that even well-intentioned laws cannot override platform autonomy unless they meet the strictest constitutional scrutiny. The case poses a fairly novel and high-stakes question: can the government regulate harmful AI-generated election content without violating the First Amendment?
Introduction
In an election year marked by concerns over AI-driven disinformation, X Corp. (formerly Twitter) has filed a high-profile federal lawsuit challenging Minnesota's new statute, Minn. Stat. § 609.771. This law criminalizes the knowing distribution of materially deceptive synthetic media with the intent to influence voters within 90 days before an election.
The lawsuit, X Corp. v. Ellison, squarely pits two constitutional values against each other: the right to free expression and the state interest in protecting electoral integrity. But this challenge does not arise in a vacuum. It follows directly on the heels of the Supreme Court’s decision in Moody v. NetChoice, LLC, 603 U.S. 707 (2024), which clarified the First Amendment protections afforded to social media platforms. Understanding Moody is critical to evaluating X Corp.'s challenge.
The case represents the first major constitutional test of how AI-generated synthetic media intersects with long-standing First Amendment doctrine. It will likely influence not only election law but the broader contours of digital expression in the AI era.
Deepfakes and the Election Threat
Deepfakes are AI-generated images, videos, or audio that falsely depict individuals doing or saying things they never did. Powered by generative adversarial networks (GANs) and deep learning models, these technologies have become increasingly sophisticated, accessible, and fast to deploy. What once required technical expertise and vast computing resources can now be done on a smartphone in a matter of seconds.
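To make the underlying mechanism concrete, the sketch below shows the adversarial training loop that gives GANs their name: a generator learns to produce synthetic samples while a discriminator learns to flag them as fake, each improving against the other. This is a minimal, illustrative example in PyTorch; the tiny networks and dimensions are toy placeholders, not a production deepfake system.

```python
# Minimal, illustrative GAN training loop (PyTorch).
# Toy dimensions and networks are placeholders, not a real deepfake pipeline.
import torch
import torch.nn as nn

LATENT_DIM, DATA_DIM = 64, 784  # e.g., a flattened 28x28 image

# Generator: maps random noise to a synthetic sample.
G = nn.Sequential(nn.Linear(LATENT_DIM, 256), nn.ReLU(),
                  nn.Linear(256, DATA_DIM), nn.Tanh())

# Discriminator: scores whether a sample looks real or generated.
D = nn.Sequential(nn.Linear(DATA_DIM, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(real_batch: torch.Tensor) -> None:
    batch = real_batch.size(0)
    noise = torch.randn(batch, LATENT_DIM)
    fake = G(noise)

    # Discriminator update: learn to separate real from fake.
    opt_d.zero_grad()
    d_loss = (loss_fn(D(real_batch), torch.ones(batch, 1)) +
              loss_fn(D(fake.detach()), torch.zeros(batch, 1)))
    d_loss.backward()
    opt_d.step()

    # Generator update: learn to fool the discriminator.
    opt_g.zero_grad()
    g_loss = loss_fn(D(fake), torch.ones(batch, 1))
    g_loss.backward()
    opt_g.step()
```

Real deepfake pipelines layer far more machinery (face encoders, diffusion models, post-processing) on top of this loop, but the core dynamic, a generator optimized until its output fools a trained detector, is exactly why detection tools struggle to keep pace with generation.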
In the election context, deepfakes pose a unique danger: a convincing video of a candidate saying something inflammatory can be generated and go viral within hours, shaping public opinion before the truth can catch up. This ‘velocity of deception’ presents a critical challenge for voters, platforms, and regulators alike. Moreover, detection tools still lag far behind generation capabilities, exacerbating the risk of irreparable harm to candidates and public trust.
Historical Context: Election Integrity Measures
Laws and other innovations to protect fair and free elections are nothing new. The 19th century featured rampant electoral fraud, with political machines like Tammany Hall organizing multiple voting, ballot box stuffing, and voter intimidation. Technological solutions emerged as early as 1858, when Samuel C. Jollie patented transparent glass ballot boxes that allowed the public to see each ballot placed into the box, and each ballot removed to be counted. By the 1880s, Progressive-era reformers drove adoption of the Australian (secret) ballot, ending the era when parties produced their own distinctive ballots that made vote-buying enforceable. Claims of voter fraud have historically been weaponized against marginalized groups, with exaggerated allegations targeting immigrants, as Harvard historian Alexander Keyssar notes. By World War I, most states had implemented registration laws and permanent electoral commissions to combat fraud, showing that balancing election security with democratic access has been a persistent challenge in American governance for over 200 years—well before the digital era introduced new forms of potential deception.
Minnesota’s Deepfake Law: Minn. Stat. § 609.771
Minnesota's statute was enacted to prevent this type of last-minute, high-impact deception. The law prohibits the knowing dissemination of, or agreement to disseminate, synthetic media that is materially deceptive, intended to injure a candidate or influence an election, and published within 90 days before a political party convention or after absentee voting begins. Penalties range from misdemeanor fines to felony charges, depending on severity. The statute also authorizes injunctive relief, which may be sought by public officials or affected individuals.
Supporters argue the law is a narrowly targeted response to the unique threat posed by synthetic media. Opponents, including X Corp., warn that it risks sweeping in protected political expression and replacing platform discretion with government judgment.
X Corp.'s Constitutional Challenge
X Corp. raises a number of constitutional arguments:
Overbreadth
The statute criminalizes a wide swath of expression and does not distinguish parody, satire, or political hyperbole, all of which have long enjoyed First Amendment protection.
Under Broadrick v. Oklahoma, 413 U.S. 601 (1973), a law is overbroad if it punishes a substantial amount of protected speech relative to its legitimate applications. X argues that the law will chill political engagement, especially when platforms must make rapid moderation decisions under threat of prosecution. In effect, X contends the law conscripts platforms into acting as makeshift judges and juries over political speech.
Vagueness
X contends the statute fails to provide fair notice of what conduct is prohibited. Terms like "materially deceptive" and "intent to influence an election" are subjective and ambiguous. As Grayned v. City of Rockford, 408 U.S. 104 (1972), makes clear, vagueness in laws touching on speech is especially suspect, since it fosters arbitrary enforcement and over-deterrence.
Content-Based Regulation
The statute regulates speech based on its subject matter: political speech about elections. Under Reed v. Town of Gilbert, 576 U.S. 155 (2015), such content-based laws must satisfy strict scrutiny, meaning the state must show a compelling interest and employ the least restrictive means of serving it. X argues Minnesota's statute fails this test.
Editorial Discretion
The most significant new argument, based on the Court's holding in Moody, is that platform curation is itself a protected form of expression. In Moody, the Court held that social media platforms exercise editorial discretion akin to newspapers: their choices about what to display or remove are protected speech, and laws that penalize platforms for declining to remove certain content interfere with that editorial function. This shift elevates platform curation to the level of constitutionally protected editorial judgment, limiting the government's power to penalize platforms for their moderation decisions.
Prior Restraint
Section 609.771 permits injunctive relief against those “reasonably believed to be about to violate” the statute. This pre-enforcement restraint is a classic example of a prior restraint, which is almost always unconstitutional unless the government meets the most demanding standards. See Nebraska Press Ass'n v. Stuart, 427 U.S. 539 (1976).
Due Process
Building on the vagueness claim, X invokes Village of Hoffman Estates v. Flipside, 455 U.S. 489 (1982), arguing that criminal laws regulating speech must be especially clear to avoid suppressing protected activity.
Section 230 Preemption
While Section 230 of the Communications Decency Act typically immunizes platforms from liability for user content, X’s claim is weaker here due to Section 230(e)(3), which allows state criminal enforcement.
X Corp. argues that Minnesota’s deepfake statute places it in an untenable constitutional position by forcing it to choose between two constitutionally fraught alternatives. On the one hand, the platform may choose to leave user-generated political content, such as AI-generated images, videos, or audio, in place, risking criminal prosecution or injunctive relief if the state later deems that content “materially deceptive” and “intended to influence an election.” On the other hand, to avoid liability, X Corp. could preemptively remove or suppress a wide range of election-related content, including satire, parody, or controversial political speech, thereby undermining its editorial discretion and chilling protected expression. According to X, this dilemma—comply or censor—is precisely the type of unconstitutional burden on speech that the First Amendment forbids.
Moody v. NetChoice: Recalibrating the Constitutional Framework
The Supreme Court’s decision in Moody v. NetChoice transformed the legal landscape. The Court vacated the lower court rulings and remanded the cases for further proceedings, instructing the lower courts to properly conduct a facial First Amendment analysis, taking into account the full scope of the laws and the specific constitutional challenges. The Court held unequivocally that a platform’s curation of third-party content—what to amplify, suppress, or organize—is expressive conduct protected by the First Amendment.
The Court rejected arguments that the government could regulate platforms to ensure ideological balance or correct perceived bias. Notably, the Court drew a direct line to Miami Herald v. Tornillo, 418 U.S. 241 (1974), reaffirming that government may not interfere with editorial control, whether in print or online. Private entities, including social media platforms, must remain free to shape their own expressive products without government interference.
Applied to § 609.771, Moody strongly supports X Corp.'s claim that the statute unlawfully intrudes on platforms' protected expressive activity. Even though Minnesota's law targets fraud, the enforcement mechanism burdens the platform's editorial choices—subjecting them to criminal sanction if they err in favor of keeping up politically sensitive material.
While Moody strengthens X Corp.’s position, Minnesota is likely to raise several counterarguments aimed at distinguishing its statute from the laws struck down in that case.
Minnesota's Anticipated Defenses
The state of Minnesota has not yet filed its Answer to this lawsuit. However, X Corp. is bringing a facial First Amendment challenge, and those are difficult to win. Here is what I believe the state will likely argue in defending this case:
Lack of Standing
Minnesota might file a motion to dismiss arguing that X Corp. lacks standing because it has not faced enforcement and has not identified any imminent or specific violation. The state may contend that X’s alleged injury is too speculative, particularly given the statute’s scienter requirement and narrow scope. Without a concrete threat of prosecution or clearly chilled speech, Minnesota could argue that X cannot demonstrate the injury-in-fact required for Article III standing.
This will likely be unsuccessful, however, because courts routinely permit facial First Amendment challenges, or pre-enforcement First Amendment claims. Courts have long held that a credible threat of enforcement, even in the absence of a pending prosecution, is sufficient to establish injury-in-fact in First Amendment cases. See Virginia v. American Booksellers Ass’n, 484 U.S. 383 (1988).
Targeting Unprotected Categories
Drawing on United States v. Alvarez, 567 U.S. 709 (2012), Minnesota may argue that deliberately deceptive deepfakes created with intent to mislead voters fall within historically unprotected categories of speech like fraud and defamation. Because its law is limited to intentionally misleading material disseminated so close to an election that there is little time to correct the record, Minnesota will likely argue that the conduct it targets is tantamount to fraud.
Robust Scienter Requirements
The law applies only to knowing conduct with specific intent to deceive. This scienter element distinguishes the statute from vague or overbroad laws that sweep in innocent mistakes, parody, or satire. It also strengthens Minnesota's argument that it is targeting unprotected fraud, since intent is a well-settled element of fraud.
Narrow Tailoring
Defining clear limitations on the regulation of speech can turn an otherwise invalid restriction on free speech into a valid, enforceable one. Minnesota's statute includes temporal limits (90 days pre-election), applies only to synthetic media, and reaches only knowing, intentional conduct. Combined with the compelling interest in election integrity recognized in Burson v. Freeman, 504 U.S. 191 (1992), Minnesota may argue that this narrowly tailored law withstands strict scrutiny.
Clear Definitions and Notice
Minnesota will likely argue that its definitions provide reasonable notice, especially given the sophistication of the platforms that would be subject to enforcement. Due process is not violated merely because some interpretation is required.
No Section 230 Conflict
Minnesota will likely point out that while Section 230 gives platforms broad latitude in reviewing posts, § 230(e)(3) expressly preserves state criminal enforcement.
The Post-Moody Challenge: Can Free Speech Survive Deepfakes?
Despite these arguments, Minnesota is unlikely to defend this challenge successfully. Moody reshapes the election battlefield: it elevates editorial discretion to the same constitutional level as journalistic judgment and rejects the notion that the public good, even democratic health, can justify forced editorial decisions. Courts evaluating § 609.771 must now ask whether any statute that chills a platform's decision to host or remove political content can survive.
This doesn’t mean all deepfake regulation is unconstitutional. But it does mean that any law that imposes penalties on platforms for failing to suppress speech will likely have to survive the most exacting scrutiny. That is a steep climb. X Corp. seeks declaratory and injunctive relief, asking the court to block enforcement of § 609.771 before any prosecution occurs. This is a common remedy in pre-enforcement First Amendment challenges, where the mere existence of a law can chill speech. If the court agrees that the statute likely infringes protected expression, it may grant a preliminary injunction while the case proceeds.
Best Practices for Lawyers: Digital Speech Regulation
Use Precise Language
Define key terms narrowly when drafting or evaluating laws. Avoid vague phrases like “materially deceptive” unless they are clearly scoped to prevent overbreadth and vagueness challenges.
Treat Platform Moderation as Expression
Recognize that content curation and moderation decisions are expressive conduct. Encourage clients to document their editorial standards and rationale as part of their First Amendment strategy.
Weigh the Role of Section 230, Without Overrelying On It
While Section 230 provides strong civil immunity, it does not bar state criminal enforcement. Combine statutory defenses with robust constitutional arguments, particularly under the First Amendment.
Conclusion
X Corp. v. Ellison is not just a dispute over synthetic media. It is the first major test of how Moody v. NetChoice will be applied beyond content moderation laws. If courts follow Moody's logic rigorously, Minnesota's well-intentioned law will likely fail not because its purpose is illegitimate, but because its method infringes on editorial autonomy that the Court has found is protected by the First Amendment.
With Moody setting a high constitutional bar for laws of this kind, states will need to craft their deepfake regulations with exceptional care and precision.