October 21, 2025
AI in Legal Practice

Amy Swaner
Executive Summary
The proliferation of deepfake technology has created a dual crisis: the normalization of deceptive AI use in mainstream media erodes public trust in all evidence, while bad actors exploit this skepticism through the “Liar's Dividend”—dismissing authentic evidence as AI-generated. As detection capabilities fail to keep pace with improving AI, legal practitioners must implement robust authentication protocols, update contract provisions, and develop AI-specific discovery practices to preserve evidentiary integrity in an era where any media can be credibly questioned or falsified.
In 1865, printmaker William Pate took an image of the well-known slavery advocate John C. Calhoun and superimposed President Abraham Lincoln’s head onto Calhoun’s body. Images from the Library of Congress show this sleight of hand. Although Pate wasn’t using the images for an inherently bad reason, it was nonetheless a deception. Pate’s goal was to make President Lincoln appear more stately, more poised, more ‘presidential.’ What was, in the 1800s, no doubt a time-consuming feat of skill has become an easily achievable result with today’s technology.
We’ve been bombarded with deepfake incidents over the past couple of years: the AI-generated robocall using President Joe Biden’s cloned voice to urge Democratic voters not to participate in a primary election, the high school athletic director who used AI to fabricate audio targeting his school principal, and unauthorized clones of actors and actresses. We’re on the verge of being able to create fake but believable images, videos, and audio with nothing more than a phone and an internet connection. And we are becoming desensitized to the prevalent use of AI to create fake and deceptive images, audio, and video.
The Normalization of Deception
In January 2025, Netflix co-CEO Ted Sarandos confirmed the company used AI to create final footage in the Argentine series “El Eternauta,” boasting that the AI scene was delivered “ten times faster” than traditional VFX methods would have allowed. More troubling is widespread speculation that Netflix used AI to generate ‘evidence’ photographs in the true crime documentary “What Jennifer Did,” presenting those images to viewers as authentic police evidence without disclosure. Vogue magazine crossed a similar line this year: its August 2025 issue carried a Guess advertisement featuring an AI-generated model, with only minimal labeling.

The AI-generated model from the Guess® ad, published in the August 2025 issue of US Vogue.
The problem isn't that entertainment companies create fictional content; that's their business. The emerging crisis stems from AI being used deceptively in contexts where we expect authenticity. For the entertainment industry, those contexts include documentaries, news segments, and purported historical recreations. And abundant examples of this normalization of deception prime viewers to suspect every piece of evidence they encounter.
We are already beginning to treat everything we do not experience firsthand as a potential deepfake. This normalization sets dangerous precedents that reach across society, from courtrooms to political campaigns.
The Detection Challenge
Our brains remain remarkably adept at detecting artificial content through what roboticist Masahiro Mori termed the “Uncanny Valley” in 1970. We instinctively recognize when something is “off,” even if we can't articulate why. It’s an unsettling feeling when we realize there is a problem but can’t pinpoint exactly what is wrong. Despite all its capabilities, AI-generated content often betrays itself through temporal inconsistencies—hair flowing naturally in one frame but behaving oddly in the next, or shadows shifting in ways that don't match lighting sources.
There's also the paradox of perfection. AI learns from vast datasets and produces statistically correct output that lacks the specific imperfections that make real scenes authentic. A notable example of that paradox is the Guess advertisement mentioned above. Real cinematography has subtle asymmetries, lens characteristics, and human and technical imperfections that contribute to an organic feel. AI might create a perfectly composed and rendered shot that somehow feels too pristine for our imperfect world.
Yet these human detection abilities are beginning to fail us. As AI improves exponentially, the window between “obviously fake” and “indistinguishable from reality” is closing. A year ago, we could detect AI images by counting the fingers on people in an image; now, AI has solved that anomaly. Technological advances will continue to make the uncanny valley increasingly narrow. We are rapidly approaching the time when yesterday's detectable deepfake may be today's unquestioned forgery.
The Liar's Dividend in Action
Another consequence of deepfakes is the “Liar’s Dividend.” Researchers at the Brookings Institution have documented this phenomenon, in which the mere existence of deepfake technology allows bad actors to dismiss authentic evidence as AI-generated. Their recent study found that politicians who falsely cry “deepfake” about real scandals actually gain more public support than those who remain silent or apologize.
President Trump seemingly endorsed the Liar’s Dividend recently. Asked about a video showing items being thrown out of a White House window, he said the video was fake, even though the White House had earlier responded that it was part of the construction taking place there. Regardless of whether the video was a product of GenAI, President Trump’s recognition of the Liar’s Dividend came in the comment he made afterward. After lamenting the potential harms of AI, he said, likely somewhat tongue in cheek, “If something happens really bad, just blame AI.” (The link includes video of President Trump’s “just blame AI” statement at time code 1:31.)
The dividend collected by accusing evidence of being a deepfake comes largely from distraction. Those interested in the evidence turn their attention from the underlying issue to whether the image, audio, or video is authentic. Although this diversion seems to work, it also feeds the public’s mistrust of nearly everything not personally experienced.
For better or worse, humans grow complacent toward anything to which we are routinely exposed. The more common claims of deepfakes become, the more the public is inured to them, and the easier it becomes to question the veracity of all evidence. That reframing raises the burden of persuasion for truthful speakers, fuels cynicism and polarization (“nothing can be trusted”), and rewards strategic denial. Victims will likely be hit hardest, because it is far easier to cast aspersions and raise doubt than it is to prove something is not a fake, and the result is delayed accountability and additional reputational harm.
In legal and institutional settings, this skepticism translates into cost and friction. Courts, agencies, and newsrooms must devote time and resources to disputes over provenance and chain of custody before decision-makers will credit what they see or hear. That procedural drag can chill reporting, slow remedies, and benefit those with the resources to litigate authenticity disputes. Over time, the Liar’s Dividend corrodes shared evidentiary baselines, making it harder to establish facts in public controversies, easier for officials to avoid consequences, and more likely that the public disengages or defaults to partisan priors rather than the record. I predict that these authenticity fights will turn into evidentiary mini-trials with far greater frequency.
Deepfake Laws
The Inadequacy of Current Laws
State legislatures are scrambling to respond. Several state laws already exist: California’s AB 2602 (2024) and AB 1836 (2024) protect against unauthorized use of digital replicas, California’s SB 942 (2024) requires watermarking of AI-generated images, and Pennsylvania’s Act 35 (formerly SB 649) (2025) criminalizes creating or disseminating deepfakes with fraudulent or injurious intent, or facilitating a third party in doing so. Yet these laws address symptoms, not the underlying authentication crisis. The two state laws that come closest to addressing the cause of deepfakes are Colorado’s Consumer Protections for Artificial Intelligence (effective February 1, 2026) and the Texas Responsible Artificial Intelligence Governance Act (TRAIGA) (effective January 1, 2026), with Colorado’s law being broader than Texas’s.
The federal Take It Down Act, passed in 2025, prohibits the nonconsensual online publication of intimate visual depictions of individuals, both authentic and AI-generated. But again, it does not fix the problem, just treats the symptoms. The proposed federal DEEPFAKES Accountability Act, introduced in September 2023, would require creators to digitally watermark AI-generated content. This gets closer to the heart of the matter but is not foolproof. Watermarks can be removed; screenshots destroy them. And the absence of a watermark doesn't prove authenticity.
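To make that fragility concrete, here is a minimal Python sketch, using the Pillow imaging library, of how a metadata-style provenance mark disappears the moment an image is re-rendered, as happens with a screenshot. The file names and the generator label are hypothetical, and this illustrates the weakness generally, not any statute’s actual mechanism.

# Minimal sketch: metadata-based provenance marks do not survive re-rendering.
# "original.jpg" and the "ExampleGen AI" label below are hypothetical.
from PIL import Image

SOFTWARE_TAG = 0x0131  # standard EXIF "Software" field, where a generator may self-identify

def software_label(path: str):
    """Return the EXIF Software value embedded in the file, if any."""
    return Image.open(path).getexif().get(SOFTWARE_TAG)

print(software_label("original.jpg"))    # e.g. "ExampleGen AI" if the generator labeled the file

# Simulate a screenshot: re-render the same pixels into a brand-new file with no metadata.
src = Image.open("original.jpg")
screenshot = Image.new(src.mode, src.size)
screenshot.paste(src)
screenshot.save("screenshot.jpg")

print(software_label("screenshot.jpg"))  # None; the pixels survive, the provenance mark does not

Watermarks embedded in the pixels themselves are harder to strip, but cropping and heavy re-compression can still degrade them, which is why the absence of a mark proves nothing.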
The Need for Further Laws
The deepfake crisis demands a layered defense strategy that recognizes we cannot control creation but can control distribution and consequences. Rather than attempting the impossible task of regulating every computer capable of generating synthetic media, we should focus on the chokepoints where deepfakes cause actual harm. This means requiring major platforms such as Facebook, YouTube, Instagram, and others to detect and label AI-generated content at upload, creating serious criminal penalties for malicious use, and giving victims a meaningful civil cause of action with statutory damages. The technology for detection already exists; platforms use similar systems for copyright and child pornography, but they need both legal mandates and liability exposure to deploy it at scale.
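For a sense of what that screening can look like, here is a rough Python sketch in the spirit of the hash-matching systems platforms already run for copyright and abuse imagery: new uploads are compared against a registry of previously flagged media. The imagehash library, the registry contents, and the distance threshold are illustrative assumptions, not any platform’s actual pipeline, and matching known files is only one layer; novel synthetic content would still require separate classifiers.

# Rough sketch of upload-time matching against previously flagged media.
# Registry files and the distance threshold are illustrative assumptions.
from PIL import Image
import imagehash  # third-party "ImageHash" package

known_flagged = [
    imagehash.phash(Image.open("known_deepfake_1.png")),
    imagehash.phash(Image.open("known_deepfake_2.png")),
]

def needs_label(upload_path: str, max_distance: int = 8) -> bool:
    """Flag an upload whose perceptual hash is close to a known flagged item."""
    candidate = imagehash.phash(Image.open(upload_path))
    # Subtracting two ImageHash objects gives the Hamming distance between them.
    return any(candidate - known <= max_distance for known in known_flagged)

if needs_label("new_upload.png"):
    print("Apply an 'AI-generated / previously flagged' label before publishing")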
Laws for the Justice System
Some advocate for direct regulation of the AI models themselves. But that approach deters legitimate innovation while bad actors simply turn to unregulated tools.
The issue is even more precarious in the judicial system. Fabricated evidence created specifically for litigation generally never touches the public internet. Our evidence rules, written for an era of physical documents and analog photos, are woefully inadequate for a world where any teenager can create convincing fake evidence on a laptop. We need fundamental reform of authentication standards: enhanced verification requirements for digital media, and a shift of the burden to the proponent when authenticity is challenged. Ideally, courts would have the same technical capacity for detecting synthetic media that they have built for DNA analysis or ballistics, whether through in-house expertise or certified vendors. This includes mandatory early disclosure of digital evidence to allow time for forensic analysis, court-appointed experts when authenticity is disputed, and clear jury instructions about the possibility of fabrication.
The evidence in front of a California state court judge in the case of Mendones v. Cushman & Wakefield (23CV028772) illustrates this need perfectly.
Case Study: Mendones
Ariel and Maridol Mendones needed evidence to support their claims in a case filed in California Superior Court. Either lacking that evidence or simply too lazy to obtain it, they decided to create the evidence necessary to make themselves appear entitled to summary judgment. Judge Victoria Kolakowski was not taken in by the scam. When she ordered them to file sworn testimony that the evidence provided to the court was true and correct, and to provide the underlying metadata, the Mendones doubled down and produced wildly unrealistic “metadata.”
Judge Kolakowski showed a level of tenacity and intelligence that the Mendones were not prepared for. For example, the judge noted that images purportedly from a Ring video camera were in black and white except for a security guard, who appeared in color, clearly indicating that the image of the guard had been stitched in. She even disproved one of the Mendones’ allegations by tracking down iOS release versions. Judge Kolakowski’s September 9, 2025 Order re Terminating Sanctions is an entertaining read and includes live links to the AI-generated evidence.
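That kind of check can be reduced to a simple consistency test: does the software version claimed in a file’s metadata even predate the claimed capture date? Here is a minimal Python sketch of the idea; the release-date table and sample values are illustrative and are not the actual Mendones metadata or the court’s actual analysis.

# Minimal sketch of a metadata plausibility test.
from datetime import date

# Public release dates for a few iOS versions, per Apple's release history.
RELEASE_DATES = {
    "iOS 17.5": date(2024, 5, 13),
    "iOS 18.0": date(2024, 9, 16),
}

def plausible_capture(claimed_software: str, claimed_capture_date: date) -> bool:
    """A capture date earlier than the claimed software's public release is impossible."""
    released = RELEASE_DATES.get(claimed_software)
    if released is None:
        return False  # unknown version: flag for expert review rather than assume it is fine
    return released <= claimed_capture_date

# Metadata claiming an iOS 18.0 photo taken months before iOS 18.0 existed fails the test.
print(plausible_capture("iOS 18.0", date(2024, 3, 1)))  # False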
So, how do we plan ahead to deal with these deepfakes in legal practice?
Best Practices
Discovery
Include standard interrogatories on AI tool usage
In Requests for Production, specifically request metadata and version histories for evidence (a metadata triage sketch follows this list)
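As a starting point for reviewing what comes back, here is a minimal Python sketch that records a fixity hash and the embedded EXIF fields for each produced image so that gaps or implausible values can be flagged for a forensic expert. The directory path is a placeholder, and this is a triage aid, not a substitute for expert analysis.

# Minimal triage sketch for produced media; "production/media" is a placeholder path.
import hashlib
from pathlib import Path
from PIL import Image, ExifTags

def summarize(path: Path) -> dict:
    """Record a fixity hash plus human-readable EXIF fields for one produced file."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    exif = Image.open(path).getexif()
    readable = {ExifTags.TAGS.get(tag, tag): value for tag, value in exif.items()}
    return {"file": path.name, "sha256": digest, "exif": readable}

for media_file in sorted(Path("production/media").glob("*.jpg")):
    print(summarize(media_file))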
Pretrial
Timely file motions in limine requiring authentication disclosure before trial
Request judicial notice of AI capability limitations under FRE 201 (or a state court version)
Trial
Proposed jury instruction: “The mere possibility that evidence could be AI-generated does not, by itself, create reasonable doubt. You must evaluate the specific authentication evidence presented.”
Ask for a limiting instruction when AI evidence is admitted with authentication
Contract Provisions
Your contract provisions, even those as innocuously located as client engagement letters, must account for AI. Here are some suggested provisions:
AI disclosure requirement: “Party shall disclose any use of generative AI in creating [list the deliverables] within 48 hours of delivery.”
Authentication warranty: “Party warrants all provided media is authentic and unaltered by AI except as explicitly disclosed.”
Indemnification for undisclosed AI: “Party X agrees to indemnify and hold harmless Party Z for any and all undisclosed AI-generated content that causes reputational or legal harm.”
Liquidated damages for AI content violations in sensitive contexts
Firm Governance
Adopt comprehensive AI use policies with mandatory disclosure requirements
Conduct quarterly training on evolving AI capabilities and detection methods
Designate AI authentication specialists within litigation teams
Establish relationships with technical experts before cases arise
The Future is Now
We are already in the midst of the Trust Apocalypse. It’s here. The entertainment industry is normalizing deceptive AI use. Political actors are cashing in on the Liar’s Dividend with the “deepfake defense.” And courts are struggling with admissibility standards designed for a pre-GenAI justice system.
Until the laws, rules, and technology catch up, we need to stay on top of generative AI and evidentiary issues in legal practice.
© 2025 Amy Swaner. All Rights Reserved. May be used with attribution and a link to the article.