April 6, 2026

AI in Legal Practice

Five Takeaways from the First Two AI Privilege Decisions

Early court decisions confirm that using AI does not automatically waive privilege—but failing to ensure confidentiality, proper settings, and attorney direction just might.

Amy Swaner

What Heppner and Warner Mean for Lawyers Using AI

During the week of February 10, 2026, two federal district courts became the first to address whether attorney-client privilege and the work product doctrine protect materials generated using AI tools. The cases—United States v. Heppner (S.D.N.Y.) and Warner v. Gilbarco Inc. (E.D. Mich.)—reached different outcomes on different facts. Judge Rakoff, writing in Heppner, called it a “question of first impression nationwide.” 

I’ve seen early commentary—including analysis from attorneys at large, well-respected firms—that has characterized these decisions as conflicting. Two courts, same day, opposite results. An emerging “split” or a trap for the unwary. That reading is understandable at first glance. It is also, on closer examination of the actual orders, mostly wrong. The cases involved materially different parties, different factual situations, different procedural postures, and different doctrinal questions. When the orders are read together, the practical guidance for lawyers is remarkably consistent.

More AI decisions will follow; we can count on that. But these two are the starting point, and practitioners who understand these orders will be better positioned to protect their clients and themselves.  

Here are the five most important takeaways. 

1. AI Does Not Automatically Waive Privilege—But Carelessness Does 

The single most important point from these decisions is that neither court held that using AI, standing alone, waives privilege or work product protections. The question in both cases was whether the specific circumstances of the AI use destroyed protection.

In Heppner, a criminal defendant facing securities and wire fraud charges used Claude, Anthropic’s AI chatbot, to generate approximately 31 documents that were later seized by the FBI from his home. The defendant claimed these documents were privileged and work product. Judge Rakoff disagreed, but his reasoning was grounded in the specific facts. The defendant used a consumer tool with no contractual confidentiality protections and no privacy settings enabled, and Anthropic’s privacy policy expressly permitted data collection, model training, and disclosure to third parties. Work product did not apply because the defendant’s own counsel did not direct him to use Claude.

The Court’s privilege analysis rested on three independent grounds. First, Claude is not an attorney—which, in Judge Rakoff’s words, “alone disposes of Heppner’s claim of privilege.” Second, the defendant had no reasonable expectation of confidentiality given Anthropic’s privacy policy. Third, the defendant did not communicate with Claude “for the purpose of obtaining legal advice”—Claude itself disclaims providing legal advice. 

The court held that each of these grounds was independently sufficient. It did not hold that AI tools are categorically incompatible with privilege. It held that this AI tool, used in this way, with these terms of service, provided no basis for a privilege claim. Change the facts—enterprise platform, contractual confidentiality protections, attorney direction—and the analysis changes with them.

Indeed, Judge Rakoff said as much. The court explicitly noted that had counsel directed Heppner to use Claude, the AI tool “might arguably be said to have functioned in a manner akin to a highly trained professional who may act as a lawyer’s agent within the protection of the attorney-client privilege.” 

2. AI Tools Are “Tools, Not Persons”—And That Changes the Work Product Analysis 

Warner v. Gilbarco addressed a different question—whether working on litigation materials with an AI tool constitutes the kind of disclosure that waives work product protection. 

A pro se plaintiff—acting as her own counsel—used ChatGPT to prepare materials for her employment discrimination case. The defendant moved to compel production of her AI interactions, arguing that using a third-party AI tool was tantamount to sharing work product with a third party. 

Judge Patti rejected that argument. The court’s core holding was that “ChatGPT (and other generative AI programs) are tools, not persons, even if they may have administrators somewhere in the background.” This matters because work product protection, unlike attorney-client privilege, can only be waived by disclosure to an adversary or someone likely to share information with an adversary. An AI vendor is neither. Using an AI tool to prepare litigation materials was, in the court’s analysis, no different from using any other tool—the work product remains protected because it was never disclosed to an opponent. 

Notably, this is the stronger doctrinal argument for protecting AI-assisted legal work, and it is the one many attorneys seem to struggle to internalize. Yes, privilege is fragile—voluntary disclosure to any third party can waive it. But work product is more durable—only adversary disclosure triggers waiver. For lawyers using AI in litigation, the work product doctrine is the more reliable shield, and Warner is the first court to say so explicitly.

3. A Pro Se Plaintiff Is Not in the Same Position as a Client

One aspect of these decisions that prior commentary has largely overlooked is the difference in the parties’ litigation postures—and how that difference drove the work product analysis.

Warner’s plaintiff was pro se. She was acting as her own counsel. This means that when she used ChatGPT to prepare litigation materials, she was doing what a lawyer does: using a tool to develop strategy and prepare documents in anticipation of litigation. Her materials were prepared by the functional equivalent of an attorney—because she was acting as her own attorney.

Heppner’s defendant, by contrast, had retained counsel—but counsel did not direct him to use Claude. He acted on his own, and only after the documents were seized did he assert work product protection over his AI searches. The court held that his AI-generated documents were not “prepared at the behest of counsel” and did not “reflect defense counsel’s strategy.” Without that connection to counsel, the work product doctrine’s purpose—protecting “a zone of privacy in which a lawyer can prepare and develop legal theories and strategy”—did not apply.

Judge Rakoff also addressed—and rejected—the broader argument that work product protection extends to materials not prepared by or at the direction of an attorney. Heppner’s defense had cited Shih v. Petal Card, Inc., 565 F. Supp. 3d 557 (S.D.N.Y. 2021), a separate decision in which a magistrate judge held that work product protected a plaintiff’s litigation materials “regardless of whether” her lawyer “directed the work.” Rakoff “respectfully disagree[d]” with Shih, citing extensive Second Circuit authority for the proposition that the doctrine exists to protect lawyers’ mental processes and applies to material “prepared by or for counsel.” That reasoning creates an implied tension with Warner’s “tools, not persons” approach—but Rakoff did not cite or address Warner, which was decided the same day. 

But here is the critical nuance—the tension between these courts is narrower than it first appears. Warner’s pro se plaintiff was acting as her own counsel—so her materials would likely satisfy even Heppner’s stricter “at the behest of counsel” standard. The real disagreement surfaces only when a non-lawyer uses AI independently, without functioning as or acting at the direction of counsel. That was Heppner’s situation. It is not and will not be the situation of most lawyers reading this article. 

The lesson for firms is that the distinction between attorney-directed AI use and independent client AI use is legally significant. If your client is using AI on their own—without your direction or involvement—that creates a different and weaker privilege and work product posture. Advise clients accordingly and document your own direction over AI-assisted work. 

4. The Vendor’s Privacy Policy Is Now Part of the Evidence 

The single most actionable detail from Heppner is Judge Rakoff’s treatment of Anthropic’s privacy policy as direct evidence that the defendant had no reasonable expectation of confidentiality. 

The court examined the policy in detail. It noted that Anthropic’s privacy policy governing the tool Heppner used—the public, free tier, with no privacy settings enabled—permitted the company to collect user inputs and outputs, use that data for model training, and disclose information to third parties. In footnote 3, the court observed that “even if certain information that Heppner input into Claude was privileged, he waived the privilege by sharing that information with Claude and Anthropic, just as if he had shared it with any other third party.”

This is not a theoretical concern. The vendor’s terms of service were the factual foundation of the court’s confidentiality analysis. If Heppner had used an enterprise platform with a Data Processing Agreement that contractually prohibited training on his data, restricted vendor employee access, and imposed confidentiality obligations, the court’s analysis would have started from a fundamentally different factual premise. 

The implication is that practitioners need to read AI vendor privacy policies and terms of service with the same rigor they apply to Data Processing Agreements—because courts may look to and cite those policies as evidence of what confidentiality expectations were reasonable. A UI toggle that says “don’t train on my data” may or may not hold up. A contractual commitment in a DPA is stronger. 

Be sure you can answer these questions: What does this vendor’s privacy policy permit? Does it allow training on my inputs? Does it permit human review? Can the vendor disclose my data to third parties? Is there a DPA—and if so, what does it actually say? These are the facts a court will examine if your AI use is ever challenged, and they are your evidence that you employed reasonable safeguards.

5. Document Attorney Direction—The Through-Line Both Courts Agree On 

Despite their different reasoning, both courts point to the same practical imperative. The lawyer’s involvement in and direction of the AI interaction is a critical variable, if not the critical variable. 

Heppner denied protection in part because counsel “did not direct [Heppner] to run Claude searches.” The court left open that counsel-directed use could yield a different result. Warner protected work product created by a pro se plaintiff who was functioning as her own counsel—satisfying the attorney-direction element by definition. Read together, the consistent message is that when an attorney directs the AI interaction on a platform with privacy protections enabled, the information stands a far stronger chance of remaining protected.

First, document this by establishing prompt conventions and matter codes that connect AI interactions to specific client matters and demonstrate that the work was performed at counsel’s direction, in anticipation of litigation or for legal analysis. Second, implement supervision protocols under Rule 5.1 and Rule 5.3 so that when staff or interns use AI tools, the supervising attorney’s direction is documented. Third, create a written AI use policy that establishes, as a matter of firm practice, that AI-assisted work is attorney-directed work product—prepared in anticipation of litigation or for legal analysis, under counsel’s supervision and control. 

You likely already document attorney direction when you assign research to associates, engage expert consultants, or direct paralegals to prepare litigation files. Treat AI tools the same way.

Looking Ahead 

Heppner and Warner are district court opinions—persuasive authority at best, binding on no one outside their respective districts. More decisions are coming, and appellate courts will weigh in. State bars will issue additional guidance. The legal landscape around AI privilege and work product is going to develop quickly.

But the core principles these courts articulated are unlikely to change, because they are not new principles. Privilege has always required a reasonable expectation of confidentiality. Work product has always required preparation in anticipation of litigation. Waiver has always turned on the nature of the disclosure—who you disclosed to, and under what circumstances. Heppner and Warner applied those established principles to a new technology. Future courts will do the same. 

These principles will apply to your AI use. Make certain that you have the facts on your side—the right platform, the right contractual protections, a written AI policy, and a documented record of attorney direction, because these factors will become your protection if your AI use is questioned. 

Case References 

United States v. Heppner, No. 1:25-cr-00503-JSR, Doc. 27 (S.D.N.Y. bench ruling Feb. 10, 2026; written opinion Feb. 17, 2026) (Rakoff, J.). Criminal securities and wire fraud prosecution. Defendant generated approximately 31 documents using Claude, a publicly accessible consumer AI chatbot operated by Anthropic. Court denied privilege and work product claims. First known federal decision addressing AI-generated documents and attorney-client privilege. 

Warner v. Gilbarco Inc., No. 2:24-cv-12333 (E.D. Mich. Feb. 10, 2026) (Patti, M.J.). Employment discrimination action. Pro se plaintiff used ChatGPT to prepare litigation materials. Court denied defendant’s motion to compel AI interactions, holding that work product protection was not waived because AI tools are “tools, not persons” and disclosure to an AI vendor is not adversary disclosure. 

Best Practices For Lawyers and Legal Professionals: 

  1. Use a Safe AI Platform. The most secure option is an enterprise AI platform with a signed DPA. At a minimum, toggle every privacy control on. Consumer-tier tools with no privacy settings engaged are a near-guarantee that privilege will be denied.


  2. Read Your Vendor’s Privacy Policy and Terms of Service. Know what the vendor can collect, train on, and disclose, and make sure you are comfortable with those terms.


  3. Document Attorney Direction Over Every AI Interaction, Especially for Legal Assistants and Paralegals. Both courts treated the lawyer’s involvement as a critical variable. Using matter codes and prompt conventions can easily create a paper trail.


  4. Adopt a Written Firm AI Use Policy. Establish as a matter of firm practice that AI-assisted work is attorney-directed work product, prepared under counsel's supervision. 


  5. Advise Clients About Independent AI Use. If a client uses AI without attorney direction, that creates a weaker privilege and work product posture. Counsel them so they don’t waive privilege before they understand it.


  6. Lean on Work Product Doctrine, Not Just Privilege. Warner held that disclosure to an AI vendor is not adversary disclosure. Work product may be the more durable shield.


This article is part of a series on AI and attorney-client confidentiality. See also: Is It Safe to Put Confidential Information in AI Tools? and 9 Privacy Myths About Attorney-Client Confidentiality with AI Tools. 

© 2026 Amy Swaner. All Rights Reserved. May be used with attribution and a link to the article.


 
