
May 13, 2026
AI in Legal Practice

Amy Swaner

Misinformation about AI and client confidentiality is rampant. In conference hallways, on bar association email lists, and on CLE panels, the same myths keep surfacing. Yesterday I was speaking with a technologist, one of the top software engineers in the country, who was concerned about the confidentiality needs of attorneys. This engineer understood the technology but had absorbed several of the same misconceptions circulating in the legal profession itself.
Small and solo firms (near and dear to my heart) can least afford to get this wrong. Some of these myths overstate the risk. Others understate it. All of them obscure the practical analysis you actually need to perform to make the best decision for your practice and your needs.
This article is a companion to Is It Safe to Put Confidential Information in AI Tools?, which provides a full vendor comparison chart, privilege analysis, and practical framework. Here, we isolate and correct nine common misconceptions. A glossary of technical terms used in this article appears at the end.
Myth 1: “AI Tools Train on Everything You Type.”
Reality: It depends entirely on the product tier and your settings.
This myth persists because it was once partially true—and remains true for free-tier products. When ChatGPT launched in late 2022, the default setting for all users was to permit OpenAI to use conversations for model training. That default created the impression, now hardened into ‘received wisdom,’ that every AI tool ingests and learns from everything you submit.
The products have evolved since then. At the paid individual tier, every major platform—Claude, ChatGPT, Gemini, and Microsoft Copilot—offers a toggle to opt out of model training. At the enterprise tier, training on customer data is off by default and governed by a Data Processing Agreement. The critical distinction is between a UI toggle, which a vendor can change unilaterally, and a contractual commitment, which it cannot. Enterprise Data Processing Agreements (DPAs) provide the latter. If you’re relying on the toggle, check routinely to verify that your privacy settings are as strict as possible.
Myth 2: “Using AI Waives Attorney-Client Privilege.”
Reality: Trial courts are addressing this — and the door is wide open to using AI tools when properly configured.
This myth likely draws from United States v. Heppner, No. 25-cr-00503-JSR (S.D.N.Y. Feb. 17, 2026), in which Judge Rakoff held that documents Heppner generated using a consumer version of Claude were not protected because (1) Claude is not an attorney and so could not form an attorney-client relationship; (2) Anthropic's consumer privacy policy gave Heppner no reasonable expectation of confidentiality; and (3) Heppner acted on his own, not at counsel's direction, defeating work product protection.
Two things need to be said about Heppner. First, it is a single district court opinion—persuasive authority at best, binding on no one outside the Southern District of New York. No circuit court has adopted it, and two subsequent decisions have pushed back on its broader implications. In Warner v. Gilbarco, Inc. (E.D. Mich. Feb. 10, 2026), Magistrate Judge Patti denied a motion to compel a pro se plaintiff’s AI-assisted litigation materials, holding that generative AI platforms are “tools, not persons” and that using them to assist in drafting is no more a waiver of work product protection than using a word processor or a legal research database. In Morgan v. V2X, Inc., No. 25-cv-01991-SKC-MDB (D. Colo. Mar. 30, 2026), the court similarly held that a pro se litigant’s AI-assisted materials qualified for work product protection under FRCP 26(b)(3), while also requiring that any AI tool used to process confidential discovery material be subject to contractual safeguards—including prohibitions on training, restrictions on onward disclosure, and the ability to delete data on request.
Second, Heppner’s holding turned entirely on the absence of reasonable precautions. The court explicitly left open that counsel-directed AI use on a platform with contractual confidentiality protections could yield a different result.
There is currently no case standing for the proposition that merely using AI waives privilege. What we do know is that using a free consumer tool with no privacy controls and no attorney involvement almost certainly waives it — which is exactly the result privilege law has always dictated. The doctrine has never protected careless sharing of protected information, regardless of the technology involved.
Myth 3: “Legal-Specific AI Tools Are Inherently Safer Than General-Purpose Ones.”
Reality: Safety comes from security architecture and contractual commitments, not from the label on the product.
There is a comforting assumption in some corners of the profession that tools built specifically for lawyers—like Harvey—are categorically safer than general-purpose tools like Claude or ChatGPT. Harvey’s security posture is strong: SOC 2 Type II, ISO 27001, AES-256 encryption, no training on customer data by default, and a DPA included with the product.
But every general-purpose AI tool on the market offers the same certifications at the enterprise tier. Claude, ChatGPT, Gemini, and Microsoft Copilot all hold SOC 2 Type II and ISO 27001 certification. They all offer AES-256 encryption in transit and at rest. They all provide enterprise DPAs with contractual no-training commitments. The companion article includes a full comparison chart. The protections flow from the vendor’s infrastructure and agreements, not from whether the marketing materials mention “legal.”
Myth 4: “My Cloud DMS Is Secure, but AI Isn’t.”
Reality: AI tools and cloud DMS platforms share the same infrastructure, encryption, and vendor access model.
This may be the most consequential myth on the list, because it underlies most firm-level AI prohibitions. The reasoning goes: “We trust NetDocuments (or Clio, or iManage, or Microsoft 365) with client files, but AI is different, and we cannot trust it.”
It is worth examining what, exactly, is supposed to be different. Your document management system (DMS) stores client documents on third-party cloud servers. So does AI. Your DMS encrypts data in transit with TLS and at rest with AES-256. So does AI. Your DMS vendor’s support engineers can access your data for maintenance and troubleshooting purposes. So can AI vendor employees, within defined safety review processes that are, depending on your setup, contractually bounded. Hopefully your DMS vendor holds SOC 2 Type II and ISO 27001. So does every major AI platform.
The reason we as lawyers are comfortable with cloud DMS is that we’ve evaluated the vendor, configured the settings, and signed a DPA. And frankly, we’re also comfortable because these applications are so common that they are standard in law practice today. It’s widely accepted that your email lives in the cloud, and that your documents are stored and organized there too.
AI is new and not yet taken for granted the way DMS applications are. There are differences between AI apps and DMS apps, but with training turned off and privacy settings enabled, those differences are not so wide.
Myth 5: “AI Vendors Can Read My Prompts Whenever They Want.”
Reality: Human review is limited by purpose, bounded by contract at the enterprise tier, and structurally similar to the vendor access you already accept from your DMS or email provider.
Every major AI vendor reserves the right, in its terms of service, to allow human review of prompts for safety and abuse detection. This is true. It is also true of your email provider, your cloud storage vendor, and your practice management platform. Microsoft’s support engineers can access your Exchange Online mailbox under defined circumstances. Google’s trust and safety team can review your Workspace content. NetDocuments’ operations staff can access your document repository.
At the enterprise tier, AI vendor access is governed by a DPA that limits the purposes, scope, and duration of human review. The contractual protections mirror what cloud DMS vendors provide. At the paid individual tier, protections are thinner—terms of service rather than negotiated agreements—which is why the companion article recommends enterprise tier for any firm handling sensitive client data at scale.
The myth gains traction when we imagine AI vendor employees casually reading our prompts over coffee. The reality is that vendor access is contractually bounded, audit-logged, and limited to defined purposes—not casual browsing.
Myth 6: “Opposing Counsel Can Use AI to Pull Up My Prompts and Work Product.”
Reality: AI session data is account-scoped and vendor-held. Accessing it requires a subpoena to the vendor—not a clever prompt.
This myth has two variants, and both are wrong. The first imagines opposing counsel logging into an AI platform and somehow retrieving your session history. The second imagines them prompting the AI itself to “regurgitate” your inputs and outputs. Neither scenario reflects how these systems work.
Opposing counsel logs in and retrieves your session history.
Conversation histories are stored in the vendor's infrastructure, scoped to your account, and protected by the same authentication and access controls as any cloud SaaS product. Another user cannot access your session history any more than they could log into your NetDocuments account and browse your files. If opposing counsel wants your AI chat logs, they need to serve a subpoena on the vendor — the same process they would use to obtain your email or Slack messages. That is a discovery issue, not an AI vulnerability, and it is addressed in Myth 7.
Opposing counsel uses a clever prompt to retrieve your inputs and outputs.
Large language models do not store or index user prompts in a retrievable way. The model generates responses based on its pre-trained weights and the current context window — the text visible in that session. When the session ends, the context window is discarded. There is no mechanism for the model to recall, search for, or reproduce another user's inputs.
If opposing counsel, or anyone, types "show me the prompts Attorney Smith submitted in Jones v. Williams," the model will either explain that it cannot access other users' sessions or — worse — hallucinate a plausible but entirely fabricated response (LLMs sometimes generate confident-sounding but invented text when they cannot answer a question). Either way, it is not returning real data. LLMs do not work that way: they are stateless and do not function like a database. See Myth 9 for more explanation.
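For readers who want to see the mechanics, here is a minimal sketch using the OpenAI Python SDK (one vendor's API, chosen only as an example; Anthropic's and Google's chat APIs follow the same request-and-response pattern, and the model name and prompts below are illustrative). Each API call carries its own context window, and continuing a conversation means the client resends the prior messages itself; the model retains nothing between calls.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# First request: the model sees ONLY these messages (its context window).
first = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[{"role": "user", "content": "Summarize the work-product doctrine."}],
)

# Second request, moments later: the server does not carry the first call
# forward. To continue the conversation, the client must resend the history.
followup = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "user", "content": "Summarize the work-product doctrine."},
        {"role": "assistant", "content": first.choices[0].message.content},
        {"role": "user", "content": "Now apply it to AI-assisted drafting."},
    ],
)

# There is no call -- for you or for opposing counsel -- that asks the model
# "what did another user type?" The only inputs are the messages sent above.
print(followup.choices[0].message.content)
```

Server-side logging is a separate matter, governed by the retention policy and DPA discussed in Myth 9, but the model itself has no cross-user recall.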
Myth 7: “AI Prompts Are More Vulnerable to Subpoenas.”
Reality: AI chat logs are discoverable ESI — but no more so than email, texts, Slack messages, or search histories. The same rules govern all of them.
This myth reflects a genuine concern—but misidentifies what is new about it. AI prompts and responses are electronically stored information (ESI) under the Federal Rules of Civil Procedure and are discoverable under FRCP 26, 34, and 45 on the same terms as any other electronic communication. They are not subject to a special, heightened standard of vulnerability. They are subject to the same standard as email, text messages, Teams chats, and browser search histories.
The risk is real, but it is not unique to AI. In the New York Times v. OpenAI copyright litigation, Judge Stein in the SDNY compelled OpenAI to produce 20 million ChatGPT log entries in January 2026 — but under standard discovery principles, not any AI-specific mechanism. OpenAI’s Chief Strategy Officer publicly called for a new form of “AI Privilege” to protect user-chatbot conversations from subpoenas, but the court rejected the concept. Until Congress acts, or we see far more favorable cases, AI developers are subject to the same discovery rules as any other software provider.
The Morgan v. V2X protective order (D. Colo. Mar. 30, 2026) offers practical insight into how to treat protected discovery information. The court required that any material produced under a protective order be subject to contractual safeguards before it could be put into an AI tool: prohibitions on model training, restrictions on onward disclosure, and the ability to delete data on request. That framework is what protects the other side’s confidential information. The decision governs protected discovery material; it says nothing about the majority of everyday AI use.
In an abundance of caution, include AI chat logs in your ESI preservation and litigation hold protocols, just as you would for email and messaging platforms. Verify your AI vendor’s data retention windows. And if you are using AI to process material subject to a protective order, confirm that the vendor’s contractual commitments satisfy the order’s requirements.
Myth 8: "AI Tools Can Secretly Upload My Entire Device or My DMS."
Reality: AI tools see only what you put in the prompt, what you’ve stored in the optional “memory” features some tools offer, or the specific data sources the tool was explicitly configured to access. They have no background access to your files, drives, or other applications.
This myth can drive many firm-level AI policies. The fear is that downloading an AI desktop app, installing a browser extension, or signing into a copilot somehow gives the AI access to everything on your machine — every email in your Outlook, every document in your DMS, every client file on your hard drive. The reality is far more bounded.
AI tools come in three architectural categories, and each is limited to what its configuration explicitly permits:
Standalone web and desktop AI tools (the consumer or paid Claude, ChatGPT, Gemini, or Copilot Chat app). These see only what you type or paste into the prompt window, plus any file you explicitly upload. They have no read access to your local drive, your other applications, your DMS, or your email. Closing the browser tab or quitting the app ends the session.
Integrated AI tools (Microsoft 365 Copilot tied to your tenant, Google Workspace AI features tied to your Workspace account, IDE-integrated coding assistants). These see the data sources the integration was configured to access — by you or your firm's administrator. Microsoft 365 Copilot can see your Exchange Online mailbox and your SharePoint documents because your tenant administrator enabled that scope. It cannot see your personal files, your iManage DMS (unless separately integrated), or applications outside the Microsoft 365 boundary.
Browser extensions and meeting bots. Both generally operate within the permissions you (or your IT admin) granted at install. Browser extensions see the active tabs you use them with; they are generally less privacy-friendly than a chat interface with privacy settings enabled, because search requests and results are more likely to be retained and used to help train LLMs. Meeting transcription bots see the meeting they were invited to. Use caution in a meeting someone else organized that includes a transcription bot: you are relying on the privacy settings the organizer chose, and some bots capture and use meeting information by default. Neither of these tools, however, rummages through your file system, your DMS, or your applications. The data they see is the data you (or your administrator) explicitly put within their reach.
In other words, AI tool risk is not "the AI is watching everything." It is "the AI is doing what your integration says it can do, with the data your integration gives it access to." That makes the diligence question concrete and answerable — what tool, what tier, what integration scope, what DPA? The same diligence you already apply to cloud DMS and email applies here. With reputable AI tools, vague fear of background access is misplaced.
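To make the scope point concrete, here is a simplified, hypothetical sketch of the kind of permission check that sits between an integrated AI assistant and your data sources. Nothing in it is any vendor's actual code; the class name and source labels are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class IntegrationScope:
    """Hypothetical allow-list an administrator configures for an AI integration."""
    allowed_sources: set[str] = field(default_factory=set)

    def can_read(self, source: str) -> bool:
        # The integration reaches a data source only if the admin listed it.
        return source in self.allowed_sources

# Example: a Microsoft 365-style tenant where the admin enabled mail and SharePoint.
scope = IntegrationScope(allowed_sources={"exchange_mailbox", "sharepoint_docs"})

for source in ["exchange_mailbox", "sharepoint_docs", "imanage_dms", "local_drive"]:
    verdict = "accessible" if scope.can_read(source) else "blocked (not in scope)"
    print(f"{source}: {verdict}")
```

The real systems are vastly more sophisticated, but the governing principle is the same: access is an explicit allow-list, not a default.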
The one caveat is AI agents. You can unintentionally grant an AI agent (and thereby every app it integrates with) broad access to your local and cloud-based systems. Be extra cautious when using AI agents, and follow these best practices.
Myth 9: “I Put Certain Info into AI, and It Will Remember It and Spit It Back Out Someday.”
Reality: AI models are stateless by default. Persistence features exist on some platforms but are user-controlled, account-scoped, and distinct from model training.
Each AI conversation starts from zero. The model carries no memory of prior sessions unless you have explicitly enabled a persistence (memory) feature. When a session ends, your input exists in only two places: (1) the vendor’s server logs, subject to the retention policy and DPA, and (2) the model’s weights—but only if your data was used for training. If training is off (paid tier with toggle disabled, or enterprise tier with contractual no-training), your input never influences the model at all. It is processed, a response is generated, and the content is retained only as a server log subject to the vendor’s documented retention window.
Some vendors do offer optional persistence features, and lawyers should understand how they work. For example, ChatGPT’s Memory feature allows the tool to save high-level preferences and details across sessions—your name, tone preferences, project context—and reference past conversations to personalize responses. Memories are stored separately from chat history, meaning deleting a chat does not delete saved memories. Memory can be turned off entirely, and individual memories can be reviewed and deleted in settings. Importantly, OpenAI states that memories and workspace information are excluded from model training.
Google’s Gemini takes a different approach through its Gemini Apps Activity setting. When “Keep Activity” is turned on, Google saves conversations to your account, may use them to personalize future responses, and reserves the right to have human reviewers assess a subset of chats—with reviewed conversations retained for up to three years. When turned off, conversations are still held for up to 72 hours for service delivery and security, but are not reviewed or used for model improvement. The distinction is that “off” provides materially stronger privacy, and lawyers using Gemini for anything involving client data should verify this setting.
Note: Verify that this information about retention periods and human review is still accurate. AI vendors change their terms of service and privacy policies as quickly as some people change their shoes.
Even in the worst case — free tier, training enabled, no opt-out — the likelihood of a model reproducing a specific privileged communication verbatim is astronomically low. Model training adjusts statistical weights (the internal numerical values that encode what a model has learned) across billions of parameters using aggregated data. It does not memorize and replay individual inputs. The AI safety literature treats verbatim memorization as an edge case, not a typical, likely, or systemic exposure. Still, best practice is to use a tier where training is contractually off, verify the retention period, and disable optional memory features if your use case involves client data; do that, and the residual risk shrinks dramatically.
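For the technically curious, here is a toy illustration of why training stores aggregate statistics rather than individual inputs. It is deliberately oversimplified (a single weight instead of billions, and made-up numbers), but the mechanism is the one the glossary below describes: each example nudges the weight slightly, and what remains afterward is the overall trend, not the examples themselves.

```python
# Toy "training loop": one weight w, fitting y = w * x by gradient descent.
examples = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 7.8)]  # noisy y, roughly 2x

w = 0.0    # the model's single "weight"
lr = 0.01  # learning rate: each example nudges w only slightly

for _ in range(200):             # many passes over the data
    for x, y in examples:
        error = w * x - y        # how far off the prediction is
        w -= lr * error * x      # small, aggregated adjustment

print(f"learned weight: {w:.3f}")  # about 2.0 -- an aggregate statistic
# The weight encodes the overall trend (y is roughly 2x). It does not
# contain, and cannot reproduce, any individual example that shaped it.
```

Scale that single number up to billions of parameters and the point stands: a trained model is a compression of statistical patterns, which is why verbatim regurgitation is the rare edge case rather than the norm.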
Glossary of Technical Terms
Context window. The text a model can “see” during a single conversation. This includes everything you have typed in the current session and the model’s responses. When a session ends, the context window is discarded. The model does not carry it forward.
Data Processing Agreement (DPA). A legally binding contract between a data controller (the law firm) and a data processor (the AI vendor) that governs how personal and confidential data is handled. Unlike a terms-of-service toggle, a DPA is a contractual commitment that the vendor cannot change unilaterally. Enterprise-tier DPAs typically include commitments on data use restrictions, no-training clauses, data retention and deletion, breach notification, and audit rights.
Edge case. A scenario that occurs only under unusual or extreme conditions. In AI safety research, verbatim memorization of training data is considered an edge case—theoretically possible but vanishingly rare in practice, especially with modern training techniques designed to prevent it.
Inference. The process by which a trained AI model generates a response to a prompt. During inference, the model applies its pre-existing knowledge (stored in its weights) to produce output. No learning occurs during inference—the model’s weights do not change.
Parameters. The numerical values inside a model that determine how it processes language and generates responses. Large language models contain billions of parameters. During training, these values are adjusted using large datasets. During inference (when you use the tool), they are fixed.
Stateless. A system that does not retain information between interactions. AI models are stateless by default: each new session begins with no memory of prior sessions. Any persistence (such as ChatGPT’s Memory feature) is a separate, optional layer built on top of the model, not a property of the model itself.
Weights. The internal numerical values that encode everything a model has learned during training. When people say a model has been “trained on” data, they mean the data was used to adjust these weights. Once training is complete, the weights are fixed. When training on your data is disabled, your inputs do not influence the weights and cannot become part of the model’s knowledge.
The Common Thread
Every myth on this list shares a common root: a misunderstanding of, or a lack of awareness about, how AI tools work. The technology behind the technology.
ABA Formal Opinion 477R requires that lawyers take reasonable precautions with electronic communications—including vetting vendors and understanding data handling. ABA Formal Opinion 512 applies that same framework to AI tools. Neither opinion prohibits AI use. Both require informed, diligent adoption.
The myths persist because they offer simple answers to a question that requires nuance. The real answer—that AI tools are safe when properly vetted, configured, and governed—is less dramatic but far more useful. The law in this area is still developing. Heppner, Gilbarco, and Morgan are district court opinions, not circuit authority, and the courts are still working out how existing privilege and work product doctrines apply to AI-assisted legal work. So far, the trajectory shows that courts are applying existing frameworks to new technology, not creating AI-specific exceptions.
For the full framework, vendor comparison chart, and practical checklist, see the companion article.
© 2026 Amy Swaner. All Rights Reserved. May use with attribution and link to article.