November 13, 2025

AI in Legal Practice

AI Policy Part II – Avoid These Pitfalls

Amy Swaner

Executive Summary  

Law firms increasingly rely on AI, but misuse and weak governance can expose clients and attorneys to serious errors and ethical risks. Many firms stumble by delaying policy implementation, overcomplicating approvals, neglecting training, mishandling client communication, and failing to enforce or secure their AI tools. Firms must focus on practical, consistent implementation to close the gap between policy and practice, ultimately turning AI into a strategic advantage rather than a liability.  

Common Implementation Pitfalls 

“To err is human, but to really foul things up you need a computer [running GenAI].” 

If you ask ChatGPT where this quote came from (without my addition of “[running GenAI]”), it will give you a variety of possible sources. The earliest known print citation is the Farmers’ Almanac (1978), but the line has also been attributed to Paul R. Ehrlich; Bill Vaughan and Dan Rather appear to be retrofits. 

Regardless of who said it, the point is apt today. We as humans can mess things up. But add powerful tools to our arsenal, and we can wreak tremendous havoc without even trying. We can make fake videos of political figures. We can fabricate evidence. And even when we are trying to act cautiously and carefully, we can still err spectacularly. Consider the cases of two federal judges, US District Court Judge Julien Xavier Neals of New Jersey and US District Court Judge Henry Wingate of Mississippi, who apparently thought they had control over AI use in their chambers but did not. Both judges now say they have formalized rules on AI use in their chambers. 

The parallel that comes to mind is that in the 1970s and 1980s, most high schools in the US taught human reproduction, or “sex ed,” with a heavy emphasis on abstinence. The problem with that approach was that a number of students refused to consider abstinence a viable option. Parents and school administrators were embarrassed and uncomfortable discussing other methods of birth control, perhaps fearing it would seem permissive; in some states it was even illegal. As a result, many students were not given even solid information. And so, in turn, misconceptions (pun intended) were rampant. 

We are now facing the same issue with AI in law. That lawyers, paralegals, judges, and even experts are using AI in sometimes questionable ways is the worst-kept secret. But “everyone is doing it” is no justification for sloppy implementation. 

It is no longer acceptable for a law firm not to have an AI policy, with the possible exception of solo practitioners. But having an AI policy is just the start. Even with very solid policy content, implementation can fail. Here are the mistakes to avoid: 

Pitfall 1: The Perfect Policy Trap 

Mistake: Spending 6 months drafting the “perfect” comprehensive policy while attorneys use unvetted tools daily. 

Why it fails: Risk accumulates during the drafting process. Attorneys and other legal professionals upload case information, and in some cases client data, into unapproved tools that may or may not be safe to use. Your not-yet-finalized policy doesn't help. You do not have six months to reach perfection. Better to start where you can and improve from there. 

Solution: Adopt a “good enough” policy quickly and improve it iteratively. The example AI policy we’ve included with Article 1 – Why Your Firm Needs An AI Policy provides comprehensive starting points; customize them to your firm’s needs and get started. Then refine your policy over the first year as you encounter real-world scenarios. Even if the first policy were “perfect,” AI changes so rapidly that it would soon be imperfect. Better to start than to wait for elusive perfection. 

Pitfall 2: The Approval Bottleneck 

In legal practice, there is no shortage of bottlenecks: bottlenecks in vetting potential clients and running conflicts checks, bottlenecks in getting approval for a policy change, and bottlenecks in determining which AI tools are effective without being duplicative. 

Mistake: Requiring a full governance committee (or, for smaller firms, the entire attorney roster) to review every AI tool, including Microsoft Word’s grammar checker (OK, that might be a bit of an exaggeration), is unacceptably slow. 

Why it fails: Let’s say the committee meets monthly (at best). Each meeting handles 5-10 agenda items. Low-risk AI tool approvals consume time that should be focused on higher-risk decisions. Meanwhile, attorneys and other staff wait 6 weeks to use basic tools, so they either use them without approval or don't adopt any AI whatsoever.  

Solution: IT/Security approves low-risk tools in 3-5 days. Pre-Approved Tools (Appendix C.1) need only configuration verification, not approval review. For example, Microsoft 365 Copilot is pre-approved if you have an enterprise license with Commercial Data Protection enabled. 

This allows you to reserve committee or managing partner time for genuinely complex decisions: novel tools, experimental AI, and high-risk use cases. 

Pitfall 3: Ineffective Training  

Mistake: Mandatory annual ‘AI Ethics’ webinar that covers hypothetical scenarios unrelated to your policy's actual requirements. Or AI ‘training’ that shows you, once again, how you can make your letter sound like Santa or Elvis wrote it. 

Why it fails: Attorneys tune out generic training. When they need to know “Can I use this tool for client work?”, they don't remember abstract principles from the webinar. They guess, often incorrectly. Either way, they miss out on the real productivity boost AI offers or risk malpractice and sanctions. 

And no one needs to know that their prayer for relief can still be written as a haiku.  It’s just not useful. 

Solution: Training must be concrete and policy-specific.  For example: 

  • “Here are our Pre-Approved Tools. Here's how to verify they're configured correctly before first use.” 

  • “Court-facing work requires source-checking. Here's the workflow: AI generates content → you pull the actual case → verify it exists, says what AI claims, and is still good law → only then include in filing.” 

  • “If you want to use a tool not on the approved list, here's the request form (Appendix E), here's who reviews it (depends on risk tier), and here's the typical timeline for a decision.” 

After training, attorneys should know exactly what to do in common situations. Effective training can also be a venue to compare uses and share best practices. 

Bonus Tips:  

  • In every training session, remind all staff that verification of AI Output is absolutely mandatory. This cannot be stressed enough. 

  • Test understanding. After training, ask: “You want to use a new AI research tool. What do you do?”  

If attorneys can't answer this, training failed. 

Pitfall 4: Client Consent Confusion 

Mistake: Either asking explicit permission for every AI use (“Can I use MS Word? It has AI features...”) or never mentioning AI to clients until a problem arises. 

Why both fail: Over-disclosure trains clients to fear AI and creates a bureaucratic permission structure for routine tasks. Under-disclosure violates Model Rule 1.4 and damages trust when clients eventually discover you were using AI when they didn’t want you to, or vice versa. 

Solution: Adopt accurate, standard language.  

Example: “Our firm uses carefully vetted AI tools to enhance efficiency in drafting, research, and document review, subject to attorney oversight and confidentiality protections.” 

Use language that covers the industry-standard tools most sophisticated clients expect firms to use, such as Microsoft 365 Copilot and Westlaw AI. 

Bonus Tips:  

  • Special circumstances call for affirmative consent: 

“This matter involves cross-border data processing. Our AI tools may process information on servers in [location]. We need your written consent.” OR 

“We'd like to pilot a new AI document review tool on this matter. It could reduce costs significantly. Here are the details and safeguards we’re implementing...” 

  • Warn your clients not to put confidential information into AI tools.  If you’re not sure why, check out this article.  

  • Honor client-initiated restrictions: Some clients (especially in regulated industries and government agencies) proactively restrict AI. Use the Client AI Preference Form to document their requirements. If restrictions create a material impairment (a >20% cost increase), disclose the impact and discuss alternatives. 

  • Practical rule of thumb: If 90%+ of your peer firms use the tool as standard practice, engagement letter disclosure suffices. If you're an early adopter or the use is unusual for your practice area, get affirmative consent. 

Pitfall 5: Incident Response Paralysis 

Mistake: When something goes wrong, spending days debating “Do we tell the client? What do we say? Who decides?” while the client remains uninformed. 

Why it fails: Delayed notification compounds problems. Always. A client who learns about a confidentiality breach from you immediately is annoyed but appreciates transparency. A client who learns weeks later (or from a third party) is furious and considers it a cover-up. 

Solution: Use this three-tier system to remove decision paralysis: 

Tier 1 (Critical): Presumption of client notice within 24 hours. Don't debate whether to notify; debate what to say and how to mitigate. 

Tier 2 (Major): Case-by-case decision within 24 hours. Clear criteria: Did the client specifically care about this risk? Could it affect the matter outcome? Does client sophistication suggest they'd want to know? 

Tier 3 (Minor): No client notice. Internal logging only. 

The classification itself is usually obvious: “Confidential Client Information (CCI) exposed to an unauthorized third party” is Tier 1. “Missed logging entry” is Tier 3. Give borderline cases a serious 10–15-minute discussion, not days of debate and indecision. 

Template language: Our Sample AI Policy includes notification templates. When a Tier 1 incident occurs, you're not drafting from scratch during a crisis—you adapt the template to the specifics. 

Preparation prevents paralysis: Semiannual or quarterly tabletop exercises help.  Lawyers are well-versed in hypotheticals.  

Hypothetical: “An associate uploaded a client memo to ChatGPT. Who gets notified? Within what timeframe?” 

Walking through scenarios before a real incident occurs can be part of AI training. These exercises take little time and make an actual crisis far easier to handle. 

Pitfall 6: No Real Consequences 

Mistake: Your firm’s policy says violations have consequences, but in practice, nothing happens. For example, if an attorney or legal assistant uses their personal, unapproved technology (“shadow use”), the firm says, “don't do that again,” with no documentation and no follow-up. 

Why it fails: The policy becomes a suggestion. Conscientious attorneys follow it and feel disadvantaged; perhaps they’re slower. Cavalier attorneys ignore it and seem to gain an advantage: they’re faster and face no downside. The message you’re sending is clear: the AI policy is nothing more than hot air because there are no consequences. 

Solution: Adopt clear, understandable consequences, and then enforce them consistently. Inconsistent enforcement is significantly more destructive to a firm's morale than no consequences at all, because no one knows for certain what will happen. Consistent, proportional enforcement is a necessity. List the consequences in your AI policy and follow them scrupulously. 

Pitfall 7: Ignoring Tool Configuration 

Mistake: Approving an AI tool such as Microsoft 365 Copilot because “Microsoft is reputable” without verifying that Commercial Data Protection is enabled. Or approving Westlaw AI without confirming your enterprise agreement includes a no-training clause. 

Why it fails: Tool security depends on configuration. The same tool can be safe or risky depending on the settings. Approving tools without verifying configuration creates false security. 

Solution: Always verify that your firm has opted for maximum security. A door with a lock does not provide the protection of a locked door until the lock is actually engaged. And suing OpenAI for failing to warn you about the need to tighten your security settings is a nonstarter. 

AI Tools Are Meant to Be Used, Appropriately 

“To err is human” is not an indictment of technology; it is a reminder that judgment, not ease of use or novelty, must lead. Generative AI can amplify excellence or error, depending on whether a firm’s policy is lived practice or just paper. The pitfalls cataloged here (perfectionism that delays guardrails, murky approval processes that push lawyers to shadow tools, training that entertains instead of instructs, consequences that never arrive, and configurations that nobody actually verifies) are hardly better than no AI governance at all. In fact, they can be worse, because a firm can believe it has established AI governance procedures and guidelines when, as a practical matter, there is no real governance. 

Having an AI policy is the start; implementing it is the middle and the end. Start with an adequate policy now and improve it deliberately. Tie roles and accountability to concrete workflows. Teach what the policy requires using the firm’s actual tools, matters, and staff. Tell clients what you do and why, ask for informed consent when it truly matters, and honor client preferences. 

The added bonus of this approach is that what is measured can be improved. Your AI policy can improve your practice and help you identify the ROI of your AI tools. Firms that postpone, paper over, or pretend will discover that the real risk was never the model; it was the gap between what the firm said and what the firm actually did. Close that gap, and AI becomes the greatest advantage to law practice since the personal computer. 