October 8, 2025

AI Liability and Risk Assessment

AI Policy, Governance, and Enforcement: Why Your Firm Needs All Three

Amy Swaner

Executive Summary

Most law firms now have an AI policy, but a policy alone is insufficient to meet ethical and professional standards. Like a speed limit without enforcement, an AI policy without governance and enforcement offers little real protection. To comply with ABA Model Rules 1.1 and 1.6 and to mitigate malpractice risk, firms must implement three integrated components: Policy, Governance, and Enforcement. The AI Policy defines what tools and uses are permitted, establishes confidentiality standards, and sets documentation and disclosure requirements. AI Governance operationalizes these rules through a cross-functional committee, vendor management, training, risk assessments, and tool approval workflows. AI Enforcement ensures accountability through monitoring, incident response, remediation, and proportionate discipline.

Firms that rely solely on written policies face predictable failures—such as associates uploading confidential data to unapproved tools or partners adopting risky AI applications. In contrast, firms with robust governance and enforcement can detect violations early, respond effectively, and demonstrate reasonable care. A complete framework makes compliance the path of least resistance and turns AI risk into a manageable, auditable process. Lawyers should assess their current systems and prioritize building governance and enforcement structures that protect clients, uphold ethical duties, and sustain trust in an AI-enabled legal practice.

Your AI Policy Matters

We’re finally reaching a point where most law firms have an AI policy.[1] I recently gave a presentation for the Iowa State Bar Association and asked the nearly 200 participants how many had an AI policy. Although a few outliers responded with “AI Policy? What’s that?”, most said they did have one. That’s a great first step. However, a policy alone is just that—a first step. For lack of a more elegant analogy, imagine an AI policy as a speed limit. Would a speed limit alone be enough to stop everyone from speeding? Clearly, the answer is no. There must also be a way to monitor drivers’ speeds, and penalties for those caught exceeding the limit.

The same is true of an AI policy. For all but solo practitioners, an AI policy is an important part of your law practice. But just as a speed limit means little without a way to monitor compliance, a law with no enforcement mechanism can hardly be called a law at all. Having an AI policy without AI Governance and Enforcement is failing firms. It results in associates uploading privileged documents to the free version of ChatGPT because no one explained the difference between free and enterprise tools, and partners adopting whichever shiny new AI application has the slickest sales pitch.

The problem isn’t a lack of good intentions. Most managing partners have read the ABA guidance and know that Model Rules 1.1 and 1.6 require technological competence and confidentiality protections. What they’re missing is the distinction between three fundamentally different concepts:

AI Policy establishes the rules.
AI Governance creates the operational framework to implement those rules.
AI Enforcement ensures accountability when rules are violated.

The consequences are predictable and expensive. When a malpractice claim arrives after an associate’s careless use of an unapproved tool, or a failure to review the output from an approved AI tool, the firm discovers its carefully drafted policy provides no defense. Courts and bar authorities don’t ask whether the firm had rules on paper—they ask whether the firm had reasonable procedures to ensure compliance. A policy without governance infrastructure and active enforcement mechanisms doesn’t meet that standard.

This article breaks down what Policy, Governance, and Enforcement actually mean in practice, provides practical frameworks for implementation, and shows how these three elements interact with and reinforce each other.

Defining the Three Components

AI Policy

Your AI policy is your rulebook. It defines what attorneys can and cannot do with artificial intelligence tools when representing clients.

A proper AI policy answers specific questions:

  • Which tools are approved for firm use?

  • What types of client information can be input into those tools?

  • What review standards must attorneys meet before relying on AI-generated output?

  • When must the firm disclose AI use to clients? When must the firm obtain client consent? See this article on disclosing AI use to your clients.

  • What documentation must be maintained?

The policy should address scope clearly (who it covers, which practice areas), establish confidentiality protections (vendor requirements, data handling), set competence standards (verification requirements), and create documentation obligations.

AI Governance

AI Governance is your implementation mechanism. It provides the operational framework—the structure, processes, and accountability mechanisms—that turns policy rules into daily practice. If policy is the law, governance is the administrative state that implements it.

Governance requires:

  • Clear ownership and accountability: Someone must be responsible for AI risk. This will look different depending on the size of your firm. At a smaller firm, it might be one to three people. In a mid-to-large firm, this should be a cross-functional committee with representation from practice groups, technology, compliance and firm management.

  • Vendor management: Due diligence processes, contract negotiations, and approved tool lists

  • Training programs: Initial competency building, role-specific modules, and regular update training

  • Risk assessment workflows: Frameworks for evaluating different AI use cases and determining appropriate controls

  • Tool approval processes: Structured methods for evaluating and authorizing new AI applications

AI Enforcement

AI Enforcement is your accountability mechanism. It encompasses monitoring systems that detect violations, investigation protocols that determine what happened, remediation processes that fix problems, disciplinary measures that impose consequences, and feedback loops that drive continuous improvement.

Enforcement components include:

  • Monitoring: Automated systems (network monitoring, data loss prevention) and manual oversight (audits, peer review)

  • Incident detection and investigation: Clear reporting channels and structured investigation protocols

  • Remediation: Technical, client, and legal responses to violations

  • Disciplinary measures: Graduated consequences proportionate to violation severity

  • Feedback loops: Systematic capture of lessons learned to improve policy and governance

These three components form an integrated system. Policy without governance produces paper compliance that doesn’t reflect operational reality. Governance without policy has no substantive standards to implement. Either one without enforcement means rules exist but violations carry no consequences.

What Each Component Should Look Like in Practice

Building Your AI Policy

A functional AI policy needs to be comprehensive yet accessible. See our full article about writing your AI policy. Briefly, here are the essential elements:

Core Components Checklist:

  • Scope and applicability (who is covered, which practice areas)

  • Approved tools list (specific applications by name)

  • Permitted use cases for each tool category

  • Confidentiality requirements (when client data may be processed, vendor contract requirements)

  • Prohibited uses (state specifically)

  • Competence and review standards (verification requirements)

  • Client disclosure requirements (notify clients, or receive their signed acknowledgement)

  • Documentation obligations

  • Contact for questions or edge cases

  • A mechanism for new tools to be considered and approved or denied

Building Your AI Governance Structure

Governance is where most firms stumble. It’s not something you can create once and then set aside; it requires sustained organizational effort that many firms underestimate.

Governance requires cooperation between legal, IT, and risk management. This means governance cannot be delegated to one person or one department. The managing partner who tries to handle AI governance alone will drown in technical details they don’t understand. Even at the smallest firms, the IT director who unilaterally blocks AI tools will face partner rebellion when approved alternatives don’t meet practice needs. The compliance officer who operates in isolation will create approval processes so burdensome that attorneys adopt shadow IT to bypass them.

Effective governance requires a cross-functional team with clear authority, regular engagement, and accountability for outcomes. This team must balance competing priorities: security versus usability, risk mitigation versus innovation, firm-wide standards versus practice-specific needs. Getting this balance right determines whether your governance structure enables safe AI adoption or becomes an obstacle that people work around.

Essential Governance Framework:

Committee Structure:

The AI Governance Committee is your operational nerve center. Structure matters because poor design creates either gridlock (too many members, unclear authority) or blind spots (missing critical perspectives, insufficient expertise).

Composition requirements:

  • Chair with authority: Typically, this is a practice group leader, managing partner, or senior partner with technology interest and firm leadership credibility. The chair must have authority to make binding decisions and enforce committee determinations. Without this authority, the committee becomes merely advisory, a waste of time that people ignore.

  • Practice area representatives: At least one partner from each major practice group (litigation, corporate, real estate, estate planning, etc.). These representatives serve two functions: they bring practice-specific knowledge to evaluate whether AI tools actually work for real matters, and they serve as liaisons to their groups, explaining committee decisions and gathering feedback. Representation matters—if transactional partners feel unrepresented on a litigation-heavy committee, they’ll bypass governance processes.

  • IT director or senior technology person: Essential for evaluating technical security, assessing vendor infrastructure, understanding integration requirements, and implementing technical controls. This person translates technical specifications into risk language the committee can understand and translates committee decisions into technical implementations that will work for your firm.

  • Compliance officer, loss prevention, or risk management partner: Reviews vendor contracts, identifies liability issues, assesses professional responsibility implications, and ensures decisions align with firm risk tolerance. This person catches the legal landmines—unfavorable indemnification clauses, inadequate breach notification provisions, and ambiguous data processing terms—that practitioners might miss.

  • Associate or junior partner representative: Provides a ground-level perspective on how AI tools actually function in daily practice. Associates often have more direct experience with AI than senior partners and can identify practical problems with governance processes, such as approval workflows that are too slow for real-world deadlines or training programs that miss critical scenarios.

Operational structure:

The operational structure should meet the needs of your particular firm. Here are guidelines to get you started.

  • Regular meeting schedule: Monthly meetings work for most firms during initial implementation. Quarterly meetings may suffice once systems stabilize, but monthly touchpoints maintain momentum and ensure issues don’t accumulate. Schedule meetings far in advance and protect the time—cancellations signal that governance isn’t a real priority. If you meet quarterly, the chair should send monthly updates and solicit feedback and agenda items.

  • Emergency session protocols: Establish criteria for convening emergency sessions (serious incidents, urgent tool approval requests, significant policy violations) and procedures for rapid decision-making. Define what constitutes an emergency, how quickly the committee must convene (typically 24-48 hours), and who has authority to trigger an emergency session.

  • Clear decision authority: Document which decisions the committee can make unilaterally versus which require management committee approval. Ambiguity here creates friction and delays. Most effective structures give governance committees authority to approve standard tools, deny high-risk applications, mandate training requirements, and establish monitoring protocols. Major policy changes or significant financial commitments typically require management committee approval.

  • Documented minutes: Maintain written records of all decisions, including the rationale, any dissenting views, and specific action items with assigned owners and deadlines. These minutes serve multiple purposes: they create institutional memory, ensure accountability for action items, demonstrate reasonable care if challenged, and provide transparency to the broader firm.

  • Meeting structure: Effective meetings follow a consistent agenda: review of action items from prior meeting, monitoring reports (usage statistics, incident summaries), new tool approval requests, policy interpretation questions, training program updates, and emerging issues. Time-box discussions to maintain efficiency; governance shouldn’t consume excessive partner time, but it must receive sufficient attention to function.

Vendor Management Process:

  • Standardized due diligence checklist covering security certifications, data handling, training policies, contract terms, pricing, integration requirements

  • Standard contract provisions: data processing addendums, confidentiality obligations, training opt-outs, 24-hour breach notification, indemnification, termination rights with data deletion

  • Clear responsibility assignments: IT reviews security, compliance reviews contracts, practice groups assess functionality

Tool Approval Workflow:

  1. Attorney submits standard request form (tool name, vendor, use case, data types, cost, alternatives)

  2. IT reviews technical security (3 business days)

  3. Compliance reviews contract terms (3 business days)

  4. Committee makes final decision (5 business days)

  5. Approved tools added to authorized list with usage guidelines

  6. Denied requests receive written explanation and alternatives
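
To make the turnaround targets above concrete, here is a minimal sketch, in Python, of how a firm might track approval requests through the three review stages. The class, field names, stage labels, and the example tool and vendor (“SummarizeNow,” “Acme AI”) are hypothetical illustrations, not a prescribed system; many firms will track the same information in a ticketing system or a shared spreadsheet.

from dataclasses import dataclass, field
from datetime import date, timedelta

# Review stages and turnaround targets in business days, per the workflow above.
# (The timedelta below approximates business days with calendar days for brevity.)
SLA_BUSINESS_DAYS = {"it_security": 3, "compliance": 3, "committee": 5}

@dataclass
class ToolRequest:
    tool_name: str
    vendor: str
    use_case: str
    data_types: str              # e.g., "public data only" or "client confidential"
    submitted: date
    stage: str = "it_security"   # first stop: IT security review
    decisions: dict = field(default_factory=dict)

    def due_date(self) -> date:
        """Rough due date for whichever review stage the request is currently in."""
        return self.submitted + timedelta(days=SLA_BUSINESS_DAYS[self.stage])

    def record_decision(self, stage: str, approved: bool, notes: str) -> None:
        """Log a stage decision and advance the request (or close it as denied)."""
        self.decisions[stage] = {"approved": approved, "notes": notes}
        order = list(SLA_BUSINESS_DAYS)
        if not approved:
            self.stage = "denied"
        elif order.index(stage) + 1 < len(order):
            self.stage = order[order.index(stage) + 1]
        else:
            self.stage = "approved"

# Example: a hypothetical summarization tool moves through the three reviews.
req = ToolRequest("SummarizeNow", "Acme AI", "deposition summaries",
                  "client confidential", date.today())
req.record_decision("it_security", True, "SOC 2 report reviewed")
req.record_decision("compliance", True, "DPA and training opt-out in the contract")
req.record_decision("committee", False, "denied pending enterprise-tier pricing")
print(req.stage)   # -> "denied"

The point is not the tooling but the discipline: every request gets a documented decision, an owner, and a deadline, which in turn supports the written explanations owed to attorneys whose requests are denied.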

Training Program:

  • Mandatory initial training (interactive CLE format):

    • Ethics foundations (Rules 1.1, 1.6, 5.3)

    • Firm policy with concrete examples

    • Hands-on tool demonstrations

    • Hallucination detection exercises (practice identifying planted errors)

  • Role-specific modules for different practice areas

  • Competency assessments before granting tool access

  • Annual refresher training with policy updates and case studies

Risk Assessment Framework:

  • Simple matrix considering data sensitivity, matter stakes, AI role in work product, client sophistication, regulatory environment

  • Determines appropriate approval levels, supervision requirements, documentation standards
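
As an illustration of how such a matrix might be reduced to practice, here is a minimal sketch in Python. The factor names track the list above, but the 1-to-3 scoring, thresholds, and tier descriptions are assumptions a firm would calibrate to its own risk tolerance; they are not a standard.

# Each factor is scored 1 (adds little risk) to 3 (adds significant risk).
FACTORS = ("data_sensitivity", "matter_stakes", "ai_role_in_work_product",
           "client_sophistication", "regulatory_environment")

def risk_tier(scores: dict) -> str:
    """Map factor scores to an overall tier that drives approval level,
    supervision requirements, and documentation standards."""
    missing = [f for f in FACTORS if f not in scores]
    if missing:
        raise ValueError(f"missing factor scores: {missing}")
    total = sum(scores[f] for f in FACTORS)
    if scores["data_sensitivity"] == 3 or total >= 12:
        return "HIGH: committee approval, partner review of all output, full documentation"
    if total >= 8:
        return "MEDIUM: practice-group approval, supervising-attorney review, standard documentation"
    return "LOW: pre-approved tools only, normal work-product review"

# Example: AI-assisted drafting that uses confidential facts in a regulated industry.
print(risk_tier({"data_sensitivity": 3, "matter_stakes": 2, "ai_role_in_work_product": 2,
                 "client_sophistication": 1, "regulatory_environment": 3}))

Whatever thresholds a firm chooses, the output should map directly onto the approval levels, supervision requirements, and documentation standards the framework already defines.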

Feedback Mechanisms:

  • Quarterly surveys on tool usefulness and challenges

  • Safe channels for reporting problems or near-misses

  • Regular review of attorney experiences to inform governance adjustments

Building Your AI Enforcement System

AI infractions tend to be easier to address when they are caught early, before they become large problems that come to the attention of a court or a client. If an enforcement system is working correctly, infractions will receive attention before they ever leave the firm.

Monitoring Infrastructure:

Automated:
  • Network monitoring tracking approved tool access

  • Data loss prevention flagging uploads to unapproved sites (a minimal detection sketch follows this section)

  • Endpoint protection blocking unauthorized applications

  • Usage dashboards for monthly committee review

Manual:
  • Quarterly audits of AI-assisted work product (random sampling)

  • Peer review protocols for high-stakes matters

  • Supervising attorney certification at matter closing
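
To illustrate the data loss prevention idea in the automated list above, here is a minimal sketch in Python that scans an exported web-proxy log for large uploads to AI sites that are not on the firm’s approved list. The log format, column names, file name, and domain lists are hypothetical assumptions; in practice this logic lives in commercial DLP, proxy, or endpoint tools and is set through configuration rather than custom scripts.

import csv
from urllib.parse import urlparse

# Hypothetical domain lists the governance committee would maintain.
APPROVED_AI_DOMAINS = {"ai.approved-vendor.example.com"}
KNOWN_PUBLIC_AI_DOMAINS = {"chat.free-ai.example.com", "summarize.example.net"}

def flag_suspect_uploads(log_path: str) -> list[dict]:
    """Return proxy-log rows that look like uploads (POST/PUT over ~10 KB)
    to known public AI sites that are not on the approved list."""
    flagged = []
    with open(log_path, newline="") as f:
        # Assumed columns: timestamp, user, method, url, bytes_sent
        for row in csv.DictReader(f):
            host = urlparse(row["url"]).hostname or ""
            looks_like_upload = row["method"] in ("POST", "PUT") and int(row["bytes_sent"]) > 10_000
            if looks_like_upload and host in KNOWN_PUBLIC_AI_DOMAINS and host not in APPROVED_AI_DOMAINS:
                flagged.append(row)
    return flagged

# Example: build a monthly exception report for the governance committee.
for hit in flag_suspect_uploads("proxy_log_export.csv"):
    print(hit["timestamp"], hit["user"], hit["url"])

Whatever the mechanism, the output should feed the usage dashboards and monthly committee review described above, not sit in a log no one reads.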

Incident Response:
  • Self-reporting requirements (24-hour reporting of suspected violations)

  • Compliance hotline or email to governance committee

  • Triage criteria for severity assessment:

    • Low: inadvertent violations, no client harm

    • Medium: confidential information exposure, mitigation possible

    • High: privilege breaches, actual client harm, willful violations

Investigation Protocol:
  1. Compliance officer interviews involved attorney (within 24 hours)

  2. IT pulls relevant logs and documents

  3. Committee reviews evidence and makes findings (48-72 hours for high-severity, one week for lower-severity)

  4. All findings documented in writing with supporting evidence

Remediation Framework:
  • Technical: revoke access, implement additional controls, attempt data deletion

  • Client: immediate notification, impact assessments at firm expense, credit monitoring

  • Legal: consult ethics counsel, notify malpractice carrier, assess bar reporting obligations

Disciplinary Measures (Graduated Three-Tier Framework):

The following are a few examples of guidelines, set out in three tiers of increasing severity.

Level 1 (Inadvertent, low harm):
  • Example: Brief access to unapproved tool without inputting client data

  • Response: Mandatory retraining, written warning, heightened monitoring (90 days)

Level 2 (Negligent, moderate harm):
  • Example: Uploaded confidential documents to unapproved tool, self-reported quickly

  • Response: 30-90 day AI tool suspension, retraining with competency reassessment, written reprimand in file, possible compensation reduction

Level 3 (Reckless, significant harm, or repeated violations):
  • Example: Willful policy violation causing client harm

  • Response: Immediate termination of AI access, potential employment termination, ethics complaint referral, client notification with firm accepting responsibility

Feedback Loops:
  • Root cause analysis after every incident

  • Policy updates addressing ambiguities revealed by violations

  • Governance process improvements (training enhancements, workflow acceleration)

  • Case studies for future training (anonymized when appropriate)

The Jane Doe Case Study

Six months after implementing its AI framework, a 50-attorney litigation firm’s automated monitoring flagged unusual activity. Attorney Jane Doe attempted to upload documents to a free AI tool 47 times over a two-week period. The data loss prevention system blocked most attempts, but seven uploads succeeded before the pattern was detected.

Investigation (Days 1-3): The compliance officer contacted Jane within hours. Jane explained she found the approved AI platform’s interface slow and needed quick summaries for an urgent motion. She didn’t think using a free AI tool for “just summaries” was serious. IT confirmed the uploads included client memos containing confidential competitive information for Client XYZ.

Risk Assessment: The committee convened an emergency session. This was high-severity: confidential information exposed to a public AI tool whose terms allow training data use. The tool’s free tier permits 30-day data retention and potential training use, with no opt-out.

Immediate Remediation:

  • Jane’s access to all AI tools was revoked

  • The AI tool’s vendor was contacted with a request for data deletion (limited success given its terms of service)

  • Client XYZ was notified within 24 hours by the managing partner and the compliance officer

  • The firm offered to pay for a competitive intelligence audit and provided a fee credit

  • Ethics counsel was consulted (advised Rule 1.6 issue, but no bar notification required given prompt remediation)

  • The firm’s malpractice carrier was notified

Discipline: Jane received Level 2 consequences: 90-day suspension from AI tools, mandatory four-hour retraining with written exam, 10% compensation reduction for the quarter, written reprimand in file, and requirement to present “lessons learned” at a firm-wide meeting.

Systemic Improvements (Within 30 days):

  • Policy updated with screenshots of prohibited AI interfaces labeled “DO NOT USE”

  • IT implemented network-level blocks on all public AI sites (not just monitoring)

  • Training program revised to include incident as case study

  • Tool approval workflow accelerated to 48-hour turnaround for standard tools

  • Monitoring increased from quarterly to monthly audits for one year

This enforcement example demonstrates real accountability while driving systemic improvement. The violation was detected quickly through automated monitoring. Investigation followed clear protocols. Remediation addressed immediate client risk. Discipline was proportionate. Most importantly, systemic improvements addressed root causes rather than simply punishing an individual.

Conclusion

The gap between firms that manage AI risk effectively and those that simply hope for the best is widening. The difference isn’t resources or sophistication—it is understanding that AI risk management requires three distinct but integrated functions working together.

Firms that treat AI policy as a standalone document will continue facing preventable incidents. The solution isn’t writing better policy language. It’s building operational systems that make compliance the path of least resistance—governance structures with clear ownership, training programs that build genuine competency, and enforcement mechanisms that detect problems early and drive continuous improvement.

None of this requires perfection. Small firms can implement proportionate frameworks with simpler structures. Large firms need more elaborate systems but have more resources. What every firm needs, regardless of size, is an understanding of each of the three components and a working knowledge of how to keep them functioning together for the benefit of the firm and its clients.

Where to Start Now:

  1. Assess honestly: Do you have a clear policy? Functional governance? Active enforcement? For most firms, the answer is ‘yes’ to the first and ‘no’ to the second and third.

  2. Prioritize governance and enforcement: Don’t rewrite your policy unless it is clearly outdated. Instead, build the governance structure to implement it and create enforcement mechanisms to ensure compliance.

  3. Start with fundamentals:

    • Implement basic monitoring so you can see what’s happening

    • Establish a governance committee with decision-making authority

    • Create mandatory training so everyone understands expectations

    • Build from there, learning and adjusting based on experience

When the inevitable AI incident occurs at your firm, you will either have to explain why you had no systems in place, or you will be able to show the policy you established, the governance structure you built, the training you provided, the monitoring you conducted, and the enforcement action you took. One answer ends careers and closes firms. The other demonstrates professional responsibility and reasonable care.


© 2025 Amy Swaner. All Rights Reserved. May use with attribution and link to article.
