April 10, 2025

AI Regulatory Frameworks

U.S. Federal

A New Playbook for Federal AI Risk and Regulation

It seems clear that the Trump Administration’s goal is to create new compliance expectations without stifling technological growth.

Amy Swaner

Executive Summary


AI policy in the United States is shifting rapidly. After revoking the Biden Administration’s AI executive orders, President Trump’s Administration issued OMB Memo M-25-21 to accelerate AI innovation while reintroducing structured governance for federal agencies. The Memo requires agencies to develop formal AI strategies, appoint Chief AI Officers, manage risks for high-impact AI, and increase public transparency. Although private companies remain largely deregulated, the memo’s risk frameworks are likely to influence state legislation and future AI litigation. Legal and AI professionals must prepare for new compliance obligations in federal contracting and anticipate broader regulatory developments as AI policy evolves.

Introduction

AI policy in the United States is evolving almost as quickly as AI innovation itself. Under President Biden, Executive Orders 14110 and 14141 laid the foundation for a cautious, safety-first approach to artificial intelligence, emphasizing civil rights protections, international cooperation, and rigorous oversight. President Trump swiftly revoked these orders upon taking office, signaling a decisive shift toward deregulation and accelerated development.

Now, with the April 3, 2025 release of OMB Memo M-25-21 (“Memo M-25-21”), the Trump Administration has recalibrated its AI strategy. While continuing to favor rapid innovation and minimal regulation of the private sector, the new Memo imposes structured governance obligations on federal agencies. This move reintroduces formal oversight into federal AI use, albeit lighter and more flexible than before. It seems clear that the Trump Administration’s goal is to create new compliance expectations without stifling technological growth.

This article examines how the new federal AI playbook reshapes the regulatory environment, what it means for legal and AI professionals, and the strategic considerations businesses must now keep in mind.

What the New Memo (M-25-21) Does

In Memorandum M-25-21 (“Memo”), issued by Russell T. Vought, Director of the Office of Management and Budget, the Administration has drawn a new line between innovation and accountability—one that every AI developer, lawyer, and policymaker will need to watch closely. Memo M-25-21 reintroduces formal AI governance within the federal government but with a clear emphasis on flexibility, speed, and American technological leadership. While private companies remain largely outside the scope of direct federal regulation, federal agencies are now required to adopt minimum risk management practices for high-impact AI, establish internal AI leadership structures, and publicly report on their AI activities. The new framework prioritizes responsible innovation without returning to the more restrictive approach of the previous Administration.

1. AI Strategies

The Memo requires all CFO Act agencies1—generally the largest and most important federal agencies—to develop and publicly release formal AI Strategies within 180 days. These strategies must identify existing and planned AI use cases, assess the agency’s current level of AI maturity, and set clear goals for infrastructure development, data governance, workforce training, and risk management. Each strategy must also outline how the agency will remove bureaucratic barriers to AI innovation and ensure responsible, efficient adoption. Agencies are instructed to use a standard template provided by OMB to promote consistency and accountability across government.

The goal of requiring AI Strategies is twofold. First, it forces agencies to take a proactive, organized approach to integrating AI into their operations rather than allowing fragmented or ad hoc implementation. Second, it aligns agency AI efforts with broader policy priorities: accelerating innovation, maximizing the value of taxpayer investments, protecting civil rights and privacy, and strengthening national security. By mandating public disclosure of these strategies, the Memo also seeks to promote transparency and invite public scrutiny, ensuring that federal AI initiatives maintain public trust.

2. AI Governance Structures

To ensure that AI adoption across the federal government is consistent, accountable, and strategically directed, the Memo mandates the creation of new AI governance structures within each agency. Every agency must appoint a Chief AI Officer (CAIO) within 60 days. The CAIO is responsible for promoting responsible AI innovation, overseeing AI risk management efforts, advising senior leadership, and coordinating internal AI activities. In addition, within 90 days each agency must establish an AI Governance Board composed of senior officials, including Deputy Secretaries, Chief Information Officers, legal counsel, privacy officers, and civil rights representatives, to oversee the agency’s AI use at all levels.

These governance requirements are designed to embed AI expertise and oversight into the highest levels of agency leadership. By requiring agencies to distribute accountability across different disciplines—including legal, technical, and policy functions—the Memo aims to prevent AI projects from operating in silos and to ensure that innovation efforts are balanced with ethical, legal, and operational safeguards. Coordination at the interagency level is also reinforced through the creation of the Chief AI Officer Council, a federal body led by OMB, which will standardize best practices and promote efficient AI adoption across the executive branch.

3. Risk Management for “High-Impact AI”

A central feature of the Memo is its detailed framework for managing risks associated with “high-impact AI.” High-impact AI refers to systems whose outputs materially affect individuals' civil rights, health, safety, access to services, or national security interests. Agencies using high-impact AI must implement a full suite of safeguards, including pre-deployment testing, comprehensive AI impact assessments, continuous performance monitoring, and human oversight mechanisms. They must also offer timely remedies or appeals for individuals adversely affected by AI-enabled decisions. Critically, if a high-impact AI system fails to meet these minimum risk management practices, the agency must suspend its use until compliance is restored.

The risk management requirements reflect the Administration’s attempt to balance innovation with public trust and legal accountability. Rather than imposing broad restrictions on all AI use, the Memo targets its strictest requirements at systems that have significant potential to harm individuals or critical infrastructure. By doing so, it recognizes that while AI can enhance government efficiency and effectiveness, it can also introduce serious risks if deployed improperly. These protections are intended to prevent discriminatory outcomes, safeguard civil liberties, and ensure that government use of AI remains aligned with constitutional and statutory obligations.
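To make the suspension rule concrete, the gating logic the Memo describes can be sketched in a few lines. This is a minimal, hypothetical illustration: the practice names, class name, and fields below are paraphrases of the safeguards the Memo lists (pre-deployment testing, impact assessments, continuous monitoring, human oversight, and remedies for affected individuals), not terminology drawn from the Memo itself.

```python
from dataclasses import dataclass, field

# Hypothetical labels paraphrasing the Memo's minimum risk management
# practices for high-impact AI; not official terminology.
REQUIRED_PRACTICES = {
    "pre_deployment_testing",
    "impact_assessment",
    "continuous_monitoring",
    "human_oversight",
    "remedy_or_appeal_process",
}

@dataclass
class HighImpactAISystem:
    name: str
    completed_practices: set = field(default_factory=set)

    def may_operate(self) -> bool:
        # The Memo requires suspending use until all minimum practices are met.
        return REQUIRED_PRACTICES <= self.completed_practices

    def missing_practices(self) -> set:
        return REQUIRED_PRACTICES - self.completed_practices

# Illustrative system name; any real inventory entry would come from the agency.
system = HighImpactAISystem("benefits-eligibility-screener")
system.completed_practices = {"pre_deployment_testing", "impact_assessment"}
print(system.may_operate())   # False: use must be suspended until compliant
print(sorted(system.missing_practices()))
```

The key design point mirrors the Memo's all-or-nothing rule: a high-impact system is either fully compliant with the minimum practices or it does not run.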

4. Public Transparency Requirements

In a move that might surprise Trump Administration critics, public trust is a central theme of the new Memo, and transparency plays a crucial role in achieving it. Federal agencies must publish annual AI use case inventories, disclosing how and where AI is being deployed in their operations. They must also publicly release their formal AI Strategies, report on any waivers granted for high-impact AI risk management practices, and provide meaningful opportunities for the public to give feedback on agency AI use. Agencies are expected to make these disclosures accessible, understandable, and updated regularly to reflect ongoing developments.

These transparency requirements serve several important purposes. They are intended to enhance public confidence in the federal government’s use of AI, create external pressure for responsible behavior, and provide civil society, industry, and researchers with insight into government AI initiatives. By requiring agencies to solicit public input and disclose critical information about their AI use, the new policy seeks to prevent AI systems from operating in ways that are opaque, discriminatory, or otherwise harmful to democratic values. Transparency, in this framework, is not a formality but a safeguard designed to align AI innovation with accountability and civic trust.

Together, these new requirements mark a significant evolution in federal AI governance—one that, while lighter in tone than previous efforts, carries real consequences for legal professionals, businesses, and technology developers navigating the new landscape.

Practical Management of AI Risks

Granted, I tend to be an AI risk management zealot. Even taking that into consideration, this Memo contains some exceptionally practical and useful requirements, ones that could serve as a model for private companies or individual states. The Memo includes additional requirements that contribute to effective AI governance and risk management, but these eight form the core of a solid risk management framework.

1. Training Users—Ensures that individuals operating AI systems understand how to use them responsibly, interpret outputs correctly, and recognize potential risks.

2. Developing Use Cases—Focuses AI deployment on clear, mission-driven goals, preventing (albeit not eliminating) malicious, biased, wasteful, or inappropriate applications of the technology.

3. Ongoing Monitoring of AI Use—Detects performance issues, unintended consequences, or evolving risks during the AI system’s actual use, allowing for timely intervention.

4. Regular Assessments of AI—Reassesses AI systems periodically to ensure they remain effective, fair, and safe as conditions, data, and technologies change.

5. Appointment of CAIOs and Governance Boards—Centralizes leadership, governance, and accountability for AI activities within agencies, promoting coordinated and strategic AI adoption.

6. Knowledge Sharing—Encourages reuse of models, data, and tools across government, accelerating innovation, improving efficiency, and saving taxpayer dollars.

7. Identifying and Monitoring High-Impact Use—Prioritizes scrutiny and safeguards for AI systems that affect rights, safety, or critical services, reducing the risk of significant harm.

8. Transparency and Public Feedback—Builds public trust by making AI use visible, understandable, and responsive to citizen concerns, ensuring democratic accountability.

This list, with a few refinements, aligns closely with NIST, ISO, OECD, and EU recommendations. It is compliance-focused, practical, and based on sound principles. The framework is designed to detect and prevent both model drift (where an AI model’s internal parameters or decision boundaries become less effective over time) and data drift (a shift in input data that degrades model performance), enabling early detection of problems with model performance. In summary, this framework is a solid foundation for AI implementation.
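As one illustration of how the ongoing-monitoring practice might catch data drift in practice, the sketch below computes a Population Stability Index (PSI), a widely used heuristic that compares a feature's distribution at deployment time with its distribution in current traffic. The 0.2 alert threshold and the synthetic data are illustrative assumptions, not anything the Memo specifies.

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index, a common data-drift heuristic.

    Compares the distribution of a feature in the baseline data
    ('expected') against current inputs ('actual'). Values above
    roughly 0.2 are often treated as significant drift; that cutoff
    is a convention, not a standard.
    """
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0

    def bucket_shares(values):
        counts = [0] * bins
        for v in values:
            i = min(max(int((v - lo) / width), 0), bins - 1)
            counts[i] += 1
        # Small smoothing term avoids log(0) for empty buckets.
        return [(c + 1e-6) / (len(values) + 1e-6 * bins) for c in counts]

    e, a = bucket_shares(expected), bucket_shares(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [x / 100 for x in range(100)]          # training-time inputs
drifted  = [0.5 + x / 200 for x in range(100)]    # inputs shifted upward
print(psi(baseline, baseline) < 0.01)   # True: no drift against itself
print(psi(baseline, drifted) > 0.2)     # True: flagged for review
```

In a monitoring pipeline of the kind the Memo contemplates, a flagged score would feed the human-oversight and suspension process rather than trigger automatic action on its own.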

Agency Oversight and Enforcement

Although the Memo promotes innovation and flexibility, it makes it clear that compliance is not optional. The Office of Management and Budget (OMB) is charged with overseeing agency implementation, including reviewing AI Strategies, Compliance Plans, AI Use Case Inventories, and risk waivers. Agencies must meet strict reporting deadlines, participate in interagency coordination through the Chief AI Officer Council, and adhere to public transparency obligations. Chief AI Officers within each agency are directly responsible for internal enforcement, tracking high-impact AI use, and certifying compliance with risk management requirements.

Through this structure, OMB ensures that agencies not only embrace AI innovation but also maintain accountability, manage risk effectively, and uphold public trust.

The Memo’s focus on innovation, risk management, and transparency creates important strategic considerations for lawyers and AI industry professionals, particularly around federal contracting requirements, state legislation modeled on the Memo’s risk frameworks, and the Memo’s potential influence on future AI litigation.

Conclusion

Memo M-25-21 signals a new phase in U.S. AI policy—one that accelerates innovation while reintroducing formal governance standards for federal agencies. Although the private sector remains largely unregulated at the federal level, the Memo’s emphasis on risk management, transparency, and accountability will inevitably influence broader regulatory trends. Legal and AI professionals must stay attuned to this evolving landscape, helping clients and organizations navigate new compliance expectations, mitigate emerging risks, and seize opportunities in an increasingly complex and competitive AI environment. As AI policy continues to evolve alongside the technology itself, insightful and knowledgeable legal guidance will be essential to balance innovation with responsibility.

1 A CFO Act agency refers to a federal agency covered by the Chief Financial Officers Act of 1990 (CFO Act) (31 U.S.C. § 901(b)), which was passed to improve financial management across the federal government by requiring major agencies to appoint Chief Financial Officers and standardize accounting practices. Examples include the Departments of Defense, Justice, Health and Human Services, Homeland Security, and Education.

©2024 Lexara Consulting LLC. All Rights Reserved.