June 13, 2025
AI Liability and Risk Assessment
Employment Law

Amy Swaner
A Surprising Message
Not long into his retirement, in February 2022, Thomas Bates got an unwelcome shock. “Maybe because I had additional time on my hands, I did not take the quick peek at my retirement portfolio balance [like I normally would] but began investigating a link I had never opened before. I found it nested under a ‘forms and documents’ link with another link entitled ‘correspondences’. I opened it and saw a correspondence sent in October 2021 that I had missed. It said...
“Based on new information we received, we have reassigned your risk tolerance from conservative to aggressive.”
Mr. Bates was shocked. “I said ‘what? I haven't supplied new info!’” He never intended to provide information that would change the risk tolerance in his retirement account to “aggressive” so close to retiring. “I thought, ‘that ain't right!’” The timing could hardly have been worse: his portfolio was switched to aggressive just as he was retiring, in the midst of a market crash. He said seeing a 25% hit to his hard-won retirement savings was physically, mentally, and emotionally painful. As he followed the breadcrumb trail, he found that an innocuous interactive online tool called “My Interactive Retirement Planning” (“MIRP”) had likely led to his financial woes. Bates had randomly clicked on the MIRP tool and used it to model, hypothetically, retiring a year earlier than anticipated.
In my professional capacity I focus on GenAI in the legal realm, so this case caught my attention: Mr. Bates alleges that artificial intelligence was involved in the series of events that led to a disastrous outcome for him. Artificial intelligence (AI) increasingly governs decisions that affect us, including financial ones. The case of Thomas Mack Bates v. California Department of Human Resources illustrates the need for AI oversight and transparency. Bates, a retired state employee from Roseville, California, alleges that a faulty AI-driven decision-making system compromised his retirement account. If he is proved correct in his belief that AI was at the heart of his loss, the resulting legal battle may set precedents for how AI is scrutinized in public-sector decision-making.
The Legal Battle Unfolds
On May 2, 2023, Thomas Bates filed suit in Sacramento County Superior Court against the California Department of Human Resources (CalHR) and its contractor, Nationwide Investment Advisors, LLC, asserting that computational and AI bias unfairly influenced the handling of his 401(k) portfolio.
In his Notice of Judicial Review, Bates emphasized that the system used by Nationwide and its independent expert Wilshire relied on flawed algorithms or biased data sets, leading to a breach of fiduciary duties and violations of California’s Unfair Competition Law (UCL), Bus. & Prof. Code §17200 et seq.
The Heart of the Claim: Retirement Funds Allocation
At the center of Bates’s claim is the alleged misuse of AI tools for allocating and managing his retirement funds. According to Bates, “this [MIRP] tool overrode my signed questionnaire [used to determine retirement fund allocations]. There were no conspicuous labels or alerts that hypothetical informational changes would be reflected in my accounts. There was not even a ‘submit’ button.” Despite this, the information he entered triggered a reallocation of his retirement funds. “This was not my intention or purpose in using the online tool,” said Bates. He stated he clearly saw the disclosure for the online tool, which indicated it was only a hypothetical modeling tool. “In my case, it was not hypothetical. It became a real-life saga!” Bates asserts computational bias, improper training data, skewed data representation, and the inherent biases of AI developers as potential sources of this error.
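Whether or not the MIRP tool was AI-driven, the design failure Bates describes can be made concrete. Below is a minimal, purely hypothetical sketch (the class and field names are mine, not Nationwide’s) of the separation a “hypothetical modeling tool” should maintain: scenario inputs live in draft state, and nothing reaches the account of record without an explicit confirmation step, the “submit” button Bates says was missing.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class RiskProfile:
    tolerance: str        # e.g., "conservative" or "aggressive"
    retirement_year: int

class RetirementPlanner:
    """Illustrative modeling tool that keeps 'what if' inputs
    separate from the member's account of record."""

    def __init__(self, account_profile: RiskProfile):
        self._account_profile = account_profile  # the committed profile
        self._draft = None                       # hypothetical-only state

    def model_scenario(self, **changes) -> RiskProfile:
        # Hypothetical inputs mutate only the draft, never the account.
        self._draft = replace(self._account_profile, **changes)
        return self._draft

    def commit(self, user_confirmed: bool) -> RiskProfile:
        # Nothing is applied without an explicit, affirmative
        # confirmation: the missing "submit" step in Bates's account.
        if not user_confirmed or self._draft is None:
            raise PermissionError("No confirmed change to apply.")
        self._account_profile = self._draft
        return self._account_profile

planner = RetirementPlanner(RiskProfile("conservative", 2022))
planner.model_scenario(retirement_year=2021)  # explore retiring a year early
# Without commit(user_confirmed=True), the account of record is untouched:
assert planner._account_profile.tolerance == "conservative"
```

Bates’s allegation, in effect, is that the second half of this contract was missing: his scenario inputs flowed straight through to the account of record.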
Bates drew on Executive Order N-12-23, issued by Governor Gavin Newsom, which recognized the dangers of algorithmic discrimination and called for robust guardrails in AI deployment across California’s public sector. He referenced this order as evidence that his concerns fall within recognized risks flagged by state leadership.
Procedural Developments and Pushback
It has been a difficult, uphill battle for Mr. Bates. He has been forced into the legal arena, a realm unfamiliar to him, and has fought to uncover the truth in a complex and overwhelming legal world.
“After approximately two years of fighting this legal battle,” says Bates, “my house looks like a used paper factory.” He has filed many motions as a pro se litigant. His time, sweat, and effort have paid off. “I was successful in achieving a trial date; however, many complex issues remain, such as interlocutory rulings of additional causes of action that frankly have overwhelmed me... I've decided to put out lifelines to firms to take this to the finish line. I've completed the heavy lifting with hundreds of hours of research and straining my brain as a novice lawyer. I'm proud, but exhausted. I’m determined to have my day in court or at a settlement table!” he says. Bates alleges, “Financial institutions always advocate that members assess risk before investing. But very few understand the impact of AI social engineering or ‘social engineering by clickbait’ and computational analysis of individual and corporate bias, agenda, profiteering, and/or conflict of interest.”
A Broader Warning About GenAI in Government
This case isn’t merely about one man’s retirement account—it reflects a broader national dilemma: how to govern AI that increasingly shapes essential public functions. Bates’s filings reference the AI Bill of Rights and policy briefings from the California Department of Technology, which stress the importance of ensuring that AI systems are transparent, equitable, and accountable.
Executive Order N-12-23 lays out an ambitious plan for AI oversight, including mandatory impact assessments, high-risk use case monitoring, and public sector AI procurement guidelines. The Bates case serves as a practical litmus test for whether these lofty promises translate into real-world protections for California’s most vulnerable populations, particularly retirees and public employees dependent on complex, automated systems.
This case also shines a spotlight on the need for the United States to pass comprehensive AI legislation that sets guidelines for AI governance and transparency in implementation. An effective bill would balance innovation with public safety, civil rights, and economic competitiveness. The U.S. has the benefit of being able to draw on existing proposals, international models (like the EU AI Act), White House initiatives (such as the AI Bill of Rights and Executive Order 14110), and sectoral U.S. regulations. Such a bill is long past due.
The Double-Edged Sword of AI—and the Irreplaceable Role of Human Counsel
Thomas Bates’s legal battle illustrates one of the paradoxes of our time: AI can be both the problem and the solution. On the one hand, Bates suspects that an AI-related process—whether autonomous or merely automated—may have reclassified his investment preferences without his knowledge or consent. On the other hand, when faced with a complex procedural landscape and no formal legal training, Bates turned to generative AI tools to help him stand his ground.
“I was working with 2nd Chair’s AI tool named David,” Bates explained. “I asked David an in-depth question about corporate conflict of interest related to my interlocutory cause of action. David rejected the question as too complex and advised me to consult a legal expert.” This moment of humility from the machine was revelatory. It confirmed what many in the legal profession already understand: AI can assist with legal research, issue spotting, and drafting—but it cannot replace the nuanced judgment, experience, and ethical responsibility of a trained lawyer.
AI Governance
AI is no longer merely a research assistant. In the legal context, it has become both a sword and a shield, particularly for pro se litigants. It is a sword—a mechanism that can be wielded to challenge inequities or expose flawed systems; and a shield—a tool that equips these litigants with the language, structure, and confidence to defend themselves. Bates’s use of AI tools to file motions, analyze procedural outcomes, and interpret policy documents is a case study in how these technologies can democratize access to legal understanding.
However, when AI systems operate as black boxes—unexplainable, unaccountable, and potentially biased—they also pose significant risks. Bates’s allegations about the misuse of the MIRP tool, whether AI-powered or not, raise critical questions about transparency and user agency. It is precisely this duality that makes AI a legal frontier: it can serve justice, or it can obscure it.
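What would accountability look like in practice? One modest, concrete form is an append-only decision log: every automated change to a member’s account records what changed, which inputs drove it, and whether the member confirmed it. The sketch below is illustrative only; the field names, and the suggestion that such a log was absent here, are my assumptions rather than facts from the case.

```python
import json
from datetime import datetime, timezone

def record_decision(account_id: str, field: str, old: str, new: str,
                    source: str, inputs: dict, user_confirmed: bool) -> None:
    """Append one audit record for an automated account change, so the
    question 'why did this change?' is always answerable after the fact."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "account_id": account_id,
        "field": field,
        "old_value": old,
        "new_value": new,
        "decision_source": source,     # e.g., which tool or model acted
        "inputs_considered": inputs,   # what the system actually relied on
        "user_confirmed": user_confirmed,  # unconfirmed changes get flagged
    }
    with open("decision_audit.log", "a") as log:
        log.write(json.dumps(entry) + "\n")

# A record like this would answer, in one line, the question Bates spent
# months digging for: what "new information" drove the reassignment?
record_decision("member-001", "risk_tolerance", "conservative", "aggressive",
                "interactive planning tool",
                {"hypothetical_retirement_year": 2021},
                user_confirmed=False)
```

A regulator, or a member like Bates, reading such a log would see at a glance that an unconfirmed, hypothetical input had changed a real allocation.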
Empowering the Pro Se Litigant
Bates’s journey is particularly resonant for pro se litigants—individuals who represent themselves in court without an attorney. For these litigants, GenAI tools offer a lifeline. With the ability to interpret court rules, generate filings, and clarify legal terminology, AI levels the playing field in ways previously unthinkable. Bates’s ability to secure a trial date and navigate complex procedural hurdles demonstrates how AI, when responsibly used, can enhance procedural fairness.
Still, this power comes with limits. As Bates himself discovered, AI could not provide strategic counsel or assess evidentiary burdens. It could not negotiate or advocate. Most importantly, it could not offer the human empathy and professional ethics that guide real lawyers in the practice of law.
The Need for a Lawyer: The Irreplaceable Human
For all its computational prowess, AI lacks the key attributes that define the legal profession: advocacy, discretion, emotional intelligence, and ethical accountability. The law is not a mere exercise in pattern recognition—it is a human endeavor, rooted in history, equity, and reason. While tools like “David” may soon become essential co-counsel in routine drafting or research, the complexity of litigation, the strategic art of negotiation, and the sacred duty to justice remain firmly in human hands.
Thomas Bates has taken his case further than a trained professional might have under similar constraints. But like the best litigants, he knows when to ask for help. “I’ve done the heavy lifting,” he said. “Now I need a California lawyer to take this the rest of the way.” His is a powerful story not only about the harms of automation, but about the promise of AI as a legal equalizer—and the enduring need for high-quality lawyers to carry the torch of justice across the finish line.
© 2025 Amy Swaner. All Rights Reserved. May use with attribution and link to article.