April 6, 2026

AI in Legal Practice

Do You Have Your Radio

“As AI tools become affordable, effective, and widely adopted, a lawyer’s failure to use them may soon be viewed not as caution—but as a breach of the duty of competence.”

Amy Swaner

Why AI Non-Use Is Becoming the New Legal Malpractice

On the night of March 8, 1928, Captain Walton of the tugboat Montrose steered his tow of three coal-laden barges out of Hampton Roads, Virginia, bound for New England. Somewhere below deck sat a radio receiver—homemade, unreliable, and by his own later admission, largely useless. Other tug captains working the same coastline that night—the masters of the Mars, the Waltham, the Menominee—had functioning sets. They picked up weather bureau broadcasts warning of a gale building off the Jersey coast. They turned their tows toward the Delaware Breakwater and rode out the storm in shelter.

Captain Walton did not receive the warning. Neither did the master of the T.J. Hooper, a sister tug running the same route with no working radio at all. By noon on March 9, the barometer had dropped, the wind was blowing an easterly gale, and the seas were punishing. By nightfall, two barges were lost. Not surprisingly, the cargo owners sued.

The tug owners had a reasonable defense. They couldn’t have known about the storm without a radio. And almost no tugboat company required radios. The industry hadn’t adopted them as standard equipment. Custom, customary usage, industry standards—they were all on the tug owners’ side.

Judge Learned Hand of the Second Circuit was unpersuaded. In The T.J. Hooper, 60 F.2d 737 (2d Cir. 1932), he delivered what remains one of the most consequential sentences in American tort law: “[A] whole calling may have unduly lagged in the adoption of new and available devices. It never may set its own tests, however persuasive be its usages. Courts must in the end say what is required.” Id. The cost of a radio was trivial. The risk of not having one was catastrophic. Industry custom was no shield.

Nearly a century later, the legal profession now stands in Captain Walton’s wheelhouse—navigating increasingly complex waters with a transformative tool within easy reach, while a sizable portion of the profession has yet to turn it on, and fewer still use it profitably, effectively, and safely.

What the Evidence Shows

The empirical case for AI in legal practice has moved beyond anecdote. In 2025, Professor Daniel Schwarcz at the University of Minnesota Law School published the results of a rigorous randomized controlled trial measuring AI’s impact on realistic legal tasks. Upper-level law students completed six practice-like assignments—contract analysis, memo drafting, regulatory research—either with AI tools or without them. The results were striking. AI-equipped participants saw productivity gains of 34–140%, with statistically significant quality improvements of 10–28% across four of six tasks. Time-on-task dropped 12–37%. The researchers called this “a significant departure from earlier studies, which generally reported limited quality gains.”

This is not an isolated finding. Thomson Reuters reported that AI saved lawyers an average of four hours per week in 2024, projecting twelve hours weekly within five years. The Vals Legal AI Report found AI outperforming human lawyers in four of seven legal performance areas tested. Harvard Law School’s Center on the Legal Profession documented a complaint-response system that reduced associate time from sixteen hours to three to four minutes. Lawyers surveyed by Everlaw reported saving up to 32.5 working days per year with generative AI. I have had lawyers complain to me that adopting AI creates more work, because they have to read the original material plus the AI output. That is a fundamental misunderstanding of the technology and how to use it. The answer is to find the use cases that are ideal for your needs.

Adoption is increasing. The 2024 ABA Legal Technology Survey found that 30% of lawyers now use AI, up from 11% in 2023. Litify’s 2025 survey puts the figure at 78% of legal professionals, with two-thirds using AI daily or monthly. The holdouts are shrinking: only 4% of Litify’s respondents—the group Litify unsympathetically calls the “Laggards”—said they would never implement AI, down from 9% in 2023.

The Doctrinal Malpractice Framework Already Exists

The T.J. Hooper is not the only precedent requiring professionals to up their technology game. In Helling v. Carey, 519 P.2d 981 (Wash. 1974), the Washington Supreme Court held that an ophthalmologist was negligent as a matter of law for failing to administer a simple, inexpensive glaucoma test to a young patient—even though the prevailing medical standard did not require it for patients under forty. The test was cheap, harmless, and could have prevented devastating vision loss. The court cited Justice Holmes: “[W]hat usually is done may be evidence of what ought to be done, but what ought to be done is fixed by a standard of reasonable prudence, whether it usually is complied with or not.” Texas & Pac. Ry. Co. v. Behymer, 189 U.S. 468, 470 (1903).

Justice Holmes’ logic maps directly onto AI adoption today. AI tools are relatively inexpensive: many legal-specific platforms cost less than a single billable hour per month, and general-purpose tools like Gemini, ChatGPT, and Claude cost only around $10–20 per month. They are demonstrably effective, improving both speed and quality. Under the logic of T.J. Hooper and Helling, a court need not wait for universal or even majority adoption to find that reasonable care and competence require their use.

The ethical infrastructure reinforces this. In 2012, the ABA amended Comment 8 to Model Rule 1.1 to require that lawyers “keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology.” Forty-two states have now adopted this language. In July 2024, ABA Formal Opinion 512—the first formal ethics guidance on generative AI—went further, specifying that lawyers must have “a reasonable understanding of the capabilities and limitations” of AI tools, that fees must remain “reasonable” in light of AI’s efficiency, and that uncritical reliance on AI output without verification violates the duty of competence.

Read together, the doctrine and the ethics rules describe a closing circle. The standard of care is what a reasonably competent attorney would do. A reasonably competent attorney keeps abreast of relevant technology. A reasonably competent attorney uses the tools that make their practice more effective. It follows, then, that a reasonably competent attorney should adopt technology that is available, affordable, and beneficial to their practice. Competence, in other words, increasingly requires AI use.

The Fee Problem Nobody Wants to Talk About

There is another dimension to this argument, and it cuts closer to the bone: the profession’s not-so-secret disincentive to efficiency. If a document review task takes sixteen hours manually but four minutes with AI, billing the client for sixteen hours of manual labor, or even ten, raises serious questions under Model Rule 1.5 (Fees). ABA Opinion 512 explicitly ties fee reasonableness to AI capabilities. Yet the data suggest the profession is slow to reckon with this. According to Axiom Law, 79% of firms use AI to boost efficiency, but only 6% pass those savings to clients. In a revenue-first move, 34% actually charge premium rates for AI-enhanced work. And only 7% of clients reported noticing any reduction in total matter costs.

It stands to reason: attorneys who measure themselves, their bonuses, and their very value by their billed time do not want to lose revenue, or the pride of monumental billing totals. Attorneys don’t just endure the billable hour; they rank themselves by it. However surprising this “misery as professional status” may be to outsiders, it is not likely to change soon, even with the exit that AI adoption offers, and so clients are not seeing AI’s efficiency benefits in their monthly statements.

But this disconnect is not sustainable. In-house departments are taking notice. A LexisNexis study found that in-house teams deploying AI at scale could reduce work sent to outside firms by 13%, and 26% of in-house counsel expect to cut law firm spending in 2026. It is no surprise that in-house counsel are among the heaviest adopters of AI. Corporate clients are increasingly pushing firms to adopt generative AI specifically for cost savings—and when the client is the one demanding efficiency, a lawyer’s refusal to adopt available tools starts to look less like professional judgment and more like foot-dragging to keep fees high.

There is also an access-to-justice dimension that the profession cannot afford to ignore. The American Bar Association has long acknowledged that the majority of low- and moderate-income Americans lack meaningful access to legal representation. If AI can reduce the cost of competent legal work by an order of magnitude—and the evidence increasingly suggests it can—then we as a profession should be using it to extend representation to disadvantaged communities. This is a justice problem.

AI is also a leveler. The same tools that make BigLaw associates faster can make solo practitioners in underserved communities more viable, and can give solo practitioners and small firms a more level, more competitive playing field.

Yes, But What About Hallucinations?

The strongest objection to treating AI use as a standard-of-care requirement is the hallucination problem, and it is a real one. Around 1,000 documented cases of AI-generated legal hallucinations are now on record, implicating more than 128 lawyers—including attorneys at top-tier firms. In Mata v. Avianca (S.D.N.Y. 2023), attorneys submitted a brief containing six entirely fabricated case citations generated by ChatGPT and were sanctioned $5,000. Since then, fines for AI-hallucinated filings have grown larger, and the pace of sanctions cases has accelerated. Even judges are not immune.

But this objection, properly understood, is actually an argument for competent AI use, not for non-use. The lawyers in Mata were not sanctioned for using AI. They were sanctioned for failing to verify AI output—a violation of the most basic duty of candor to the tribunal. The standard of care must require AI use with appropriate verification, in the same way that lawyers have always been required to verify research from any source. A lawyer who cites a case without reading it is negligent whether the citation came from ChatGPT, a new associate, a colleague’s memo, or a footnote in a treatise.

Other objections fare no better under scrutiny. The “too early” argument—that adoption is too uneven for AI to constitute a professional standard—is precisely the defense that Judge Learned Hand (yes, I love his name) rejected in T.J. Hooper. The digital-divide concern—that small firms lack resources—is diminished by the reality that many effective AI tools cost $10–100 per month, far less than what the profession historically paid to adopt computerized legal research. The de-skilling argument echoes objections once made about calculators, Westlaw, and spell-check. The profession adapted. It will again. It will have to in order to stay viable.

What Comes Next

We have not yet seen a successful legal malpractice claim premised solely on a lawyer’s failure to use AI. But the architecture for one is in place. The elements are straightforward: a lawyer fails to use an available AI tool; the work product is slower, more expensive, or lower quality than what AI-assisted counsel would have produced; the client suffers actual harm as a result. The “case-within-a-case” requirement—the plaintiff must prove that AI use would have had a substantial impact on the outcome—remains a meaningful hurdle. But in cases involving missed research, overlooked precedent, or excessive fees, that hurdle is increasingly surmountable.

Watch for malpractice insurers to move first. If carriers begin factoring AI adoption into underwriting—offering discounts for firms with AI policies or surcharges for those without—the standard of care is shifting, even before any court rules. Watch also for court-imposed AI disclosure requirements, which are already emerging in several jurisdictions. I strongly maintain that these disclosure requirements are unnecessary and that Rule 11 is sufficient, but they are a signal to pay attention to. And watch for the cases themselves. The most likely early claims will not involve exotic AI failures; they will involve lawyers who spent forty hours on research that AI could have completed in two, billed accordingly, and missed a dispositive case that the AI would have found. It may not be economically feasible to bring a malpractice case over 38 overbilled hours, but imagine that scenario at scale, where the savings would run to the hundreds of thousands of dollars. Even 100 hours billed at $500/hour, for a cost difference of $50,000, is not merely a rounding error.

Competent AI Use

What does competent AI use look like in practice? First and foremost, it means reading all of your output. All. Verifying every citation and quote. It means understanding what the tools can and cannot do, and how to use them best. It means exercising independent professional judgment at every stage. It means having a firm-level AI policy that addresses confidentiality, data security, client communication, and supervision. In short, it means treating AI the way a competent lawyer treats any powerful tool: with informed skill, appropriate caution, and unwavering professional responsibility. At least ideally.

The deeper point is not that every lawyer must become an AI expert. ABA Opinion 512 is explicit on this. It is that willful ignorance of a tool that demonstrably improves both the quality and affordability of legal services is becoming harder to reconcile with the duties of competence, diligence, and loyalty that define the profession. No lawyer is expected to become a machine-learning engineer to stay competent in their legal practice. They need only know enough to choose the right tool for the task, prompt it effectively, use context engineering to their advantage, and evaluate the output critically.

The Bottom Line

The radio was available and inexpensive. The captains who turned it on and took advantage of its benefits survived the storm, cargoes intact. The ones who didn’t lost their cargo—and lost the case. I imagine Captain Walton felt angry and foolish in retrospect. How easy it would have been to buy a radio instead of trying to make do with his homemade version. In the moment, he probably felt it took too much time to procure one, or figured he’d get around to it when he wasn’t so busy. (Sound familiar?)

After retaining an attorney, explaining his failure to the court, and then losing his case, how easy it must have seemed by comparison to get a radio. Keep it handy. Turn it on. What stopped him from doing so? He likely didn’t think a radio was the standard of care at the time; in fact, he treated his set somewhat as a toy. This was probably a failure to see the bigger picture. He did not fully appreciate what a radio could do for him, and that, combined with the very real human inertia we all deal with daily, had a huge negative impact on his professional life.

We’re facing a situation very similar to the one Captain Walton faced in March 1928. The technology is available, affordable, and effective. Don’t wait until the storm is upon you, threatening to sink your tugboat, before getting your radio and learning how to use it effectively and safely.

© 2026 Amy Swaner. All Rights Reserved. May be used with attribution and a link to this article.
