January 15, 2026

AI in Legal Practice

Part 2: AI Note-Takers, Wiretap Laws, and the Next Wave of Privacy Class Actions

The same features that make AI note-takers powerful—centralized processing, speaker identification, and “product improvement” licenses—are now forming the backbone of a coordinated wave of privacy class actions.

Amy Swaner

Part 2: The Legal Trifecta Class Action Plaintiffs Are Using 

Part 1 of this series explained why meeting and call recordings are unusually valuable AI training material: they are contextual, domain-rich, and outcome-linked in a way scraped web text is not. That value also creates legal exposure. When vendors design note-takers and contact-center AI to capture conversations, retain them, and reserve broad rights to reuse them for “product improvement” or model training, plaintiffs are increasingly framing the vendor as an unlawful interceptor rather than the provider of a simple, neutral productivity tool.


This article focuses on the three legal frameworks now driving AI note-taker litigation—the federal Electronic Communications Privacy Act (ECPA), California’s Invasion of Privacy Act (CIPA), and state biometric statutes led by Illinois’ Biometric Information Privacy Act (BIPA)—and then applies them to two active class actions: In re Otter.ai Privacy Litigation and Ambriz v. Google. 

A consistent theme runs through these cases.  Liability often turns less on whether a recording occurred (everyone agrees it did) and more on (1) what the vendor’s architecture allows, (2) what its contract permits, and (3) whether the vendor has a recognizable economic incentive to reuse the data.  

It’s a double-edged sword: centralized processing, persistent speaker identification, and expansive license terms are precisely what make these systems valuable. But those same features are exactly why these systems are now the subject of class action lawsuits.

The Legal Framework 

No existing privacy statute squarely addresses AI, so plaintiffs who believe their privacy was invaded by an AI meeting tool have no purpose-built law to invoke. Instead, plaintiffs in AI note-taker cases are repurposing statutes written for telephone wiretaps and applying them to cloud-based conversational AI. They typically plead three overlapping theories, because each statute attacks a different part of the product stack: interception, consent/disclosure, and biometric identifiers.

Electronic Communications Privacy Act (18 U.S.C. § 2511) 

The ECPA, 18 U.S.C. § 2511(1)(a), prohibits intentional interception of wire, oral, or electronic communications. For decades, those who wanted to record a conversation without the other party’s consent have relied on Section 2511(2)(d). It contains the familiar one-party consent exception, but also an important caveat: interception is lawful if one party consents—unless it is done “for the purpose of committing any criminal or tortious act.”

Historically, one-party consent made recording simple; if a participant agreed, the recording was lawful. Plaintiffs are now taking advantage of the tortious-purpose carve-out: if they can show the interception was for a criminal or tortious purpose, the one-party-consent defense fails under § 2511(2)(d), making the recording unlawful even if one party consented. Three torts fit well within this carve-out:

  • Intrusion upon seclusion: inserting a vendor into private conversations for undisclosed analytics or training, undermining reasonable privacy expectations. 

  • Conversion / misappropriation: treating conversational data as valuable property and appropriating it to improve proprietary systems sold to others. 

  • Unjust enrichment: extracting economic value from private speech without meaningful consent or compensation. 

The pleading strategy is straightforward. Plaintiffs allege that the “real” purpose of the interception is not just note-taking, but building datasets that improve a vendor’s speech recognition, summarization, or LLM capabilities—thereby increasing product value and revenue. The more a vendor’s terms, marketing, and technical design suggest a pipeline from “captured conversation” to “improved model,” the easier it is to plead tortious purpose. And even though a single tort claim may gain little traction on its own, these claims take on a whole new significance when stacked together in a class action and paired with state laws such as CIPA and BIPA.

California Invasion of Privacy Act (Cal. Penal Code §§ 631–637.5) 

California’s Invasion of Privacy Act (CIPA) is the most consequential statute for AI-recording tools. Although it applies only to California conversations, it is powerful: it is generally an all-party consent regime, it includes a private right of action, and it authorizes statutory damages pleaded at up to $5,000 per violation—an enticement that makes class actions feasible even when actual damages are hard to quantify.

Several sections are routinely invoked in AI transcription and note-taker cases: 

  • § 631 (anti-wiretap): interception or “reading” of communications in transit—now applied to cloud AI that processes audio/text in real time during transmission. 

  • § 632 (confidential communications): recording/eavesdropping on “confidential communications” without consent of all parties. “Confidential” is often construed broadly in modern cases and can include virtual meetings, professional consultations, and customer service calls where participants reasonably expect limited dissemination. 

  • § 632.7 (cellular/cordless calls): protects many phone calls without requiring that the communication be “confidential,” frequently pleaded for smartphone calls routed through AI contact centers. 

  • § 637.5 (subscriber conversations at home): historically aimed at monitoring of subscriber conversations in residences; now pleaded in some contact-center contexts tied to subscriber accounts. 

  • § 637.2: private right of action and statutory damages. 

Third-Party Eavesdropper Doctrine: “Extension” vs. “Capability” 

CIPA litigation often turns on whether the technology provider is a party to the communication (permitted to receive/record it) or an unlawful third-party interceptor. Recent cases have sharpened two competing tests for making that determination:

  • Extension test: the vendor is a third party only if it actually uses the data for its own ends (training, analytics, product improvement). 

  • Capability test: the vendor is a third party if it could use the data for its own ends—based on architecture and contractual rights—regardless of proof of actual use. 

In Ambriz v. Google, the Northern District of California adopted the capability approach at the pleading stage for § 631(a). Plaintiffs plausibly alleged that Google’s platform could use intercepted call data, supported by terms reserving rights to “improve services.” The court did not require proof that Google trained models on the plaintiffs’ specific calls. 

This is why “product improvement” language matters. Under a capability theory, broad license language plus centralized cloud processing can convert an AI vendor from “service provider” to “statutory eavesdropper,” even if the vendor insists it never trains models on customer-specific content. In practice, that means not only that contract drafting can be as important as actual data practices, but also that plaintiffs can survive early motions by pointing to rights reserved in terms of service and the technical reality that the vendor sits “in the middle” of communications. 

Biometric Privacy Statutes, Led by Illinois’ BIPA (740 ILCS 14/1 et seq.) 

Another powerful state law is Illinois’ Biometric Information Privacy Act. BIPA regulates the collection, retention, and use of biometric identifiers, including voiceprints. Its core requirements are:

  • informed written consent before collecting biometric identifiers; 

  • a publicly available retention and deletion policy; and 

  • restrictions on profiting from biometric identifiers. 

BIPA’s private right of action and statutory damages ($1,000 for negligent violations; $5,000 for intentional or reckless violations) make class actions compelling.  

 In the note-taker setting, plaintiffs commonly allege that systems generate persistent voiceprints (mathematical representations used to identify a speaker across interactions). Vendors sometimes respond that they do not create “voiceprints,” that any signal is transient, or that the data is deidentified. But the counterargument is that BIPA regulates the identifier itself and that deidentification is not a complete answer when the vendor still has the capacity to recognize, re-identify, authenticate, or train on those identifiers. 

Texas (Tex. Bus. & Com. Code § 503.001) and Washington (RCW 19.375) have related biometric statutes, though with different procedural and damages features. Where products include persistent voice or facial recognition—especially “speaker recognition” or “voice identity” features marketed as productivity enhancements—biometric claims are likely to remain among plaintiffs’ strongest claims.

The Trifecta  

These three laws converge on one vulnerability: the value of these recorded conversations. Vendors have not only contractual permission but also strong financial incentives to retain and reuse conversational data.  Put differently, as conversational data becomes more valuable to AI development, plaintiffs can plausibly allege that certain “productivity tools” operate as data-extraction systems—and then use old statutes to police that business model. The trifecta is powerful because it attacks the same conduct from three angles and stacks damages theories. 

Case Study: In re Otter.ai Privacy Litigation 

Originally filed as Brewer v. Otter.ai, this consolidated litigation includes plaintiffs from California, Illinois, and Washington asserting wiretap, biometric, and common-law claims. The focus is Otter Notetaker, which plaintiffs describe as a system that joins meetings, captures multi-modal content, and generates identifiers that allow long-term reuse. 

How Otter Notetaker Allegedly Works 

The Complaint alleges that when a user connects a calendar, Otter automatically joins scheduled meetings. During a meeting, it allegedly: 

  • records full audio; 

  • captures periodic video-call screenshots; 

  • transcribes speech and links words to timestamps; 

  • collects participant identity metadata (e.g., names and emails from meeting platforms and calendars); 

  • ties that identity data to the transcript (“who said what and when”); 

  • generates and stores a voiceprint for each speaker to recognize them across future meetings; and 

  • streams the resulting audio, screenshots, transcripts, metadata, and voiceprints to Otter’s servers. 

Two features matter legally. First, automation: a bot can “attend” meetings by default through calendar integration, reducing the friction and visibility of recording. Second, richness: the dataset is not just an audio file. Rather, it is audio plus transcript plus metadata plus (alleged) biometrics, creating a structured record that is more reusable for downstream training and analytics.

What Plaintiffs Claim Was Captured 

Plaintiffs allege recording and vendor retention of highly sensitive meetings: attorney-client and legal strategy discussions, medical and therapy appointments, support groups, religious meetings, and confidential corporate calls involving trade secrets and personnel issues. The claimed injury is not merely that a conversation was recorded, but that participants’ reasonable expectations were undermined by silent, automated capture and repurposing of intimate communications by a commercial AI provider they did not choose. 

Plaintiffs allege that bots amplify traditional recording risk by automating attendance, capturing multiple data modalities, and purportedly creating persistent biometric profiles that can follow a person across organizations and contexts—including contexts where privilege or professional confidentiality is expected or understood. 

Case Study: Ambriz v. Google 

Ambriz arises from routine customer service calls allegedly processed through Google Cloud Contact Center AI. Plaintiffs claim they were not told Google was involved and did not consent to Google recording, transcribing, and analyzing their calls. 

How Contact-Center AI Allegedly Operates 

Plaintiffs allege the platform processes call audio in real time to provide features such as live transcription, intent/sentiment analysis, virtual agents, and smart-reply suggestions to human agents, while also storing call artifacts (audio, transcripts, and related metadata such as call duration, account identifiers, and routing data). From a functionality perspective, the vendor is not simply “hosting” a call; it is actively transforming and interpreting call content as the call unfolds. 

This case is in the early stages, but notably the Northern District of California denied a motion to dismiss key CIPA claims. The court treated active, real-time processing and analysis as interception in transit for § 631(a) purposes, and it emphasized that contractual rights and technical ability to use call data for Google’s own ends (including “improving services”) were sufficient at the pleading stage to treat Google as a potential third-party interceptor—without proof of actual model training on the plaintiffs’ specific calls. 

The court also rejected a categorical argument that smartphone calls fall outside CIPA because smartphones are general-purpose computers, distinguishing the phone function as telephonic communication for purposes of CIPA provisions implicated by cellular calls. And the court allowed additional claims to proceed under other CIPA provisions pleaded by plaintiffs, underscoring how creatively plaintiffs can map modern call flows onto older statutory language. 

What Makes Ambriz Structurally Different 

  • Involuntary participation: consumers did not choose the vendor and could not realistically opt out. 

  • Disclosure gaps: generic “this call may be recorded” warnings may not identify the third party or explain secondary uses—especially model training or analytics. 

  • B2B2C risk chain: the vendor, the enterprise customer, and the consumer sit in different legal positions, creating indemnity and allocation fights. 

  • Scale: contact centers process massive call volumes; statutory damages theories can quickly dominate settlement posture. 

A Quick Caveat To Ambriz 

Shortly after the court denied Google’s motion to dismiss in Ambriz, the Ninth Circuit issued a decision in Popa v. Microsoft Corp. that complicates the capability-test approach. In Popa, the plaintiff alleged that a website was tracking her actions and keystrokes. The court dismissed her claims for lack of standing, holding that she failed to identify any embarrassing information that was collected, which was necessary to establish an actual injury. While Popa involved website session-replay technology rather than voice recordings, its standing analysis suggests that capability-based theories may face appellate scrutiny. If Ambriz is appealed, the viability of the capability test may turn on whether voice recordings of customer service calls are categorically more sensitive than website browsing data, or whether plaintiffs must make particularized showings of actual harm.

What These Cases Mean For AI 

The Otter.ai and Ambriz complaints involve different settings—enterprise meetings versus consumer customer-service calls—but the pressure point is the same. If the vendor’s system and contract allow it to extract and retain rich conversational data (audio, transcripts, metadata, screenshots, and voiceprints) and to reuse that data for model training or “service improvement,” plaintiffs can plausibly plead: 

  • interception beyond a simple “one party consented” model (ECPA); 

  • third-party eavesdropping based on capability and reserved rights (CIPA); and 

  • biometric collection and retention without written consent or compliant policies (BIPA and related statutes). 

The terms of service and privacy policy now matter as much as the fact of recording. “We may use data to improve our services” carries repercussions far beyond the fact that your information may be used to train an AI model. Features like speaker recognition, voiceprints, and image capture can convert an otherwise ordinary transcription tool into the subject of a biometric-privacy case.

Next Article: Part 3 

Part 3 will offer concrete guidance for you and your clients, including a convenient “cheat sheet” you can share with clients to help them stay on top of these privacy issues.

© 2026 Amy Swaner. All Rights Reserved.  May use with attribution and link to article. 


