Part 1 of this series argued that the legacy U.S. privacy framework, drawn around identifiable custodians and recognizable records, no longer maps to the reality of AI-driven inference. Part 2 walks through where the practical risk lands — the four fault lines that Florida and DC counsel are going to be litigating, advising on, and defending against this year.

Biometric capture has moved from secure-facility access control into the consumer device pocket. Photo by Flanoz via Wikimedia Commons (CC0 / Public Domain).
Fault line 1: Biometric overreach
Biometric data — face, voice, gait, typing rhythm, even keystroke pressure — is being collected by devices and platforms most consumers do not associate with biometric collection. A smartphone keyboard captures typing rhythm. A smart speaker captures voiceprint. A delivery app captures gait through accelerometer data. Each data stream is innocuous on its own. Fed through an AI system, each becomes a unique identifier.
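To make the mechanism concrete, here is a deliberately minimal sketch in Python. The timestamps, feature set, and matching threshold are all hypothetical, and production systems run trained models over far richer features, but the legal point survives the simplification: the raw stream never looks like an identifier, while the derived vector links sessions to one person.

```python
# Minimal, illustrative sketch: how raw key-press timestamps (in seconds)
# reduce to a small feature vector that links two sessions to one typist.
# All numbers and the matching threshold are hypothetical.
from statistics import mean, stdev

def typing_rhythm_signature(press_times: list[float]) -> list[float]:
    """Reduce a stream of key-press timestamps to a feature vector."""
    gaps = [b - a for a, b in zip(press_times, press_times[1:])]
    return [mean(gaps), stdev(gaps), min(gaps), max(gaps)]

def same_typist(sig_a: list[float], sig_b: list[float], tol: float = 0.05) -> bool:
    """Naive matcher: link two sessions if every feature is within tol."""
    return all(abs(a - b) <= tol for a, b in zip(sig_a, sig_b))

# Two sessions captured by an ordinary keyboard app, days apart:
session_1 = [0.00, 0.21, 0.39, 0.62, 0.80]
session_2 = [0.00, 0.20, 0.41, 0.60, 0.81]
print(same_typist(typing_rhythm_signature(session_1),
                  typing_rhythm_signature(session_2)))  # True: sessions linked
```

Neither session contains a name, an account number, or anything a legacy statute would recognize as a record; the identifier exists only after the inference step.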
The legal exposure for businesses sits in three places:
- State biometric statutes. Illinois's BIPA remains the most aggressive, chiefly because of its private right of action; Texas and Washington have standalone biometric statutes of their own, and New York and other states reach biometric data through breach-notification and consumer-privacy laws. Florida and DC do not yet have comprehensive biometric statutes, but FCRA and FTC theories can reach some of the same conduct.
- Consumer-protection statutes. Misrepresentation about whether biometric data is collected — or what is done with it — is actionable under federal and state UDAP statutes including Florida's FDUTPA.
- Contractual exposure. Vendor contracts that fail to specify biometric handling create indemnification gaps that surface only on breach.
Practical advice: businesses should audit what biometric data they actually collect — including data they do not realize is biometric — and either eliminate it, fully disclose it, or contractually flow the risk to the vendor that does the inference.
Fault line 2: AI hallucinations in legal filings
This one is no longer hypothetical. There is a growing line of cases, most famously the Mata v. Avianca sanctions order in the Southern District of New York, in which attorneys filed briefs containing AI-generated case citations that do not exist. Real-looking case names, real-looking citation formats, complete with parenthetical holdings, fabricated from whole cloth by a language model.
The professional-responsibility consequences are severe. Sanctions. Disbarment proceedings. Public discipline orders that follow the attorney for the rest of their career. And the cost to the client, both in immediate fees and in the long-term credibility hit, is extraordinary.
The discipline that prevents this is unglamorous and absolute:
- Verify every citation manually. Open the case on Westlaw, Lexis, or the court's own database. Read it. Confirm the holding.
- Treat AI as a drafting assistant, never a research source. A model that does not have access to a verified legal database will make up citations. That is not a quirk. It is how the technology works.
- Build the verification step into the firm's pre-filing checklist, the same checklist that catches "statute" misspelled as "statue" and confirms the local rules are followed. A citation-extraction aid is sketched after this list.
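A script can at least guarantee that no citation escapes the manual-verification list. A minimal sketch, assuming plain-text drafts and covering only a handful of reporters; the pattern is deliberately partial and the example citations are hypothetical. It finds candidates, it does not validate them: a human still opens each case.

```python
import re

# Illustrative sketch: pull citation-like strings out of a draft brief so
# each one lands on the manual-verification list. Finds candidates only;
# verification still means opening the case on Westlaw, Lexis, or the
# court's own database.
CITATION_PATTERN = re.compile(
    r"\b\d{1,4}\s+"                                # volume number
    r"(?:U\.S\.|S\. Ct\.|F\. Supp\.(?: 2d| 3d)?|"  # reporter abbreviation
    r"F\.(?:2d|3d|4th)?|So\.(?: 2d| 3d)?)"
    r"\s+\d{1,5}\b"                                # first page
)

def citations_to_verify(draft_text: str) -> list[str]:
    """Return every reporter-style citation candidate, in document order."""
    return CITATION_PATTERN.findall(draft_text)

# Hypothetical citations for illustration:
draft = ("Compare Smith v. Jones, 123 F.3d 456 (11th Cir. 1997), "
         "with Doe v. Roe, 45 So. 3d 789 (Fla. 2010).")
print(citations_to_verify(draft))  # ['123 F.3d 456', '45 So. 3d 789']
```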
The frontier here is not technological. It is professional. The lawyers who will be sanctioned in 2026 are the lawyers who treated an AI's confident output as research rather than draft.
Fault line 3: Employer surveillance and employment law
Employers now have access to monitoring tools that would have seemed like science fiction five years ago. Keystroke logging. Screen capture. Webcam analysis of attentiveness. Sentiment analysis of email tone. Productivity scores generated by AI from millions of data points.
Many of these tools are deployed without employees knowing what is being measured, who sees it, or how it influences hiring, firing, and promotion decisions. The legal risks for employers cluster in a few places:
- Notice and consent. Most state wiretap and privacy statutes require disclosure, and Florida's all-party-consent rule raises the bar further. AI-driven monitoring without disclosure invites litigation.
- Disparate impact. AI scoring systems that produce different outcomes for different protected groups create Title VII, ADA, and ADEA exposure regardless of whether the developer intended bias; a common first screen is sketched after this list.
- Wage-and-hour exposure. Surveillance that proves employees worked off the clock — or that proves they did not — can be used both ways. Employers who do not preserve the data are vulnerable to spoliation arguments.
- Trade secret leakage. Surveillance tools that send data to third-party vendors create trade-secret and attorney-client-privilege exposure most general counsel never see coming.
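The disparate-impact screen referenced above is straightforward to operationalize. A minimal sketch of the EEOC's four-fifths rule, with hypothetical numbers: a group's selection rate below 80 percent of the highest group's rate is treated as evidence of adverse impact. It is a screening heuristic, not a legal conclusion.

```python
# Illustrative sketch of the EEOC "four-fifths rule," a common first screen
# for disparate impact in AI scoring. All numbers are hypothetical.
def selection_rate(selected: int, applicants: int) -> float:
    return selected / applicants

def adverse_impact_ratios(rates: dict[str, float]) -> dict[str, float]:
    """Each group's selection rate divided by the highest group's rate."""
    top = max(rates.values())
    return {group: rate / top for group, rate in rates.items()}

# Hypothetical outcomes from an AI resume-screening tool:
rates = {
    "group_a": selection_rate(60, 100),  # 0.60
    "group_b": selection_rate(42, 100),  # 0.42
}
for group, ratio in adverse_impact_ratios(rates).items():
    flag = "FLAG" if ratio < 0.80 else "ok"
    print(f"{group}: ratio {ratio:.2f} ({flag})")
# group_a: ratio 1.00 (ok)
# group_b: ratio 0.70 (FLAG)
```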
Sophisticated employers should be running an annual audit of every workplace AI tool against these four risks, and should have a written employee-monitoring policy that survives a plaintiff's deposition.
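What that audit can look like in practice, as a minimal per-tool record keyed to the four risks above; the tool and vendor names are hypothetical, and a real program would add owners, dates, and remediation tracking.

```python
# Illustrative per-tool audit record covering the four risk areas above.
# All names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class MonitoringToolAudit:
    tool: str
    notice_given: bool             # disclosed to employees in writing?
    consent_documented: bool       # consent records retained?
    disparate_impact_tested: bool  # outputs screened (e.g., four-fifths)?
    data_retention_policy: bool    # surveillance data preserved per policy?
    third_party_vendors: list[str] = field(default_factory=list)

    def open_risks(self) -> list[str]:
        risks = []
        if not (self.notice_given and self.consent_documented):
            risks.append("notice/consent")
        if not self.disparate_impact_tested:
            risks.append("disparate impact")
        if not self.data_retention_policy:
            risks.append("wage-and-hour / spoliation")
        if self.third_party_vendors:
            risks.append("trade secret / privilege (vendor data flow)")
        return risks

audit = MonitoringToolAudit(
    tool="KeystrokeInsight",  # hypothetical product
    notice_given=True, consent_documented=False,
    disparate_impact_tested=False, data_retention_policy=True,
    third_party_vendors=["cloud-analytics-vendor"],
)
print(audit.open_risks())
# ['notice/consent', 'disparate impact',
#  'trade secret / privilege (vendor data flow)']
```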
Fault line 4: Consumer-protection erosion
The federal consumer-protection apparatus has weakened materially in the last 24 months. The CFPB's April 2025 internal memo reducing supervisory examinations by half, combined with continuing personnel attrition at Federal Student Aid, the FTC, and other agencies, means consumers are facing more sophisticated AI-driven fraud with less federal cover than they have had in a generation.
For consumer-protection counsel (FDCPA, FCCPA, FCRA, TCPA), the practical implication is that state-court and private-right-of-action work is going to bear more of the load. Florida's FDUTPA, the FCCPA, and the federal statutes with private rights of action remain available, and AI-driven scams increasingly supply exactly the conduct those statutes reach.
Three areas where consumer attorneys should expect to see more work:
- AI-cloned voice scams targeting elderly clients and impersonating family members, the IRS, the Department of Education, or law enforcement.
- Algorithmic credit decisions that produce statistically disparate outcomes by race, gender, age, or disability, implicating ECOA and, where inaccurate data drives the decision, the FCRA's reasonable-procedures requirement.
- AI-driven debt collection that pushes automated and synthetic communications (text, email, voicemail) out without the FDCPA's required disclosures; a minimal pre-send check is sketched after this list.
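The pre-send check in the last item can be as simple as refusing to transmit any automated message that lacks the required disclosure. A minimal sketch with hypothetical message text; section 1692e(11) requires more in an initial communication, and a substring test is a floor, not a compliance program.

```python
# Illustrative pre-send gate: block any automated collection message that
# lacks the FDCPA disclosure. Checks only the subsequent-communication
# disclosure under 15 U.S.C. 1692e(11); initial communications need more.
# Message text is hypothetical.
REQUIRED_DISCLOSURE = "this communication is from a debt collector"

def may_send(message: str) -> bool:
    return REQUIRED_DISCLOSURE in message.lower()

outbound = [
    "Reminder: your balance of $412 is past due. Reply PAY to settle.",
    "Your balance of $412 is past due. This communication is from a debt "
    "collector. Reply PAY to settle.",
]
for msg in outbound:
    print(may_send(msg))  # False for the first message, True for the second
```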
Where to focus
If a client is exposed in more than one of these fault lines — and most are — the order of priority is straightforward: fix the worst exposure first, document the rest, and update the contractual and policy infrastructure to absorb the next fault line that opens. The lawyers who treat AI privacy as a static problem will lose to lawyers who treat it as the moving target it is.
For further reading, the Stanford Center for Research on Foundation Models publishes regular reports on the structure and capabilities of leading AI systems. Solove's Artificial Intelligence and Privacy — 77 Fla. L. Rev. 207 — remains the most useful single legal article on the framework.
If you have AI-driven privacy exposure in your business, your employment practices, or a pending matter, request a private introduction or call 877-862-7188.