Part 1 of this series argued that the legacy U.S. privacy framework, drawn around identifiable custodians and recognizable records, no longer maps to the reality of AI-driven inference. Part 2 walks through where the practical risk lands — the four fault lines that Florida and DC counsel are going to be litigating, advising on, and defending against this year.


Biometric capture has moved from secure-facility access control into everyday consumer devices. Photo by Flanoz via Wikimedia Commons (CC0 / Public Domain).

Fault line 1: Biometric overreach

Biometric data — face, voice, gait, typing rhythm, even keystroke pressure — is being collected by devices and platforms most consumers do not associate with biometric collection. A smartphone keyboard captures typing rhythm. A smart speaker captures voiceprint. A delivery app captures gait through accelerometer data. Each data stream is innocuous on its own. Fed through an AI system, each becomes a unique identifier.

The legal exposure for businesses sits in three places:

Practical advice: businesses should audit what biometric data they actually collect — including data they do not realize is biometric — and either eliminate it, fully disclose it, or contractually flow the risk to the vendor that does the inference.

Fault line 2: AI hallucinations in legal filings

This one is no longer hypothetical. There is a growing line of cases — including the well-known Mata v. Avianca sanction in the Southern District of New York and the more recent matter often referenced as the Wikipedia / Bank of Cancun citation issue — in which attorneys have filed briefs containing AI-generated case citations that do not exist. Real-looking case names, real-looking citation formats, complete with parenthetical holdings — fabricated from whole cloth by a language model.

The professional-responsibility consequences are severe. Sanctions. Disbarment proceedings. Public discipline orders that follow the attorney for the rest of that attorney's career. And the cost to the client — both in immediate fees and in the long-term credibility hit — is extraordinary.

The discipline that prevents this is unglamorous and absolute:

The frontier here is not technological. It is professional. The lawyers who will be sanctioned in 2026 are the lawyers who treated an AI's confident output as research rather than draft.

Fault line 3: Employer surveillance and employment law

Employers now have access to monitoring tools that would have seemed like science fiction five years ago. Keystroke logging. Screen capture. Webcam analysis of attentiveness. Sentiment analysis of email tone. Productivity scores generated by AI from millions of data points.

Many of these tools are deployed without employees knowing what is being measured, who sees it, or how it influences hiring, firing, and promotion decisions. The legal risks for employers cluster in a few places:

Sophisticated employers should be running an annual audit of every workplace AI tool against these risks, and should have a written employee-monitoring policy that survives a plaintiff's deposition.

Fault line 4: Consumer-protection erosion

The federal consumer-protection apparatus has weakened materially in the last 24 months. The CFPB's April 2025 internal memo reducing supervisory examinations by half, combined with continuing personnel attrition at Federal Student Aid, the FTC, and other agencies, means consumers are facing more sophisticated AI-driven fraud with less federal cover than they have had in a generation.

For consumer-protection counsel — FDCPA, FCCPA, FCRA, TCPA — the practical implication is that state-court and private-right-of-action work is going to bear more of the load. Florida's FDUTPA, the FCCPA, and the federal statutes that include private rights of action remain available, and AI-driven scams increasingly supply the actionable conduct.

Three areas where consumer attorneys should expect to see more work:

Where to focus

If a client is exposed in more than one of these fault lines — and most are — the order of priority is straightforward: fix the worst exposure first, document the rest, and update the contractual and policy infrastructure to absorb the next fault line that opens. The lawyers who treat AI privacy as a static problem will lose to lawyers who treat it as the moving target it is.

For further reading, the Stanford Center for Research on Foundation Models publishes regular reports on the structure and capabilities of leading AI systems. Solove's Artificial Intelligence and Privacy, 77 Fla. L. Rev. 207, remains the most useful single legal article on the framework.


If you have AI-driven privacy exposure in your business, your employment practices, or a pending matter, request a private introduction or call 877-862-7188.