Privacy law in the United States grew up around two assumptions: that personal information lives in discrete records held by identifiable custodians, and that privacy harms map onto established legal categories with corresponding causes of action.

Both assumptions are now strained to the point of failure. That is the argument Professor Daniel J. Solove makes in Artificial Intelligence and Privacy, 77 Fla. L. Rev. 207 — and it is the article every attorney advising clients on data, technology, employment, or compliance should read this year.

This is the first of two posts on the article and the broader AI-privacy frontier. Part 1 frames the problem. Part 2 walks through where the practical risks land.

[Image: CCTV dome surveillance camera mounted overhead. Photo by Sanderflight via Wikimedia Commons (CC BY-SA 4.0).]

The framework that no longer fits

The legacy U.S. privacy framework — the one built into HIPAA, GLBA, FERPA, the FCRA, the FTC Act, and most state statutes — assumes a model that looked like this: an identifiable entity collects information from a person, stores it as a discrete record, and uses or discloses it in defined, foreseeable ways.

That model worked, more or less, for paper files and structured databases. It is increasingly mismatched to the reality of AI systems, which do at least three things the traditional framework was never designed for: they aggregate individually innocuous data points into intrusive inferences, they make decisions through statistical proxies rather than named characteristics, and they generate conclusions on demand without storing them as discrete records.

Solove's central observation is that the privacy harms produced by these systems do not map cleanly onto the legal categories the U.S. uses to regulate privacy. The harm is real. The cause of action is not.

Three categories where the gap is widest

Three areas where the mismatch is causing immediate problems for practitioners:

Surveillance, biometrics, and identification technology

Modern surveillance does not look like a wiretap. It looks like a network of cameras, microphones, and apps producing data that, fed through an AI system, becomes biometric identification — gait, voice, face, typing pattern, browsing pattern. Each individual data point may be innocuous. The aggregate is the kind of intrusion the Fourth Amendment was written to prevent.

The legal challenge is that no single statute squarely governs the aggregate. The data was collected lawfully at every step. The inference is the harm.
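The aggregation point can be made concrete with a toy sketch. The data and attribute names below are hypothetical, not drawn from the article; the point is only that each attribute alone matches many people, while the combination matches exactly one.

```python
# Illustrative sketch with hypothetical data: individually innocuous
# attributes, combined, uniquely identify a person.
from dataclasses import dataclass

@dataclass(frozen=True)
class Profile:
    zip_code: str
    birth_year: int
    gait_cluster: int  # e.g., the output of a gait-recognition model

# A toy "population" of observed profiles.
population = [
    Profile("30301", 1980, 4),
    Profile("30301", 1980, 7),
    Profile("30301", 1975, 4),
    Profile("30302", 1980, 4),
]

def matches(pop, **attrs):
    """Return every profile matching all of the given attribute values."""
    return [p for p in pop if all(getattr(p, k) == v for k, v in attrs.items())]

# Each attribute alone is ambiguous: several people share it.
print(len(matches(population, zip_code="30301")))   # 3
print(len(matches(population, gait_cluster=4)))     # 3

# The aggregate narrows to exactly one individual.
print(len(matches(population, zip_code="30301", birth_year=1980, gait_cluster=4)))  # 1
```

No single lookup in this sketch is intrusive on its own, which is the doctrinal problem: the identification happens only at the point of combination, and no statute regulates that step.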

Algorithmic decision-making and statistical discrimination

When an AI system is trained on historical data — lending decisions, hiring decisions, criminal-justice outcomes — it learns the patterns in that history, including the patterns of bias. The output looks neutral. The math underneath is not.

Existing anti-discrimination statutes prohibit decisions made because of protected characteristics. AI systems make decisions on proxies that correlate with protected characteristics without naming them. The injury is the same as the one the statutes were written to prevent. The doctrine struggles to reach it.
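The proxy mechanism can be shown with a minimal numeric sketch. The data, zip codes, and group labels below are hypothetical; the "model" is deliberately trivial. It never sees the protected attribute, yet it reproduces the group disparity because the proxy (zip code) carries that information.

```python
# Illustrative sketch with hypothetical data: a facially neutral model
# trained only on a proxy (zip code) reproduces a group disparity.
from collections import defaultdict

# Historical decisions: (zip_code, protected_group, hired)
history = [
    ("A", "group1", 1), ("A", "group1", 1), ("A", "group1", 1), ("A", "group2", 1),
    ("B", "group2", 0), ("B", "group2", 0), ("B", "group2", 0), ("B", "group1", 0),
]

# "Train" on zip code alone: approve if that zip's historical hire rate > 0.5.
# The protected_group column is never used.
outcomes = defaultdict(list)
for zip_code, _, hired in history:
    outcomes[zip_code].append(hired)
model = {z: sum(v) / len(v) > 0.5 for z, v in outcomes.items()}

# The model looks neutral, but because group membership correlates with
# zip code, approval rates diverge sharply by group.
approved = {"group1": 0, "group2": 0}
total = {"group1": 0, "group2": 0}
for zip_code, group, _ in history:
    total[group] += 1
    approved[group] += model[zip_code]  # True counts as 1

print(approved, total)  # group1 approved 3 of 4, group2 approved 1 of 4
```

The decision rule contains no protected characteristic anywhere, which is exactly why "because of" doctrine has trouble reaching it.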

Privacy without a record

The traditional privacy doctrine is built around records — files, transcripts, reports — held by identifiable custodians. AI systems generate inferences on demand, often without storing them as discrete records. Subpoenaing the file no longer works because there is no file. The model is the record, and the model cannot be produced in the way a paper file can.

What this means for advising clients

Three implications that should change how lawyers advise clients on privacy in 2026: do not wait for a comprehensive federal AI-privacy statute, because none is imminent; treat the statutory gap as a present compliance risk rather than a future one; and use contract, policy, and process to fill the gap now, so the client has a defensible record when litigation arrives.

Why this matters now

The combination of three trends — wider AI adoption, weaker federal enforcement, and a long lag in legislative response — means the gap Solove identifies is going to widen before it narrows. Firms that wait for Congress to write a comprehensive AI-privacy statute will be exposed for years. Firms that advise clients to treat the gap as a real risk and to use contract, policy, and process to fill it will be in a defensible position when the litigation arrives.

The next post in this series walks through the four practical risks that flow from this framework — biometric overreach, AI hallucinations in legal filings, employer surveillance, and consumer-protection erosion — and what counsel can do about each.

For now: read Solove. The article is 77 Fla. L. Rev. 207. It will reset your thinking.


Continued: Part 2 — Where the AI-Privacy Risks Land in Practice — coming up next on stevenfraser.com.

If your business or practice is exposed to AI-driven privacy issues that your existing compliance program does not cover, request a private introduction or call 877-862-7188.