Privacy law in the United States grew up around two assumptions:
- The data is collected by an identifiable actor, who can be regulated, sued, or held to a contract.
- The data is recognizable as personal information, so the law can decide what counts as a "record" and what does not.
Both assumptions are now strained to the point of failure. That is the argument Professor Daniel J. Solove makes in Artificial Intelligence and Privacy, 77 Fla. L. Rev. 207 — and it is the article every attorney advising clients on data, technology, employment, or compliance should read this year.
This is the first of two posts on the article and the broader AI-privacy frontier. Part 1 frames the problem. Part 2 walks through where the practical risks land.

A network of cameras, microphones, and apps producing data that, fed through an AI system, becomes biometric identification. Photo by Sanderflight via Wikimedia Commons (CC BY-SA 4.0).
The framework that no longer fits
The legacy U.S. privacy framework — the one built into HIPAA, GLBA, FERPA, the FCRA, the FTC Act, and most state statutes — assumes a model that looked like this:
- A specific organization holds a specific record about a specific person.
- The record contains identifiable information.
- The law constrains what that organization can do with the record.
That model worked, more or less, for paper files and structured databases. It is increasingly mismatched to the reality of AI systems, which do at least three things the traditional framework was never designed for:
- They train on data at a scale that obscures origin. Once an AI model has ingested a corpus of millions of documents, conversations, or images, the lineage of any single contribution is effectively lost.
- They infer personal information from data that is not personal on its face. A model can predict your medical condition from your typing rhythm, your political leaning from your podcast subscriptions, your immigration status from your supermarket purchases. The inputs were not protected. The outputs read like protected records.
- They operate outside identifiable institutional boundaries. A model deployed on a phone, a website, or a third-party API can generate the equivalent of a privacy violation without any single regulated entity being clearly responsible.
Solove's central observation is that the privacy harms produced by these systems do not map cleanly onto the legal categories the U.S. uses to regulate privacy. The harm is real. The cause of action is not.
Three categories where the gap is widest
Three areas where the mismatch is causing immediate problems for practitioners:
Surveillance, biometrics, and identification technology
Modern surveillance does not look like a wiretap. It looks like a network of cameras, microphones, and apps producing data that, fed through an AI system, becomes biometric identification — gait, voice, face, typing pattern, browsing pattern. Each individual data point may be innocuous. The aggregate is the kind of intrusion the Fourth Amendment was written to prevent.
The legal challenge is that no single statute squarely governs the aggregate. The data was collected lawfully at every step. The inference is the harm.
Algorithmic decision-making and statistical discrimination
When an AI system is trained on historical data — lending decisions, hiring decisions, criminal-justice outcomes — it learns the patterns in that history, including the patterns of bias. The output looks neutral. The math underneath is not.
Existing anti-discrimination statutes prohibit decisions made because of protected characteristics. AI systems make decisions on proxies that correlate with protected characteristics without naming them. The injury is the same one the statutes were written to prevent. The doctrine struggles to reach it.
Privacy without a record
The traditional privacy doctrine is built around records — files, transcripts, reports — held by identifiable custodians. AI systems generate inferences on demand, often without storing them as discrete records. Subpoenaing the file no longer works because there is no file. The model is the record, and the model cannot be produced in the way a paper file can.
What this means for advising clients
Three implications that should change how lawyers advise clients on privacy in 2026:
- Compliance is necessary but not sufficient. A client that is fully compliant with HIPAA, GLBA, GDPR, or CCPA is still exposed to AI-privacy harms those statutes do not reach. The compliance map and the risk map have diverged.
- Inference is the new exfiltration. The most damaging privacy event for a client may not be a breach. It may be a vendor's AI system inferring something the client never disclosed.
- Contract is doing more work than statute. Vendor agreements, AI-use policies, and data-processing addenda are increasingly the only place where the client's actual privacy interests can be protected. The statute may not be there.
Why this matters now
The combination of three trends — wider AI adoption, weaker federal enforcement, and a long lag in legislative response — means the gap Solove identifies is going to widen before it narrows. Firms that wait for Congress to write a comprehensive AI-privacy statute will be exposed for years. Firms that advise clients to treat the gap as a real risk and to use contract, policy, and process to fill it will be in a defensible position when the litigation arrives.
The next post in this series walks through the four practical risks that flow from this framework — biometric overreach, AI hallucinations in legal filings, employer surveillance, and consumer-protection erosion — and what counsel can do about each.
For now: read Solove. The article is 77 Fla. L. Rev. 207. It will reset your thinking.
Continued: Part 2 — Where the AI-Privacy Risks Land in Practice — coming up next on stevenfraser.com.
If your business or practice is exposed to AI-driven privacy issues that your existing compliance program does not cover, request a private introduction or call 877-862-7188.