A federal court in New York just handed down a ruling that every person with a developing legal problem needs to understand — especially before they open ChatGPT, Claude, or any other AI tool to think through their situation.
The court held that documents a client prepared using an AI platform, before ever speaking to an attorney, were not protected by attorney-client privilege. The government could read them. The client's own defense strategy, drafted in his own words, became evidence.
If you have ever typed a legal problem into an AI tool — or if you are thinking about doing it now — this decision applies to you.
The Case: United States v. Heppner
Bradley Heppner, a corporate executive facing federal securities fraud charges related to GWG Holdings, did something many people do when they first realize they may be in legal trouble: he tried to get organized. Before hiring a lawyer, he used Claude — Anthropic's AI platform — to prepare approximately 31 documents outlining his defense strategy and legal analysis of his situation.
He later shared those materials with his attorneys. The FBI seized them. Heppner argued they were protected by attorney-client privilege. The court disagreed — completely.
Three Reasons the Court Said No
The Southern District of New York identified three independent deficiencies, any one of which would have been enough to defeat the privilege claim:
- Claude is not an attorney. Attorney-client privilege protects communications between a client and their attorney. An AI platform — no matter how sophisticated its responses — is not an attorney. It holds no bar license, owes you no fiduciary duty, and has no legal obligation to keep anything you share with it confidential. The relationship simply does not exist.
- There is no confidentiality when you use AI. Anthropic's own privacy policy explicitly permits data collection for model training and disclosure to third parties. The moment Heppner typed his legal situation into Claude, he had no reasonable expectation of confidentiality as a matter of law. The same analysis applies to ChatGPT, Gemini, Copilot, and virtually every other AI tool available today. You are not whispering to a trusted advisor. You are entering data into a commercial platform.
- Sharing with your attorney afterward does not cure the problem. This is the part most people would not anticipate. The court was explicit: documents that are unprivileged in the client's hands do not become privileged simply because the client later transmits them to counsel. If they were not protected when you created them, they remain unprotected regardless of what you do with them afterward.
The court also denied protection under the work product doctrine — a separate shield that protects materials prepared in anticipation of litigation. That protection failed too, because the materials were not prepared at an attorney's direction and did not reflect counsel's strategy or mental impressions.
Who Should Pay Attention to This
The instinct to prepare before calling a lawyer is completely natural. People want to organize their thoughts, understand their options, and not waste time or money in a first consultation. AI tools feel like a private, efficient way to do that. This decision clarifies that they are not private — and the consequences can be severe.
The risk is highest for:
- Executives and professionals facing regulatory investigations, employment disputes, wrongful termination claims, or securities matters. The Heppner fact pattern — an executive organizing his defense before hiring counsel — will repeat itself.
- Anyone considering bankruptcy. Many people use AI to research whether they qualify for Chapter 7, what property they can protect, or how their creditors might respond. Those conversations are not confidential. A conversation with a licensed bankruptcy attorney is.
- Homeowners in insurance disputes. If you have used AI to draft a timeline of your claim, identify potential legal theories, or summarize what your insurer did wrong — those materials may be discoverable.
- Anyone in a business or contractor dispute who has used AI to organize the facts, draft a summary of events, or assess their legal position before retaining counsel.
Is There Any Exception?
The Heppner court acknowledged one narrow scenario where stronger arguments for privilege might apply: when an attorney specifically directs a client to use an AI platform as the attorney's agent, an application of what courts call the Kovel doctrine.
But that requires the attorney to be retained and directing the work before the AI is used. It does not protect anything you prepared on your own, before you made that first call.
The Practical Guidance Is Simple
If you believe you may have a legal problem — an investigation, a dispute, a financial situation heading toward litigation — call a licensed attorney before you type anything into an AI tool.
The conversation you have with your attorney is protected. The prompt you type into ChatGPT is not.
Once you are represented, your attorney can guide you on what tools, if any, are appropriate to use and in what capacity. That decision belongs with counsel — not with the AI platform's terms of service.
Attorney-client privilege exists because honest, complete communication between clients and their lawyers is essential to the justice system. No AI platform — however capable — can stand in for that relationship. Anthropic, OpenAI, and Google are not bound by any duty of confidentiality to you. A licensed attorney is.
If something legal is developing in your life, in Florida or in Washington, DC — call before you type.