Legal Technology · Attorney-Client Privilege

What You Tell AI Before Calling a Lawyer Can Be Used Against You

By Steven C. Fraser, Esq. · May 6, 2026 · FL Bar No. 625825 · DC Bar No. 460026

"Claude is not an attorney." — United States v. Heppner, No. 25 Cr. 503 (JSR) (S.D.N.Y. Feb. 17, 2026)

A federal court in New York just handed down a ruling that every person with a developing legal problem needs to understand — especially before they open ChatGPT, Claude, or any other AI tool to think through their situation.

The court held that documents a client prepared using an AI platform, before ever speaking to an attorney, were not protected by attorney-client privilege. The government could read them. The client's own defense strategy, drafted in his own words, became evidence.

If you have ever typed a legal problem into an AI tool — or if you are thinking about doing it now — this decision applies to you.

The Case: United States v. Heppner

Case: United States v. Heppner · Citation: No. 25 Cr. 503 (JSR) · Court: S.D.N.Y. · Decided: February 17, 2026

Bradley Heppner, a corporate executive facing federal securities fraud charges related to GWG Holdings, did something many people do when they first realize they may be in legal trouble: he tried to get organized. Before hiring a lawyer, he used Claude — Anthropic's AI platform — to prepare approximately 31 documents outlining his defense strategy and legal analysis of his situation.

He later shared those materials with his attorneys. The FBI seized them. Heppner argued they were protected by attorney-client privilege. The court disagreed — completely.

Three Reasons the Court Said No

The Southern District identified three independent deficiencies, any one of which would have been enough to defeat the privilege claim:

  1. Claude is not an attorney. Attorney-client privilege protects communications between a client and their attorney. An AI platform — no matter how sophisticated its responses — is not an attorney. It holds no bar license, owes you no fiduciary duty, and has no legal obligation to keep anything you share with it confidential. The relationship simply does not exist.
  2. There is no confidentiality when you use AI. Anthropic's own privacy policy explicitly permits data collection for model training and disclosure to third parties. The moment Heppner typed his legal situation into Claude, he had no legally reasonable expectation of confidentiality. The same analysis applies to ChatGPT, Gemini, Copilot, and virtually every other AI tool available today. You are not whispering to a trusted advisor. You are entering data into a commercial platform.
  3. Sharing with your attorney afterward does not cure the problem. This is the part most people would not anticipate. The court was explicit: documents that are unprivileged in the client's hands do not become privileged simply because the client later transmits them to counsel. If they were not protected when you created them, they remain unprotected regardless of what you do with them afterward.

The court also denied protection under the work product doctrine — a separate shield that protects materials prepared in anticipation of litigation. That protection failed too, because the materials were not prepared at an attorney's direction and did not reflect counsel's strategy or mental impressions.

Who Should Pay Attention to This

The instinct to prepare before calling a lawyer is completely natural. People want to organize their thoughts, understand their options, and not waste time or money in a first consultation. AI tools feel like a private, efficient way to do that. This decision clarifies that they are not private — and the consequences can be severe.

The risk is highest for anyone who turns to AI before retaining counsel: people facing an investigation, a dispute, or a financial situation heading toward litigation, and who use these tools to organize a defense before making that first call.

Is There Any Exception?

The Heppner court acknowledged one narrow scenario where privilege might survive: when a retained attorney specifically directs the client to use an AI platform as the attorney's agent. This is an application of what courts call the Kovel doctrine, which can extend privilege to third parties working at counsel's direction.

But that requires the attorney to be retained and directing the work before the AI is used. It does not protect anything you prepared on your own, before you made that first call.

The Practical Guidance Is Simple

If you believe you may have a legal problem — an investigation, a dispute, a financial situation heading toward litigation — call a licensed attorney before you type anything into an AI tool.

The conversation you have with your attorney is protected. The prompt you type into ChatGPT is not.

Once you are represented, your attorney can guide you on what tools, if any, are appropriate to use and in what capacity. That decision belongs with counsel — not with the AI platform's terms of service.

Attorney-client privilege exists because honest, complete communication between clients and their lawyers is essential to the justice system. No AI platform — however capable — can stand in for that relationship. Anthropic, OpenAI, and Google are not bound by any duty of confidentiality to you. A licensed attorney is.

If something legal is developing in your life, in Florida or in Washington DC — call before you type.

Have a Legal Situation Developing?

A confidential conversation with a licensed attorney is protected. Anything you type into an AI platform is not. Steven C. Fraser offers free consultations for Florida and DC matters, and he handles every matter personally, with no associates.

Disclaimer: This article is provided for general informational purposes only and does not constitute legal advice. Reading this post does not create an attorney-client relationship. Laws vary by jurisdiction. If you have a specific legal matter, consult a licensed attorney. FL Bar No. 625825 · DC Bar No. 460026.