Somewhere in the past two years, millions of lawyers and their clients have done the same thing: opened ChatGPT, Claude, or Gemini, typed in something sensitive about a legal matter, and assumed — without ever quite articulating the assumption — that the conversation was private. That it stayed between them and the screen. That the same professional instincts that govern what they say on the phone and what they put in an email applied here too.
On 10 February 2026, Judge Jed S. Rakoff of the Southern District of New York ruled that assumption wrong. In a bench ruling followed by a written opinion issued on 17 February, he held — in what he himself described as a matter of nationwide first impression — that documents generated by a criminal defendant using the consumer version of Anthropic’s Claude were not protected by either attorney-client privilege or the work product doctrine. The defendant had typed information received from his lawyers into Claude, generated 31 documents laying out defence strategy, and later shared those documents with his legal team. The FBI seized them in a search of his home. His lawyers claimed privilege. Judge Rakoff said no.
The case is United States v. Heppner, No. 25-cr-00503-JSR (S.D.N.Y.). Every lawyer who has ever used a consumer AI tool in connection with a client matter — and every client who has ever done the same — needs to understand what it says, what it does not say, and what it means for practice starting today.
Who Is Bradley Heppner, and What Did He Actually Do?
The facts of Heppner are important because the ruling is fact-specific. This is not a case about a lawyer drafting a brief with AI assistance. It is a case about a client — a defendant in a serious federal criminal proceeding — who used a publicly available AI chatbot to process information he had received from his attorneys and prepare what amounted to a private legal strategy document.
Bradley Heppner is the former CEO and board chairman of GWG Holdings, Inc., a Dallas-based financial services company. On 28 October 2025, a federal grand jury in the Southern District of New York indicted him on charges of securities fraud, wire fraud, conspiracy, making false statements to auditors, and falsification of records — all arising from an alleged scheme to defraud investors. The government’s case centres on Heppner’s alleged misconduct as an executive of GWG Holdings.
What matters for the privilege question is what happened in the months before the indictment. Heppner had received grand jury subpoenas and had been informed he was a target of the government’s investigation. He had engaged defence counsel — the firm Quinn Emanuel — and was communicating with them about the case. At some point during this period, after his lawyers had briefed him on the state of the investigation and the legal issues at play, Heppner sat down at his computer and opened the consumer version of Claude. Without being directed to do so by his counsel, he began typing in what he knew — information he had received from his attorneys — and asking Claude to help him organise it. He used Claude to generate reports outlining his potential defence strategies and the likely legal arguments in his case.
He generated 31 such documents. He later shared them with his defence lawyers.
On 4 November 2025, he was arrested. When the FBI executed a search warrant at his residence and seized his electronic devices, those 31 documents were on them. Heppner’s counsel identified the documents on a privilege log as “artificial intelligence-generated analysis conveying facts to counsel for the purpose of obtaining legal advice.” The government moved for a ruling that the documents were not privileged. On 10 February 2026, Judge Rakoff agreed.
The Three Reasons Attorney-Client Privilege Failed
Judge Rakoff’s written opinion, issued 17 February 2026, applies what he describes as “longstanding legal principles” rather than any AI-specific doctrine. The fact that AI is involved does not change the analysis. What changes the analysis — what destroys privilege here — is the nature of how AI tools are built and how they handle user data.
The attorney-client privilege protects communications that meet three requirements: they must be between a client and an attorney, they must be made in confidence, and they must be made for the purpose of obtaining or providing legal advice. Judge Rakoff found that Heppner’s interactions with Claude failed at least two of these three requirements, and arguably all three.
First: Claude is not a lawyer. This is the most fundamental point and, in one sense, the most obvious. Privilege attaches to communications between a client and counsel. Claude is not counsel. It holds no licence to practise law, owes no duty of loyalty or confidentiality, is subject to no professional discipline, and cannot form an attorney-client relationship with anyone. As Judge Rakoff put it in his written opinion, all recognised privileges require “a trusting human relationship” with “a licensed professional who owes fiduciary duties and is subject to discipline.” No such relationship can exist between a user and an AI platform, regardless of what the user intends.
The fact that Heppner later shared the Claude-generated documents with his actual lawyers did not cure this deficiency. Documents that are not privileged when created do not become privileged simply by being transmitted to counsel. As Judge Rakoff wrote, they cannot “somehow alchemically change[] into privileged material by later being shared with counsel.” The privilege analysis looks at the communication at the moment it is made — not at what happens to it afterwards.
Second: There was no reasonable expectation of confidentiality. This is the finding with the broadest practical implications, and it goes directly to how the major AI platforms are architected. Heppner used the consumer version of Claude — the version available to anyone who creates a free account at claude.ai. Anthropic’s privacy policy, which governs the consumer product, states that the company collects users’ inputs and outputs, uses that data to train its model, and reserves the right to disclose user data to “governmental regulatory authorities” and other “third parties” — and can do so even absent a subpoena.
Judge Rakoff found that this policy “clearly puts Claude’s users on notice” that their communications are not confidential. A user who reads the terms of service — or who is held to constructive knowledge of them, as courts generally require — has no reasonable expectation of privacy in what they type. The court drew a sharp distinction between a client drafting private notes in a word processor (which it acknowledged are generally not privileged, but at least private) and a client submitting those notes to a third-party platform that expressly reserves the right to hand them to the government. Heppner, the court said, “first shared the equivalent of his notes with a third-party, Claude” — and that sharing broke confidentiality before the documents ever reached his lawyers.
This reasoning applies with equal force to ChatGPT, Gemini, Meta AI, and any other consumer AI product governed by similar terms. The confidentiality-destroying logic is not specific to Claude. It is specific to the category of publicly available AI platforms that retain user data and reserve disclosure rights.
Third: The purpose was not to obtain legal advice from an attorney. Claude expressly disclaims providing legal advice. Its own terms of service tell users that it cannot provide legal counsel and recommend consulting a qualified lawyer. Judge Rakoff noted that when directly asked, the AI tool itself responds that it cannot give legal advice. A user cannot credibly claim they used a tool “for the purpose of obtaining legal advice” when the tool itself expressly says it does not provide that service.
Why the Work Product Doctrine Also Failed
Work product protection shields materials prepared by or at the direction of counsel in anticipation of litigation. The animating purpose of the doctrine — as the Second Circuit has consistently held — is to protect the lawyer’s mental processes: their strategy, their analysis, their impressions and theories.
Heppner’s defence team argued that the 31 documents constituted work product because they were prepared in anticipation of litigation and contained information received from counsel. Judge Rakoff rejected this for two reasons.
First, the documents were not prepared by or at the direction of counsel. Defence counsel conceded at the hearing that Heppner created the AI documents “of his own volition” and that his lawyers “did not direct” him to run the Claude searches. The work product doctrine does not protect a client’s independent research, however litigation-related it might be. It protects the lawyer’s mental processes. A client acting alone — even a client acting on information received from counsel — is not acting at counsel’s direction in any meaningful legal sense.
Second, even if the documents were somehow analogised to a client’s own notes, the court’s analysis of the consumer AI platform’s privacy terms applied with the same force to work product as to privilege. The AI tool’s express disclaimer that user data is not confidential undermined any work product claim alongside the privilege claim.
Judge Rakoff explicitly declined to follow the reasoning in Shih v. Petal Card, Inc. (S.D.N.Y. 2021), a magistrate judge’s decision that had extended work product protection to materials prepared by a client without attorney direction. He held that such an expansion “undermines the policy animating the work product doctrine,” which exists to protect lawyers’ mental processes — not the independent output of a client’s AI conversation.
The Doors Judge Rakoff Left Open — And Why They Matter
Heppner is a narrow ruling on specific facts. Judge Rakoff was careful to say what it does not decide, and the gaps he identified are practically significant.
The enterprise AI question. The court’s analysis turned critically on the consumer version of Claude’s privacy policy and its disclosure provisions. Judge Rakoff explicitly left open whether the analysis would differ for enterprise AI tools — products like Claude for Enterprise, ChatGPT Team, or Microsoft Copilot for Legal — that contractually commit to not training on user data and to maintaining confidentiality of inputs. Multiple law firms’ client alerts, including those from Gibson Dunn, Debevoise, and Proskauer, have noted this opening. The consensus among the lawyers who have analysed the opinion is that use of a properly contracted enterprise AI tool — one with genuine data isolation, zero-retention provisions, and contractual confidentiality — stands in a materially different position from the consumer product Heppner used.
This distinction matters enormously in practice. A lawyer using Harvey, CoCounsel, or an enterprise-licensed deployment of Claude or ChatGPT that operates under a data processing agreement with robust confidentiality protections is not doing the same thing as a client who opens a free AI account and starts typing in what their lawyer told them. The court’s logic requires the confidentiality-destroying element — the consumer terms of service — to operate. Remove that element, and the analysis changes.
The Kovel doctrine opening. Perhaps the most practically significant passage in Judge Rakoff’s written opinion is his discussion of what would have happened if counsel had directed Heppner to use Claude. The judge noted that under the Kovel doctrine — established by the Second Circuit in United States v. Kovel, 296 F.2d 918 (2d Cir. 1961) — attorney-client privilege can extend to communications with non-lawyer third parties whose involvement is necessary for effective legal representation. The classic example is the accountant retained by a tax lawyer to help analyse a client’s financial records. The court stated that had counsel directed Heppner to use Claude, “Claude might arguably be said to have functioned in a manner akin to a highly trained professional who may act as a lawyer’s agent within the protection of the attorney-client privilege.”
That is a significant crack in the door. If attorney direction is the key variable — and the court’s analysis suggests it is — then lawyers who deliberately structure client AI use as part of a supervised legal workflow, and document that structure, may be able to maintain privilege. The question of whether this would survive challenge is unresolved. But the court has identified the framework within which the argument would be made.
The criminal versus civil context. Heppner arises in a criminal case, and some commentators have noted that the court’s analysis does not necessarily establish the same rule in civil litigation, where the work product doctrine’s scope has historically been applied somewhat more broadly. The court cited Adlman — a Second Circuit decision arising in the corporate tax advisory context — in a way that invites argument about whether civil litigants have more room to manoeuvre. This question remains genuinely open.
The Waiver Risk That Has Not Been Fully Reckoned With
One aspect of Heppner that deserves more attention than it has received in the commentary so far is the waiver implication. The 31 documents Heppner generated with Claude were built from information he had received from his attorneys — information that, in the attorney-client relationship where it originated, was privileged. By taking that privileged information and typing it into a consumer AI platform, Heppner may have waived privilege not only over the AI-generated outputs, but potentially over the original communications with counsel from which those outputs were derived.
Jones Walker’s analysis of the case identified this as “perhaps the most troubling aspect of the ruling.” The government argued — and Judge Rakoff agreed — that sharing privileged communications with a third-party AI platform may constitute a waiver of the privilege over the underlying attorney-client communications themselves.
This is a different and more alarming problem than simply losing protection over the AI documents. If a client discusses a privileged conversation with their lawyer in a ChatGPT session, they may have waived privilege over what the lawyer told them — not just over what ChatGPT generated in response. The full downstream consequences of this reasoning have not yet been tested in litigation, but they point in a direction that every litigator should take seriously.
What This Means in Practice: A Framework for Every Lawyer
Judge Rakoff’s ruling is, in the court’s own words, “a traditional application of privilege in the AI era.” It does not create new law. It applies old law — about confidentiality, about the attorney-client relationship, about what the work product doctrine is for — to a new and poorly understood technology. The surprise is not the legal reasoning. The surprise is how many lawyers and clients have been operating as though these established principles did not apply.
Here is what this means for practice, broken down by role.
If you are a practising lawyer advising clients:
The first and most immediate obligation is to tell your clients — explicitly, in plain language — that anything they type into a consumer AI platform is not confidential, is very likely not privileged, and may be discoverable by adverse parties and accessible by government authorities. This is not a niche consideration. Any client involved in litigation, facing a regulatory investigation, or dealing with a legal dispute who is also a regular AI user is exposed to this risk right now. Most of them do not know it.
Consider updating your engagement letters to address AI use specifically. The standard engagement letter — drafted before generative AI existed as a mass-market product — says nothing about this. A provision advising clients not to discuss privileged matters using consumer AI tools, and to seek advice before using any AI tool in connection with the representation, is now appropriate practice hygiene for most engagements.
If you are in-house counsel:
Your exposure is broader than an external adviser’s. The lawyers in your legal department may be using consumer AI tools for research, drafting, and analysis. Your business clients may be using AI tools to think through legal problems and then bringing the outputs to you. Your executives, facing investigations or litigation, may be doing exactly what Heppner did: opening a chatbot, typing in what their lawyers told them, and generating strategy documents they consider private.
Heppner means none of those communications are safe on a consumer platform. Enterprise AI policies need to be implemented and enforced, not left as merely aspirational documents. The distinction between approved enterprise tools (with contractual confidentiality protections) and consumer tools (without them) is no longer a matter of IT governance preference. It is a privilege and confidentiality issue with direct legal consequences.
If you are managing AI adoption at a law firm:
The ruling reinforces something that good AI governance policy has always required: the tool matters as much as the use. A law firm that has deployed enterprise-grade AI under a proper data processing agreement, with contractual data isolation and no training on client inputs, is in a fundamentally different position from a firm whose lawyers are using free accounts. If your firm’s AI governance policy does not clearly distinguish between enterprise and consumer tools, and does not expressly address privilege implications, it needs to be updated.
If you are advising clients who are targets of investigations:
Heppner directly addresses your situation. The ruling sends a clear message: a client who receives a grand jury subpoena, target letter, or any indication they are under investigation should be explicitly warned, immediately, not to use consumer AI tools to analyse their legal situation, prepare defence materials, or process anything their lawyers have told them. This warning should be given at the outset of the engagement and repeated.
The Practical Checklist: AI and Privilege Protection
The following steps reflect the current state of the law as established by Heppner and the analytical framework Judge Rakoff applied.
For any AI tool used in connection with legal matters:
- Identify whether the tool is a consumer product or an enterprise product with contractual confidentiality protections and a prohibition on training on user inputs. If you cannot immediately answer this question, you are using a consumer product.
- Review the platform’s privacy policy. If it reserves the right to use your inputs for model training or to disclose them to third parties or government authorities, assume no confidentiality exists.
- Never use a consumer AI tool to process privileged communications, legal strategy, or information received from counsel.
For lawyers structuring client AI use:
- If you want a client to use an AI tool as part of their legal preparation, direct them explicitly to do so and document that direction. Heppner explicitly identifies attorney direction as the key variable that could sustain privilege under a Kovel-type analysis.
- Use enterprise tools with contractual confidentiality provisions wherever possible. Document the contractual basis for confidentiality in your matter file.
- Update privilege logs in active matters to clearly document the basis for any privilege claim involving AI outputs, including whether the tool was used at counsel’s direction and under what confidentiality framework.
For in-house counsel and legal operations:
- Implement a clear AI tool policy that distinguishes approved enterprise tools from prohibited consumer tools for any use involving privileged information.
- Train lawyers and non-lawyer staff on the privilege implications of consumer AI use. Do not assume the distinction between “enterprise” and “consumer” is intuitively obvious — for most employees, it is not.
- Consider adding explicit AI-use provisions to litigation hold notices: when a hold is issued, employees should be directed to preserve their AI usage history and to immediately cease using consumer tools to discuss the subject matter of the litigation.
The Bottom Line for Lawyers
United States v. Heppner does not prohibit AI use in legal practice. It does not even come close. What it does is apply a rule that has always existed — confidentiality is a prerequisite for privilege — to a category of technology that most users have never seriously considered from a confidentiality standpoint.
The ruling is narrow. It turns on the consumer version of an AI tool, on a client acting without counsel’s direction, and on a privacy policy that expressly contemplates disclosure to the government. Change any of those facts — use an enterprise tool, act at counsel’s direction, use a platform with genuine contractual confidentiality — and the analysis may well be different.
But the overwhelming majority of AI use by lawyers and clients today does not involve any of those protective features. It involves free accounts, consumer products, and a vague assumption that what happens on the screen stays on the screen. Heppner is a direct and authoritative rejection of that assumption from one of the most prominent federal judges in the United States — a judge whose opinions carry persuasive weight in courts across the country, even though this ruling is technically limited to one district.
Every lawyer who has not yet thought carefully about which AI tools they use, how those tools handle data, and what they are telling clients about AI and confidentiality should think about it now. The framework for doing so is clearly set out in Judge Rakoff’s 17 February opinion. The risk of not doing so is equally clear — and Heppner is only the first case.
Frequently Asked Questions
Is it safe to use ChatGPT for legal research? For general research that does not involve any privileged information — background law, publicly available facts, general legal principles — the privilege question does not arise, because there is nothing privileged to protect. The problem identified in Heppner arises when a lawyer or client uses a consumer AI tool to process information that would otherwise be privileged: communications with counsel, legal strategy, confidential client information. If the research involves none of that, the privilege analysis is not engaged. But lawyers should be aware that the line between “general research” and “privileged matter” can blur quickly, and the safer practice is to use enterprise tools with proper confidentiality protections for any work connected to a client matter.
Does this ruling apply outside the United States? Heppner is a decision of the Southern District of New York and is not binding outside that jurisdiction. However, the privilege analysis it applies reflects broadly shared common-law principles about confidentiality and attorney-client relationships. Lawyers in the UK, Australia, Canada, and other common law jurisdictions should expect courts to engage with similar reasoning when these questions arise in their own courts — and similar questions are already arising internationally.
What is the difference between a consumer AI tool and an enterprise AI tool for privilege purposes? A consumer AI tool — the free or standard subscription version of Claude, ChatGPT, Gemini, and similar products — is governed by a privacy policy that typically permits the provider to use inputs for model training and to disclose data to third parties, including government authorities. An enterprise AI tool is deployed under a data processing agreement that contractually prohibits training on client inputs and commits to maintaining confidentiality. Under Judge Rakoff’s analysis, the consumer tool destroys the reasonable expectation of confidentiality that privilege requires. An enterprise tool, properly contracted, may preserve it — though this has not yet been definitively tested in litigation.
What should lawyers tell clients right now? At minimum: do not type anything related to your legal matter, including anything your lawyers have told you, into a free AI tool. If you want to use AI to help organise your thoughts or prepare for meetings with your lawyers, discuss this with your lawyers first and use only tools they have specifically approved. Anything you type into a consumer AI platform may be discoverable by the other side and accessible to the government.
Could the original privileged communications with counsel also be waived? Potentially. The government in Heppner argued that feeding privileged attorney communications into a consumer AI platform constitutes a waiver of privilege over those original communications — not just the AI-generated outputs. Judge Rakoff’s ruling was consistent with this argument. The full implications of this waiver theory have not yet been litigated, but the risk is real and should be flagged to clients explicitly.
Further Reading
- [AI Hallucinations in Court: Every Lawyer Needs to Read This Before Their Next Filing] — The widening crisis of unverified AI output in filed documents, and how to protect yourself.
- [What Is Agentic AI — And Why Every Lawyer Needs to Understand It Before 2027] — As AI begins taking autonomous actions in legal workflows, the confidentiality questions multiply.
- [5 ChatGPT Settings to Change Immediately If You’re a Lawyer] — Practical settings that improve both the utility and the data security of ChatGPT for legal use.
- [ChatGPT vs Claude: Pros and Cons for Legal Professionals] — A comparison of the two dominant general-purpose AI tools for legal work, including their data and privacy policies.
- [Harvey vs Clio vs CoCounsel vs Westlaw AI: The Honest Lawyer’s Guide to Legal-Specific AI Tools] — A guide to enterprise-grade legal AI tools — the category of product that Judge Rakoff’s analysis suggests may be able to preserve privilege.
Case reference: United States v. Heppner, No. 25-cr-00503-JSR (S.D.N.Y.). Bench ruling: 10 February 2026. Written opinion: 17 February 2026, Dkt. No. 27.
Subscribe to LegalAIWorld Weekly — new analysis, court decisions, and practical guidance every week for practising lawyers navigating the AI era.