When Your AI “Assistant” Isn’t Privileged: A Wake‑Up Call from U.S. v. Heppner

  1. Overview
  • A federal judge ruled that the attorney-client privilege and the work-product doctrine did not protect a criminal defendant’s artificial intelligence (AI)-generated documents from disclosure to the government.
  • A client’s use of a public-facing generative AI to produce documents pertinent to a case may not be protected by the attorney-client privilege or the work product doctrine, even when the documents are later provided to counsel.
  • Traditional elements of privilege must be satisfied in the AI context: Communications shared with public-facing generative AI tools may not be deemed confidential, and materials created outside counsel’s supervision may not qualify for work product protection.

On Feb. 10, 2026, Judge Jed S. Rakoff of the U.S. District Court for the Southern District of New York ruled from the bench[1] — and confirmed days later in a written opinion — that documents prepared by a client using a public-facing generative AI tool were not protected by the attorney-client privilege or the work-product doctrine. This decision is among the first to highlight the risks that arise when clients use public-facing generative AI tools in legal proceedings, particularly when such tools are used outside the supervision of counsel and lack the confidentiality protections traditionally afforded to attorney-client communications and other privileged materials.

  2. Background

In United States v. Heppner, No. 25-cr-00503-JSR (S.D.N.Y. Oct. 28, 2025), the U.S. Department of Justice (DOJ) brought charges against Bradley Heppner, the founder of financial services company Beneficient Company Group, L.P., for securities fraud, wire fraud, and related offenses. The charges stemmed from an alleged scheme to defraud investors through, among other things, misrepresentations concerning an entity presented as an arm’s-length lender that was, in reality, created and controlled by Heppner.

On Nov. 4, 2025, federal agents arrested Heppner, executed a search warrant for his home, and seized numerous electronic devices and hard copy records. According to defense counsel, 31 documents on those devices consisted of AI-generated material created for the “purpose of obtaining legal advice.” Heppner’s counsel maintained that the documents at issue memorialize communications between Heppner and Claude, a generative AI platform operated by Anthropic, after Heppner had received a grand jury subpoena and it was clear that he was the target of the FBI investigation.[2] The defense asserted privilege over the materials on the basis that they were created to facilitate discussions with counsel and because they were later shared with counsel. Defense counsel conceded, however, that the documents were prepared by Heppner with the aid of Claude, and not at the direction of his attorneys.

The Government moved for a ruling that the AI-generated documents were not privileged, arguing that the AI tool was “plainly not an attorney,” that the materials were not created for the purpose of obtaining legal advice, and that privilege could not apply retroactively by transmitting the documents to counsel after their creation. The Government also argued that work-product protection did not apply because the documents were not prepared at counsel’s direction. Finally, the Government emphasized that, unlike an attorney, Claude, a public-facing AI platform, owed no contractual duties of loyalty or confidentiality to Heppner, likening the materials to independent research – such as Internet searches or library materials – later shared with counsel. Judge Rakoff agreed.

  3. The Court’s Ruling and Implications

Noting that the attorney-client privilege is construed narrowly, Judge Rakoff held that the AI-generated documents lacked at least two, if not all three, elements of the privilege. First, the AI documents were not communications between Heppner and his counsel. Second, Heppner’s communications with Claude were not confidential because Claude’s written privacy policy put users like Heppner on notice that they had no “substantial privacy interest” in their conversations with the platform.[3] Third, Heppner did not communicate with Claude for the purpose of obtaining legal advice. The court acknowledged that this third element presented a “closer call” because Heppner’s counsel maintained that Heppner communicated with Claude for the “express purpose of talking with counsel,” but noted that Heppner did not do so at the suggestion or direction of counsel.

The court likewise rejected the application of the work-product doctrine, which may provide qualified protection for materials prepared at the behest of counsel in anticipation of litigation or for trial. Judge Rakoff took a relatively narrow view of the doctrine, focusing on who directed the creation of the documents and whose mental impressions they reflected.[4] Because the AI‑generated materials were not prepared at counsel’s request, and did not reveal counsel’s strategy, they were not protected. The defendant, in short, was not acting as his lawyer’s agent when he used the AI tool.

Judge Rakoff recognized that generative AI “presents a new frontier in the ongoing dialogue between technology and the law,” but was unmoved by arguments that AI’s novelty should soften “longstanding legal principles” governing the attorney-client privilege and the work product doctrine.  

That conclusion has ripple effects far beyond criminal defendants. Some lawyers, paralegals, and clients routinely query generative AI tools like ChatGPT, Perplexity, Gemini, or Claude—often reflexively, and often with sensitive facts. Both individual and corporate clients should therefore be mindful that, while these tools may be useful, they are not a substitute for legal counsel. Moreover, using such tools outside of the attorney-client context, or without adequate network and contractual safeguards, may leave both the user’s inputs and the tool’s outputs unprotected.

  • Understand your use posture with respect to the AI tool. Judge Rakoff’s opinion applied to a “user communicat[ing] with a publicly available AI platform” operating under terms that permitted data retention, training, and disclosure to third parties. The court did not address the use of generative AI within a closed or enterprise environment that contractually restricts third-party retention, training, and review of inputs and outputs. As the opinion suggests, a non-attorney’s direct access to a public-facing AI platform over the open internet, on the platform’s standard terms, presents a far greater confidentiality and privilege risk than an attorney’s use of a closed AI system designed to preserve confidentiality.

  • Scrutinize confidentiality protection before using an AI tool, including whether the platform is a closed enterprise system with strong privacy protection. Using AI platforms that disclaim privacy protection or permit data retention for training purposes, for example, may defeat privilege claims. Even paid AI subscriptions may fail to provide adequate privilege safeguards where the user directly accesses an AI platform over the internet without sufficient privacy guarantees.

  • Involve counsel early when using AI to analyze legal exposure. Clients may provide valuable support to counsel in pursuing or defending litigation. However, client use of public-facing generative AI tools should avoid the disclosure of confidential information, and preferably be undertaken under the direction of counsel.

  • Adopt clear internal policies governing AI use in connection with pre-suit investigations and litigation. The processes by which a company undertakes the use of AI may prove significant, particularly where counsel actively directs and supervises such use in real time and in connection with legal proceedings.

A particularly significant aspect of the Heppner ruling is the unresolved question of its implications for “closed” AI systems, an issue of growing practical importance as clients increasingly turn to generative AI tools to address legal problems. It remains uncertain whether materials a client generates by independently using a secure, confidentiality-protected AI tool to research legal issues, whether in preparation for consulting counsel or to organize preliminary legal analysis, would be afforded protection under the attorney-client privilege or the work-product doctrine.

While recent case law suggests an emerging judicial willingness to extend work product protection to certain AI‑assisted materials, the doctrinal boundaries of such protection remain unsettled and warrant caution.[5] We encourage clients to consult with their counsel before embarking on research using generative AI tools in connection with a legal matter.

Cozen O’Connor lawyers are available to help clients assess the risks highlighted by the Heppner decision and to answer questions about the appropriate use of generative AI in litigation and investigations.


[1] Oral Order, United States v. Bradley Heppner, Case No. 1:25-cr-00503-JSR (S.D.N.Y. Feb. 10, 2026).

[2] The specific content of the documents generated by Claude is not disclosed in the opinion, but defense counsel at oral argument stated that Heppner “prepared reports that outlined defense strategy, that outlined what he might argue with respect to the facts and the law that we anticipated that the government might be charging.”

[3] Citing Anthropic, Privacy Policy (as of February 19, 2025) (noting that, even in the absence of a subpoena compelling it to do so, Claude “may disclose personal data to third parties in connection with claims, disputes[,] or litigation”).

[4]  In Warner v. Gilbarco, No. 24-12333, Magistrate Judge Anthony P. Patti of the Eastern District of Michigan took a fundamentally different view of how the use of generative AI impacts privilege and work product in holding that documents generated by a pro se litigant using ChatGPT remained protected work product and that disclosure to an AI tool did not constitute a waiver. The court reasoned that ChatGPT and similar generative AI systems are “tools, not persons,” notwithstanding the existence of third‑party administrators, and therefore do not function as adversaries or conduits to adversaries. By contrast, the Heppner court treated the use of an AI platform subject to broad privacy disclaimers as tantamount to disclosure to a third party, concluding that such use defeated claims of privilege or work‑product protection. Taken to its logical extreme, the Heppner ruling might be construed to preclude privilege or work product protection for virtually any use of a generative AI tool with similar privacy disclaimer terms, and would inevitably operate as a disincentive for both lawyers and clients to employ such tools in litigation, which is an outcome the Warner court expressly avoided.

[5] For example, in Concord Music Group, Inc. v. Anthropic PBC, No. 24-cv-03811-EKL (N.D. Cal. Dec. 18, 2025), defendant Anthropic sought production of certain AI prompts and outputs from plaintiffs. The plaintiffs disclosed all AI prompts and outputs on which they relied in developing their lawsuit but refused to disclose additional prompts and outputs. United States Magistrate Judge Susan Van Keulen found that the withheld prompts and outputs constituted work product and declined to order their production. The court expressly agreed with Tremblay v. OpenAI, Inc., No. 23-cv-03223-AMO, 2024 WL 3748003 (N.D. Cal. Aug. 8, 2024), which found that “ChatGPT prompts were queries crafted by counsel and contain counsel’s mental impressions and opinions about how to interrogate ChatGPT and were thus opinion, not fact, work product.”

Last modified: February 26, 2026
