Adjuria
Legal AI · Risk & Compliance

The Hidden Privilege Trap: Why Casual AI Use Could Expose Your Clients

April 18, 2026


A federal judge just ruled that conversations with an AI tool are not protected by attorney-client privilege. The implications for any firm whose attorneys or clients are using consumer AI tools - even informally - are severe. Here is what happened, and what you need to do about it.

This federal court decision should be required reading for every managing partner in the country. In United States v. Heppner, Judge Jed Rakoff of the U.S. District Court for the Southern District of New York ruled that a defendant's conversations with an AI tool were not protected by attorney-client privilege - and were therefore fair game for the government. The case is a stark illustration of a risk that most firms haven't fully reckoned with: the casual, well-intentioned use of consumer AI tools can silently shred the protections that clients depend on.

You can read D&G Law's detailed breakdown of the decision at dglaw.com.

What Happened

After receiving a grand jury subpoena, the defendant turned to Claude - Anthropic's AI assistant - to help analyze his legal exposure and think through litigation strategy. These are exactly the kinds of conversations that would be fully protected if they had taken place with an attorney. But they didn't take place with an attorney. They took place with an AI platform governed by a commercial user agreement.

When the government executed a search warrant, it obtained those chat logs. The defendant argued they were privileged. Judge Rakoff disagreed on two separate grounds - and both are worth understanding in detail.

Why Privilege Failed: The Two-Part Problem

First: No reasonable expectation of confidentiality. Attorney-client privilege requires that communications be confidential. Claude's user agreement explicitly states that Anthropic may collect user prompts for training and other purposes, and reserves the right to disclose user data to third parties. The court found that the defendant had no reasonable expectation of confidentiality in those communications. When you share information with a third party who can share it further, the confidentiality element of privilege is gone.

Second: No attorney direction. Work product protection also failed because the defendant's attorneys did not direct him to use Claude. Work product doctrine protects materials prepared in anticipation of litigation - but only when prepared at the direction of or by counsel. An individual unilaterally deciding to use an AI tool, even for legal purposes, does not meet that standard.

The court's takeaway was explicit: "Any use of AI tools for legal strategy or advice must be structured and directed by counsel" to receive protection.

The Broader Problem: It's Not Just One Defendant

The Heppner ruling matters far beyond its facts. Consider how many people inside a law firm interact with AI tools every day - partners drafting strategy memos, associates doing research, paralegals summarizing documents, even clients doing their own preliminary analysis before a meeting. If any of those interactions involve consumer AI platforms with broad data collection policies, privilege is potentially compromised.

The problem is not that AI is inherently dangerous. The problem is which AI, governed by whose terms, used under what supervision.

Consumer tools like ChatGPT, Claude.ai, Gemini, and others are built for mass-market use. Their data retention policies, training data practices, and third-party disclosure rights are written to serve their business models - not to preserve attorney-client privilege. Every prompt sent to these platforms is a potential disclosure to a third party. And once confidentiality is broken, privilege is gone.

The "We'll Just Use ChatGPT" Risk

There is a real and growing pattern in law firms: attorneys or support staff using consumer AI tools informally, without firm governance, because they are fast, free, and genuinely useful. This is completely understandable - these tools are impressive. But it is also a liability that most firms have not priced into their risk calculus.

The Heppner decision is the clearest signal yet that courts will not extend privilege protections to AI tools that do not meet the confidentiality requirements the privilege demands. Firms that do not address this are one search warrant - or one discovery request - away from a very uncomfortable conversation with a client.

How Adjuria's Approach Eliminates This Risk

Every solution Adjuria builds and deploys is architected around a foundational principle: your data is yours, full stop. We use zero-data-retention models, meaning that the AI systems we deploy for your firm do not store, log, or learn from your queries. Nothing you or your clients discuss with an Adjuria-powered system is used for model training, product improvement, or any other purpose. We will never sell or share your data - not now, not ever.

This is not just a policy preference. It is the only architecture that is consistent with the confidentiality requirements attorney-client privilege demands. When an AI system does not retain data and does not disclose it to third parties, the third-party disclosure problem that sank the privilege claim in Heppner does not arise.

Beyond data architecture, our engagements are structured to satisfy the attorney-direction requirement as well. Adjuria works with your firm's leadership and counsel to ensure that AI tools are deployed as part of a supervised, governance-backed workflow - not as shadow IT. That structure is what separates defensible AI use from the kind of informal, ungoverned use that courts are starting to scrutinize.

What Every Firm Should Do Now

The Heppner ruling is a signal, not an endpoint. Expect more decisions like it. Here is the baseline every firm should have in place:

  • Audit your AI usage. Know what tools are being used, by whom, and under what terms.
  • Review data retention policies for every AI tool in your environment. If the vendor can use your prompts for training or disclose them to third parties, that tool is not privilege-safe for sensitive legal work.
  • Establish AI governance policies that define which tools are approved, for which tasks, and under what supervision.
  • Ensure attorney direction for any AI use touching privileged matters - document that the use was supervised and directed by counsel.
  • Train your team. The attorneys and staff who understand these risks are your first line of defense.

The Stakes Are Too High to Wing It

Attorney-client privilege is one of the foundational protections of the legal system. It is what allows clients to be honest with their counsel, and it is what makes effective legal representation possible. A firm that inadvertently destroys privilege through careless AI use has not just made a technology mistake - it has potentially harmed the client it was trying to serve.

The good news is that this is a solvable problem. The technology exists to deploy AI in ways that are fast, powerful, and fully consistent with privilege requirements. Adjuria exists to help law firms do exactly that - to move beyond informal experimentation into AI adoption that is strategic, governed, and safe.

If you would like to understand where your firm stands on AI privilege risk, we would be glad to start that conversation.