- Judge rules AI platforms are not lawyers for purposes of attorney-client privilege.
US lawyers are now warning their clients that anything typed into an AI chatbot, whether ChatGPT, Claude, or a similar tool, is not protected by attorney-client privilege. That means prosecutors or opposing parties in a lawsuit could access those conversations. The concern is not theoretical.
A recent federal court ruling in New York has made it very real, and law firms across the country have started sending urgent advisories to clients about the legal risks of turning to AI while facing litigation.
What Happened In The New York Court Case?
The alarm was triggered by a February ruling from Manhattan-based US District Judge Jed Rakoff, who ordered the former chair of bankrupt financial services company GWG Holdings to hand over 31 documents he had created using Anthropic’s Claude while preparing his criminal defence.
Bradley Heppner, the former GWG Holdings chair, was charged last November with securities and wire fraud and pleaded not guilty. During his defence preparation, he used Claude to draft reports about his case and then shared those documents with his attorneys. His legal team argued the materials were protected under attorney-client privilege, but prosecutors pushed back.
Judge Rakoff sided with prosecutors, ruling that no attorney-client relationship exists “or could exist, between an AI user and a platform such as Claude,” and that Claude itself “expressly provided that users have no expectation of privacy in their inputs.”
Why Your AI Chats Are Not Protected Like Lawyer Conversations
Attorney-client privilege is a foundational legal protection in the United States. It shields communications between a person and their lawyer from prosecutors and opposing parties. The core issue is straightforward: AI chatbots are not lawyers.
Under a long-established legal principle, voluntarily sharing information with any third party, human or otherwise, can strip away that protection entirely. When someone types details of their legal situation into an AI platform, they are effectively disclosing it to an outside party.
Both OpenAI and Anthropic state in their terms of service that they can share user data with third parties, which adds another layer of concern.
More than a dozen major US law firms have since issued client advisories, and several common recommendations have emerged. Firms including O'Melveny & Myers suggest using closed, corporate AI systems rather than consumer-facing chatbots where possible, though they acknowledge that approach remains largely untested in court. If AI research is being conducted at a lawyer's direction, clients are advised to state that explicitly in the prompt.