Why AI Conversations Are Becoming a New Discovery Risk
On 13 May 2025, a U.S. court issued a data preservation order requiring OpenAI to retain every ChatGPT conversation, including those users had asked to delete, as part of the ongoing copyright litigation brought by The New York Times.
At first glance, this might seem like a purely U.S. issue. But for Australian legal teams, it’s a warning bell.
This order sets a concerning precedent – that deleted user interactions with generative AI tools may still exist and may still be discoverable in legal proceedings, even across borders.
The intersection of AI-generated content, data privacy and discovery obligations is evolving rapidly, and the implications are global. At the heart of this development is a fundamental legal question: if a user deletes content within an AI tool, but the platform provider still retains it, is that data within the user’s control for discovery purposes?
A Legal Grey Zone
Under Australian rules of discovery, parties must produce documents in their possession, custody or control. But how does that apply when:
- the “document” is an AI-generated conversation,
- the user followed the platform’s deletion procedure,
- and the content remains preserved offshore by a third-party provider?
Does the user still “control” that data? Could they retrieve it if asked? And if they cannot, should they still be held responsible for disclosing it?
These are unsettled legal questions — but the kind that courts in all jurisdictions will inevitably need to answer.
A Risk Lurking Behind the Prompt
AI tools like ChatGPT have become everyday business aids used for writing, ideation, coding, summarising and sometimes even sensitive drafting. But every one of those interactions creates content. And unless organisations think about how that content is stored and used, they may be creating discoverable material without even realising it.
What happens when:
- An AI prompt contains confidential commercial strategy later disputed in court?
- Drafts of correspondence or contracts are prepared with AI assistance?
- A user explores legal arguments via an AI tool which retains the entire session?
If those conversations are preserved by the platform, whether or not the user requested deletion, they could surface in litigation.
Cross-Jurisdictional Complexity
The OpenAI order was made in a U.S. court, but its ripple effects are international. Data stored in the U.S., under a U.S. preservation order, could become the subject of interest in disputes arising in other countries, including Australia.
Imagine this:
- A party in Australian litigation is asked to produce any relevant ChatGPT history.
- They explain it was deleted using the proper tools within the platform.
- The opposing party points to the U.S. order and argues the data still exists.
Is that information now within the user’s practical control, if not their legal custody? Could an Australian court compel a party to try to retrieve it? And what happens if OpenAI refuses to respond to a local court?
These are not simple jurisdictional questions; they go to the heart of what discovery means in the age of AI.
No AI Privilege (Yet)
In response to the order, OpenAI CEO Sam Altman suggested that interactions with AI tools should be protected by something like “AI privilege”, akin to legal or medical confidentiality.
It’s a compelling concept, but at present, no such privilege exists in Australian law. That means:
- there is no established protection against disclosure of AI interactions,
- there are no guaranteed limits on how “private” those prompts really are,
- and there’s growing uncertainty about what information courts may expect to see in discovery.
Until privilege is formally recognised, users should assume their AI conversations are no more protected than any other written business record.
What Legal Teams Should Do Now
At elaw, we’ve supported legal teams through every shift in the evidentiary landscape, from paper to email to the cloud. As new technology reshapes how evidence is created and stored, the legal teams that anticipate these changes will be far better positioned to respond. If you’d like to explore how AI might affect your next matter, we’re here to help guide that discussion.
Here are some steps you can take now:
- Audit your AI usage — Who in your organisation is using tools like ChatGPT? For what purpose?
- Review data handling policies — What guidance exists for storing, deleting or retrieving AI-generated content?
- Understand the terms — Many AI platforms retain user data for training or diagnostic purposes, regardless of deletion requests.
- Train your teams — Ensure staff understand that AI interactions may become discoverable and should be approached with that in mind.
And importantly, start the internal conversation. Even if your matters haven’t yet involved AI-generated evidence, the next one might.
Looking Ahead
This May 2025 order is likely just the first of many. Courts around the world are being asked to redefine legal concepts like “control”, “deletion” and “privilege” in light of how AI works.
We don’t yet know exactly how these questions will be resolved, but we do know they’ll impact discovery strategy, litigation risk and information governance in Australia and beyond.
AI tools are here to stay. The question is whether your litigation readiness is evolving alongside them.