The debate around AI legal privilege is no longer just theory. The New York Times vs OpenAI lawsuit has put it firmly in the real world, and it is something every General Counsel, compliance lead, and litigator should be paying attention to.
The Case in Brief
The New York Times is suing OpenAI and, as part of the case, has asked for all ChatGPT conversations, including ones users have deleted, to be preserved indefinitely. OpenAI has called the request “sweeping and unnecessary” and says it conflicts with its privacy commitments.
Right now, OpenAI’s policy is to remove deleted chats from its systems within 30 days. The company argues that forcing indefinite storage would set a dangerous precedent and undermine established privacy practices. This raises urgent questions about how AI legal privilege and data retention will be treated in future litigation.
Why “Delete” Is a Problem for AI Legal Privilege
Many people believe deleting a chat removes it completely. In most cases, it only means the content is no longer visible to the user. Depending on the provider’s systems and legal obligations, copies or backups may still exist.
That leaves some important questions for organisations to answer:
- How much sensitive information is being entered into AI tools?
- What safeguards are in place to prevent accidental disclosure?
- Who controls the storage and deletion of AI data?
- How do deletion policies interact with AI legal privilege?
These issues show why legal teams must take a proactive approach to governance and compliance before regulators or courts set the rules.
Governing and Auditing AI Use with askelie
This is where askelie helps organisations stay in control. It allows you to set clear rules on what information can be shared, which AI tools are approved, and how interactions are recorded.
With askelie, you can audit AI activity securely using encrypted records of inputs and outputs, ensuring compliance without exposing unnecessary data. Staff get built-in prompts and guidance so they know when it is safe to proceed and when to stop.
In practice, this means AI legal privilege can be safeguarded by ensuring sensitive information is properly controlled, retained only when necessary, and deleted in line with corporate policies.
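To make the idea of an encrypted audit trail concrete, here is a minimal sketch of how an organisation might record AI inputs and outputs in encrypted form. It is illustrative only and is not askelie’s implementation: the audit_ai_interaction function, field names, log file path, and in-line key generation are all assumptions made for the example, and real deployments would manage keys in a secrets store.

```python
# Minimal sketch of an encrypted AI audit log (illustrative only, not askelie's implementation).
# Assumes the "cryptography" package is installed and key management is handled elsewhere.
import json
import time
from cryptography.fernet import Fernet

# In practice the key would come from a managed secrets store, not be generated per run.
AUDIT_KEY = Fernet.generate_key()
cipher = Fernet(AUDIT_KEY)

def audit_ai_interaction(user_id: str, tool: str, prompt: str, response: str,
                         log_path: str = "ai_audit.log") -> None:
    """Append one encrypted record of an AI input/output pair to an audit log."""
    record = {
        "timestamp": time.time(),
        "user_id": user_id,
        "tool": tool,
        # Encrypt the sensitive content; only the metadata stays readable in the clear.
        "prompt": cipher.encrypt(prompt.encode()).decode(),
        "response": cipher.encrypt(response.encode()).decode(),
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Example usage: log a single interaction with an approved AI tool.
audit_ai_interaction("jane.doe", "approved-chat-tool",
                     "Summarise this supplier contract clause.",
                     "The clause limits liability to direct damages...")
```

A record along these lines lets a compliance team show what was shared, by whom, and when, while keeping the content itself unreadable without the key and deletable in line with retention policy.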
askelie is practical, affordable, and compliant. It runs on controlled servers so your AI use remains secure, auditable, and under your control.
AI Legal Privilege: What Legal and Compliance Teams Can Do Now
While the NYT vs OpenAI case is still ongoing, waiting for clarity is risky. Legal and compliance teams should act now:
- Audit AI Usage – Identify where and how AI tools are being used and what data is being shared.
- Educate Staff – Make sure employees understand that AI chats are not legally privileged and that deletion may not mean permanent removal.
- Implement Clear Policies – Use governance platforms such as askelie to enforce rules on AI use and protect AI legal privilege.
- Work with IT and Vendors – Confirm how data is stored, encrypted, and deleted, and make sure it aligns with your organisation’s risk appetite.
- Plan for Disclosure – Be prepared for AI-generated content to be requested in a legal or regulatory process.
The Bigger Picture
The law is still catching up with the pace of AI. Waiting for regulations to arrive leaves organisations exposed. By combining clear internal policies with platforms like askelie, legal and compliance teams can ensure AI legal privilege and privacy are protected in ways that stand up to regulatory and legal scrutiny.
Organisations that take proactive steps now will be better prepared for cases like NYT vs OpenAI and future disputes around AI legal privilege.
Need help putting the right controls in place? Email us at info@askelie.com and we will walk you through practical governance and audit options.


