- A federal judge has ordered OpenAI to preserve all ChatGPT conversations, including those users have deleted, amid copyright litigation brought by news organizations led by The New York Times.
- The plaintiffs argued that ChatGPT users exploited the AI tool to circumvent paywalls, but OpenAI dismissed these claims as “speculative” and “unfounded,” asserting that they rest on no credible evidence.
- OpenAI warned that complying with the court’s preservation order could violate its commitments to user privacy and risk exposing sensitive data shared through the platform.
- The legal battle underscores broader tensions between protecting copyrighted content and safeguarding digital privacy rights as AI technology spreads.
- The controversy has sparked widespread user backlash, with many critics questioning the ethics and practicality of indefinitely archiving private conversations on platforms deeply embedded in digital services.
In a ruling drawing sharp criticism from privacy advocates, a federal magistrate judge has mandated that OpenAI retain every ChatGPT user interaction, including conversations users explicitly chose to delete. The May 13 order, issued by U.S. Magistrate Judge Ona T. Wang in New York, directed the AI company to “preserve and segregate all output log data that would otherwise be deleted” until further court action. The sweeping directive, first revealed weeks later as OpenAI challenged it, sits at the center of a growing legal battle over AI training data and users’ rights to control their private information.
The order stems from lawsuits filed by media organizations led by The New York Times, which allege that OpenAI unlawfully used their copyrighted content, including news articles, to train ChatGPT. Plaintiffs argue that without preserving every chat record, OpenAI risks destroying evidence of users employing the chatbot to bypass paywalls or reproduce protected work. But OpenAI has pushed back firmly, asserting that the order undermines its privacy guarantees to users and lacks an evidentiary basis.
Legal clash over copyright and privacy
The lawsuit unfolds at a time of heightened scrutiny over AI data practices. New York Times attorneys claim OpenAI’s systems process requests such as summarizing paywalled articles, enabling users to read journalism they have not paid for. OpenAI calls these accusations speculative. “There is no evidence supporting the theory that users delete chats to hide copyright misuse,” the company wrote in court filings, emphasizing that plaintiffs have failed to present “a single piece of evidence” linking deleted chats to infringement.
The company argues the preservation mandate amounts to judicial overreach. Chief Operating Officer Brad Lightcap stated, “This order fundamentally conflicts with the privacy commitments we’ve made to users.” OpenAI noted ChatGPT users discuss everything from tax planning to relationship struggles, and permanently archiving all conversations—even temporary chats—would expose sensitive data. “When users delete a chat, they’ve taken a deliberate step,” Lightcap added. “The court’s order erases that agency.”
The ruling has triggered widespread consternation. OpenAI estimates that hundreds of millions of people use the platform globally, many of them for professional consultations, creative brainstorming and even medical inquiries. One consultant urged clients on LinkedIn to avoid OpenAI’s API over fears their trade secrets could be “read by outsiders.” Another user lamented on X, “If my PTSD therapy chat goes into a court file, that’s beyond terrifying.”
OpenAI argues privacy rights are at risk
Privacy experts warn the case could set a dangerous precedent. “This isn’t just about ChatGPT,” said Katie Brewster, a digital rights attorney with the Electronic Frontier Foundation. “If courts force companies to override user data choices, it jeopardizes trust in all digital tools.” OpenAI agrees, citing the order’s potential to breach its global privacy agreements.
The company had built options such as “temporary chats” and account deletion to let users purge their data, with deletion finalizing within 30 days. Under the order, those controls are suspended. “We’re forced to jettison those user promises,” OpenAI wrote, arguing the breach could damage customer relationships and even put the company in violation of privacy laws. Complying would also require overhauling its infrastructure, diverting months of engineering resources.
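To make the mechanics concrete, here is a minimal sketch of how a 30-day soft-delete window like the one OpenAI describes might work. The class and function names are hypothetical illustrations, not OpenAI’s actual implementation; under the preservation order, the final purge step would have to be switched off.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Deletion "finalizes within 30 days" per OpenAI's stated policy.
RETENTION_WINDOW = timedelta(days=30)

@dataclass
class Conversation:
    conversation_id: str
    deleted_at: datetime | None = None  # set when the user deletes the chat

    def mark_deleted(self) -> None:
        """User-initiated delete: flag the record now, hard-delete later."""
        self.deleted_at = datetime.now(timezone.utc)

    def purge_due(self, now: datetime) -> bool:
        """True once the 30-day window has elapsed and hard deletion is due."""
        return self.deleted_at is not None and now - self.deleted_at >= RETENTION_WINDOW

def run_purge(conversations: list[Conversation]) -> list[Conversation]:
    """Drop every record past the window; this is the step a preservation order freezes."""
    now = datetime.now(timezone.utc)
    return [c for c in conversations if not c.purge_due(now)]
```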
Users sound the alarm amid growing concerns
The backlash has been swift. Social media platforms erupted as news spread, with cybersecurity professionals and everyday users alike criticizing the order. “This is a security nightmare,” wrote one engineer on X, while another called it “a direct attack on user autonomy.” Firms using OpenAI’s API, whose request data is now frozen under the order, face heightened risks, including the indefinite retention of confidential client information they had expected to be deleted.
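One mitigation available to such firms, sketched below under the assumption that they use the official `openai` Python SDK, is to scrub sensitive fields client-side so they never reach servers whose logs are now frozen. The redaction patterns and the `gpt-4o` model name are illustrative choices, not requirements.

```python
import re
from openai import OpenAI  # pip install openai

# Hypothetical patterns for data a firm may not want retained in frozen logs.
REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),          # US Social Security numbers
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),  # email addresses
]

def scrub(text: str) -> str:
    """Replace sensitive substrings before the prompt leaves the client."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(prompt: str) -> str:
    """Send a redacted prompt; only the scrubbed text can end up in retained logs."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": scrub(prompt)}],
    )
    return response.choices[0].message.content
```

Redaction of this kind is coarse, but it shifts control back to the client: whatever the courts decide about retention, placeholders are all the frozen logs would contain.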
Legal analysts note the judge’s reasoning: Wang expressed skepticism about OpenAI’s “good-faith retention policies.” She flagged a hypothetical scenario in which ChatGPT users might delete chats after hearing of the lawsuit to “cover their tracks.” OpenAI disputes that premise. “Judge Wang’s order assumes bad faith, but we’ve never destroyed data,” the company wrote. “The plaintiffs’ theory remains fiction.”
From security gaps to billion-dollar lawsuits
This clash echoes past tech controversies. In March 2023, OpenAI briefly took ChatGPT offline after a bug let users see titles from strangers’ chats, a flaw that underscored the challenges of balancing functionality with privacy. By contrast, Apple has built much of its brand on data minimization, and the gap reinforces public unease over corporate data hoarding.
The case could also influence global AI governance. The EU’s GDPR already guarantees users a right to erasure, and its AI Act adds transparency obligations for AI providers, while U.S. legislators debate whether “fair use” covers training datasets. Legal scholar Jason Schultz of NYU’s Information Law Institute noted, “Courts are playing policymaker here. If they side with corporate copyrights over privacy, it could stifle innovation and users’ digital rights.”
Furthermore, a separate class-action lawsuit accusing OpenAI of mass nonconsensual data collection adds pressure. That case alleges ChatGPT’s training material includes “every piece of internet data,” with plaintiffs demanding $63 billion in damages. While unrelated to the preservation order, it underscores broader fears that AI companies hoard user data without transparency.
Future of user privacy hangs in the balance
As the case progresses, the outcome could redefine digital privacy expectations. OpenAI’s appeal, which requests oral argument, will test how courts weigh corporate accountability against user rights. “This isn’t just about ChatGPT’s logs,” said Brewster. “It’s about whether we retain control over our digital footprints when technologies advance faster than laws.”
For millions of users, the message is stark. Until resolved, every prompt sent to ChatGPT may now be immortalized—a reality few imagined when they first logged into the AI’s friendly interface.