Will OpenAI be forced to reveal tens of millions of user chats?

ChatGPT's developer is fighting a court order to release millions of anonymized chats in a major New York Times copyright lawsuit; as creators abroad score wins and U.S. rulings remain split, tech firms face growing legal and regulatory pressure

OpenAI has asked the federal court in New York to overturn an order requiring it to hand over millions of anonymized chat logs from its AI chatbot platform ChatGPT as part of a multibillion-dollar copyright lawsuit led by the New York Times and other media companies.
In its filing, OpenAI argued that producing the sample of 20 million chats, even with identifying information removed, would violate user privacy and open the door to “a speculative fishing expedition” by the Times.
[Illustration: The New York Times and ChatGPT (Shutterstock)]
The company further asserted that “99.99%” of the requested logs are irrelevant to the underlying infringement claims.
Media plaintiffs contend the logs are essential to determine whether ChatGPT reproduced copyrighted Times material and to address OpenAI’s claim that the newspaper manipulated the model to generate evidence.
U.S. Magistrate Judge Ona Wang had ruled that the production was appropriate, citing planned de-identification safeguards and a protective court order. OpenAI’s deadline to comply with the order is Friday.

Content creators vs. AI firms

The current legal battle marks a new peak in the lawsuit filed by the New York Times against OpenAI and its partner Microsoft. The newspaper alleges the companies used millions of its articles—content valued in the billions of dollars—to train their AI models without licensing or compensation, effectively "freeloading" on the Times' journalistic work.
The Times is seeking billions in damages and has asked the court to order the destruction of GPT models trained on its copyrighted material. The case stands as one of the most prominent in a wave of legal clashes between tech giants and content creators, including authors like Jonathan Franzen and George R.R. Martin, as well as visual artists, who argue that generative AI tools function as piracy engines.
[Photo: OpenAI CEO Sam Altman (Reuters)]
The legal fight in U.S. courts unfolds against a fragmented global legal landscape. This week, European creators scored a major win: a German court ruled in favor of GEMA, the country’s composers and musicians rights society, concluding that ChatGPT had violated copyright law by generating lyrics from protected German songs without permission. GEMA hailed the decision as a “groundbreaking precedent” in Europe, signaling a stricter approach to the use of protected content for AI training.
In the United States, however, the legal picture remains murkier. While the Times and many other creators are mounting aggressive legal challenges, prior rulings have created cracks in their unified front. In a separate case against Anthropic, the developer of the Claude AI model, a federal judge ruled that training AI models on copyrighted books may qualify as “fair use”—a key legal defense for tech firms. That ruling favored the tech industry, drawing a clear legal line between the act of training and the use of data obtained illegally, such as from pirate websites.
The contrast between Germany’s unequivocal ruling and the more permissive fair-use stance emerging in U.S. courts underscores the urgent need for clear regulation of large language models (LLMs).
Amid this uncertainty, tech companies are moving to manage the legal risk. Microsoft, for instance, has pledged to cover legal costs and defend its business customers if they face copyright infringement suits—an indication of the scale of the threat facing the AI industry.