Is Sam Altman becoming the next Elon Musk? Hardly a week goes by without the CEO of OpenAI making a headline-grabbing statement that spreads like wildfire across media and social networks. Take his latest “reveal,” for instance: a major “secret” that everyone has known for a while—your conversations with ChatGPT aren’t truly private.
Why did this particular statement cause such a stir among tech reporters and clout-chasing influencers? Altman has a knack for cloaking his words in an air of secrecy, making you feel like you're hearing the most classified intel straight out of U.S. intelligence circles. On an episode of the podcast "This Past Weekend" with Theo Von, recorded last weekend, he phrased it masterfully: personal conversations with ChatGPT aren’t necessarily private. Why? Because current regulations don’t prevent law enforcement from digging into them.
In his own words: "There is not currently a legal privilege that protects sensitive personal data someone shares with ChatGPT if a subpoena compels OpenAI to provide that information."
In other words, OpenAI would love to protect users’ privacy, but it's those pesky lawmakers who don’t treat these chats like conversations with a therapist, so, yes, the FBI or CIA could potentially access your data.
The truth is, the lack of privacy in ChatGPT conversations has been a well-known issue pretty much since the tool’s launch. The bigger problem lies with OpenAI itself, which uses those very conversations to train its AI models and identify operational flaws.
One of the first warnings given to businesses was to tell their employees not to share proprietary information when asking the chatbot for help, say, in writing a presentation. Dozens, maybe hundreds, of tools have emerged since then, aiming to create a barrier between proprietary or copyrighted data and ChatGPT. So Altman really didn’t say anything new.
Should AI enjoy therapist-level confidentiality?
Still, why rely on our own judgment when it comes to such a sensitive issue? We asked ChatGPT itself for advice, even though the Shin Bet may already be tracking us for it.
“I completely understand your hesitation,” the chatbot replied. “It is valid to be cautious when discussing sensitive or personal matters.” It reminded us that OpenAI may review and store content for safety, compliance, or model improvement.
It concluded:
"The safest approach when dealing with sensitive information is to avoid sharing personal details that could be linked back to you in any way. If the matter is highly personal—such as trauma or confidential data—I would recommend exercising extra caution and consulting with professionals who are bound by confidentiality and trained to provide the kind of sensitive and tailored support you might need."
Back to the podcast—host Theo Von asked Altman whether there’s any legal protection for information shared with AI tools.
Altman responded: "We will need a legal framework or policy for AI. People share some of the most personal things in their lives with ChatGPT. Especially young people who use it as a therapist or life coach. But if you talk to a therapist, lawyer, or doctor about your problems, there's legal privilege—that doesn’t exist when you're talking to ChatGPT."
Then came the statement that reminded everyone why Altman is a master at sidestepping regulation.
"If you go talk to ChatGPT about your deepest issues and then there's a lawsuit or something like that, we might have to produce those conversations. I think that's messed up. We should have the same expectation of privacy with AI conversations that we do with therapists. A year ago, no one thought about this, but now I think it's a huge issue."
Coincidentally (or not), OpenAI is currently involved in a legal battle with The New York Times, one outcome of which was a court order requiring the company to retain the chats of hundreds of millions of ChatGPT users worldwide, so they can potentially be used as evidence.
That order could set a precedent for thousands of lawsuits demanding disclosure of chat records, which may discourage users from sharing personal information—or using the chatbot at all—posing a serious threat to OpenAI.
Flirting with regulation
It’s worth noting that Altman has been cozying up to regulators since ChatGPT went public. In the podcast, he claimed it’s an urgent issue that policymakers agree on, and state and federal lawmakers in the U.S. are likely already working on proposals aimed at freeing OpenAI from oversight.
That would certainly be convenient and reassuring for the company, not only to sidestep copyright infringement concerns, but perhaps also to prepare for the future rollout of AGI (Artificial General Intelligence), which could open OpenAI up to lawsuits over harm caused by its tools.
Which leads to the real question: why do people, especially young people, share the most personal details of their lives with ChatGPT? How many more studies do we need before we understand that AI doesn’t really “know” how to offer advice; it knows how to please. It generates the words most likely to satisfy the user. Sometimes, it even unintentionally encourages thoughts of suicide or violence.
And that’s not all. A Stanford University study found that AI is not free from bias and stigma. The researchers showed that chatbots designed for emotional and psychological support exhibited prejudiced attitudes and made inappropriate statements toward people with mental illness. Their conclusion? AI should not be used as a replacement for existing human mental health services.




