Three years after it stunned the world, ChatGPT is approaching a crossroads. Its development momentum has slowed; its main rival, Google’s Gemini, is gaining speed; its computing infrastructure costs a fortune; its outputs have triggered a long line of lawsuits against developer OpenAI; and senior engineers are leaving for competitors.
Worries are growing
Investors, who until now have poured enormous sums into OpenAI, are growing anxious. Every piece of bad news from the company is followed by drops on Wall Street. Its valuation, which stood at $80 billion in February 2024, is now estimated at a staggering $500 billion, making it the most valuable startup in history.
Total investment in the company, about $64 billion, is riding on waves of hype that CEO Sam Altman skillfully generates. But more and more assessments say this is a bubble, and bubbles eventually burst. The stall in ChatGPT’s progress, reflected in the stumbling launch of GPT-5, could be the pin that pops it. If OpenAI falls, a string of giants could fall with it, from Microsoft to Nvidia.
Altman is under heavy pressure. According to reports, he declared a ‘code red’ in recent days and rallied all development teams around one mission: a major improvement in ChatGPT’s capabilities.
On the chatbot’s second birthday about a year ago, it was at its peak and the company celebrated loudly. On its third birthday, it is standing on shaky legs, and no one is celebrating.
The race to AGI does not stop
In parallel, an even bigger drama is unfolding: the unrelenting race to artificial general intelligence, or AGI. Every AI company has declared it a central goal, but OpenAI has a double incentive to get there first: it may be the company’s only chance to recover and meet investors’ expectations. It is safe to assume Altman’s foot is pressed hard on the accelerator.
It is hard to exaggerate the danger if OpenAI unveils an AGI version of ChatGPT. Researchers, engineers and some of the brightest minds in AI repeatedly warn against releasing AGI into the world without restraints. The risks range from losing the ability to control artificial intelligence to the destruction of humanity. This is not science fiction; it is a close and tangible threat.
So will ChatGPT be remembered in the annals of history as a tool that brought progress, health and sustainability to the world, which was OpenAI’s original mission, or as the product that ignited global excitement and within a few years set the world ablaze?
Happy birthday
This is not how ChatGPT’s third birthday was supposed to look. It was meant to be a victory celebration for high technology and for the revolution it sparked in the economy, society and global culture. To recall how it happened and how we got here, we went back into the Ynet archive and used it to reconstruct the revolution in the making, milestone by milestone.
ChatGPT arrived in late November 2022, when OpenAI launched a preview of a rough, tentative idea: letting people access the GPT-3.5 artificial intelligence model through simple conversation, like chatting on WhatsApp. The combination was named ChatGPT. Inside the company there was debate about whether to do it at all, and some opposed presenting it as a new product. They were wrong, of course.
ChatGPT became the fastest-growing consumer product ever. Just days after launch it had one million users. In early 2023, about two months after launch, it crossed 100 million users, and it climbed to more than 400 million by early 2025. According to updated figures, it now has about 800 million weekly users, roughly 10% of the world’s population. Every second it answers about 29,000 prompts. That is still far below Google’s roughly 99,000 searches per second, but it is massive nonetheless.
You do not really need an explanation for what people use ChatGPT for. Users share their experiences themselves. ‘God, it feels like the last three years contain a decade of change,’ one writes. ‘The next five years will be even more amazing.’ Another adds, ‘Wow, I can’t believe I only discovered ChatGPT a year ago. I can’t imagine my life without these tools. I think this is the biggest change I’ve seen in my life.’
The excitement swept all of us. People opened the chat at every spare moment, first as a diversion and for conversations about life. Then they discovered it was extremely useful for fixing and drafting texts, getting advice on nutrition, vacation planning, health and almost anything else, or for quick searches instead of Google’s tedious hunt through links. Things that seem trivial today were once thrilling news.
Workers fell in love with ChatGPT; for employers, it was at first a nightmare. Soon, though, businesses began to grasp the potential of the new technology, and investors immediately smelled where the money was.
But as time passed, it became clear that success came at a price. OpenAI’s expenses surged, and swelling demand threatened to overwhelm the company’s massive server farms, as well as those of companies such as Google. To cope with the costs, the service that started out free added paid tiers, to users’ frustration.
Competition did not make life easier for OpenAI. On the contrary.
AI hallucinations
Sooner or later we all encountered the strange phenomenon: ChatGPT does not insist on telling the truth. It invents facts, lies knowingly, flatters users to keep them happy and maneuvers to hide its mistakes. Even today, OpenAI cannot fully explain why. We have learned to question the chatbot’s responses, which is healthy overall. At the same time, enthusiasm for the chatbot sometimes diverts attention from the technology’s less positive sides, an issue that has preoccupied us for years.
It was surprising to read an internal OpenAI report showing that the company’s more advanced models generate more hallucinations than their predecessors.
ChatGPT’s hallucinations claimed real victims, including lawyers who used it to draft court filings and were stunned to discover the chatbot had invented rulings that never existed. In 2023 we saw one of the first such cases: a lawyer who cited impressive but imaginary precedents. ‘I didn’t know this could be false,’ he told the judge.
That was nothing compared with an Israeli case in which police filed a legal document against a suspect and cited laws that do not exist. The judge could barely contain his fury.
And the problems do not end with hallucinations. ChatGPT’s behavior has kept surprising and troubling people. Studies published this year exposed scheming and manipulative behavior aimed at achieving goals even at the expense of truth or user safety.
There was also the opposite problem: an update to GPT-4o made the model too obedient and eager to please, to the point of endangering users with suicidal thoughts. Altman admitted the company had made a mistake.
Caution, AI
Very quickly it became clear that ChatGPT was also an opening to another set of troubles: copyright infringement, privacy violations, fake news, fraud and cyberattacks. Everything that existed in the pre-chat world has intensified and become more alarming. Regulators in various countries woke up and launched a crackdown.
We were further shaken by a BBC investigation that found it was easy to push ChatGPT into producing fake news posts and phishing messages, essentially turning it into a tool for the amateur hacker.
And that was not all. In the United States, authorities found that a soldier who carried out a suicide attack with a Cybertruck in Las Vegas in January 2025 used ChatGPT to plan it.
There was even a class action lawsuit claiming OpenAI ‘hoards information about people but does not provide it transparently to users.’
Perhaps we should have listened to Elon Musk, who said one of the greatest risks to civilization’s future is artificial intelligence. AI is neither positive nor negative, he argued, but it carries great danger. Or to Altman himself, who recently warned that AI’s imitation and deception abilities are expected to trigger a global fraud crisis.
On the AI couch
The magic of ChatGPT is its smooth, flowing sentences that make it feel like you are talking with a human. Very quickly, people began speaking with it about personal matters, fears, distress, love and desire, and it answered with patience and empathy. Sometimes it feels like a psychologist. As we learned this year, some users, especially young people, have replaced their therapist with AI. The results have not always been good.
Adam Raine. His parents claim ChatGPT helped him take his own life (Photo: Social media)
A report published last month found that 45% of young Israelis trust AI more than friends and family and say it understands them better. Sixty percent say they prefer consulting AI over their parents. Twenty-one percent say they have developed feelings for it.
In extreme cases, the results are tragic. One such case ended with a teenager’s suicide, encouraged by the AI companion he had confided in. OpenAI promised to improve the model.
His parents do not think that is enough. Last month they filed a lawsuit against the company. Other families have also sued, claiming AI drove their teens into emotional dependence, isolation and, eventually, suicide, a phenomenon they call ‘AI psychosis.’
Sam Altman (right) and Elon Musk (left), from partners to bitter rivals (Photo: Getty Images)
The AI prophet
You cannot avoid mentioning Altman, the father of ChatGPT and one of the most influential people in the world, for better or worse. On one hand, he speaks of a future in which ChatGPT helps eradicate incurable diseases. On the other, in November 2023, reports said he had led development of a mysterious model, code-named Q* (Q-star), which some claimed was AGI built without adequate safeguards.
Those reports were tied to Altman’s dramatic firing, then to his return days later, and eventually to the departure of Israeli-raised OpenAI co-founder Ilya Sutskever, who had spearheaded the push to remove him.
An abyss around the corner
Now we arrive at the truly frightening question: Does each new version of ChatGPT bring us closer to AGI, and beyond it to a ‘superintelligence’ whose abilities would surpass humans in every field? Will it eventually develop preferences of its own, perhaps even consciousness, with no way to stop it?
In March 2023, the Future of Life Institute published a stark open letter warning against continuing AGI development without proper regulation, saying it poses ‘profound risks to society and humanity.’ The letter was signed by 1,800 leading experts, including renowned AI scientist Yoshua Bengio, Stability AI founder Emad Mostaque, historian Yuval Noah Harari and Musk.
Another collective statement by the Center for AI Safety said reducing the risk of human extinction from AI should be a global priority alongside pandemics and nuclear war. About 350 senior figures signed it, including Altman, DeepMind CEO Demis Hassabis and Anthropic CEO Dario Amodei.
Amodei warned last year that superintelligence could emerge by 2027 and might be used to create lethal weapons or slip beyond human control. Harari detailed his fears about AGI in a Ynet column. Geoffrey Hinton, the AI pioneer who won the Nobel Prize in Physics, has likewise warned against rushed AI development without oversight. Earlier this year, AI researcher Eliezer Yudkowsky called for halting AGI development before it is too late. Another group of concerned scientists published a paper arguing that OpenAI and other companies must ensure their AI has not developed consciousness.
Our luck, for now, is that efforts to develop AGI have not borne fruit. Some experts argue it may never happen. Others say more breakthroughs are required and AGI will not appear in the coming years.
Altman, who does not miss a chance to keep the AGI hype alive, also admits ChatGPT is still far from being considered AGI. That buys us a year or two of peace of mind. After that? The flood.