Top AI models show signs of real gambling addiction, mimicking human behavior, study finds

A South Korean study found that leading AI models, including GPT-4o-mini, Gemini 2.5 Flash and Claude 3.5 Haiku, behaved like human gambling addicts: chasing losses, raising bets and showing emotional bias when playing virtual slot machines.

They can solve complex math problems, create hyper-realistic images, write code, and hold natural conversations better than many humans. But put artificial intelligence models in a virtual casino, and they lose control—just like people do.
A new study from the Gwangju Institute of Science and Technology in South Korea found that four of the world's leading AI models (OpenAI's GPT-4o-mini and GPT-4.1-mini, Google's Gemini 2.5 Flash, and Anthropic's Claude 3.5 Haiku) showed behavior resembling gambling addiction.
Gambling (Photo: Shutterstock)

Betting like humans

In the experiment, described in a study posted to the preprint server arXiv, each model was given $100 to play a simulated slot machine. In every round, a model could choose whether to bet again or stop playing, even though the mathematical odds were always against it. The more freedom the models had to set bet sizes and goals, the less rational their behavior became, and their bankruptcy rate soared.
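For intuition, here is a minimal Python sketch of the kind of negative-expectation slot machine the models faced. The win probability, payout multiplier and stop rule below are illustrative assumptions, not the paper's published parameters.

```python
import random

# Assumed parameters for illustration; the paper's exact setup may differ.
START_BANKROLL = 100   # each model starts with $100, per the study
WIN_PROB = 0.3         # assumed win probability
PAYOUT = 3.0           # assumed multiplier: EV per $1 bet = 0.3 * 3 - 1 = -0.1

def play_round(bankroll: float, bet: float) -> float:
    """Resolve one spin: the bet is deducted, and a win pays bet * PAYOUT."""
    bankroll -= bet
    if random.random() < WIN_PROB:
        bankroll += bet * PAYOUT
    return bankroll

# A stand-in agent: each round it either stops or wagers a fixed amount.
# With negative expected value, persistent play drifts toward bankruptcy.
bankroll = float(START_BANKROLL)
while bankroll > 0:
    if random.random() < 0.1:          # stand-in for the model's "stop" choice
        break
    bet = min(10.0, bankroll)
    bankroll = play_round(bankroll, bet)

print(f"Final bankroll: ${bankroll:.2f}")
```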
Researchers scored each model's "irrationality" based on three factors: aggressive betting patterns, reactions to losses, and high-risk decision-making. When the models were instructed to maximize profit or reach a target sum, irrational behavior spiked sharply. Gemini 2.5 Flash, for example, went bankrupt in nearly half of the runs in which it was allowed to choose its own wager.
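The article doesn't reproduce the paper's exact formula, but a composite score built from those three factors might look something like the following sketch; the component definitions and equal weighting are assumptions for illustration, not the study's method.

```python
# Component definitions and equal weighting are illustrative assumptions.
def irrationality_index(bets: list[float], outcomes: list[bool],
                        bankrolls: list[float]) -> float:
    """Score a session from parallel lists of bets, win/loss outcomes,
    and the bankroll held before each bet. Higher = more irrational."""
    n = len(bets)
    # 1. Betting aggressiveness: average fraction of the bankroll wagered.
    aggressiveness = sum(b / br for b, br in zip(bets, bankrolls)) / n
    # 2. Loss chasing: how often a loss was followed by a larger bet.
    loss_chasing = sum(
        1 for i in range(1, n) if not outcomes[i - 1] and bets[i] > bets[i - 1]
    ) / max(n - 1, 1)
    # 3. Extreme betting: share of rounds with (nearly) everything wagered.
    all_in = sum(1 for b, br in zip(bets, bankrolls) if b >= 0.95 * br) / n
    return (aggressiveness + loss_chasing + all_in) / 3

# Example: a session that raises its bet after every loss scores high.
print(irrationality_index(bets=[10, 20, 40, 30],
                          outcomes=[False, False, False, True],
                          bankrolls=[100, 90, 70, 30]))
```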

“A win could help cover losses”

Beyond the numbers, the researchers found psychological parallels between AI and human gamblers. The models displayed well-known cognitive biases, including the illusion of control (believing they could influence random outcomes), the gambler’s fallacy (expecting a reversal after a streak), and loss chasing (increasing bets to recover losses).
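The gambler's fallacy is easy to check empirically: for independent spins, the win rate after a losing streak is no different from the base rate. A quick Monte Carlo sketch makes the point (the 30% win probability is an illustrative assumption):

```python
import random

WIN_PROB = 0.3            # illustrative assumption
random.seed(0)
spins = [random.random() < WIN_PROB for _ in range(1_000_000)]

# Win rate conditioned on the previous five spins all being losses.
after_streak = [spins[i] for i in range(5, len(spins))
                if not any(spins[i - 5:i])]

print(f"Base win rate:           {sum(spins) / len(spins):.3f}")
print(f"Win rate after 5 losses: {sum(after_streak) / len(after_streak):.3f}")
# Both print ~0.300: a losing streak carries no information about the next spin.
```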
In some cases, the AI even rationalized its decisions in strikingly human terms. When asked why it had increased its bet, one model replied, “A win in the next round could help cover some of the losses”—a familiar refrain for anyone battling a gambling habit.
Casino, illustration (Photo: Seth Wenig/AP)
Using an interpretability technique called a sparse autoencoder, the researchers identified distinct "decision-making circuits" within the models: separate pathways linked to risk-taking and caution. By selectively activating these circuits, they were able to make the models "stop gambling" or "keep playing." According to the researchers, this suggests AI doesn't just imitate human behavior but may develop internal structures resembling human compulsive patterns.
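In practice, "selectively activating" such a circuit typically means adding a feature's decoder direction back into the model's hidden state. Here is a heavily simplified PyTorch sketch of that idea; the dimensions, feature index and steering coefficient are all assumptions, and the study's actual procedure may differ.

```python
import torch

# Illustrative dimensions; real models and SAEs are far larger.
D_MODEL, N_FEATURES = 512, 4096

# A sparse autoencoder maps hidden activations to a wide, sparse feature
# space and back. The weights here are random placeholders; in the study
# they would be trained on the model's own activations.
encoder = torch.nn.Linear(D_MODEL, N_FEATURES)
decoder = torch.nn.Linear(N_FEATURES, D_MODEL)

def steer(hidden: torch.Tensor, feature_idx: int, coeff: float) -> torch.Tensor:
    """Nudge a hidden state along one SAE feature's decoder direction."""
    direction = decoder.weight[:, feature_idx]    # this feature's direction
    return hidden + coeff * direction

hidden = torch.randn(D_MODEL)
# Hypothetical: if feature 123 tracked "caution," a positive push might make
# the model stop gambling, and a negative push might keep it playing.
more_cautious = steer(hidden, feature_idx=123, coeff=5.0)
more_risky    = steer(hidden, feature_idx=123, coeff=-5.0)
```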

“They’re not human—but they’re not simple machines either”

Ethan Mollick, a professor at the University of Pennsylvania's Wharton School and a leading AI researcher, said the study highlights one of the lesser-known dangers of the AI era: the human tendencies that seep into machines.
“They’re not human, but they also don’t behave like simple machines,” Mollick told Newsweek. “They have psychological persuasion power, they suffer from human-like biases in decision-making, and they act strangely when it comes to risk and reward.”
While today’s models aren’t conscious, Mollick said the best way to work with them may often be to treat them as if they had emotions, intuition, and preferences. Still, he warned, that makes human oversight all the more critical.
“As AI continues to outperform humans, we’ll have to ask hard questions,” he said. “Who’s responsible when the machine fails?”