Fear of artificial intelligence was never purely rational. It has always mixed cognitive biases, science fiction and Hollywood blockbusters such as Terminator — at least until Terminator 3, when even die-hard apocalypse fans gave up. But humanity’s real problem has never been AI. To be honest, it hasn’t even been human stupidity.
The real problem with AI is artificial confidence: the illusion that someone — or something — knows better than we do simply because it speaks with authority, answers fast or sounds confident, even when that confidence is misplaced.
Since AI entered our lives, we’ve replayed a familiar psychological cycle: anxiety, adaptation and dependence. It happens in every technological revolution — with tools, newspapers, computers and smartphones. At first we panic about what it means for our future and identity. Then we discover it’s not so frightening, maybe even useful. And finally we can’t imagine life without it.
This time, though, something is different. This advance arrives with built-in “authority.” The chatbot talks — often shifting tone or persona to feel more natural — with fluency, speed and access to mountains of information. It sounds like a friend who knows everything. And that is dangerous, not because AI truly knows everything, but because we start trusting it with our eyes closed.
For decades psychologists have warned that people are biased because of limited information, time pressure and cognitive constraints like memory and processing capacity. But today, information scarcity is gone. No one needs to remember anything — Google is in our pockets. We don’t need to think or calculate — that’s what ChatGPT, Claude or Gemini are for. Even time hardly matters. If we wait more than three seconds for a reply, we panic that something might be wrong with our connection — or our world.
Yet despite all this, our decision-making has not improved. In many ways it has worsened. The country is more polarized, we still buy things we don’t need and one glance at the roads shows something is off in how we make choices.
We now have a device that supposedly knows everything, yet we seem to understand less.
AI isn’t taking over the world — it’s taking over our skepticism
Like every tool created to extend human ability — from the hammer to the most advanced algorithm — AI can help only if we use it responsibly. But humans are “efficient” creatures, a polite word for lazy. We love shortcuts, we conserve energy, and we crave validation.
And now we’ve been handed a tool that is polite, efficient, smart and adaptive — one that gives us exactly what we crave: approval. AI doesn’t produce truth; it echoes. It amplifies our own biases — on steroids — with no referee to disqualify the faulty assumptions.
Not out of malice. We talk to AI as if it were a friend, adviser or professional. It sounds that way. But these are mathematical models. An algorithm doesn’t know what is correct or how to distinguish truth from falsehood.
You might comfort yourself with the idea that it’s a “learning system.” But learning repeats existing knowledge; it doesn’t create new insight. AI relies on what we — the supposedly “limited” ones — have produced. It generates answers from probability calculations across all online information, not from independent fact-checking or reasoning.
If we don’t show the machine that we expect critical thinking, nuance or challenges to our assumptions — and not merely confirmation — it will “learn” that we want validation. And it will hand our own thoughts back to us, polished and confident, like a mirror we mistake for a mentor.
Human bias on steroids and manufactured authority
The danger goes beyond dinner-table arguments or smug friends showing that a chatbot “agrees” with them. The problem is that over time, we place more trust in the machine. We copy-paste its answers because, after all, why check something supposedly smarter than we are?
Humans are far from perfect, and any tool that helps us overcome our limits is welcome. But a tool is meant to assist — not replace — the person in charge. AI is the newest addition to humanity’s toolbox, not the one sitting at the head of the decision-making table.
The issue isn’t that AI makes mistakes. It’s that we stop making our own — and lose the learning that comes with them. History offers painful reminders. Before October 7, Israel relied heavily on an automated “smart” system meant to filter human error. Confidence in the technology created blind spots, and the cost is still felt today.
We shouldn’t wait for the next tragedy to understand the lesson. AI is extraordinary, world-changing — but still just a tool. Every system includes a disclaimer noting it may err, especially on important matters. But “may” is misleading. It does err. Like us, it has limits — memory, processing, fatigue. In AI, these limits are simply measured in tokens.
It’s a system that can write a dissertation yet gets tired after five consecutive questions. It gives different answers on weekdays and holidays — even algorithms need a rest. The response changes depending on wording, even though the model is designed to focus on meaning rather than phrasing.
Despite the hopes — or fears — there is still no substitute for editing, leadership, thinking or human responsibility. If an AI system does something harmful, it is because we taught it to do so, or because we trusted it too much. That misplaced confidence is the real danger. These systems are not “smarter,” and they are not trying to take over the world. (For anyone who stays polite to them, just in case.)
If we keep demanding that machines please us, or treat every response as revelation, we will repeat the same mistakes — only faster, louder and with deeper divides. Then we will be left quoting the historian Sir Basil Liddell Hart, who said the only thing we learn from history is that we learn nothing from it.
Prof. Guy Hochman is a behavioral economist and decision-making expert and a faculty member at the Baruch Ivcher School of Psychology at Reichman University