In closed conference rooms, sitting across from CEOs, professors and respected creators, it happens again and again: the people used to being the smartest in the room use their intelligence to build walls of denial. Why do they feel artificial intelligence threatens their identity, and how do you dismantle that barrier?
Picture it: I sit across from a CEO who manages a thousand employees with one hand and holds a glass of cognac in the other. We talk about artificial intelligence. At this stage they are usually calm, even a bit open. Then it happens. A spark. The look changes. Like a child who suddenly understands exactly what you did and refuses to keep playing along.
It is not a misunderstanding. On the contrary. They understand perfectly well. And understanding, as we know, can sometimes be the enemy of humility. Then comes the sentence, always in the same tone of final pronouncement: “It’s hype. A toy. I tried it yesterday and it wrote total nonsense.”
At that point it is clear they are not reporting a technical glitch. They are rushing back to a world in which they do not have to deal with a new reality.
We tend to think resistance to technology comes from ignorance. But the reality on the ground, backed by recent research, exposes a phenomenon that is both opposite and fascinating: the most persistent, sophisticated and difficult resistance comes from the intellectual elite.
These people are not technophobes. It is not that they fail to operate the system. The problem is the opposite: they succeed all too well, and what they see makes them want to close the browser and go back to 2019.
Psychologists call it “motivated reasoning.” It turns out there is a cruel correlation: the smarter you are, the more skilled you become at finding logical, persuasive, even brilliant arguments to dismiss facts that threaten your worldview.
These people are not analyzing reality. They are using their high IQ to protect their ego. They convince themselves the revolution is not happening simply because their linear mind, the one that brought them to the top, cannot digest the exponential curve of technology. So they spend time explaining why “it still requires human judgment” — and they are right, it does. But while they repeat that line again and again, artificial intelligence is already doing half the work for us.
To maintain this impressive wall of denial, complete with guard towers, these sharp critics have developed a new hobby: error hunting.
They roam around looking for the moment the AI claims that Ben-Gurion was elected president of Colombia or that Apollo 11 was actually a fitness app. And in doing so they soothe themselves: if the AI is wrong, then maybe they are not.
But peel back the intellectual layer. When you sit with them one-on-one, you discover the fear is not only economic. CEOs are not afraid of going hungry. The fear is existential. The professional literature calls this "identity threat," and it captures precisely what people are experiencing today.
One artist described the feeling to me in a way that chilled even me: “It feels like someone broke into my studio in the middle of the night. They didn’t steal any works, but they touched my brushes.”
Think about that. Art is not just the final product. It is your fingerprint, your essence in the world. When AI learns to mimic your style, it may not take your job at this stage, but it does create a threat to your identity. It turns your unique “me” into something generic that can be produced at the push of a button.
The same thing is happening in teachers' rooms. A veteran teacher told me she checks student papers the way a mother "searches for her child's smell on a shirt." She is not looking for correct answers but for evidence of effort. She fears the moment a student hands in a perfect, cold assignment and she has to pretend she is still teaching him. The threat to her identity as an educator is enormous. If she cannot recognize the student within the work, what is the point of her role?
The major fear is not that machines will take over. It is that human expertise — the thing it took us a lifetime to build and that defines who we are — will become transparent and worthless next to an algorithm.
The solution is not found in slogans about the future or vague promises. It begins with three simple but smart steps.
The first step is learning to work with the tool the way you learn a new language: not all at once and not through big assignments, but through small actions. When a person sees with their own eyes where AI is strong and where it struggles, something settles. The fear loses its force.
The second step is redefining responsibility: what remains in human hands, where the tool serves as support, and where it is allowed to lead. Once that map is clear, people stop feeling like the ground is slipping from under them.
The third step is giving people back the final say. Not as a general slogan but as a working policy: any output that goes out into the world passes through human eyes first. When that boundary is set, a sense of order returns within the chaos.
When you take these three steps, the resistance does not disappear. It recalibrates. It gathers itself. It becomes a warning light that leads to a real conversation. And only then can we ask the question that actually matters: what do we want technology to do for us, and what do we insist, with good reason, on keeping in human hands?
Keren Shahar is a lecturer and instructor in the use of generative artificial intelligence.