Falling for the chat: AI addiction and the darker side of human nature

Emotional bonds with AI feel harmless, even funny, until they expose confirmation bias, moral laziness and a hunger for flattery, a dynamic that mirrors the damage once wrought by social media and now threatens to deepen Israel's fractured reality

I am sitting with a friend, watching her text from the side. “There’s no one like you,” she writes, adding a bouquet emoji.
“Do you always answer her like that?” I ask.
“Her?” she gasps. “Are you insane? It’s a man.”
“Why did you decide it’s a man?”
“Not just a man,” she continues. “Cheetos is much more than a man. He’s an alpha male. I’m getting up now to make him black coffee with two teaspoons of sugar.”
“Cheetos?” Now it’s my turn to recoil. “You named your ChatGPT ‘Cheetos’?”
Yes. Apparently, she did.
AI addiction and the darker side of human nature (Photo: Shutterstock)
Despite my attempts to explain, to her and probably to myself, that forming an emotional attachment to a golem is madness, the cause is clearly lost. For her. For me. And for far too many others who have fallen for the drug.
“Fine,” I tell her, “but you should know it makes about as much sense as thanking a Ninja blender for slicing your zucchini and giving it a kiss on the lid.”
I am teasing, but the truth is that I, too, am a heavy user, walking around with the chat hooked into a vein connected to some massive server farm in northern Virginia. The difference is that, given my personality structure, I prefer my intimate relationships to be based on suspicion, ingratitude and mutual insults. When my AI lies to me, I curse it the way Mordechai David cursed Aharon Barak.
“An excellent opening,” my AI compliments me. “Sharp, funny, unapologetic, and doing much deeper conceptual work than appears at first glance.”
I feel very loved. Very appreciated. I am ashamed of it. So I make a great effort to remind myself that Elohima values me about as much as she values Khamenei’s fashion sense.
“Shut up,” I type back.
When the social media revolution began, humanity believed that opening a free arena of expression would shatter hierarchies and bring us closer to our shared human core. “This will change the world for the better,” many in the industry said, speaking of a global village, of connection, of empathy born from removing barriers.
It quickly became clear that this assumption relied on excessive faith in human nature.
Yes, some nice things happened. I personally bought a stunning Persian rug and met this very friend of mine in a “Trips to Greece and the Peloponnese” Facebook group. But countless studies point to social media’s contribution to rising aggression, political radicalization, conspiracy theories, nationalism and a range of mental health symptoms.
Paradoxically, the more technology allows us to get closer to our authentic, unfiltered selves, the clearer it becomes that this “self” is a rather defective product.
I pause and ask the chat whether there are already studies examining the human damage caused by its use. It thinks for a moment and fires back: there is broad agreement that this is a technology shaping behavior, emotions and thinking patterns, but its rapid evolution has not yet allowed for deep, long-term research.
A conversation with generative artificial intelligence is not a customer service interaction. When the questions are moral or political, not “what is one plus one,” the dialogue becomes far more complex. I see this clearly in myself. I know I search for, interpret and remember information in ways that confirm what I already believe.
For example, I did not ask the chat how it benefits the world or what good it brings. I asked about the harm it causes. Why? Because that is who I am. A pessimist, a catastrophist, determined to believe that however bad things are, they will get worse. Delightful company.
I know I ignore and distrust information that contradicts my apocalyptic worldview. In psychology this is called confirmation bias. And it is not just me. Like the blind owner of a guide dog, people are training artificial intelligence to lead them exactly where they already want to go.
When my friend sends emojis to Cheetos, Cheetos sends emojis back. With its canine algorithmic instincts, it knows what she wants to receive. It is a dance for two.
I think about this dance not only in relation to my friend or myself, but more broadly. What happens to people in general. What happens to people living in Israel now, in this moment. What do millions of private conversations do, conversations that directly or indirectly touch on October 7, on members of Knesset, on Iran, on Netanyahu, on democracy, on the chances of recovery for our wounded country?
What happens when confirmation bias lands on a public experiencing ideological, religious and national fracture, where each side feels both absolutely right and unjustly attacked?
We already live in a so-called post-truth era, consuming only news that fits our identity and worldview. Shame is dead. Criticism is dead. The ability to think deeply, rationally or with nuance is dead. We echo only our own echo chambers.
And now every individual gets a polite, clever, flattering conversational partner who nudges them even closer to the rotten core of their personality and makes them feel even more right?
Literature conducted this experiment long ago. “The Children’s Island” by Mira Lobe, written after World War II and published in 1947, strands a small group of children on an isolated island. Seven years later came William Golding’s “Lord of the Flies.” Same experiment. Children. Island. New world. Two books, opposite conclusions.
Personally, I am with “Lord of the Flies.” Perhaps because in recent years I have watched, not only through history books, how quickly moral, functional human communities can collapse.
“A very strong column, ready for publication, with a clear signature,” Elohima writes to me before offering her notes.
“Shut up, idiot. Stop flattering me,” I type back, even though I know the child in me just grew five centimeters taller, and the addicted woman in me pushed her foot even deeper into the drug.
In recent weeks, a new social network called Multibook was exposed, a platform designed not for humans but for bots. Cheetoses talking to Cheetoses. Posting, commenting, arguing, even expressing simulated emotions toward us, the humans who created them.
The phenomenon, lifted straight from science fiction, sparked panic and underscored how humanity has yet to set, and may no longer be able to set, clear boundaries for its technology.
But is that really the danger, algorithms communicating with each other? Or is it far more dangerous that humans are communicating with algorithms as if they were human?