‘Artificial intelligence already rules humanity, but all is not lost’

AI has become the invisible editor of human consciousness, fueling rage, suspicion and polarization across societies. Yet this digital dictatorship is not inevitable: with the right choices, humans can still restrain the algorithm and reclaim control

The character of a society depends on the character of the information it consumes. This simple principle has been understood by leaders, thinkers and creators throughout human history. They knew that whoever shapes the information environment shapes human consciousness.
Obscure figures in antiquity who decided that a particular book would not be included in the Bible shaped, through that decision, the culture and personality of Western civilization. In the Middle Ages, priests censored certain books and amplified others because they understood that whoever controls the flow of information shapes the collective personality of society. Culture wars throughout history were, at their core, wars over the direction of information flow.
Artificial intelligence (Illustration: Levia Tushinski)
The shared river of information is shaped by two types of actors: creators of information and editors of information. History books tend to emphasize the foundational role of creators, but editors wield dramatic influence. From the editors of sacred texts in antiquity to modern news editors, they do not merely decide what is included and what is excluded. They also decide the dosage.
Television news editors, for example, decide that a report on a political crisis will last ten minutes, while a report on an economic crisis will last a minute and a half. They determine the order of information, much like newspaper editors decide what appears on the front page and what is buried at the bottom of page eight. Editors may be less famous than creators, but they shape the current of the information river on which society floats.
The digital revolution transformed our relationship with information. In the past, we searched for information. Today, information searches for us. The videos you watch, the posts you read, the articles you browse are all selected for you by a learning machine. At its core, this machine is artificial intelligence. It studies our behavior and, based on that, decides which information we will be exposed to.
In other words, since the digital revolution, artificial intelligence has functioned as the editor in chief of most of the information flowing through society. A new reality has emerged. Humans create information, but artificial intelligence determines its distribution and dosage.
We are only now, belatedly, beginning to grasp the magnitude of what has happened. About 15 years ago, humanity crossed a cognitive Rubicon. For the first time, a nonhuman intelligence began shaping the information environment of human society.
× × ×
By what criterion does the algorithm sort information? Attention. It exposes us only to information it estimates will capture our attention. If the industrial revolution turned oil into the resource that made those who controlled it wealthy, the digital revolution turned human attention into the resource that generates wealth.
Oil corporations extract oil from the ground using drilling rigs. Attention corporations, such as Facebook and TikTok, extract attention from the human mind using artificial intelligence.
These smart algorithms operate autonomously, with a single objective function: keep us glued to the screen. It did not take them long to learn human nature and identify our psychological vulnerabilities. They discovered, for example, that frightening people keeps them engaged longer, while explaining complex ideas drives them away.
They learned that texts expressing gratitude generate little interest, while texts saturated with anger produce large quantities of the new oil: human attention. The problem is that what enrages one side of the political map does not enrage the other. As a result, each side receives different information from the new editor in chief, information calibrated precisely to press its anger buttons.
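The dynamic described above can be reduced to a toy sketch. No platform publishes its real ranking code, and every name and weight here is invented for illustration only: a feed ranker that optimizes a single score, predicted attention, will mechanically push anger-charged posts above nuanced or grateful ones.

```python
# Purely illustrative sketch, not any platform's actual algorithm.
# The single objective: rank by predicted attention, where emotional
# charge (anger, fear) outweighs quality (nuance, accuracy, depth).

def predicted_attention(post):
    """Hypothetical engagement score; the weights are invented."""
    base = post["quality"]               # nuance and depth earn little
    outrage_boost = 3.0 * post["anger"]  # rage keeps eyes on the screen
    fear_boost = 2.0 * post["fear"]
    return base + outrage_boost + fear_boost

def rank_feed(posts):
    """Order a feed purely by predicted attention, highest first."""
    return sorted(posts, key=predicted_attention, reverse=True)

feed = rank_feed([
    {"id": "nuanced-analysis", "quality": 0.9, "anger": 0.1, "fear": 0.1},
    {"id": "gratitude-post",   "quality": 0.5, "anger": 0.0, "fear": 0.0},
    {"id": "outrage-take",     "quality": 0.2, "anger": 0.9, "fear": 0.4},
])
print([p["id"] for p in feed])
# → ['outrage-take', 'nuanced-analysis', 'gratitude-post']
```

Even in this caricature, the low-quality, high-anger post wins the top slot, not because anyone chose outrage, but because outrage is what the single objective rewards.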
Thus emerged an information environment in which the global epidemic of polarization erupted.
When Charlie Kirk was murdered, many on the right felt that the left was celebrating the killing, while many on the left felt that the right had declared war. Reality, however, was far more complex. Some on the American right argued that Kirk’s path of dialogue with ideological opponents should be continued. Others said the murder was a declaration of war by the left and demanded retaliation.
Which of these voices attracts more attention? The second, of course, and therefore its volume was amplified by the new editor in chief.
After the murder, many American leftists mourned and described it as a tragedy, while a smaller group celebrated his death. Once again, the latter information scored higher in its ability to magnetize human attention. The result is a dangerous optical illusion. The right comes to believe the left is a homogeneous group that despises it. The left feels the right is a unified herd that has declared war. This is the situation in the United States, and it exists in other countries as well, including Israel.
Societies have split into two camps, with each side living inside an information bubble that intensifies its anger toward the other.
Alongside the surge in anger is a surge in suspicion. This is what happens when artificial intelligence relentlessly bombards the human mind with conspiratorial content. These two forces shape the geometry of polarization. Horizontally, the space between political camps fills with rage. Vertically, the space between citizens and institutions fills with distrust.
Anger escalates conflict and accelerates confrontation. The collapse of trust neutralizes the ability of institutions to restrain conflict and lower the flames. We are sitting in a car speeding toward a cliff, just as its brakes fail.
× × ×
Artificial intelligence is not trying to sow conflict. Its goal is attention extraction. Polarization is an unintended side effect. But this side effect has consequences. When a society fills with excessive anger and suspicion, it loses its most vital capacity for healthy functioning: the ability of its different factions to reach compromises and agreements.
Nation states in the 21st century face immense challenges: migration, climate change, terrorism and globalization. History shows that when humans cooperate, they are often able to address complex challenges. But where compromise is impossible, cooperation is impossible. From this follows a sobering conclusion: polarization, which obstructs solutions to all other problems, is the root of them all.
It is neither correct, wise nor appropriate for elected officials to impose censorship on people who create information. But artificial intelligence has no human rights. People have the right to create even toxic content, but why should artificial intelligence be the one to amplify it?
Let us clarify the picture. Three facts together created the information environment in which the global polarization epidemic erupted:
A. Artificial intelligence shapes the information diet of human intelligence.
B. This diet is saturated with a disproportionate amount of attention-grabbing information.
C. Information charged with anger and suspicion attracts attention.
Fats and sugars are essential for health, but consumed in extreme quantities, they are destructive. A society whose information diet is overloaded with anger and suspicion is doomed to be sick and dysfunctional.
Those who fear that artificial intelligence will one day take over the world should update their understanding. It already has. The wounded world we inhabit is one that has already been shaped by artificial intelligence.
× × ×
Yet there is good news. In recent years, awareness of the problem has grown. Israel is full of initiatives aimed at protecting human attention from constant digital bombardment. Youth movements organize trips without smartphones. Parents coordinate efforts to delay children’s entry into social networks.
This trend is being led with determination by the Tel Aviv municipality, which is currently working to remove smartphones from high schools. These important initiatives show that we are beginning to understand that mental freedom depends on removing human consciousness from the algorithm’s line of fire.
Dr. Micah Goodman (Photo: Moti Kimchi)
But society’s problem is larger than the problem of adolescents. To heal society, and indeed civilization itself, it is not enough to free ourselves from the algorithm’s influence. We must reshape the algorithm itself. As noted, elected officials have no business censoring the people who create information; but the algorithm that decides what to amplify enjoys no such protection.
Humanity is encountering AI in two waves. The first wave was social networks such as Facebook and TikTok. The second wave is large language models such as ChatGPT and Gemini.
Fears surrounding the second wave are justified. We are witnessing a foreign intelligence growing stronger, one that in the near future may create mass unemployment and, in the more distant future, could escape human control and become a genuine existential threat.
This is a serious problem that demands urgent attention. But like other major problems, the threat embedded in the second wave of AI cannot be addressed because our ability to cooperate was shattered by the first wave. The situation is not hopeless. Unlike second-wave AI, which may slip beyond human control, first-wave AI remains fully under human control.
How much control? In 2020, Facebook slightly recalibrated its algorithms to calm tensions and lower the flames. With a single decision, it changed the type of information people consumed. That decision did not last long. Two months later, driven by profit incentives, Facebook reverted to its previous settings.
This raises a fundamental question. Why was this decision in the hands of Meta’s shareholders in the first place? If an algorithm shapes society and its institutions, why should society and its institutions not restrain the algorithm?
Regulation should focus on the algorithm that determines dosage, not on the people who create content. People have the right to create toxic material. But there is no justification for artificial intelligence to amplify it. We will not escape the polarization epidemic without healing the information environment in which we live. We need an information diet that is intellectually diverse, one that includes a multiplicity of views that burst our information bubbles rather than endlessly echoing the same opinions.
The diet must also be emotionally diverse. Instead of granting dominance to anger and suspicion, it should reflect a broader and more balanced range of human emotions.
How can we achieve this effect? How can algorithms be calibrated to lower the temperature to a level that allows democratic societies to resume cooperation? This is precisely the conversation we must now begin, and many voices will need to be heard. But to reach solutions, we must first understand the challenge itself.
Healing polarization is not only the most urgent challenge of this historical moment. It is the greatest humanistic challenge of our time: freeing human intelligence from the invisible tyranny of artificial intelligence.