'Trust me, I'm an AI expert' – A red flag for the future of human intelligence

Analysis: Meta's chief AI scientist asks us to trust him regarding the safety of artificial intelligence systems, but there are plenty of examples of experts who were absolutely confident in the safety of their developments until those developments claimed human lives; The stage should not be left solely to technology experts – the stance of social scientists and humanities scholars is equally important

Dr. Erez Firt
An interesting theoretical clash recently took place regarding the question "Does the research and development of artificial intelligence pose an existential threat?"
The panel members at the Munk Debate on Artificial Intelligence, held on June 22 in Toronto, Canada, were all computer scientists: On the pro side - Yoshua Bengio, a Turing Award laureate, and Max Tegmark, an MIT professor, researcher and author. On the con side - Yann LeCun, chief AI scientist at Meta AI and also a Turing Award laureate, and Melanie Mitchell, a professor at the Santa Fe Institute and a renowned scientist in the field.
(Munk Debate on Artificial Intelligence)
Each side presented its position. Surprisingly, it was LeCun who presented the arguments that found the most favor with the audience. However, these arguments have two significant flaws, which matter because they are used to argue against the need for research, development and investment in the safety of artificial intelligence. I'll call the two flaws "Trust me, I'm a doctor" and the fallacy of imitation.
Let's put aside any personal considerations and, for the sake of argument, assume that LeCun's position (as chief AI scientist at Meta, a company that would undoubtedly be hurt by commercial limitations on AI research and development) is transparent and objective.
One of LeCun's central arguments is that the most advanced artificial intelligence systems we have today are far from the level of human intelligence, and that the future systems' internal structure - the "architecture" - will be different. His conclusion is that we shouldn't prepare for an unknown future. The time to prepare will come when we know that architecture. Then we will have the tools to deal with it and plan appropriate safety measures. "We will build them," he said. "They won't just appear. If they are not safe, we won't build them in the first place."
True, it is possible to build systems cautiously and gradually. As LeCun suggests, we can start with an intelligent system in the form of a cat, then progress to one in the form of a dog and, finally, one in the form of a human – ensuring their safety at each stage.
However, there are at least two problems that LeCun recognizes but chooses to ignore. First, humanity has always aspired to build things safely, yet we have still failed – Chernobyl, submarines that sank and baby formula that caused death and severe illness are just a few examples. Given the high risk in this case, it is wise to start preparing now, rather than fall back on the "Trust me, I'm a doctor" narrative.
Numerous thought experiments illustrate the particular difficulty of designing intelligent systems safely: systems that act contrary to human planning and intention and manage to outsmart or bypass every precaution prepared in advance; systems that pursue their goals optimally and find ways we had neither considered nor desired; systems that escape the "safe" testing environment by manipulating their human overseers; and more. Burying our heads in the sand will not make the problem disappear.
Yann LeCun (Photo: Meta)
The second flaw, the fallacy of imitation, is even more severe. LeCun is convinced that we will be able to build systems that not only exhibit human-like intelligence but also possess consciousness and a range of human emotions. According to him, the fact that they have human-like emotions is what will allow us to control them. In other words, they will be like us, they will feel like us and, therefore, we will be able to control them; they won't be alien and unfamiliar.
Such a stance regarding the relationship between intelligence and humanity appears limited and narrow-minded – human intelligence is just one possible form of intelligence. Just think of all the human aspects related to emotion that won't necessarily accompany artificial intelligence: the body (including biological matter, sensory organs, bodily sensations and what is known as embodied cognition – the theory that cognition involves acting with a physical body on the environment in which that body is immersed), human culture (education, environment, history, tradition) and human psychology (including birth, belonging, family and social interactions).
We are already creating systems that operate at the intelligence level of a cat, as LeCun himself states, without any trace of feline emotions. Furthermore, even if we succeed in creating artificially intelligent systems with consciousness and emotions, it is very reasonable to assume that these will be non-human consciousness and emotions, since they will be rooted in a non-human body, "culture" and psychology.
GPT-4 (Photo: Tada Images / Shutterstock.com)
Melanie Mitchell also addressed this issue during her debate with LeCun. Human intelligence, she said, is a very specific intelligence, adapted precisely to our biological organs and our specific problems, and it is what differentiates us from machines, mice and viruses. Humans have their own problems, needs and motivations, she said, and we are deeply embedded in a physical, cultural and social environment. Hence, she claimed, artificial intelligence systems may learn from human data and pick up certain aspects of it, but they lack an understanding of the world in which we operate.

Less technology, more spirit

LeCun represents one side of an ongoing debate. It is important to understand that the public needs to hear precise arguments from both sides so it can form an informed opinion. Today, technology leaders play a significant role in shaping our agenda. They are experts in business and science, and they are creative, wise and assertive. But as talented as they may be, they are not spiritual leaders, philosophers or social scientists, and such expertise is also required when determining the correct future path for human society.
Neglecting these fields obscures the importance of what they bring: a deep understanding of ethics, philosophy of mind and consciousness, and the social sciences. Panels, debates, brainstorming sessions, Senate testimonies and policy decisions cannot be composed solely, or even primarily, of technology experts. It is worthwhile and necessary to hear their professional opinions in the relevant contexts – the boundaries of the technology, short-term developments, new research avenues and more. But, just as importantly, and in my opinion even more so, we must listen to experts from other fields directly related to the concerns that trouble us: philosophers, neuroscientists, jurists and social scientists. Until we learn to do that, we will continue to be inundated with flawed arguments and uneducated guesses.
Dr. Erez Firt is the academic manager of the Center for Humanities and Artificial Intelligence at the University of Haifa and the Technion.
First published: 17:19, 07.18.23