It is no longer a distant future—autonomous military systems are operating at sea, in the air and on land, as recent wars have turned them from a technological add-on into an operational necessity. But alongside the operational leap, tough questions were raised Tuesday about the limits of responsibility and international law at the ynet and Yedioth Ahronoth Defense Tech Conference, held in cooperation with the Israeli Engineers, Architects and Technological Professions Association.
Zvika Yarom, general manager of the Land Systems Division at Elta Systems, noted that autonomous platforms are not a new development. “UAVs began operating decades ago, and on land we have been developing autonomous ground vehicles for 15 years,” he said.
According to Yarom, both the Israel-Hamas war and the war in Ukraine “have turned the issue of autonomy on the ground battlefield from a technological upgrade into a real operational need.
“From civilian drones that you can find in almost every home, we’ve reached soldiers on the battlefield operating these tools. At the same time, the Panda ground robot has accumulated countless combat hours. We were very concerned about the use of autonomous tools alongside human soldiers, but we saw that it works,” Yarom explained.
The next stage, he said, is “one-to-many—meaning one operator controlling several platforms, with broader, more mission-oriented control, rather than focusing on operating a single vehicle.” In other words, instead of managing each platform separately, an integrated system recommends courses of action to the operator, who then decides whether to approve them.
Dr. Liran Antebi, a senior researcher in autonomy and artificial intelligence at the Institute for National Security Studies (INSS), pointed to Ukraine as a live laboratory. “In Ukraine, the battlefield is highly threatened in the electromagnetic domain and full of interference. That, in turn, pushes the use of autonomous tools that reduce dependence on communications,” she said.
At the same time, she stressed that the systems are still at a stage where they support humans rather than replace them. “These tools still do not fully replace human beings. They assist and go ahead of the troops, but only in a supporting role.”
“To expand the use of autonomous tools on land as well, and not limit ourselves to the air,” Antebi added, “we need systems that are more autonomous and more intuitive, primarily to reduce the risk to human life.”
Panel about the limits of responsibility and international law at the ynet and Yedioth Ahronoth Defense Tech Conference
(Photo: Avigail Uzi)
This is where the ethical dimension comes in. Professor Asa Kasher, a philosopher and Israel Prize laureate who advises the IDF on ethics and warfare, said: “I am pleased with any system that operates successfully, but I am skeptical when it comes to international law.”
He pointed to the principle of distinction—the obligation to differentiate between combatants and non-combatants—in arenas such as Gaza or Lebanon, where there are no clear front lines as in a classic battlefield.
“How do you distinguish between the sides?” Kasher asked. “If you look at the way human beings distinguish between other human beings, you reach the conclusion that you cannot achieve autonomy at all levels.”
Even when a legitimate target exists, the principle of proportionality remains. “The negative is harming those who are not involved; the positive is that you are carrying out your mission,” Kasher said. “In the end, the decision rests with the commander.”