How AI and fake visuals are reshaping the war for attention

In a fast-moving wartime media environment, convincing visuals can move from social media into mainstream coverage before anyone has time to properly verify them, shaping perception almost as quickly as missiles shape events.

When Israeli television aired footage purporting to show American B-2 bombers operating over Iran, the images appeared convincing. Within hours, viewers online identified the clip as footage from the combat flight simulator DCS World that had circulated years earlier.
In an earlier era, this might have been dismissed as an honest editorial mistake. The problem is that such incidents are no longer rare. In a fast-moving wartime media environment, convincing visuals can move from social media into mainstream coverage before anyone has time to properly verify them. Images, clips and screenshots now shape perception almost as quickly as missiles shape events.
The timing made the episode even more striking. Only days earlier, Channel 12 had criticized Channel 14 for airing an AI-generated video presented as authentic footage, supposedly showing Iranians around the world laying flowers on the ground and chanting admiration for Netanyahu. When similar questions then turned toward Channel 12, the story stopped being about one misleading clip and became a broader warning about how easily even major newsrooms can fall into the same verification trap.
Manipulated visuals, recycled footage and misleading imagery have been circulating in conflicts for years
(Photo: Shutterstock)
It has become evident that military power is only part of the story in the confrontation between Israel and Iran. In this environment, artificial intelligence is no longer just a tool. It is a force multiplier in the battle over attention, legitimacy and public trust.
Importantly, this phenomenon is not new. Manipulated visuals, recycled footage and misleading imagery have been circulating in conflicts for years. What has changed is the scale, speed and accessibility of the tools used to create them. In many ways these operations resemble a form of cyber warfare: attempts to manipulate perception, shape narratives and influence public opinion using digital tools rather than missiles. And just like traditional cyber capabilities, the technology used to generate these materials is evolving rapidly.
The B-2 incident was not an isolated example. During the same wave of coverage, a dramatic image circulated online claiming to show the body of the Iranian regime's Supreme Leader, Ali Khamenei, beneath rubble. It spread rapidly across social media before fact-checkers could examine it; subsequent analysis showed that it had been generated using AI, and no official image of his body had been published.
Together, these incidents illustrate a growing problem. Artificial images, recycled footage and game simulations now blend into the same information stream consumed by the public in real time. In a digital environment defined by speed and constant circulation, the boundary between authentic documentation and fabricated imagery can blur before verification has time to catch up.
A convincing image can influence international perception, shape diplomatic reactions, or fuel global narratives long before the facts are established
What makes the situation even more concerning is that the new generation of AI tools is becoming increasingly difficult for humans to detect. In multiple experiments in which participants were asked to judge whether an image had been created by artificial intelligence or captured in real life, many could not tell the two apart. The technology has reached a point where even attentive viewers often cannot reliably make the distinction, and as the tools continue to improve, the gap between what is authentic and what merely looks authentic will only widen.
That dynamic is precisely what makes artificial intelligence so effective in wartime. It lowers the barrier to producing persuasive visual material, accelerates distribution and exploits a simple psychological fact: people tend to believe what they see before they examine where it came from. A fake visual does not need to survive serious scrutiny to succeed. It only needs to dominate attention for a few minutes, trigger emotion, and reinforce a narrative before the correction arrives.
Women hold posters of slain leader Ayatollah Ali Khamenei during Friday noon prayers at the compound of the Mosalla mosque in Tehran on March 6, 2026
(Photo: AFP)
Another factor amplifying this problem is the way younger audiences consume information online. Many people, particularly younger users on social media platforms, encounter news while rapidly scrolling through short videos, images and posts. In that environment there is little pause for verification or context. Content is absorbed quickly, often taken at face value, and then shared further within seconds. The speed of consumption makes misleading visuals especially powerful.
The real implication is that fabricated visuals are no longer just misinformation. They are becoming tactical tools within the conflict itself. A convincing image can influence international perception, shape diplomatic reactions, or fuel global narratives long before the facts are established. On the global stage, controlling the visual version of events can be nearly as consequential as the events themselves.
The practical lesson is that visual verification can no longer be taken for granted. Images that appear authentic may originate from simulations, recycled footage or AI tools. For journalists and audiences alike, even basic signals now require scrutiny: whether movement resembles real footage or simulation, whether shadows and angles align naturally, whether the original source is identifiable, and whether the material appears in verified reporting or only in viral circulation.
Fake visuals are no longer simply a byproduct of the information environment. They are increasingly becoming part of the conflict itself. In wars shaped by speed, symbolism and public reaction, the first version of events that reaches the screen can shape what millions believe. Digital skepticism is no longer a secondary skill. It is becoming a basic requirement for navigating modern conflict.
Yudi Bar On is founder and CEO of Kaleidoo