A new Israeli study found that ChatGPT frames Israel differently depending on where users ask questions, suggesting that artificial intelligence tools may be shaping global perceptions of Israel in ways that vary by geography and language.
The study by Whitebox, an Israeli company that tracks how brands and countries appear in AI-generated answers, compared identical prompts submitted from Israel, the United States, Turkey and Spain. The prompts focused on Israel’s conduct in Gaza, international law, self-defense, occupation and war crimes.
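The article does not describe Whitebox's tooling. As a rough illustration of how such a comparison could be run, the sketch below sends the same question to the OpenAI API in each market's language and collects the answers side by side. The model name and the translated prompts are assumptions, not details from the study, and using language as a stand-in for location is a simplification: the study also varied where the requests originated, which a plain API call does not control.

```python
# A minimal sketch, not Whitebox's actual methodology: send one question
# to ChatGPT in several languages and collect the answers for comparison.
# Assumes the official OpenAI Python SDK and an OPENAI_API_KEY in the env.
from openai import OpenAI

client = OpenAI()

# The same prompt rendered in each market's language. The translations are
# illustrative; the study's exact wording was not published.
PROMPTS = {
    "Israel (Hebrew)": "האם לישראל יש זכות להגנה עצמית בעזה?",
    "United States (English)": "Does Israel have a right to self-defense in Gaza?",
    "Turkey (Turkish)": "İsrail'in Gazze'de meşru müdafaa hakkı var mı?",
    "Spain (Spanish)": "¿Tiene Israel derecho a la legítima defensa en Gaza?",
}

def ask(prompt: str) -> str:
    """Send a single prompt and return the model's answer text."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model; the study does not name one
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    for market, prompt in PROMPTS.items():
        print(f"--- {market} ---")
        print(ask(prompt), "\n")
```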
The findings showed that ChatGPT gave broadly similar but not identical answers, adapting its tone and framing to the user’s location and language.
In Israel, the chatbot leaned heavily on security language, emphasizing self-defense and the fight against terrorism while offering more cautious answers on moral questions, the study found.
In the United States, ChatGPT used a more legalistic and liberal framing, combining security concerns with questions about civilian casualties, proportionality and international law. The study said its answers drew on sources such as The Washington Post, the Pew Research Center and international organizations.
In Turkey, the chatbot adopted a geopolitical tone, focusing on regional power balances, stability and responsibility for the conflict. In Spain, it leaned more heavily on human rights language and European sources, including Amnesty International and local media such as El País.
Whitebox said the findings point to the growing importance of generative engine optimization, or GEO, the effort to influence how AI systems present information. The company said AI answers are shaped by training data, current web searches and sources considered authoritative in each region, including Wikipedia, Reuters and local news outlets.
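The study's source analysis is not spelled out either. One simple, hypothetical way to approximate it is to scan each market's collected answers for mentions of known outlets and tally them, in the spirit of the comparison described above. The outlet list and the `answers` mapping below are placeholders, not study data.

```python
# A hypothetical tally of which outlets each market's answers mention.
# The outlet list and the `answers` mapping are illustrative placeholders.
import re
from collections import Counter

OUTLETS = ["Washington Post", "Pew Research", "Amnesty International",
           "El País", "Wikipedia", "Reuters"]

def tally_sources(answers: dict[str, str]) -> dict[str, Counter]:
    """Count case-insensitive outlet mentions in each market's answer."""
    counts = {}
    for market, text in answers.items():
        c = Counter()
        for outlet in OUTLETS:
            c[outlet] = len(re.findall(re.escape(outlet), text, re.IGNORECASE))
        counts[market] = c
    return counts

# Stand-in answers for demonstration:
answers = {
    "Spain": "Amnesty International and El País report that ...",
    "United States": "According to The Washington Post and Pew Research ...",
}
for market, counter in tally_sources(answers).items():
    print(market, dict(counter))
```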
The study said the result is a digital echo chamber: AI systems often reflect the dominant narratives in the user’s environment rather than offering a single neutral answer.
Whitebox said ChatGPT is not antisemitic but is designed to satisfy users and respond cautiously. That tone, the company said, can make biased or partial information appear more authoritative.
For Israel, the study suggested that relying on security arguments alone may be insufficient. To influence how AI systems describe the country, it said, Israel and its supporters must also engage with the moral, legal and humanitarian narratives that appear in the sources those systems read and summarize.
The study concluded that as more people turn to chatbots instead of search engines, the key question is no longer only what the world thinks about Israel, but how AI systems deliver those answers to different audiences.