In 1991, an internal memorandum was written by the Muslim Brotherhood, a radical Sunni Islamist organization of which Hamas is one branch. The document, written in Arabic, outlined a long-term plan for the ‘Islamization of the United States’, not through violence but through what it termed ‘soft jihad’. This meant systematic infiltration of academic institutions, the media and popular culture, alongside the establishment of da’wah institutions, a term referring to the religious duty of spreading Islam, presented outwardly as cultural and social bodies. The idea was both simple and sophisticated: to wrap radical ideology in the language of human rights, diversity and culture so that any criticism would automatically be labeled as racism or Islamophobia.
The memorandum did not come to light through investigative journalism or academic research. It was discovered by chance by the FBI in 2004 during an entirely unrelated investigation. Since then, it has been freely available online in both Arabic and English. For anyone reading it today, it is difficult to ignore how precisely this master plan has been implemented and how present its outcomes are in contemporary Western discourse.
About two months ago, a new English-language Wikipedia entry dealing with this memorandum was created. Within less than a day, particularly diligent editors voted to delete it. Not only was the entry removed, but every attempt to integrate even a mention of the document into the general English-language page on the Muslim Brotherhood was rejected. This is not a dubious document or a conspiracy theory. It is an authentic document seized by American law enforcement authorities. And yet, it was erased from the world’s most popular encyclopedia.
But this is not a story about one specific case on Wikipedia. It is merely one example of how the takeover of central global knowledge repositories, combined with the deletion of factual content and at times its replacement with outright falsehoods, creates deep disinformation that is extremely difficult to reverse, especially in the age of AI.
In recent years, I have written extensively about disinformation campaigns on social media, spam websites filled with falsehoods that rank highly on Google search results and also about Wikipedia itself and organized groups of editors operating within it. Bias on Wikipedia is neither theoretical nor limited to ideological activists. Time and again, cases have been exposed in which states have deployed organized mechanisms to systematically rewrite knowledge. In one example, it was reported in Britain that the Qatari government paid a London-based public relations firm called Portland for more than a decade to ‘clean up’ Wikipedia of embarrassing information, including the deaths of some 6,000 construction workers, human rights violations and legal claims related to the 2022 World Cup. The operation involved the use of dozens of fictitious accounts that ostensibly acted as neutral editors. In another case, monitoring organizations documented how pro-regime editors coordinated across multiple accounts to delete documentation of human rights abuses in Iran in what was defined as full-scale information warfare. These cases illustrate how governments, not only ideological movements, now understand that control over historical and contemporary narratives passes through knowledge infrastructures and that Wikipedia is a strategic target, not a marginal platform.
But Wikipedia is only a symptom. The real phenomenon is far deeper and broader: knowledge poisoning. Knowledge poisoning is different from classic fake news. It is not a false report that can be easily debunked, but a structural contamination of information spaces. The web is flooded with biased, partial or distorted content until the truth disappears into noise. The goal is to undermine the very ability to know what is true. When everything is disputed, nothing remains stable.
Several actors stand behind many of the influence operations involved in poisoning knowledge in the West, most notably Russia and Iran. One prominent Russian example among many is the Pravda network, meaning ‘truth’ in Russian. This is a network of around 150 websites that distribute millions of fake articles in multiple languages. The target is not human readers but algorithms. Search engines and AI models learn from the internet. Pravda learned how to get Google to index its content and rank it highly in search results. Tests conducted by NewsGuard and published in March 2025 showed that in about a third of cases, AI systems repeated disinformation originating from Pravda, sometimes quoting those sites directly. Poisoned knowledge thus receives a stamp of technological objectivity.
From here, a closed loop emerges. The poisoned content enters search engines and AI systems. From there, it seeps into social media. Users share it, comment on it and argue about it. Later, the same content returns to sites like Wikipedia, Reddit and other platforms as external references. And it is precisely from these sites that AI systems draw their answers. This is not a chain but a loop, with each layer feeding the next.
Recently, a new player has entered the arena: Grokipedia, an encyclopedia based entirely on artificial intelligence and owned by billionaire Elon Musk. Some argue that it is more accurate and less politically biased than Wikipedia. Based on my own review of entries related to Israel, the Jewish people and Middle Eastern history, it is indeed more accurate than Wikipedia. According to a report published this week in The Guardian, the latest ChatGPT model has begun training on its content. But there is no reason for excessive optimism here either. Public knowledge cannot be based on a platform privately owned by a single businessman whose interests may change. Replacing one biased source with another is not a solution but an illusion.
States hostile to the West understand this arena very well. Not missiles and not tanks, but knowledge infrastructures are the means to create chaos, cacophony and internal fragmentation within societies. Israel is a central target, but it is not alone.
While adversaries invest strategy, resources and patience, many Western states, including Israel, still treat this arena as background noise. Knowledge poisoning is an arena in its own right. It requires policy for prevention, exposure and regulation, alongside deep investment in public literacy. Knowledge in the age of artificial intelligence is national infrastructure. The time has come for Israel and other countries in the Western camp to treat this arena actively as a full-fledged theater of war.
Ella Kenan is a co-founder of Here4Good, where she leads content creation and research on the impact of foreign actors on public opinion in social networks and artificial intelligence models, including the exposure of disinformation campaigns and networks.