Moscow and Minsk have announced the joint development of a new artificial intelligence system aimed at promoting “traditional values” and protecting citizens from foreign “manipulation.”
While Russian officials claim the project is a response to biased Western AI systems, recent research reveals that Russian-made models are among the most heavily censored in the world, particularly when it comes to politically sensitive issues.
The new project, dubbed the “patriotic chatbot,” is the latest in a series of efforts by Russia and Belarus to assert technological and ideological independence from the West—a push that has intensified since Russia’s 2022 invasion of Ukraine. According to the announcement, the AI will provide users with “objective information” rooted in what officials describe as “fundamental and traditional values.”
Sergey Glazyev, secretary of the supranational Union State—an alliance between Russia and Belarus—said the goal is to create a trustworthy system that shields younger generations from the “manipulations of foreign models.” He accused American-developed AIs of promoting “racist and extremist” views.
But a recent study by Ghent University in Belgium casts doubt on those claims. It found that Russia’s leading AI models, YandexGPT and GigaChat, exhibit the highest levels of political censorship among 14 major language models tested globally—including those developed in China.
While Chinese AIs were found to engage in “top-down censorship” aligned with the government’s officially defined “core socialist values,” the Russian models were described as exhibiting “hard censorship,” frequently refusing to answer politically sensitive questions.
Researchers observed that the Russian models consistently declined to respond to prompts related to the war in Ukraine, often stating they were unable to address certain topics or redirecting users to external sources. This contradicts official claims that Russian AI offers an objective alternative to Western systems.
Notably, the models refused to engage even when prompted in Russian—their primary language—suggesting that the censorship is targeted at the domestic population and not based on the user's language or origin.
In effect, the study indicates that these Russian models are less a source of neutral information and more an extension of the Kremlin’s longstanding propaganda apparatus—now with a chatbot interface.
While Moscow markets its AI tools as reliable alternatives to allegedly biased Western platforms, the technology appears to follow the same ideological playbook, simply dressed in a more modern form.