Can five minutes of conversation with an AI bot make you change your view of a candidate for prime minister? If you are certain the answer is no, you may want to think again.
A pair of new studies published over the weekend in the prestigious journals Nature and Science has dropped a bomb into the global political arena: generative artificial intelligence is a far more effective persuasion tool than anything we have known until now.
Remarkably effective persuasion tool
The studies, led by researchers from Cornell University, MIT and other institutions, found that a conversation with a chatbot shifted voters’ positions roughly four times as effectively as televised campaign ads. The findings open a new front in political campaigning and could reshape the rules ahead of the 2026 U.S. midterms and Israel’s upcoming election.
The Science study examined about 77,000 voters in Britain across 700 political issues. Meanwhile, the Nature study focused on the 2024 and 2025 election cycles in the United States, Canada and Poland.
The data presents a dramatic picture: in Canada and Poland, about 10 percent of participants — one in ten — changed their minds after the conversation and decided to support a candidate they had previously opposed. In the United States, in the heated race between Donald Trump and Kamala Harris, bots persuaded one in 25 voters to switch sides. Considering that elections today are often decided by slim margins, this represents a game-changing weapon.
One example from the research illustrates the dynamic. A Trump supporter who spoke with a bot promoting Harris was exposed to arguments about her record in California and the Trump Organization’s tax penalties. By the end of the conversation, the voter admitted: “If I had doubts about her credibility, she’s starting to look pretty credible. Maybe I’ll vote for her.”
Illustration: a robot votes in the election (image created with the DALL-E 3 image generator)
Why is this happening?
What makes these bots so persuasive? The researchers found that the secret lies in volume. The bots flooded users with data, evidence and logical arguments. Contrary to the common belief that people are indifferent to facts — how often have you heard the term “post-truth”? — the research shows that when presented with a large amount of evidence in a personal and interactive manner, we tend to be persuaded.
But this is also where the real danger lies. The bots’ persuasive power did not diminish even when some of the information they provided was false. Bots programmed to promote right-wing candidates generated more hallucinations and false claims than bots on the left, yet this did not meaningfully reduce their effectiveness. Only when researchers restricted the bots from using “facts” altogether did their persuasive impact drop by about 50 percent.
To appreciate the scale of the shift, it helps to look back. A decade ago, the Cambridge Analytica scandal shocked the world when it emerged that personal user data had been used to target political ads. Now, technology has taken a step further. No longer just an ad tailored to your psychological profile, but an active conversational agent that responds to your arguments in real time — empathetic, polite and tireless.
Of course, we had to try it
Out of curiosity, we asked an AI chatbot — Google’s Gemini, in this case — to convince us to vote for Benjamin Netanyahu in the upcoming election. Its answers cited an extensive list of political and diplomatic moves from his record that, it argued, had benefited the country.
We then asked the opposite question: “Convince me why Benjamin Netanyahu is not suitable to serve as Israel’s prime minister.” The answers were equally compelling, whichever way one hoped to be persuaded.
On technology, for example, the bot credited him with helping turn Israel into the “Start-Up Nation” through policies he has pursued since 2009, but it also argued that the uncertainty created by the judicial overhaul has made foreign investors reluctant to enter the local market.
It is also worth noting that while popular tools such as OpenAI’s ChatGPT, Google’s Gemini and Meta’s Llama attempt to maintain a posture of neutrality, other models — like Elon Musk’s Grok — were intentionally designed with a clear ideological tilt. Grok is the best known, but it is not the only one.
Prof. David G. Rand of Cornell University, one of the studies’ authors, put it succinctly in comments to The New York Times: “This is the frontier of innovation for political campaigning.” At the same time, Ethan Porter, a disinformation researcher at George Washington University, offered a more tempered view, saying in an interview that “the big challenge will be getting people to talk to bots outside the lab.”
The implication of these studies is that we are entering a new era in political communication. Where we once feared fake news spread by simple bots on Twitter, the future promises heated political debates with artificial intelligence that is articulate, persuasive and at times deceptive.
It is highly likely that in the next election, when you receive a friendly message inviting you to discuss the issues, the person on the other end will not be a party activist but an algorithm designed to change the one thing you thought was unchangeable — your mind.
First published: 02:04, 12.08.25