The world of bots has undergone a dramatic upgrade in recent years, evolving into sophisticated tools that control numerous devices and social media accounts. A rare video exposé, shared in an AI-focused Facebook group called “Rise of the Machines,” shows dozens of smartphone screens sending automated marketing messages via chat apps.
While these platforms’ primary “legal” purpose remains spam and advertising, they now pose a significant threat to democracy, as investigations by Ynet and others have uncovered. Identifying bots was once straightforward due to their repetitive, unnatural actions, but new platforms have changed the game.
How a bot system works
A recent University of Southern California study, published in USC Today, found that during the 2020 U.S. presidential election, bot accounts grew more advanced, spreading political conspiracy theories.
Researcher Emilio Ferrara told international media, “Bots amplify content consumption within the same political bubble, intensifying the echo chamber effect where people only encounter views aligning with their own.”
The video, traced to a Reddit post promoting the AutoViral platform alongside screen-mirroring tools like Panda Manual, demonstrates operators managing dozens of seemingly real accounts, each tied to a dedicated device, evading social media detection.
Though the footage depicts a basic marketing effort, the line between selling products and pushing political messages is thin. Experts from the FakeReporter organization told Ynet that such platforms enable human-like interactions, with operators sending chat messages to promote goods.
“He’s marketing something. It’s essentially a bot farm, and here it’s used for spam,” they said. The service, rentable for as little as $25 per device or $200 for 20 devices, scales easily across platforms like TikTok, Facebook and OnlyFans, limited only by the number of available devices.
Using real smartphones, rather than virtual setups, avoids suspicion from social media filters, as each account appears tied to a single device, masking coordinated activity by a single entity.
This technology was first exposed in a 2019 Yedioth Ahronoth and New York Times investigation of “human bots” spreading political messages for Israel’s Likud party, and platforms like AutoViral now amplify it further. By leveraging real devices, the method efficiently reinforces the echo chamber, threatening democratic processes worldwide.
Studies of U.S. presidential and European elections show bots amplifying misleading or false political narratives, shaping public discourse. In the UK, research linked automated networks to the 2016 Brexit referendum, relentlessly promoting pro-exit messages that may have influenced the outcome.
Despite social media efforts to counter bots, the struggle resembles a technological arms race, with bots refining evasion tactics and companies racing to improve detection.
Ironically, the power to regulate rests with politicians who benefit from these bots, a case of “the cat guarding the cream.” Combating fake news and disinformation demands a multifaceted approach from both platforms and users, who must learn to spot biased or false content.