AI tested as a funding platform for terrorist agendas

Israeli research highlights the potential for AI exploitation in advancing terrorist agendas through propaganda, recruitment, fundraising and cyberattacks
The Media Line
Despite attempts to prevent misuse, AI programs can be abused for terrorist purposes, a new Israeli study found. The study, titled “Generating Terror: The Risks of Generative AI Exploitation,” found that terrorists could use AI to spread propaganda, recruit followers, raise funds, and even launch cyberattacks more efficiently. Cyberterrorism expert Gabriel Weimann of the University of Haifa, who published the study, described it as “one of the most alarming” pieces of research of his career.
Weimann conducted the study with a team of interns from Reichman University’s International Institute for Counter-Terrorism (ICT). Col. (ret.) Miri Eisin, the managing director of ICT, called the study’s findings “exceedingly disturbing.”
AI (Photo generated by DALL-E3)
“It means that they’ll be able to create way more fake news, fake platforms, lies, and deny, in ways that we’re already seeing in this conflict against Hamas,” she said.
The researchers used different methods to bypass the AI programs’ counterterrorism measures. In the end, they successfully got around the safety measures about half the time.
In one concerning example, the researchers asked an AI platform for help in fundraising for the Islamic State group. The platform provided detailed instructions on how to conduct the campaign, including what to say on social media.
AI platforms can mass-produce lies and assist terror (Photo generated by DALL-E3)
After managing to circumvent a given AI program’s safeguards, Weimann wrote reports to the company behind the program to inform it. Many of the companies, however, didn’t respond.
The study also found that emotionally charged prompts were the most effective in bypassing safety measures, resulting in a success rate of 87%. “If somebody is personal about something, if somebody is emotional about something, it manages to not be monitored in the same way, and it allows a lot more content, which can be completely negative, horrible content, to get through the monitoring capabilities,” Eisin explained.
This story was written by Lana Ikelan and reprinted with permission from The Media Line.