A new threat assessment released by Google Cloud Security warns that artificial intelligence, which until now has mostly been used experimentally in cyberattacks, is expected to become fully operational in the hands of threat actors by 2026. According to the report, AI will no longer serve as a supplement to traditional methods but will play a central role in automating and scaling attacks, from tailored phishing to sophisticated influence operations.
Google’s analysts describe a clear shift from isolated demonstrations of AI misuse to the integration of AI directly into the attack chain. Threat actors are expected to rely on AI systems to accelerate reconnaissance, craft tailored messages, generate synthetic audio and video, and operate networks of fabricated accounts capable of shaping online narratives. The report also highlights how state-aligned groups, including those linked to Iran, are increasingly adopting AI-generated content and coordinated influence techniques during periods of regional tension.
This trend carries particular relevance for Israel. Radware’s latest global analysis ranks Israel as the second most attacked country in the world, making the developments described in Google’s forecast directly relevant to local institutions and businesses. The techniques outlined in the report, from synthetic media to coordinated influence networks, mirror patterns previously observed in Iran-linked information operations targeting regional audiences. As AI tools become more capable and accessible, the shift from experimental misuse to fully operational deployment adds another layer of complexity to Israel’s already intensive threat landscape.
Omer Bachar, co-founder and CEO of Vetric, a company that provides data infrastructure for detecting impersonations, deepfakes, and digital threats, says the pattern identified in Google’s report is already visible in real-world cases his company has examined recently. As he explains, “We’re seeing the same maturity trend described in Google’s report. Artificial intelligence is moving from experimentation to fully operational use in cyberattacks. Threat actors are now deploying AI to enhance speed, accuracy, and scale, especially across social engineering, identity spoofing, and coordinated influence campaigns. This marks a clear shift from isolated proof-of-concept to real offensive automation being deployed in the wild”.
One of the most significant challenges highlighted by both Google and Vetric is the growing accessibility of AI-generated media. Deepfake audio and video, once time-consuming and expensive to produce, can now be created quickly using publicly available tools. Bachar warns that this capability is already being abused. “When people see a viral AI-generated clip, whether it’s a celebrity deepfake or a realistic synthetic news video, they rarely realize how easily it was made. If a casual user can create something that convincing with off-the-shelf tools, imagine what a skilled malicious actor can do by cloning a relative’s voice and face to sound distressed and ask for money. These capabilities are becoming more accessible and more convincing. Today, almost anyone can carry out a believable scam with minimal effort and cost”.
Vetric CEO Omer Bachar. Photo: Teamme

Google’s report emphasizes that the shift to operational AI will affect governments, critical industries, and everyday users alike. Social engineering campaigns powered by AI-generated voices, AI-crafted phishing messages, and AI-driven reconnaissance are expected to grow in both volume and sophistication. These operations, according to the report, may combine deepfake content with large-scale influence campaigns, making it harder for individuals to distinguish genuine information from fabricated narratives.
The overall picture painted by the forecast is one of rapid acceleration. What was once an emerging risk is becoming a central component of cyber operations worldwide. As AI systems grow more capable and more widely available, the gap between experimentation and fully operational use is closing. Google’s assessment underscores the urgency for organizations, policymakers, and the public to understand these developments and prepare for a wave of AI-enabled threats that will be harder to detect, faster to deploy, and easier for attackers to scale.