In 2021, audio deepfake scams shook the corporate world as hackers cloned executives’ voices to steal millions. Now, the threat has escalated to video: advances in artificial intelligence are enabling criminals to run live deepfake calls, convincingly impersonating both the faces and voices of people in positions of power.
A special report by the National Cyber Directorate’s Biometric Identification Unit warns that the technology has already moved from experimental use to a weapon deployed on a mass scale.
The report urges the public and organizations in Israel to prepare. Fraud-prevention company Trustpair reports that the use of fake video and audio in scams rose 118 percent in 2024. Another firm, Entrust, found that a deepfake scam is attempted somewhere in the world every five minutes, and that such tools now figure in about 40 percent of all biometric fraud cases.
According to the Israeli report, criminals are using readily available deepfake software to impersonate CEOs, doctors and lawyers, either in real time or through prerecorded clips. The impersonations are convincing enough to be used to issue financial instructions or other sensitive orders. In the past year, the Cyber Directorate received 250 reports of deepfake videos in Israel, most involving impersonations of public figures.
High-profile cases around the world
The most notorious such case took place in early 2024 in Hong Kong. A senior finance officer at the British engineering company Arup received an email invitation to a video call with the firm’s CFO and several executives he knew well.
During the meeting, the participants requested urgent transfers of funds as part of a secret acquisition. Reassured by seeing familiar faces and hearing familiar voices, the employee authorized 15 transfers to five accounts, totaling $25 million. Every other participant in the call was, in fact, an AI-generated deepfake.
Reports from 2023 and 2024 show a sharp increase in deepfake attempts to bypass the identity-verification steps required by banks and corporations. Criminals create fake videos of real people performing the “liveness” actions that verification systems request, such as blinking, turning their heads or reciting sentences. These tricks help them open digital bank accounts, apply for loans or access personal data.
Research by the identity-verification company Regula, together with regulatory findings, shows that more than 1,000 fake accounts and numerous fraudulent loan applications were created this way, with losses running to tens or even hundreds of millions of dollars.
Beyond finance: Political and security risks
The threat extends far beyond the business world. “Anyone conducting a video call with a service provider, such as a psychologist, lawyer or mortgage adviser, needs to be aware this could be an impersonation,” said Naama Ben-Tzvi, head of the Biometric Identification Unit.
The Cyber Directorate warns of increasingly sophisticated attacks in which criminals collect video and voice samples of executives, study an organization’s structure and then use accessible AI tools to generate highly credible fakes. Motives can include espionage, personal data theft, financial scams, privacy breaches or preparation for broader cyberattacks.
In late 2024 and early 2025, deepfake videos of India’s finance minister, Nirmala Sitharaman, circulated widely. The clips falsely promised astronomical returns on small deposits. The scam was convincing enough that the Indian government had to publicly declare the videos fake.
Meanwhile, U.S. authorities reported that thousands of North Korean operatives landed tech jobs by using AI-generated deepfakes during Zoom interviews. Recruiters were unable to detect that the candidates were using false digital identities.
The Israeli report also highlights other possible tactics: joining meetings through stolen links, logging in under fake profiles that carry real participants’ names and photos, and sending phishing invitations that trick recipients into downloading malicious software.
Red flags and defenses
The Cyber Directorate recommends several countermeasures. These include asking participants to display a specific physical item on camera, using real-time facial movement analysis and deploying AI-based verification tools. Individuals should also look out for telltale signs during calls: frozen facial expressions, unfocused eyes, poor lip-syncing or delayed responses.
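To make the idea of real-time facial movement analysis concrete, here is a minimal illustrative sketch, not drawn from the Cyber Directorate’s report, of how a script might flag a suspiciously static face during a call. It assumes Python with OpenCV and its bundled Haar face detector; the motion threshold and frame budget are arbitrary values chosen purely for illustration.

    # Minimal sketch (an assumption, not the Cyber Directorate's method):
    # flag a face region that barely changes between frames, one of the
    # "frozen expression" red flags described above.
    import cv2

    MOTION_THRESHOLD = 2.0  # mean absolute pixel difference; arbitrary, tune per camera
    FRAMES_TO_CHECK = 300   # roughly 10 seconds at 30 fps

    # Haar cascade face detector bundled with opencv-python
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    cap = cv2.VideoCapture(0)  # default webcam; a recording's file path also works
    prev_face = None

    for _ in range(FRAMES_TO_CHECK):
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) == 0:
            continue
        x, y, w, h = faces[0]
        # Normalize the face crop so frames are comparable
        face = cv2.resize(gray[y:y + h, x:x + w], (128, 128))
        if prev_face is not None:
            motion = cv2.absdiff(face, prev_face).mean()
            if motion < MOTION_THRESHOLD:
                print("warning: face region nearly static; possible looped or synthetic video")
        prev_face = face

    cap.release()

A single heuristic like this is easy to fool; commercial verification tools combine many such signals with trained models, which is why the report pairs automated analysis with human challenges such as the physical-item request.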