Free flights, leaking credit card data: Israeli researchers trick Microsoft AI into fraud

Tenable researchers hijack an AI agent built in Microsoft Copilot Studio using prompt injection, bypassing ID checks and leaking payment data, showing how a tool built to democratize AI can quickly become a serious security threat

Israeli researchers have revealed a critical vulnerability in Microsoft’s Copilot Studio platform, demonstrating how AI agents built using “no-code” tools can be hijacked to commit fraud and leak sensitive data — all without human oversight.
In newly released research, cybersecurity firm Tenable detailed how it was able to successfully perform a jailbreak on an AI travel agent created within Copilot Studio. The agent, designed to autonomously manage travel bookings and handle sensitive customer information such as contact details and credit card numbers, was manipulated through a technique known as prompt injection.
2 View gallery
(Photo: Courtesy of Tenable)
Despite being programmed with strict rules to verify customer identities before making changes or disclosing information, the AI agent was coerced into bypassing those controls. Tenable researchers tricked it into leaking full payment card information and altering a booking to charge €0, effectively granting free travel services without authorization.
“AI agent builders, like Copilot Studio, democratize the ability to build powerful tools, but they also democratize the ability to execute financial fraud, thereby creating significant security risks without even knowing it,” said Keren Katz, senior group manager of AI security product and research at Tenable. “That power can easily turn into a real, tangible security risk.”
Tenable warned that as businesses increasingly rely on "no-code" AI platforms to boost productivity, many fail to recognize the hidden risks. Non-developers may unknowingly grant excessive permissions to AI agents, permissions that attackers can exploit.
The implications of such vulnerabilities include data breaches, regulatory exposure under privacy laws, revenue loss through fraud and reputational damage. Tenable’s test highlighted how an agent granted broad “edit” privileges for legitimate business tasks, such as updating travel dates, could be abused to manipulate financial transactions.
2 View gallery
Copilot Studio
Copilot Studio
Copilot Studio
(Photo: Microsoft)
To mitigate these risks, Tenable urges organizations to apply strict security protocols, including limiting agents’ access to the bare minimum required for their role, mapping all systems the agent can touch before deployment, and actively monitoring agent behavior for anomalies.
A key takeaway is that AI agents often possess excessive permissions that are not immediately visible to the non-developers building them. To mitigate this, business leaders must implement robust governance and enforce strict security protocols before deploying these tools.
Tenable’s findings add to growing concerns about enterprise AI deployment, especially as platforms like Copilot Studio promise automation and efficiency without the need for programming expertise. The company’s research underscores that ease of use must be matched by equally robust security measures and governance.
Tenable is a global exposure management firm that serves roughly 44,000 customers, offering solutions to identify and close cybersecurity gaps across IT systems, cloud environments and critical infrastructure.
Comments
The commenter agrees to the privacy policy of Ynet News and agrees not to submit comments that violate the terms of use, including incitement, libel and expressions that exceed the accepted norms of freedom of speech.
""