Israeli firm uncovers ChatGPT vulnerability that leaks data without a click

Newly discovered 'ShadowLeak' vulnerability allows attackers to autonomously exfiltrate sensitive data from OpenAI servers without user interaction, highlighting a new class of AI-driven security risks as enterprises rapidly adopt ChatGPT

Cybersecurity firm Radware has uncovered a previously unknown zero-click vulnerability in OpenAI’s ChatGPT platform, marking the first server-side exploit of its kind targeting AI agents.
The vulnerability, dubbed ShadowLeak, affects ChatGPT’s Deep Research agent and enables attackers to autonomously exfiltrate sensitive user data from OpenAI servers—without requiring the user to click, open, or even view anything. According to Radware, the exploit operates covertly and leaves no visible signs on networks or devices, posing a serious threat to enterprises increasingly adopting AI services.
1 View gallery
ChatGPT
ChatGPT
ChatGPT
(Photo: rafapress / Shutterstock.com)
Radware disclosed the vulnerability to OpenAI in June under responsible disclosure protocols. OpenAI confirmed the issue had been resolved on September 3.
“This is the quintessential zero-click attack,” said David Aviv, chief technology officer at Radware. “There is no user action required, no visible cue, and no way for victims to know their data has been compromised. Everything happens entirely behind the scenes.”
The exploit was demonstrated by Radware’s Security Research Center (RSRC), where researchers showed that a malicious email sent to a user could trigger ChatGPT’s Deep Research agent—running on OpenAI’s cloud—to autonomously access and leak sensitive data. No user interaction was needed at any stage.
“This is the first purely server-side zero-click attack we’ve seen, where the AI agent autonomously performs the exfiltration,” said Gabi Nakibly, one of the lead researchers behind the discovery, along with Zvika Babo and Maor Uziel.
Unlike traditional zero-click attacks that target endpoints or mobile devices, ShadowLeak operates within the AI's cloud infrastructure, bypassing all user-facing and network-level defenses. It leaves no footprint detectable by enterprise security teams.
“AI-driven workflows are being adopted rapidly, but this technology introduces new risks that aren’t addressed by legacy security tools,” said Pascal Geenens, director of cyber threat intelligence at Radware. “Our research shows these agents can be manipulated in ways that were never anticipated.”
The discovery comes as enterprise use of ChatGPT continues to rise. In an August interview with CNBC, ChatGPT Vice President of Product Nick Turley said the platform has 5 million paying business users, underscoring the potential scale of exposure.
Radware will host a live webinar on October 16 to discuss the vulnerability in depth, offering guidance to security professionals and AI developers on how to protect AI agents from similar threats.
Radware commended OpenAI for its prompt cooperation and emphasized the importance of proactive AI security research. The company’s findings and technical breakdown of the ShadowLeak exploit are available through its Security Research Center.
Radware’s RSRC regularly conducts threat simulations and uncovers zero-day and zero-click vulnerabilities affecting both traditional and AI-based platforms.
Comments
The commenter agrees to the privacy policy of Ynet News and agrees not to submit comments that violate the terms of use, including incitement, libel and expressions that exceed the accepted norms of freedom of speech.
""