'ShadowLeak' Attack Let Hackers Steal Emails Through ChatGPT Without Detection
Want more insights like this?
Researchers at Radware discovered a clever attack called "ShadowLeak" that allowed hackers to steal emails from ChatGPT users completely undetected. The attack worked by embedding hidden malicious code in normal-looking emails using tiny or white text. When victims asked ChatGPT to summarize their emails, the AI would read the hidden code and secretly send email contents to attacker-controlled servers.
The attack left zero traces on company networks since everything happened through OpenAI's infrastructure. Researchers found ChatGPT followed malicious instructions about half the time, with success rates improving when attackers added urgency like "HR compliance checks." OpenAI quietly fixed the vulnerability in August after Radware reported it in June, though the exact solution remains unclear.
Source: Dark Reading