
AI Systems Fooled by Hidden Prompts in Downscaled Images

Discover how Trail of Bits exposed image scaling attacks that trick AI by hiding malicious instructions.
Content Team

Cybersecurity researchers at Trail of Bits discovered a sneaky new way to trick AI systems: image scaling attacks. Attackers hide malicious instructions in high-resolution images, and the instructions become visible only when AI tools automatically downscale the images for processing.

The attack works by exploiting the interpolation that downscaling algorithms such as nearest-neighbor, bilinear, and bicubic perform: the hidden prompt is effectively invisible at full resolution, but the resampled pixels combine into legible text in the smaller version that gets fed to the AI model. Trail of Bits demonstrated this by hiding instructions that caused a Gemini-powered tool to exfiltrate a user's calendar data.
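To make the mechanism concrete, here is a minimal sketch of the hiding step, assuming the target pipeline downscales with nearest-neighbor sampling at a known factor. This is not Trail of Bits' actual exploit code, and the file names are placeholders; real attacks are tuned to the specific interpolation method (for example, bicubic) used by the victim system.

```python
import numpy as np
from PIL import Image

FACTOR = 8            # assumed downscale factor used by the AI pipeline
OFFSET = FACTOR // 2  # Pillow's NEAREST picks input pixel floor((i + 0.5) * FACTOR);
                      # adjust OFFSET if your target resizer samples differently

def embed(cover: Image.Image, message: Image.Image) -> Image.Image:
    """Hide `message` so it only emerges after a FACTOR-x NEAREST downscale."""
    pixels = np.asarray(cover.convert("L"), dtype=np.uint8).copy()
    h, w = pixels.shape
    assert h % FACTOR == 0 and w % FACTOR == 0, "cover must be a multiple of FACTOR"
    small = message.convert("L").resize((w // FACTOR, h // FACTOR))
    # NEAREST downscaling keeps exactly one pixel per FACTOR x FACTOR block;
    # overwrite only those sampled positions, so 63 of every 64 cover pixels
    # stay untouched and the full-size image still looks like the cover.
    pixels[OFFSET::FACTOR, OFFSET::FACTOR] = np.asarray(small, dtype=np.uint8)
    return Image.fromarray(pixels)

stego = embed(Image.open("cover.png"), Image.open("secret_text.png"))
stego.save("payload.png")  # looks like the cover at full resolution

# The downscaled copy the model actually "sees" shows the hidden text.
preview = stego.resize((stego.width // FACTOR, stego.height // FACTOR), Image.NEAREST)
preview.save("what_the_model_sees.png")
```

Opening payload.png looks unremarkable, but the saved preview reproduces the hidden message. A real payload would blend the overwritten pixels into the cover rather than writing them verbatim, and attacks against bilinear or bicubic resamplers instead solve for pixel values whose weighted averages resolve into the target text.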

Several major platforms are vulnerable, including Google's Gemini, Vertex AI Studio, and Google Assistant. The researchers released an open-source tool called Anamorpher to help other security experts test for these vulnerabilities.

Source: SecurityWeek
