
Hackers Can Now Hide Malware Inside AI Files on Windows

Security loophole found: Malware can bypass Windows defenses by hiding in trusted AI model files. Learn how to protect yourself.
Content Team

Security researcher hxr1 discovered a new way to sneak malware past Windows defenses by hiding it inside AI model files. The attack exploits Windows' built-in AI features, which automatically trust ONNX neural network files used by apps like Windows Hello and Office.

Because Windows doesn't scan these model files for threats, attackers can embed malicious code in the model's data and abuse Microsoft's own trusted system files to execute it. To security software, the activity looks like legitimate AI processing rather than an attack.
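The core of the blind spot is that a model file can carry arbitrary extra bytes that scanners treat as inert weight data. The sketch below is purely conceptual and is not the researcher's actual technique: it appends a harmless text payload (behind a made-up marker) to fake model bytes, and shows that the file still begins with the original model data, so a tool that only inspects the header sees a normal model.

```python
# Conceptual sketch only: arbitrary bytes riding along inside a file
# that tooling treats as an inert AI model. The marker name and the
# "model" bytes are invented for illustration; the payload is benign text.

def embed_payload(model_bytes: bytes, payload: bytes) -> bytes:
    # Append the payload after the model data, behind a marker that a
    # cooperating loader could search for later.
    return model_bytes + b"PAYLOAD_MARKER" + payload

fake_model = b"\x08\x07fake-onnx-weights..."
tampered = embed_payload(fake_model, b"hello, not malware")

# The tampered file still starts with the original model bytes, so a
# header-only check cannot tell the two apart.
assert tampered.startswith(fake_model)
assert b"hello, not malware" in tampered
print("payload hidden at offset", tampered.find(b"PAYLOAD_MARKER"))
```

Real ONNX files are protobuf-encoded, but the principle is the same: content past what the loader parses is simply ignored, not flagged.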

The researcher says the finding highlights a growing blind spot as AI workloads become more common: security tools need to be updated to scan AI model files, and users shouldn't blindly trust models downloaded from the internet.
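As a rough illustration of the kind of check the article says scanners are missing, the minimal sketch below looks for a Windows executable header (the PE magic bytes "MZ") buried inside a model file. Real scanners do far more than this; the sample model bytes are invented for the demo.

```python
# Minimal sketch of scanning an AI model file for an embedded Windows
# executable. Only the PE "MZ" magic is checked, and only past offset 0
# (a file that legitimately IS an .exe starts with MZ; we care about
# executables buried inside another format).

def find_embedded_pe(data: bytes) -> int:
    """Return the offset of a suspicious embedded 'MZ' header, or -1."""
    return data.find(b"MZ", 1)

clean_model = b"\x08\x07fake-onnx-weights" * 4
infected = clean_model + b"MZ\x90\x00fake-pe-stub"

assert find_embedded_pe(clean_model) == -1
assert find_embedded_pe(infected) == len(clean_model)
print("embedded PE found at offset", find_embedded_pe(infected))
```

A production check would also validate the DOS stub and the PE signature at the offset stored in the DOS header, since "MZ" alone can occur by chance in large weight blobs.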

Source: Dark Reading
