Security researcher hxr1 discovered a new way to sneak malware past Windows defenses by hiding it inside AI model files. The attack exploits Windows' built-in AI features, which automatically trust ONNX (Open Neural Network Exchange) model files used by apps like Windows Hello and Office.
Since Windows doesn't scan these model files for threats, attackers can embed malicious code in a model's weight data and use Microsoft's own trusted system files to execute it. To security software, the activity looks like legitimate AI processing rather than an attack.
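The article doesn't detail hxr1's exact encoding, but the general idea of stashing arbitrary bytes inside a model's weight tensors without noticeably changing the model can be sketched with plain NumPy. The low-bit scheme and function names below are illustrative assumptions, not the researcher's actual technique:

```python
import numpy as np

def embed_bytes(weights: np.ndarray, payload: bytes) -> np.ndarray:
    """Hide payload bits in the lowest mantissa bit of each float32 weight.

    Illustrative only: one bit per weight, big-endian bit order.
    """
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    raw = weights.astype(np.float32).view(np.uint32).ravel()
    assert bits.size <= raw.size, "payload too large for this tensor"
    # Clear each target weight's lowest bit, then OR in one payload bit.
    raw[: bits.size] = (raw[: bits.size] & ~np.uint32(1)) | bits
    return raw.view(np.float32).reshape(weights.shape)

def extract_bytes(weights: np.ndarray, n: int) -> bytes:
    """Recover n hidden bytes from the low mantissa bits."""
    raw = weights.astype(np.float32).view(np.uint32).ravel()
    bits = (raw[: n * 8] & np.uint32(1)).astype(np.uint8)
    return np.packbits(bits).tobytes()

# Demo: the payload round-trips, and the weights barely change,
# which is why a model still works after tampering.
rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64)).astype(np.float32)
w2 = embed_bytes(w, b"hello")
print(extract_bytes(w2, 5))        # b'hello'
print(np.max(np.abs(w - w2)))      # on the order of 1e-7: one mantissa bit
```

Because flipping the last mantissa bit perturbs each weight by roughly one part in 2^23, the model's predictions are effectively unchanged, so nothing looks wrong at inference time. The payload still needs a separate loader to extract and run it, which is where abuse of trusted system binaries comes in.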
The researcher argues this highlights a growing blind spot as AI workloads become more common: security tools need updates to scan model files, and users shouldn't blindly trust AI models downloaded from the internet.
Source: Dark Reading