
OpenClaw and Hugging Face Supply Chain Poisoning: 575+ Malicious AI Agent Skills and 352,000+ Unsafe Models Identified—Trojans, Cryptominers, and AMOS Stealer Malware

Date: 2026-05-12
Tags: supply-chain, malicious-tool, prompt-injection

Executive Summary

Hugging Face and ClawHub, the two largest repositories for AI models and agent skills, have been found to contain hundreds of malicious models capable of executing arbitrary code and 575 malicious OpenClaw agent skills designed to steal credentials, open reverse shells, and hijack AI agents for cryptocurrency mining. Protect AI, which partnered with Hugging Face to scan the platform's model library, has examined more than four million models and identified approximately 352,000 unsafe or suspicious issues across 51,700 models. Over 575 malicious skills across 13 developer accounts were identified in OpenClaw, targeting Windows and macOS with trojans, cryptominers, and AMOS stealer malware.

Campaign Summary

Campaign / Malware: AI Supply Chain Poisoning Campaign (OpenClaw/Hugging Face)
Attribution: Unknown threat actor collective (confidence: low)
Target: Developers, researchers, and enterprises downloading AI models and agent skills from Hugging Face and OpenClaw; organizations using ClawHub for agentic AI deployments
Vector: Malicious model repositories using pickle serialization exploits (nullifAI technique); trojanized agent skills with indirect prompt injection; social engineering via legitimate-appearing model names
Status: active
First Observed: 2026-04-28

Detailed Findings

The attack technique, known as "nullifAI," exploits Python's pickle serialization format: malicious Python code is embedded at the start of the pickle byte stream, and the file is compressed with 7z rather than the default ZIP format, which breaks Hugging Face's PickleScan detection tool.

In the OpenClaw campaign, trojanized agent skills masquerade as legitimate tools while instructing users to run encoded commands or install hidden malware. Hidden instructions embedded in skill content act as indirect prompt injection, causing AI agents to execute malicious actions on the attacker's behalf.

A malicious Hugging Face repository named Open-OSS/privacy-filter impersonated OpenAI's legitimate Privacy Filter release and reached the #1 trending position with approximately 244,000 downloads and 667 likes in under 18 hours; those numbers were artificially inflated to make the repository appear legitimate. The campaigns employ advanced evasion techniques, including obfuscation, encryption, in-memory execution, process injection, and persistence mechanisms, to maintain stealth and command-and-control capabilities.

MITRE ATT&CK Mapping

Supply Chain Compromise (T1195): Malicious models and agent skills embedded into trusted AI repositories
Execution via Python Deserialization (T1203): Arbitrary code execution via unsafe pickle.loads() in machine learning models
Credential Access via Stealer Malware (T1555): AMOS stealer malware targets browser data, SSH keys, and credential files
Remote Code Execution via Prompt Injection (T1059): Indirect prompt injection in AI agent skills triggers command execution
Defense Evasion via Obfuscation (T1027): Multistep infection chains and encryption techniques bypass detection

IOCs

Domains

recargapopular.com

Full URL Paths

https://huggingface.co/Open-OSS/privacy-filter

Splunk Format

"recargapopular.com" OR "https://huggingface.co/Open-OSS/privacy-filter"

Package Indicators

Open-OSS/privacy-filter (Hugging Face)
575+ trojanized OpenClaw agent skills
352,000+ unsafe/suspicious Hugging Face models

Detection Recommendations

Scan all downloaded Hugging Face models and OpenClaw agent skills with ML-specific analysis tools (Protect AI, safetensors validation).
Treat all pickle-serialized models as untrusted and verify file signatures.
Enforce content security policies on agent skill descriptions to detect hidden prompt-injection instructions.
Implement sandboxed environments for model deserialization.
Maintain audit logs of model downloads and instantiation.
Block execution of agent skills from unverified developer accounts.
Use SBOM/ML-BOM tracking for model provenance.
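The "treat pickle-serialized models as untrusted" recommendation can be sketched as a restricted unpickler that resolves only an explicit allowlist of globals and refuses everything else. The allowlist contents and the `safe_loads` helper below are illustrative assumptions; a real deployment would enumerate exactly the classes its model format requires:

```python
import io
import pickle

# Illustrative allowlist: only globals a benign model file is expected
# to reference may be resolved during unpickling.
ALLOWED_GLOBALS = {
    ("collections", "OrderedDict"),
}

class RestrictedUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        # Called for every GLOBAL/STACK_GLOBAL opcode in the stream;
        # anything outside the allowlist (os.system, eval, ...) is refused.
        if (module, name) not in ALLOWED_GLOBALS:
            raise pickle.UnpicklingError(
                f"blocked global during unpickling: {module}.{name}")
        return super().find_class(module, name)

def safe_loads(data: bytes):
    """Deserialize untrusted pickle bytes under the allowlist policy."""
    return RestrictedUnpickler(io.BytesIO(data)).load()
```

Note that this only narrows the attack surface; sandboxed deserialization and preferring safetensors over pickle entirely remain the stronger controls.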

References