Hugging Face
Pass. Audited by VirusTotal on May 11, 2026.
Overview
Type: OpenClaw Skill
Name: hugging-face
Version: 1.0.0

The skill bundle is well-aligned with its stated purpose of interacting with Hugging Face services for model discovery and inference. It transparently declares its use of `curl` and `jq` for API calls to specified Hugging Face endpoints, and `HF_TOKEN` for authentication, with explicit instructions in `inference.md` and `memory-template.md` to handle tokens securely (not logging or storing them locally). The `setup.md` file performs standard local directory and file creation with restrictive `chmod 700` and `chmod 600` permissions. Crucially, `SKILL.md` and `inference.md` contain strong security guardrails and core rules for the AI agent, instructing it to minimize external data, avoid sending unrelated user context or local files, and validate licenses, which actively mitigates prompt injection risks and promotes secure behavior. There is no evidence of data exfiltration beyond declared endpoints, unauthorized execution, persistence, or obfuscation.
Findings (0)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
The skill can use your Hugging Face account token for requested API calls, which may access gated resources or consume account quota.
The skill requires a Hugging Face token for authenticated inference requests. This is expected for the stated integration, and the artifact also instructs the agent not to print full tokens.
export HF_TOKEN="<token>" ... -H "Authorization: Bearer ${HF_TOKEN}"

Use a least-privilege Hugging Face token, do not paste tokens into chat logs, and revoke or rotate the token if it may have been exposed.
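One way to honor the "never print full tokens" rule is to redact the token before anything is echoed or logged. A minimal sketch, assuming `HF_TOKEN` is already exported (the `hf_example000` fallback below is a placeholder for illustration only):

```shell
# Fallback placeholder so the sketch runs standalone; real use exports HF_TOKEN.
HF_TOKEN="${HF_TOKEN:-hf_example000}"
auth_header="Authorization: Bearer ${HF_TOKEN}"

# Redact before logging: keep only the first six characters of the token.
redacted="${auth_header/${HF_TOKEN}/${HF_TOKEN:0:6}…}"
echo "sending request with: ${redacted}"
```

The full `auth_header` is passed to `curl` via `-H`; only the redacted form ever reaches a log or a chat transcript.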
Any prompt or task data you choose to process through hosted inference leaves your machine and is handled by Hugging Face.
The skill discloses that selected prompts or task inputs may be sent to Hugging Face services for hosted inference.
Inference payloads sent to Hugging Face Inference API when execution is requested.
Only send data you are comfortable sharing with Hugging Face, and remove unrelated private context from prompts before running inference.
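A practical way to keep unrelated context out of inference payloads is to build the JSON from a single explicit variable rather than interpolating free-form text. A sketch using the skill's declared `jq` dependency (the task string is an illustrative assumption):

```shell
# Only the task input goes into the payload; nothing else rides along.
task_input="Translate to French: Hello, world."

# jq --arg safely escapes the string; -c emits compact JSON for the request body.
payload=$(jq -cn --arg inputs "$task_input" '{inputs: $inputs}')
echo "$payload"
```

The resulting `$payload` would then be passed to `curl -d "$payload"` when hosted inference is actually requested.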
Future recommendations may be influenced by locally stored preferences and evaluation notes, and those files may contain summaries of prior work.
The skill creates persistent local memory for preferences, model shortlists, evaluation logs, and endpoint notes; the artifact bounds this to durable non-secret context.
Create `~/hugging-face/memory.md` with this structure ... Store durable decisions, not full conversation transcripts. ... Never store tokens, secrets, or private keys in memory files.
Review `~/hugging-face/` periodically, avoid storing sensitive prompts or secrets there, and use the documented paused/read-only behavior if you do not want memory updates.
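The setup described above (a local memory directory with restrictive permissions) can be sketched as follows; a temporary `HOME` is used here so the illustration does not touch real files:

```shell
HOME="$(mktemp -d)"   # temporary HOME for illustration only

# Directory readable/writable/searchable by the owner alone.
mkdir -p "$HOME/hugging-face"
chmod 700 "$HOME/hugging-face"

# Memory file readable/writable by the owner alone; never stores tokens.
touch "$HOME/hugging-face/memory.md"
chmod 600 "$HOME/hugging-face/memory.md"

ls -ld "$HOME/hugging-face"
```

Periodic review then amounts to listing `~/hugging-face/` and checking that no secrets or sensitive prompts have accumulated in `memory.md`.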
