Hugging Face
Advisory. Audited by static analysis on Apr 30, 2026.
Overview
No suspicious patterns detected.
Findings (0)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
The skill can use your Hugging Face account token for requested API calls, which may access gated resources or consume account quota.
The skill requires a Hugging Face token for authenticated inference requests. This is expected for the stated integration, and the artifact also instructs users not to print full tokens.
`export HF_TOKEN="<token>" ... -H "Authorization: Bearer ${HF_TOKEN}"`

Use a least-privilege Hugging Face token, do not paste tokens into chat logs, and revoke or rotate the token if it may have been exposed.
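The token-handling pattern above can be sketched as a small script. This is an illustrative sketch, not the skill's actual code: the model URL is a placeholder, the network call is commented out so the sketch runs offline, and the redaction scheme (printing only a short prefix) is an assumption consistent with the artifact's "do not print full tokens" guidance.

```shell
# Sketch: calling the Hugging Face Inference API with a least-privilege token.
# The token value and model URL below are placeholders, not values from the advisory.
HF_TOKEN="hf_example_token_do_not_commit"
export HF_TOKEN

# Never echo the full token; log only a redacted prefix for debugging.
redacted="$(printf '%s' "$HF_TOKEN" | cut -c1-6)..."
echo "Using token: ${redacted}"

# The actual request (commented out so the sketch runs without network access):
# curl -s https://api-inference.huggingface.co/models/<model> \
#   -H "Authorization: Bearer ${HF_TOKEN}" \
#   -d '{"inputs": "Hello"}'
```

A fine-grained token with read-only scope limits the blast radius if the token is ever exposed in a log or chat transcript.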
Any prompt or task data you choose to process through hosted inference leaves your machine and is handled by Hugging Face.
The skill discloses that selected prompts or task inputs may be sent to Hugging Face services for hosted inference.
Inference payloads are sent to the Hugging Face Inference API when execution is requested.
Only send data you are comfortable sharing with Hugging Face, and remove unrelated private context from prompts before running inference.
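One way to act on the recommendation above is to strip clearly marked private lines from a prompt before it leaves the machine. The `# PRIVATE` tagging convention below is an assumption for illustration only; the skill does not define such a convention.

```shell
# Sketch: remove lines tagged "# PRIVATE" from a prompt before hosted inference.
# The tag convention is a local, hypothetical choice, not part of the skill.
prompt='Summarize the release notes.
# PRIVATE internal-only context that should never leave this machine
Keep the summary under 100 words.'

# Filter out tagged lines; only $clean would be sent to the remote API.
clean=$(printf '%s\n' "$prompt" | grep -v '# PRIVATE')
printf '%s\n' "$clean"
```

This kind of pre-send filter is a convenience, not a guarantee; the safest option remains composing prompts without private context in the first place.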
Future recommendations may be influenced by locally stored preferences and evaluation notes, and those files may contain summaries of prior work.
The skill creates persistent local memory for preferences, model shortlists, evaluation logs, and endpoint notes; the artifact bounds this to durable non-secret context.
Create `~/hugging-face/memory.md` with this structure ... Store durable decisions, not full conversation transcripts. ... Never store tokens, secrets, or private keys in memory files.
Review `~/hugging-face/` periodically, avoid storing sensitive prompts or secrets there, and use the documented paused/read-only behavior if you do not want memory updates.
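The periodic review suggested above can be partly automated with a secret scan over the memory directory. This is a heuristic sketch: the `hf_[A-Za-z0-9]{10,}` pattern is an assumed approximation of Hugging Face token strings, not an official format check, and a temporary directory stands in for `~/hugging-face/` so the sketch is safe to run anywhere.

```shell
# Sketch: audit the skill's local memory files for token-like strings.
# A temp dir stands in for ~/hugging-face/; swap in the real path for actual use.
mem_dir="$(mktemp -d)"
echo "model shortlist: distilbert-base-uncased" > "$mem_dir/memory.md"

# Heuristic pattern for Hugging Face tokens (assumption, not an official spec).
if grep -rEl 'hf_[A-Za-z0-9]{10,}' "$mem_dir"; then
  echo "WARNING: possible token found; rotate it and scrub the file."
else
  echo "No token-like strings found."
fi
```

Running a check like this on a schedule catches accidental secret writes early, complementing the skill's own "never store tokens" rule.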
