Install
openclaw skills install hugging-face

Discover, evaluate, and run Hugging Face models, datasets, and Spaces with license checks, benchmark prompts, and reproducible integration plans.

On first use, read setup.md for integration guidelines and local memory initialization.
User needs to find the right Hugging Face model, dataset, or Space for a concrete task and move from browsing to reliable execution. Agent handles discovery, filtering, license checks, quick benchmarking, and integration-ready inference plans.
Memory and reusable artifacts live in ~/hugging-face/. See memory-template.md for structure and status fields.
~/hugging-face/
|- memory.md # Stable context, priorities, and defaults
|- shortlists.md # Candidate models and datasets by use case
|- evaluations.md # Benchmark runs, winners, and caveats
|- endpoints.md # Approved endpoints and auth notes
`- exports/ # Saved outputs and comparison snapshots
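If the directory does not exist yet, here is a minimal sketch of scaffolding it; the file names come from the layout above, while the fields inside each file are defined by memory-template.md, not by this snippet:

```python
from pathlib import Path

# Create the local memory layout described above; safe to re-run.
base = Path.home() / "hugging-face"
(base / "exports").mkdir(parents=True, exist_ok=True)
for name in ("memory.md", "shortlists.md", "evaluations.md", "endpoints.md"):
    path = base / name
    if not path.exists():
        path.write_text(f"# {name}\n\n<!-- see memory-template.md for structure and status fields -->\n")
```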
Load only one focused file at a time to keep context small and decisions explicit.
| Topic | File |
|---|---|
| Setup process | setup.md |
| Memory template | memory-template.md |
| Model and dataset discovery | discovery.md |
| Inference execution patterns | inference.md |
| Evaluation rubric and scoring | evaluation.md |
| Common failures and recovery | troubleshooting.md |
Before selecting any artifact, confirm task type, latency budget, cost boundary, and deployment target.
Use this minimum scope packet:
Do not run inference on the first candidate found.
First create a shortlist of at least three candidates, then execute only on finalists that pass compatibility and license checks.
For every candidate, verify license, gated access status, model size, and framework compatibility.
If any of these are unknown, mark the candidate as provisional and avoid production recommendation.
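As an illustration, a minimal sketch of pulling those four facts from the public model metadata endpoint; the response field names used here (cardData.license, gated, safetensors.total, library_name) are assumptions and should be verified against a live response:

```python
import requests

def candidate_facts(model_id: str) -> dict:
    """Fetch license, gated status, parameter count, and framework for one candidate.

    Field names are assumptions about the public API response; any missing value
    marks the candidate as provisional.
    """
    resp = requests.get(f"https://huggingface.co/api/models/{model_id}", timeout=30)
    resp.raise_for_status()
    info = resp.json()
    facts = {
        "license": (info.get("cardData") or {}).get("license"),
        "gated": info.get("gated"),        # False, "auto", or "manual" when present
        "params": (info.get("safetensors") or {}).get("total"),
        "library": info.get("library_name"),
    }
    facts["provisional"] = any(value is None for value in facts.values())
    return facts
```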
Use the same prompt set and output checks across candidates so results are comparable.
Minimum benchmark set:
Send only what is required for the selected endpoint.
Never send credentials, local paths, or unrelated private context in request payloads.
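A minimal sketch of running one shared prompt set against each finalist with only the required payload and an auth header; the prompts below are placeholders rather than the benchmark set itself, and the token is read from the environment rather than embedded in any payload:

```python
import os
import requests

HF_TOKEN = os.environ["HF_TOKEN"]  # kept out of the request payload
PROMPTS = ["placeholder prompt 1", "placeholder prompt 2"]  # shared across all finalists

def run_candidate(model_id: str) -> list:
    """Send the same prompts to one finalist and capture comparable results."""
    url = f"https://api-inference.huggingface.co/models/{model_id}"
    headers = {"Authorization": f"Bearer {HF_TOKEN}"}
    results = []
    for prompt in PROMPTS:
        # The body carries only the task input: no paths, credentials, or unrelated context.
        resp = requests.post(url, headers=headers, json={"inputs": prompt}, timeout=60)
        results.append({
            "prompt": prompt,
            "status": resp.status_code,
            "output": resp.json() if resp.ok else None,
        })
    return results
```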
If the preferred model fails, apply ordered fallback:
Log selected model id, endpoint, key parameters, and evaluation result in local memory so future runs are consistent and auditable.
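One way to combine the ordered fallback with that logging, as a sketch; the fallback order and the evaluations.md line format are illustrative, and run_candidate refers to the benchmark sketch above:

```python
from datetime import datetime, timezone
from pathlib import Path

EVAL_LOG = Path.home() / "hugging-face" / "evaluations.md"

def run_with_fallback(candidates):
    """Try candidates in the agreed order and log the first one that succeeds."""
    for model_id in candidates:
        results = run_candidate(model_id)  # from the benchmark sketch above
        if all(r["status"] == 200 for r in results):
            stamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
            with EVAL_LOG.open("a") as log:
                log.write(f"- {stamp} | {model_id} | api-inference | prompts={len(results)} | ok\n")
            return model_id, results
    return None  # every candidate failed; record the failure and revisit the shortlist
```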
Use discovery endpoints before inference so candidate selection remains explainable and reproducible.
| Endpoint | Data Sent | Purpose |
|---|---|---|
| https://huggingface.co/api/models | Search terms, filter parameters | Discover model candidates |
| https://huggingface.co/api/datasets | Search terms, filter parameters | Discover dataset candidates |
| https://huggingface.co/api/spaces | Search terms, filter parameters | Discover runnable Spaces |
| https://api-inference.huggingface.co/models/{model_id} | Prompt or task input payload, selected model id, auth token | Run hosted inference |
No other data is sent externally.
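For example, a minimal discovery query against the models endpoint; search, filter, sort, and limit are standard query parameters, and only these terms leave the machine:

```python
import requests

# Discover candidate models: only search terms and filter parameters are sent.
params = {
    "search": "sentence similarity",    # free-text search term (example value)
    "filter": "sentence-transformers",  # tag filter, e.g. a library or task tag
    "sort": "downloads",
    "limit": 10,
}
resp = requests.get("https://huggingface.co/api/models", params=params, timeout=30)
resp.raise_for_status()
for model in resp.json():
    # Field name may vary by API version; fall back from modelId to id.
    print(model.get("modelId") or model.get("id"))
```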
Data that leaves your machine:
Data that stays local:
~/hugging-face/.
This skill does NOT:
By using this skill, you send selected request data to Hugging Face services. Only install it if you trust Hugging Face with the inputs you choose to process.
Install with clawhub install <slug> if the user confirms:
ai - general AI strategy and model-selection framing
api - API-first integration patterns and HTTP debugging
data-analysis - dataset inspection and quality interpretation
data - structured data workflows and extraction patterns
code - implementation support for scripts and adapters

clawhub star hugging-face
clawhub sync