skill-trust-auditor
Pass. Audited by ClawScan on May 10, 2026.
Overview
This looks like a purpose-aligned pre-install skill auditor, but users should treat its scores as advisory and be careful with its optional package setup, LLM mode, and audit-then-install alias.
This skill appears coherent and not malicious based on the provided artifacts. Before installing, know that setup can install Python packages, `--llm` uses Anthropic, and the trust score should be treated as advisory. For safer use, run audits manually, review the output yourself, and install target skills only after you understand the findings.
Findings (5)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
Running setup may install or update Python packages on your machine.
The first-run setup pulls dependencies from PyPI and upgrades pip. That is expected for this Python-based auditor, but it relies on external package provenance and changes the user's Python environment.
python3 -m pip install --quiet --upgrade pip ... "requests>=2.31.0" ... "anthropic>=0.25.0" ... python3 -m pip install --quiet "$pkg"
Run setup in a virtual environment if possible, and review dependency installation before approving it.
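One way to follow that recommendation is to create a virtual environment before running setup, so any pip installs stay isolated from the system Python. A minimal sketch: the venv path is illustrative, and the install command (condensed from the evidence above) is left commented so you can review it before approving anything.

```shell
# Create an isolated environment so the auditor's dependencies do not
# touch the system Python (the path is illustrative).
python3 -m venv "$HOME/.venvs/skill-trust-auditor"
. "$HOME/.venvs/skill-trust-auditor/bin/activate"

# Verify that pip now resolves inside the venv before approving installs.
command -v pip

# Only then run the disclosed install step (condensed from the evidence):
# python3 -m pip install --quiet --upgrade pip "requests>=2.31.0" "anthropic>=0.25.0"
```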
If you enable LLM mode, the skill may use your Anthropic account and incur provider usage.
The skill can use the user's Anthropic credential for optional LLM-assisted analysis. This is disclosed and purpose-aligned, with no evidence here of logging or unrelated use.
Anthropic API key (optional, for `--llm` mode)
Only use `--llm` if you are comfortable using your Anthropic API key for this audit workflow.
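If you do enable `--llm`, you can limit exposure by scoping the key to a single command instead of exporting it for the whole session. A generic sketch of per-command environment scoping; the placeholder value and the child command are illustrative, not the skill's real invocation:

```shell
# The variable is set only for this one child process.
ANTHROPIC_API_KEY="sk-ant-placeholder" sh -c 'test -n "$ANTHROPIC_API_KEY" && echo "key visible to child"'

# Back in the parent shell, the key was never exported:
if [ -z "${ANTHROPIC_API_KEY:-}" ]; then
  echo "key absent from parent environment"
fi
```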
Target skill content or excerpts may be processed by an external LLM provider when LLM mode is enabled.
Optional LLM judging implies sending audit context about the target skill to an external provider. This is aligned with the feature, but users should understand the data boundary.
Optionally uses **LLM-as-judge** (Claude Haiku) for ambiguous curl intent
Avoid `--llm` for private or sensitive skill source unless provider processing is acceptable.
If copied as-is, this workflow may move quickly from an audit into installing another skill.
The optional alias chains an audit command directly to `clawhub install`. Installing skills can change agent behavior, so the audit result should be reviewed before proceeding.
alias clawhub-safe='bash ~/.openclaw/workspace/skills/skill-trust-auditor/scripts/audit.sh $1 && clawhub install $1'
Use the auditor first, read the report, and then run `clawhub install` separately if you still want to proceed.
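A hypothetical replacement for the chained alias is a shell function (aliases do not reliably take positional arguments like `$1`) that always pauses for review between the audit and the install. The script path is copied from the alias above; the confirmation prompt is an addition, not part of the skill:

```shell
# Audit, show the report, and only install after an explicit "y".
clawhub_audited_install() {
  local skill="$1"
  bash ~/.openclaw/workspace/skills/skill-trust-auditor/scripts/audit.sh "$skill" || return 1
  printf 'Install %s after reviewing the report above? [y/N] ' "$skill"
  read -r answer
  [ "$answer" = "y" ] && clawhub install "$skill"
}
```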
A user may over-trust a clean score and skip manual review of a skill that still has unrecognized risks.
The documentation presents a high score as 'SAFE' and says to install freely, while the described method is regex/pattern-based and may miss risks in content the scanner never fetched or referenced.
| 90-100 | ✅ SAFE | Install freely |
Treat the trust score as a screening aid, not proof that a skill is safe.
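One way to keep the score advisory in practice is to post-process it into a review action rather than an install decision. A sketch using the band cutoffs from the quoted table; the example score and the surrounding messages are hypothetical:

```shell
# Map a numeric trust score to a next step that always includes review.
score=92  # example value; in practice, parse this from the audit report
if [ "$score" -ge 90 ]; then
  echo "Screened clean: still skim SKILL.md and any install scripts yourself."
elif [ "$score" -ge 70 ]; then
  echo "Caution: read every finding before considering installation."
else
  echo "Do not install without a deep manual review."
fi
```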
