skill-trust-auditor

Pass. Audited by ClawScan on May 10, 2026.

Overview

This looks like a purpose-aligned pre-install skill auditor, but users should treat its scores as advisory and be careful with its optional package setup, LLM mode, and audit-then-install alias.

This skill appears coherent and not malicious based on the provided artifacts. Before installing, know that setup can install Python packages, that `--llm` mode uses your Anthropic API key, and that the trust score is advisory. For safer use, run audits manually, review the output yourself, and install target skills only after you understand the findings.

Findings (5)

Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

Finding 1: Setup installs Python packages

What this means

Running setup may install or update Python packages on your machine.

Why it was flagged

The first-run setup pulls dependencies from PyPI and upgrades pip. That is expected for this Python-based auditor, but it relies on external package provenance and changes the user's Python environment.

Skill content
python3 -m pip install --quiet --upgrade pip ... "requests>=2.31.0" ... "anthropic>=0.25.0" ... python3 -m pip install --quiet "$pkg"
Recommendation

Run setup in a virtual environment if possible, and review dependency installation before approving it.
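One way to follow this recommendation is to confine the setup to a virtual environment before approving it. The dependency pins below come from the excerpt above; the venv path is only illustrative:

```shell
# Create an isolated environment so the setup's installs don't touch
# the system Python (the venv path here is just an example).
python3 -m venv ~/.venvs/skill-trust-auditor
source ~/.venvs/skill-trust-auditor/bin/activate

# The same dependencies the setup would pull, now scoped to the venv.
python3 -m pip install --upgrade pip
python3 -m pip install "requests>=2.31.0" "anthropic>=0.25.0"
```

Deleting the venv directory afterwards removes everything the setup installed, which is not true of a system-wide `pip install`.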

Finding 2: `--llm` mode uses your Anthropic account

What this means

If you enable LLM mode, the skill may use your Anthropic account and incur provider usage.

Why it was flagged

The skill can use the user's Anthropic credential for optional LLM-assisted analysis. This is disclosed and purpose-aligned, with no evidence here of logging or unrelated use.

Skill content
Anthropic API key (optional, for `--llm` mode)
Recommendation

Only use `--llm` if you are comfortable using your Anthropic API key for this audit workflow.
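If you do opt in, you can limit the key's exposure by supplying it for a single invocation rather than exporting it shell-wide. In this sketch, the script path is taken from the alias quoted later in this report, `ANTHROPIC_API_KEY` is the Anthropic SDK's standard environment variable, and both the key file path and the skill name are placeholders:

```shell
# Scope the Anthropic key to this one audit run instead of
# exporting it for the whole shell session. "some-skill" and the
# key file location are placeholders.
ANTHROPIC_API_KEY="$(cat ~/.secrets/anthropic-key)" \
  bash ~/.openclaw/workspace/skills/skill-trust-auditor/scripts/audit.sh some-skill --llm
```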

Finding 3: LLM mode sends target skill content to an external provider

What this means

Target skill content or excerpts may be processed by an external LLM provider when LLM mode is enabled.

Why it was flagged

Optional LLM judging implies sending audit context about the target skill to an external provider. This is aligned with the feature, but users should understand the data boundary.

Skill content
Optionally uses **LLM-as-judge** (Claude Haiku) for ambiguous curl intent
Recommendation

Avoid `--llm` for private or sensitive skill source unless provider processing is acceptable.

Finding 4: Audit-then-install alias

What this means

If copied as-is, this workflow may move quickly from an audit into installing another skill.

Why it was flagged

The optional alias chains an audit command directly to `clawhub install`. Installing skills can change agent behavior, so the audit result should be reviewed before proceeding. As written, the alias is also fragile: bash aliases do not take positional parameters, so `$1` expands to the shell's own (usually empty) first parameter and the typed skill name is simply appended to the end of the expanded command.

Skill content
alias clawhub-safe='bash ~/.openclaw/workspace/skills/skill-trust-auditor/scripts/audit.sh $1 && clawhub install $1'
Recommendation

Use the auditor first, read the report, and then run `clawhub install` separately if you still want to proceed.
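The recommended split workflow looks like the following; the script path matches the alias quoted above, and the skill name is a placeholder:

```shell
# Step 1: audit the target skill and stop.
bash ~/.openclaw/workspace/skills/skill-trust-auditor/scripts/audit.sh some-skill

# Step 2: read the generated report yourself.

# Step 3: install as a separate, deliberate decision.
clawhub install some-skill
```

If you genuinely want a one-command form, a shell function (which does receive `$1` correctly, unlike an alias) would be the right construct, but it still removes the pause for human review that this finding is about.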

Finding 5: Over-trusting a clean score

What this means

A user may over-trust a clean score and skip manual review of a skill that still has unrecognized risks.

Why it was flagged

The documentation presents a high score as 'SAFE' and says to install freely, while the described method is regex/pattern-based and may miss risks outside fetched or referenced files.

Skill content
| 90-100 | ✅ SAFE | Install freely |
Recommendation

Treat the trust score as a screening aid, not proof that a skill is safe.