Skill
Pass
Audited by ClawScan on May 1, 2026.
Overview
The skill is coherently described as a verification gate and shows only the expected use of an npm CLI package, LLM provider APIs, and optional configuration.
This looks reasonable for a verification-gating skill. Before installing, make sure you trust the moltblock npm package and repository, use a dedicated LLM API key where possible, and avoid sending secrets or private data to a cloud provider unless that is acceptable.
Findings (3)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
Running the skill will execute the installed npm CLI package, so the package provenance matters.
The skill depends on an external npm package that is run as a CLI. The version is pinned, and the dependency is central to the stated purpose, but users should still trust the package source before installing or invoking it.
node | package: moltblock@0.11.8 | creates binaries: moltblock
Use the pinned version, install from the expected npm package/repository, and review the package source if you need high assurance.
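For higher assurance, the pinned version can be requested explicitly and the published tarball inspected before the binary is trusted. A sketch using standard npm commands; the exact install step depends on how the skill declares its dependency.

```shell
# Install the exact pinned version rather than a floating range,
# so what runs matches the audited artifact.
npm install -g moltblock@0.11.8

# Optionally review the published package before trusting it:
npm view moltblock@0.11.8 dist.tarball   # URL of the published tarball
npm pack moltblock@0.11.8                # download the tarball locally for inspection
```

`npm pack` leaves a `.tgz` in the current directory whose contents can be unpacked and read before anything is executed.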
The tool can use your LLM provider account and may incur usage costs under the selected API key.
The tool uses LLM provider API keys from the environment. This is expected for the described LLM-based verification workflow, and the artifacts do not show hardcoded keys or unrelated credential use.
Set one of these for a cloud provider: OPENAI_API_KEY, ANTHROPIC_API_KEY, GOOGLE_API_KEY, ZAI_API_KEY
Use a dedicated, limited-scope API key for verification and only expose the provider key you intend the tool to use.
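One way to limit exposure is to pass a single dedicated key in the invocation's environment rather than exporting every provider key globally. A sketch, assuming the `moltblock` binary reads the key from the environment as described; `VERIFICATION_ONLY_KEY` is a hypothetical placeholder for a limited-scope key.

```shell
# Expose only the one provider key this run should use;
# any other provider keys in the shell stay out of the tool's environment.
OPENAI_API_KEY="$VERIFICATION_ONLY_KEY" moltblock
```

Scoping the key to the single command also keeps it out of shell profiles and unrelated child processes.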
Sensitive task details could be sent to the selected LLM provider if included in the verification prompt.
The skill sends task content to a configured LLM provider or local LLM. This external provider flow is disclosed and purpose-aligned, but users should be aware that task descriptions and generated artifacts may leave the local environment.
Generates artifacts via LLM API calls, then runs policy checks against the output
Avoid including secrets in tasks sent to cloud providers, or configure a local provider for sensitive verification work.
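For sensitive verification work, the cloud keys can be stripped from the environment for a single run, so task content cannot reach a cloud provider even accidentally. A sketch using GNU `env -u`; it assumes the tool falls back to a locally configured LLM when no cloud key is present, which depends on the skill's own configuration.

```shell
# Remove all cloud provider keys from the environment for this invocation,
# leaving only a local provider (if configured) available to the tool.
env -u OPENAI_API_KEY -u ANTHROPIC_API_KEY -u GOOGLE_API_KEY -u ZAI_API_KEY \
    moltblock
```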
