Skill v1.0.5
ClawScan security
botlearn-assessment · ClawHub's context-aware review of the artifact, metadata, and declared behavior.
Scanner verdict
Benign · Mar 6, 2026, 7:51 PM
- Verdict: benign
- Confidence: high
- Model: gpt-5-mini
- Summary: The skill's files, instructions, and runtime behavior are consistent with a local self-assessment/reporting tool: it reads and writes local result files and can run bundled Node.js report generators, but it does not request credentials or unexpected external access.
- Guidance
- This skill appears to do what it claims: run an autonomous self-assessment, self-score, and generate Markdown and HTML reports in a local results/ directory. Before installing or running:
  1) Be aware that the skill writes question/answer text, scoring, and generated reports to results/; inspect that directory if results may contain sensitive input.
  2) HTML reports (or the D4 example HTML) may reference external CDNs (e.g., Chart.js) when opened in a browser; open them offline or inspect the generated HTML if that is a concern.
  3) HTML report generation uses the included Node scripts; if you do not want Node execution in your environment, the flows note that the agent will skip HTML generation when node is not available.
  4) For extra assurance, quickly review the two JS files (scripts/radar-chart.js and scripts/generate-html-report.js) for outbound network calls before running them.
  Overall, the package is internally consistent and does not request disproportionate access, but treat generated reports as potentially sensitive outputs and run the skill in an environment you control.
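The network-call review suggested in point 4 can be done quickly from the command line. This is a first-pass sketch, assuming the script paths named above and that grepping for common Node network primitives is an acceptable heuristic:

```shell
# First-pass scan of the bundled report scripts for outbound network
# calls before running them (paths are from the guidance above).
for f in scripts/radar-chart.js scripts/generate-html-report.js; do
  [ -f "$f" ] || { echo "$f: missing"; continue; }
  # Flag common Node network primitives and hard-coded remote URLs.
  if grep -nE "require\(['\"](https?|net|dgram)['\"]\)|fetch\(|https?://" "$f"; then
    echo "$f: review the matches above before running"
  else
    echo "$f: no obvious network calls"
  fi
done
```

A clean grep is not proof of safety (code can build URLs dynamically), but it catches the obvious cases such as CDN references and `require("https")`.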
Review Dimensions
- Purpose & Capability
- ok: The name/description (a 5-dimension self-assessment) match the included question banks, flows, and report-generation scripts. The files (questions, references, scoring, and two JS scripts) are exactly what such a tool needs; no unrelated credentials, binaries, or config paths are requested.
- Instruction Scope
- note: SKILL.md and flows explicitly instruct the agent to read repository files (questions, references) and to read/write a local results/ directory (INDEX.md, exam-*.md, exam-*-data.json). They also instruct attempting web_search or node-based code execution only when a question requires those capabilities. This is consistent with the stated purpose, but the report will capture question/answer text and scoring artifacts in results/, which may include user-provided or sensitive content if used in an interactive session.
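Because the report captures question/answer text, it can help to enumerate exactly what was written before sharing anything. A minimal sketch, assuming the results/ layout named above:

```shell
# List the artifacts the skill is expected to write under results/
# (INDEX.md, exam-*.md, exam-*-data.json) so they can be reviewed
# for sensitive content before being shared.
ls results/INDEX.md results/exam-*.md results/exam-*-data.json 2>/dev/null \
  || echo "no results written yet"
```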
- Install Mechanism
- ok: No install spec is provided (instruction-only with bundled scripts), so nothing is downloaded from external URLs. The included Node.js scripts are local files; running them requires Node.js to be present, but the flows already document skipping HTML generation if node is not available.
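The skip-if-no-node behavior the flows document amounts to a simple guard. A sketch, assuming the generator script path from the guidance above:

```shell
# Run the bundled HTML generator only when Node.js is on PATH,
# mirroring the skip behavior the flows describe; Markdown output
# is unaffected either way.
if command -v node >/dev/null 2>&1; then
  node scripts/generate-html-report.js
else
  echo "node not found; skipping HTML report generation"
fi
```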
- Credentials
- ok: The skill requires no environment variables, secrets, or external credentials. Its behavior (file I/O within results/, optional web_search/tool checks) is proportional to a self-assessment/reporting tool. There are no declarations requesting unrelated tokens or keys.
- Persistence & Privilege
- ok: always:false and normal model invocation settings. The skill writes files into a results/ directory (its expected output), but it does not request system-wide configuration changes or permanent elevated privileges, and, per the provided files, it does not modify other skills' configs.
