Skill v1.0.1

ClawScan security

crawl requirement from confluence · ClawHub's context-aware review of the artifact, metadata, and declared behavior.

Scanner verdict

Review · Mar 16, 2026, 6:31 AM
Verdict: Review
Confidence: medium
Model: gpt-5-mini
Summary
The instructions match a Confluence scraping tool, but they include overbroad behaviors (mandatory full-site crawling, reliance on a logged-in browser session/cookie, and automatic deletion of files in your workspace) that could cause data loss or unintended data exposure and are not fully justified or declared.
Guidance
This skill appears to do what it says (crawl Confluence and save Markdown/images) but carries two practical risks you should consider before installing:

(1) It insists on crawling every child page and will not stop mid-run, which can download large amounts of potentially sensitive data.
(2) Its cleanup script will delete the oldest zip files and output subdirectories under the configured workspace path, which could remove unrelated files.

The tool also relies on a logged-in browser session or cookies (session credentials) that are not declared as required environment variables; avoid pasting cookies into untrusted tools. Recommendations: set outputDir and workspaceDir to an isolated test folder, review or modify the cleanup script (or disable automatic deletion), test on a small subtree first, and prefer a dedicated Confluence API token (scoped to read-only) over browser cookies. If you can provide the skill's source code or clarify how cookies/session data are handled, I can reassess with higher confidence.
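The token recommendation can be sketched as follows. This is a minimal illustration, not part of the skill: it assumes a Confluence Cloud site, and the site URL, page ID, and function names are hypothetical. Confluence Cloud accepts HTTP Basic auth built from an account email plus an API token, which avoids reusing browser session cookies.

```python
import base64
import urllib.request


def basic_auth_header(email: str, api_token: str) -> str:
    """Build the Basic auth value Confluence Cloud accepts for
    email + API token, avoiding browser session cookies entirely."""
    raw = f"{email}:{api_token}".encode("utf-8")
    return "Basic " + base64.b64encode(raw).decode("ascii")


def fetch_page(site: str, page_id: str, email: str, api_token: str) -> bytes:
    """Read-only GET of a single page body via the REST API."""
    url = f"https://{site}/wiki/rest/api/content/{page_id}?expand=body.storage"
    req = urllib.request.Request(url, headers={
        "Authorization": basic_auth_header(email, api_token),
        "Accept": "application/json",
    })
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```

In practice the token should come from an environment variable or secrets store, never hard-coded or pasted into an untrusted tool.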

Review Dimensions

Purpose & Capability
note: The skill's name/description (Confluence reader that converts pages to Markdown and saves images) matches the SKILL.md instructions: enumerating page trees, converting HTML→MD, and saving images. There are no unrelated binaries or external services requested. However, the requirement to always fetch every child page (no partial fetch) is strict and may be disproportionate for many legitimate uses.
Instruction Scope
concern: Runtime instructions tell the agent/user to (a) open a browser and stay logged in (relying on a session), (b) execute JavaScript in the browser console to scrape page links, (c) optionally use a cookie with the Confluence REST API, (d) recursively fetch all child pages and images without skipping, and (e) run PowerShell that deletes zip files and output subdirectories when storage thresholds are exceeded. These behaviors broaden the scope beyond simple conversion: they depend on user session cookies, require executing browser console JS and local PowerShell, and include destructive file operations that may affect other data.
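A safer shape for the deletion behavior flagged above can be sketched as follows. This is not the skill's actual PowerShell script; it is a dry-run-by-default sketch, and the workspace layout and keep count are assumptions. The point is that a destructive cleanup should report its selection before touching anything.

```python
from pathlib import Path


def prune_old_zips(workspace: Path, keep: int = 5, dry_run: bool = True) -> list[Path]:
    """Select (and optionally delete) the oldest *.zip files directly
    under `workspace`, keeping the `keep` newest. Defaults to a dry run
    so nothing is removed until the selection has been reviewed."""
    zips = sorted(workspace.glob("*.zip"), key=lambda p: p.stat().st_mtime)
    doomed = zips[:-keep] if keep else zips
    if not dry_run:
        for p in doomed:
            p.unlink()
    return doomed
```

Scoping the glob to one directory (rather than recursing) and returning the doomed list keeps the blast radius visible, which is exactly what the reviewed script lacks.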
Install Mechanism
ok: No install spec or code is included (instruction-only), so nothing is downloaded or installed by the skill itself. That reduces supply-chain risk. Note: the agent's runtime instructions still call for executing local PowerShell and browser JS, which is operational risk but not an installation risk.
Credentials
concern: The skill declares no env vars/credentials, but the instructions require a logged-in browser session and include functions that accept a cookie string for REST API calls — effectively requiring session credentials that are not declared. The cleanup script targets workspaceDir and *.zip files in the workspace root, which may contain unrelated user data; that access is broader than just writing the skill's own outputs.
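If the credential were declared explicitly, the runtime side could fail fast on a missing token instead of silently falling back to browser cookies. A minimal sketch, assuming a hypothetical variable name like CONFLUENCE_API_TOKEN:

```python
import os


def require_token(var: str = "CONFLUENCE_API_TOKEN") -> str:
    """Read the declared credential from the environment, refusing to
    run (rather than reusing session cookies) when it is absent."""
    token = os.environ.get(var)
    if not token:
        raise RuntimeError(f"{var} is not set; refusing to fall back to browser cookies")
    return token
```

Declaring the variable in the skill manifest would also let a scanner like this one verify the credential surface instead of inferring it from the instructions.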
Persistence & Privilege
ok: The skill does not request always:true or otherwise demand permanent inclusion. It does instruct creating timestamped output directories and removing old directories/zip files within configured paths, which is normal for a scraper but carries the deletion risk noted above.