Skill v3.0.0

ClawScan security

Sniplink · ClawHub's context-aware review of the artifact, metadata, and declared behavior.

Scanner verdict

Suspicious · Apr 8, 2026, 7:48 PM
Verdict
suspicious
Confidence
medium
Model
gpt-5-mini
Summary
The skill's behavior mostly matches a URL-saver, but there are omissions and ambiguous steps (undeclared tooling, unclear storage destination, and use of third‑party APIs) that should be clarified before installing.
Guidance
This skill generally does what it says (extracts and saves link metadata), but before installing, ask the author/platform these questions:

1) Which runtime tools does it expect to exist (gh, curl, browser_navigate, web_fetch, web_search)?
2) Where are saved links stored (local agent storage, OpenClaw cloud, a third-party DB)? Who can read them, and what is the retention policy?
3) The skill calls a third-party fxtwitter API for tweets: are you comfortable that tweet text/URLs will be sent to that service?
4) Will it ever require GitHub or other credentials from you (GH_TOKEN, etc.)?

If the answers are unclear or you cannot verify the storage or third-party endpoints, treat this as higher risk and avoid installing until clarified.

Review Dimensions

Purpose & Capability
note · The declared purpose (one-shot URL saver) matches the instructions: extracting metadata from GitHub, tweets, and web pages and presenting them for user approval. However, the SKILL.md repeatedly instructs use of platform tools and CLIs (e.g., 'gh api', 'curl' to fxtwitter, 'web_fetch', 'browser_navigate', 'web_search') while the registry metadata lists no required binaries or credentials, a mild mismatch. It's plausible these are built-in agent tools, but the skill does not declare that dependency explicitly.
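Since the registry metadata declares none of the tools the SKILL.md invokes, a quick preflight check of your own environment can show which of them actually exist there. This is a hedged sketch for the reviewer's own use, not part of the skill; the tool names are the ones the review lists.

```python
import shutil

def available_tools(names):
    """Map each CLI name to whether it is found on PATH."""
    return {name: shutil.which(name) is not None for name in names}

# CLIs the SKILL.md reportedly expects (per the review above).
print(available_tools(["gh", "curl"]))
```

Note that browser_navigate, web_fetch, and web_search are agent-internal tools, not shell binaries, so only gh and curl can be checked this way.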
Instruction Scope
note · Instructions stay largely within the stated purpose and explicitly require user confirmation before saving. Positive boundaries are stated (do not scrape behind logins/paywalls, respect robots.txt). Two things to flag: (1) the skill directs network calls to a third-party proxy API (https://api.fxtwitter.com), which will receive tweet URLs/text; this is expected for tweet extraction but is a privacy/exfiltration surface the user should know about. (2) The instructions say 'save to database' but do not specify where that database lives or who can read it.
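To see what actually leaves your machine during tweet extraction, it helps to picture the shape of the fxtwitter call. The endpoint form below (mirroring the tweet's path under api.fxtwitter.com) is an assumption based on common fxtwitter usage, not verified against the skill's source; the point is that the full tweet path, including username and tweet ID, is disclosed to the third party.

```python
from urllib.parse import urlparse

def fxtwitter_endpoint(tweet_url):
    """Build the third-party URL that would receive this tweet's path.

    Assumes the common fxtwitter pattern of mirroring the tweet path;
    the skill's exact call was not verified.
    """
    path = urlparse(tweet_url).path  # e.g. /jack/status/20
    return "https://api.fxtwitter.com" + path

print(fxtwitter_endpoint("https://twitter.com/jack/status/20"))
```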
Install Mechanism
ok · No install spec or code files, so this is the lowest technical risk. Nothing is downloaded or written by an installer step, according to the registry data.
Credentials
note · The skill requests no env vars or credentials, which is consistent with the absence of declared secrets. But it references GitHub API usage via 'gh api' (which in practice can use GH tokens/config) and makes external HTTP calls; the SKILL.md does not explain whether it will rely on any existing agent credentials or require the user to supply GH/Twitter credentials. This ambiguity should be clarified.
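Because 'gh' silently picks up ambient credentials, it is worth checking which tokens the skill's GitHub calls could inherit from your session. The variable names below are the standard ones gh honors (GH_TOKEN, GITHUB_TOKEN); this sketch only reports which are set, never their values.

```python
import os

CANDIDATE_VARS = ("GH_TOKEN", "GITHUB_TOKEN")

def ambient_github_tokens(env):
    """Return which credential variables are set, without exposing values."""
    return sorted(name for name in CANDIDATE_VARS if env.get(name))

print(ambient_github_tokens(os.environ))
```

An empty result does not guarantee safety: gh can also authenticate via its own config file (from 'gh auth login'), which this check does not cover.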
Persistence & Privilege
note · always:false and user-invocable settings are appropriate. The main open question is persistence: the skill saves data to a 'database' on user approval but does not say where (agent-local storage, a cloud service owned by the skill author, or user-owned storage). That gap affects who can access saved links and how long they are retained.