Skill v1.0.0

ClawScan security

deep-scraper · ClawHub's context-aware review of the artifact, metadata, and declared behavior.

Scanner verdict

Suspicious · Apr 23, 2026, 4:33 AM
Verdict
suspicious
Confidence
medium
Model
gpt-5-mini
Summary
The skill's code matches a YouTube transcript scraper, but the package/README overclaims features and the runtime instructions expect a Dockerfile that is not included; several inconsistencies warrant caution before installing or running it.
Guidance
Do not run this on production hosts or with privileged access yet. Key concerns:
1. The README and package.json expect a Docker image and a Dockerfile, but no Dockerfile is included. Ask the publisher for the Dockerfile and confirm its contents before building.
2. The description claims X/Twitter support, but the shipped code only implements YouTube/generic scraping. Ask for clarification or updated code if you need X/Twitter.
3. Building and running Docker images from unknown sources can execute arbitrary code on your host. Inspect the Dockerfile and image contents (or run the image in an isolated sandbox/VM) before use.
4. The tool intentionally clears cookies and intercepts network requests to fetch transcripts; this behavior can bypass site protections and may violate website terms of service.
If you proceed, run in an isolated environment, review the missing Dockerfile once it is supplied, and verify that the image contains only the expected Node dependencies and scripts.
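Before building anything, a quick pre-flight check can confirm that the shipped files match what SKILL.md expects. A minimal Node sketch (the required file names mirror this report's findings; `main.js` in the example manifest is a placeholder, not a file confirmed in the package):

```javascript
// Pre-install sanity check: verify the skill directory ships every file
// the README/SKILL.md instructions assume exists before running docker build.
const REQUIRED = ['Dockerfile', 'package.json', 'SKILL.md'];

function missingFiles(shipped, required = REQUIRED) {
  const have = new Set(shipped);
  return required.filter((f) => !have.has(f));
}

// Against a manifest like the one described in this report (no Dockerfile):
const manifest = ['SKILL.md', 'package.json', 'main.js'];
console.log(missingFiles(manifest)); // → ['Dockerfile']
```

An empty return value means the build instructions are at least satisfiable; anything else is a question for the publisher before proceeding.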

Review Dimensions

Purpose & Capability
concern: The description promises 'deep' scraping for YouTube and X/Twitter and a Dockerized Crawlee environment. The actual code implements YouTube-focused scraping only (both handlers target YouTube or generic pages); there is no X/Twitter-specific logic. SKILL.md and package.json state that Docker is required, yet the skill manifest lists no required binaries; additionally, SKILL.md instructs keeping a Dockerfile in the skill directory, but no Dockerfile appears in the provided file manifest. These mismatches suggest the published metadata and the shipped files are out of sync.
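The capability gap above comes down to URL routing. A hypothetical sketch of the dispatch pattern the review describes, in which X/Twitter URLs fall through to the generic path (handler names are illustrative, not taken from the skill's code):

```javascript
// Illustrative router: with only YouTube and generic handlers defined,
// x.com / twitter.com URLs get no dedicated treatment.
function pickHandler(url) {
  const host = new URL(url).hostname.replace(/^www\./, '');
  if (host === 'youtube.com' || host === 'youtu.be') return 'youtubeHandler';
  return 'genericHandler'; // X/Twitter lands here despite the README's claim
}

console.log(pickHandler('https://www.youtube.com/watch?v=abc')); // → 'youtubeHandler'
console.log(pickHandler('https://x.com/some/status/123'));       // → 'genericHandler'
```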
Instruction Scope
note: SKILL.md instructs building and running a Docker image, copying the skill directory into a host 'skills/' folder, and running the node handlers inside the container. The runtime steps and the code stay within scraping behavior (clearing cookies, simulating UI actions, intercepting network requests, and printing JSON to stdout). The instructions do not ask for unrelated system credentials or for data to be exfiltrated to third-party endpoints. Still, the guidance to 'penetrate protections' and the UI/network-interception behavior can be used to bypass site protections; that is consistent with the stated scraping purpose but has legal and terms-of-service implications the user should consider.
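For context on the interception behavior, code like this typically filters intercepted traffic with a URL predicate. A sketch of that kind of predicate; the endpoint patterns are assumptions about YouTube's caption API, not patterns confirmed in the skill's code:

```javascript
// Illustrative only: decide whether an intercepted response URL looks like
// a transcript/caption endpoint worth capturing.
const TRANSCRIPT_PATTERNS = [/\/api\/timedtext/, /get_transcript/];

function isTranscriptRequest(url) {
  return TRANSCRIPT_PATTERNS.some((re) => re.test(url));
}

console.log(isTranscriptRequest('https://www.youtube.com/api/timedtext?v=abc')); // → true
console.log(isTranscriptRequest('https://www.youtube.com/watch?v=abc'));         // → false
```

Reviewing which patterns the real code matches (and where captured data is sent) is a worthwhile step before running it.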
Install Mechanism
concern: This is instruction-only with included Node files and a package.json (no install spec). SKILL.md requires building a Docker image from the skill directory (docker build -t skillboss-crawlee skills/deep-scraper/), but no Dockerfile is present in the listed files. As shipped, the build step cannot succeed; if the publisher later adds a Dockerfile, building and running arbitrary Docker images from an unknown source carries higher risk. The expected dependencies (crawlee, playwright) are appropriate for the described functionality but heavy; the absence of an explicit, included Dockerfile is the primary install risk.
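If the publisher does supply a Dockerfile, a legitimate one for a Crawlee/Playwright skill would plausibly be only a few layers. A hypothetical baseline to compare against when auditing (the base image tag, entry script, and dependency steps are assumptions, not the publisher's actual file):

```dockerfile
# Hypothetical baseline for review purposes only. Extra layers beyond this
# shape — piped curl|sh installs, unknown registries, COPY of host paths
# outside the skill directory — are red flags.
FROM mcr.microsoft.com/playwright:v1.44.0-jammy
WORKDIR /app
COPY package.json ./
RUN npm install --omit=dev
COPY . .
CMD ["node", "main.js"]
```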
Credentials
ok: The skill requests no environment variables, no credentials, and no config paths. The code does not read env vars or secret files. Output is written to stdout only. From a credential-scope viewpoint, the skill is proportionate to its scraping purpose.
Persistence & Privilege
ok: The skill does not request persistent 'always' inclusion and does not modify other skills or system settings. It runs as a containerized task per the instructions; autonomous invocation is allowed by default but is not combined with other high-risk privileges.