Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Links to PDFs

v0.0.1

Scrape documents from Notion, DocSend, PDFs, and other sources into local PDF files. Use when the user needs to download, archive, or convert web documents to PDF format. Supports authentication flows for protected documents and session persistence via profiles. Returns local file paths to downloaded PDFs.

Security Scan
VirusTotal: Suspicious
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The SKILL.md describes a scraper that uses a globally installed npm package, session profiles, and an LLM fallback (Claude). That functionality aligns with 'download/convert webpages to PDF', but the skill metadata declares no install step, no config paths, and no required credentials, which is inconsistent with the described capabilities: a daemon, profiles, and LLM API access all imply filesystem and credential usage.
Instruction Scope
The runtime instructions tell the user to install and run an external CLI that performs browser automation, accepts site credentials (email/password), persists session cookies/profiles, auto-checks NDA checkboxes, and sends page HTML to an LLM (Claude) as a fallback. Those behaviors go well beyond a simple 'download a PDF' helper: they collect and transmit potentially sensitive content and credentials.
Install Mechanism
Although the skill bundle contains no install spec, the SKILL.md explicitly tells users to run `npm install -g docs-scraper` (a global install from the npm registry). That is a moderate-to-high-risk action because it fetches and executes third-party code from outside the skill bundle, and no source URL, homepage, or verified release is provided in the metadata to validate the package.
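As one precaution, the package tarball can be fetched and listed without executing any install scripts, so its contents can be inspected before anything runs. A minimal sketch, assuming the `docs-scraper` name from the SKILL.md; the snippet degrades gracefully when the registry is unreachable:

```shell
# Fetch the tarball only; --ignore-scripts stops npm lifecycle scripts
# from executing during the fetch. Nothing is installed.
if tarball=$(npm pack docs-scraper --ignore-scripts 2>/dev/null); then
  tar -tzf "$tarball" | head -20   # list contents without extracting
  result="fetched:$tarball"
else
  result="not-fetched"             # offline, npm missing, or package gone
fi
echo "$result"
```

Reading the listed files (especially `package.json` `scripts` and any `postinstall` hooks) before a real install is the point of this step.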
Credentials
The SKILL.md mentions an LLM fallback using Claude and describes handling site credentials and session profiles, yet the skill metadata declares no environment variables or config paths. The missing declarations for an external LLM API key (and for where profiles are stored and secured) are a proportionality and transparency mismatch: the tool will likely require secrets and filesystem storage that are not declared.
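Before running the tool, it is worth checking which API-key variables your shell already exposes, since any child process inherits them. A sketch; the variable names (ANTHROPIC_API_KEY, CLAUDE_API_KEY) are assumptions, because the skill metadata declares none:

```shell
# List which of the assumed key variables are set in this environment.
# Any variable listed here would be readable by the scraper process.
exposed=""
for var in ANTHROPIC_API_KEY CLAUDE_API_KEY; do
  if printenv "$var" >/dev/null 2>&1; then
    exposed="$exposed $var"
  fi
done
if [ -z "$exposed" ]; then exposed=" none"; fi
summary="exposed keys:$exposed"
echo "$summary"
```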
Persistence & Privilege
The scraper runs a daemon that auto-starts, keeps browser instances and session profiles alive, and stores files under ~/.docs-scraper/output. The skill metadata declares none of these config paths and does not mention persistent background activity. That lack of disclosure about persistent files and processes is a concern for persistence and privilege scope.
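A quick way to see whether the daemon has already persisted state on a machine is to probe the directory named in the findings. A minimal sketch:

```shell
# ~/.docs-scraper is the state directory reported in the scan findings.
state_dir="$HOME/.docs-scraper"
if [ -d "$state_dir" ]; then
  status="present"
  du -sh "$state_dir"   # show how much has accumulated
else
  status="absent"
fi
echo "persisted state: $status"
```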
What to consider before installing
Before installing or using this skill:
1) Treat the npm package as unverified: find its npm/GitHub page and inspect the source and maintainer.
2) Do not provide real account passwords or other sensitive credentials until you confirm how and where they are stored; profile/session cookies will be written to disk under ~/.docs-scraper.
3) The LLM fallback uploads page HTML to an external service (Claude), which can leak private document contents; verify what API key is required and how data is sent.
4) Prefer running the scraper in a sandboxed environment, or use a browser/manual export for sensitive documents.
5) If you need this capability, ask the publisher for a homepage/repo, a signed release, and clear documentation of credential handling and of where files/processes are persisted; the absence of those is a red flag.
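The sandboxing advice above can be approximated with a throwaway container, so credentials, session profiles, and output never touch the host home directory. A sketch with an illustrative image tag and an illustrative invocation; adjust both to your environment:

```shell
# node:20-slim is an illustrative image, not a vetted recommendation.
# The container is removed on exit (--rm) and only /tmp/scrape-out is shared.
if command -v docker >/dev/null 2>&1; then
  sandbox="docker run --rm -v /tmp/scrape-out:/out node:20-slim"
  echo "would run: $sandbox npx docs-scraper ..."
else
  sandbox="unavailable"
  echo "docker not found; consider a VM or a disposable user account instead"
fi
```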

Like a lobster shell, security has layers — review code before you run it.

latest · vk97ee8d9xjn55g869fxg49pq9580cb5n

License

MIT-0
Free to use, modify, and redistribute. No attribution required.
