Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Links to PDFs

v0.0.1

Scrape documents from Notion, DocSend, PDFs, and other sources into local PDF files. Use when the user needs to download, archive, or convert web documents to PDF format. Supports authentication flows for protected documents and session persistence via profiles. Returns local file paths to downloaded PDFs.

2 stars · 2.1k downloads · 1 version current · 1 all-time
Security Scan

VirusTotal: Suspicious (View report →)
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The SKILL.md describes a scraper that uses a globally-installed npm package, session profiles, and an LLM fallback (Claude). That functionality aligns with 'download/convert webpages to PDF', but the skill metadata declares no install, no config paths, and no required credentials — which is inconsistent with the described capabilities (daemon, profiles, and LLM API access all imply filesystem and credential usage).
Instruction Scope
The runtime instructions direct the user to install and run an external CLI that performs browser automation, accepts site credentials (email/password), persists session cookies/profiles, auto-checks NDA checkboxes, and sends page HTML to an LLM (Claude) as a fallback. Those behaviors go beyond a simple 'download a PDF' helper and involve collecting and transmitting potentially sensitive content and credentials.
Install Mechanism
Although the skill bundle contains no install spec, the SKILL.md explicitly tells users to run `npm install -g docs-scraper` (global install from the npm registry). That is a moderate-to-high risk action because it fetches and executes third-party code outside the skill bundle; no source URL, homepage, or verified release is provided in the metadata to validate the package.
Credentials
The SKILL.md mentions an LLM fallback using Claude and also describes handling site credentials and session profiles, yet the skill metadata declares no environment variables or config paths. The missing declarations for an external LLM API key (and for where profiles are stored and secured) are a proportionality and transparency mismatch: the tool will likely require secrets and filesystem storage that are not declared.
Persistence & Privilege
The scraper runs a daemon that auto-starts, keeps browser instances and session profiles, and stores files under ~/.docs-scraper/output. The skill metadata does not declare these config paths or mention persistent background activity. The lack of disclosure about persistent files/processes is a concern for persistence and privilege scope.
What to consider before installing
Before installing or using this skill:

  1. Treat the npm package as unverified: find its npm/GitHub page and inspect the source and maintainer.
  2. Do not provide real account passwords or sensitive credentials until you confirm how and where they are stored; profile/session cookies will be written to disk (~/.docs-scraper).
  3. The LLM fallback uploads page HTML to an external service (Claude), which can leak private document contents; verify what API key is required and how data is sent.
  4. Prefer running the scraper in a sandboxed environment, or use a browser/manual export for sensitive documents.
  5. If you need this capability, ask the publisher for a homepage/repo, a signed release, and clear docs on credential handling and where files/processes are persisted; the absence of those is a red flag.
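The first point can be partly scripted: `npm view` prints a package's declared metadata without installing it. A minimal sketch, assuming the "undefined" sentinel npm sometimes prints for missing fields; the sample value stands in for the live lookup, which is left commented out:

```shell
# Hypothetical pre-install check: treat a package with no declared
# repository as unverified. `npm view <pkg> repository.url` prints the
# field, or nothing/"undefined" when it is absent.
has_repo() {
  case "$1" in
    ""|undefined) return 1 ;;   # missing field: unverified
    *) return 0 ;;
  esac
}

# Live lookup (commented out to keep the sketch self-contained):
#   repo=$(npm view docs-scraper repository.url 2>/dev/null)
repo="undefined"   # sample value standing in for the live lookup
if has_repo "$repo"; then
  echo "inspect before installing: $repo"
else
  echo "no repository declared; treat the package as unverified" >&2
fi
```

Even when a repository is declared, confirm the published tarball actually matches that source before trusting it.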

Like a lobster shell, security has layers — review code before you run it.

latest: vk97ee8d9xjn55g869fxg49pq9580cb5n
2.1k downloads
2 stars
1 version
Updated 22h ago
v0.0.1
MIT-0

docs-scraper

CLI tool that scrapes documents from various sources into local PDF files using browser automation.

Installation

npm install -g docs-scraper

Quick start

Scrape any document URL to PDF:

docs-scraper scrape https://example.com/document

Returns local path: ~/.docs-scraper/output/1706123456-abc123.pdf
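The path above suggests a `<unix-timestamp>-<id>.pdf` naming scheme. This sketch reproduces that pattern; the six-character id and its alphabet are guesses from the example, not documented behavior:

```shell
# Reproduce the apparent <unix-timestamp>-<id>.pdf output naming scheme.
make_output_name() {
  ts=$(date +%s)                                     # seconds since epoch
  id=$(od -An -N3 -tx1 /dev/urandom | tr -d ' \n')   # 6 random hex chars
  printf '%s-%s.pdf\n' "$ts" "$id"
}
```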

Basic scraping

Scrape with daemon (recommended, keeps browser warm):

docs-scraper scrape <url>

Scrape with named profile (for authenticated sites):

docs-scraper scrape <url> -p <profile-name>

Scrape with pre-filled data (e.g., email for DocSend):

docs-scraper scrape <url> -D email=user@example.com

Direct mode (single-shot, no daemon):

docs-scraper scrape <url> --no-daemon

Authentication workflow

When a document requires authentication (login, email verification, passcode):

  1. Initial scrape returns a job ID:

    docs-scraper scrape https://docsend.com/view/xxx
    # Output: Scrape blocked
    #         Job ID: abc123
    
  2. Retry with data:

    docs-scraper update abc123 -D email=user@example.com
    # or with password
    docs-scraper update abc123 -D email=user@example.com -D password=1234
    
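The retry step can be scripted by capturing the job ID from the blocked-scrape output. The `Job ID: <id>` line format is taken from the example above; the extraction helper itself is hypothetical:

```shell
# Pull the job ID out of a blocked scrape's output so the retry can be
# automated. The "Job ID: <id>" line format comes from the example output.
parse_job_id() {
  sed -n 's/^.*Job ID:[[:space:]]*//p'
}

# Usage (sketch): docs-scraper scrape <url> 2>&1 | parse_job_id
```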

Profile management

Profiles store session cookies for authenticated sites.

docs-scraper profiles list     # List saved profiles
docs-scraper profiles clear    # Clear all profiles
docs-scraper scrape <url> -p myprofile  # Use a profile

Daemon management

The daemon keeps browser instances warm for faster scraping.

docs-scraper daemon status     # Check status
docs-scraper daemon start      # Start manually
docs-scraper daemon stop       # Stop daemon

Note: Daemon auto-starts when running scrape commands.

Cleanup

PDFs are stored in ~/.docs-scraper/output/. The daemon automatically cleans up files older than 1 hour.

Manual cleanup:

docs-scraper cleanup                    # Delete all PDFs
docs-scraper cleanup --older-than 1h    # Delete PDFs older than 1 hour
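A manual equivalent of the 1-hour cleanup can be sketched with find(1). The path and threshold mirror the documented defaults, but this is an approximation, not the tool's actual implementation:

```shell
# Delete PDFs older than 60 minutes under the given directory
# (approximates `docs-scraper cleanup --older-than 1h`).
cleanup_pdfs() {
  dir="$1"
  find "$dir" -name '*.pdf' -type f -mmin +60 -print -delete
}

# Usage (sketch): cleanup_pdfs "$HOME/.docs-scraper/output"
```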

Job management

docs-scraper jobs list         # List blocked jobs awaiting auth

Supported sources

  • Direct PDF links - Downloads PDF directly
  • Notion pages - Exports Notion page to PDF
  • DocSend documents - Handles DocSend viewer
  • LLM fallback - Uses Claude API for any other webpage

Scraper Reference

Each scraper accepts specific -D data fields. Use the appropriate fields based on the URL type.

DirectPdfScraper

Handles: URLs ending in .pdf

Data fields: None (downloads directly)

Example:

docs-scraper scrape https://example.com/document.pdf

DocsendScraper

Handles: docsend.com/view/*, docsend.com/v/*, and subdomains (e.g., org-a.docsend.com)

URL patterns:

  • Documents: https://docsend.com/view/{id} or https://docsend.com/v/{id}
  • Folders: https://docsend.com/view/s/{id}
  • Subdomains: https://{subdomain}.docsend.com/view/{id}

Data fields:

Field      Type      Description
email      email     Email address for document access
password   password  Passcode/password for protected documents
name       text      Your name (required for NDA-gated documents)

Examples:

# Pre-fill email for DocSend
docs-scraper scrape https://docsend.com/view/abc123 -D email=user@example.com

# With password protection
docs-scraper scrape https://docsend.com/view/abc123 -D email=user@example.com -D password=secret123

# With NDA name requirement
docs-scraper scrape https://docsend.com/view/abc123 -D email=user@example.com -D name="John Doe"

# Retry blocked job
docs-scraper update abc123 -D email=user@example.com -D password=secret123

Notes:

  • DocSend may require any combination of email, password, and name
  • Folders are scraped as a table of contents PDF with document links
  • The scraper auto-checks NDA checkboxes when name is provided

NotionScraper

Handles: notion.so/*, *.notion.site/*

Data fields:

Field      Type      Description
email      email     Notion account email
password   password  Notion account password

Examples:

# Public page (no auth needed)
docs-scraper scrape https://notion.so/Public-Page-abc123

# Private page with login
docs-scraper scrape https://notion.so/Private-Page-abc123 \
  -D email=user@example.com -D password=mypassword

# Custom domain
docs-scraper scrape https://docs.company.notion.site/Page-abc123

Notes:

  • Public Notion pages don't require authentication
  • Toggle blocks are automatically expanded before PDF generation
  • Uses session profiles to persist login across scrapes

LlmFallbackScraper

Handles: Any URL not matched by other scrapers (automatic fallback)

Data fields: Dynamic - determined by Claude analyzing the page

The LLM scraper uses Claude to analyze the page HTML and detect:

  • Login forms (extracts field names dynamically)
  • Cookie banners (auto-dismisses)
  • Expandable content (auto-expands)
  • CAPTCHAs (reports as blocked)
  • Paywalls (reports as blocked)

Common dynamic fields:

Field      Type      Description
email      email     Login email (if detected)
password   password  Login password (if detected)
username   text      Username (if the login form uses a username)

Examples:

# Generic webpage (no auth)
docs-scraper scrape https://example.com/article

# Webpage requiring login
docs-scraper scrape https://members.example.com/article \
  -D email=user@example.com -D password=secret

# When blocked, check the job for required fields
docs-scraper jobs list
# Then retry with the fields the scraper detected
docs-scraper update abc123 -D username=myuser -D password=secret

Notes:

  • Requires ANTHROPIC_API_KEY environment variable
  • Field names are extracted from the page's actual form fields
  • Limited to 2 login attempts before failing
  • CAPTCHAs require manual intervention

Data field summary

Scraper        email  password  name  Other
DirectPdf      -      -         -     -
DocSend        ✓      ✓         ✓     -
Notion         ✓      ✓         -     -
LLM Fallback   ✓*     ✓*        -     Dynamic*

*Fields detected dynamically from page analysis

Environment setup (optional)

Only needed for LLM fallback scraper:

export ANTHROPIC_API_KEY=your_key

Optional browser settings:

export BROWSER_HEADLESS=true   # Set false for debugging
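Since the LLM fallback silently becomes unavailable without the key, a quick guard before scripted runs can help. The variable name is from the docs; the check itself is an illustrative addition:

```shell
# Warn early when the LLM fallback scraper would be unavailable.
need_llm_key() {
  [ -n "${ANTHROPIC_API_KEY:-}" ]
}

if ! need_llm_key; then
  echo "warning: ANTHROPIC_API_KEY not set; LLM fallback scraper disabled" >&2
fi
```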

Common patterns

Archive a Notion page:

docs-scraper scrape https://notion.so/My-Page-abc123

Download protected DocSend:

docs-scraper scrape https://docsend.com/view/xxx
# If blocked:
docs-scraper update <job-id> -D email=user@example.com -D password=1234

Batch scraping with profiles:

docs-scraper scrape https://site.com/doc1 -p mysite
docs-scraper scrape https://site.com/doc2 -p mysite
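The two-line pattern generalizes to a URL list. This loop is an illustrative wrapper: urls.txt, the SCRAPER override, and the continue-on-failure policy are choices of the sketch, not features of the tool:

```shell
# Scrape every URL in a file with one shared profile, continuing past
# failures. SCRAPER is overridable so the loop can be dry-run tested.
SCRAPER="${SCRAPER:-docs-scraper}"

batch_scrape() {
  urls_file="$1"; profile="$2"
  while IFS= read -r url; do
    "$SCRAPER" scrape "$url" -p "$profile" || echo "failed: $url" >&2
  done < "$urls_file"
}

# Usage (sketch): batch_scrape urls.txt mysite
```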

Output

Success: local file path (e.g., ~/.docs-scraper/output/1706123456-abc123.pdf)
Blocked: job ID plus the required credential types

Troubleshooting

  • Timeout: docs-scraper daemon stop && docs-scraper daemon start
  • Auth fails: docs-scraper jobs list to check pending jobs
  • Disk full: docs-scraper cleanup to remove old PDFs
