Links to PDFs

Pass. Audited by VirusTotal on May 12, 2026.

Overview

Type: OpenClaw Skill
Name: links-to-pdfs
Version: 0.0.1

The skill bundle is classified as suspicious due to its inherently high-risk capabilities, even though they align with the stated purpose. The `SKILL.md` instructs the agent to install an external `docs-scraper` CLI via `npm`, which introduces supply-chain risk. The skill explicitly handles sensitive user credentials (email, password) for authentication and requires access to the `ANTHROPIC_API_KEY` environment variable for external API calls, both significant data-access capabilities. It also manages a persistent background daemon and stores session cookies, enabling continued operation and, if compromised, session hijacking. While these actions are presented as necessary for document scraping, together they represent a broad attack surface and high privilege requirements, though the provided instructions show no clear malicious intent.

Findings (5)

Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

What this means

Installing the skill may require trusting external npm code that was not included in the reviewed artifacts and that can access authenticated web content.

Why it was flagged

The skill installs an unpinned global npm CLI, and the submitted artifact set contains no reviewed implementation or install spec for it. That is material because this CLI is expected to run browser automation and handle credentials and session cookies.

Skill content
`npm install -g docs-scraper`
Recommendation

Verify the npm package source and maintainer before installing, prefer a pinned version, and run it in an isolated environment if possible.
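A minimal pre-install sketch of that recommendation, assuming the npm package name matches the CLI; the pinned version `0.0.1` mirrors the skill's stated version but is illustrative, not confirmed for the package itself:

```shell
# Vet the package before trusting a global install.
npm view docs-scraper maintainers repository.url  # who publishes it, where the source lives
npm pack docs-scraper@0.0.1                       # download the tarball without installing
tar -tzf docs-scraper-0.0.1.tgz                   # inspect contents before running anything
npm install -g docs-scraper@0.0.1                 # install a pinned exact version, not "latest"
```

Pinning an exact version prevents a later compromised release from being pulled in silently by a re-install.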

What this means

The agent could help access a protected DocSend document in a way that appears to accept NDA-related terms on the user's behalf.

Why it was flagged

Automatically checking NDA boxes can represent a user or organization accepting legal or access terms, but the artifact does not show a separate approval step for that high-impact action.

Skill content
The scraper auto-checks NDA checkboxes when name is provided
Recommendation

Require explicit user confirmation before using any option that checks NDA or agreement boxes, and avoid using this automation for documents with legal acceptance requirements unless authorized.

What this means

Sensitive page content from private documents or authenticated sites could be sent to a third-party AI provider during scraping.

Why it was flagged

The skill says unmatched URLs are handled by an automatic Claude-based fallback, and later states Claude analyzes the page HTML. The artifact does not clearly bound whether private/authenticated page HTML is excluded or how the user opts out.

Skill content
LLM fallback - Uses Claude API for any other webpage
Recommendation

Use the skill only for documents you are allowed to share with the provider, and prefer a mode that disables LLM fallback for confidential pages.

What this means

The tool can retain authenticated sessions for sites such as Notion or DocSend after the initial scrape.

Why it was flagged

Credential and session handling is expected for scraping protected documents, but saved cookies are high-impact because they may allow future authenticated access.

Skill content
Profiles store session cookies for authenticated sites.
Recommendation

Use dedicated or least-privilege accounts where possible, avoid saving profiles for highly sensitive sites, and run `docs-scraper profiles clear` when finished.

Note (High Confidence)
ASI10: Rogue Agents
What this means

A local background process may remain active and keep browser resources or session state available until stopped.

Why it was flagged

A background daemon is disclosed and useful for keeping browsers warm, but it means the tool can keep running beyond a single scrape command.

Skill content
Note: Daemon auto-starts when running scrape commands.
Recommendation

Check `docs-scraper daemon status` and stop it with `docs-scraper daemon stop` when you no longer need it.
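A short cleanup sketch combining the commands quoted in this report (assuming the CLI behaves as its documentation describes):

```shell
docs-scraper daemon status   # see whether the background daemon is still active
docs-scraper daemon stop     # stop it so no browser or session state stays warm
docs-scraper profiles clear  # remove saved session cookies for authenticated sites
```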