Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Web Scraper - Firecrawl

v1.0.0

Web scraping and content extraction using Firecrawl API. Use when users need to crawl websites, extract structured data, convert web pages to markdown, scrap...

0 stars · 146 downloads · 0 current · 1 all-time
by antonia huang (@antonia-sz)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for antonia-sz/web-scraper-firecrawl.

Prompt Preview: Install & Setup
Install the skill "Web Scraper - Firecrawl" (antonia-sz/web-scraper-firecrawl) from ClawHub.
Skill page: https://clawhub.ai/antonia-sz/web-scraper-firecrawl
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install web-scraper-firecrawl

ClawHub CLI

Package manager switcher

npx clawhub@latest install web-scraper-firecrawl
Security Scan
VirusTotal
Pending
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The skill's name, description, SKILL.md, and the included script all describe a Firecrawl API client (scrape, crawl, map, batch, extract), which is coherent with the declared purpose. However, the registry metadata lists no required environment variables or primary credential, even though both SKILL.md and the script require a FIRECRAWL_API_KEY to operate.
Instruction Scope
SKILL.md instructs users to set FIRECRAWL_API_KEY and to install the Python 'requests' dependency, but the included script reads FIRECRAWL_API_KEY from the environment and uses urllib (not requests). The instructions expect an external API key and allow reading schema and URL-list files, which is expected behavior. However, the mismatch between docs and code, together with an apparent truncation near the end of the script (an isolated 's' and cut-off file content), reduces confidence in correctness.
Install Mechanism
No install spec is provided (instruction-only installation) and the code file is included in the skill bundle. No remote downloads or archive extraction are used, which minimizes install-time risk.
Credentials
Only FIRECRAWL_API_KEY is used by the script (reasonable for a third-party scraping API), but the skill metadata did not declare any required env vars or a primary credential. That omission could lead users to supply a secret without expecting to. No other unrelated credentials are requested.
Persistence & Privilege
The skill is not marked always:true and does not request elevated or persistent system-wide privileges. It does write output files when asked and reads user-provided files (schemas, URL lists), which is expected behavior.
What to consider before installing
This skill appears to be a Firecrawl API client, which requires you to provide a FIRECRAWL_API_KEY. Before installing:

  1. Confirm the registry metadata is updated to list FIRECRAWL_API_KEY as a required/primary credential, so you know what secret the skill needs.
  2. Inspect the included scripts: SKILL.md recommends installing 'requests' but the script uses urllib, and the script appears truncated and contains a stray character; ask the publisher for a corrected release.
  3. Only provide an API key you trust the endpoint (https://firecrawl.dev) with; consider creating a limited-scope or replaceable key.
  4. Because the skill contacts an external API, avoid supplying highly privileged credentials or long-lived tokens unless you trust the provider.

If the author corrects the metadata and the script (removing the truncation and aligning docs with code), reassess; that would likely move this to 'benign'.

Like a lobster shell, security has layers — review code before you run it.

crawler · data-extraction · latest · web-scraping
146 downloads
0 stars
1 version
Updated 1mo ago
v1.0.0
MIT-0

Firecrawl Skill

Powerful web scraping powered by Firecrawl - turn websites into LLM-ready markdown.

Overview

Firecrawl provides APIs for:

  • Scrape - Single page extraction to markdown
  • Crawl - Entire site crawling with depth control
  • Map - URL discovery from a starting point
  • Batch - Multiple URL processing
  • Extract - Structured data extraction with schemas

Prerequisites

  1. Firecrawl API Key - Get free tier at https://firecrawl.dev
  2. Install Python dependencies: requests

Configuration

Set environment variable:

export FIRECRAWL_API_KEY="fc-your-api-key"
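Under the hood, the bundled script reads this variable from the environment and attaches it to API requests. A minimal sketch of that pattern, assuming Bearer-token auth and the v1 base URL (both are assumptions; check scripts/firecrawl.py for the exact endpoint it targets):

```python
import json
import os
import urllib.request

API_BASE = "https://api.firecrawl.dev/v1"  # assumed endpoint, not confirmed by the skill

def firecrawl_request(path, payload):
    """Build an authenticated POST request for the Firecrawl API.

    Reads FIRECRAWL_API_KEY from the environment, mirroring the export
    shown above; fails early with a clear error if the key is missing.
    """
    api_key = os.environ.get("FIRECRAWL_API_KEY")
    if not api_key:
        raise RuntimeError("FIRECRAWL_API_KEY is not set")
    return urllib.request.Request(
        API_BASE + path,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
```

Failing fast when the variable is unset avoids sending unauthenticated requests and burning rate limit on guaranteed 401s.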

Usage

Single Page Scraping

# Basic scrape
firecrawl scrape https://example.com

# With specific options
firecrawl scrape https://example.com --formats markdown,html --only-main-content

# Wait for JS rendering
firecrawl scrape https://spa-app.com --wait-for 2000
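Each of the flags above maps onto a field in the JSON body sent to the scrape endpoint. A hedged sketch of that translation; the field names (formats, onlyMainContent, waitFor) are assumptions based on common Firecrawl usage, not confirmed by this skill's code:

```python
def build_scrape_payload(url, formats=("markdown",), only_main_content=False, wait_for=0):
    """Translate the CLI flags above into a JSON body for a scrape call.

    --formats           -> "formats" list
    --only-main-content -> "onlyMainContent" flag
    --wait-for          -> "waitFor" delay in milliseconds
    """
    payload = {"url": url, "formats": list(formats)}
    if only_main_content:
        payload["onlyMainContent"] = True
    if wait_for:
        payload["waitFor"] = wait_for
    return payload
```

Optional fields are omitted rather than sent as defaults, which keeps the request body minimal.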

Site Crawling

# Crawl entire site (up to limit)
firecrawl crawl https://docs.example.com --limit 50

# With depth control
firecrawl crawl https://blog.example.com --max-depth 2 --limit 100

# Include/exclude patterns
firecrawl crawl https://site.com --include "/blog/*" --exclude "/admin/*"

# Custom formats
firecrawl crawl https://docs.example.com --formats markdown,links
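The --include/--exclude patterns behave like globs matched against the URL path. A sketch of that filter using fnmatch; the skill's actual matching semantics are an assumption here:

```python
from fnmatch import fnmatch
from urllib.parse import urlparse

def keep_url(url, include=None, exclude=None):
    """Return True if the URL's path passes the include/exclude globs.

    Exclude patterns win over include patterns; with no include list,
    everything not excluded is kept.
    """
    path = urlparse(url).path
    if exclude and any(fnmatch(path, pat) for pat in exclude):
        return False
    if include:
        return any(fnmatch(path, pat) for pat in include)
    return True
```

For example, with include=["/blog/*"] only blog paths survive, while exclude=["/admin/*"] drops admin pages regardless of other matches.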

URL Mapping

# Discover all URLs from a site
firecrawl map https://example.com

# With search term
firecrawl map https://docs.python.org --search "tutorial"

Batch Processing

# Scrape multiple URLs
firecrawl batch urls.txt --output ./scraped/

# From JSON list
firecrawl batch urls.json --formats markdown --concurrency 5
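A --concurrency flag like the one above typically means a bounded worker pool on the client side. A sketch with ThreadPoolExecutor and a stand-in scrape function; the real skill may instead use a server-side batch endpoint:

```python
from concurrent.futures import ThreadPoolExecutor

def batch_scrape(urls, scrape_fn, concurrency=5):
    """Scrape many URLs with at most `concurrency` in flight.

    scrape_fn is a placeholder for a single-page scrape call;
    pool.map preserves the input order in the results.
    """
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        return list(pool.map(scrape_fn, urls))
```

Bounding concurrency matters here because scraping APIs are rate-limited; an unbounded pool would trade speed for 429 errors.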

Structured Extraction

# Extract specific data using CSS selectors
firecrawl extract https://example.com/products \
  --schema '{"name": ".product-title", "price": ".price", "description": ".desc"}'

# Extract to JSON
firecrawl extract https://news.example.com/article --schema article-schema.json
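The two --schema forms above (inline JSON vs. a .json file) suggest a small dispatch step before the extract call. A sketch of how that argument might be normalized; this helper is hypothetical, not taken from the bundled script:

```python
import json
from pathlib import Path

def load_schema(schema_arg):
    """Accept either an inline JSON string or a path to a .json file,
    mirroring the two --schema invocations shown above."""
    if schema_arg.strip().startswith("{"):
        return json.loads(schema_arg)
    return json.loads(Path(schema_arg).read_text())
```

Sniffing the leading "{" is a simple heuristic: valid JSON objects start with it, while file paths essentially never do.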

Output Formats

Markdown

Clean, LLM-ready markdown with:

  • Headings preserved
  • Links converted to markdown format
  • Images with alt text
  • Tables formatted as markdown tables

HTML

Raw or cleaned HTML

Links

Extracted link lists for further crawling

Screenshot

Page screenshot (if requested)

Use Cases

Knowledge Base Building

# Crawl documentation site
firecrawl crawl https://docs.framework.com --limit 200 -o ./kb/

# Merge into single file for RAG
cat ./kb/*.md > knowledge-base.md

Research & Analysis

# Scrape competitor pricing
firecrawl batch competitors.txt --extract pricing-schema.json

# Monitor blog updates
firecrawl map https://blog.company.com --since 2024-01-01

Content Migration

# Export old CMS content
firecrawl crawl https://old-site.com --formats markdown,html -o ./export/

Scripts

All functionality via scripts/firecrawl.py:

  • Handles API authentication
  • Automatic rate limiting
  • Retry logic for failures
  • Progress tracking for large crawls
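The "retry logic for failures" bullet usually means exponential backoff. A generic sketch of that pattern, assuming the script retries on any exception; its actual policy may differ:

```python
import time

def with_retries(fn, attempts=3, base_delay=1.0, sleep=time.sleep):
    """Call fn, retrying with exponential backoff (1s, 2s, 4s, ...).

    Re-raises the last exception once attempts are exhausted. The
    injectable `sleep` makes the backoff testable without waiting.
    """
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise
            sleep(base_delay * (2 ** attempt))
```

Backoff pairs naturally with the rate-limiting bullet: transient 429/5xx responses often succeed on a later, delayed attempt.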

Integration

Works well with:

  • markdown-sync-pro - Sync scraped content to Notion/GitHub
  • arxiv-paper - Combine with academic paper downloads
  • maybe-finance - Scrape financial data for analysis
