Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Firecrawl Cli

v1.0.0

When the user wants to scrape, crawl, or extract content from websites. Also use when the user mentions 'scrape site,' 'crawl website,' 'extract content,' 'w...

0 stars · 187 downloads · 0 current · 1 all-time
by Mario Karras (@mariokarras)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for mariokarras/abm-firecrawl-cli.

Prompt preview: Install & Setup
Install the skill "Firecrawl Cli" (mariokarras/abm-firecrawl-cli) from ClawHub.
Skill page: https://clawhub.ai/mariokarras/abm-firecrawl-cli
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install abm-firecrawl-cli

ClawHub CLI

Package manager switcher

npx clawhub@latest install abm-firecrawl-cli
Security Scan

VirusTotal: Benign
OpenClaw: Suspicious (medium confidence)

Purpose & Capability
The SKILL.md explicitly instructs the agent to run a Firecrawl CLI (firecrawl.js) to perform scrapes, crawls, agent-driven extraction, and async jobs. However, the skill metadata lists no required binaries, no install spec, and no required environment variables/credentials. A CLI that starts remote crawl jobs typically requires a binary and an API key or config; that expected linkage is missing, which makes the metadata incoherent.
Instruction Scope
Runtime instructions direct the agent to read workspace files if present (e.g., .agents/product-marketing-context.md or .claude/product-marketing-context.md) before asking questions. That lets the skill read arbitrary repository/agent-local files, which can include sensitive information. The SKILL.md also references starting async jobs, polling, and previewing API requests (dry-run) but does not state what remote endpoints are used, where scraped data is stored, or whether data is transmitted off-host.
Install Mechanism
This is an instruction-only skill with no install spec and no code files, which minimizes direct disk writes from the skill bundle itself. However, the instructions expect firecrawl.js to exist in the environment; the absence of an install mechanism or declared required binary is an inconsistency (it is unclear how the CLI will be provided). That ambiguity could lead to ad-hoc fetching/execution behavior by the agent or user.
Credentials
No environment variables or primary credential are declared, yet the CLI semantics (async jobs, --max-credits, polling, agent autonomous gathering) strongly imply interaction with a remote service that would normally require authentication and possibly billing credentials. The skill also instructs reading local context files, increasing access to potential secrets. The lack of declared credentials is disproportionate to the implied capabilities.
Persistence & Privilege
The skill is not always-enabled (always: false) and does not request elevated platform privileges in the metadata. SKILL.md does not instruct modifying other skills or global agent settings. Autonomous invocation is allowed by default, which is normal for skills; this is not sufficient alone to escalate concern.
What to consider before installing
Before installing or using this skill, ask the publisher for:

  1. An authoritative install method or package (where does firecrawl.js come from?)
  2. The exact network endpoints the CLI talks to, and where scraped data is stored or transmitted
  3. The required credentials or config (e.g., FIRECRAWL_API_KEY or similar) and why they are needed
  4. The privacy/legal constraints for crawling target sites

Consider running the CLI only in a sandboxed environment, and avoid granting it access to sensitive workspace files until you have confirmed it only needs a specific, limited context file. If you rely on the skill to read .agents/product-marketing-context.md (or similar), inspect that file for secrets first. If the author cannot explain the missing binary/install mechanism and the missing auth requirements, treat the package as risky.
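One concrete precaution along these lines is to run the CLI with a stripped-down environment so it cannot read unrelated secrets from your shell. A minimal sketch (run_sandboxed is a hypothetical wrapper, not part of the skill):

```shell
# Minimal sketch of an environment sandbox: forward only PATH and
# FIRECRAWL_API_KEY, so the wrapped command sees no other variables.
run_sandboxed() {
  env -i PATH="$PATH" FIRECRAWL_API_KEY="${FIRECRAWL_API_KEY:-}" "$@"
}

# Illustrative usage:
#   run_sandboxed firecrawl.js scrape --url "https://example.com" --dry-run
```

Note that this limits environment leakage only; it does not restrict filesystem or network access, so a container or VM remains the stronger option.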


latest: vk970smvr5fynxsdvtbzc2ydr5h839nz6
187 downloads · 0 stars · 1 version
Updated 1h ago · v1.0.0 · MIT-0

Firecrawl CLI

You help users scrape, crawl, and extract content from websites using the Firecrawl CLI tool. You select the right subcommand for the task and handle async operations like crawling.

Before Starting

Check for product marketing context first: If .agents/product-marketing-context.md exists (or .claude/product-marketing-context.md in older setups), read it before asking questions. Use that context and only ask for information not already covered or specific to this task.

Then determine:

  1. Target URL(s) — What site or page to extract content from
  2. What content to extract — Full page, specific data, all pages on a site
  3. Single page vs full site — One URL (scrape) or many pages (crawl)
  4. Search first? — Does the user need to find pages before extracting content

Command Reference

| Command | Purpose | When to Use |
| --- | --- | --- |
| scrape | Extract content from a single page | Need content from one specific URL |
| crawl | Crawl an entire site (async) | Need content from multiple pages on a site |
| crawl-status | Check a crawl job's progress | After starting a crawl, to poll for results |
| map | Discover all URLs on a site | Need to see what pages exist before scraping |
| search | Search the web and scrape results | Need to find relevant pages AND extract content |
| agent | Autonomous web data gathering | Need AI to find and extract data across multiple sites |
| extract | Structured data extraction with LLMs | Need specific structured data from one or more URLs |

Workflow by Use Case

Scraping a Single Page

firecrawl.js scrape --url "https://example.com/page"

When: You need content from one specific page. Returns markdown by default.

Options:

  • --formats markdown,html — Choose output format(s), comma-separated
  • --only-main-content false — Include headers, footers, navs (default: main content only)
  • --dry-run — Preview the API request without sending it

Example — extract a blog post as markdown:

firecrawl.js scrape --url "https://example.com/blog/post-title"

Example — get full HTML including navigation:

firecrawl.js scrape --url "https://example.com" --formats html --only-main-content false

Crawling an Entire Site

firecrawl.js crawl --url "https://example.com" --limit 50

When: You need content from multiple pages across a site. This is an async operation — the CLI starts a crawl job and returns a job ID.

Options:

  • --limit N — Maximum number of pages to crawl
  • --max-depth N — Maximum link depth from the starting URL
  • --poll — Automatically poll for completion instead of returning the job ID
  • --dry-run — Preview the API request without sending it

Workflow without --poll:

  1. Start the crawl: firecrawl.js crawl --url "https://example.com" --limit 20
  2. Note the returned job ID
  3. Check status: firecrawl.js crawl-status --id "job-id-here"
  4. Repeat step 3 until complete
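Steps 3 and 4 can be sketched as a small polling loop. This is a sketch, not part of the skill: it assumes the crawl-status output contains a recognizable "completed" or "failed" marker, which you should verify against the CLI's real output first.

```shell
# Sketch: poll a crawl job until it finishes. Assumes the crawl-status
# output contains "completed" or "failed"; adjust the matching to the
# CLI's actual output before relying on it.
poll_crawl() {
  job_id="$1"
  while :; do
    status_output=$(firecrawl.js crawl-status --id "$job_id") || return 1
    case "$status_output" in
      *completed*) echo "crawl $job_id finished"; return 0 ;;
      *failed*)    echo "crawl $job_id failed" >&2; return 1 ;;
    esac
    sleep 5
  done
}
```

In most cases --poll is simpler; a manual loop like this is only useful when you want custom backoff, logging, or timeouts.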

Workflow with --poll (recommended):

firecrawl.js crawl --url "https://example.com" --limit 20 --poll

The CLI handles polling automatically and returns all results when done.

Checking Crawl Status

firecrawl.js crawl-status --id "job-id-here"

When: You started a crawl without --poll and need to check if it finished.

Getting a Site Map

firecrawl.js map --url "https://example.com"

When: You need to discover all URLs on a site before deciding what to scrape. Good first step before targeted scraping.

Options:

  • --search "term" — Filter URLs containing a search term
  • --limit N — Maximum number of URLs to return
  • --dry-run — Preview the API request without sending it

Example — find all blog pages:

firecrawl.js map --url "https://example.com" --search "blog"

Searching and Scraping

firecrawl.js search --query "site:example.com pricing plans"

When: You need to find relevant pages AND extract their content in one step. Combines web search with content extraction.

Options:

  • --limit N — Number of results to return
  • --country "us" — Country code for localized results
  • --dry-run — Preview the API request without sending it

Example — find competitor pricing pages:

firecrawl.js search --query "competitor.com pricing" --limit 5

Autonomous Agent Extraction

firecrawl.js agent --prompt "Find pricing information for Acme Corp" --poll

When: You need the AI to autonomously search, navigate, and gather data. No URLs required — just describe what you need.

Options:

  • --prompt "..." — Describe what data to find (required, max 10000 chars)
  • --url "..." — Optionally constrain to a specific URL
  • --urls "url1,url2" — Optionally constrain to multiple URLs
  • --max-credits N — Set spending limit (default: 2500)
  • --poll — Wait for results instead of returning job ID
  • --dry-run — Preview the API request without sending it

Example — find competitor pricing:

firecrawl.js agent --prompt "Find pricing tiers and plans for competitor.com" --url "https://competitor.com" --poll

Extracting Structured Data

firecrawl.js extract --urls "https://example.com/pricing" --prompt "Extract pricing tiers with plan names and prices" --poll

When: You need specific structured data extracted from URLs using LLMs. Supports wildcards for domain-wide extraction.

Options:

  • --urls "url1,url2" — URLs to extract from, comma-separated (required). Supports /* wildcards
  • --prompt "..." — Describe what data to extract
  • --schema '{"type":"object",...}' — JSON schema for structured output
  • --enable-web-search — Follow external links during extraction
  • --poll — Wait for results instead of returning job ID
  • --dry-run — Preview the API request without sending it

Example — extract product data from multiple pages:

firecrawl.js extract --urls "https://example.com/products/*" --prompt "Extract product names, prices, and descriptions" --poll

Choosing the Right Command

  • Need one page? → scrape
  • Need the whole site? → crawl with --limit and --poll
  • Need to discover URLs first? → map, then scrape specific pages
  • Need search + content? → search
  • Started a crawl and need results? → crawl-status
  • Need AI to find data autonomously? → agent with --poll
  • Need structured data from known URLs? → extract with --prompt and --poll

Common multi-step workflows:

  1. Targeted extraction: map → review URLs → scrape specific pages
  2. Full site dump: crawl --poll --limit 100
  3. Research a topic: search → review results → scrape for deeper content
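Workflow 1 above can be sketched as a short pipeline. This is a sketch, not part of the skill: it assumes map prints one URL per line, so verify the real output format (for example with --dry-run) before relying on it.

```shell
# Sketch: discover matching URLs with `map`, then scrape each match.
# Assumes `map` prints one URL per line; scrape_matching_pages is a
# hypothetical helper, not a Firecrawl command.
scrape_matching_pages() {
  site_url="$1"
  pattern="$2"
  firecrawl.js map --url "$site_url" --search "$pattern" |
    grep -- "$pattern" |
    while read -r page_url; do
      firecrawl.js scrape --url "$page_url"
    done
}

# Illustrative usage:
#   scrape_matching_pages "https://example.com" "blog"
```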

Output Format

Present extracted content with clear structure:

  • Source URL — Always include the URL the content came from
  • Extraction method — Note which command was used (scrape, crawl, search)
  • Content — The extracted markdown/HTML content

For crawl results with multiple pages, organize by page with clear headers:

### Page 1: [title] (https://example.com/page-1)
[content]

### Page 2: [title] (https://example.com/page-2)
[content]

Environment Setup

The Firecrawl CLI requires a FIRECRAWL_API_KEY environment variable. If the key is not set, the CLI will return an error. The user needs to:

  1. Get an API key from firecrawl.dev
  2. Set it in their environment: export FIRECRAWL_API_KEY="fc-..."

Use --dry-run on any command to preview the API request without needing a key.
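That fallback can be scripted: check for the key and drop to --dry-run when it is missing. A sketch (run_scrape is a hypothetical wrapper, not part of the CLI):

```shell
# Sketch: run a real scrape when FIRECRAWL_API_KEY is set; otherwise
# fall back to --dry-run so the request is previewed instead of sent.
run_scrape() {
  if [ -n "${FIRECRAWL_API_KEY:-}" ]; then
    firecrawl.js scrape --url "$1"
  else
    echo "FIRECRAWL_API_KEY not set; previewing request only" >&2
    firecrawl.js scrape --url "$1" --dry-run
  fi
}
```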


Related Skills

  • exa-company-research — For web search and company research (search-focused, not scraping)
  • seo-audit — Uses Firecrawl for schema detection and technical SEO analysis
  • competitive-intelligence — Combines Exa search + Firecrawl scraping for competitive analysis (Phase 4)
