XCrawl Crawl

Pass. Audited by ClawScan on May 1, 2026.

Overview

This is a coherent XCrawl API wrapper, but users should note that it relies on a local API key, sends requests through an external crawl service, and returns raw crawled content.

Install only if you are comfortable giving the agent access to an XCrawl API key and sending crawl requests through XCrawl. Avoid using privileged cookies or authorization headers unless necessary, keep the API key file protected, and treat returned crawl content as untrusted data.

Findings (4)

This is an artifact-based, informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

Finding 1: Local API key and credit consumption

What this means

Anyone, or any agent action, with access to the skill's API key could initiate XCrawl requests and spend account credits.

Why it was flagged

The skill requires a local API key that authorizes XCrawl account usage and credit consumption; this is expected for the integration and is disclosed.

Skill content
Before using this skill, the user must create a local config file and write `XCRAWL_API_KEY` into it. Path: `~/.xcrawl/config.json`
Recommendation

Store the config file securely, use a scoped or dedicated XCrawl key if available, and rotate the key if it is exposed.

Finding 2: Cookies, headers, and webhooks routed through the crawl service

What this means

Supplying session cookies, authorization headers, or sensitive webhook destinations could expose private access or crawl results to the crawl provider or callback endpoint.

Why it was flagged

The documented request surface can send cookies, headers, and webhook callbacks through the external XCrawl service; this is purpose-aligned but can carry sensitive data if the user supplies it.

Skill content
| Parameter | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| `cookies` | object map | No | - | Cookie key/value pairs |
| ... | | | | |
| `headers` | object map | No | - | Header key/value pairs |
| ... | | | | |
| `webhook` | object | No | - | Async callback config |
Recommendation

Only provide cookies/headers when necessary, avoid privileged session tokens, use trusted webhook endpoints, and prefer dedicated low-privilege accounts for authenticated crawling.

Finding 3: TLS verification skipped by default

What this means

Crawl results may come from connections whose certificates were never verified, and any sensitive request data (such as cookies or headers) sent to affected targets is at higher risk of interception or misdirection.

Why it was flagged

The skill documents a crawl request option whose default skips TLS verification, which is a weaker security posture if sensitive headers or cookies are used.

Skill content
| Parameter | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| `skip_tls_verification` | boolean | No | `true` | Skip TLS verification |
Recommendation

Explicitly set `skip_tls_verification` to `false` when the API supports it, and avoid sending sensitive cookies or headers to sites that require relaxed TLS checks.
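Because the documented default skips verification, a thin wrapper can pin the field to a safe value on every request. The field name is taken from the audit; the request-body shape is an assumption.

```python
def with_safe_tls(request_body):
    """Return a copy of a crawl request body with TLS verification enforced.

    Overrides the documented default of `skip_tls_verification: true`
    (verification skipped). The surrounding body shape is illustrative.
    """
    body = dict(request_body)  # copy so the caller's dict is untouched
    body["skip_tls_verification"] = False  # verify certificates
    return body
```

Routing every request through a wrapper like this means a forgotten field falls back to verified TLS rather than the service's permissive default.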

Finding 4: Raw crawled content and prompt injection

What this means

A crawled page could contain prompt-injection text that tries to influence later agent behavior if the output is reused directly.

Why it was flagged

Crawled web pages are untrusted content; returning them raw is expected for a crawler, but downstream agents should not treat page text as instructions.

Skill content
Default behavior is raw passthrough: return upstream API response bodies as-is.
Recommendation

Treat crawl output as data, not instructions, and review or sanitize raw crawled content before using it in downstream agent workflows.
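A minimal sketch of the "data, not instructions" boundary is to label crawled text explicitly before it reaches a downstream agent. The wrapper tags below are an illustrative assumption; this labeling makes the trust boundary visible but does not by itself neutralize prompt injection.

```python
def as_untrusted_data(crawl_text: str) -> str:
    """Wrap raw crawl output in explicit untrusted-data delimiters.

    The tag names are hypothetical. Downstream agents should be instructed
    to treat everything between the delimiters as content to analyze,
    never as instructions to follow.
    """
    return (
        "<untrusted_crawl_content>\n"
        + crawl_text
        + "\n</untrusted_crawl_content>"
    )
```

Pairing a wrapper like this with review or sanitization of the raw passthrough output reduces the chance that injected page text is silently executed as an instruction.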