XCrawl Search

Use this skill for XCrawl search tasks, including keyword search request design, location and language controls, result analysis, and follow-up crawl or scrape tasks.

Security Scan

  • VirusTotal: Benign
  • OpenClaw: Benign (high confidence)
Purpose & Capability
The description promises XCrawl search capabilities and the SKILL.md instructs only how to call XCrawl's /v1/search endpoint. Required tools (curl/node) and the local API key file (~/.xcrawl/config.json) are coherent with that purpose.
Instruction Scope
Runtime instructions only read a local config file for 'XCRAWL_API_KEY' and perform POST requests to https://run.xcrawl.com/v1/search, returning raw API responses. This is within scope. Minor note: the allowed-tools list includes Write/Edit, but the instructions only show reading the config and making network calls — write/edit permissions appear broader than strictly needed.
Install Mechanism
There is no install spec and no code files; this is instruction-only, so nothing is written to disk by an installer. This is the lowest-risk installation posture.
Credentials
The skill requests no environment variables and reads a single local config file for the API key, which is proportionate. Note that the key sits in plaintext under ~/.xcrawl/config.json; that storage choice is the user's, and the skill's behavior matches it. The examples also access the user's HOME path, so ensure that file contains only the intended key.
Persistence & Privilege
always:false and no install persistence requested. The skill does not request system-wide configuration changes or credentials belonging to other services.
Assessment
This skill appears to do exactly what it says: read an XCrawl API key from ~/.xcrawl/config.json and call https://run.xcrawl.com/v1/search using curl or node, returning raw responses. Before installing:

  1. Confirm you trust the XCrawl domain and service (https://run.xcrawl.com and https://www.xcrawl.com/) and understand that API calls consume credits.
  2. Store only the XCrawl API key in ~/.xcrawl/config.json and restrict file permissions (e.g., chmod 600) to limit access.
  3. Be aware the skill returns raw API responses (which may contain scraped content or billing details) and will not summarize results unless you ask.
  4. Note the SKILL.md lists Write/Edit permissions even though it only needs to read; consider whether you want to grant file-write capability to the agent runtime.

If you need greater assurance, request the maintainer provide a minimal example that omits write/edit permissions or show why they are necessary.


Current version: v1.0.2

License

MIT-0
Free to use, modify, and redistribute. No attribution required.

Runtime requirements

Any bin: curl, node

SKILL.md

XCrawl Search

Overview

This skill uses the XCrawl Search API to retrieve query-based results. The default behavior is raw passthrough: return upstream API response bodies as-is.

Required Local Config

Before using this skill, the user must create a local config file and write XCRAWL_API_KEY into it.

Path: ~/.xcrawl/config.json

{
  "XCRAWL_API_KEY": "<your_api_key>"
}

Read API key from local config file only. Do not require global environment variables.

Credits and Account Setup

Using XCrawl APIs consumes credits. If the user does not have an account or available credits, guide them to register at https://dash.xcrawl.com/. After registration, they can activate the free 1000 credits plan before running requests.

Tool Permission Policy

Request runtime permissions for curl and node only. Do not request Python, shell helper scripts, or other runtime permissions.

API Surface

  • Search endpoint: POST /v1/search
  • Base URL: https://run.xcrawl.com
  • Required header: Authorization: Bearer <XCRAWL_API_KEY>

Usage Examples

cURL

API_KEY="$(node -e "const fs=require('fs');const p=process.env.HOME+'/.xcrawl/config.json';const k=JSON.parse(fs.readFileSync(p,'utf8')).XCRAWL_API_KEY||'';process.stdout.write(k)")"

curl -sS -X POST "https://run.xcrawl.com/v1/search" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer ${API_KEY}" \
  -d '{"query":"AI web crawler API","location":"US","language":"en","limit":20}'

Node

node -e '
const fs=require("fs");
const apiKey=JSON.parse(fs.readFileSync(process.env.HOME+"/.xcrawl/config.json","utf8")).XCRAWL_API_KEY;
const body={query:"web scraping pricing",location:"DE",language:"de",limit:30};
fetch("https://run.xcrawl.com/v1/search",{
  method:"POST",
  headers:{"Content-Type":"application/json",Authorization:`Bearer ${apiKey}`},
  body:JSON.stringify(body)
}).then(async r=>{console.log(await r.text());});
'
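
The example above prints whatever the server returns, including error bodies. A variant sketch that surfaces HTTP errors explicitly (assuming Node 18+ for global fetch; checkResponse is a hypothetical helper, not part of the skill):

```javascript
// Hypothetical helper: reject non-2xx responses with the status and body,
// so error details can be reported per the output contract.
async function checkResponse(r) {
  const text = await r.text();
  if (!r.ok) throw new Error(`HTTP ${r.status}: ${text}`);
  return text;
}

// Network usage, shown for context only:
// fetch("https://run.xcrawl.com/v1/search", { /* same options as above */ })
//   .then(checkResponse)
//   .then(console.log)
//   .catch(err => console.error("search failed:", err.message));
```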

Request Parameters

Request endpoint and headers

  • Endpoint: POST https://run.xcrawl.com/v1/search
  • Headers:
      • Content-Type: application/json
      • Authorization: Bearer <api_key>

Request body: top-level fields

Field     Type     Required  Default  Description
query     string   Yes       -        Search query
location  string   No        US       Location (country/city/region name or ISO code; best effort)
language  string   No        en       Language (ISO 639-1)
limit     integer  No        10       Max results (1-100)
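
The defaults and the limit range above can be applied client-side before sending. A hypothetical helper (the function name is mine, not the API's) that builds a request body accordingly:

```javascript
// Build a /v1/search body: apply the documented defaults and clamp
// limit into the documented 1-100 range.
function buildSearchBody({ query, location = "US", language = "en", limit = 10 }) {
  if (!query) throw new Error("query is required");
  return { query, location, language, limit: Math.min(100, Math.max(1, limit)) };
}

// Example: defaults applied for omitted optional fields.
console.log(JSON.stringify(buildSearchBody({ query: "AI web crawler API" })));
// → {"query":"AI web crawler API","location":"US","language":"en","limit":10}
```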

Response Parameters

Field               Type     Description
search_id           string   Task ID
endpoint            string   Always "search"
version             string   API version
status              string   "completed" on success
query               string   Search query
data                object   Search result data
started_at          string   Start time (ISO 8601)
ended_at            string   End time (ISO 8601)
total_credits_used  integer  Total credits used

data notes from current API reference:

  • Concrete result schema is implementation-defined
  • Includes billing fields like credits_used and credits_detail
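
Putting the table and notes together, a response might be handled like this. Illustrative only: the field names come from the table above, but every value and the shape of `data` are hypothetical, since the result schema is implementation-defined.

```javascript
// Hypothetical sample response; only the field names are documented.
const sample = {
  search_id: "srch_123",              // hypothetical ID
  endpoint: "search",
  version: "v1",                      // hypothetical version string
  status: "completed",
  query: "AI web crawler API",
  data: {},                           // implementation-defined schema
  started_at: "2025-01-01T00:00:00Z",
  ended_at: "2025-01-01T00:00:02Z",
  total_credits_used: 2               // hypothetical cost
};

// Credits accounting can be read off the top level without
// assuming anything about the contents of `data`.
if (sample.status === "completed") {
  console.log(`search ${sample.search_id} used ${sample.total_credits_used} credits`);
}
```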

Workflow

  1. Rewrite the request as a clear search objective.
     • Include entity, geography, language, and freshness intent.
  2. Build and execute POST /v1/search.
     • Keep request explicit and deterministic.
  3. Return raw API response directly.
     • Do not synthesize relevance summaries unless requested.

Output Contract

Return:

  • Endpoint used (POST /v1/search)
  • request_payload used for the request
  • Raw response body from search call
  • Error details when request fails

Do not generate summaries unless the user explicitly requests a summary.
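
The contract above can be assembled into a single result object. A hypothetical formatter (the function and field names are mine, not mandated by the skill):

```javascript
// Bundle endpoint, request payload, raw body, and any error into one
// object matching the four items of the output contract.
function formatResult({ requestPayload, rawBody = null, error = null }) {
  return {
    endpoint: "POST /v1/search",
    request_payload: requestPayload,
    raw_response: rawBody,   // raw upstream body, never a summary
    error                    // populated only when the request fails
  };
}
```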

Guardrails

  • Do not claim ranking guarantees that the API does not expose.
  • Do not fabricate unavailable filters or response fields.
  • Do not hardcode provider-specific tool schemas in core logic.

Files

1 total
