XCrawl Search

v1.0.2

Use this skill for XCrawl search tasks, including keyword search request design, location and language controls, result analysis, and follow-up crawl or scrape steps.

License: MIT-0

Install

openclaw skills install xcrawl-search


Overview

This skill uses the XCrawl Search API to retrieve query-based search results. The default behavior is raw passthrough: return upstream API response bodies as-is.

Required Local Config

Before using this skill, the user must create a local config file and write XCRAWL_API_KEY into it.

Path: ~/.xcrawl/config.json

{
  "XCRAWL_API_KEY": "<your_api_key>"
}

Read the API key from the local config file only. Do not require global environment variables.
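One way to create this file from a shell (the key value is a placeholder):

```shell
# Create the local config directory and file expected by this skill.
mkdir -p "$HOME/.xcrawl"
cat > "$HOME/.xcrawl/config.json" <<'EOF'
{ "XCRAWL_API_KEY": "<your_api_key>" }
EOF
# Restrict permissions, since the file holds a secret.
chmod 600 "$HOME/.xcrawl/config.json"
```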

Credits and Account Setup

Using XCrawl APIs consumes credits. If the user does not have an account or available credits, guide them to register at https://dash.xcrawl.com/. After registering, they can activate the free 1,000-credit plan before running requests.

Tool Permission Policy

Request runtime permissions for curl and node only. Do not request Python, shell helper scripts, or other runtime permissions.

API Surface

  • Search endpoint: POST /v1/search
  • Base URL: https://run.xcrawl.com
  • Required header: Authorization: Bearer <XCRAWL_API_KEY>

Usage Examples

cURL

# Read the API key from the local config via node (no jq dependency)
API_KEY="$(node -e "const fs=require('fs');const p=process.env.HOME+'/.xcrawl/config.json';const k=JSON.parse(fs.readFileSync(p,'utf8')).XCRAWL_API_KEY||'';process.stdout.write(k)")"

curl -sS -X POST "https://run.xcrawl.com/v1/search" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer ${API_KEY}" \
  -d '{"query":"AI web crawler API","location":"US","language":"en","limit":20}'

Node

node -e '
// Requires Node 18+ for the global fetch API.
const fs=require("fs");
const apiKey=JSON.parse(fs.readFileSync(process.env.HOME+"/.xcrawl/config.json","utf8")).XCRAWL_API_KEY;
const body={query:"web scraping pricing",location:"DE",language:"de",limit:30};
fetch("https://run.xcrawl.com/v1/search",{
  method:"POST",
  headers:{"Content-Type":"application/json",Authorization:`Bearer ${apiKey}`},
  body:JSON.stringify(body)
}).then(async r=>{console.log(await r.text());});
'

Request Parameters

Request endpoint and headers

  • Endpoint: POST https://run.xcrawl.com/v1/search
  • Headers:
  • Content-Type: application/json
  • Authorization: Bearer <api_key>

Request body: top-level fields

| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| query | string | Yes | - | Search query |
| location | string | No | US | Location (country/city/region name or ISO code; best effort) |
| language | string | No | en | Language (ISO 639-1) |
| limit | integer | No | 10 | Max results (1-100) |
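A concrete request body illustrating the fields above (values are placeholders, not recommended settings):

```json
{
  "query": "site reliability engineering books",
  "location": "GB",
  "language": "en",
  "limit": 25
}
```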

Response Parameters

| Field | Type | Description |
|---|---|---|
| search_id | string | Task ID |
| endpoint | string | Always `search` |
| version | string | Version |
| status | string | `completed` |
| query | string | Search query |
| data | object | Search result data |
| started_at | string | Start time (ISO 8601) |
| ended_at | string | End time (ISO 8601) |
| total_credits_used | integer | Total credits used |

`data` notes from the current API reference:

  • Concrete result schema is implementation-defined
  • Includes billing fields like credits_used and credits_detail
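A hypothetical response shape assembled from the fields above; the contents of `data` are implementation-defined and shown only as a placeholder, and the `version` value is an assumption:

```json
{
  "search_id": "<task_id>",
  "endpoint": "search",
  "version": "v1",
  "status": "completed",
  "query": "AI web crawler API",
  "data": { "note": "implementation-defined result schema, including credits_used and credits_detail" },
  "started_at": "2025-01-01T00:00:00Z",
  "ended_at": "2025-01-01T00:00:02Z",
  "total_credits_used": 1
}
```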

Workflow

  1. Rewrite the request as a clear search objective.
  • Include entity, geography, language, and freshness intent.
  2. Build and execute POST /v1/search.
  • Keep the request explicit and deterministic.
  3. Return the raw API response directly.
  • Do not synthesize relevance summaries unless requested.

Output Contract

Return:

  • Endpoint used (POST /v1/search)
  • request_payload used for the request
  • Raw response body from search call
  • Error details when request fails

Do not generate summaries unless the user explicitly requests a summary.
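A minimal sketch of how a skill runtime might assemble these output fields; `formatResult` is a hypothetical helper, not part of the XCrawl API:

```javascript
// Hypothetical helper: assemble the Output Contract fields for one search call.
// Nothing here is part of the XCrawl API itself.
function formatResult({ endpoint, requestPayload, status, rawBody }) {
  const result = {
    endpoint,                        // e.g. "POST /v1/search"
    request_payload: requestPayload, // exact body sent upstream
    raw_response: rawBody,           // upstream body, returned as-is
  };
  if (status < 200 || status >= 300) {
    // On failure, surface error details instead of a synthesized summary.
    result.error = { http_status: status, body: rawBody };
  }
  return result;
}
```

A caller would pass the HTTP status and unparsed body straight from the fetch response, which keeps the raw-passthrough contract intact.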

Guardrails

  • Do not claim ranking guarantees that the API does not expose.
  • Do not fabricate unavailable filters or response fields.
  • Do not hardcode provider-specific tool schemas in core logic.

Version tags

latest: vk978mq3r807p5j5kmbpm2af2gd82rk4p

Runtime requirements

Any bin: curl, node