Install

openclaw skills install firecrawl-api

Access the Firecrawl API with managed authentication. Scrape webpages, crawl entire websites, map site URLs, and search the web with full content extraction.
# Scrape a webpage
python <<'EOF'
import urllib.request, os, json
data = json.dumps({"url": "https://example.com", "formats": ["markdown"]}).encode()
req = urllib.request.Request('https://gateway.maton.ai/firecrawl/v2/scrape', data=data, method='POST')
req.add_header('Authorization', f'Bearer {os.environ["MATON_API_KEY"]}')
req.add_header('Content-Type', 'application/json')
print(json.dumps(json.load(urllib.request.urlopen(req)), indent=2))
EOF
https://gateway.maton.ai/firecrawl/{native-api-path}
Replace {native-api-path} with the actual Firecrawl API endpoint path. The gateway proxies requests to api.firecrawl.dev and automatically injects your API key.
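The path mapping is mechanical, so building gateway URLs from native Firecrawl paths can be done with a small helper (the `gateway_url` function below is an illustrative sketch, not part of any SDK):

```python
def gateway_url(native_path: str) -> str:
    """Map a native Firecrawl API path (e.g. 'v2/scrape') to the Maton gateway URL."""
    return "https://gateway.maton.ai/firecrawl/" + native_path.lstrip("/")

# Both '/v2/scrape' and 'v2/scrape' resolve to the same gateway endpoint.
print(gateway_url("/v2/scrape"))  # https://gateway.maton.ai/firecrawl/v2/scrape
```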
All requests require the Maton API key in the Authorization header:
Authorization: Bearer $MATON_API_KEY
Environment Variable: Set your API key as MATON_API_KEY:
export MATON_API_KEY="YOUR_API_KEY"
Manage your Firecrawl connections at https://ctrl.maton.ai.

List active connections:
python <<'EOF'
import urllib.request, os, json
req = urllib.request.Request('https://ctrl.maton.ai/connections?app=firecrawl&status=ACTIVE')
req.add_header('Authorization', f'Bearer {os.environ["MATON_API_KEY"]}')
print(json.dumps(json.load(urllib.request.urlopen(req)), indent=2))
EOF
Create a connection:

python <<'EOF'
import urllib.request, os, json
data = json.dumps({'app': 'firecrawl'}).encode()
req = urllib.request.Request('https://ctrl.maton.ai/connections', data=data, method='POST')
req.add_header('Authorization', f'Bearer {os.environ["MATON_API_KEY"]}')
req.add_header('Content-Type', 'application/json')
print(json.dumps(json.load(urllib.request.urlopen(req)), indent=2))
EOF
Get connection details:

python <<'EOF'
import urllib.request, os, json
req = urllib.request.Request('https://ctrl.maton.ai/connections/{connection_id}')
req.add_header('Authorization', f'Bearer {os.environ["MATON_API_KEY"]}')
print(json.dumps(json.load(urllib.request.urlopen(req)), indent=2))
EOF
Response:
{
"connection": {
"connection_id": "b5449045-2dcd-4e99-816f-65f80511affb",
"status": "ACTIVE",
"creation_time": "2026-03-11T09:49:09.917114Z",
"last_updated_time": "2026-03-11T09:49:27.616143Z",
"url": "https://connect.maton.ai/?session_token=...",
"app": "firecrawl",
"metadata": {},
"method": "API_KEY"
}
}
Delete a connection:

python <<'EOF'
import urllib.request, os, json
req = urllib.request.Request('https://ctrl.maton.ai/connections/{connection_id}', method='DELETE')
req.add_header('Authorization', f'Bearer {os.environ["MATON_API_KEY"]}')
print(json.dumps(json.load(urllib.request.urlopen(req)), indent=2))
EOF
If you have multiple Firecrawl connections, specify which one to use with the Maton-Connection header:
python <<'EOF'
import urllib.request, os, json
data = json.dumps({"url": "https://example.com"}).encode()
req = urllib.request.Request('https://gateway.maton.ai/firecrawl/v2/scrape', data=data, method='POST')
req.add_header('Authorization', f'Bearer {os.environ["MATON_API_KEY"]}')
req.add_header('Content-Type', 'application/json')
req.add_header('Maton-Connection', 'b5449045-2dcd-4e99-816f-65f80511affb')
print(json.dumps(json.load(urllib.request.urlopen(req)), indent=2))
EOF
If omitted, the gateway uses the default (oldest) active connection.
POST /firecrawl/v2/scrape
Extract content from a single webpage.
Required Parameters:
url (string): The webpage URL to scrape

Optional Parameters:

formats (array): Output formats - "markdown", "html", "json", "screenshot", "links" (default: ["markdown"])
onlyMainContent (boolean): Extract only main content, exclude headers/footers (default: true)
includeTags (array): HTML tags to include
excludeTags (array): HTML tags to exclude
waitFor (integer): Milliseconds to wait before scraping (default: 0)
timeout (integer): Request timeout in ms (default: 30000, max: 300000)
mobile (boolean): Emulate mobile device (default: false)
actions (array): Browser actions to perform before scraping
headers (object): Custom HTTP headers
blockAds (boolean): Block ads and cookie banners (default: true)

Example:
python <<'EOF'
import urllib.request, os, json
data = json.dumps({
"url": "https://docs.firecrawl.dev",
"formats": ["markdown", "html"],
"onlyMainContent": True,
"waitFor": 1000
}).encode()
req = urllib.request.Request('https://gateway.maton.ai/firecrawl/v2/scrape', data=data, method='POST')
req.add_header('Authorization', f'Bearer {os.environ["MATON_API_KEY"]}')
req.add_header('Content-Type', 'application/json')
print(json.dumps(json.load(urllib.request.urlopen(req)), indent=2))
EOF
Response:
{
"success": true,
"data": {
"markdown": "# Example Domain\n\nThis domain is for use in documentation...",
"metadata": {
"title": "Example Domain",
"language": "en",
"sourceURL": "https://example.com",
"url": "https://example.com/",
"statusCode": 200,
"contentType": "text/html",
"creditsUsed": 1
}
}
}
POST /firecrawl/v2/crawl
Start crawling an entire website. Returns a crawl ID for status polling.
Required Parameters:
url (string): The base URL to start crawling from

Optional Parameters:

limit (integer): Maximum pages to crawl (default: 10000)
maxDepth (integer): Maximum crawl depth
includePaths (array): Regex patterns for URLs to include
excludePaths (array): Regex patterns for URLs to exclude
allowSubdomains (boolean): Enable subdomain crawling
allowExternalLinks (boolean): Follow external links
scrapeOptions (object): Options for each page scrape (formats, onlyMainContent, etc.)
webhook (string): Webhook URL for completion notification

Example:
python <<'EOF'
import urllib.request, os, json
data = json.dumps({
"url": "https://example.com",
"limit": 10,
"scrapeOptions": {
"formats": ["markdown"]
}
}).encode()
req = urllib.request.Request('https://gateway.maton.ai/firecrawl/v2/crawl', data=data, method='POST')
req.add_header('Authorization', f'Bearer {os.environ["MATON_API_KEY"]}')
req.add_header('Content-Type', 'application/json')
print(json.dumps(json.load(urllib.request.urlopen(req)), indent=2))
EOF
Response:
{
"success": true,
"id": "019cdc53-0acf-76ec-a80c-3ead753b2730",
"url": "https://api.firecrawl.dev/v1/crawl/019cdc53-0acf-76ec-a80c-3ead753b2730"
}
GET /firecrawl/v2/crawl/{id}
Get the status and results of a crawl job.
Path Parameters:
id (string): The crawl job ID

Example:
python <<'EOF'
import urllib.request, os, json
crawl_id = "019cdc53-0acf-76ec-a80c-3ead753b2730"
req = urllib.request.Request(f'https://gateway.maton.ai/firecrawl/v2/crawl/{crawl_id}')
req.add_header('Authorization', f'Bearer {os.environ["MATON_API_KEY"]}')
print(json.dumps(json.load(urllib.request.urlopen(req)), indent=2))
EOF
Response:
{
"success": true,
"status": "completed",
"completed": 2,
"total": 2,
"creditsUsed": 2,
"expiresAt": "2026-03-12T09:56:00.000Z",
"data": [
{
"markdown": "# Example Domain\n\nThis domain is for use in documentation...",
"metadata": {
"title": "Example Domain",
"sourceURL": "https://example.com",
"statusCode": 200
}
}
]
}
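Because a crawl runs asynchronously, callers typically poll this endpoint until the status is terminal. A minimal sketch (`fetch_status` stands in for the GET request shown above; the helper is illustrative, not part of any SDK):

```python
import time

def wait_for_crawl(fetch_status, poll_interval=2.0, max_wait=300.0):
    """Poll fetch_status() until the job reaches a terminal status.

    fetch_status should perform GET /firecrawl/v2/crawl/{id} and return
    the parsed JSON (a dict with a 'status' key).
    """
    deadline = time.monotonic() + max_wait
    while time.monotonic() < deadline:
        job = fetch_status()
        if job.get("status") in ("completed", "failed"):
            return job
        time.sleep(poll_interval)
    raise TimeoutError("crawl did not reach a terminal status in time")
```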
Status Values:
scraping - Crawl in progress
completed - Crawl finished successfully
failed - Crawl failed

DELETE /firecrawl/v2/crawl/{id}
Cancel an in-progress crawl job.
Path Parameters:
id (string): The crawl job ID

Example:
python <<'EOF'
import urllib.request, os, json
crawl_id = "019cdc53-0acf-76ec-a80c-3ead753b2730"
req = urllib.request.Request(f'https://gateway.maton.ai/firecrawl/v2/crawl/{crawl_id}', method='DELETE')
req.add_header('Authorization', f'Bearer {os.environ["MATON_API_KEY"]}')
print(json.dumps(json.load(urllib.request.urlopen(req)), indent=2))
EOF
Response:
{
"success": true,
"status": "cancelled"
}
POST /firecrawl/v2/map
Get all URLs from a website without scraping content.
Required Parameters:
url (string): The starting URL

Optional Parameters:

search (string): Query to order results by relevance
limit (integer): Maximum links to return (default: 5000, max: 100000)
includeSubdomains (boolean): Include subdomains (default: true)
sitemap (string): Sitemap handling - "skip", "include", "only" (default: "include")
ignoreQueryParameters (boolean): Exclude URLs with query params (default: true)
timeout (integer): Timeout in milliseconds

Example:
python <<'EOF'
import urllib.request, os, json
data = json.dumps({
"url": "https://docs.firecrawl.dev",
"limit": 100,
"includeSubdomains": False
}).encode()
req = urllib.request.Request('https://gateway.maton.ai/firecrawl/v2/map', data=data, method='POST')
req.add_header('Authorization', f'Bearer {os.environ["MATON_API_KEY"]}')
req.add_header('Content-Type', 'application/json')
print(json.dumps(json.load(urllib.request.urlopen(req)), indent=2))
EOF
Response:
{
"success": true,
"links": [
"https://docs.firecrawl.dev",
"https://docs.firecrawl.dev/api-reference",
"https://docs.firecrawl.dev/introduction"
]
}
POST /firecrawl/v2/search
Search the web and get full page content for each result.
Required Parameters:
query (string): Search query (max 500 characters)

Optional Parameters:

limit (integer): Number of results (default: 5, max: 100)
sources (array): Search types - "web", "images", "news" (default: ["web"])
country (string): ISO country code (default: "US")
location (string): Geographic targeting (e.g., "Germany")
tbs (string): Time filter - "qdr:d" (day), "qdr:w" (week), "qdr:m" (month), "qdr:y" (year)
timeout (integer): Timeout in ms (default: 60000)
scrapeOptions (object): Options for content extraction

Example:
python <<'EOF'
import urllib.request, os, json
data = json.dumps({
"query": "web scraping best practices",
"limit": 5,
"scrapeOptions": {
"formats": ["markdown"]
}
}).encode()
req = urllib.request.Request('https://gateway.maton.ai/firecrawl/v2/search', data=data, method='POST')
req.add_header('Authorization', f'Bearer {os.environ["MATON_API_KEY"]}')
req.add_header('Content-Type', 'application/json')
print(json.dumps(json.load(urllib.request.urlopen(req)), indent=2))
EOF
Response:
{
"success": true,
"data": [
{
"url": "https://example.com/article",
"title": "Web Scraping Best Practices",
"description": "Learn the best practices for web scraping...",
"markdown": "# Web Scraping Best Practices\n\n..."
}
],
"creditsUsed": 5
}
POST /firecrawl/v2/batch/scrape
Scrape multiple URLs in a single batch job.
Required Parameters:
urls (array): List of URLs to scrape

Optional Parameters:

formats (array): Output formats (default: ["markdown"])
onlyMainContent (boolean): Extract only main content (default: true)
webhook (string): Webhook URL for completion notification

Example:
python <<'EOF'
import urllib.request, os, json
data = json.dumps({
"urls": ["https://example.com", "https://example.org"],
"formats": ["markdown"]
}).encode()
req = urllib.request.Request('https://gateway.maton.ai/firecrawl/v2/batch/scrape', data=data, method='POST')
req.add_header('Authorization', f'Bearer {os.environ["MATON_API_KEY"]}')
req.add_header('Content-Type', 'application/json')
print(json.dumps(json.load(urllib.request.urlopen(req)), indent=2))
EOF
Response:
{
"success": true,
"id": "019cdc59-56b9-7096-a9f9-95fcc92a3a75",
"url": "https://api.firecrawl.dev/v1/batch/scrape/019cdc59-56b9-7096-a9f9-95fcc92a3a75"
}
GET /firecrawl/v2/batch/scrape/{id}
Get the status and results of a batch scrape job.
Path Parameters:
id (string): The batch scrape job ID

Example:
python <<'EOF'
import urllib.request, os, json
batch_id = "019cdc59-56b9-7096-a9f9-95fcc92a3a75"
req = urllib.request.Request(f'https://gateway.maton.ai/firecrawl/v2/batch/scrape/{batch_id}')
req.add_header('Authorization', f'Bearer {os.environ["MATON_API_KEY"]}')
print(json.dumps(json.load(urllib.request.urlopen(req)), indent=2))
EOF
Response:
{
"success": true,
"status": "completed",
"completed": 2,
"total": 2,
"creditsUsed": 2,
"expiresAt": "2026-03-12T10:02:54.000Z",
"data": [
{
"markdown": "# Example Domain\n\n...",
"metadata": {
"title": "Example Domain",
"sourceURL": "https://example.com",
"statusCode": 200
}
}
]
}
DELETE /firecrawl/v2/batch/scrape/{id}
Cancel an in-progress batch scrape job.
Path Parameters:
id (string): The batch scrape job ID

GET /firecrawl/v2/batch/scrape/{id}/errors
Get errors from a batch scrape job.
Path Parameters:
id (string): The batch scrape job ID

Response:
{
"errors": [],
"robotsBlocked": []
}
GET /firecrawl/v2/crawl/{id}/errors
Get errors from a crawl job.
Path Parameters:
id (string): The crawl job ID

Example:
python <<'EOF'
import urllib.request, os, json
crawl_id = "019cdc53-0acf-76ec-a80c-3ead753b2730"
req = urllib.request.Request(f'https://gateway.maton.ai/firecrawl/v2/crawl/{crawl_id}/errors')
req.add_header('Authorization', f'Bearer {os.environ["MATON_API_KEY"]}')
print(json.dumps(json.load(urllib.request.urlopen(req)), indent=2))
EOF
Response:
{
"errors": [],
"robotsBlocked": []
}
GET /firecrawl/v2/crawl/active
Get all active crawl jobs.
Example:
python <<'EOF'
import urllib.request, os, json
req = urllib.request.Request('https://gateway.maton.ai/firecrawl/v2/crawl/active')
req.add_header('Authorization', f'Bearer {os.environ["MATON_API_KEY"]}')
print(json.dumps(json.load(urllib.request.urlopen(req)), indent=2))
EOF
Response:
{
"success": true,
"crawls": []
}
POST /firecrawl/v2/extract
Extract structured data from URLs using AI.
Required Parameters:
urls (array): List of URLs to extract from
prompt (string): Natural language description of what to extract

Optional Parameters:

schema (object): JSON schema for structured output
scrapeOptions (object): Options for scraping

Example:
python <<'EOF'
import urllib.request, os, json
data = json.dumps({
"urls": ["https://example.com"],
"prompt": "Extract the main heading and description"
}).encode()
req = urllib.request.Request('https://gateway.maton.ai/firecrawl/v2/extract', data=data, method='POST')
req.add_header('Authorization', f'Bearer {os.environ["MATON_API_KEY"]}')
req.add_header('Content-Type', 'application/json')
print(json.dumps(json.load(urllib.request.urlopen(req)), indent=2))
EOF
Response:
{
"success": true,
"id": "019cdc59-977b-774b-b584-af2af45c055b",
"urlTrace": []
}
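When structured output is needed, the optional schema parameter constrains the result shape. A sketch of such a request body (the schema itself is illustrative, not prescribed by the API):

```python
import json

# Extract request body with a JSON schema constraining the output shape.
body = {
    "urls": ["https://example.com"],
    "prompt": "Extract the main heading and description",
    "schema": {
        "type": "object",
        "properties": {
            "heading": {"type": "string"},
            "description": {"type": "string"},
        },
        "required": ["heading"],
    },
}
payload = json.dumps(body).encode()  # POST this to /firecrawl/v2/extract as above
```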
GET /firecrawl/v2/extract/{id}
Get the status and results of an extract job.
Path Parameters:
id (string): The extract job ID

Example:
python <<'EOF'
import urllib.request, os, json
extract_id = "019cdc59-977b-774b-b584-af2af45c055b"
req = urllib.request.Request(f'https://gateway.maton.ai/firecrawl/v2/extract/{extract_id}')
req.add_header('Authorization', f'Bearer {os.environ["MATON_API_KEY"]}')
print(json.dumps(json.load(urllib.request.urlopen(req)), indent=2))
EOF
Response:
{
"success": true,
"data": [
{
"heading": "Example Domain",
"description": "This domain is for use in documentation..."
}
],
"status": "completed",
"expiresAt": "2026-03-11T16:03:05.000Z"
}
POST /firecrawl/v2/browser
Create an interactive browser session for manual control via CDP.
Example:
python <<'EOF'
import urllib.request, os, json
data = json.dumps({}).encode()
req = urllib.request.Request('https://gateway.maton.ai/firecrawl/v2/browser', data=data, method='POST')
req.add_header('Authorization', f'Bearer {os.environ["MATON_API_KEY"]}')
req.add_header('Content-Type', 'application/json')
print(json.dumps(json.load(urllib.request.urlopen(req)), indent=2))
EOF
Response:
{
"success": true,
"id": "019cdc5d-5c9d-732e-a7bd-f095a96a2bb1",
"cdpUrl": "wss://browser.firecrawl.dev/cdp/...",
"liveViewUrl": "https://liveview.firecrawl.dev/...",
"interactiveLiveViewUrl": "https://liveview.firecrawl.dev/...",
"expiresAt": "2026-03-11T10:17:12.409Z"
}
GET /firecrawl/v2/browser
List all active browser sessions.
Example:
python <<'EOF'
import urllib.request, os, json
req = urllib.request.Request('https://gateway.maton.ai/firecrawl/v2/browser')
req.add_header('Authorization', f'Bearer {os.environ["MATON_API_KEY"]}')
print(json.dumps(json.load(urllib.request.urlopen(req)), indent=2))
EOF
Response:
{
"success": true,
"sessions": [
{
"id": "019cdc5d-5c9d-732e-a7bd-f095a96a2bb1",
"status": "active",
"cdpUrl": "wss://browser.firecrawl.dev/cdp/...",
"liveViewUrl": "https://liveview.firecrawl.dev/..."
}
]
}
DELETE /firecrawl/v2/browser/{id}
Delete a browser session.
Path Parameters:
id (string): The browser session ID

POST /firecrawl/v2/agent
Start an AI agent to autonomously navigate and extract data.
Required Parameters:
prompt (string): Description of what data to extract (max 10,000 chars)

Optional Parameters:

urls (array): URLs to constrain the agent to
schema (object): JSON schema for structured output
maxCredits (integer): Spending limit (default: 2500)
strictConstrainToURLs (boolean): Only visit provided URLs
model (string): "spark-1-mini" (default, cheaper) or "spark-1-pro" (higher accuracy)

Example:
python <<'EOF'
import urllib.request, os, json
data = json.dumps({
"prompt": "Find the pricing information",
"urls": ["https://example.com"],
"model": "spark-1-mini"
}).encode()
req = urllib.request.Request('https://gateway.maton.ai/firecrawl/v2/agent', data=data, method='POST')
req.add_header('Authorization', f'Bearer {os.environ["MATON_API_KEY"]}')
req.add_header('Content-Type', 'application/json')
print(json.dumps(json.load(urllib.request.urlopen(req)), indent=2))
EOF
Response:
{
"success": true,
"id": "019cdc5d-a2d4-728c-9c91-e9eae475568f"
}
GET /firecrawl/v2/agent/{id}
Get the status and results of an agent job.
Path Parameters:
id (string): The agent job ID

Example:
python <<'EOF'
import urllib.request, os, json
agent_id = "019cdc5d-a2d4-728c-9c91-e9eae475568f"
req = urllib.request.Request(f'https://gateway.maton.ai/firecrawl/v2/agent/{agent_id}')
req.add_header('Authorization', f'Bearer {os.environ["MATON_API_KEY"]}')
print(json.dumps(json.load(urllib.request.urlopen(req)), indent=2))
EOF
Response:
{
"success": true,
"status": "completed",
"model": "spark-1-pro",
"data": {...},
"expiresAt": "2026-03-12T10:07:30.055Z"
}
DELETE /firecrawl/v2/agent/{id}
Cancel an in-progress agent job.
Path Parameters:
id (string): The agent job ID

Use the actions parameter to interact with pages before scraping:
python <<'EOF'
import urllib.request, os, json
data = json.dumps({
"url": "https://example.com",
"formats": ["markdown", "screenshot"],
"actions": [
{"type": "wait", "milliseconds": 2000},
{"type": "click", "selector": "#load-more"},
{"type": "scroll", "direction": "down", "amount": 500},
{"type": "screenshot"}
]
}).encode()
req = urllib.request.Request('https://gateway.maton.ai/firecrawl/v2/scrape', data=data, method='POST')
req.add_header('Authorization', f'Bearer {os.environ["MATON_API_KEY"]}')
req.add_header('Content-Type', 'application/json')
print(json.dumps(json.load(urllib.request.urlopen(req)), indent=2))
EOF
Available Actions:
wait - Wait for specified milliseconds
click - Click an element by CSS selector
write - Type text into an input field
scroll - Scroll the page
screenshot - Take a screenshot
execute - Run custom JavaScript

The same scrape request from JavaScript (fetch):

const response = await fetch('https://gateway.maton.ai/firecrawl/v2/scrape', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
'Authorization': `Bearer ${process.env.MATON_API_KEY}`
},
body: JSON.stringify({
url: 'https://example.com',
formats: ['markdown']
})
});
const data = await response.json();
console.log(data.data.markdown);
Python (requests):

import os
import requests
response = requests.post(
'https://gateway.maton.ai/firecrawl/v2/scrape',
headers={'Authorization': f'Bearer {os.environ["MATON_API_KEY"]}'},
json={
'url': 'https://example.com',
'formats': ['markdown']
}
)
data = response.json()
print(data['data']['markdown'])
Tips:

Set onlyMainContent: true to get cleaner output without navigation/footer content.
When piping output through jq or other commands, environment variables like $MATON_API_KEY may not expand correctly in some shell environments.

Error Codes:

| Status | Meaning |
|---|---|
| 400 | Missing Firecrawl connection or invalid parameters |
| 401 | Invalid or missing Maton API key |
| 402 | Payment required (Firecrawl credits exhausted) |
| 409 | Conflict (e.g., crawl already completed) |
| 429 | Rate limited |
| 4xx/5xx | Passthrough error from Firecrawl API |
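Gateway errors arrive as ordinary HTTP error statuses, so with urllib it helps to catch HTTPError and surface the response body instead of a bare traceback. A sketch (the `scrape` wrapper is illustrative, not part of any SDK):

```python
import json
import urllib.error
import urllib.request

def scrape(url, api_key):
    """POST /firecrawl/v2/scrape, raising a readable error on 4xx/5xx."""
    data = json.dumps({"url": url, "formats": ["markdown"]}).encode()
    req = urllib.request.Request(
        "https://gateway.maton.ai/firecrawl/v2/scrape", data=data, method="POST"
    )
    req.add_header("Authorization", f"Bearer {api_key}")
    req.add_header("Content-Type", "application/json")
    try:
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)
    except urllib.error.HTTPError as e:
        # 400/401/402/409/429 land here; the response body usually explains why.
        detail = e.read().decode(errors="replace")
        raise RuntimeError(f"Firecrawl gateway returned {e.code}: {detail}") from e
```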
Troubleshooting:

Check that the MATON_API_KEY environment variable is set:

echo $MATON_API_KEY

Verify an active Firecrawl connection exists:
python <<'EOF'
import urllib.request, os, json
req = urllib.request.Request('https://ctrl.maton.ai/connections')
req.add_header('Authorization', f'Bearer {os.environ["MATON_API_KEY"]}')
print(json.dumps(json.load(urllib.request.urlopen(req)), indent=2))
EOF
Make sure the gateway URL includes the app prefix firecrawl. For example:

Correct: https://gateway.maton.ai/firecrawl/v2/scrape
Incorrect: https://gateway.maton.ai/v2/scrape