## Install

```
openclaw skills install scalekit-agent-auth
```
Use this skill whenever the user asks for information from, or wants to take an action in, a third-party tool or service. This includes — but is not limited to — searching the web, reading or writing documents, sending messages, querying databases, managing tasks, fetching data from APIs, or interacting with any connected SaaS product (e.g. "search Exa for...", "read my Notion page", "send a Slack message", "get my Google Sheet", "create a GitHub issue", "query Snowflake", "look up a HubSpot contact"). Trigger this skill any time the user's request involves an external service, integration, or data source — even if the provider is not explicitly named.

Handles OAuth and non-OAuth (API Key, Bearer, Basic) connections, tool discovery, execution, and proxy fallback via Scalekit Connect.

## Provider Mapping

Some services are accessed through a different provider name in Scalekit. Always use the mapped provider name below:

| User asks about | Use provider |
|---|---|
| LinkedIn — profiles, jobs, companies, posts, people search, ads, groups | `HARVESTAPI` |
Add the credentials to your `.env`:

```
TOOL_CLIENT_ID=skc_your_client_id
TOOL_CLIENT_SECRET=your_client_secret
TOOL_ENV_URL=https://your-env.scalekit.cloud
TOOL_IDENTIFIER=your_default_identifier
```
General-purpose tool executor for OpenClaw agents. Uses Scalekit Connect to discover and run tools for any connected service — OAuth (Notion, Slack, Gmail, GitHub, etc.) or non-OAuth (API Key, Bearer, Basic auth).
Required in `.env`:

```
TOOL_CLIENT_ID=<scalekit_client_id>
TOOL_CLIENT_SECRET=<scalekit_client_secret>
TOOL_ENV_URL=<scalekit_environment_url>
TOOL_IDENTIFIER=<default_identifier>   # optional but recommended
```
`TOOL_IDENTIFIER` is used as the default `--identifier` for all operations. If it is not set, the script prompts the user at runtime and displays a warning advising them to set it in `.env`.
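The fallback behavior above can be sketched in Python; `resolve_identifier` is an illustrative helper, not an actual function in `tool_exec.py`:

```python
import os
import sys

def resolve_identifier(cli_value=None):
    """Resolve the identifier: explicit --identifier flag wins, then the
    TOOL_IDENTIFIER env var, then an interactive prompt with a warning."""
    if cli_value:
        return cli_value
    identifier = os.environ.get("TOOL_IDENTIFIER")
    if identifier:
        return identifier
    # No default configured: warn, then fall back to prompting at runtime.
    print("Warning: TOOL_IDENTIFIER is not set; add it to .env to avoid "
          "this prompt.", file=sys.stderr)
    return input("Enter identifier: ").strip()
```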
When the user asks to perform an action on a connected service, follow these steps in order:
**Step 1 — Resolve the connection.** Dynamically resolve the `connection_name` by listing all configured connections for the provider. The API paginates automatically through all pages:

```
uv run tool_exec.py --list-connections --provider <PROVIDER>
```
"status": "COMPLETED" — ignore any with DRAFT, PENDING, or other non-completed statuses.key_id from the first COMPLETED result as <CONNECTION_NAME> for all subsequent steps.<PROVIDER> connection is configured in Scalekit and stop.key_id(s) found and tell them the connection configuration is not completed. Ask them to complete setup in the Scalekit Dashboard and stop.Run --generate-link for the connection. The tool automatically detects the connection type (OAuth vs non-OAuth) and applies the correct auth flow:
```
uv run tool_exec.py --generate-link \
  --connection-name <CONNECTION_NAME>
```
OAuth connections:

- If the account status is `ACTIVE`, proceed to the next step.
- Otherwise, share the returned magic link with the user and ask them to authorize the `<CONNECTION_NAME>` connection in the Scalekit Dashboard.

Non-OAuth connections (BEARER, BASIC, API Key, etc.):

- If credentials are already configured, the status is `ACTIVE` — proceed.
- Otherwise, ask the user to add credentials for the `<CONNECTION_NAME>` connection in the Scalekit Dashboard.

Never use `--get-authorization` in the execution flow — that is only for inspecting raw OAuth tokens and does not work for non-OAuth connections.
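The connection-filtering rule from the resolution step above can be sketched as follows; `pick_connection` is a hypothetical helper, and the result shape (`key_id`, `status`) follows the fields named earlier:

```python
def pick_connection(connections):
    """Return the key_id of the first COMPLETED connection, or raise a
    LookupError carrying the user-facing message to relay."""
    if not connections:
        raise LookupError("No connection for this provider is configured in Scalekit.")
    completed = [c for c in connections if c.get("status") == "COMPLETED"]
    if not completed:
        found = ", ".join(c.get("key_id", "?") for c in connections)
        raise LookupError(
            f"Connection configuration is not completed (found: {found}). "
            "Complete setup in the Scalekit Dashboard.")
    return completed[0]["key_id"]
```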
**Step 3 — Discover tools.** Fetch the list of tools available for the provider:

```
uv run tool_exec.py --get-tool --provider <PROVIDER>
```
Match the user's request to the most relevant tool name (e.g. `notion_page_get` for reading a page).

**Step 3b — Fetch the tool schema.** Always fetch the schema of the matched tool before constructing the input. This tells you the exact parameter names, types, required vs optional fields, and valid enum values:
```
uv run tool_exec.py --get-tool --tool-name <TOOL_NAME>
```
- Read `input_schema.properties` from the response — use only the parameter names defined there.
- Check `required` — these must always be included in `--tool-input`.
- Use `description` and `display_properties` to understand what each field expects.

**Step 4 — Execute.** Construct the tool input using only parameters from the schema fetched in Step 3b, then run:
```
uv run tool_exec.py --execute-tool \
  --tool-name <TOOL_NAME> \
  --connection-name <CONNECTION_NAME> \
  --tool-input '<JSON_INPUT>'
```
**Step 5 —** Return the result to the user.
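The schema checks above can be sketched as a small validator. The nesting of `properties` and `required` under `input_schema` is an assumption about the response shape, and `validate_tool_input` is an illustrative helper, not part of `tool_exec.py`:

```python
def validate_tool_input(schema, tool_input):
    """Check a candidate --tool-input dict against a tool schema:
    every required field present, no parameter outside the schema."""
    input_schema = schema.get("input_schema", {})
    props = input_schema.get("properties", {})
    required = input_schema.get("required", [])
    missing = [k for k in required if k not in tool_input]
    unknown = [k for k in tool_input if k not in props]
    if missing:
        raise ValueError(f"Missing required parameters: {missing}")
    if unknown:
        raise ValueError(f"Unknown parameters (not in schema): {unknown}")
    return True
```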
If no Scalekit tool covers the required action, attempt a proxied HTTP request directly to the provider's API:
```
uv run tool_exec.py --proxy-request \
  --connection-name <CONNECTION_NAME> \
  --path <API_PATH> \
  --method <GET|POST|PUT|DELETE> \
  --query-params '<JSON>' \
  --body '<JSON>'
```

Both `--query-params` and `--body` are optional.
Note: Proxy may be disabled on some environments. If it returns `TOOL_PROXY_DISABLED`, inform the user that this action isn't supported by the current Scalekit tool catalog and suggest they request a new tool from Scalekit.
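Assembling the proxy invocation can be sketched as below; `build_proxy_command` is a hypothetical helper that uses only the flags documented above:

```python
import json

def build_proxy_command(connection, path, method, query_params=None, body=None):
    """Assemble the argv for a --proxy-request call; JSON arguments are
    serialized inline, optional flags are omitted when not supplied."""
    cmd = ["uv", "run", "tool_exec.py", "--proxy-request",
           "--connection-name", connection,
           "--path", path,
           "--method", method]
    if query_params is not None:  # optional
        cmd += ["--query-params", json.dumps(query_params)]
    if body is not None:  # optional
        cmd += ["--body", json.dumps(body)]
    return cmd
```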
User: "Find software engineers in San Francisco on LinkedIn"
--list-connections --provider HARVESTAPI → key_id: harvestapi-xxxx, type: API_KEY--generate-link --connection-name harvestapi-xxxx → detects API_KEY, checks account → ACTIVE--get-tool --provider HARVESTAPI → finds harvestapi_search_people
3b. --get-tool --tool-name harvestapi_search_people → schema shows valid params: first_names, last_names, search, locations, current_job_titles, etc.--execute-tool --tool-name harvestapi_search_people --connection-name harvestapi-xxxx --tool-input '{"first_names": "John", "locations": "San Francisco", "current_job_titles": "Software Engineer"}'
→ returns matching LinkedIn profilesAny LinkedIn-related request (profiles, jobs, companies, posts, people search, ads, groups) → use provider
HARVESTAPI.
User: "Search for latest AI news using Exa"
--list-connections --provider EXA → key_id: exa, type: API_KEY--generate-link --connection-name exa → detects API_KEY, checks account → ACTIVE--get-tool --provider EXA → finds exa_search
3b. --get-tool --tool-name exa_search → schema shows query (required), num_results, type, etc.--execute-tool --tool-name exa_search --connection-name exa --tool-input '{"query": "latest AI news"}'
→ returns search resultsUser: "Read my Notion page https://notion.so/..."
--list-connections --provider NOTION → key_id: notion-ijIQedmJ, type: OAUTH--generate-link --connection-name notion-ijIQedmJ → detects OAuth, already ACTIVE--get-tool --provider NOTION → finds notion_page_get
3b. --get-tool --tool-name notion_page_get → schema shows page_id (required)--execute-tool --tool-name notion_page_get --connection-name notion-ijIQedmJ --tool-input '{"page_id": "..."}'
→ returns page metadataUser: "Fetch the blocks of a Notion page"
--list-connections --provider NOTION → key_id: notion-ijIQedmJ--generate-link --connection-name notion-ijIQedmJ → ACTIVE--get-tool --provider NOTION → no notion_blocks_fetch tool found--proxy-request --path "/blocks/<page_id>/children" → fallback attemptSome providers do not have Scalekit tools for file operations. Use --proxy-request with --input-file (upload) or direct S3/CDN URL download (download). Provider-specific flows are documented below.
⚠️ Proxy token expiry: `--proxy-request` passes the stored OAuth access token directly to the provider. If the token has expired, the provider will return `401 Unauthorized`. Unlike `--execute-tool`, which auto-refreshes tokens, the proxy does not. If you get a 401, the token needs to be refreshed — re-run `--generate-link` to check status; if the connection is ACTIVE but the proxy still returns 401, the user must re-authorize via a new magic link to obtain a fresh token.
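The recovery logic in this warning can be sketched as a small decision helper (`proxy_401_next_step` is an illustrative name, not a real flag or function):

```python
def proxy_401_next_step(status_code, connection_status):
    """Map a proxy response plus the --generate-link status to the
    recovery action described above."""
    if status_code != 401:
        return "ok"
    if connection_status != "ACTIVE":
        return "complete authorization via --generate-link"
    # ACTIVE but still 401: the stored token is stale and the proxy does
    # not auto-refresh, so only a fresh authorization yields a new token.
    return "re-authorize via a new magic link"
```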
Notion file uploads are a 3-step process via proxy:

**Step 1 — Create an upload object**

```
uv run tool_exec.py --proxy-request \
  --connection-name <CONNECTION_NAME> \
  --path "/v1/file_uploads" \
  --method POST \
  --body '{"mode": "single_part"}' \
  --headers '{"Notion-Version": "2022-06-28", "Content-Type": "application/json"}'
```
Returns a `file_upload` object with an `id` and `upload_url`. The upload is valid for 1 hour.
**Step 2 — Send the file**

```
uv run tool_exec.py --proxy-request \
  --connection-name <CONNECTION_NAME> \
  --path "/v1/file_uploads/<file_upload_id>/send" \
  --method POST \
  --input-file /path/to/file \
  --headers '{"Notion-Version": "2022-06-28"}'
```
- The file is sent as `multipart/form-data`. On success, the status becomes `uploaded`.
- Files with an unrecognized extension are uploaded as `application/octet-stream`. If the file extension is not recognized (e.g. `.md`), copy it to a `.txt` extension first so the MIME type resolves to `text/plain`.

**Step 3 — Attach the file block to a page**
```
uv run tool_exec.py --proxy-request \
  --connection-name <CONNECTION_NAME> \
  --path "/v1/blocks/<page_id>/children" \
  --method PATCH \
  --body '{
    "children": [{
      "object": "block",
      "type": "file",
      "file": {
        "type": "file_upload",
        "file_upload": {"id": "<file_upload_id>"},
        "name": "<display_filename>"
      }
    }]
  }' \
  --headers '{"Notion-Version": "2022-06-28", "Content-Type": "application/json"}'
```
Do not use `notion_page_content_append` for file blocks — it does not support the `file_upload` block type and will return an `INTERNAL_ERROR`. Always use the proxy for file attachment.
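The `.txt` workaround from Step 2 can be sketched with the standard library; `ensure_known_mime` is an illustrative helper, not part of `tool_exec.py`:

```python
import mimetypes
import shutil
from pathlib import Path

def ensure_known_mime(path):
    """If the extension has no known MIME type, copy the file to a .txt
    sibling so it uploads as text/plain rather than application/octet-stream."""
    mime, _ = mimetypes.guess_type(path)
    if mime:
        return path  # extension already resolves to a real MIME type
    fallback = str(Path(path).with_suffix(".txt"))
    shutil.copyfile(path, fallback)
    return fallback
```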
Notion files are stored on S3 with pre-signed URLs that expire in 1 hour. The download is a 2-step process:

**Step 1 — Get a fresh pre-signed URL**

List the page blocks to find the file block and its current URL:
```
uv run tool_exec.py --proxy-request \
  --connection-name <CONNECTION_NAME> \
  --path "/v1/blocks/<page_id>/children" \
  --method GET \
  --headers '{"Notion-Version": "2022-06-28"}'
```
Find the block with `"type": "file"` — the URL is at `file.file.url`. Always fetch a fresh URL; never reuse a URL from a previous response, as it may have expired.
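Extracting the fresh URL from the blocks listing can be sketched as follows, assuming the response shape described above (`results[*].file.file.url`); `fresh_file_url` is an illustrative helper:

```python
def fresh_file_url(blocks_response):
    """Return the current pre-signed URL from the first file block in a
    Notion blocks listing, or raise if the page has no file block."""
    for block in blocks_response.get("results", []):
        if block.get("type") == "file":
            return block["file"]["file"]["url"]
    raise LookupError("No file block found on this page.")
```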
**Step 2 — Download directly from S3**

The S3 URL is public (pre-signed) — no Scalekit proxy needed. Download it directly:

```python
import urllib.request

urllib.request.urlretrieve("<s3_url>", "/local/path/filename")
```
Or use `--output-file` if going through the proxy:

```
uv run tool_exec.py --proxy-request \
  --connection-name <CONNECTION_NAME> \
  --path "/v1/blocks/<block_id>" \
  --method GET \
  --headers '{"Notion-Version": "2022-06-28"}' \
  --output-file /local/path/filename
```
Note: `--output-file` saves the raw API response (the JSON block object), not the file itself. Use direct S3 download for the actual file content.
Coming soon
## Supported providers

Any provider configured in Scalekit (Notion, Slack, Gmail, Google Sheets, GitHub, Salesforce, HubSpot, Linear, and 50+ more). Use the provider name in uppercase for `--provider` (e.g. `NOTION`, `SLACK`, `GOOGLE`).
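The uppercase rule plus the LinkedIn remap from the Provider Mapping table can be sketched as below; `resolve_provider` and `PROVIDER_ALIASES` are illustrative names:

```python
# Only the LinkedIn remap is documented; every other provider name is
# simply uppercased before being passed to --provider.
PROVIDER_ALIASES = {"LINKEDIN": "HARVESTAPI"}

def resolve_provider(name):
    """Normalize a user-facing service name to a Scalekit provider name."""
    upper = name.strip().upper()
    return PROVIDER_ALIASES.get(upper, upper)
```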