## Install

```
openclaw skills install knods
```

Build and modify Knods visual AI workflows using either the OpenClaw Gateway polling protocol or the Knods headless flows API.

## When to use

Handle two Knods modes:

- Use the polling bridge for Knods Iris/chat payloads.
- Use the headless API when the task is to discover a flow, inspect inputs, run it, wait, cancel, or fetch outputs programmatically.
## Polling bridge protocol

Each polling payload carries `messageId`, `message`, and `history`, and the response must stream back with optional `[KNODS_ACTION]...[/KNODS_ACTION]` blocks.

- Treat `message` as the primary request; use `history` for continuity.
- The first message may carry a context `message` describing node types and action rules. Always prefer the node catalog from that context over the defaults below.
- Use `messageId` to map all response chunks to the correct message.

### Canvas actions

- Use `addNode` for single-node additions:
  `[KNODS_ACTION]{"action":"addNode","nodeType":"FluxImage"}[/KNODS_ACTION]`
- Use `addFlow` for multi-node workflows or any request requiring edges:
  `[KNODS_ACTION]{"action":"addFlow","nodes":[...],"edges":[...]}[/KNODS_ACTION]`
- Use `"nodeType"` (not `"type"`) in node objects. Do NOT include position or data fields; Knods handles layout automatically.
- For `addFlow`, ensure every edge source and target references an existing node id.
- End every flow at an `Output` node.
- Use short sequential node ids (`n1`, `n2`, `n3`) so follow-up edits are easy.
- Emit `[KNODS_ACTION]...[/KNODS_ACTION]` inline only when a canvas mutation is intended.

### Responding

- Post response chunks to `/respond` for the same `messageId`.
- Send `{"messageId":"...","done":true}` when complete.

## Headless flows API

- List flows: `python3 {baseDir}/scripts/knods_headless.py list`
- Resolve a flow from text: `python3 {baseDir}/scripts/knods_headless.py resolve --query "<text>"`
- Inspect a flow: `python3 {baseDir}/scripts/knods_headless.py get --flow-id "<flowId>"`; read its `inputs` and preserve every `nodeId` exactly.
- Pass `inputs` as a JSON array with `nodeId`, `content`, and `type`.
- Run: `python3 {baseDir}/scripts/knods_headless.py run --flow-id "<flowId>" --inputs-json '[...]'`
- Wait: `python3 {baseDir}/scripts/knods_headless.py wait --run-id "<runId>"`
- Run and wait in one step: `python3 {baseDir}/scripts/knods_headless.py run-wait --flow-id "<flowId>" --inputs-json '[...]'`
- On `completed`, read `outputs`. On `failed`, surface `error.message` and `error.nodeId` if present.

IMPORTANT: Every generator node listed below has a built-in prompt textarea. Do NOT add a DocumentPanel before a single generator; just connect the generator directly to an Output. Only use DocumentPanel when one shared prompt feeds multiple generators in parallel.
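The addFlow rules above (a `nodeType` key on every node, edges that reference existing ids, no position or data fields, a terminal Output node) can be checked mechanically before an action is emitted. The validator below is a hypothetical sketch, not part of the skill's scripts, and assumes only the minimal node shape shown in the examples:

```python
# Hypothetical pre-flight check for an addFlow payload, based on the rules above.

def validate_add_flow(payload: dict) -> list[str]:
    """Return a list of rule violations; an empty list means the payload looks valid."""
    errors = []
    nodes = payload.get("nodes", [])
    edges = payload.get("edges", [])
    node_ids = {n.get("id") for n in nodes}

    for n in nodes:
        # Nodes must use "nodeType", never "type".
        if "type" in n or "nodeType" not in n:
            errors.append(f"node {n.get('id')}: use 'nodeType', not 'type'")
        # Layout is handled by Knods, so position/data must be omitted.
        if "position" in n or "data" in n:
            errors.append(f"node {n.get('id')}: omit position/data fields")

    for e in edges:
        # Every edge endpoint must reference an existing node id.
        if e.get("source") not in node_ids or e.get("target") not in node_ids:
            errors.append(f"edge {e.get('source')}->{e.get('target')}: unknown node id")

    # Every flow must terminate in an Output node.
    if not any(n.get("nodeType") == "Output" for n in nodes):
        errors.append("flow must end in an Output node")
    return errors


flow = {
    "action": "addFlow",
    "nodes": [
        {"id": "n1", "nodeType": "FluxImage"},
        {"id": "n2", "nodeType": "Output"},
    ],
    "edges": [{"source": "n1", "target": "n2"}],
}
assert validate_add_flow(flow) == []
```

Running the check before streaming the `[KNODS_ACTION]` block keeps malformed flows from ever reaching the canvas.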
## Default node catalog

When the first message includes a node catalog context, always use that list over these defaults. The context catalog is always more up-to-date.

### Text generators

All text generators accept text + image input and have a built-in prompt textarea.

- ChatGPT — OpenAI models. Best all-rounder.
- Claude — Anthropic models. Great for reasoning and creative writing.

### Image generators

All image generators have a built-in prompt textarea and accept optional image input for image-to-image editing.

- GPTImage — OpenAI. Best at following complex instructions and text rendering.
- FluxImage — FLUX by Black Forest Labs. Industry-leading quality for portraits and artistic styles. Fast.
- ImagePrompt — Google Gemini. Great for photorealistic images and concept art.
- ZImageTurbo — Lightning-fast (<2 seconds). Best for rapid prototyping.
- QwenImage — Alibaba Qwen. Strong at anime, illustrations, and Asian-inspired aesthetics.
- Seedream — ByteDance. Dreamy, surreal compositions. Good at text rendering in images.
- GrokImage — xAI. Text-to-image and image editing.

### Video generators

All video generators below have a built-in prompt textarea and support both text-to-video and image-to-video (connect an ImagePanel for image-to-video).

- Veo3FalAI — Google Veo 3.1. Cinematic video up to 8s with native audio. Best overall quality.
- Sora2Video — OpenAI Sora 2. Realistic motion and physics, up to 12s.
- Kling26Video — Kling 2.6 Surreal Engine. Cinematic with audio, up to 10s.
- KlingO3Video — Kling 3.0. Latest generation, Standard/Pro quality, up to 10s.
- Wan26Video — Wan 2.6. Multi-shot videos, 720p/1080p, up to 15s.
- LTXVideo — LTX-2 Pro. High-fidelity cinematic with synchronized audio.
- GrokVideo — xAI. Video with native audio.
- WanAnimateVideo — Character animation. REQUIRES two inputs: a VIDEO (motion reference) + an IMAGE (character to animate). Does NOT have a text prompt. Only use when the user wants to animate a character image using motion from another video.

### Utility nodes

- ImagePanel — Upload or paste an image. Output: image. Use when the user wants to provide a reference image or a starting frame for image-to-video.
- DocumentPanel — Editable text container. Output: text. Use ONLY when one shared prompt feeds multiple generators in parallel.
- Output — Displays generated results (text, image, video). REQUIRED at the end of every flow.

Set `initialData` only when user intent clearly implies parameters.

## Flow examples

Single image generator (most common):
```
FluxImage → Output
```

Image from reference photo:

```
ImagePanel → GPTImage → Output
```

One prompt feeding two image generators:

```
DocumentPanel → FluxImage → Output
DocumentPanel → GPTImage → Output
```

Text-to-video:

```
Veo3FalAI → Output
```

Image-to-video (animate a still image):

```
ImagePanel → Veo3FalAI → Output
```

Character animation from video motion (WanAnimateVideo needs both video + image):

```
ImagePanel → WanAnimateVideo → Output
[video source] → WanAnimateVideo
```

Text generation:

```
ChatGPT → Output
```
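As a concrete sketch, the image-to-video example above can be emitted as a single addFlow action. The helper name below is illustrative, and the nodes carry only `id` plus `nodeType`, matching the minimal shape the action examples show:

```python
import json

def make_add_flow_action(nodes, edges) -> str:
    """Wrap an addFlow payload in the [KNODS_ACTION] markers streamed back to Knods."""
    payload = {"action": "addFlow", "nodes": nodes, "edges": edges}
    return f"[KNODS_ACTION]{json.dumps(payload, separators=(',', ':'))}[/KNODS_ACTION]"

# ImagePanel → Veo3FalAI → Output, using short sequential node ids.
action = make_add_flow_action(
    nodes=[
        {"id": "n1", "nodeType": "ImagePanel"},
        {"id": "n2", "nodeType": "Veo3FalAI"},
        {"id": "n3", "nodeType": "Output"},
    ],
    edges=[
        {"source": "n1", "target": "n2"},
        {"source": "n2", "target": "n3"},
    ],
)
print(action)
```

Because the payload is built as a dict and serialized once, quoting and key names stay consistent across every action the bridge emits.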
## Gateway and runtime notes

- Reuse the same `messageId` across all chunk posts for a turn.
- Authenticate with the `gw_...` token via the query parameter `token`; never require a Supabase JWT in this flow.

When running a persistent poller service/process:

- Either `KNODS_BASE_URL` already includes `/updates?token=...`,
- or `KNODS_BASE_URL` points to the connection base and the token is supplied separately (`KNODS_GATEWAY_TOKEN`).
- Post to `/respond` from the same connection root as `/updates`.
- Log `messageId` values and transport errors for debugging.
For headless API operations:

- Set `KNODS_API_BASE_URL` + `KNODS_API_KEY` (this is separate from the gateway's `KNODS_BASE_URL`).
- `KNODS_API_BASE_URL` should look like `https://<instance>/api/v1`.
- `KNODS_API_KEY` must have `knods:read` and `knods:run`.

## Scripts

This skill ships the runtime bridge and installer:

- `scripts/knods_iris_bridge.py`
- `scripts/knods_headless.py`
- `scripts/install_local.sh`

Install/deploy from the skill folder:
```
bash /home/rolf/.openclaw/skills/knods/scripts/install_local.sh
```

The installer deploys:

- `~/.openclaw/scripts/knods_iris_bridge.py`
- `~/.config/systemd/user/knods-iris-bridge.service`

Then runs:

```
systemctl --user daemon-reload
systemctl --user enable --now knods-iris-bridge.service
```

## Configuration

Set these in `~/.openclaw/.env`:

- `KNODS_BASE_URL`
- `KNODS_GATEWAY_TOKEN` (only if `KNODS_BASE_URL` does not already include `?token=...`)
- `KNODS_API_KEY`
- `KNODS_API_BASE_URL`
- `OPENCLAW_AGENT_ID` (default: `iris`)
- `OPENCLAW_BIN` (default: `openclaw` on PATH)

Manage the service with:

```
systemctl --user status knods-iris-bridge.service
systemctl --user restart knods-iris-bridge.service
journalctl --user -u knods-iris-bridge.service -f
```

After changing gateway URL/token env values, restart the running bridge process so it reloads config.
```
systemctl --user restart <knods-bridge-service>
```

Do not assume env changes are picked up live without restart.
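Putting the configuration together, a populated `~/.openclaw/.env` might look like the following sketch (hostnames, ids, and key values are placeholders, not real endpoints or credentials):

```
# Gateway polling bridge; token kept separate because KNODS_BASE_URL has no ?token=...
KNODS_BASE_URL=https://gateway.example.com/connections/abc123
KNODS_GATEWAY_TOKEN=gw_xxxxxxxxxxxx

# Headless flows API
KNODS_API_BASE_URL=https://knods.example.com/api/v1
KNODS_API_KEY=xxxxxxxxxxxx

# Optional overrides
OPENCLAW_AGENT_ID=iris
OPENCLAW_BIN=openclaw
```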
## References

- `references/protocol.md`: canonical polling endpoints, payload schemas, and action examples.
- `references/headless-api.md`: the direct run/list/poll/cancel flow execution API.