## Install

```bash
openclaw skills install opencreator-skills
```

Operate and build OpenCreator workflows via API. Use when the user wants to search templates, run workflows, poll results, deliver generated media, or design...

## When to use

Use this skill when the task involves any of:
```text
User request
│
├─ Run template / get results / "make XX for me" ──► Operate Mode (default)
│
└─ Create workflow / edit graph / "build from scratch" / no suitable template ──► Build Mode
```
Always try Operate Mode first. Switch to Build Mode only when no suitable template exists or the user explicitly asks to create or edit a workflow graph.
If a task needs both, do Build first (produce the graph), then Operate (run it).
## Operate Mode

Must read: references/api-workflows.md

This single file covers the complete Operate flow: search templates, resolve the flow_id, run the workflow, poll results, and deliver the generated media.

Supplementary (read only when you need deeper tactics):

- references/best-practices.md — template-first strategy and design principles

Hard rules:

- inputs must be flat: { "node_id": "value" } — never wrap values in an extra object (see the sketch below).
- Never expose node_id / inputText / imageBase64 to users — use business language.
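A minimal sketch of the flat-inputs rule. The node IDs and values below are made up for illustration; only the flat { "node_id": "value" } shape comes from this doc:

```jsonc
// Correct: "inputs" is one flat map of node_id -> value.
// (imageInput_1 / textInput_1 are hypothetical IDs.)
{
  "inputs": {
    "imageInput_1": "<base64 image>",
    "textInput_1": "red sneaker on a white background"
  }
}

// Wrong: value wrapped in an extra object.
{
  "inputs": {
    "imageInput_1": { "imageBase64": "<base64 image>" }
  }
}
```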
## Build Mode

When building or editing a workflow graph, follow these four steps in order. Do not skip any step.

### Step 1: Reverse plan

Work backward from the user's final deliverable to identify the abstract structure and module dependencies. Answer these questions first:
Must read:

- references/step-1-reverse-plan/workflow-reverse-planner.md
- references/node-catalog.md

Output: Macro Format + Dependency Graph.
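As a rough illustration only (the real Macro Format is defined in workflow-reverse-planner.md), a dependency graph for a short product-video task could be sketched like this; every field and module name here is hypothetical:

```jsonc
// Hypothetical dependency graph, built by working backward
// from the deliverable. Authoritative format:
// references/step-1-reverse-plan/workflow-reverse-planner.md
{
  "deliverable": "15s product video",
  "modules": [
    { "id": "script", "needs": [] },
    { "id": "frames", "needs": ["script"] },          // key frames depend on the script
    { "id": "video",  "needs": ["frames", "script"] } // final render depends on both
  ]
}
```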
### Step 2: Generators

Map abstract modules to concrete generators and plan edges + naming.

Must read:

- references/step-1-reverse-plan/generator-wiring-naming-planner.md
- references/step-1-reverse-plan/generator-routing.md

Then read the matching file in references/step-2-generators/ (see routing table below).
### Step 3: Models

Choose models, fill selectedModels and parameters for each node.

Hard rules before choosing any model:

- Use the Confirmed model IDs tables in each Step 3 file as the source of truth for model IDs.
- Never turn a marketing name (Sora 2, GPT Image 1.5, Seedream 5.0 Lite) into a guessed model ID.
- Fall back to references/node-catalog.md as the source of truth for nodes without a dedicated Step 3 file.
- Put only confirmed IDs into selectedModels.

Then read the matching file in references/step-3-models/ (see routing table below).
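For illustration, a node entry might carry its confirmed model ID like this. Only the field name selectedModels comes from this doc; the surrounding shape and the ID string are placeholders:

```jsonc
// Hypothetical node entry. Never guess the ID from a marketing
// name; copy it from the Confirmed model IDs table in the
// matching Step 3 file (fallback: references/node-catalog.md).
{
  "id": "imageMaker_1",
  "type": "imageMaker",
  "selectedModels": ["<confirmed-model-id-from-step-3-table>"],
  "params": { "aspectRatio": "9:16" }
}
```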
### Step 4: Prompts

Write prompts for nodes that need inputText.

Must read:

- references/step-4-prompts/prompt-prewrite-reasoner.md

Then read the matching prompt best-practices file (see routing table below).
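A hypothetical inputText value for an image node, only to show the level of concreteness the best-practices files ask for; the prompt text and node ID are invented:

```jsonc
// Illustrative only; real prompt guidance is in
// references/step-4-prompts/image-prompt-best-practices.md
{
  "id": "imageMaker_1",
  "inputText": "Studio product shot of a red running sneaker on a white acrylic pedestal, soft key light from the left, shallow depth of field, 9:16"
}
```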
## Routing tables

Step 2 generator files (all under references/step-2-generators/):

- reference-text-generator.md
- reference-image-text-generator.md
- reference-video-text-generator.md
- multimodal-text-generator.md
- storyboard-text-splitter.md
- text-to-image-generator.md
- image-reference-generator.md
- storyboard-image-generator.md
- relight-image-generator.md
- angle-control-image-generator.md
- text-to-video-generator.md
- image-to-video-generator.md
- storyboard-video-generator.md
- storyboard-video-generator-aligned.md
- omni-video-generator.md
- lipsync-video-generator.md
- motion-transfer-video-generator.md
- video-modify-generator.md
- text-to-speech-generator.md
- voice-cloning-generator.md
- music-generator.md

Step 3 model selection (use the Confirmed model IDs table in each file below, with references/node-catalog.md as fallback for nodes without a dedicated Step 3 file):

- textGenerator / scriptSplit: references/step-3-models/text-generator-model-selection.md
- imageMaker: references/step-3-models/text-to-image-model-selection.md
- imageToImage: references/step-3-models/image-to-image-model-selection.md
- videoMaker: references/step-3-models/image-to-video-model-selection.md
- textToVideo: references/step-3-models/text-to-video-model-selection.md
- textToSpeech: references/step-3-models/text-to-speech-model-selection.md
- input blocks: references/step-3-models/input-block-skill.md
- fallback: references/node-catalog.md

Step 4 prompt best practices (read references/step-4-prompts/prompt-prewrite-reasoner.md first):

- textGenerator prompts: references/step-4-prompts/text-prompt-best-practices.md
- image prompts: references/step-4-prompts/image-prompt-best-practices.md
- video prompts: references/step-4-prompts/video-prompt-best-practices.md

## Patterns

- 1 image + N texts → N results. Must use imageInput as the reference image source, not a generated image (see the sketch after this list).
- N images + N texts, 1:1 pairing. Counts must match exactly.
- scriptSplit outputs a text list; downstream generators auto-expand per item — do not duplicate generator nodes.
- In complex scenarios (lipsync ads, multi-branch video), generate a shared structured brief first, then fork to visual and audio branches.
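A rough wiring sketch for the 1-image + N-texts pattern under assumed field names (the nodes/edges shape here is a placeholder; the real schema is in references/node-catalog.md). Note that the reference image comes from an imageInput block, not from a generator, and that only one downstream generator node is declared:

```jsonc
// Hypothetical wiring for 1 image + N texts -> N results.
// scriptSplit's text list auto-expands downstream, so ONE
// imageToImage node is declared, never N copies.
{
  "nodes": [
    { "id": "imageInput_1",   "type": "imageInput" },  // the reference image source
    { "id": "scriptSplit_1",  "type": "scriptSplit" }, // outputs N texts
    { "id": "imageToImage_1", "type": "imageToImage" }
  ],
  "edges": [
    { "from": "imageInput_1",  "to": "imageToImage_1" },
    { "from": "scriptSplit_1", "to": "imageToImage_1" }
  ]
}
```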
Scenario playbooks:

- references/scenarios/scenario-ugc-lipsync-ad.md
- references/scenarios/scenario-storyboard-video.md
- references/scenarios/scenario-ecommerce-multi-image.md

## Output

After completing the four steps, output standard nodes + edges JSON.
Node and edge schema: references/node-catalog.md
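Shape-wise, the final output might look like the sketch below. Treat it as illustrative only: the node types appear in this doc, but every other field is an assumption, and the authoritative schema is references/node-catalog.md.

```jsonc
// Minimal illustrative graph: text -> image -> video.
// Placeholders throughout; confirm model IDs per Step 3.
{
  "nodes": [
    { "id": "textInput_1",  "type": "textInput" },
    { "id": "imageMaker_1", "type": "imageMaker", "selectedModels": ["<confirmed-id>"] },
    { "id": "videoMaker_1", "type": "videoMaker", "selectedModels": ["<confirmed-id>"] }
  ],
  "edges": [
    { "from": "textInput_1",  "to": "imageMaker_1" },
    { "from": "imageMaker_1", "to": "videoMaker_1" }
  ]
}
```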
Save via create_workflow tool if available, otherwise via the Workflow PATCH API (see references/api-workflows.md §10).