Install

openclaw skills install controlnet-pose

Pose-conditioned generation on RunComfy via the `runcomfy` CLI. Routes across Kling 2-6 Motion Control Pro / Standard (transfer the motion / blocking of a reference video onto a target character), community Wan 2-2 Animate (audio-driven character animation with pose conditioning), and Z-Image Turbo ControlNet LoRA (pose-conditioned image generation from an OpenPose / DWPose / canny / depth control image). Picks the right route based on video vs still and stylized vs photoreal. Triggers on "controlnet", "control net", "pose control", "openpose", "DWPose", "transfer pose", "motion control", "pose driven", "character pose", "depth control", "canny edge", "use this pose", or any explicit ask to condition generation on a pose / skeleton / motion / depth / canny reference.

Condition image or video generation on a pose, skeleton, or motion reference. This skill routes across the pose-driven Model API endpoints reachable today and points the agent at ComfyUI workflows for richer ControlNet rigs.
runcomfy.com · Kling motion control · CLI docs
# 1. Install (see runcomfy-cli skill for details)
npm i -g @runcomfy/cli # or: npx -y @runcomfy/cli --version
# 2. Sign in
runcomfy login # or in CI: export RUNCOMFY_TOKEN=<token>
# 3. Pose-conditioned generate
runcomfy run <vendor>/<model> \
--input '{"reference_video_url": "...", "character_image_url": "..."}' \
--output-dir ./out
CLI deep dive: runcomfy-cli skill.
Routes split by video pose-transfer vs image pose-conditioned generation.
Kling 2-6 Motion Control Pro → kling/kling-2-6/motion-control-pro (default for video pose transfer)
Takes a reference performance video + a target character image, produces video of the target performing the reference motion / pose. Pick for: transferring a source video's motion / blocking onto a new character; dance choreography re-shoots; sports motion onto a stylized character. Avoid for: still-image pose conditioning; use Z-Image ControlNet LoRA instead.
Kling 2-6 Motion Control Standard → kling/kling-2-6/motion-control-standard
Cheaper Kling Motion Control tier. Pick for: drafts and iteration on motion-control compositions. Avoid for: final delivery; use Pro instead.
Wan 2-2 Animate (video-to-video) → community/wan-2-2-animate/video-to-video
Community-published variant of Wan 2-2. Audio-driven character animation that also accepts pose-style conditioning. Pick for: stylized character animation, mascot work. Avoid for: photoreal subjects; use Kling Motion Control instead.
Z-Image Turbo ControlNet LoRA → tongyi-mai/z-image/turbo/controlnet/lora
Z-Image Turbo with a ControlNet LoRA: feed a control image (pose skeleton, depth map, canny) and a prompt, get a generation conditioned on that control. Pick for: pose-locked image generation, a character in a specific stance, depth-locked composition. Avoid for: complex multi-condition stacks (e.g. pose + depth + reference); those need a ComfyUI workflow.
Model: kling/kling-2-6/motion-control-pro (or /motion-control-standard)
Catalog: motion-control-pro · kling collection
runcomfy run kling/kling-2-6/motion-control-pro \
--input '{
"reference_video_url": "https://your-cdn.example/source-performance.mp4",
"character_image_url": "https://your-cdn.example/target-character.png"
}' \
--output-dir ./out
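For drafts, the same payload can be pointed at the standard tier. This assumes both tiers accept the same inputs, which the "(or /motion-control-standard)" note above implies but the catalog page should confirm.
runcomfy run kling/kling-2-6/motion-control-standard \
  --input '{
    "reference_video_url": "https://your-cdn.example/source-performance.mp4",
    "character_image_url": "https://your-cdn.example/target-character.png"
  }' \
  --output-dir ./out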
Model: tongyi-mai/z-image/turbo/controlnet/lora
Catalog: Z-Image ControlNet LoRA
runcomfy run tongyi-mai/z-image/turbo/controlnet/lora \
--input '{
"prompt": "A samurai in battle stance, traditional armor, cherry-blossom forest background, cinematic 35mm",
"control_image_url": "https://your-cdn.example/openpose-skeleton.png"
}' \
--output-dir ./out
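Model: community/wan-2-2-animate/video-to-video
No example ships with this route above; the sketch below mirrors the Kling input shape, and every field name in it is an assumption. Check the model's catalog page for the real schema before using it.
runcomfy run community/wan-2-2-animate/video-to-video \
  --input '{
    "reference_video_url": "https://your-cdn.example/driver-performance.mp4",
    "character_image_url": "https://your-cdn.example/mascot-character.png",
    "audio_url": "https://your-cdn.example/voice-line.mp3"
  }' \
  --output-dir ./out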
The routes above cover single-condition pose / motion / depth / canny. For multi-condition stacks (e.g. pose + depth + reference image), RunComfy hosts dedicated ComfyUI workflows on runcomfy.com/comfyui-workflows:
| Need | Workflow class |
|---|---|
| FLUX + multi-condition ControlNet (depth + canny + pose) | comfyui-flux-controlnet-depth-and-canny, flux-dev-controlnet-union-pro-multi-condition |
| Pose-driven motion video with VACE | wan-2-2-vace-in-comfyui-pose-driven-motion-video-workflow |
| Pose-control lipsync (pose + audio together) | pose-control-lipsync-with-wan2-2-s2v-in-comfyui-audio2video |
| Wan 2-2 Animate v2 with pose driving | wan-2-2-animate-v2-in-comfyui-pose-driven-animation-workflow |
| OpenPose motion alignment | one-to-all-animation-in-comfyui-openpose-motion-alignment |
| Pose-based character animation (Scail) | scail-model-in-comfyui-pose-based-character-animation-workflow |
These are GUI workflows, not CLI endpoints. The CLI can't reach them; open them in the RunComfy ComfyUI cloud.
Exit codes

| code | meaning |
|---|---|
| 0 | success |
| 64 | bad CLI args |
| 65 | bad input JSON / schema mismatch |
| 69 | upstream 5xx |
| 75 | retryable: timeout / 429 |
| 77 | not signed in or token rejected |
Full reference: docs.runcomfy.com/cli/troubleshooting.
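Only exit code 75 is worth retrying automatically. A minimal retry loop, assuming plain bash and an input payload saved as input.json (both illustrative, not part of the CLI):
# Retry up to 3 times, but only when the CLI signals a retryable failure (exit 75).
attempt=0
until runcomfy run kling/kling-2-6/motion-control-pro \
    --input "$(cat input.json)" --output-dir ./out; do
  code=$?
  if [ "$code" -ne 75 ] || [ "$attempt" -ge 3 ]; then
    exit "$code"              # non-retryable error, or retries exhausted
  fi
  attempt=$((attempt + 1))
  sleep $((attempt * 30))     # simple linear backoff before the next attempt
done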
The skill classifies user intent (video motion transfer vs image pose-conditioned generation) and picks one of the routes above. The CLI POSTs to the Model API, polls request status, and downloads the result into `--output-dir`.
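A rough sketch of that routing decision, written as shell purely for illustration (the skill's actual classifier is prompt logic, not code; input.json is again a stand-in for your payload):
# Illustrative only: map (reference kind, look) onto the endpoints listed above.
pick_route() {
  case "$1:$2" in                      # $1 = video|image, $2 = photoreal|stylized
    video:photoreal) echo "kling/kling-2-6/motion-control-pro" ;;
    video:stylized)  echo "community/wan-2-2-animate/video-to-video" ;;
    image:*)         echo "tongyi-mai/z-image/turbo/controlnet/lora" ;;
  esac
}
runcomfy run "$(pick_route video photoreal)" --input "$(cat input.json)" --output-dir ./out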
Install: `npm i -g @runcomfy/cli` or `npx -y @runcomfy/cli`. Agents must not pipe an arbitrary remote install script into a shell on the user's behalf.
Credentials: `runcomfy login` writes the API token to `~/.config/runcomfy/token.json` with mode 0600. Set the `RUNCOMFY_TOKEN` env var in CI / containers.
Inputs: prompts and URLs are passed as a JSON object via `--input`. The CLI does not shell-expand prompt content, so there is no shell-injection surface.
Network: `model-api.runcomfy.net` and `*.runcomfy.net` / `*.runcomfy.com`. No telemetry.
Permissions: `Bash(runcomfy *)` only.
Related: kling collection → motion control + identity-stable video models; /feature/character-swap → Wan 2-2 Animate.