Install

```
openclaw skills install video-inpainting
```

Region edits across video frames on RunComfy via the `runcomfy` CLI: remove an object that appears across many frames, clean up wires or watermarks, or replace a region with motion that matches the rest of the clip. Routes across Wan 2-7 edit-video (default: prompt-driven region edits with spatial language), Lucy Edit Restyle (identity-stable, region-aware restyle), and Seedream 4-0 edit-sequential (when treating the clip as a frame stack). Picks the route based on whether the change is prose-driven, identity-locked, or needs frame-by-frame still inpainting chained into a video. Triggers on "video inpaint", "video inpainting", "remove from video", "mask region in video", "clean up video", "remove object from clip", "video patch", "frame-by-frame edit", "remove watermark from video", "remove passing person", or any explicit ask to edit a region across video frames.

This skill routes across the prompt-driven video edit endpoints in the RunComfy catalog and gives the agent a clear default for each intent.
runcomfy.com · Wan 2-7 edit-video · CLI docs
```
# 1. Install (see runcomfy-cli skill for details)
npm i -g @runcomfy/cli        # or: npx -y @runcomfy/cli --version

# 2. Sign in
runcomfy login                # or in CI: export RUNCOMFY_TOKEN=<token>

# 3. Edit a video (closest CLI-reachable approach)
runcomfy run wan-ai/wan-2-7/edit-video \
  --input '{"video_url": "...", "prompt": "..."}' \
  --output-dir ./out
```
CLI deep dive: runcomfy-cli skill.
Routes via prompt-driven region edits: the model resolves the targeted region from spatial language across all frames.
Wan 2-7 Edit-Video · `wan-ai/wan-2-7/edit-video` (default)
Wan 2-7's video edit endpoint: drive frame-by-frame edits via a prompt plus the source video. Pick for prompt-driven region intent without an explicit mask ("remove the watermark in the bottom-right", "replace the sky with a sunset"). Avoid for precise pixel-level region targeting; use a ComfyUI workflow instead.

Lucy Edit Restyle · `decart/lucy-edit/restyle`
Identity-stable video restyle that handles region-aware edits. Pick for a lightweight outfit or object swap that needs to track across frames. Avoid for surgical mask-driven inpainting; use a ComfyUI workflow.

Seedream 4-0 Edit-Sequential · `bytedance/seedream-4-0/edit-sequential`
Sequential still edits: feed a sequence of frames as inputs and apply the same edit instruction to each, useful when treating the video as a frame stack. Pick for short, low-frame-rate sequences where each frame can be edited independently and a separate tool re-encodes to video. Avoid for long clips and motion-coherent fills; temporal consistency degrades.
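The routing above can be sketched as a keyword heuristic. This is an illustrative sketch of the decision, not the skill's actual router; the keyword patterns are assumptions chosen for the example:

```shell
#!/bin/sh
# Map a user intent string to a default endpoint. Illustrative only:
# the real skill weighs more context than keywords.
route_intent() {
  case "$1" in
    *restyle*|*outfit*|*identity*)
      echo "decart/lucy-edit/restyle" ;;               # identity-locked restyle
    *sequential*|*frame-by-frame*|*"frame stack"*)
      echo "bytedance/seedream-4-0/edit-sequential" ;; # frame-stack treatment
    *)
      echo "wan-ai/wan-2-7/edit-video" ;;              # default: prompt-driven region edit
  esac
}

route_intent "remove the watermark in the bottom-right"   # wan-ai/wan-2-7/edit-video
```

Anything that does not clearly ask for an identity-locked restyle or frame-stack handling falls through to the Wan default, matching the routing rules above.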
Model: `wan-ai/wan-2-7/edit-video`
Catalog: Wan 2-7 edit-video

```
runcomfy run wan-ai/wan-2-7/edit-video \
  --input '{
    "video_url": "https://your-cdn.example/source.mp4",
    "prompt": "Remove the watermark in the bottom-right corner across all frames. Preserve all other content exactly. Match background where the watermark was."
  }' \
  --output-dir ./out
```
Target the region with spatial language: "bottom-right corner", "the cables overhead", "the second person from the left". Always include "Preserve all other content exactly"; without it, Wan may restyle frames inadvertently. For broader video edits, see the video-edit skill.
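Long prompts with embedded quotes are easy to mangle inside a single-quoted `--input` argument. One way around that, sketched under the assumption of a POSIX shell with `python3` available (the file name is illustrative): write the payload with a quoted heredoc, validate it locally, then pass it through.

```shell
#!/bin/sh
# Write the --input payload via a quoted heredoc so the prompt needs no
# shell escaping; input.json is an illustrative file name.
cat > input.json <<'EOF'
{
  "video_url": "https://your-cdn.example/source.mp4",
  "prompt": "Remove the watermark in the bottom-right corner across all frames. Preserve all other content exactly. Match background where the watermark was."
}
EOF

# Catch JSON typos locally before the CLI rejects them (exit code 65).
python3 -m json.tool input.json >/dev/null || exit 65

# runcomfy run wan-ai/wan-2-7/edit-video --input "$(cat input.json)" --output-dir ./out
```

The quoted `<<'EOF'` delimiter disables expansion inside the heredoc, so prompts containing `$`, backticks, or double quotes survive verbatim.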
The endpoints above are prompt-driven: they resolve the target region from spatial language. For pixel-precise mask propagation with SAM2 segmentation tracking plus temporal-aware inpaint backfill, RunComfy hosts dedicated ComfyUI workflows:
| Need | Workflow class |
|---|---|
| LTX 2-3 video inpaint (targeted frame editing) | ltx-2-3-inpaint-in-comfyui-targeted-video-frame-editing |
| Flux inpainting (still) โ chain frame-by-frame | comfyui-flux-inpainting-workflow |
| Flux ControlNet inpainting | flux-controlnet-inpainting-image-repair |
| Wan 2-2 video edit (broader video edit including inpaint) | search comfyui-workflows for "wan 2-2 edit" |
These are GUI workflows, not CLI endpoints. The CLI can't reach them; open them in the RunComfy ComfyUI cloud for proper mask propagation and temporal consistency.
Example prompt: "remove the person walking in the background, fill with matching environment". Related skills: image-inpainting, video-outpainting, video-edit, and the wan-models collection.

Exit codes:

| code | meaning |
|---|---|
| 0 | success |
| 64 | bad CLI args |
| 65 | bad input JSON / schema mismatch |
| 69 | upstream 5xx |
| 75 | retryable: timeout / 429 |
| 77 | not signed in or token rejected |
Full reference: docs.runcomfy.com/cli/troubleshooting.
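The exit codes above can drive a simple retry policy: retry only on 75, fail fast on everything else. A minimal sketch assuming a POSIX shell; the attempt cap, backoff, and `RUNCOMFY_RETRY_DELAY` variable are this example's inventions, not CLI features:

```shell
#!/bin/sh
# Re-run a command while it exits 75 (retryable: timeout / 429).
run_with_retry() {
  attempts=0
  while :; do
    "$@" && return 0
    code=$?
    [ "$code" -ne 75 ] && return "$code"      # non-retryable: fail fast
    attempts=$((attempts + 1))
    [ "$attempts" -ge 3 ] && return "$code"   # give up after 3 retryable failures
    sleep "${RUNCOMFY_RETRY_DELAY:-5}"        # illustrative fixed backoff
  done
}

# Example:
# run_with_retry runcomfy run wan-ai/wan-2-7/edit-video \
#   --input '{"video_url": "...", "prompt": "..."}' --output-dir ./out
```

Codes 64, 65, 69, and 77 indicate problems a retry won't fix (bad arguments, bad input, upstream failure, or auth), so the wrapper surfaces them immediately.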
The skill picks Wan 2-7 Edit-Video (the default for prompt-driven region edits) or one of the alternatives, based on whether the user needs an identity-locked restyle or frame-stack treatment. The CLI POSTs to the Model API, polls request status, and downloads the result into `--output-dir`.
- Install via `npm i -g @runcomfy/cli` or `npx -y @runcomfy/cli`. Agents must not pipe an arbitrary remote install script into a shell on the user's behalf.
- `runcomfy login` writes the API token to `~/.config/runcomfy/token.json` with mode 0600. Set the `RUNCOMFY_TOKEN` env var in CI / containers.
- Prompts pass as JSON via `--input`; the CLI does not shell-expand prompt content, so there is no shell-injection surface.
- Network egress: `model-api.runcomfy.net` and `*.runcomfy.net` / `*.runcomfy.com`. No telemetry.
- Tool permissions: `Bash(runcomfy *)` only.
- See also: wan-models collection.
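Before reusing a stored token, an agent can sanity-check that the file matches the 0600 mode described above. A minimal sketch; `check_token_perms` is a hypothetical helper, and it uses GNU `stat -c` (Linux) — on macOS the equivalent is `stat -f '%Lp'`:

```shell
#!/bin/sh
# Verify the token file exists and is readable only by its owner (mode 600).
check_token_perms() {
  f="${1:-$HOME/.config/runcomfy/token.json}"
  [ -f "$f" ] || { echo "no token file: $f" >&2; return 1; }
  mode=$(stat -c '%a' "$f")                   # GNU stat: octal mode, e.g. 600
  [ "$mode" = "600" ] || { echo "token file mode is $mode, expected 600" >&2; return 1; }
}
```

A nonzero return signals that the agent should re-run `runcomfy login` (which rewrites the file with the correct mode) rather than use a possibly exposed token.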