🫧 Video Edit — Pro Pack on RunComfy

v0.1.2

by Kalvin (@kalvinrv)


This skill is the canonical video edit entry point for the RunComfy Model API: give it a source video URL and an edit instruction, and it returns the edited video. Video edit on RunComfy means transforming an existing clip — restyle, background swap, outfit swap, motion transfer, or color grade — without re-shooting.

What "video edit" means here

Video edit is the task of taking a source video and producing a transformed video that preserves identity, motion, or framing where you want, while changing what you specify. Video edit is distinct from text-to-video (no input clip) and from image-to-video (input is a still). Common video edit operations include:

  • Restyle — change the look, lighting, or atmosphere while keeping the subject and motion.
  • Background swap — replace the background of a talking-head or product video while preserving foreground identity.
  • Outfit swap — change wardrobe on the subject while keeping face, pose, and motion stable.
  • Motion transfer — transfer motion from a reference clip onto a target character.
  • Color grade — apply cinematic color, film grain, or commercial polish to an existing clip.
  • Packaging swap — replace a product's packaging design using a reference image, preserving the camera motion.

This skill picks the right video edit endpoint for the user's intent and calls runcomfy run <model>/<edit-endpoint> with the matching schema.

When to use video edit on RunComfy

Pick video edit on RunComfy whenever:

  • You have an existing video and want to change something about it.
  • You need identity stability — the subject, brand, or product from the input clip must survive into the edited video.
  • You want fast iteration — RunComfy hosts the GPU; you don't deploy or rent hardware.
  • You're producing edits at scale — multi-language dubs, A/B variants, batch jobs across SKUs.

If the user said "video edit", "edit video", "restyle this video", "swap the background", "change the outfit", "transfer this motion", "color grade this clip", or showed a video and asked to transform it — route here.

Video edit routes

| User intent | Video edit model | Why |
| --- | --- | --- |
| Default — restyle, background swap, color grade, packaging swap | wan-ai/wan-2-7/edit-video | Most versatile; identity + motion preservation, up to 1080p output |
| Motion transfer (transfer motion from a reference clip) | kling/kling-2-6/motion-control-pro | Designed for motion mapping with identity hold |
| Lightweight outfit swap / atmospheric restyle | decart/lucy-edit/restyle | Fastest pass for localized style changes; 720p |

The agent reads this table, classifies the user's video edit intent, and picks the matching endpoint.
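As a rough illustration, the routing can be sketched as a keyword classifier. `route_intent` is a hypothetical helper, not part of the CLI; the agent's real classification is semantic, and this sketch only mirrors the table's three buckets:

```shell
# Hypothetical helper sketching the routing table above; the real
# classification is done by the agent, not by keyword matching.
route_intent() {
  text=$(printf '%s' "$1" | tr '[:upper:]' '[:lower:]')
  case "$text" in
    *motion*) echo "kling/kling-2-6/motion-control-pro" ;;
    *outfit*|*relight*|*atmospher*) echo "decart/lucy-edit/restyle" ;;
    *) echo "wan-ai/wan-2-7/edit-video" ;;
  esac
}

route_intent "Swap the background to a modern office"
# wan-ai/wan-2-7/edit-video
```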

Prerequisites

  1. RunComfy CLI — npm i -g @runcomfy/cli
  2. RunComfy account — runcomfy login.
  3. CI / containers — set RUNCOMFY_TOKEN=<token>.
  4. A source video URL — formats and limits depend on the chosen route.
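Taken together, the setup steps look like this; `<token>` is a placeholder for your actual API token:

```shell
# One-time setup on a workstation:
npm i -g @runcomfy/cli
runcomfy login

# CI / containers: skip the interactive login and export the token instead.
export RUNCOMFY_TOKEN=<token>
```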

Default video edit — Wan 2.7 Edit-Video

The default endpoint. Use it for any general video edit task: restyle a talking-head video, swap a product background, replace a packaging design with a reference image, or apply a cinematic color grade. Outputs up to 1080p.

Schema

| Field | Type | Required | Default | Notes |
| --- | --- | --- | --- | --- |
| prompt | string | yes | — | Edit instruction. Lead with preservation goals, then state the change. |
| video | string | yes | — | Source video URL. MP4/MOV, 2–10 s, ≤ 100 MB. |
| reference_image | string | no | — | Optional reference for design transfer (e.g. packaging swap). |
| resolution | enum | no | (input) | 720p or 1080p output. |
| aspect_ratio | enum | no | (input) | W:H. Defaults to the source video's aspect. |
| duration | int | no | 0 | 0 = match input; 2–10 truncates from the start. |
| audio_setting | enum | no | auto | auto regenerates audio; origin preserves the source audio. |
| seed | int | no | — | Reproducibility for variants. |

Invoke

Background swap, identity preserved, audio kept:

```shell
runcomfy run wan-ai/wan-2-7/edit-video \
  --input '{
    "prompt": "Preserve the speaker'\''s face, pose, and lip movement; change the background to a modern office with neutral lighting.",
    "video": "https://.../speaker.mp4",
    "audio_setting": "origin"
  }' \
  --output-dir <absolute/path>
```

Packaging swap with a reference image:

```shell
runcomfy run wan-ai/wan-2-7/edit-video \
  --input '{
    "prompt": "Maintain the original framing and hand movement; replace the packaging design using the reference image.",
    "video": "https://.../hand-holding-package.mp4",
    "reference_image": "https://.../new-packaging.png",
    "audio_setting": "origin"
  }' \
  --output-dir <absolute/path>
```
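A third variant, sketched from the schema above, pins the seed and resolution for reproducible grading passes; the URL and prompt are illustrative placeholders:

```shell
runcomfy run wan-ai/wan-2-7/edit-video \
  --input '{
    "prompt": "Preserve the product and camera motion; apply a warm cinematic color grade with light film grain.",
    "video": "https://.../product-shot.mp4",
    "resolution": "1080p",
    "seed": 42
  }' \
  --output-dir <absolute/path>
```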

Motion-transfer video edit — Kling 2.6 Pro Motion Control

Use when the edit transfers motion from a reference clip onto a target character. This isn't a restyle — it's motion mapping with identity hold.

| Field | Type | Required | Notes |
| --- | --- | --- | --- |
| prompt | string | yes | Describe the target motion / style for the output. |
| image | string | yes (image orientation) | Reference for character / background consistency. |
| video | string | yes | Motion-reference clip. 10–30 s depending on orientation. |
| keep_original_sound | bool | no | Preserve audio from the reference video. |
| character_orientation | enum | yes | image (max 10 s output) or video (max 30 s). |
```shell
runcomfy run kling/kling-2-6/motion-control-pro \
  --input '{
    "prompt": "A young american woman dancing",
    "image": "https://.../target-character.jpg",
    "video": "https://.../motion-reference-dance.mp4",
    "character_orientation": "image",
    "keep_original_sound": true
  }' \
  --output-dir <absolute/path>
```
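For outputs longer than 10 s you would switch `character_orientation` to `video` (30 s cap per the schema above). Whether the `image` reference is still required in that mode is an assumption in this sketch, so verify against the endpoint docs; URLs are placeholders:

```shell
runcomfy run kling/kling-2-6/motion-control-pro \
  --input '{
    "prompt": "Match the reference choreography; keep the character consistent.",
    "image": "https://.../target-character.jpg",
    "video": "https://.../motion-reference-dance.mp4",
    "character_orientation": "video",
    "keep_original_sound": false
  }' \
  --output-dir <absolute/path>
```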

Lightweight video edit — Lucy Edit Restyle

Use when the edit is a localized style change — outfit swap, scene relight, atmospheric restyle — and identity preservation is critical. Faster and cheaper than Wan 2.7 Edit-Video; capped at 720p.

| Field | Type | Required | Default | Notes |
| --- | --- | --- | --- | --- |
| prompt | string | yes | — | Natural-language edit instruction. |
| video_url | string | yes | — | MP4/MOV/WEBM/GIF source. |
| resolution | enum | no | 720p | 720p only at this tier. |

Outfit swap:

```shell
runcomfy run decart/lucy-edit/restyle \
  --input '{
    "prompt": "Change outfit to professional business attire; preserve face and motion.",
    "video_url": "https://.../subject-walking.mp4"
  }' \
  --output-dir <absolute/path>
```

Atmospheric restyle:

```shell
runcomfy run decart/lucy-edit/restyle \
  --input '{
    "prompt": "Make lighting warm and golden hour; preserve face, pose, and motion.",
    "video_url": "https://.../subject-portrait.mp4"
  }' \
  --output-dir <absolute/path>
```

Prompting video edit — what works

Video edit prompts behave differently from text-to-video prompts. The source clip already fixes most of the look — your prompt should drive the change, not redescribe the video.

  • Lead with preservation goals. "Preserve [face / pose / motion / framing / lip movement]; [then state the change]". Tell the model what NOT to change.
  • One edit direction per call. Compound edits drift on motion. Pick one bucket — restyle OR background OR outfit OR color — per call.
  • Use reference_image only when the edit needs an exact visual (packaging swap, costume swap matching a target). Don't pass references for a general restyle.
  • Set audio_setting: "origin" for talking-head edits where you don't want the soundtrack regenerated.
  • Localized phrasing wins on the lightweight route: "outfit", "lighting", "background" — pick one bucket.

Video edit FAQ

What's the max duration of a video edit clip? Wan 2.7 Edit-Video: 2–10 s. Kling Motion Control: 10 s (image orientation) or 30 s (video orientation). Lucy Edit Restyle: matches the input.

What video formats are accepted? MP4 and MOV (Lucy also takes WEBM and GIF). The source input must be ≤ 100 MB on Wan 2.7.

Does video edit preserve face identity? Yes — all three routes are designed for identity preservation. State the goal explicitly: "preserve face and motion".

Can video edit keep the original audio? Yes — set audio_setting: "origin" on Wan 2.7 Edit-Video, or keep_original_sound: true on Kling. Lucy preserves audio by default.

What's the highest-resolution video edit available here? 1080p on Wan 2.7 Edit-Video. Kling and Lucy cap at 720p.

Video edit vs text-to-video on RunComfy? Video edit transforms an existing clip (look largely fixed by source). Text-to-video starts from a prompt only (look generated). Use video edit when you have a clip; use text-to-video for novel content.

Can I run multiple video edits in one call? No. Each call applies one edit direction; for compound edits, chain calls and stitch.
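Chaining might look like the sketch below. It assumes you re-host the first pass's downloaded file at a URL the API can fetch, since the endpoints take URLs rather than local paths; all URLs and prompts are placeholders:

```shell
# Pass 1: outfit swap only (lightweight route).
runcomfy run decart/lucy-edit/restyle \
  --input '{
    "prompt": "Change outfit to a red raincoat; preserve face and motion.",
    "video_url": "https://.../subject-walking.mp4"
  }' \
  --output-dir ./pass1

# Re-host the file from ./pass1 somewhere fetchable, then:
# Pass 2: color grade the result (default route).
runcomfy run wan-ai/wan-2-7/edit-video \
  --input '{
    "prompt": "Preserve the subject and motion; apply a cool cinematic grade.",
    "video": "https://<your-host>/pass1-output.mp4"
  }' \
  --output-dir ./pass2
```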

Limitations

  • Each route inherits its model's limits. Wan 2.7 Edit-Video: 2–10 s, 1080p ceiling. Kling Motion Control: 10 s or 30 s by orientation. Lucy Edit Restyle: 720p, no aspect control.
  • No multi-route blending. This skill picks one model per call. If you need outfit swap + motion transfer in the same video, that's two calls plus a stitch.
  • Brand-specific overrides — if the user named a specific model variant, route to that brand skill (wan-2-7) instead of forcing it through this router.

Exit codes

| code | meaning |
| --- | --- |
| 0 | video edit succeeded |
| 64 | bad CLI args |
| 65 | bad input JSON / schema mismatch |
| 69 | upstream 5xx |
| 75 | retryable: timeout / 429 |
| 77 | not signed in or token rejected |
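Exit code 75 is the only retryable one, which suggests a thin wrapper around the CLI. `run_with_retry` is a hypothetical helper for illustration, not part of the CLI:

```shell
# Retry only on exit code 75 (timeout / 429); fail fast on anything else.
run_with_retry() {
  attempts=0
  while :; do
    "$@" && status=0 || status=$?
    [ "$status" -ne 75 ] && return "$status"
    attempts=$((attempts + 1))
    [ "$attempts" -ge 3 ] && return 75   # give up after 3 tries
    sleep "$attempts"                    # simple linear backoff
  done
}

# Example: run_with_retry runcomfy run wan-ai/wan-2-7/edit-video \
#   --input "$JSON" --output-dir ./out
```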

Full reference: docs.runcomfy.com/cli/troubleshooting.

How it works

The skill picks one of three video edit endpoints (Wan 2.7 Edit-Video, Kling Motion Control, or Lucy Edit Restyle) based on user intent, and invokes runcomfy run <endpoint> with the matching JSON body. The CLI POSTs to the RunComfy Model API, polls the video edit request status every 2 seconds, and downloads the resulting video from the *.runcomfy.net / *.runcomfy.com URL into --output-dir. Ctrl-C cancels the in-flight video edit request.
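The poll step described above can be sketched as follows. This is an illustration of the pattern, not the CLI's actual source, and the status convention (exit 0 done, exit 2 still running, anything else failed) is invented for the sketch:

```shell
# Illustrative poll loop: exit 0 = done, exit 2 = still running,
# anything else = failed. Not the CLI's real implementation.
poll_until_done() {
  check="$1"          # command to poll
  interval="${2:-2}"  # seconds between polls (the CLI uses 2)
  while :; do
    "$check" && s=0 || s=$?
    case "$s" in
      0) return 0 ;;          # finished: the download would start here
      2) sleep "$interval" ;; # still running: poll again
      *) return "$s" ;;       # failed
    esac
  done
}
```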

Security & Privacy

  • Token storage: runcomfy login writes the API token to ~/.config/runcomfy/token.json with mode 0600. Set RUNCOMFY_TOKEN env var in CI.
  • Input boundary: the video edit prompt is passed as JSON via --input. The CLI does NOT shell-expand. No shell-injection surface.
  • Third-party content: video / image URLs are fetched by the RunComfy server. Treat external URLs as untrusted — image-based prompt injection is a known risk for any video edit model.
  • Outbound endpoints: only model-api.runcomfy.net and *.runcomfy.net / *.runcomfy.com. No telemetry.
  • Generated-file size cap: the CLI aborts any video edit download > 2 GiB.

Version tags

latest: vk973yca5md7kz4j2kdfefdq9cn85rcky

Runtime requirements

  • Bins: runcomfy
  • Env: RUNCOMFY_TOKEN
  • Config: ~/.config/runcomfy