Install
openclaw skills install ai-avatar-video-runcomfy

AI avatar video on RunComfy. This RunComfy avatar video skill creates talking-head and lip-sync videos via the `runcomfy` CLI. It routes across ByteDance OmniHuman (RunComfy's lip-sync feature pick: audio-driven full-body avatar from one portrait + audio file), Wan-AI Wan 2-7 (open-weights audio-driven lip-sync via `audio_url` on a portrait), HappyHorse 1.0 (Arena #1 t2v / i2v with in-pass audio from the prompt; no audio file needed), Seedance v2 Pro (multi-modal cinematic with reference audio + reference subject), and community Wan 2-2 Animate (stylized character animation). The skill picks the right model for the intent (UGC voiceover, virtual presenter, dubbed product demo, lip-synced character, dialog scene) and ships each model's documented prompting patterns plus the minimal `runcomfy run` invoke. Triggers on "talking head", "lip sync", "avatar video", "make X speak", "audio to video", "audio driven avatar", "virtual presenter", "AI spokesperson", "dubbed video", "UGC avatar", "HeyGen alternative", "Synthesia alternative", "digital human", "make this portrait talk", "video from voiceover", or any explicit ask to put words in a face with RunComfy.
runcomfy.com · Lip-sync feature · CLI docs
# 1. Install (see runcomfy-cli skill for details)
npm i -g @runcomfy/cli # or: npx -y @runcomfy/cli --version
# 2. Sign in
runcomfy login # or in CI: export RUNCOMFY_TOKEN=<token>
# 3. Generate an avatar video
runcomfy run <vendor>/<model>/<endpoint> \
--input '{"prompt": "...", "audio_url": "https://...", "image_url": "https://..."}' \
--output-dir ./out
CLI deep dive: runcomfy-cli skill.
Listed newest first. The agent classifies user intent (pre-recorded audio file or just a script? photoreal portrait or stylized character? single shot or cinematic composition?) and picks one route below; a shell routing sketch follows the list.
OmniHuman: bytedance/omnihuman/api (default)
ByteDance audio-driven full-body avatar. Feed one portrait + one audio file, get back a video where the subject speaks / sings / gestures naturally. Listed on RunComfy's /feature/lip-sync as the curated default. Pick for: UGC voiceover, virtual presenter, dubbed product demo, multi-language clips from the same portrait. Avoid for: no audio file available (need to generate speech from a script); use HappyHorse 1.0.
HappyHorse 1.0: happyhorse/happyhorse-1-0/text-to-video (t2v) · happyhorse/happyhorse-1-0/image-to-video (i2v)
Arena #1 t2v / i2v with in-pass audio generated from the prompt. No external audio file required; quote the spoken line inside the prompt. Pick for: written script with no audio file, "write a script, get a video", concept clips, i2v talking-head from an existing portrait. Avoid for: precise lip-sync to a specific MP3; audio is regenerated each call, not locked.
Seedance v2 Pro: bytedance/seedance-v2/pro
ByteDance multi-modal flagship: up to 9 reference images, 3 reference videos, and 3 reference audio tracks composed in one pass with cinematic motion / lens / lighting control. Pick for: cinematic monologue with reference subject + reference audio + reference scene; ad creative. Avoid for: simple "portrait + audio" jobs; overpowered and slower. Use OmniHuman.
Wan 2-7 with audio_url: wan-ai/wan-2-7/text-to-video
Open-weights with an audio_url field: the prompt describes the scene, the audio file drives the mouth. Pick for: full scene control (not just a portrait), a specific voiceover MP3, an open-weights pipeline. Avoid for: the simplest portrait-talks job; use OmniHuman.
Wan 2-2 Animate: community/wan-2-2-animate/api
Community-published variant on the Wan 2-2 base. Audio-driven full-body animation of stylized characters (illustration, anime, mascot). Pick for: stylized / illustrated character + audio (not a photoreal portrait). Avoid for: photoreal subjects; use OmniHuman or Wan 2-7.
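A minimal routing sketch of the decision above, in shell. The intent flags (has_audio, stylized, cinematic, full_scene) are hypothetical placeholders for the agent's classification; the endpoint ids are the ones listed in the routes.
# Hypothetical intent flags -> model endpoint (mirrors the routes above).
route_model() {
  local has_audio="$1" stylized="$2" cinematic="$3" full_scene="$4"
  if [ "$has_audio" = "no" ]; then
    echo "happyhorse/happyhorse-1-0/text-to-video"   # script only: audio generated in-pass
  elif [ "$cinematic" = "yes" ]; then
    echo "bytedance/seedance-v2/pro"                 # reference subject + audio + scene
  elif [ "$stylized" = "yes" ]; then
    echo "community/wan-2-2-animate/api"             # illustrated / anime / mascot character
  elif [ "$full_scene" = "yes" ]; then
    echo "wan-ai/wan-2-7/text-to-video"              # scene from prompt, mouth locked to audio_url
  else
    echo "bytedance/omnihuman/api"                   # default: portrait + audio file
  fi
}
# Example: route_model yes no no no   ->   bytedance/omnihuman/api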
Model: bytedance/omnihuman/api
Catalog: omnihuman · /feature/lip-sync
ByteDance OmniHuman is the strongest single-shot path: feed it one portrait image + one audio file, get back a video where the subject speaks / sings / gestures naturally to the audio. No prompt required beyond the inputs.
runcomfy run bytedance/omnihuman/api \
--input '{
"image_url": "https://your-cdn.example/presenter.jpg",
"audio_url": "https://your-cdn.example/voiceover.mp3"
}' \
--output-dir ./out
Wan 2-7 with audio_url (open-weights lip-sync)
Model: wan-ai/wan-2-7/text-to-video
Catalog: wan-2-7
When you want full control over the scene (not just a portrait) and have a specific audio track. Wan 2-7 accepts an audio_url field: the model generates the scene from the prompt and locks the subject's mouth to the audio.
runcomfy run wan-ai/wan-2-7/text-to-video \
--input '{
"prompt": "Studio portrait of a woman in her 30s, confident expression, soft window light, neutral gray background.",
"audio_url": "https://your-cdn.example/voiceover.mp3",
"duration": 8
}' \
--output-dir ./out
Model: community/wan-2-2-animate/api
Catalog: wan-2-2-animate · /feature/character-swap
Pick this when the subject is a stylized character (illustration, anime, mascot) rather than a photoreal portrait, and you want full-body motion synchronized to audio. Community-published variant on the Wan 2-2 base.
runcomfy run community/wan-2-2-animate/api \
--input '{
"image_url": "https://your-cdn.example/character.png",
"audio_url": "https://your-cdn.example/voiceover.mp3"
}' \
--output-dir ./out
Schema details on the model page.
Model: happyhorse/happyhorse-1-0/text-to-video (t2v) or happyhorse/happyhorse-1-0/image-to-video (i2v)
Catalog: happyhorse-1-0
Pick HappyHorse when the user doesn't have an audio file: they want a talking-head video from a written script, and HappyHorse generates speech in-pass. The mouth sync is derived from the generated audio, not from an input file.
t2v with spoken script:
runcomfy run happyhorse/happyhorse-1-0/text-to-video \
--input '{
"prompt": "A woman in her 30s, confident expression, looks at the camera and says clearly: \"Welcome to our product demo. Today we are going to show you three things.\" Soft daylight, neutral background.",
"duration": 6,
"aspect_ratio": "9:16",
"resolution": "1080p"
}' \
--output-dir ./out
i2v from an existing portrait:
runcomfy run happyhorse/happyhorse-1-0/image-to-video \
--input '{
"image_url": "https://your-cdn.example/portrait.jpg",
"prompt": "She looks at the camera and says clearly: \"Hi, I am Aria.\" Audio: friendly tone, neutral accent.",
"duration": 5
}' \
--output-dir ./out
Prompting pattern: put the exact spoken line in quotes after says clearly: "…". Without the literal quote the model paraphrases or skips speech. Voice direction goes outside the spoken line, e.g. "Audio: friendly tone, neutral accent."

Model: bytedance/seedance-v2/pro
Catalog: seedance-v2 Pro
Pick Seedance v2 Pro when the avatar work is part of a cinematic shot: reference your subject from an image, your audio from a reference track, and have Seedance compose them with full motion + lens control.
runcomfy run bytedance/seedance-v2/pro \
--input '{
"prompt": "Anamorphic close-up โ the subject delivers a confident monologue to camera, golden hour light through window, shallow DoF.",
"reference_images": ["https://your-cdn.example/subject.jpg"],
"reference_audio": ["https://your-cdn.example/voiceover.mp3"],
"duration": 10,
"aspect_ratio": "21:9"
}' \
--output-dir ./out
Up to 9 reference images, 3 reference videos, and 3 reference audio tracks per call; match each role explicitly in the prompt.
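A hedged multi-reference sketch of that role matching: the second reference image URL and the role wording in the prompt are illustrative, and the field names are the same ones used in the example above.
runcomfy run bytedance/seedance-v2/pro \
--input '{
"prompt": "The woman from the first reference image delivers the monologue in the reference audio, standing in the loft from the second reference image. Slow push-in, golden hour, shallow DoF.",
"reference_images": ["https://your-cdn.example/subject.jpg", "https://your-cdn.example/loft-set.jpg"],
"reference_audio": ["https://your-cdn.example/voiceover.mp3"],
"duration": 10,
"aspect_ratio": "21:9"
}' \
--output-dir ./out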
Related:
ai-image-generation: generate the portrait, then upload the result
Wan 2-7 audio_url route: most flexible scene + locked lip motion
/models/feature/lip-sync: RunComfy's curated lip-sync capability tag
/models/feature/character-swap: character animation / swap
recently-added collection: fresh additions, including new avatar models

Exit codes:
| code | meaning |
|---|---|
| 0 | success |
| 64 | bad CLI args |
| 65 | bad input JSON / schema mismatch |
| 69 | upstream 5xx |
| 75 | retryable: timeout / 429 |
| 77 | not signed in or token rejected |
Full reference: docs.runcomfy.com/cli/troubleshooting.
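A minimal retry sketch built on the exit codes above; the attempt count, back-off, and the BODY variable holding the --input JSON are illustrative, and only code 75 is treated as retryable.
BODY='{"image_url": "https://your-cdn.example/presenter.jpg", "audio_url": "https://your-cdn.example/voiceover.mp3"}'
for attempt in 1 2 3; do
  runcomfy run bytedance/omnihuman/api --input "$BODY" --output-dir ./out
  code=$?
  [ "$code" -ne 75 ] && break       # 75 = timeout / 429: retry; anything else: stop
  sleep $((attempt * 30))           # simple linear back-off
done
exit "$code"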
The skill classifies the user request (do they have a pre-recorded audio file, or only a script? photoreal portrait or stylized character? single shot or cinematic composition?) and picks one of the five routes above. It then invokes runcomfy run <model_id> with the matching JSON body. The CLI POSTs to the Model API, polls request status, fetches the result, and downloads any .runcomfy.net / .runcomfy.com URLs into --output-dir.
Install: npm i -g @runcomfy/cli or npx -y @runcomfy/cli. Agents must not pipe an arbitrary remote install script into a shell on the user's behalf.
Auth: runcomfy login writes the API token to ~/.config/runcomfy/token.json with mode 0600. Set the RUNCOMFY_TOKEN env var to bypass the file in CI / containers.
Inputs: model parameters are passed as JSON via --input. The CLI does not shell-expand prompt content. No shell-injection surface.
Network: model-api.runcomfy.net and *.runcomfy.net / *.runcomfy.com. No telemetry.
Commands: runcomfy <subcommand>.
/feature/lip-sync: RunComfy's curated lip-sync capability tag (OmniHuman + related models)
/feature/character-swap: character animation / swap (Wan 2-2 Animate)
recently-added collection: fresh additions, including new avatar models
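A minimal CI sketch for the token-based auth path, assuming the token is stored as a CI secret; the secret name is hypothetical.
# In a CI step or container, before calling the CLI:
export RUNCOMFY_TOKEN="${RUNCOMFY_API_TOKEN_SECRET}"   # hypothetical secret name; bypasses ~/.config/runcomfy/token.json
runcomfy run bytedance/omnihuman/api \
--input '{"image_url": "https://your-cdn.example/presenter.jpg", "audio_url": "https://your-cdn.example/voiceover.mp3"}' \
--output-dir ./out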