Install
openclaw skills install video-production

Complete A/B video pipeline — storyboard, Veo 3 batch generation, browser preview with feedback loop, and ffmpeg assembly into final videos. Use when creating multi-scene videos, running A/B tests on hooks/CTAs, previewing clips before stitching, or assembling a final cut from approved clips.

Generate cinematic video clips with Veo 3, review them in a browser preview, iterate with feedback, and assemble final A/B test videos — all with minimal token spend.
cd ~/.openclaw/workspace/skills/video-production
# 1. Generate all clips from storyboard
.venv/bin/python3 scripts/batch_generate.py --storyboard /path/to/storyboard.json
# 2. Open browser preview
.venv/bin/python3 scripts/generate_preview.py --storyboard /path/to/storyboard.json
# 3. (After feedback) Re-generate only revised scenes
.venv/bin/python3 scripts/apply_feedback.py --storyboard storyboard.json --feedback feedback.json
# 4. Assemble final video
.venv/bin/python3 scripts/ffmpeg_assembler.py --storyboard storyboard.json
Target: 15-second videos, 3 clips × 5s each
[HOOK: 5s] → [CORE: 5s] → [CTA/PAYOFF: 5s]
    ↑                            ↑
swap for A/B                swap for A/B
Economics:
storyboard.json
↓
batch_generate.py → clips/scene_01.mp4 ... scene_05.mp4
↓
generate_preview.py → preview.html (opens in browser, zero tokens)
↓
[review + paste feedback JSON to Muffin]
↓
[Muffin suggests revised prompts, updates storyboard.json]
↓
apply_feedback.py → re-generates only 'revise' scenes
↓
ffmpeg_assembler.py → final_AA.mp4, final_BA.mp4, final_AB.mp4, final_BB.mp4
Token cost: Only when writing storyboard + interpreting feedback. Preview, generation, and assembly are all zero tokens.
{
"project": "my-video",
"output_dir": "clips",
"final_output": "final.mp4",
"scenes": [
{
"id": "scene_01",
"role": "hook_a",
"label": "Hook A",
"order": 1,
"duration": 5,
"aspect_ratio": "16:9",
"prompt": "..."
}
],
"_ab_combinations": {
"video_1_AA": ["scene_01", "scene_03", "scene_04"],
"video_2_BA": ["scene_02", "scene_03", "scene_04"],
"video_3_AB": ["scene_01", "scene_03", "scene_05"],
"video_4_BB": ["scene_02", "scene_03", "scene_05"]
}
}
See scripts/storyboard_template.json for full template.
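The `_ab_combinations` map can be expanded mechanically into ordered clip lists. A minimal sketch, assuming the field names from the example above (`resolve_combinations` is an illustrative helper, not part of the shipped scripts):

```python
# Hypothetical helper: given a storyboard dict shaped like the example above,
# resolve each _ab_combinations entry into an ordered list of clip paths.
def resolve_combinations(storyboard):
    out_dir = storyboard.get("output_dir", "clips")
    videos = {}
    for name, scene_ids in storyboard.get("_ab_combinations", {}).items():
        videos[name] = [f"{out_dir}/{sid}.mp4" for sid in scene_ids]
    return videos

storyboard = {
    "output_dir": "clips",
    "_ab_combinations": {
        "video_1_AA": ["scene_01", "scene_03", "scene_04"],
        "video_2_BA": ["scene_02", "scene_03", "scene_04"],
    },
}
print(resolve_combinations(storyboard)["video_1_AA"])
# → ['clips/scene_01.mp4', 'clips/scene_03.mp4', 'clips/scene_04.mp4']
```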
Paste this JSON to Muffin after reviewing preview.html:
{
"scenes": [
{ "id": "scene_01", "action": "approve", "notes": "" },
{ "id": "scene_02", "action": "revise", "notes": "slower camera, warmer light" }
]
}
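Conceptually, the merge step only needs the IDs marked `revise` plus their notes. A sketch of that filtering (assumed behavior for illustration, not the actual `apply_feedback.py` internals):

```python
# Illustrative sketch: collect scene IDs marked 'revise' and attach the notes
# that drive the revised prompts. 'approve' scenes are left untouched.
def scenes_to_regenerate(feedback):
    return {
        s["id"]: s.get("notes", "")
        for s in feedback["scenes"]
        if s["action"] == "revise"
    }

feedback = {
    "scenes": [
        {"id": "scene_01", "action": "approve", "notes": ""},
        {"id": "scene_02", "action": "revise", "notes": "slower camera, warmer light"},
    ]
}
```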
| Parameter | Supported |
|---|---|
| aspect_ratio | ✅ |
| number_of_videos | ✅ |
| negative_prompt | ✅ |
| duration_seconds | ❌ Broken (throws 400 even with valid values) |
| fps | ❌ Vertex AI only |
| compression_quality | ❌ Vertex AI only |
| enhance_prompt | ❌ Vertex AI only |
Models: veo-3.1-generate-preview (best) | veo-3.1-fast-generate-preview | veo-3.0-generate-001
SDK: google-genai (NOT google-generativeai)
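Given the table above, it is worth filtering a scene down to the parameters the Gemini API actually accepts before calling the SDK. A minimal sketch (`scene_config` is a hypothetical helper; field names mirror the storyboard example):

```python
# Sketch: keep only the parameters the Gemini API currently accepts
# (per the table above). duration_seconds is deliberately dropped since
# it 400s even with valid values.
SUPPORTED = {"aspect_ratio", "number_of_videos", "negative_prompt"}

def scene_config(scene):
    return {k: v for k, v in scene.items() if k in SUPPORTED}

scene = {
    "id": "scene_01",
    "prompt": "...",
    "aspect_ratio": "16:9",
    "duration_seconds": 5,   # would 400 on the Gemini API; filtered out
    "negative_prompt": "text overlays",
}
```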
Motion in every sentence — Veo produces sluggish, near-static output from static prompts. Every sentence should describe camera OR subject movement.
Character continuity — Veo can't maintain exact characters across clips. Describe physical details explicitly in every scene that includes the same character.
✅ "The same client character from the opening — dark jacket, professional bearing, 30s-40s"
Stitch continuity — For seamless cuts, open each prompt with the color/light state the previous clip ends on.
✅ "Warm amber light, a direct visual continuation from the post-production suite..."
Single continuous shot — Each prompt is one continuous clip. Design it as one camera move that reveals multiple elements — not a montage description.
Content policy — Environmental/prop-only scenes generate reliably. Stressed people on phones can silently return no video. Keep humans calm or describe the environment instead.
When you hit the daily limit (429 RESOURCE_EXHAUSTED), use the quota watcher:
# Sets a cron that retries every 30 min, texts Master when done
chmod +x scripts/quota_watcher.sh
# Add to crontab:
(crontab -l 2>/dev/null | grep -v quota_watcher; \
echo "*/30 * * * * /path/to/quota_watcher.sh >> /tmp/quota_watcher.log 2>&1") | crontab -
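The retry loop the watcher implements can be sketched in Python for clarity (the real `quota_watcher.sh` is shell + cron; `QuotaError` and `retry_until_quota_clears` are illustrative names):

```python
import time

# Conceptual sketch of the quota-watcher loop: call a generate function,
# back off on quota errors, and return once a call succeeds.
class QuotaError(Exception):
    pass

def retry_until_quota_clears(generate, interval_s=1800, max_attempts=48):
    for attempt in range(1, max_attempts + 1):
        try:
            return attempt, generate()
        except QuotaError:
            if attempt == max_attempts:
                raise
            time.sleep(interval_s)  # cron equivalent: */30 * * * *

# Demo with a stub that fails twice (429) and then succeeds.
calls = {"n": 0}
def fake_generate():
    calls["n"] += 1
    if calls["n"] < 3:
        raise QuotaError("429 RESOURCE_EXHAUSTED")
    return "clip.mp4"

attempts, result = retry_until_quota_clears(fake_generate, interval_s=0)
```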
See api-quota-watcher skill for the generic pattern.
| Script | Purpose |
|---|---|
| scripts/batch_generate.py | Generate all scenes from storyboard, skip existing |
| scripts/generate_preview.py | Build preview.html with video players + feedback form |
| scripts/apply_feedback.py | Re-generate only scenes marked 'revise' |
| scripts/ffmpeg_assembler.py | Stitch approved clips → final MP4 (cut or crossfade) |
| scripts/quota_watcher.sh | Retry + notify cron for quota recovery |
| scripts/storyboard_template.json | Starting storyboard template |
cd ~/.openclaw/workspace/skills/video-production
uv venv .venv
uv pip install google-genai Pillow requests
# API key must be in ~/.zshenv:
export GOOGLE_API_KEY="AIza..."
After all scenes are approved, run the assembler for each combination:
# Assemble all 4 A/B videos
for combo in AA BA AB BB; do
# Edit storyboard or pass scene list directly
.venv/bin/python3 scripts/ffmpeg_assembler.py \
--storyboard storyboard.json \
--output "final_${combo}.mp4"
done
Or hardcode in _ab_combinations in storyboard.json — assembler reads it automatically.
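For a cut-style (no crossfade) assembly, the underlying ffmpeg invocation amounts to a concat list plus a stream copy. A hypothetical sketch of building that command (the real assembler may differ; filenames follow the storyboard example):

```python
# Sketch: build the concat list contents and the ffmpeg command for a
# cut-style stitch (stream copy, no re-encode). Nothing is executed here.
def ffmpeg_concat_cmd(clip_paths, output, list_path="concat.txt"):
    lines = "".join(f"file '{p}'\n" for p in clip_paths)
    cmd = ["ffmpeg", "-y", "-f", "concat", "-safe", "0",
           "-i", list_path, "-c", "copy", output]
    return lines, cmd

lines, cmd = ffmpeg_concat_cmd(
    ["clips/scene_01.mp4", "clips/scene_03.mp4", "clips/scene_04.mp4"],
    "final_AA.mp4",
)
```

To run it, write `lines` to `concat.txt` and pass `cmd` to `subprocess.run`.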
| Format | Notes |
|---|---|
| 16:9 (master) | Default — all scripts use this |
| 9:16 (vertical) | Change aspect_ratio to "9:16" in storyboard |
| 1:1 (square) | Change aspect_ratio to "1:1" |
Generate separate storyboards per format for best results. Don't crop 16:9 to 9:16 in post — re-generate with proper aspect.
Every new campaign starts fresh. No inherited characters, no assumed cast, no prompt weights from previous runs. If you want continuity from a past campaign, explicitly say so:
"Use HERO_01 from the MMM campaign"
If no cast is defined, use these placeholders:
- HERO_01 — Primary UGC creator
- FRIEND_01 — Recurring side character
- HAND_MODEL_01 — Hands-only product handler

First approved output becomes the canonical identity baseline for that campaign.
When characters are defined, maintain a character_registry.json in the project folder:
{
"HERO_01": {
"identity": {
"age_range": "28-35",
"gender": "male",
"skin_tone": "...",
"hair": "...",
"build": "..."
},
"wardrobe": {
"preferred": [],
"avoid": [],
"signature": ""
},
"camera_rules": {
"preferred_framing": "medium close-up",
"avoid": []
},
"negative_constraints": [],
"reference_frames": [],
"phrase_weights": {}
}
}
When characters are defined, every prompt must include:
CAST:
- HERO: HERO_01 (identity locked; must match reference frames exactly)
Do not alter identity traits across frames or across future assets.
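Rendering that CAST block from the registry can be mechanical. A sketch assuming the field names from the registry example above (`cast_block` is an illustrative helper):

```python
# Sketch: render the locked-identity CAST block that gets prepended to each
# prompt, pulling traits from the character registry.
def cast_block(name, registry):
    ident = registry[name]["identity"]
    traits = ", ".join(f"{k}: {v}" for k, v in ident.items() if v and v != "...")
    return (
        "CAST:\n"
        f"- HERO: {name} (identity locked; must match reference frames exactly)\n"
        f"  {traits}\n"
        "Do not alter identity traits across frames or across future assets."
    )

registry = {"HERO_01": {"identity": {"age_range": "28-35", "gender": "male"}}}
block = cast_block("HERO_01", registry)
```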
After generation, run a vision-model consistency check against the reference frames.
After every human review decision, update:
Append every attempt to generation_log.jsonl (never deleted):
{
"timestamp": "...",
"campaign": "...",
"scene_id": "...",
"engine": "veo-3.1-generate-preview",
"attempt": 1,
"characters": ["HERO_01"],
"prompt": "...",
"output": "clips/scene_01.mp4",
"verification_score": 88,
"drift_notes": "",
"decision": "auto_pass",
"human_outcome": "approved",
"worked_phrases": [],
"failed_phrases": []
}
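Append-only JSONL writes are one line per attempt, opened in append mode so history is never rewritten. A minimal sketch (`log_attempt` is an illustrative helper; the temp path is just for the demo):

```python
import json
import os
import tempfile

# Sketch: one JSON object per line, append mode only — the log grows
# forever and earlier attempts are never modified.
def log_attempt(path, entry):
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

path = os.path.join(tempfile.mkdtemp(), "generation_log.jsonl")
log_attempt(path, {"scene_id": "scene_01", "attempt": 1, "decision": "auto_pass"})
log_attempt(path, {"scene_id": "scene_01", "attempt": 2, "decision": "escalate"})

with open(path, encoding="utf-8") as f:
    entries = [json.loads(line) for line in f]
```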
Escalate to Master via Telegram (never silently loop) when:
Escalation message must include: scene ID, engine, score, drift notes, and 2–3 options.
Even though each campaign starts clean, these persist in the skill folder:
- generation_log.jsonl — full audit trail
- approved_references/ — canonical frames by campaign, available to load on request
- campaign_phrase_weights/ — weight archives per campaign, loadable for continuity