Install
```shell
openclaw skills install wjs-burning-subtitles
```

Note: ClawHub Security found sensitive or high-impact capabilities in this skill. Review the scan results before using.
Use when the user has a video + an SRT and wants the subtitles either burned into the pixels (libass, always-visible) or soft-muxed as a togglable track. Also handles the final composite step for the localization pipeline — burn subs, mix a dub track, and keep the original audio as a low-volume bed, all in ONE ffmpeg encode (no cascade). Verifies libass availability and auto-downloads a static evermeet ffmpeg build when Homebrew's stripped binary lacks it. Triggers — "烧字幕", "硬字幕", "burn subtitles", "burn-in subs", "embed subtitle", "soft mux SRT", "把字幕烧进视频", "做最终合成".
Video + SRT → video with subtitles. Also the final-encode stage for the localization pipeline: takes a video, an optional dub track from /wjs-dubbing-video, and an optional SRT to burn, and produces the upload-ready MP4 in one ffmpeg pass. No cascade of decodes/re-encodes.
When to use:

- Soft-mux mode embeds the SRT as a togglable mov_text track instead of burning.
- Final composite after /wjs-dubbing-video: burn target-language subs + mix dub over original-as-bed in one encode.
- No SRT yet? Run /wjs-transcribing-audio then /wjs-translating-subtitles first.
- Animated or styled captions belong to /wjs-overlaying-video instead. Don't mix libass burn-in with HyperFrames captions on the same output.
- Caption overlay work goes to /wjs-overlaying-video, not this skill.

Mode detection: scripts/render.py auto-detects mode from flags:
- `--video + --srt` → re-encodes video with burned subs; original audio passes through.
- `--video + --dub` → keeps original video stream; replaces or mixes the audio track.
- `--video + --srt + --dub` → burns subs AND mixes dub. By default keeps original audio at low volume as a "bed" under the dub (set `--bed-volume 0` or `--no-original-audio` to drop it).

Burn-in requires an ffmpeg built with libass. The script auto-downloads a static libass-enabled build from evermeet.cx into /tmp/ff_bin/ on first use if needed.
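The flag-to-mode mapping above can be sketched as a small dispatch. This is an illustrative shell function, not render.py's actual code; the function name is hypothetical:

```shell
# Hypothetical sketch of the mode dispatch described above.
detect_mode() {
  srt="$1"; dub="$2"; softmux="$3"   # empty string means "flag not given"
  if [ -n "$softmux" ]; then
    echo "soft-mux"                  # stream-copy, togglable track
  elif [ -n "$srt" ] && [ -n "$dub" ]; then
    echo "burn+mix"                  # one encode: burned subs + mixed audio
  elif [ -n "$srt" ]; then
    echo "burn"                      # re-encode video, audio passes through
  elif [ -n "$dub" ]; then
    echo "dub"                       # keep video stream, swap/mix audio
  else
    echo "error: need --srt or --dub" >&2
    return 1
  fi
}
```

The key property is that every flag combination maps to exactly one ffmpeg invocation, so there is never a cascade of re-encodes.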
Soft-mux: embeds the SRT as a subtitle track that player apps can show or hide. Works with any ffmpeg build and does not need libass:
```shell
ffmpeg -i input.mp4 -i input.zh-CN.srt \
  -map 0:v -map 0:a -map 1:0 \
  -c:v copy -c:a copy -c:s mov_text \
  -metadata:s:s:0 language=zho -metadata:s:s:0 title="中文" \
  output.mp4
```
This is fast (stream-copy) and reversible; use it when the target player honors embedded subtitle tracks. `render.py --video IN.mp4 --srt SUB.srt --soft-mux` runs this path.
Burn-in: required for WeChat, 抖音 (Douyin), 朋友圈 (WeChat Moments), and similar targets where the player will not honor embedded subtitle tracks. First check whether your ffmpeg has libass:
```shell
ffmpeg -filters 2>&1 | grep -E "subtitles|^.. ass "
```
If neither `subtitles` nor `ass` shows up, the build lacks libass. Homebrew's default ffmpeg formula is often stripped (no `--enable-libass`, no `--enable-libfreetype`, no `drawtext`). Don't waste time fighting the comma-escaping inside `force_style`: it will fail with `No such filter: 'subtitles'` no matter how the shell quotes it.
```shell
curl -fsSL -o /tmp/ff.zip https://evermeet.cx/ffmpeg/getrelease/zip
unzip -o /tmp/ff.zip -d /tmp/ff_bin >/dev/null
FF=/tmp/ff_bin/ffmpeg
$FF -version | grep -oE -- "--enable-(libass|libfreetype)"
```
Then use $FF instead of ffmpeg for the render. The brew binary is fine for everything else (probe, audio extraction, soft-mux). render.py does this auto-fallback if its default ffmpeg lacks libass.
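If you script this yourself rather than letting render.py handle it, the binary selection can be wrapped once. A sketch under the same assumptions as above (system ffmpeg, then a previously downloaded static build in /tmp/ff_bin); the helper name is illustrative:

```shell
# Hypothetical helper: pick an ffmpeg that has the subtitles (libass) filter.
pick_ffmpeg() {
  if ffmpeg -hide_banner -filters 2>/dev/null | grep -q subtitles; then
    echo ffmpeg                    # system build already has libass
  elif [ -x /tmp/ff_bin/ffmpeg ]; then
    echo /tmp/ff_bin/ffmpeg       # previously downloaded static build
  else
    echo "no libass-capable ffmpeg; download the evermeet build first" >&2
    return 1
  fi
}

# FF=$(pick_ffmpeg) || exit 1
```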
🛑 Checkpoint — confirm before full-render. Burn-in re-encodes the entire video (minutes of CPU on a 5-min clip). Before kicking it off:
Render only the first 30 seconds with `-t 30` for a fast preview and confirm the look with the user. Skip the checkpoint only if the user has already approved a full render of this exact video at this exact font config in the same conversation.
```shell
$FF -i input.mp4 \
  -vf "subtitles=input.zh-CN.srt:force_style='Fontname=PingFang SC\,Fontsize=12\,PrimaryColour=&H00FFFFFF\,OutlineColour=&H00000000\,BorderStyle=1\,Outline=2\,Shadow=1\,MarginL=20\,MarginR=20\,MarginV=40'" \
  -c:v libx264 -crf 18 -preset medium -pix_fmt yuv420p \
  -c:a copy output.mp4
```
Inside `force_style`, escape every comma as `\,` (the filter graph parser eats a bare comma as a chain separator). All other special characters are fine.
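If you assemble the style string programmatically, the escaping can be automated. A minimal sketch; the helper name is hypothetical:

```shell
# Escape every comma in a force_style value as '\,' so the filtergraph
# parser doesn't treat it as a chain separator.
escape_style() {
  printf '%s' "$1" | sed 's/,/\\,/g'
}

STYLE=$(escape_style 'Fontname=PingFang SC,Fontsize=12,MarginV=40')
printf '%s\n' "$STYLE"   # → Fontname=PingFang SC\,Fontsize=12\,MarginV=40
```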
libass scales its internal PlayRes up to the actual video resolution. The number you pass is not pixels in the output. As a starting calibration on a 544×960 vertical phone video, Fontsize=22 rendered each Chinese character at ~55px wide and overflowed the frame, while Fontsize=12 rendered at ~30–35px wide and fit cleanly with 15-char lines.
Rule of thumb: start at Fontsize=12, render, then always extract a frame and look:
```shell
$FF -ss 30 -i output.mp4 -frames:v 1 /tmp/frame.png -y
# then Read /tmp/frame.png to verify the longest-line cue fits
```
Pick a timestamp that lands on the cue with the most characters per line — short lines won't expose overflow. Add MarginL=20 MarginR=20 as a safety inset; never trust default left/right margins.
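Finding that worst-case timestamp by eye is error-prone; a small awk pass can locate it. This is a sketch assuming LF line endings and blank-line-separated cues; with `LC_ALL=C`, awk's `length()` counts bytes, so each CJK character counts as 3:

```shell
# Print the timing line of the cue containing the longest text line,
# i.e. the best place to aim the frame check.
find_worst_cue() {
  LC_ALL=C awk 'BEGIN { RS=""; FS="\n" } {
    for (i = 3; i <= NF; i++)        # fields 1-2 are index and timing
      if (length($i) > max) { max = length($i); ts = $2 }
  } END { print ts }' "$1"
}

# find_worst_cue input.zh-CN.srt   # e.g. 00:01:23,000 --> 00:01:26,500
```

Feed the start time of the printed range to `-ss` in the frame-extraction command above.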
Keys that matter (libass force_style):
- Fontname=PingFang SC: macOS default CJK; alternates: Songti SC, Heiti SC, STHeiti, Hiragino Sans GB.
- Fontsize=12: start small, scale up only after frame check.
- PrimaryColour=&H00FFFFFF: white text (BBGGRR + alpha).
- OutlineColour=&H00000000: black outline.
- BorderStyle=1: outline only (clean over varied backgrounds). Use BorderStyle=3 for an opaque box behind text when the background is busy.
- Outline=2: 2px outline thickness.
- Shadow=1: subtle drop shadow.
- MarginL=20 MarginR=20: keep text inside the frame.
- MarginV=40: vertical distance from the bottom edge.

Even with correct Fontsize, lines that are too long will wrap or overflow. Keep each on-screen line ≤ ~15 Chinese characters (~42 Latin chars). Use explicit line breaks (a literal newline in the cue text) inside the SRT block; do not rely on auto-wrapping. Two short lines beat one long one every time. (This is upstream discipline: /wjs-translating-subtitles should already cap cues at these limits.)
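The line-length cap can be linted mechanically before rendering. A rough sketch using a byte-length heuristic (`LC_ALL=C` makes awk count bytes: a CJK character is 3 UTF-8 bytes, so 15 CJK chars ≈ 45 bytes); the helper name is hypothetical:

```shell
# Flag SRT text lines likely to overflow at Fontsize=12. Timing lines are
# excluded; bare cue-index lines are short enough to pass anyway.
check_line_lengths() {
  LC_ALL=C awk '$0 !~ /-->/ && length($0) > 45 { print NR ": " $0 }' "$1"
}

# check_line_lengths input.zh-CN.srt   # prints offending lines; empty if clean
```

Any hit means the cue should be re-broken upstream rather than patched with a smaller font.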
A pure dub-only track sounds dubbed (because it is). Mixing the original audio at low volume under the dub gives the "professional translation" feel — you still hear the speaker's breath, emphasis, and laughter, just under the new voice.
```shell
$FF -i original.mp4 -i dub.mp4 \
  -filter_complex "[0:a]volume=0.18[orig];\
[1:a]volume=1.0[dub];\
[orig][dub]amix=inputs=2:duration=longest:normalize=0[a]" \
  -map 0:v -map "[a]" \
  -c:v copy -c:a aac -b:a 192k mixed.mp4
```
Reasonable starting volumes:
- Original bed: 0.15–0.25 (≈ −16 to −12 dB).
- Dub: 1.0.
- Keep `normalize=0` so amix doesn't auto-attenuate when both are active.
- To drop the original entirely: `--no-original-audio` (equivalent to `--bed-volume 0`).
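To sanity-check a bed level in dB, the conversion is gain_dB = 20·log10(amplitude). A quick awk helper (the function name is illustrative):

```shell
# Convert an ffmpeg volume= amplitude factor to decibels.
to_db() {
  awk -v a="$1" 'BEGIN { printf "%.1f\n", 20 * log(a) / log(10) }'
}

to_db 0.18   # → -14.9  (inside the suggested -16 to -12 dB bed window)
to_db 1.0    # → 0.0    (dub at unity gain)
```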
One ffmpeg call does all three — burn the target subtitle onto the video stream and mix the two audio tracks:
```shell
$FF -i original.mp4 -i dub.mp4 \
  -filter_complex "[0:v]subtitles=input.zh-CN.srt:force_style='Fontname=PingFang SC\,Fontsize=12\,PrimaryColour=&H00FFFFFF\,OutlineColour=&H00000000\,BorderStyle=1\,Outline=2\,Shadow=1\,MarginL=20\,MarginR=20\,MarginV=40'[v];\
[0:a]volume=0.18[orig];[1:a]volume=1.0[dub];\
[orig][dub]amix=inputs=2:duration=longest:normalize=0[a]" \
  -map "[v]" -map "[a]" \
  -c:v libx264 -crf 18 -preset medium -pix_fmt yuv420p \
  -c:a aac -b:a 192k final.mp4
```
This is the "ship to social media" final cut. `render.py --video original.mp4 --dub dub.mp4 --srt input.zh-CN.srt` runs this exact pipeline.
render.py usage:

```shell
# Subtitles only (burn):
python3 ~/.claude/skills/wjs-burning-subtitles/scripts/render.py \
  --video IN.mp4 --srt SUB.srt --out OUT.mp4

# Dub only (replace audio, no subs):
python3 ~/.claude/skills/wjs-burning-subtitles/scripts/render.py \
  --video IN.mp4 --dub IN_zh_dub.mp4 --out OUT.mp4

# Full localized cut (burn + dub + original bed):
python3 ~/.claude/skills/wjs-burning-subtitles/scripts/render.py \
  --video IN.mp4 --srt IN.zh-CN.srt --dub IN_zh_dub.mp4 --out OUT.mp4

# Soft-mux (no re-encode):
python3 ~/.claude/skills/wjs-burning-subtitles/scripts/render.py \
  --video IN.mp4 --srt SUB.srt --soft-mux --out OUT.mp4
```
See `render.py --help` for the full style/audio flag list (`--font`, `--fontsize`, `--color`, `--outline-color`, `--margin-v`, `--bed-volume`, `--no-original-audio`).
Outputs:

- <source>_burned.mp4 (re-encoded, libass-rendered subs)
- <source>_softsub.mp4 (stream-copy, mov_text track)
- <source>_final.mp4 (re-encoded video with burned subs + mixed audio)

Gotchas:

- Run `ffmpeg -filters | grep subtitles` first; auto-fall back to the evermeet static build if libass is missing.
- Commas inside `force_style`: the filter graph parser eats them. Escape every internal comma as `\,`.
- Animated caption overlays belong to /wjs-overlaying-video; don't burn here too.
- Some SRTs use `.mmm` as the millisecond separator; libass tolerates it but other downstream tools choke. Normalize to `,mmm`.
- BorderStyle=3 (opaque box): use BorderStyle=1 (outline only) unless the background is genuinely busy; the box looks heavy and dated.
- Set MarginL=20 MarginR=20 and MarginV=40 (or higher) explicitly.
- A mov_text track shows up in QuickTime but not in some Android players. If the target audience is mobile-Chinese, soft-mux is unreliable; burn instead.
- If text is hard to read, bump Outline=2 → Outline=3, or switch to BorderStyle=3 for a translucent box (BackColour=&H80000000 for 50% black).

Related skills:

- /wjs-transcribing-audio + /wjs-translating-subtitles: produce the SRT input.
- /wjs-dubbing-video: produces the *_<lang>_dub.mp4 input for full-localized-cut mode. The dub-only file is technically a finished video; this skill is what mixes the original underneath and burns the subs to make it shippable.
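The `.mmm` timestamp gotcha can be fixed mechanically. A sketch that rewrites only timing lines, so dots inside subtitle text are untouched (assumes well-formed HH:MM:SS timestamps; the helper name is hypothetical):

```shell
# Normalize '.' millisecond separators to the SRT-standard ',' on
# timing lines only (lines containing '-->').
normalize_srt() {
  sed -E '/-->/ s/([0-9]{2}:[0-9]{2}:[0-9]{2})\.([0-9]{3})/\1,\2/g' "$1"
}

# normalize_srt input.srt > input.fixed.srt
```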