Video Post Production

v1.0.0

End-to-end short-video post-production from one raw talking-head video: transcribe speech, build timed subtitle phrases, highlight key words, place sound eff...

Security Scan
VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
The name and description match the included scripts (speech alignment, subtitle generation, rendering). The declared runtime tools (ffmpeg, ffprobe, python, faster-whisper) are appropriate for transcription and video/audio processing. No unrelated binaries or credentials are requested.
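The declared tool list is easy to verify locally before running anything. A minimal sketch (the tool names come from the skill's declaration; the helper itself is hypothetical, not part of the skill):

```python
import shutil

# Runtime tools the skill declares (ffmpeg/ffprobe for media work,
# python3 for the bundled scripts).
DECLARED_TOOLS = ("ffmpeg", "ffprobe", "python3")

def missing_tools(tools=DECLARED_TOOLS):
    """Return the declared binaries that are not found on PATH."""
    return [t for t in tools if shutil.which(t) is None]

missing = missing_tools()
if missing:
    print("Missing:", ", ".join(missing))
else:
    print("All declared tools found.")
```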
Instruction Scope
Runtime instructions stay within the stated post-production workflow and reference only local files, ffmpeg, and the included scripts. Two things to note: (1) reference docs show example curl commands to download BGM/SFX from third-party sites (Mixkit, Pixabay) — these are optional examples but would fetch remote content if followed; (2) using faster-whisper/Whisper models typically triggers large model downloads (network + disk) when first run — the SKILL.md doesn't explicitly call this out, but it is expected behavior of the transcription dependency.
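Once the model weights are downloaded, faster-whisper can emit word-level timestamps; turning those into timed subtitle phrases can be sketched roughly as follows (the word tuples and the gap threshold are illustrative assumptions, not taken from the skill's scripts):

```python
# Each word is (text, start_seconds, end_seconds), the shape
# word-level timestamps from a transcription pass might take.
words = [
    ("welcome", 0.00, 0.40), ("back", 0.45, 0.70),
    ("today", 1.60, 1.95), ("we", 2.00, 2.10), ("cover", 2.15, 2.50),
]

def group_phrases(words, max_gap=0.6):
    """Split words into phrases wherever the silence between
    consecutive words exceeds max_gap seconds."""
    phrases, current = [], []
    for w in words:
        if current and w[1] - current[-1][2] > max_gap:
            phrases.append(current)
            current = []
        current.append(w)
    if current:
        phrases.append(current)
    # Collapse each phrase to (text, start, end).
    return [(" ".join(w[0] for w in p), p[0][1], p[-1][2]) for p in phrases]

print(group_phrases(words))
```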
Install Mechanism
There is no install spec (instruction-only) and the code runs locally, which is lower risk. The SKILL.md suggests 'pip3 install faster-whisper' if missing; that is standard but will pull packages from PyPI. Reference docs contain curl examples to third-party sites — these are optional but involve external downloads if executed.
Credentials
The skill requires no environment variables, credentials, or config paths. The scripts do not read secrets or system configuration beyond checking standard binaries and working with provided media files.
Persistence & Privilege
The 'always' flag is false; the skill does not request permanent presence or modify other skills. It runs on demand and writes outputs only to a working directory next to the input video.
Assessment
This skill appears to do what it claims: it runs local Python scripts plus ffmpeg to transcribe speech, generate ASS subtitles, and render a final video. Before installing or running:
1) Ensure ffmpeg, ffprobe, and python3 are present, and expect faster-whisper/Whisper to download model weights (large; network and disk usage) the first time you transcribe.
2) The reference docs include curl examples that download BGM/SFX from third-party sites; only run those if you trust the source.
3) No credentials or secrets are requested by the skill, but you should still run it on a machine where large downloads and CPU/GPU usage are acceptable.
4) The subtitle generator uses simple text replacement for highlights and may over-replace in edge cases, so verify outputs on a sample video first.
If you are processing sensitive videos, run the skill in an isolated environment to limit exposure.
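The highlight caveat is easy to check yourself: a naive string replacement that wraps a keyword in ASS color-override tags also rewrites substring matches, while a word-boundary regex does not (the override tag and keyword here are illustrative, not the skill's actual code):

```python
import re

line = "The cat scattered the catalog pages."
tag = r"{\c&H00FFFF&}cat{\r}"  # ASS override: recolor "cat", then reset style

# Naive replacement also rewrites "scattered" and "catalog":
naive = line.replace("cat", tag)

# A word-boundary regex highlights only the standalone word; a function
# replacement avoids backslash-escape processing in the tag:
safe = re.sub(r"\bcat\b", lambda m: tag, line)

print(naive)
print(safe)
```

Running the naive version on a sample transcript makes the over-replacement visible immediately, which is the kind of spot check point 4 of the assessment recommends.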



License

MIT-0
Free to use, modify, and redistribute. No attribution required.
