Audio SRT Workflow

v0.1.2

Generate or align SRT subtitles from audio using this repository. Use when the user asks for subtitle generation, transcript-to-audio alignment, timing clean...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for sariel2018/audio-srt-workflow.

Prompt Preview: Install & Setup
Install the skill "Audio SRT Workflow" (sariel2018/audio-srt-workflow) from ClawHub.
Skill page: https://clawhub.ai/sariel2018/audio-srt-workflow
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install audio-srt-workflow

ClawHub CLI


npx clawhub@latest install audio-srt-workflow
Security Scan
VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
Name/description match the included scripts and requirements: align_to_srt.py, gui_app.py, srt_stats.py, make_preview_mp4.py and a requirements.txt listing faster-whisper. Required tools referenced in SKILL.md (Python 3.10+, ffmpeg, faster-whisper) are appropriate for subtitle/transcription work. One minor mismatch: the code reads an optional environment variable FASTER_WHISPER_MODEL_DIR to locate models but this env var is not documented in the registry's required env list.
Instruction Scope
SKILL.md gives concrete invocation templates and environment checks (python version, ffmpeg, faster_whisper import) and directs the agent to run only the included scripts on user-supplied audio/text files. The instructions do not ask for unrelated files, secrets, or to transmit outputs to unusual external endpoints. Note: faster-whisper/model usage may download model weights from remote hosts when a model path is not local — this is expected for ASR but is a network behavior to be aware of.
Install Mechanism
No install spec is present (instruction-only), and dependencies are limited to a pinned faster-whisper package in scripts/requirements.txt. No arbitrary URL downloads or archive extraction are performed by the skill itself. The README suggests using a venv and pip install -r requirements.txt which is standard.
Credentials
The skill declares no required credentials (none in registry) which matches the code. However, the code optionally consults FASTER_WHISPER_MODEL_DIR to locate local model files (not declared in requires.env). Also, faster-whisper / underlying HF tooling may use existing Hugging Face credentials (e.g., HF_TOKEN) from the environment if the user has them configured when fetching private models; that credential access is implicit and not declared. Overall requested privileges are minimal and appropriate for the task, but be aware of the implicit model-download behavior and any credentials present in your environment.
Persistence & Privilege
always:false and allow_implicit_invocation:false in the agent metadata; the skill does not request permanent system presence, nor does it modify other skills or system-wide configs. It only runs local scripts and writes output files specified by the user.
Assessment
This package appears to be what it says: an offline toolset to align/generate SRTs and render preview videos. Before installing/running:

  • Run it in a dedicated Python virtualenv to avoid affecting your system Python.
  • Confirm ffmpeg is the binary you expect (ffmpeg in PATH, or pass --ffmpeg-bin). The preview tool shells out to ffmpeg to burn subtitles.
  • If you want to avoid network downloads, pre-download Whisper model files to a local directory and set FASTER_WHISPER_MODEL_DIR (the code checks this env var) so faster-whisper won't fetch large weights at runtime.
  • Be aware: when faster-whisper does need to fetch models, it may use Hugging Face tooling, which can pick up HF tokens from your environment; avoid running this skill in an environment containing credentials you don't want used.
  • The GUI requires tkinter; the code will exit if tkinter is unavailable.
  • Inspect the included scripts (they are small and readable) and test with non-sensitive audio first.

If you need higher assurance, run in an isolated environment or container and observe network activity during model loading.


Latest version hash: vk97bvazwe07da3g1ja1ht55jj583zhj4
108 downloads · 1 star · 3 versions · Updated 3w ago
v0.1.2 · MIT-0

Audio SRT Workflow

Use this skill for end-to-end subtitle work.

This package is self-contained; its runtime entry points are:

  • scripts/align_to_srt.py
  • scripts/gui_app.py
  • scripts/srt_stats.py
  • scripts/make_preview_mp4.py
  • scripts/requirements.txt

Scope

  • Mode A: audio + reference text -> aligned SRT
  • Mode B: audio only -> auto subtitle SRT
  • Timing QA with srt_stats.py
  • Burned preview generation with make_preview_mp4.py

Inputs To Collect First

  1. Audio path (wav, mp3, m4a, ...)
  2. Whether a reference transcript is available
  3. Output SRT path (or output directory)
  4. Language hint (zh, en, ...)
  5. Preferred run style: CLI, GUI, or Python API

Decision Rule

  • If transcript exists, run Mode A (align_to_srt.py --text ...).
  • If transcript does not exist, run Mode B via GUI or Python API (run_auto_subtitle_pipeline).

Workflow

  1. Validate environment and paths.
  2. Choose Mode A or Mode B by transcript availability.
  3. Run subtitle generation from packaged scripts.
  4. Run timing diagnostics (srt_stats.py).
  5. If needed, render a preview mp4 with burned subtitles.
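Steps 2-4 above can be sketched as a thin wrapper around the packaged scripts. This is a sketch only: the default SKILL_DIR and the flags are taken from the command templates in this document, and Mode B is left to the GUI or Python API.

```python
import os
import subprocess
from typing import Optional

SKILL_DIR = os.path.expanduser("~/.codex/skills/audio-srt-workflow")

def run_workflow(audio: str, out_srt: str, transcript: Optional[str] = None) -> None:
    """Generate subtitles (Mode A when a transcript is given), then run
    timing QA. Script names and flags follow this skill's templates."""
    if transcript:
        # Mode A: align audio against the reference text
        subprocess.run(
            ["python3", f"{SKILL_DIR}/scripts/align_to_srt.py",
             "--audio", audio, "--text", transcript, "--output", out_srt],
            check=True,
        )
    else:
        # Mode B goes through gui_app.py or the Python API instead
        raise NotImplementedError("Use gui_app.py or run_auto_subtitle_pipeline")
    # Timing diagnostics on the generated SRT
    subprocess.run(
        ["python3", f"{SKILL_DIR}/scripts/srt_stats.py", "--srt", out_srt],
        check=True,
    )
```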

Resolve Skill Script Path

Set a local variable to your installed skill directory.

Codex default path:

SKILL_DIR="${CODEX_HOME:-$HOME/.codex}/skills/audio-srt-workflow"

OpenClaw/ClawHub install path example:

SKILL_DIR="<your-workdir>/skills/audio-srt-workflow"

Environment Checks

Run these checks before execution:

python3 --version
ffmpeg -version
python3 -c "import faster_whisper; print('ok')"

If faster-whisper import fails:

# Review dependencies before installing:
cat "$SKILL_DIR/scripts/requirements.txt"
pip install -r "$SKILL_DIR/scripts/requirements.txt"
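The same environment checks can be run programmatically. A minimal sketch (the function name is made up; the checks mirror the shell commands above):

```python
import importlib.util
import shutil
import sys

def preflight() -> list:
    """Return a list of missing prerequisites (empty means ready):
    Python 3.10+, ffmpeg on PATH, and an importable faster_whisper."""
    missing = []
    if sys.version_info < (3, 10):
        missing.append("python>=3.10")
    if shutil.which("ffmpeg") is None:
        missing.append("ffmpeg")
    if importlib.util.find_spec("faster_whisper") is None:
        missing.append("faster-whisper")
    return missing
```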

Mode A Command Template (Audio + Transcript)

python3 "$SKILL_DIR/scripts/align_to_srt.py" \
  --audio "<input_audio>" \
  --text "<transcript_txt>" \
  --output "<output_srt>" \
  --model small \
  --language zh

Mode B Command Template (Audio Only)

GUI:

python3 "$SKILL_DIR/scripts/gui_app.py"

Or use Python API in scripts:

  • Build config with build_alignment_config(...)
  • Run run_auto_subtitle_pipeline(...)
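A hypothetical usage sketch of the Python API. The function names come from this document, but the module location and every parameter name below are assumptions; confirm the real signatures in the packaged scripts before use.

```python
# Hypothetical sketch -- module location and parameter names are
# assumptions, not documented API; verify against the scripts.
import os
import sys

skill_dir = os.path.expanduser("~/.codex/skills/audio-srt-workflow")
sys.path.insert(0, os.path.join(skill_dir, "scripts"))

from gui_app import build_alignment_config, run_auto_subtitle_pipeline  # assumed module

config = build_alignment_config(
    audio_path="input.wav",    # assumed parameter name
    output_path="output.srt",  # assumed parameter name
    model="small",
    language="zh",
)
run_auto_subtitle_pipeline(config)
```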

See command details in references/command-templates.md.

QA And Preview

Timing stats:

python3 "$SKILL_DIR/scripts/srt_stats.py" --srt "<output_srt>"
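Independent of srt_stats.py (whose actual metrics may differ), the kind of per-cue timing it reports can be sketched with a minimal SRT timestamp parser:

```python
import re

# SRT timestamp line, e.g. "00:00:01,000 --> 00:00:03,500"
TIME = re.compile(
    r"(\d+):(\d+):(\d+)[,.](\d+) --> (\d+):(\d+):(\d+)[,.](\d+)"
)

def cue_durations(srt_text: str) -> list:
    """Return each cue's duration in seconds, in file order."""
    durations = []
    for m in TIME.finditer(srt_text):
        h1, m1, s1, ms1, h2, m2, s2, ms2 = map(int, m.groups())
        start = h1 * 3600 + m1 * 60 + s1 + ms1 / 1000
        end = h2 * 3600 + m2 * 60 + s2 + ms2 / 1000
        durations.append(end - start)
    return durations
```

Very short (sub-second) or very long durations in this list are the usual candidates for manual timing review.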

Preview video:

python3 "$SKILL_DIR/scripts/make_preview_mp4.py" \
  --audio "<input_audio>" \
  --srt "<output_srt>" \
  --output "<preview_mp4>"

Output Conventions

  • Default output uses .srt extension.
  • Prefer dated naming for batch runs (for example output_YYYYMMDD.srt).
  • Keep intermediate checks in a separate folder from final delivery files.
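The dated naming convention can be sketched as a small helper (hypothetical, not part of the skill):

```python
from datetime import date
from pathlib import Path

def dated_output_path(out_dir: str, stem: str = "output") -> Path:
    """Build a dated SRT path like output_20250115.srt for batch runs."""
    return Path(out_dir) / f"{stem}_{date.today():%Y%m%d}.srt"
```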

Notes

  • For Chinese output (zh), the pipeline strips commas/periods only.
  • If timings look off, inspect waveform snap related arguments before changing model size.
  • This skill requires explicit invocation (allow_implicit_invocation: false).
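The zh punctuation rule in the first note can be sketched as follows. Assumption: both full-width (，。) and ASCII (,.) commas and periods are removed while other punctuation is kept; verify against align_to_srt.py before relying on this.

```python
def strip_zh_commas_periods(line: str) -> str:
    """Remove full-width and ASCII commas/periods; keep everything else.
    (Assumed character set -- confirm against the packaged script.)"""
    return line.translate(str.maketrans("", "", "，。,."))
```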
