video-to-srt
v1.0.0 · Generate timecoded SRT subtitles from local video or audio files. Use when a user wants a local low-cost subtitle workflow, asks to transcribe local media in...
⭐ 1 · 92 · 0 current · 0 all-time
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
OpenClaw
Benign · high confidence

Purpose & Capability
The name and description match the included scripts: the Python transcriber and wrapper create a local venv, install faster-whisper, run transcription, and write an SRT. Minor mismatches: the registry metadata lists no required binaries, but the wrapper assumes a system python3 is available (and in practice transcription will often require system media tools such as ffmpeg); the default language is zh, which is an unusual default but explainable for a China-focused workflow.
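Because the registry metadata declares no required binaries, a cautious consumer could probe for them before running anything. A minimal sketch, assuming the binary names suggested by the scan (python3 from the wrapper, ffmpeg as the usual media backend), neither of which the skill actually declares:

```python
import shutil

def check_prereqs(binaries=("python3", "ffmpeg")):
    """Return a map of binary name -> whether it is found on PATH.

    python3 is assumed by the wrapper; ffmpeg is the typical media
    backend for transcription tools (an assumption, not declared by
    the skill's metadata).
    """
    return {name: shutil.which(name) is not None for name in binaries}

missing = [name for name, ok in check_prereqs().items() if not ok]
if missing:
    print("Missing required binaries:", ", ".join(missing))
```

Running this before installation surfaces the undeclared dependencies early instead of mid-transcription.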
Instruction Scope
SKILL.md instructs the agent to locate and operate on the user's local media file and to run the provided wrapper script, which is appropriate for the stated purpose. SKILL.md also asks the agent to request permission before installing packages (good). One important runtime effect is not explicitly called out: the WhisperModel/faster-whisper runtime will download model weights from the Hugging Face Hub (network I/O, potentially large downloads) and will cache them under HF_HOME/XDG_CACHE_HOME; this is expected behavior, but it should be disclosed to users and controlled via the environment.
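That cache location can be pinned before the transcriber runs, so the weight download is at least directed somewhere predictable. A hedged sketch (the directory name is illustrative, not taken from the skill):

```python
import os
from pathlib import Path

# Point the Hugging Face cache at a known directory *before* the
# transcriber loads any model, so weights land where you expect
# instead of the default user-level cache. "hf-cache" is an example.
cache_dir = Path("hf-cache").resolve()
cache_dir.mkdir(parents=True, exist_ok=True)
os.environ.setdefault("HF_HOME", str(cache_dir))

print("Model weights will be cached under:", os.environ["HF_HOME"])
```

Using setdefault respects an HF_HOME the user has already exported, matching the override behavior the skill advertises.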
Install Mechanism
There is no explicit install spec in the registry; the wrapper script creates a local virtualenv and pip-installs faster-whisper from PyPI via requirements.txt. This is a standard approach (moderate risk): pip packages are third-party code and may pull in native wheels or additional dependencies. There are no obscure download URLs or extracted archives in the skill itself.
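The install step presumably amounts to something like the following sketch (the function and path names are assumptions; the real wrapper script may differ):

```python
import subprocess
import venv
from pathlib import Path

def bootstrap(venv_dir: Path, requirements: Path) -> Path:
    """Create a local virtualenv if it is missing, then pip-install the
    pinned requirements into it. Returns the venv's python executable."""
    if not venv_dir.exists():
        venv.EnvBuilder(with_pip=True).create(venv_dir)
    python = venv_dir / "bin" / "python"  # "Scripts/python.exe" on Windows
    subprocess.run(
        [str(python), "-m", "pip", "install", "-r", str(requirements)],
        check=True,
    )
    return python
```

Keeping the venv inside the skill folder is what limits the blast radius of the pip install to that directory.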
Credentials
The skill requests no external credentials or config paths. It does set local cache and venv locations inside the skill folder and allows overriding HF_HOME/XDG_CACHE_HOME/VENV_DIR for reuse — these are reasonable and proportional to the task.
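Those overrides follow the standard pattern of environment variable first, scoped default second. A minimal sketch of that resolution (the helper name is hypothetical):

```python
import os
from pathlib import Path

def resolve_dir(env_var: str, default: Path) -> Path:
    """Prefer an environment override (e.g. VENV_DIR or HF_HOME),
    falling back to a path inside the skill folder so state stays
    scope-limited by default."""
    return Path(os.environ.get(env_var) or default).resolve()

# Example: reuse an existing venv across runs by exporting VENV_DIR.
venv_dir = resolve_dir("VENV_DIR", Path(".venv"))
```

This keeps the default behavior self-contained while still letting a user share one venv or model cache across skills.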
Persistence & Privilege
The always flag is false, and the skill does not request persistent system-wide changes. The wrapper creates a venv and caches models in the skill folder by default (scope-limited). Autonomous invocation is allowed (the platform default) but is not combined with other concerning factors.
Assessment
This skill appears to do what it says: it runs a local Python transcription tool (faster-whisper) inside a venv and writes an SRT next to the input. Before installing or running it, consider:
1) It will pip-install faster-whisper (third-party code) into a local virtualenv; review and permit the package installation.
2) The model weights will likely be downloaded from the Hugging Face Hub (network traffic and potentially many gigabytes of storage) unless you already have cached models; be prepared for the bandwidth and disk usage and confirm you want that.
3) Ensure your environment has python3 (and typically an ffmpeg binary or equivalent media backend), even though the registry metadata declared no required binaries.
4) The skill operates on local files: confirm the agent/session has permission to read the media you plan to transcribe.
5) The default language is Chinese (zh); pass --language auto or set the language explicitly if that is not desired.
If these points are acceptable, the skill is coherent and reasonable for local transcription.
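For reference, the "writes an SRT next to the input" step needs nothing more exotic than standard SRT timecode formatting. A sketch of that output stage under the scan's description (not the skill's actual code):

```python
from pathlib import Path

def srt_timestamp(seconds: float) -> str:
    """Format seconds as an SRT timecode, e.g. 61.5 -> '00:01:01,500'."""
    ms = round(seconds * 1000)
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1_000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def write_srt(segments, out_path: Path) -> None:
    """Write numbered SRT cues; segments is an iterable of
    (start_sec, end_sec, text) tuples."""
    blocks = [
        f"{i}\n{srt_timestamp(start)} --> {srt_timestamp(end)}\n{text}\n"
        for i, (start, end, text) in enumerate(segments, 1)
    ]
    out_path.write_text("\n".join(blocks), encoding="utf-8")
```

Since this stage is pure formatting, any risk review can concentrate on the install and model-download steps above.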
latest · vk97ff36h1h6qta36bj9fkr869s836fxz
