Auto Video Editing
v1.2.2 · Automated video editing skill for talk/vlog/standup videos. Use when: cutting video, splitting video into sentences, merging video clips, extracting audio, t...
by Liu Jie (@maxazure)
Security Scan
OpenClaw
Benign · medium confidence

Purpose & Capability
Name/description match the included scripts (extract audio, transcribe with Whisper, split, burn subtitles, merge, generate cover, add chapter bar). Required binaries (ffmpeg, python3) and pip packages (faster-whisper / openai-whisper) are reasonable for the stated functionality.
Instruction Scope
SKILL.md clearly describes running the provided scripts and an environment check. The scripts operate on user-provided video/audio files and on on-disk transcript JSON. They also print transcript text for AI-assisted title generation. Note: transcription will download Whisper models and fonts at runtime (or use configured mirrors), so the skill will perform network downloads beyond just calling the service APIs — this is expected for offline ML model workflows but worth awareness.
Install Mechanism
Install spec only offers a brew formula for ffmpeg (reasonable). Python dependencies (faster-whisper, openai-whisper, etc.) are not included in an automated install step — SKILL.md instructs the user to pip install them. Runtime model downloads (HuggingFace or mirrors) and font downloads (Google Fonts / jsDelivr) are performed by the scripts; these are expected but involve fetching large third-party artifacts and contacting external servers.
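SKILL.md's environment check can be sketched roughly as follows. The binary names (ffmpeg, python3) and package names (faster-whisper, openai-whisper) come from the scan above, but the exact check logic is an assumption, not taken from the skill's code:

```python
import importlib.util
import shutil

def check_environment() -> dict:
    """Hypothetical preflight check for this skill's dependencies.

    The required binaries come from the skill's runtime requirements; the
    package names are the ones SKILL.md asks you to pip-install. How the
    skill actually performs its check is an assumption.
    """
    return {
        # Required binaries on PATH
        "ffmpeg": shutil.which("ffmpeg") is not None,
        "python3": shutil.which("python3") is not None,
        # At least one Whisper backend must be importable
        "whisper_backend": any(
            importlib.util.find_spec(pkg) is not None
            for pkg in ("faster_whisper", "whisper")
        ),
    }

print(check_environment())
```

Running this before installing tells you which of the three prerequisites are still missing, without touching the network.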
Credentials
The skill declares no required environment variables, no credentials, and no config paths. The scripts do set/use an optional USE_CN_MIRROR flag and may modify env vars for mirror usage when --mirror is passed — this is proportional to the China-optimized behavior described. There are no requests for unrelated secrets or cloud credentials.
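The mirror behavior described above can be illustrated with a small sketch. The USE_CN_MIRROR flag comes from the scan; HF_ENDPOINT and the mirror URL follow common HuggingFace Hub conventions and are assumptions about this skill's exact implementation:

```python
import os

# Hypothetical sketch of the --mirror / USE_CN_MIRROR behavior described in
# the scan. HF_ENDPOINT is the standard HuggingFace Hub endpoint override;
# whether this skill sets exactly these variables is an assumption.
CN_HF_MIRROR = "https://hf-mirror.com"

def configure_mirrors(use_cn_mirror: bool) -> dict:
    """Return (and apply) env overrides routing model downloads to a mirror."""
    overrides = {}
    if use_cn_mirror:
        overrides["USE_CN_MIRROR"] = "1"
        overrides["HF_ENDPOINT"] = CN_HF_MIRROR  # model fetches go here
    os.environ.update(overrides)
    return overrides

print(configure_mirrors(use_cn_mirror=True))
```

Because the overrides are plain environment variables, they are scoped to the current process and leave no persistent configuration behind.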
Persistence & Privilege
The skill is user-invocable and not always-enabled. It does not request persistent platform privileges, nor does it modify other skills or system-wide agent settings. It writes cached fonts/models to its own directories (normal for this type of tool).
Assessment
This skill appears to do what it claims: local video/audio processing + Whisper-based transcription + FFmpeg-based editing. Before installing or running:
- Expect large model downloads and network access (HuggingFace / mirrors, Google Fonts / jsDelivr) when you run transcription or when fonts are missing. This is normal for Whisper-based tools but can consume significant bandwidth and disk space.
- The package asks you to pip-install faster-whisper/openai-whisper manually; review those packages and install them inside a virtualenv. faster-whisper downloads models from HuggingFace by default; you can configure mirrors if needed.
- The repository owner/homepage are not provided; if provenance matters, consider auditing utils.py (it handles font/model downloads and platform detection) and running the skill in an isolated environment (VM or container) on sample files first.
- The skill does not request secrets or remote endpoints for uploading your videos, but it will print transcript text (generate_cover may print TRANSCRIPT_FOR_TITLE_GENERATION to stdout) and may send network requests only to fetch models/fonts. If you want to avoid any external network activity, do not run the transcribe or font-download steps without reviewing and pre-provisioning models/fonts locally.
If you want, I can also: (a) scan utils.py for exact network calls and mirror configuration, or (b) produce step-by-step safe install/run instructions (virtualenv, offline model use).
License
MIT-0
Free to use, modify, and redistribute. No attribution required.
Runtime requirements
🎬 Clawdis
OS: macOS · Linux · Windows
Bins: ffmpeg, python3
Install
Install FFmpeg (brew)
Bins: ffmpeg
brew install ffmpeg