Auto Video Cut

v1.0.0

Automatically trims single-speaker videos by detecting and removing silence and filler to produce a rough cut with quality scoring and deduplication.


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for zhuchenggong19851114-design/auto-video-cut.

Prompt preview: Install & Setup
Install the skill "Auto Video Cut" (zhuchenggong19851114-design/auto-video-cut) from ClawHub.
Skill page: https://clawhub.ai/zhuchenggong19851114-design/auto-video-cut
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install auto-video-cut

ClawHub CLI


npx clawhub@latest install auto-video-cut
Security Scan

VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
The name/description (auto-trim single-speaker video, silence/filler removal, dedup, scoring) aligns with the code and SKILL.md. The script uses FFmpeg for audio/video processing and invokes Whisper for transcription, which is exactly what this functionality requires. Sample work files included are consistent with expected inputs/outputs.
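The silence-detection side of this pipeline typically relies on FFmpeg's `silencedetect` audio filter, which logs silence intervals to stderr. The sketch below is a hypothetical helper (not the skill's actual code) showing how such output is commonly invoked and parsed; it assumes `ffmpeg` is on PATH.

```python
import re
import subprocess

def detect_silence(video_path, noise_db=-30, min_dur=0.8):
    """Run FFmpeg's silencedetect filter and return (start, end) pairs.

    Hypothetical helper; the skill's real script may differ.
    silencedetect writes its log to stderr, not stdout.
    """
    cmd = [
        "ffmpeg", "-i", video_path,
        "-af", f"silencedetect=noise={noise_db}dB:d={min_dur}",
        "-f", "null", "-",
    ]
    result = subprocess.run(cmd, capture_output=True, text=True)
    return parse_silence_log(result.stderr)

def parse_silence_log(log):
    """Extract silence intervals from silencedetect's log text."""
    starts = [float(m) for m in re.findall(r"silence_start: ([\d.]+)", log)]
    ends = [float(m) for m in re.findall(r"silence_end: ([\d.]+)", log)]
    return list(zip(starts, ends))
```

The noise threshold and minimum-duration parameters correspond to the `silence_noise` and `silence_duration` settings documented in the README below.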
Instruction Scope
SKILL.md instructs the agent/user to run `python3 video_editor_auto.py` but the repository contains `video_editor_auto_v4.6.py` (filename mismatch) — you'll need to run the actual filename or rename it. The instructions and script operate only on supplied video files and a local work directory; they do not reference or require any unrelated system paths or credentials. Note: running Whisper may download model weights (network access) on first run; the SKILL.md does not explicitly warn about model downloads.
Install Mechanism
There is no install spec; dependencies are installed via normal package managers (pip/brew) as documented. requirements.txt only lists openai-whisper. No remote arbitrary archive downloads or installers are present in the manifest. The only external tooling invoked is FFmpeg and the Whisper package.
Credentials
The skill declares no required environment variables, credentials, or config paths and the code does not attempt to read secrets or unrelated env vars. All runtime needs (ffmpeg, whisper) are proportional to the stated purpose.
Persistence & Privilege
always:false and no install-time or runtime behavior attempts to persist the skill into system-wide agent settings. The script writes output and temporary files to the provided work/output directories only.
Scan Findings in Context
[no_findings] expected: Static pre-scan reported no injection or suspicious regex matches. That matches expectation for a local video-processing script that shells out to ffmpeg and whisper.
Assessment
This skill appears to do what it says: it uses FFmpeg to detect silence and OpenAI Whisper to transcribe, then scores and trims segments. Before installing/running:

  • Fix the filename mismatch in the README, or call the provided script (video_editor_auto_v4.6.py) directly.
  • Ensure FFmpeg is installed and on PATH.
  • Be aware Whisper may download model weights the first time (it will use network and disk cache).
  • The script runs locally and reads/writes only in the work/output folders you supply; review those outputs and the script if you want to confirm no unintended file access.
  • If you will run this on sensitive videos, test on non-sensitive material first and review the transcript files it generates.

If you want, I can point out the exact lines to change for the filename or help inspect the rest of the source for further hardening.

Like a lobster shell, security has layers — review code before you run it.

85 downloads · 0 stars · 1 version
Updated 3w ago · v1.0.0 · MIT-0

auto-video-cut

Douyin/video auto-cut Skill - automatically detects filler speech and silent segments in a video and generates a rough cut.

Use cases

  • ✅ Single-speaker on-camera videos (vlogs, tutorials, podcasts, knowledge sharing)
  • ❌ Multi-person conversations and interviews
  • ❌ Content heavy on music/B-roll

Features

  • Scenario A - single video: automatically selects the best segments from one video
  • Scenario B - batch processing: processes multiple videos, dedupes across them, and concatenates the result
  • Smart scoring: 4-dimension scoring (clean start/end, fluency, natural pacing)
  • Content dedup: similarity detection based on speech transcripts
  • Auto report: generates a detailed processing report
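The cross-video dedup described above works on transcript text. As a minimal sketch (the skill's actual similarity metric and threshold are not documented here), the standard-library `difflib.SequenceMatcher` can flag near-duplicate segments:

```python
from difflib import SequenceMatcher

def dedupe_segments(transcripts, threshold=0.85):
    """Keep only segments whose transcript is not too similar to an
    earlier kept one. Illustrative only; threshold is an assumption."""
    kept = []
    for text in transcripts:
        if all(SequenceMatcher(None, text, k).ratio() < threshold for k in kept):
            kept.append(text)
    return kept
```

With a threshold of 0.85, near-identical takes of the same sentence collapse to one, while genuinely different content passes through.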

Prerequisites

  • Python 3.8+
  • FFmpeg (including ffprobe)
  • openai-whisper

Install dependencies

# macOS
brew install ffmpeg
pip install openai-whisper

# Ubuntu / Debian
sudo apt install ffmpeg
pip install openai-whisper

# Windows
# Download FFmpeg: https://ffmpeg.org/download.html
# Add it to PATH
pip install openai-whisper
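After installing, you can sanity-check that everything is reachable. This is a hypothetical helper (not part of the skill) using only the standard library:

```python
import shutil
from importlib.util import find_spec

def check_prereqs():
    """Report whether each prerequisite is available on this machine."""
    return {
        "ffmpeg": shutil.which("ffmpeg") is not None,    # binary on PATH
        "ffprobe": shutil.which("ffprobe") is not None,  # ships with ffmpeg
        "whisper": find_spec("whisper") is not None,     # pip package importable
    }
```

Any False entry means the corresponding install step above needs to be repeated.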

Usage

Scenario A: single-video cut

python3 video_editor_auto.py /path/to/video.mp4 ./output

Scenario B: batch processing + dedup + concatenation

python3 video_editor_auto.py /path/to/videos_folder ./output
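Stitching the selected clips from multiple videos is commonly done with FFmpeg's concat demuxer, which reads a text file listing the inputs. The sketch below is illustrative; the skill's actual script may concatenate differently:

```python
from pathlib import Path

def write_concat_list(clips, list_path):
    """Write an FFmpeg concat-demuxer list file.

    Feed the result to:
        ffmpeg -f concat -safe 0 -i <list_path> -c copy output.mp4
    """
    lines = [f"file '{Path(c).as_posix()}'" for c in clips]
    Path(list_path).write_text("\n".join(lines) + "\n", encoding="utf-8")
    return list_path
```

Using `-c copy` avoids re-encoding when all clips share the same codec and parameters.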

Output

  • *.mp4 - the edited video
  • *_报告.md - processing report

Parameter configuration

Edit the CONFIG dictionary at the top of the script:

Parameter         Description                                          Default
silence_noise     Silence detection threshold (dB); lower is stricter  -30
silence_duration  Minimum silence duration (seconds)                   0.8
min_score         Minimum score (0-100)                                90
min_duration      Minimum segment duration (seconds)                   15
crf               Video quality (18 ≈ visually lossless)               18
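A plausible shape for that dictionary, and how the score/duration thresholds gate which segments survive (hypothetical sketch; the real keys live at the top of the script and may differ):

```python
# Assumed CONFIG layout, mirroring the documented defaults.
CONFIG = {
    "silence_noise": -30,     # dB threshold for silence detection
    "silence_duration": 0.8,  # minimum silence length (s)
    "min_score": 90,          # keep segments scoring at least this (0-100)
    "min_duration": 15,       # drop segments shorter than this (s)
    "crf": 18,                # x264 quality; lower = higher quality
}

def keep_segment(score, duration, cfg=CONFIG):
    """Apply the min_score and min_duration filters from CONFIG."""
    return score >= cfg["min_score"] and duration >= cfg["min_duration"]
```

For example, a 20-second segment scoring 95 passes, while an 80-point segment is dropped regardless of length.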

Tuning tips

  • Noisy environment → set silence_noise to -35
  • Segments too choppy → set silence_duration to 1.0
  • Keep more candidates → set min_score to 85

Workflow

Shoot footage → archive to a folder → run the script → AI analysis + editing → output rough cut → polish in 剪映 (CapCut) → done

Source

Adapted from the gilbertwuu/Auto-Cut-video-A-Roll project
