Video Pipeline Bundle
Review
Audited by ClawScan on May 10, 2026.
Overview
The skill implements a video-processing workflow as described, but it can also automatically install packages, rename original videos, and send file/path details to external services, all of which is broader than the documentation suggests.
Install only if you are comfortable with the skill changing your Python environment, processing and renaming batches of local videos, and sending transcripts or progress details to configured external services. Test first on copies of videos in an isolated environment, disable notifications unless needed, and use restricted API keys.
Findings (5)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
Running the skill can modify the user's Python environment and pull code or models from external sources without a separate install approval step.
The transcription script changes the model download endpoint and installs unpinned Python packages during normal startup if dependencies are missing, rather than only giving a user-directed install guide.
Evidence:
os.environ.setdefault("HF_ENDPOINT", "https://hf-mirror.com")
subprocess.run([sys.executable, "-m", "pip", "install", dep, "--break-system-packages"], capture_output=True)
if not check_dependencies(): ... sys.exit(1)
Recommendation: Move dependency installation into an explicit, reviewed install step; pin package versions; avoid --break-system-packages; and disclose, or let users choose, the model download source.
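One way to implement the recommended explicit install step is to detect missing packages and print a pinned install command for the user to review, instead of running pip automatically. This is a minimal sketch; the package names, pinned versions, and function names below are illustrative assumptions, not the skill's actual dependencies:

```python
# Sketch: report missing dependencies with a pinned, user-reviewable
# install command rather than auto-installing at startup.
import importlib.util
import sys

# Illustrative mapping of importable module name -> pinned pip spec.
PINNED_DEPS = {"requests": "requests==2.32.3"}

def missing_dependencies():
    """Return the pinned install specs for packages that are not importable."""
    return [spec for mod, spec in PINNED_DEPS.items()
            if importlib.util.find_spec(mod) is None]

def require_dependencies():
    """Exit with an explicit install command instead of calling pip ourselves."""
    missing = missing_dependencies()
    if missing:
        print("Missing dependencies. Review and run, ideally in a virtualenv:")
        print(f"  {sys.executable} -m pip install " + " ".join(missing))
        sys.exit(1)
```

This keeps the environment change in the user's hands and avoids `--break-system-packages` entirely, at the cost of one extra manual step on first run.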
Original video files may be renamed across a whole directory tree, which can disrupt the user's organization or other workflows.
The clipper recursively processes videos under the input directory and renames the original input file after creating the edited output.
Evidence:
for root, dirs, filenames in os.walk(directory): ... os.rename(input_path, renamed_path)
Recommendation: Make source-file renaming opt-in, add a dry-run/confirmation step, and document exactly which files will be changed before processing.
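The opt-in and dry-run recommendation can be sketched as a small wrapper around the rename step. The function name and the (input_path, renamed_path) pairing are assumptions about how the clipper tracks files:

```python
# Sketch: renames happen only when the user opts in (rename=True) AND
# explicitly leaves dry-run mode (dry_run=False). The safe path is the default.
import os

def rename_sources(pairs, rename=False, dry_run=True):
    """Report and optionally perform source-file renames.

    pairs: iterable of (input_path, renamed_path) tuples.
    Returns the planned renames so callers can show them before processing.
    """
    planned = list(pairs)
    for src, dst in planned:
        prefix = "" if (rename and not dry_run) else "DRY RUN: "
        print(f"{prefix}{src} -> {dst}")
        if rename and not dry_run:
            os.rename(src, dst)
    return planned
```

Printing the full plan first also satisfies the "document exactly which files will be changed" part of the recommendation.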
File names and directory paths could be sent to a Feishu target when the user did not expect notifications to be active.
If OPENCLAW_TARGET is set, notifications are enabled by default and include input/output paths, which conflicts with the SKILL.md safety text claiming notifications are default-off and can be disabled with '--notify false'.
Evidence:
TARGET = os.environ.get("OPENCLAW_TARGET", "")
parser.add_argument("--notify", "-n", action="store_true", default=True, help="启用进度通知")  # help text: "enable progress notifications"
send_message(f"🎬 开始剪辑任务!共 {total_files} 个视频\n输入: {args.input}\n输出: {args.output}")  # message text: "Starting clip job! {total_files} videos in total / Input: {args.input} / Output: {args.output}"
Recommendation: Make notifications default-off in code, support an explicit disable flag, and clearly show the exact data sent in each notification.
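A minimal sketch of a default-off flag with an explicit disable form, using argparse's BooleanOptionalAction (Python 3.9+). The flag and environment-variable names follow the evidence above; the gating logic is an assumption:

```python
# Sketch: notifications are off unless the user passes --notify,
# even when OPENCLAW_TARGET is set; --no-notify is an explicit disable.
import argparse
import os

def build_parser():
    parser = argparse.ArgumentParser()
    parser.add_argument("--notify", action=argparse.BooleanOptionalAction,
                        default=False, help="enable progress notifications")
    return parser

def notifications_enabled(args):
    # Require both an explicit opt-in and a configured target.
    return bool(args.notify and os.environ.get("OPENCLAW_TARGET"))
```

This matches the SKILL.md claim of default-off behavior and gives users the documented way to turn notifications off (`--no-notify` rather than the non-functional `--notify false`).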
Audio transcript content may leave the local machine and be processed by MiniMax, OpenAI, or Anthropic depending on configuration.
The transcript text is sent to the selected external LLM provider for correction, which is purpose-aligned but can include sensitive spoken content.
Evidence:
payload = {"model": config["model"], "max_tokens": 4096, "messages": [{"role": "user", "content": prompt + text}]}
resp = requests.post(url, headers=headers, json=payload, timeout=120)
Recommendation: Use this only for videos whose transcripts can be shared with the chosen provider, or provide a local/no-LLM correction mode for sensitive material.
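The local/no-LLM fallback could be as simple as a provider value that short-circuits before any network call is made. `correct_transcript` and the "none" provider value are hypothetical names, not part of the skill:

```python
# Sketch: with provider="none" (the default) the transcript is returned
# unchanged and never leaves the machine; otherwise a caller-supplied
# send_fn performs the provider request (e.g. wrapping the requests.post
# call shown in the evidence).
def correct_transcript(text, provider="none", send_fn=None):
    if provider == "none" or send_fn is None:
        return text
    return send_fn(text)
```

Injecting the network call via `send_fn` also makes the sensitive path easy to audit: if no function is supplied, nothing can be transmitted.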
A provider API key may be used to send transcript data and incur account usage costs; passing it on the command line may expose it in shell history.
The skill reads LLM provider API keys from environment variables or command-line arguments and uses them as bearer tokens; this is expected for provider calls but not declared in registry metadata.
Evidence:
"env_key": "MINIMAX_API_KEY" ... "env_key": "OPENAI_API_KEY" ... "env_key": "ANTHROPIC_API_KEY"
api_key = args.api_key or os.environ.get(config["env_key"])
headers = {"Authorization": f"Bearer {api_key}", "Content-Type": "application/json"}
Recommendation: Declare optional credentials in registry metadata, prefer environment variables or a secrets manager, and use restricted-scope API keys where available.
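The environment-variable-first credential handling the recommendation describes can be sketched as follows; the warning wording and exit behavior are assumptions, and `resolve_api_key` is a hypothetical helper name:

```python
# Sketch: prefer the provider's environment variable and warn when a key
# arrives via the command line, where it can land in shell history.
import os
import warnings

def resolve_api_key(cli_key, env_key):
    if cli_key:
        warnings.warn(f"API key passed on the command line may be recorded "
                      f"in shell history; prefer setting {env_key} instead.")
        return cli_key
    key = os.environ.get(env_key)
    if not key:
        raise SystemExit(f"Set {env_key} before running this skill.")
    return key
```

Failing fast with a named variable also doubles as the metadata disclosure the finding asks for: the required credential is stated explicitly rather than discovered at request time.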
