Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Video Dub Clawhub

v1.0.4

Windows-first video localization pipeline for downloading, transcribing, translating, dubbing, and retiming YouTube or Bilibili videos.


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for bzxcup-afk/video-dub.

Prompt Preview: Install & Setup
Install the skill "Video Dub Clawhub" (bzxcup-afk/video-dub) from ClawHub.
Skill page: https://clawhub.ai/bzxcup-afk/video-dub
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install video-dub

ClawHub CLI


npx clawhub@latest install video-dub
Security Scan
Capability signals
Requires sensitive credentials
These labels describe what authority the skill may exercise. They are separate from suspicious or malicious moderation verdicts.
VirusTotal
Suspicious
View report →
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The skill's files and scripts implement a Windows-first video download→transcribe→translate→TTS→retime pipeline, which matches the name/description. However, the registry metadata claims no required env vars or binaries, while the SKILL.md and scripts clearly require system binaries (ffmpeg, node, Python 3.10+) and at least one API key (DEEPSEEK_API_KEY). This metadata mismatch should be resolved before trusting metadata-only checks.
Instruction Scope
The SKILL.md instructions stay within the stated purpose (download, transcribe, translate, TTS, retime). They instruct an agent to set environment variables and run the local controller script. Two caution points: (1) the instructions explicitly advise persisting DEEPSEEK_API_KEY and cookies paths as user-level environment variables (writes to OS environment), and (2) the pipeline will send subtitle/text (and possibly small audio segments) to external translation/TTS providers (DeepSeek/Edge/VolcEngine/Azure) depending on config — this is expected for cloud translation/TTS but should be considered sensitive.
Install Mechanism
No install spec is provided (instruction-only installation) and the pipeline code is bundled. That minimizes hidden remote installs. The user must manually pip-install the listed requirements and ensure ffmpeg/node are present. This is proportional but requires manual review of requirements and network access for package installs.
Credentials
The pipeline legitimately needs API keys when using remote translation or TTS providers (DEEPSEEK_API_KEY, optional provider keys). However: (a) the registry metadata lists no required env vars while SKILL.md requires DEEPSEEK_API_KEY and lists several optional env vars (YTDLP_COOKIES_FILE, NODE_OPTIONS, TTS_PROVIDER, EDGE_TTS_VOICE), an inconsistency; (b) SKILL.md suggests persisting these at user-level, which stores secrets permanently on the machine — a higher-risk action than using ephemeral session vars; and (c) requirements.txt includes packages such as openai although the default translation path is DeepSeek, which is an extra dependency to review.
Persistence & Privilege
The skill does not request 'always: true' and normal autonomous invocation is allowed. The notable persistence behavior is instructional: SKILL.md recommends setting user-level environment variables (System.Environment.SetEnvironmentVariable) which persists keys/cookie paths across sessions. This is not a platform privilege, but it increases credential exposure on the host and should be a deliberate user decision.
What to consider before installing
This skill appears to be what it says (a local video localization pipeline), but check the following before installing or running it:

  • Metadata mismatch: the registry lists no required env vars/binaries, but SKILL.md and the scripts require ffmpeg, node, Python 3.10+, and at least DEEPSEEK_API_KEY. Don't trust registry metadata alone; follow SKILL.md and inspect the files.
  • Inspect the DeepSeek and TTS provider code: open video_pipeline/scripts/services/deepseek_translator.py and the TTS provider files to see exactly which remote endpoints are used and what data is sent (translation requests include transcript text and may include short context). If you must keep transcripts private, prefer a local translation/TTS stack or review provider privacy/retention policies.
  • Prefer ephemeral env vars: when testing, set DEEPSEEK_API_KEY and YTDLP_COOKIES_FILE only in the current shell/session rather than persisting them at user level. Persisted environment variables store secrets on disk and increase risk.
  • Check for bugs and run in a sandbox: parts of the bundled code are sloppy (the shipped scripts contain truncated lines and at least one reference to an undefined variable in a truncated file excerpt). Run the pipeline on non-sensitive sample videos in an isolated environment first (or in a VM/container) and inspect logs and network calls.
  • Package installs: pip install -r requirements.txt will pull heavy ML packages (torch, openai-whisper). Understand the runtime cost and the network access required for package installation and model downloads.
  • The cookies file is sensitive: YTDLP_COOKIES_FILE points to a browser cookies.txt, which can include authentication cookies. Only use a cookies file you control, and avoid persisting its path or placing it on shared storage.

If you want to proceed, first review deepseek_translator.py and any 'requests.post' calls or remote hostnames in the services folder, avoid persisting API keys, and test locally on non-sensitive content.

Like a lobster shell, security has layers — review code before you run it.

latest: vk977qsjy6njy2cbghvfx290qk584yx3v
79 downloads
1 star
5 versions
Updated 1w ago
v1.0.4
MIT-0

Video Dub Skill

Use this skill when a user wants to turn a source video into a localized dubbed video with aligned subtitles.

This skill bundles the complete video_pipeline, so the pipeline code is included with the skill installation.

What the pipeline does

Primary workflow:

  1. Download the source video (via yt-dlp)
  2. Optionally replace only the opening picture with a cover image
  3. Extract mono 16k audio
  4. Transcribe with Whisper
  5. Clean English blocks, correct proper nouns, and translate
  6. Generate TTS
  7. Retime the video to match the dub
  8. Export aligned SRT files without burning subtitles

The main controller is video_pipeline/scripts/quick_deliver.py.
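Step 3 above (mono 16 kHz extraction) typically reduces to a single ffmpeg invocation. A minimal Python sketch, assuming ffmpeg is on PATH; the exact flags quick_deliver.py uses may differ:

```python
import subprocess

def build_extract_cmd(video_path: str, wav_path: str) -> list[str]:
    # Mono, 16 kHz, 16-bit PCM WAV — the input format Whisper expects.
    return [
        "ffmpeg", "-y",          # overwrite output without prompting
        "-i", video_path,        # source video
        "-vn",                   # drop the video stream
        "-ac", "1",              # downmix to mono
        "-ar", "16000",          # resample to 16 kHz
        "-acodec", "pcm_s16le",  # 16-bit PCM
        wav_path,
    ]

def extract_audio(video_path: str, wav_path: str) -> None:
    # Raises CalledProcessError if ffmpeg fails, so errors surface early.
    subprocess.run(build_extract_cmd(video_path, wav_path), check=True)
```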

Supported modes

  • Forward localization: English video to Chinese dubbed video
  • Reverse localization: Chinese video to English dubbed video

Requirements

Environment variables (at least one required)

Variable            Required  Description
DEEPSEEK_API_KEY    Yes*      DeepSeek API key for translation. *Required if using the default translation path.
YTDLP_COOKIES_FILE  No        Path to a YouTube cookies.txt for reliable downloads
NODE_OPTIONS        No        Set to --max-old-space-size=4096 if YouTube shows JavaScript challenges

Optional TTS providers (no API key needed for default)

Provider            Env Variable                       Required
Edge TTS (default)  TTS_PROVIDER=edge                  No
VolcEngine          TTS_PROVIDER=volcengine + API key  No
Azure               TTS_PROVIDER=azure + API key       No
Windows SAPI        TTS_PROVIDER=windows_sapi          No
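Provider selection can be read as a simple dispatch on TTS_PROVIDER. A hypothetical sketch: the function name and key-variable names (VOLCENGINE_API_KEY, AZURE_API_KEY) are illustrative, not the skill's actual API:

```python
import os

PROVIDERS = {"edge", "volcengine", "azure", "windows_sapi"}
# Hypothetical key-variable names; check the provider files for the real ones.
KEYED_PROVIDERS = {"volcengine": "VOLCENGINE_API_KEY", "azure": "AZURE_API_KEY"}

def resolve_tts_provider() -> str:
    """Pick the TTS backend from TTS_PROVIDER, defaulting to edge."""
    provider = os.getenv("TTS_PROVIDER", "edge").lower()
    if provider not in PROVIDERS:
        raise ValueError(f"unknown TTS provider: {provider}")
    # Edge TTS and Windows SAPI need no API key; the cloud providers do.
    key_var = KEYED_PROVIDERS.get(provider)
    if key_var and not os.getenv(key_var):
        raise RuntimeError(f"{provider} requires {key_var} to be set")
    return provider
```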

Translation provider

The default translation uses DeepSeek API. To use a different provider, edit video_pipeline/scripts/services/deepseek_translator.py and replace the base URL with your preferred API (e.g., OpenAI, Anthropic, Grok, etc.). The translation interface is standardized, so any LLM API that supports chat completions can be substituted.
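Since the interface is standard chat completions, swapping providers usually amounts to changing the base URL, model name, and API key. An illustrative stdlib-only sketch, not the actual contents of deepseek_translator.py; the endpoint path follows the common /chat/completions convention:

```python
import json
import os
import urllib.request

def build_translation_request(text: str,
                              base_url: str = "https://api.deepseek.com",
                              model: str = "deepseek-chat") -> tuple[str, dict]:
    """Build the URL and JSON body for a chat-completions translation call."""
    payload = {
        "model": model,
        "messages": [
            {"role": "system",
             "content": "Translate the following subtitle text into Chinese."},
            {"role": "user", "content": text},
        ],
    }
    return f"{base_url}/chat/completions", payload

def translate(text: str, **kwargs) -> str:
    # Swap base_url/model (and the matching key) to change providers.
    url, payload = build_translation_request(text, **kwargs)
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['DEEPSEEK_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req, timeout=60) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```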

System dependencies (must be installed)

  • Python 3.10+
  • ffmpeg and ffprobe (must be in PATH)
  • node (for yt-dlp's JavaScript runtime)

Python packages

pip install -r video_pipeline/requirements.txt

Key packages: yt-dlp, openai-whisper, torch, ffmpeg-python, edge-tts

Default settings

  • Whisper model: small
  • TTS provider: edge (no API key needed)
  • Edge voice: zh-CN-YunjianNeural
  • Translation: DeepSeek API
  • Retiming padding: 0.05s
  • Final subtitle target: *_zh_retimed_v4_final.srt

For reverse localization, a reasonable English voice is en-US-GuyNeural.

Proper Noun Glossary

The enrichment stage automatically applies a local glossary before translation.

Use it to normalize:

  • place names
  • people and organizations
  • recurring technical or military terms

Recommended format:

{
  "terms": [
    { "canonical": "Kyiv" },
    {
      "canonical": "Armed Forces of the Russian Federation",
      "aliases": [
        "armed force of the Russian Federation",
        "Amodovvoso-Durasian Federation"
      ],
      "min_similarity": 0.72
    }
  ]
}

Rules:

  • canonical is required.
  • aliases is optional.
  • If aliases is omitted, the canonical term still participates in fuzzy matching.
  • min_similarity is optional.
  • The glossary is stored in the pipeline bundle and does not require a separate manual step when the main controller is used.
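The fuzzy matching these rules describe can be sketched with difflib; this is a simplified stand-in for the bundled enrichment logic, which may tokenize and score differently:

```python
from difflib import SequenceMatcher

def normalize_term(phrase: str, terms: list[dict],
                   default_sim: float = 0.72) -> str:
    """Return the canonical spelling if `phrase` fuzzily matches a
    glossary entry (canonical or alias), else return it unchanged."""
    best, best_score = phrase, 0.0
    for term in terms:
        canonical = term["canonical"]                    # required
        threshold = term.get("min_similarity", default_sim)
        # The canonical form participates in fuzzy matching even when
        # no aliases are given.
        for candidate in [canonical] + term.get("aliases", []):
            score = SequenceMatcher(None, phrase.lower(),
                                    candidate.lower()).ratio()
            if score >= threshold and score > best_score:
                best, best_score = canonical, score
    return best
```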

Running the pipeline

# Install dependencies first
pip install -r video_pipeline/requirements.txt

# Set required environment variables
$env:DEEPSEEK_API_KEY="your_deepseek_api_key"  # Required for translation

# Optional: for reliable YouTube downloads
$env:YTDLP_COOKIES_FILE="path\to\youtube_cookies.txt"
$env:NODE_OPTIONS="--max-old-space-size=4096"

# Run the pipeline
cd <skill_root>\video_pipeline
python .\scripts\quick_deliver.py "https://www.youtube.com/watch?v=VIDEO_ID"

To rebuild an already processed video:

python .\scripts\quick_deliver.py "https://www.youtube.com/watch?v=VIDEO_ID" --refresh-tts

Expected outputs

After a successful forward run:

  • video_pipeline/data/output/*_zh_retimed_v4.mp4 - final dubbed video
  • video_pipeline/data/subtitles/*_zh_retimed_v4_final.srt - final subtitle file

Optional outputs:

  • video_pipeline/data/output/*_zh_male.mp4
  • video_pipeline/data/subtitles/*_zh.srt
  • video_pipeline/data/subtitles/*_zh_retimed_v4.srt
  • video_pipeline/data/structured/*.json
  • video_pipeline/data/state/debug/*_en_blocks.json

Agent guidance

When an agent runs this skill:

  1. Validate the input URL (YouTube or Bilibili)
  2. Set required environment variables (DEEPSEEK_API_KEY for translation)
  3. Ensure system dependencies are installed (ffmpeg, node, Python 3.10+)
  4. Run quick_deliver.py from the video_pipeline subdirectory
  5. Return the final video and subtitle paths
  6. If the user asks for partial reruns, rebuild only the requested stage when possible
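Step 1 (URL validation) can be implemented as a hostname allowlist. A minimal sketch; the listed hosts are illustrative and the skill's own validation may be stricter:

```python
from urllib.parse import urlparse

# Illustrative allowlist covering the two supported sites.
ALLOWED_HOSTS = {"www.youtube.com", "youtube.com", "youtu.be",
                 "www.bilibili.com", "bilibili.com"}

def is_supported_url(url: str) -> bool:
    """True if the URL is an http(s) link to an allowlisted host."""
    parsed = urlparse(url)
    return (parsed.scheme in {"http", "https"}
            and parsed.netloc.lower() in ALLOWED_HOSTS)
```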

Known limitations

  • YouTube downloads may require a cookies file when bot detection is triggered
  • Video processing requires significant disk space for intermediate files
  • The default TTS (Edge) requires no API key but does need an internet connection

Security notes

This skill uses common patterns that may trigger automated security scanners:

  • subprocess: Used to call ffmpeg, ffprobe, and yt-dlp for video processing. These are legitimate system utilities.
  • os.getenv("DEEPSEEK_API_KEY"): API key is read from environment variables only, never hardcoded.
  • decode(): Audio/video data is decoded for processing, not for malicious purposes.

These are standard practices for video processing pipelines and do not indicate any malicious behavior. The code does not:

  • Transmit data to unauthorized endpoints
  • Download or execute remote code
  • Store or exfiltrate credentials

If your security scanner blocks this skill, you can verify by reviewing the source code in video_pipeline/scripts/.

Packaging notes

This skill is published with the pipeline code bundled in the video_pipeline/ subdirectory.

The bundle excludes generated outputs and caches (data/raw/, data/audio/, data/tts/, data/output/, data/state/, etc.).

To rebuild the release bundle from source:

.\scripts\package_release.ps1 -SourceRoot "D:\video_pipeline" -DestinationRoot "<skill_destination>"
