Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Video Podcast Maker

v2.0.0

Use when user provides a topic and wants an automated video podcast created, OR when user wants to learn/analyze video design patterns from reference videos...

by Agents365.ai (@agents365-ai)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for agents365-ai/video-podcast-maker.

Prompt preview (Install & Setup):
Install the skill "Video Podcast Maker" (agents365-ai/video-podcast-maker) from ClawHub.
Skill page: https://clawhub.ai/agents365-ai/video-podcast-maker
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required binaries: python3, ffmpeg, node, npx
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install video-podcast-maker

ClawHub CLI


npx clawhub@latest install video-podcast-maker

Security Scan
Capability signals
Crypto · Requires wallet · Can make purchases · Requires OAuth token
These labels describe what authority the skill may exercise. They are separate from suspicious or malicious moderation verdicts.
VirusTotal: Suspicious (View report →)
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The name/description, required binaries (python3, ffmpeg, node/npx), and code (Remotion templates, TTS backends, generation scripts) line up with a video podcast generator and design learner. AZURE_SPEECH_KEY as the primary credential is appropriate for Azure TTS. The dependency on remotion-best-practices and the use of Remotion/FFmpeg/TTS engines is coherent with the stated functionality.
Instruction Scope
SKILL.md instructs the agent to perform web research, TTS synthesis, Remotion rendering, frame extraction, and optional Playwright-based captures — all within expected scope. It also includes an auto-update routine that runs git fetch and (with explicit user consent) git pull in the skill directory and writes a .last_update_check timestamp file. Reading/writing files under ${CLAUDE_SKILL_DIR} and generating many project artifacts (timing.json, podcast_audio.wav, shorts/, etc.) is expected. No instructions request unrelated system secrets, but Playwright capture and the web-oriented parts may require additional binaries/permissions not declared in the metadata (browsers/Playwright).
Install Mechanism
Install spec uses brew to install ffmpeg (reasonable) and a 'uv' package entry for edge-tts (edge-tts is a known pip package). The 'uv' installer kind is ambiguous in the metadata (not a standard installer label), so it's unclear how edge-tts will be installed in the agent environment. The repository also contains an onyx_data/deployment README that shows a curl | install.sh pattern referencing raw.githubusercontent.com; that file is present in the repo but not automatically executed by SKILL.md. Presence of fetched-install instructions in the repo increases the attack surface if someone later runs those scripts, so treat them as a caution.
Credentials
The skill declares a single primary env var AZURE_SPEECH_KEY (appropriate for Azure TTS). The README and scripts reference many optional environment variables for other TTS backends (ELEVENLABS_API_KEY, OPENAI_API_KEY, VOLCENGINE_* etc.). Those are optional and reasonable for multi-backend TTS, but metadata only lists AZURE_SPEECH_KEY which is a mild inconsistency: the runtime may read additional env vars (if a user selects other backends) that aren't declared in requires.env. Users should be aware the skill will use any TTS-related keys present in their environment if configured to do so.
Persistence & Privilege
always:false (no forced/global install). The skill writes a .last_update_check timestamp in ${CLAUDE_SKILL_DIR} and contains an auto-update check that can run git fetch and — with explicit user confirmation — git pull to update code. This makes the skill able to modify its own code on disk with user consent. That behavior is documented in SKILL.md and prompts the user before pulling, but it increases runtime dynamism and should be considered when evaluating trustworthiness.
What to consider before installing
This skill appears to do what it claims (create video podcasts and learn visual design patterns), but review the following before you install or run it:

1) Auto-update (git pull): The skill checks upstream and can pull updates into its own directory. It asks the user before pulling, but pulled updates change code that will later run on your machine; treat this like installing third-party code on demand. Only allow updates if you trust the repository source.

2) Embedded deployment material: The repo contains an onyx_data/deployment README referencing a curl|install.sh flow (it downloads a script from raw.githubusercontent.com). Those install instructions are present in the repo but are not automatically executed by the skill. Do NOT run those remote-install commands unless you review the script first.

3) Credentials/environment variables: The metadata declares AZURE_SPEECH_KEY as the primary credential, which is reasonable for Azure TTS. The README and code support many optional TTS backends (ElevenLabs, OpenAI, Volcengine, CosyVoice, Google). If you set those API keys in your environment, the skill may use them when configured to do so; only provide keys you are comfortable sharing with this tool, and consider using scoped keys.

4) Install ambiguity: The metadata lists an installer kind 'uv' for edge-tts, which is ambiguous. Confirm how edge-tts will be installed in your runtime (pip? something else?). Also, Playwright/browser capture is marked experimental and may require additional browser binaries that SKILL.md does not declare explicitly.

5) Run in isolation: Because the skill runs arbitrary subprocesses (ffmpeg, npx remotion, ffprobe) and can modify files under its skill directory, run it in an isolated or disposable environment (container, VM, dedicated project directory) the first time. Inspect the code, especially any scripts that perform curl downloads or execute external installers, before granting network access or API keys.

6) Verify the remotion-best-practices dependency: SKILL.md mandates invoking remotion-best-practices first. Confirm that skill's source and trustworthiness, because this skill relies on it for core behavior.

What would increase confidence: explicit installer steps that match typical package managers (pip/apt/brew) with no ambiguous 'uv' installer, explicit declaration of all optional env vars in the metadata, and removal (or clearer documentation) of the onyx_data remote-install references.


Runtime requirements

🎬 Clawdis
OS: macOS · Linux
Bins: python3, ffmpeg, node, npx
Primary env: AZURE_SPEECH_KEY

Install

Homebrew
Bins: ffmpeg
brew install ffmpeg
uv
Bins: edge-tts
uv tool install edge-tts
Latest: vk970e4c9dn1mczq4d8an8trggx84ajst
98 downloads · 0 stars · 1 version
Updated 3w ago
v2.0.0
MIT-0
macOS, Linux

REQUIRED: Load Remotion Best Practices First

This skill depends on remotion-best-practices. You MUST invoke it before proceeding:

Skill tool: skill="remotion-best-practices"

Video Podcast Maker

Quick Start

Open Claude Code and say: "Make a video podcast about $ARGUMENTS"

Or invoke directly: /video-podcast-maker AI Agent tutorial


Design Learning

Extract visual design patterns from reference videos or images, store them in a searchable library, and apply them to new video compositions.

Commands

# Learn from images (Claude Vision analyzes design patterns)
python3 learn_design.py ./screenshot1.png ./screenshot2.png

# Learn from a local video (ffmpeg extracts frames automatically)
python3 learn_design.py ./reference.mp4

# Learn from a URL (Playwright captures screenshots — experimental)
python3 learn_design.py https://www.bilibili.com/video/BV1xx411c7mD

# Save with a named profile and tags
python3 learn_design.py ./reference.mp4 --profile "tech-minimal" --tags "tech,minimal,dark"

Reference Library Commands

references list          # List all stored references (auto-cleans orphaned entries)
references show <id>     # Show full design report for a reference
references delete <id>   # Delete a reference and its files

Style Profile Commands

profiles list            # List all saved style profiles
profiles show <name>     # Show profile props_override
profiles delete <name>   # Delete a style profile
profiles create <name>   # Create a new style profile interactively

Pre-Workflow Usage

When the user provides a reference video or image alongside a video creation request, extract design patterns before Step 1 and apply them as session overrides. See references/workflow-steps.md → Pre-workflow section for the full extraction flow.

Step 9 Integration

Before choosing visual design in Step 9, check for matching style profiles or reference library entries. Apply the best match as a starting point for Remotion composition props. See references/workflow-steps.md → Step 9 Style Profile Integration for the priority chain.


Auto Update Check

Agent behavior: Check for updates at most once per day (throttled by timestamp file):

STAMP="${CLAUDE_SKILL_DIR}/.last_update_check"
NOW=$(date +%s)
LAST=$(cat "$STAMP" 2>/dev/null || echo 0)
if [ $((NOW - LAST)) -gt 86400 ]; then
  timeout 5 git -C "${CLAUDE_SKILL_DIR}" fetch --quiet 2>/dev/null || true
  LOCAL=$(git -C "${CLAUDE_SKILL_DIR}" rev-parse HEAD 2>/dev/null)
  REMOTE=$(git -C "${CLAUDE_SKILL_DIR}" rev-parse origin/main 2>/dev/null)
  echo "$NOW" > "$STAMP"
  if [ -n "$LOCAL" ] && [ -n "$REMOTE" ] && [ "$LOCAL" != "$REMOTE" ]; then
    echo "UPDATE_AVAILABLE"
  else
    echo "UP_TO_DATE"
  fi
else
  echo "SKIPPED_RECENT_CHECK"
fi
  • Update available: Ask the user via AskUserQuestion. Yes → git -C "${CLAUDE_SKILL_DIR}" pull. No → continue.
  • Up to date / Skipped: Continue silently.

Prerequisites Check

!( missing=""; node -v >/dev/null 2>&1 || missing="$missing node"; python3 --version >/dev/null 2>&1 || missing="$missing python3"; ffmpeg -version >/dev/null 2>&1 || missing="$missing ffmpeg"; [ -n "$AZURE_SPEECH_KEY" ] || missing="$missing AZURE_SPEECH_KEY"; if [ -n "$missing" ]; then echo "MISSING:$missing"; else echo "ALL_OK"; fi )

If MISSING reported above, see README.md for full setup instructions (install commands, API key setup, Remotion project init).


Overview

An automated pipeline that turns a topic into a professional horizontal knowledge video for Bilibili.

Target: Bilibili horizontal video (16:9)

  • Resolution: 3840×2160 (4K) or 1920×1080 (1080p)
  • Style: Clean white (default)

Tech stack: Claude + Azure TTS + Remotion + FFmpeg

Output Specs

Parameter   | Horizontal (16:9) | Vertical (9:16)
Resolution  | 3840×2160 (4K)    | 2160×3840 (4K)
Frame rate  | 30 fps            | 30 fps
Encoding    | H.264, 16 Mbps    | H.264, 16 Mbps
Audio       | AAC, 192 kbps     | AAC, 192 kbps
Duration    | 1-15 min          | 60-90 s (highlight)
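At these specs, the raw bitrate arithmetic gives a quick file-size estimate before committing to a long render. A minimal sketch; the helper name and defaults are illustrative, not part of the skill:

```python
# Back-of-envelope size estimate from the encoding specs above
# (16 Mbps H.264 video + 192 kbps AAC audio). Real encoder output
# varies with content; this is an upper-bound estimate.
def estimated_size_mb(duration_s: float,
                      video_kbps: int = 16000,
                      audio_kbps: int = 192) -> float:
    total_bits = (video_kbps + audio_kbps) * 1000 * duration_s
    return total_bits / 8 / 1_000_000  # megabytes

print(round(estimated_size_mb(15 * 60)))  # ~1822 MB for a 15-minute render
```

Useful for sanity-checking available disk space before Step 10.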

Execution Modes

Agent behavior: Detect user intent at workflow start:

  • "Make a video about..." / no special instructions → Auto Mode
  • "I want to control each step" / mentions interactive → Interactive Mode
  • Default: Auto Mode

Auto Mode (Default)

Full pipeline with sensible defaults. Mandatory stop at Step 9:

  1. Step 9: Launch Remotion Studio — user reviews in real-time, requests changes until satisfied
  2. Step 10: Only triggered when user explicitly says "render 4K" / "render final version"

Step | Decision         | Auto default
3    | Title position   | top-center
5    | Media assets     | Skip (text-only animations)
7    | Thumbnail method | Remotion-generated (16:9 + 4:3)
9    | Outro animation  | Pre-made MP4 (white/black by theme)
9    | Preview method   | Remotion Studio (mandatory)
12   | Subtitles        | Skip
14   | Cleanup          | Auto-clean temp files

Users can override any default in their initial request:

  • "make a video about AI, burn subtitles" → auto + subtitles on
  • "use dark theme, AI thumbnails" → auto + dark + imagen
  • "need screenshots" → auto + media collection enabled
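The override behavior above can be sketched as a keyword scan over the initial request. The trigger phrases and option keys here are assumptions for illustration, not the skill's actual parser:

```python
# Hypothetical resolver: start from Auto Mode defaults, then apply any
# overrides mentioned in the user's initial request.
AUTO_DEFAULTS = {"subtitles": False, "theme": "white", "media": False}

OVERRIDE_TRIGGERS = {
    "burn subtitles": {"subtitles": True},
    "dark theme": {"theme": "dark"},
    "need screenshots": {"media": True},
}

def resolve_options(request: str) -> dict:
    opts = dict(AUTO_DEFAULTS)  # never mutate the shared defaults
    for phrase, override in OVERRIDE_TRIGGERS.items():
        if phrase in request.lower():
            opts.update(override)
    return opts

print(resolve_options("make a video about AI, burn subtitles"))
```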

Interactive Mode

Prompts at each decision point. Activated by:

  • "interactive mode" / "I want to choose each option"
  • User explicitly requests control

Workflow State & Resume

Planned feature (not yet implemented). Currently, workflow progress is tracked via Claude's conversation context. If a session is interrupted, re-invoke the skill and Claude will check existing files in videos/{name}/ to determine where to resume.
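The resume check described above can be sketched by mapping known artifacts to the workflow steps that produce them. The file-to-step mapping follows the directory structure documented later on this page; the helper itself is hypothetical:

```python
import os

# Infer the furthest completed step from which artifacts already exist
# in videos/{name}/. Partial mapping for illustration.
STEP_ARTIFACTS = [
    (1, "topic_definition.md"),
    (2, "topic_research.md"),
    (4, "podcast.txt"),
    (7, "thumbnail_remotion_16x9.png"),
    (8, "timing.json"),
    (10, "output.mp4"),
]

def last_completed_step(video_dir: str) -> int:
    done = 0
    for step, filename in STEP_ARTIFACTS:
        if os.path.exists(os.path.join(video_dir, filename)):
            done = step
    return done
```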


Technical Rules

Hard constraints for video production — visual design is Claude's creative freedom:

Rule                 | Requirement
Single Project       | All videos under videos/{name}/ in the user's Remotion project. NEVER create a new project per video.
4K Output            | 3840×2160; use a scale(2) wrapper over the 1920×1080 design space
Content Width        | ≥85% of screen width
Bottom Safe Zone     | Bottom 100px reserved for subtitles
Audio Sync           | All animations driven by timing.json timestamps
Thumbnail            | MUST generate 16:9 (1920×1080) AND 4:3 (1200×900). Centered layout, title ≥120px, icons ≥120px, fill most of the canvas. See design-guide.md.
Font                 | PingFang SC / Noto Sans SC for Chinese text
Studio Before Render | MUST launch Remotion Studio for user review. NEVER render 4K until the user explicitly confirms ("render 4K", "render final").
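The Audio Sync rule can be illustrated by converting timing.json timestamps into Remotion frame offsets at the 30 fps spec. The timing.json shape shown here is an assumption based on the artifacts this skill generates, not a documented schema:

```python
# Convert section timestamps (seconds) into (from, durationInFrames)
# pairs for Remotion <Sequence> props at 30 fps.
FPS = 30

timing = {
    "sections": [
        {"name": "intro", "start": 0.0, "end": 12.4},
        {"name": "main_point", "start": 12.4, "end": 95.0},
    ]
}

def section_frames(section: dict, fps: int = FPS) -> tuple[int, int]:
    start = round(section["start"] * fps)
    end = round(section["end"] * fps)
    return start, end - start

for s in timing["sections"]:
    print(s["name"], section_frames(s))
```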

Additional Resources

Claude loads these files on demand — do NOT load all at once:

  • references/workflow-steps.md: Detailed step-by-step instructions (Steps 1-14). Load at workflow start.
  • references/design-guide.md: Visual minimums, typography, layout patterns, checklists. MUST load before Step 9.
  • references/troubleshooting.md: Error fixes, BGM options, preference commands, preference learning. Load on error or user request.
  • examples/: Real production video projects. Claude may reference these for composition structure and timing.json format.

Directory Structure

project-root/                           # Remotion project root
├── src/remotion/                       # Remotion source
│   ├── compositions/                   # Video composition definitions
│   ├── Root.tsx                        # Remotion entry
│   └── index.ts                        # Exports
│
├── public/                             # Remotion default (unused — use --public-dir videos/{name}/)
│
├── videos/{video-name}/                # Video project assets
│   ├── workflow_state.json             # Workflow progress
│   ├── topic_definition.md             # Step 1
│   ├── topic_research.md               # Step 2
│   ├── podcast.txt                     # Step 4: narration script
│   ├── podcast_audio.wav               # Step 8: TTS audio
│   ├── podcast_audio.srt               # Step 8: subtitles
│   ├── timing.json                     # Step 8: timeline
│   ├── thumbnail_*.png                 # Step 7
│   ├── output.mp4                      # Step 10
│   ├── video_with_bgm.mp4             # Step 11
│   ├── final_video.mp4                 # Step 12: final output
│   └── bgm.mp3                         # Background music
│
└── remotion.config.ts

Important: Always use --public-dir and full output path for Remotion render:

npx remotion render src/remotion/index.ts CompositionId videos/{name}/output.mp4 --public-dir videos/{name}/

Naming Rules

Video name {video-name}: lowercase English, hyphen-separated (e.g., reference-manager-comparison)

Section name {section}: lowercase English, underscore-separated, matches [SECTION:xxx]
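These naming rules can be checked mechanically. A sketch using regular expressions; allowing digits is an assumption on my part, since the rules above only say "lowercase English":

```python
import re

# "lowercase English, hyphen-separated" and
# "lowercase English, underscore-separated" as patterns.
VIDEO_NAME_RE = re.compile(r"[a-z0-9]+(-[a-z0-9]+)*")
SECTION_NAME_RE = re.compile(r"[a-z0-9]+(_[a-z0-9]+)*")

def is_valid_video_name(name: str) -> bool:
    return VIDEO_NAME_RE.fullmatch(name) is not None

def is_valid_section_name(name: str) -> bool:
    return SECTION_NAME_RE.fullmatch(name) is not None

print(is_valid_video_name("reference-manager-comparison"))  # True
print(is_valid_section_name("main_point"))                  # True
```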

Thumbnail naming (16:9 AND 4:3 both required):

Type     | 16:9                        | 4:3
Remotion | thumbnail_remotion_16x9.png | thumbnail_remotion_4x3.png
AI       | thumbnail_ai_16x9.png       | thumbnail_ai_4x3.png

Public Directory

Use --public-dir videos/{name}/ for all Remotion commands. Each video's assets (timing.json, podcast_audio.wav, bgm.mp3) stay in its own directory — no copying to public/ needed. This enables parallel renders of different videos.

# All render/studio/still commands use --public-dir
npx remotion studio src/remotion/index.ts --public-dir videos/{name}/
npx remotion render src/remotion/index.ts CompositionId videos/{name}/output.mp4 --public-dir videos/{name}/ --video-bitrate 16M
npx remotion still src/remotion/index.ts Thumbnail16x9 videos/{name}/thumbnail.png --public-dir videos/{name}/
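To make sure --public-dir and the full output path are never forgotten, the render command can be built programmatically. A sketch mirroring the commands above; the helper name is illustrative:

```python
# Build the remotion render argv with the per-video public dir baked in.
def remotion_render_cmd(name: str, composition: str = "CompositionId") -> list[str]:
    video_dir = f"videos/{name}/"
    return [
        "npx", "remotion", "render",
        "src/remotion/index.ts", composition,
        f"{video_dir}output.mp4",
        "--public-dir", video_dir,
        "--video-bitrate", "16M",
    ]

print(" ".join(remotion_render_cmd("reference-manager-comparison")))
```

Pass the list to subprocess.run() rather than a shell string to avoid quoting issues with video names.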

Workflow

Progress Tracking

At Step 1 start:

  1. Create videos/{name}/workflow_state.json
  2. Use TaskCreate to create tasks per step. Mark in_progress on start, completed on finish.
  3. Each step updates BOTH workflow_state.json AND TaskUpdate.

Steps:

 1. Define topic direction → topic_definition.md
 2. Research topic → topic_research.md
 3. Design video sections (5-7 chapters)
 4. Write narration script → podcast.txt
 5. Collect media assets → media_manifest.json
 6. Generate publish info (Part 1) → publish_info.md
 7. Generate thumbnails (16:9 + 4:3) → thumbnail_*.png
 8. Generate TTS audio → podcast_audio.wav, timing.json
 9. Create Remotion composition + Studio preview (mandatory stop)
10. Render 4K video (only on user request) → output.mp4
11. Mix background music → video_with_bgm.mp4
12. Add subtitles (optional) → final_video.mp4
13. Complete publish info (Part 2) → chapter timestamps
14. Verify output & cleanup
15. Generate vertical shorts (optional) → shorts/
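The bookkeeping above can be sketched as follows. The workflow_state.json schema is an assumption; only the filename comes from this document:

```python
import json
import os

# Create the state file at Step 1 with every step pending.
def init_state(video_dir: str, total_steps: int = 15) -> str:
    path = os.path.join(video_dir, "workflow_state.json")
    state = {"steps": {str(i): "pending" for i in range(1, total_steps + 1)}}
    with open(path, "w", encoding="utf-8") as f:
        json.dump(state, f)
    return path

# Mark a step in_progress on start, completed on finish.
def mark(path: str, step: int, status: str) -> None:
    with open(path, encoding="utf-8") as f:
        state = json.load(f)
    state["steps"][str(step)] = status
    with open(path, "w", encoding="utf-8") as f:
        json.dump(state, f)
```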

Validation Checkpoints

After Step 8 (TTS):

  • podcast_audio.wav exists and plays correctly
  • timing.json has all sections with correct timestamps
  • podcast_audio.srt encoding is UTF-8

After Step 10 (Render):

  • output.mp4 resolution is 3840x2160
  • Audio-video sync verified
  • No black frames
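The Step 8 checkpoint above can be sketched as a small validator. Checking that the .wav "plays correctly" would need ffprobe or similar and is out of scope for this sketch; the remaining checks are mechanical:

```python
import json
import os

def validate_step8(video_dir: str) -> list[str]:
    """Return a list of problems; empty means the checkpoint passes."""
    problems = []
    for fname in ("podcast_audio.wav", "timing.json", "podcast_audio.srt"):
        if not os.path.exists(os.path.join(video_dir, fname)):
            problems.append(f"missing {fname}")
    timing_path = os.path.join(video_dir, "timing.json")
    if os.path.exists(timing_path):
        try:
            with open(timing_path, encoding="utf-8") as f:
                if not json.load(f).get("sections"):
                    problems.append("timing.json has no sections")
        except json.JSONDecodeError:
            problems.append("timing.json is not valid JSON")
    srt_path = os.path.join(video_dir, "podcast_audio.srt")
    if os.path.exists(srt_path):
        try:
            with open(srt_path, encoding="utf-8", errors="strict") as f:
                f.read()
        except UnicodeDecodeError:
            problems.append("podcast_audio.srt is not UTF-8")
    return problems
```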

Key Commands Reference

See CLAUDE.md for the full command reference (TTS, Remotion, FFmpeg, shorts generation).


User Preference System

Skill learns and applies preferences automatically. See references/troubleshooting.md for commands and learning details.

Storage Files

File                     | Purpose
user_prefs.json          | Learned preferences (auto-created from template)
user_prefs.template.json | Default values
prefs_schema.json        | JSON schema definition

Priority

Final = merge(Root.tsx defaults < global < topic_patterns[type] < current instructions)
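The priority chain above is a left-to-right merge where later layers win. A minimal sketch with illustrative keys:

```python
# Merge preference layers in priority order: Root.tsx defaults <
# global < topic_patterns[type] < current instructions.
def merge_prefs(root_defaults, global_prefs, topic_pattern, current):
    final = {}
    for layer in (root_defaults, global_prefs, topic_pattern, current):
        final.update(layer or {})  # None layers are treated as empty
    return final

print(merge_prefs(
    {"theme": "white", "fps": 30},  # Root.tsx defaults
    {"theme": "dark"},              # global prefs
    None,                           # no topic pattern for this type
    {"subtitles": True},            # current instructions
))
```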

User Commands

Command              | Effect
"show preferences"   | Show current preferences
"reset preferences"  | Reset to defaults
"save as X default"  | Save to topic_patterns

Troubleshooting & Preferences

Full reference: Read references/troubleshooting.md on errors, preference questions, or BGM options.
