Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Ai Video Gen 1.0.0

v1.0.0

End-to-end AI video generation - create videos from text prompts using image generation, video synthesis, voice-over, and editing. Supports OpenAI DALL-E, Re...

0 stars · 125 downloads · 1 version (current) · 1 all-time

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for matttgx/ai-video-gen-1-0-0.

Prompt preview: Install & Setup
Install the skill "Ai Video Gen 1.0.0" (matttgx/ai-video-gen-1-0-0) from ClawHub.
Skill page: https://clawhub.ai/matttgx/ai-video-gen-1-0-0
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install ai-video-gen-1-0-0

ClawHub CLI


npx clawhub@latest install ai-video-gen-1-0-0
Security Scan
VirusTotal
Benign
OpenClaw
Suspicious
medium confidence
⚠ Purpose & Capability
The skill's name, description, and included Python scripts match an AI video-generation purpose (image generation, video synthesis via LumaAI, TTS, FFmpeg). However, the registry metadata claims no required environment variables, while both SKILL.md and the code require multiple provider API keys (OPENAI_API_KEY, REPLICATE_API_TOKEN, LUMAAI_API_KEY, RUNWAY_API_KEY, ELEVENLABS_API_KEY). This mismatch between what the skill declares in metadata and what it actually uses is an inconsistency worth flagging.
⚠ Instruction Scope
SKILL.md and the README instruct users to populate .env.example, and they refer to extra scripts and folders (multi_scene.py, edit_video.py, examples/) that are not present in the file manifest. The runtime instructions direct the agent to use provider APIs and to download assets from returned URLs (expected for this purpose), but they also assume platform-specific FFmpeg install steps (winget) and a .env.example that doesn't exist. These are sloppy and could mislead users.
Install Mechanism
There is no install spec in the registry (instruction-only style). The project includes a requirements.txt, and SKILL.md suggests pip install openai requests pillow replicate python-dotenv, which is consistent. There are no downloads from unknown hosts and no archive extraction in the install instructions. The only mild concern is the reliance on an external FFmpeg binary (assumed to be installed via winget), which is platform-specific and not enforced by an install script.
⚠ Credentials
The code legitimately uses multiple API keys relevant to the stated functionality (OpenAI, Replicate, LumaAI, Runway, ElevenLabs), and that use is proportionate to an end-to-end video generator. However, the registry lists no required env vars, and the ownerId in the package's _meta.json differs from the ownerId presented by the registry. This inconsistency makes it unclear what credentials will actually be needed and who published the package.
Persistence & Privilege
The skill does not request permanent 'always' inclusion, does not modify other skills' configuration, and does not attempt to run background services or persist credentials. It runs scripts, uses subprocess to call FFmpeg, and makes network calls to provider APIs, all of which is expected for this functionality.
What to consider before installing
This package appears to implement the advertised video-generation features, but its metadata and docs are inconsistent in several ways. Before installing or running it:

  1. Verify the publisher/source. The ownerId in _meta.json doesn't match the registry metadata and there is no homepage; prefer code from a known repo.
  2. Expect to provide API keys for OpenAI/Replicate/LumaAI/Runway/ElevenLabs; the registry incorrectly reports no required env vars. Create scoped, limited API keys if possible and avoid using high-privilege keys.
  3. Inspect the missing references (.env.example, multi_scene.py, examples/) and confirm the scripts you need are present; the docs reference files that are not in the bundle.
  4. Run the code in an isolated environment (container or VM), because it executes subprocesses (ffmpeg) and makes network calls to third-party APIs.
  5. If you need to trust this skill in production, ask the publisher to fix the metadata (declare required env vars, provide a homepage/repo) and to supply a reproducible install/setup script.

If you want, provide the registry metadata or publisher contact and I can reassess with that context.
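Point (3) can be partially automated: a quick scan of the bundled scripts shows which environment variables the code actually reads, which you can then compare against what the registry declares. A minimal sketch, assuming the scripts read keys through the usual `os.environ` / `os.getenv` idioms:

```python
import re
from pathlib import Path

# Matches os.environ["KEY"], os.environ.get("KEY"), and os.getenv("KEY")
ENV_PATTERN = re.compile(
    r"os\.(?:environ(?:\.get)?|getenv)\s*[\(\[]\s*['\"]([A-Z0-9_]+)['\"]"
)

def env_vars_referenced(source: str) -> set[str]:
    """Return the set of environment-variable names a script reads."""
    return set(ENV_PATTERN.findall(source))

def scan_skill(skill_dir: str) -> set[str]:
    """Scan every .py file under a skill directory for env-var reads."""
    found: set[str] = set()
    for path in Path(skill_dir).rglob("*.py"):
        found |= env_vars_referenced(path.read_text(errors="ignore"))
    return found
```

Comparing `scan_skill(...)` against the registry's declared env vars makes the kind of mismatch described above immediately visible.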

Like a lobster shell, security has layers — review code before you run it.

latest: vk97fkwy5wss3sdvz2t9dab7h9183bjp5
125 downloads
0 stars
1 version
Updated 1mo ago
v1.0.0
MIT-0

AI Video Generation Skill

Generate complete videos from text descriptions using AI.

Capabilities

  1. Image Generation - DALL-E 3, Stable Diffusion, Flux
  2. Video Generation - LumaAI, Runway, Replicate models
  3. Voice-over - OpenAI TTS, ElevenLabs
  4. Video Editing - FFmpeg assembly, transitions, overlays

Quick Start

# Generate a complete video
python skills/ai-video-gen/generate_video.py --prompt "A sunset over mountains" --output sunset.mp4

# Just images to video
python skills/ai-video-gen/images_to_video.py --images img1.png img2.png --output result.mp4

# Add voiceover
python skills/ai-video-gen/add_voiceover.py --video input.mp4 --text "Your narration" --output final.mp4

Setup

Required API Keys

Add to your environment or .env file:

# Image Generation (pick one)
OPENAI_API_KEY=sk-...              # DALL-E 3
REPLICATE_API_TOKEN=r8_...         # Stable Diffusion, Flux

# Video Generation (pick one)
LUMAAI_API_KEY=luma_...           # LumaAI Dream Machine
RUNWAY_API_KEY=...                # Runway ML
REPLICATE_API_TOKEN=r8_...        # Multiple models

# Voice (optional)
OPENAI_API_KEY=sk-...             # OpenAI TTS
ELEVENLABS_API_KEY=...            # ElevenLabs

# Or use FREE local options (no API needed)
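Since at least one image-generation key and one video-generation key must be set, a fail-fast check at startup saves a wasted run. A minimal sketch, mirroring the "pick one" groups above (adapt the key names if your provider mix differs):

```python
import os

# At least one key per group must be set; names mirror the groups above.
KEY_GROUPS = {
    "image generation": ["OPENAI_API_KEY", "REPLICATE_API_TOKEN"],
    "video generation": ["LUMAAI_API_KEY", "RUNWAY_API_KEY", "REPLICATE_API_TOKEN"],
}

def missing_groups(env: dict) -> list:
    """Return the groups for which no provider key is set."""
    return [
        group for group, keys in KEY_GROUPS.items()
        if not any(env.get(k) for k in keys)
    ]

if __name__ == "__main__":
    for group in missing_groups(dict(os.environ)):
        print(f"No API key set for {group}: set one of {KEY_GROUPS[group]}")
```

If you use a .env file, load it first with python-dotenv's `load_dotenv()` (the package is already in the dependency list) so the keys appear in `os.environ` before the check runs.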

Install Dependencies

pip install openai requests pillow replicate python-dotenv

FFmpeg

FFmpeg must be available on your PATH. On Windows you can install it with winget (e.g. winget install ffmpeg); on macOS, brew install ffmpeg; on Linux, use your distro's package manager (e.g. apt install ffmpeg).
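Because the scripts shell out to FFmpeg via subprocess, it's worth confirming the binary is actually reachable before starting a long pipeline. A minimal sketch using the standard library:

```python
import shutil

def require_binary(name: str) -> str:
    """Return the full path to an executable, or raise if it's not on PATH."""
    path = shutil.which(name)
    if path is None:
        raise FileNotFoundError(
            f"{name} not found on PATH; install it before running the pipeline"
        )
    return path

# Example: require_binary("ffmpeg") returns the resolved path when installed.
```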

Usage Examples

1. Text to Video (Full Pipeline)

python skills/ai-video-gen/generate_video.py \
  --prompt "A futuristic city at night with flying cars" \
  --duration 5 \
  --voiceover "Welcome to the future" \
  --output future_city.mp4

2. Multiple Scenes

python skills/ai-video-gen/multi_scene.py \
  --scenes "Morning sunrise" "Busy city street" "Peaceful night" \
  --duration 3 \
  --output day_in_life.mp4

3. Image Sequence to Video

python skills/ai-video-gen/images_to_video.py \
  --images frame1.png frame2.png frame3.png \
  --fps 24 \
  --output animation.mp4
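Under the hood, an image-sequence-to-video step like the one above typically reduces to a single FFmpeg invocation. A hypothetical sketch of how such a command might be assembled; the flags shown are common FFmpeg idioms, not necessarily what images_to_video.py actually does:

```python
def build_ffmpeg_cmd(pattern: str, fps: int, output: str) -> list:
    """Assemble an ffmpeg argv for turning numbered frames into a video."""
    return [
        "ffmpeg",
        "-y",                    # overwrite output without prompting
        "-framerate", str(fps),  # input frame rate for the image sequence
        "-i", pattern,           # e.g. "frame%d.png"
        "-c:v", "libx264",       # widely compatible H.264 encoding
        "-pix_fmt", "yuv420p",   # pixel format required by many players
        output,
    ]

# To run it: subprocess.run(build_ffmpeg_cmd("frame%d.png", 24, "animation.mp4"), check=True)
```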

Workflow Options

Budget Mode (FREE)

  • Image: Stable Diffusion (local or free API)
  • Video: Open source models
  • Voice: OpenAI TTS (cheap) or free TTS
  • Edit: FFmpeg

Quality Mode (Paid)

  • Image: DALL-E 3 or Midjourney
  • Video: Runway Gen-3 or LumaAI
  • Voice: ElevenLabs
  • Edit: FFmpeg + effects

Scripts Reference

  • generate_video.py - Main end-to-end generator
  • images_to_video.py - Convert image sequence to video
  • add_voiceover.py - Add narration to existing video
  • multi_scene.py - Create multi-scene videos
  • edit_video.py - Apply effects, transitions, overlays

API Cost Estimates

  • DALL-E 3: ~$0.04-0.08 per image
  • Replicate: ~$0.01-0.10 per generation
  • LumaAI: $0-0.50 per 5sec (free tier available)
  • Runway: ~$0.05 per second
  • OpenAI TTS: ~$0.015 per 1K characters
  • ElevenLabs: ~$0.30 per 1K characters (better quality)
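The estimates above combine into a rough per-video budget. A back-of-the-envelope sketch using midpoints of the ranges listed; the rates are copied from the table above and will drift as providers reprice:

```python
# Approximate per-unit rates (USD), taken from the estimates above.
RATES = {
    "dalle3_image": 0.06,         # ~$0.04-0.08 per image, midpoint
    "runway_second": 0.05,        # ~$0.05 per second of video
    "openai_tts_1k_chars": 0.015, # ~$0.015 per 1K characters
}

def estimate_cost(images: int, video_seconds: int, narration_chars: int) -> float:
    """Rough USD cost for one DALL-E + Runway + OpenAI TTS run."""
    return round(
        images * RATES["dalle3_image"]
        + video_seconds * RATES["runway_second"]
        + (narration_chars / 1000) * RATES["openai_tts_1k_chars"],
        4,
    )

# A 5-second clip from one image with a 100-character voice-over:
# estimate_cost(1, 5, 100) -> 0.3115
```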

Examples

See examples/ folder for sample outputs and prompts.
