AI Video Generation

Create AI videos with Sora 2, Veo 3, Seedance, Runway, and modern APIs using reliable prompt and rendering workflows.

by Iván (@ivangdavila)
Security scan: VirusTotal and OpenClaw both report this skill as benign (OpenClaw with high confidence).
Purpose & Capability
The name/description (AI video generation) matches the content: provider-specific guidance, model routing, async patterns, and local memory/history. The required config path (~/video-generation/) is appropriate for persisting preferences and run logs. There is no unrelated credential or binary requirement.
Instruction Scope
Runtime instructions are instruction-only (no code is installed). They direct the agent to read and write only the declared local config files, check for the presence of environment variables (without printing or requesting secrets), and send prompts and media to the listed provider endpoints. The skill does not instruct reading other system files or exfiltrating additional data. The claim in SKILL.md that no other data is sent externally is a trust assertion the user should accept before installing.
Install Mechanism
There is no install spec and no code files to write or execute on install (instruction-only). This minimizes install-time risk; nothing is downloaded or extracted by the skill itself.
Credentials
The skill lists many optional provider credentials (OPENAI_API_KEY, GOOGLE_CLOUD_PROJECT, RUNWAY_API_KEY, LUMA_API_KEY, FAL_KEY, REPLICATE_API_TOKEN, VIDU_API_KEY, TENCENTCLOUD_SECRET_ID/KEY). These are proportionate to supporting many video providers but are optional — the skill only checks for presence (per setup.md) rather than asking for or storing secrets. Users should avoid putting long-lived master keys in shared environments and use per-project tokens where possible.
Persistence & Privilege
The skill persists only to its own config directory (~/video-generation/) and optionally history.md. It does not request always:true, does not alter other skills, and does not claim system-wide configuration changes. Memory/history file behavior is explicit and opt-in.
Assessment
This skill appears coherent for multi-provider AI video workflows, but remember: it will send your prompts and any reference media to third‑party providers listed in the doc. Before installing: (1) confirm you trust those providers with the content you will upload; (2) keep API keys out of chat and prefer short-lived or per-project tokens; (3) inspect and limit permissions for TENCENTCLOUD_* or other cloud keys if you use them; (4) protect the ~/video-generation/ folder (it stores preferences and optional history); and (5) monitor cost and output URLs (signed URLs expire). If you want extra assurance, request a copy of any runtime code that will run network calls or run the skill in an isolated environment first.


Current version: v1.0.1 (latest: vk97awwyq52cr14pyktq96cvfcn82bv3a)

License

MIT-0
Free to use, modify, and redistribute. No attribution required.

Runtime requirements

🎬 Clawdis
OS: Linux · macOS · Windows
Config: ~/video-generation/

SKILL.md

Setup

On first use, read setup.md.

When to Use

User needs to generate, edit, or scale AI videos with current models and APIs. Use this skill to choose the right current model stack, write stronger motion prompts, and run reliable async video pipelines.

Architecture

User preferences persist in ~/video-generation/. See memory-template.md for setup.

~/video-generation/
├── memory.md      # Preferred providers, model routing, reusable shot recipes
└── history.md     # Optional run log for jobs, costs, and outputs
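
The layout above can be bootstrapped with a short sketch. The `ensure_config` helper and the seed headings it writes are assumptions for illustration, not part of the skill itself:

```python
from pathlib import Path

def ensure_config(base: Path) -> dict[str, Path]:
    """Create the skill's config directory and seed its files if missing."""
    base.mkdir(parents=True, exist_ok=True)
    files = {
        "memory": base / "memory.md",    # preferences and shot recipes
        "history": base / "history.md",  # optional run log
    }
    for name, path in files.items():
        if not path.exists():
            path.write_text(f"# {name}\n", encoding="utf-8")
    return files
```

Running it twice is safe: existing files are left untouched, so user-edited preferences survive.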

Quick Reference

| Topic | File |
| --- | --- |
| Initial setup | setup.md |
| Memory template | memory-template.md |
| Migration guide | migration.md |
| Model snapshot | benchmarks.md |
| Async API patterns | api-patterns.md |
| OpenAI Sora 2 | openai-sora.md |
| Google Veo 3.x | google-veo.md |
| Runway Gen-4 | runway.md |
| Luma Ray | luma.md |
| ByteDance Seedance | seedance.md |
| Kling | kling.md |
| Vidu | vidu.md |
| Pika via Fal | pika.md |
| MiniMax Hailuo | minimax-hailuo.md |
| Replicate routing | replicate.md |
| Open-source local models | open-source-video.md |
| Distribution playbook | promotion.md |

Core Rules

1. Resolve model aliases before API calls

Map community names to real API model IDs first. Examples: sora-2, sora-2-pro, veo-3.0-generate-001, gen4_turbo, gen4_aleph.
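
A minimal alias table makes this concrete. The mapping keys and the `resolve_model` helper are illustrative assumptions; the model IDs are the examples listed above:

```python
# Community nicknames mapped to real API model IDs (IDs from the examples above).
MODEL_ALIASES = {
    "sora 2": "sora-2",
    "sora 2 pro": "sora-2-pro",
    "veo 3": "veo-3.0-generate-001",
    "gen-4 turbo": "gen4_turbo",
    "gen-4 aleph": "gen4_aleph",
}

def resolve_model(name: str) -> str:
    """Return the real API model ID; pass unknown names through unchanged."""
    return MODEL_ALIASES.get(name.strip().lower(), name)
```

Passing unknown names through unchanged lets already-correct IDs flow to the API untouched.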

2. Route by task, not brand preference

| Task | First choice | Backup |
| --- | --- | --- |
| Premium prompt-only generation | sora-2-pro | veo-3.1-generate-001 |
| Fast drafts at lower cost | veo-3.1-fast-generate-001 | gen4_turbo |
| Long-form cinematic shots | gen4_aleph | ray-2 |
| Strong image-to-video control | veo-3.0-generate-001 | gen4_turbo |
| Multi-shot narrative consistency | Seedance family | hailuo-2.3 |
| Local privacy-first workflows | Wan2.2 / HunyuanVideo | CogVideoX |
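
The routing table can be expressed as data. The task keys and the `pick_model` helper are hypothetical names; the model pairs come from the table above:

```python
# (first_choice, backup) per task, taken from the routing table.
ROUTES = {
    "premium": ("sora-2-pro", "veo-3.1-generate-001"),
    "fast-draft": ("veo-3.1-fast-generate-001", "gen4_turbo"),
    "long-cinematic": ("gen4_aleph", "ray-2"),
    "image-to-video": ("veo-3.0-generate-001", "gen4_turbo"),
}

def pick_model(task: str, blocked: frozenset = frozenset()) -> str:
    """Route by task, falling back to the backup if the first choice is blocked."""
    first, backup = ROUTES[task]
    return backup if first in blocked else first
```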

3. Draft cheap, finish expensive

Start with low duration and lower tier, validate motion and composition, then rerender winners with premium models or longer durations.
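
One way to sketch this two-pass flow, using illustrative parameter names rather than any provider's real API:

```python
# Draft and final tiers are assumptions picked from the routing table above.
DRAFT = {"model": "veo-3.1-fast-generate-001", "duration_s": 4, "resolution": "720p"}
FINAL = {"model": "sora-2-pro", "duration_s": 8, "resolution": "1080p"}

def plan_renders(prompts: list[str], approved: set[str]) -> list[dict]:
    """Queue cheap drafts for every prompt, premium re-renders only for winners."""
    jobs = [{"prompt": p, **DRAFT} for p in prompts]
    jobs += [{"prompt": p, **FINAL} for p in prompts if p in approved]
    return jobs
```

Only shots the user approves after reviewing the drafts get the expensive second pass.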

4. Design prompts as shot instructions

Always include subject, action, camera motion, lens style, lighting, and scene timing. For references and start/end frames, keep continuity constraints explicit.
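
A shot-instruction prompt can be assembled from exactly those fields. The `Shot` dataclass and the rendered string format are assumptions, not a provider requirement:

```python
from dataclasses import dataclass

@dataclass
class Shot:
    """One shot instruction: the six fields the rule above calls for."""
    subject: str
    action: str
    camera: str
    lens: str
    lighting: str
    timing: str

    def to_prompt(self) -> str:
        return (f"{self.subject} {self.action}. Camera: {self.camera}. "
                f"Lens: {self.lens}. Lighting: {self.lighting}. Timing: {self.timing}.")
```

Structuring prompts this way makes it obvious when a field is missing before credits are spent.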

5. Assume async and failure by default

Every provider pipeline must support queued jobs, polling/backoff, retries, cancellation, and signed-URL download before expiry.
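
A generic polling loop with exponential backoff might look like the sketch below. `fetch_status` stands in for any provider's job-status call and is injected so the sketch stays provider-agnostic; the status strings are assumptions:

```python
import time

def wait_for_job(fetch_status, *, base_delay=1.0, max_delay=30.0,
                 max_polls=60, sleep=time.sleep):
    """Poll a queued job with exponential backoff and a hard retry cap."""
    delay = base_delay
    for _ in range(max_polls):
        status = fetch_status()
        if status == "succeeded":
            return status
        if status == "failed":
            raise RuntimeError("job failed; consider the fallback chain")
        sleep(delay)
        delay = min(delay * 2, max_delay)  # backoff with a ceiling
    raise TimeoutError("polling budget exhausted; cancel the job server-side")
```

On success, download the output immediately: signed URLs expire.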

6. Keep a fallback chain

If the preferred model is blocked or overloaded:

  1. Same provider, lower tier
  2. Equivalent cross-provider model
  3. Open model / local run
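
The chain can be walked with a small helper; `submit` is a hypothetical callable wrapping a provider call that raises when the model is blocked or overloaded:

```python
def generate_with_fallback(submit, chain: list[str]):
    """Try each model in order; return (model, output) from the first success."""
    errors = {}
    for model in chain:
        try:
            return model, submit(model)
        except Exception as exc:  # catch provider-specific errors in real code
            errors[model] = exc
    raise RuntimeError(f"all models failed: {list(errors)}")
```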

Common Traps

  • Using nickname-only model labels in code -> avoidable API failures
  • Pushing 8-10 second generations before validating a 3-5 second draft -> wasted credits
  • Cropping after generation instead of generating native ratio -> lower composition quality
  • Ignoring prompt enhancement toggles -> tone drift across providers
  • Reusing expired output URLs -> broken export workflows
  • Treating all providers as synchronous -> stalled jobs and bad timeout handling

External Endpoints

| Provider | Endpoint | Data sent | Purpose |
| --- | --- | --- | --- |
| OpenAI | api.openai.com | Prompt text, optional input images/video refs | Sora 2 video generation |
| Google Vertex AI | aiplatform.googleapis.com | Prompt text, optional image input, generation params | Veo 3.x generation |
| Runway | api.dev.runwayml.com | Prompt text, optional input media | Gen-4 generation and image-to-video |
| Luma | api.lumalabs.ai | Prompt text, optional keyframes/start-end images | Ray generation |
| Fal | queue.fal.run | Prompt text, optional input media | Pika and Hailuo hosted APIs |
| Replicate | api.replicate.com | Prompt text, optional input media | Multi-model routing and experimentation |
| Vidu | api.vidu.com | Prompt text, optional start/end/reference images | Vidu text/image/reference video APIs |
| Tencent MPS | mps.tencentcloudapi.com | Prompt text and generation parameters | Unified AIGC video task APIs |

No other data is sent externally.

Security & Privacy

Data that leaves your machine:

  • Prompt text
  • Optional reference images or clips
  • Requested rendering parameters (duration, resolution, aspect ratio)

Data that stays local:

  • Provider preferences in ~/video-generation/memory.md
  • Optional local job history in ~/video-generation/history.md

This skill does NOT:

  • Store API keys in project files
  • Upload media outside requested provider calls
  • Delete local assets unless the user asks

Trust

This skill can send prompts and media references to third-party AI providers. Only install if you trust those providers with your content.

Related Skills

Install with clawhub install <slug> if user confirms:

  • image-generation - Build still concepts and keyframes before video generation
  • image-edit - Prepare clean references, masks, and style frames
  • video-edit - Post-process generated clips and final exports
  • video-captions - Add subtitle and text overlay workflows
  • ffmpeg - Compose, transcode, and package production outputs

Feedback

  • If useful: clawhub star video-generation
  • Stay updated: clawhub sync

Files

18 total
