Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Sora Video Generation

v1.0.1

Generate videos from text prompts or reference images using OpenAI Sora.

✅ USE WHEN:
- Need AI-generated video from a text description
- Want image-to-video (animate a still image)
- Creating cinematic/artistic video content
- Need motion/animation without lip-sync

❌ DON'T USE WHEN:
- Need lip-sync (person speaking) → use veed-ugc or ugc-manual
- Just need image generation → use nano-banana-pro or morpheus
- Editing existing videos → use Remotion
- Need UGC-style talking head → use veed-ugc

INPUT: Text prompt + optional reference image
OUTPUT: MP4 video (various resolutions/durations)

by Paul de Lavallaz (@pauldelavallaz)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for pauldelavallaz/sora.

Prompt preview (Install & Setup):
Install the skill "Sora Video Generation" (pauldelavallaz/sora) from ClawHub.
Skill page: https://clawhub.ai/pauldelavallaz/sora
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Canonical install target

openclaw skills install pauldelavallaz/sora

ClawHub CLI

Package manager switcher

npx clawhub@latest install sora
Security Scan
VirusTotal: Suspicious (view report)
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
Name, description, and runtime behavior align: the script calls OpenAI's videos endpoints to create and download Sora-generated videos. However, the registry metadata lists no required environment variables, even though the SKILL.md and script clearly require an OpenAI API key (OPENAI_API_KEY or --api-key).
Instruction Scope
SKILL.md instructions and the script stay on-topic: they take a prompt and optional image, resize the image, call the OpenAI Videos API, poll for completion, and download the MP4. The instructions do not request unrelated files, system credentials, or unexpected external endpoints.
Install Mechanism
This is instruction-only with no install spec, so nothing is auto-downloaded by an installer. The included Python script, however, declares dependencies (openai, httpx, pillow) for which the skill provides no installation step. You will need to ensure those packages and Python >= 3.10 are installed before running.
Credentials
The script requires an OpenAI API key (OPENAI_API_KEY or --api-key) but the skill metadata declares no required env vars or primary credential. That omission is a mismatch and could lead to confusion; otherwise the script does not request unrelated credentials or broad system secrets.
Persistence & Privilege
The skill does not request permanent/autonomous privileges (always:false) and does not modify other skills or system-wide agent configuration. It writes output files and temporary image files only as needed for its function.
What to consider before installing
This skill is functionally coherent with its description (it calls OpenAI Sora to generate videos), but take these precautions before installing or running it:

- Provide only a scoped OpenAI API key, and be aware that this key will be used to create video jobs and download content; verify billing and key permissions. The script uses OPENAI_API_KEY (or --api-key), but the registry metadata omits it, so expect to supply it yourself.
- Install the required Python environment and libraries (Python >= 3.10, openai, httpx, pillow), or run in a controlled environment (virtualenv/venv) to avoid affecting system packages.
- The code saves temporary image files and the final MP4 to disk; run it in a directory where writing files is acceptable.
- Videos may expire on the provider side (~1 hour), so the script downloads them immediately; be mindful of any sensitive content sent to the API.
- The skill source and homepage are unknown. If you need higher assurance, ask the publisher for provenance (where the package came from, signed releases, or an official repo) before trusting it with an API key.

If you want to proceed, consider creating a dedicated OpenAI API key with limited scope or billing limits to reduce risk.

Like a lobster shell, security has layers — review code before you run it.

latest: vk975cczhbr46zy5vxs7q23xbbn8103xr
1.4k downloads
2 stars
1 version
Updated 14h ago
v1.0.1
MIT-0

Sora Video Generation

Generate videos using OpenAI's Sora API.

API Reference

Endpoint: POST https://api.openai.com/v1/videos

Parameters

| Parameter       | Values                                   | Description                              |
|-----------------|------------------------------------------|------------------------------------------|
| prompt          | string                                   | Text description of the video (required) |
| input_reference | file                                     | Optional image that guides generation    |
| model           | sora-2, sora-2-pro                       | Model to use (default: sora-2)           |
| seconds         | 4, 8, 12                                 | Video duration (default: 4)              |
| size            | 720x1280, 1280x720, 1024x1792, 1792x1024 | Output resolution                        |
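These parameters can be validated client-side before any network call. A minimal sketch follows; the field names mirror the table above, but the exact wire format the endpoint accepts is an assumption, and the helper name is illustrative:

```python
ALLOWED_SECONDS = {4, 8, 12}
ALLOWED_SIZES = {"720x1280", "1280x720", "1024x1792", "1792x1024"}
ALLOWED_MODELS = {"sora-2", "sora-2-pro"}

def build_video_request(prompt, model="sora-2", seconds=4, size="720x1280"):
    """Assemble and validate the fields for POST /v1/videos.

    Field names follow the parameter table; the exact request shape
    accepted by the API is an assumption here.
    """
    if not prompt:
        raise ValueError("prompt is required")
    if model not in ALLOWED_MODELS:
        raise ValueError(f"model must be one of {sorted(ALLOWED_MODELS)}")
    if seconds not in ALLOWED_SECONDS:
        raise ValueError(f"seconds must be one of {sorted(ALLOWED_SECONDS)}")
    if size not in ALLOWED_SIZES:
        raise ValueError(f"size must be one of {sorted(ALLOWED_SIZES)}")
    return {"prompt": prompt, "model": model, "seconds": seconds, "size": size}
```

Catching an invalid duration or resolution locally avoids burning a billable API call on a request the service would reject.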

Important Notes

  • Image dimensions must match the video size exactly; the script auto-resizes the reference image
  • Video generation typically takes 1-3 minutes
  • Videos expire after ~1 hour; download immediately (the script does this for you)
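Because generation takes 1-3 minutes, the script has to poll the job until it completes. A dependency-free sketch of that loop follows; the "status" values ("completed", "failed") are assumptions about the API's job object, and the status source is injected as a callable so the loop itself stays testable:

```python
import time

def wait_for_video(fetch_status, poll_interval=10, timeout=600):
    """Poll fetch_status() until the job finishes or the timeout expires.

    fetch_status is any callable returning the job as a dict, e.g. a
    wrapper around GET https://api.openai.com/v1/videos/{id}; the
    status field names used below are assumptions.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        job = fetch_status()
        status = job.get("status")
        if status == "completed":
            return job
        if status == "failed":
            raise RuntimeError(f"video job failed: {job}")
        time.sleep(poll_interval)
    raise TimeoutError("video job did not finish before the timeout")
```

The script's --poll-interval flag maps naturally onto the poll_interval argument here; a hard timeout keeps a stuck job from hanging the caller forever.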

Usage

# Basic text-to-video
uv run ~/.clawdbot/skills/sora/scripts/generate_video.py \
  --prompt "A cat playing piano" \
  --filename "output.mp4"

# Image-to-video (auto-resizes image)
uv run ~/.clawdbot/skills/sora/scripts/generate_video.py \
  --prompt "Slow dolly shot, steam rising, warm lighting" \
  --filename "output.mp4" \
  --input-image "reference.png" \
  --seconds 8 \
  --size 720x1280

# With specific model
uv run ~/.clawdbot/skills/sora/scripts/generate_video.py \
  --prompt "Cinematic scene" \
  --filename "output.mp4" \
  --model sora-2-pro \
  --seconds 12

Script Parameters

| Flag              | Description                  | Default  |
|-------------------|------------------------------|----------|
| --prompt, -p      | Video description (required) | -        |
| --filename, -f    | Output file path (required)  | -        |
| --input-image, -i | Reference image path         | None     |
| --seconds, -s     | Duration: 4, 8, or 12        | 8        |
| --size, -sz       | Resolution                   | 720x1280 |
| --model, -m       | sora-2 or sora-2-pro         | sora-2   |
| --api-key, -k     | OpenAI API key               | env var  |
| --poll-interval   | Check status every N seconds | 10       |

API Key

Set OPENAI_API_KEY environment variable or pass --api-key.
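That fallback can be sketched in a few lines, assuming the conventional precedence where an explicit --api-key flag wins over the environment variable (the helper name is illustrative, not taken from the script):

```python
import os

def resolve_api_key(cli_key=None):
    """Return the key from --api-key if given, else OPENAI_API_KEY."""
    key = cli_key or os.environ.get("OPENAI_API_KEY")
    if not key:
        raise SystemExit("Set OPENAI_API_KEY or pass --api-key")
    return key
```

Failing fast with a clear message here is preferable to letting the API return an opaque 401 mid-run.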

Prompt Engineering for Video

Good prompts include:

  1. Camera movement: dolly, pan, zoom, tracking shot
  2. Motion description: swirling, rising, falling, shifting
  3. Lighting: golden hour, candlelight, dramatic rim lighting
  4. Atmosphere: steam, particles, bokeh, haze
  5. Mood/style: cinematic, commercial, lifestyle, editorial
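These five ingredients can be composed mechanically. A small helper along these lines (illustrative, not part of the skill) joins the subject with whichever ingredients you supply:

```python
def build_prompt(subject, camera=None, motion=None, lighting=None,
                 atmosphere=None, style=None):
    """Join the subject with any of the five prompt ingredients,
    skipping the ones left out."""
    parts = [subject, camera, motion, lighting, atmosphere, style]
    return ", ".join(p for p in parts if p)
```

For example, build_prompt("gourmet dish", camera="slow dolly shot", lighting="soft morning sunlight", atmosphere="subtle steam rising", style="premium food commercial aesthetic") produces a prompt with the same structure as the food-commercial example below.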

Example prompts:

Food commercial:

Slow dolly shot of gourmet dish, soft morning sunlight streaming through window, 
subtle steam rising, warm cozy atmosphere, premium food commercial aesthetic

Lifestyle:

Golden hour light slowly shifting across mountains, gentle breeze rustling leaves, 
serene morning atmosphere, premium lifestyle commercial

Product shot:

Cinematic close-up, dramatic lighting with warm highlights, 
slow reveal, luxury commercial style

Workflow: Image → Video

  1. Generate image with Nano Banana Pro (or use existing)
  2. Pass image as --input-image to Sora
  3. Write prompt describing desired motion/atmosphere
  4. Script auto-resizes image to match video dimensions
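Step 4's auto-resize starts by parsing the --size string into pixel dimensions. A minimal sketch of that parsing step follows (the actual script uses Pillow for the resize itself, which is omitted here; the function name is illustrative):

```python
def parse_size(size):
    """Turn a size string like "720x1280" into (width, height) ints."""
    try:
        width, height = size.split("x")
        return int(width), int(height)
    except ValueError:
        raise ValueError(f"size must look like WIDTHxHEIGHT, got {size!r}")
```

With the target dimensions in hand, the reference image can be resized to match before upload, satisfying the requirement that image and video dimensions agree exactly.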

Output

  • Videos saved as MP4
  • Typical file size: 1.5-3MB for 8 seconds
  • Resolution matches --size parameter
