Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

FlowCutPro

v1.0.0

AI-powered cinematic video production using Google Veo 3 as the renderer and OpenClaw's configured LLM as the creative brain. Use when asked to create videos...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for windseeker1111/flowcutpro.

Prompt preview: Install & Setup
Install the skill "FlowCutPro" (windseeker1111/flowcutpro) from ClawHub.
Skill page: https://clawhub.ai/windseeker1111/flowcutpro
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install flowcutpro

ClawHub CLI

Package manager switcher

npx clawhub@latest install flowcutpro
Security Scan
VirusTotal
Suspicious
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The skill's stated purpose (drive Google Veo 3 renders using an LLM brain and stitch with ffmpeg) matches the code: it POSTs to Google's generative endpoint and stitches with ffmpeg. However, the registry declares no required environment variables while SKILL.md instructs users to set VEO_API_KEY, a clear mismatch between declared metadata and actual needs. The example and script also embed a default API key in code, which is unexpected for a production skill and not explained by the stated purpose.
Instruction Scope
SKILL.md instructs only that you provide a Veo/Gemini API key (and optionally store it in 1Password). The runtime code, besides calling Google Veo endpoints, attempts to use an Anthropic SDK (anthropic import and client.messages.create) to call a Claude model for shot planning. SKILL.md describes using 'OpenClaw's configured LLM' but does not document the need for Anthropic credentials or the Anthropic SDK — an instruction/scope mismatch. The code reads/writes local output files and invokes ffmpeg/ffprobe (expected), and it sends shot prompts and user content to external APIs (expected), but the lack of declaration for LLM credentials and the 1Password mention (not implemented in code) are inconsistent.
Install Mechanism
No install spec — this is an instruction + script skill. That is low risk from an installation perspective because nothing is downloaded or executed at install time beyond user-run scripts.
Credentials
SKILL.md and the scripts expect a VEO_API_KEY (a Gemini API key), but the registry metadata lists no required env variables. Worse, both scripts include a hard-coded API key fallback (a string beginning with 'AIzaSy...') embedded in the source. Hard-coded API keys are a red flag: either a leaked or test key was left in the repo, or the author used a real key to simplify examples. The code also tries to use the Anthropic SDK for LLM planning but does not declare or document Anthropic/LLM credentials in the registry. The number and placement of credential-related artifacts are disproportionate to the declared metadata and not transparent.
Persistence & Privilege
The skill is not always-enabled and does not request elevated or persistent platform privileges. It writes only to an output directory under the user's home and does not modify other skills or global agent settings.
What to consider before installing
Key points to consider before installing or running this skill:

  • Do not assume the embedded API key is safe: both scripts include a hard-coded string that looks like a Google API key (AIzaSy...). If that key is valid and belongs to you, rotate it immediately; if it belongs to someone else, it may be unauthorized or rate-limited.
  • The registry metadata omits required env vars. SKILL.md instructs you to set VEO_API_KEY, but the skill's registry lists no env requirements; expect to provide at least VEO_API_KEY before use.
  • The code attempts to call an Anthropic client for the LLM shot planner, but SKILL.md does not document Anthropic/LLM credentials or SDK installation. Ask the author which LLM integrations are required and how credentials are supplied.
  • The skill sends your prompt text and generated shot prompts to external services (Google Generative API and, possibly, Anthropic). Only run it if you are comfortable with that data leaving your machine.
  • If you want to use the skill: review and remove the hard-coded API key, confirm which LLM provider is used and supply your own credentials, and test in a sandboxed environment. If you cannot get clarity from the author, treat the skill as untrusted and avoid running it with real credentials or sensitive prompts.
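Before running anything, you can check the downloaded scripts for the embedded key yourself. A minimal sketch using Python's re module; the 39-character key length is the typical Google API key format, and the directory path you pass is up to you:

```python
import re
from pathlib import Path

# Google API keys typically look like "AIzaSy" followed by 33 word characters or dashes.
KEY_PATTERN = re.compile(r"AIzaSy[\w-]{33}")

def find_embedded_keys(skill_dir):
    """Return (file, key-prefix) pairs for anything resembling a hard-coded Google API key."""
    hits = []
    for path in Path(skill_dir).rglob("*.py"):
        for match in KEY_PATTERN.finditer(path.read_text(errors="ignore")):
            # Truncate the match so the scan itself never prints a full key.
            hits.append((str(path), match.group()[:10] + "..."))
    return hits
```

If this returns any hits, remove the fallback and supply your own key via the VEO_API_KEY environment variable instead.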

Like a lobster shell, security has layers — review code before you run it.

latest: vk97dpj0bfw3pjj5s3we1xgfqbd83vzry
90 downloads
0 stars
1 version
Updated 1 month ago
v1.0.0
MIT-0

FlowCutPro — AI Cinematic Video Production

Two-layer architecture:

  • Brain: OpenClaw's configured LLM — shot planning, prompt engineering, style consistency, quality evaluation
  • Renderer: Google Veo 3 (veo-3.1-generate-preview) — photorealistic, physics-accurate, cinematic camera moves, 9:16/16:9/1:1

The LLM does the creative work. Veo 3 renders. ffmpeg stitches. You get professional video from a casual prompt.


Pipeline

User concept
    ↓
LLM: Shot Planner — breaks concept into N shots with timing + camera moves
    ↓
LLM: Prompt Engineer — expands each shot into optimized Veo 3 cinematic prompt
    ↓
Veo 3: Render shots in batches of 5 (API concurrent limit)
    ↓
LLM: Quality Evaluator — reviews output thumbnails vs brief, flags misses
    ↓
Veo 3: Regenerate any failing shots (up to 2 retries)
    ↓
ffmpeg: Stitch clips with crossfades → final video
    ↓
Deliver
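The batch-and-retry loop in the middle of the pipeline can be sketched as follows. This is a minimal illustration, not the skill's actual code: render_shot and evaluate_shot are hypothetical callables standing in for the Veo render call and the LLM quality evaluator.

```python
from concurrent.futures import ThreadPoolExecutor

BATCH_SIZE = 5      # Veo 3 concurrent request limit
MAX_RETRIES = 2     # regeneration attempts per failing shot

def render_in_batches(shot_prompts, render_shot, evaluate_shot):
    """Render shots in batches of 5, re-rendering any that fail evaluation."""
    results = {}
    pending = list(enumerate(shot_prompts))
    for attempt in range(MAX_RETRIES + 1):
        failed = []
        # Process the pending shots BATCH_SIZE at a time.
        for start in range(0, len(pending), BATCH_SIZE):
            batch = pending[start:start + BATCH_SIZE]
            with ThreadPoolExecutor(max_workers=BATCH_SIZE) as pool:
                clips = list(pool.map(lambda p: render_shot(p[1]), batch))
            for (idx, prompt), clip in zip(batch, clips):
                if evaluate_shot(clip, prompt):
                    results[idx] = clip         # passed the quality check
                else:
                    failed.append((idx, prompt))  # queue for regeneration
        if not failed:
            break
        pending = failed
    return results
```

Shots that still fail after two retries are simply absent from the result, which is where the --only-shots re-render flag would come in.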

Setup

Veo 3 API Key

Get a Gemini API key from https://aistudio.google.com/apikeys

export VEO_API_KEY="your-key-here"

Or store in 1Password: op://flow/gemini-api-key/key
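Given the security scan's warning about a hard-coded fallback key, a safer pattern in your own wrapper is to fail fast when VEO_API_KEY is absent rather than falling back to anything embedded in source. A sketch, not the skill's actual code:

```python
import os
import sys

def load_veo_key():
    """Read the Veo/Gemini API key from the environment; never fall back to an embedded key."""
    key = os.environ.get("VEO_API_KEY")
    if not key:
        # Exit with a clear message instead of silently using a bundled key.
        sys.exit("VEO_API_KEY is not set. Get a key at https://aistudio.google.com/apikeys")
    return key
```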

Dependencies

pip install requests   # HTTP calls to the Veo API
pip install Pillow     # optional, for thumbnails
brew install ffmpeg

Usage

# Single concept → full stitched video
python3 ~/clawd/skills/flowcutpro/scripts/flowcutpro.py \
  --concept "A luxury hotel guest arriving at sunset in Puerto Rico" \
  --shots 6 \
  --aspect-ratio 9:16 \
  --output-dir ~/clawd/output/flowcutpro/

# Reel / TikTok
python3 ~/clawd/skills/flowcutpro/scripts/flowcutpro.py \
  --concept "Morning coffee ritual in a minimalist Tokyo apartment" \
  --shots 4 \
  --aspect-ratio 9:16 \
  --duration 5 \
  --output-dir ~/clawd/output/flowcutpro/

# Cinematic widescreen
python3 ~/clawd/skills/flowcutpro/scripts/flowcutpro.py \
  --concept "A founder's journey from garage to IPO day" \
  --shots 8 \
  --aspect-ratio 16:9 \
  --output-dir ~/clawd/output/flowcutpro/

# Dry run (inspect shot plan without rendering)
python3 ~/clawd/skills/flowcutpro/scripts/flowcutpro.py \
  --concept "Product launch event at a Silicon Valley rooftop" \
  --shots 5 \
  --dry-run

# Render specific shots only (re-render misses)
python3 ~/clawd/skills/flowcutpro/scripts/flowcutpro.py \
  --concept "..." \
  --shots 6 \
  --only-shots 3 5

Output

~/clawd/output/flowcutpro/
  20260329-120000-shot01-arrival.mp4
  20260329-120000-shot02-lobby.mp4
  ...
  20260329-120000-FINAL-9x16.mp4   ← stitched master

Prompt Engineering — Veo 3 Best Practices

FlowCutPro automatically applies these rules when generating prompts:

  1. Always specify aspect ratio at the start: "Cinematic vertical 9:16 portrait..."
  2. Describe camera movement explicitly: slow push-in, dolly, crane, static wide, tracking shot
  3. Specify lighting: golden hour, overcast, blue hour, candlelit, harsh noon
  4. Include motion direction: "camera slowly pushes forward", "slow pan left to right"
  5. Name the aesthetic: cinematic, film grain, photorealistic, documentary, editorial
  6. Negative elements: "no text overlays, no logos, no CGI artifacts"
  7. Duration awareness: 5–8s per shot is optimal; 5s for fast cuts, 8s for slow moody shots
  8. Style consistency prefix: Start every shot prompt with the same style fingerprint for visual coherence across cuts
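Rules 1, 6, and 8 compose naturally into a single prompt-builder. A sketch of how that assembly might look; the field names and ordering are illustrative, not the skill's actual internals:

```python
def build_veo_prompt(style_prefix, aspect_ratio, shot_description,
                     camera_move, lighting,
                     negatives="no text overlays, no logos, no CGI artifacts"):
    """Assemble a Veo 3 shot prompt following the best-practice ordering above."""
    orientation = "vertical" if aspect_ratio == "9:16" else "widescreen"
    parts = [
        f"Cinematic {orientation} {aspect_ratio}",  # rule 1: aspect ratio first
        style_prefix,                               # rule 8: shared style fingerprint
        shot_description,
        camera_move,                                # rule 2: explicit camera movement
        lighting,                                   # rule 3: lighting
        negatives,                                  # rule 6: negative elements
    ]
    # Join clauses into one sentence-per-clause prompt string.
    return ". ".join(p.strip().rstrip(".") for p in parts if p) + "."
```

Reusing the same style_prefix for every shot in a plan is what keeps cuts visually coherent.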

Examples

See examples/ folder:

  • hotel-commercial.py — 8-shot luxury hotel commercial (9:16)
  • product-launch.py — 6-shot product launch reel (9:16)
  • brand-story.py — 10-shot founder story (16:9)

Technical Details

  • Model: veo-3.1-generate-preview (Google Generative AI)
  • Endpoint: https://generativelanguage.googleapis.com/v1beta/models/veo-3.1-generate-preview:predictLongRunning
  • Aspect ratios: 9:16, 16:9, 1:1
  • Duration: 5–8 seconds per shot
  • Concurrent limit: 5 shots per batch (enforced automatically)
  • Stitch: ffmpeg xfade crossfade (0.5s transitions)
  • Output codec: H.264, CRF 18 (high quality)
  • Polling: 15s intervals, 10-minute timeout per shot
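The polling parameters above translate into a simple loop. A sketch under assumptions: check_operation is a hypothetical callable that fetches the long-running operation's status, and the injectable sleep/clock arguments exist only to make the loop testable, not to mirror the skill's code:

```python
import time

POLL_INTERVAL = 15   # seconds between status checks
TIMEOUT = 10 * 60    # 10-minute cap per shot

def wait_for_render(operation_name, check_operation,
                    sleep=time.sleep, clock=time.monotonic):
    """Poll a long-running Veo render until it completes or the timeout elapses."""
    deadline = clock() + TIMEOUT
    while clock() < deadline:
        op = check_operation(operation_name)
        if op.get("done"):
            return op
        sleep(POLL_INTERVAL)
    raise TimeoutError(f"Shot render {operation_name} exceeded {TIMEOUT}s")
```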

Limits & Notes

  • Veo 3 API is currently in preview — requires allowlist access via Google AI Studio
  • Each shot takes ~2–4 minutes to render
  • 10-shot video ≈ 20–40 minutes total (parallel batches of 5)
  • API key needs Gemini API enabled in Google Cloud Console
  • Free tier: limited daily quota; paid tier recommended for production use
