Core Speed Art

v1.0.1

Generate video, images, audio, and music using 40+ AI models via fal.ai. Use for video generation (Kling v3, Sora 2, Veo 3.1, LTX 2.3, Pixverse v5), image ge...

1 · 157 · 0 current · 1 all-time
by Jiwei Yuan (@jiweiyuan)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for jiweiyuan/corespeed-art.

Prompt Preview: Install & Setup
Install the skill "Core Speed Art" (jiweiyuan/corespeed-art) from ClawHub.
Skill page: https://clawhub.ai/jiweiyuan/corespeed-art
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required env vars: FAL_KEY
Required binaries: uv
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install corespeed-art

ClawHub CLI

Package manager switcher

npx clawhub@latest install corespeed-art
Security Scan
VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
The name and description (multi-model fal.ai media) match the included script and reference docs. The skill requires 'uv' (used to run the included Python script) and FAL_KEY (a fal.ai API key), both of which are reasonable and expected.
Instruction Scope
SKILL.md instructs the agent to run the provided script against fal.ai endpoints and to read the included model reference files; this is within scope. Minor privacy/operational notes: the script prints the first ~200 characters of the request args (which may expose prompts) and downloads whatever URLs the fal.ai API returns to disk (expected for saving outputs, but this could fetch arbitrary remote files if the service returns unexpected URLs).
Install Mechanism
The registry listing shows no external install spec, but SKILL.md includes an install hint to 'pip install uv'. The script uses PEP-723 inline metadata (fal-client dependency) so 'uv' will install fal-client at runtime; these are standard PyPI installs (not arbitrary archive downloads). This is moderate-risk compared with no install step but appears proportionate to the skill's needs.
Credentials
Only FAL_KEY is required. That credential is necessary for calling fal.ai and matches the described functionality. No unrelated secrets or config paths requested.
Persistence & Privilege
Skill is not always-enabled and does not request elevated platform privileges or modify other skills. Autonomous invocation is allowed (platform default) but not combined with other concerning flags.
Assessment
This skill appears to do exactly what it says: run a small Python client against fal.ai using your FAL_KEY. Before installing:

  • Confirm you want to give the skill access to a fal.ai API key; this allows use (and billing) on your account.
  • Installing the helper 'uv' will pull packages from PyPI (uv and fal-client). Verify you trust those packages or install them in a controlled environment.
  • The script will download whatever URLs the API returns and prints parts of the request JSON, so your prompt may be logged to stdout; if you have sensitive prompts or inputs, avoid sending them.
  • For extra assurance, review the included scripts/fal.py and the listed references (they are all present and readable), or run the script in an isolated environment first.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🎬 Clawdis
Bins: uv
Env: FAL_KEY
latest: vk9709ycksqqvxmk1t5zjg8qxbd8376nt
157 downloads · 1 star · 6 versions
Updated 1mo ago
v1.0.1
MIT-0

Corespeed Art — Multi-Model AI Media via fal.ai

Auth: Set FAL_KEY with your fal.ai API key (get one at https://fal.ai/dashboard/keys).
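As a quick pre-flight check (illustrative only, not part of the skill), you can verify the credential is present before running anything:

```python
import os

def check_fal_key(env: dict) -> str:
    """Illustrative pre-flight check: report whether the FAL_KEY
    credential the script needs is present in the environment."""
    if env.get("FAL_KEY"):
        return "FAL_KEY is set"
    return "FAL_KEY missing: create a key at https://fal.ai/dashboard/keys"

print(check_fal_key(os.environ))
```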

Workflow

  1. Pick a model from the tables below
  2. Read its reference file to get the exact endpoint and parameters
  3. Run the command with the endpoint and JSON parameters

Usage

uv run {baseDir}/scripts/fal.py ENDPOINT --json '{"param":"value"}' -f output.ext [-i input.ext]
  • ENDPOINT — the fal.ai model path from the reference file (e.g. fal-ai/nano-banana-2)
  • --json — model parameters as JSON object
  • -f — output filename
  • -i — input file(s) to upload (repeat for multiple), auto-injected as image_url/image_urls/start_image_url/video_url
  • --audio — audio input file (for lipsync)
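Quoting the --json argument is the main command-line pitfall. One way around it (a sketch; the endpoint, prompt, and filename below are hypothetical examples, not values from the reference files) is to build the invocation programmatically and let json and shlex handle the quoting:

```python
import json
import shlex

# Hypothetical example values; real endpoints and parameters come from
# the reference files (e.g. flux.md).
endpoint = "fal-ai/flux/schnell"
params = {"prompt": "a lighthouse at dusk, golden hour"}
outfile = "2025-01-01-12-00-00-lighthouse.png"

# Assemble the argv list, serializing the parameters to a JSON string.
cmd = ["uv", "run", "scripts/fal.py", endpoint,
       "--json", json.dumps(params),
       "-f", outfile]

# shlex.join produces a correctly quoted shell command line.
print(shlex.join(cmd))
```

Round-tripping through shlex guarantees the JSON object reaches the script as a single argument, whatever spaces or quotes the prompt contains.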

Image Generation

| Model | Best For | Reference |
| --- | --- | --- |
| Nano Banana 2 | Pro quality, web search, thinking | Read nanobanana.md |
| FLUX 2 Pro | Photorealistic, zero-config | Read flux.md |
| FLUX Schnell | ⚡ Fastest iteration | Read flux.md |
| FLUX Pro v1.1 | Accelerated, commercial use | Read flux.md |
| FLUX.1 Dev | 12B params, fine-tuning friendly | Read flux.md |
| GPT Image 1.5 | Transparent bg, instruction following | Read gpt.md |
| Qwen Image 2 Pro | Chinese+English, typography, native 2K | Read qwen.md |
| Recraft V4 Pro | Design/marketing, color control | Read recraft.md |
| Seedream 5 Lite | Multi-image editing, reasoning | Read seedream.md |

Video Generation

| Model | Best For | Reference |
| --- | --- | --- |
| Kling v3 Pro I2V | Best I2V, multi-shot, audio, 3–15s | Read kling.md |
| Sora 2 T2V | Long video up to 20s, characters | Read sora.md |
| Sora 2 I2V | Image→video with Sora | Read sora.md |
| Veo 3.1 T2V | Cinematic + native audio/dialogue | Read veo.md |
| Veo 3.1 I2V | Image→video with audio | Read veo.md |
| LTX 2.3 T2V Fast | ⚡ Fast, up to 2160p/20s, open source | Read ltx.md |
| LTX 2.3 I2V | Image→video, start+end frame | Read ltx.md |
| Pixverse v5 I2V | Anime, 3D, clay, cyberpunk styles | Read pixverse.md |

Audio / TTS

| Model | Best For | Reference |
| --- | --- | --- |
| MiniMax Speech-02 HD | 30+ languages, loudness normalization | Read minimax-speech.md |

Music & Sound Effects

| Model | Best For | Reference |
| --- | --- | --- |
| Beatoven Music | AI music, up to 90s | Read beatoven-music.md |

Utilities

| Tool | Best For | Reference |
| --- | --- | --- |
| Topaz Upscale | AI image/video upscale 2x–4x | Read topaz.md |
| BRIA RMBG | Professional background removal | Read bria-rmbg.md |
| Sync Lipsync | Audio-driven lip sync on video | Read sync-lipsync.md |

Notes

  • No manual Python setup required. The script uses PEP 723 inline metadata. uv run automatically creates an isolated virtual environment and installs the fal-client dependency on first run.
  • fal.ai uses a queue system — the script polls until generation completes.
  • Video generation can take 30s–3min.
  • Use timestamps in filenames: yyyy-mm-dd-hh-mm-ss-name.ext.
  • Script prints MEDIA: line for OpenClaw to auto-attach.
  • Do not read generated media back; report the saved path only.
