Install

openclaw skills install ultimate-ai-media-generator

A powerful skill for generating AI images and videos using the world's leading generative models, including Sora 2, Kling 2.6, Seedance 2.0, Nano Banana Pro, Veo 3.1, and more.
Supports all major generation workflows: text-to-image, text-to-video, image-to-image, image-to-video, and video-to-video. The skill handles credit estimation, task creation, status polling, and automatic media output saving.
This skill automatically optimizes prompts for specific use cases to achieve the best results.
The included workflow templates (workflows/ folder) provide ready-to-use prompts and best practices for each use case.
The runtime uses a layered Python architecture:
- scripts/cyberbara_api.py: thin entrypoint only
- src/cyberbara_cli/cli.py: command parsing and command routing
- src/cyberbara_cli/usecases/: flow orchestration (generation + polling)
- src/cyberbara_cli/policies/: safety and policy rules (credits quote + formal confirmation)
- src/cyberbara_cli/gateways/: raw CyberBara API client
- src/cyberbara_cli/config.py: API key discovery and local persistence
- src/cyberbara_cli/constants.py: fixed base URL and shared constants

When extending behavior, keep business rules in usecases/ or policies/, not in scripts/.
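As an illustration of this layering, a use case orchestrates the policy check and the gateway call while the entrypoint stays thin. The function names and stub bodies below are hypothetical simplifications, not the actual module contents:

```python
# Hypothetical sketch of the layered flow: policies gate the action,
# gateways talk to the API, and usecases orchestrate both.

def quote_policy(estimated_credits, confirmed):
    """Policy layer: refuse to spend credits without formal confirmation."""
    if not confirmed:
        raise PermissionError(
            f"Estimated cost is {estimated_credits} credits; confirmation required"
        )

def create_task_gateway(payload):
    """Gateway layer: the raw API call (stubbed here for illustration)."""
    return {"task_id": "demo-123", "status": "queued", "payload": payload}

def generate_usecase(payload, estimated_credits, confirmed):
    """Usecase layer: the business rule lives here, not in the entrypoint script."""
    quote_policy(estimated_credits, confirmed)
    return create_task_gateway(payload)
```

Keeping the confirmation rule in a policy function means every command path (single request, batch, video) enforces it the same way.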
The script uses a fixed base URL:
https://cyberbara.com
API key lookup order:
1. --api-key
2. CYBERBARA_API_KEY
3. ~/.config/cyberbara/api_key

Recommended one-time setup command:
python3 scripts/cyberbara_api.py setup-api-key "<api-key>"
Or save from environment variable:
export CYBERBARA_API_KEY="<api-key>"
python3 scripts/cyberbara_api.py setup-api-key --from-env
If the API key is missing, the script immediately asks for it and shows where to get one:
https://cyberbara.com/settings/apikeys
When you provide the API key via --api-key or the interactive prompt, it is saved to:
~/.config/cyberbara/api_key
Future runs reuse this cached key, so users do not need to provide it every time.
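The lookup order above can be sketched as follows. This is a simplified illustration, not the actual implementation in src/cyberbara_cli/config.py:

```python
import os
from pathlib import Path

# Cache location matching the documented lookup order.
CACHE_PATH = Path.home() / ".config" / "cyberbara" / "api_key"

def discover_api_key(cli_value=None):
    """Return the first key found: --api-key flag, then env var, then cache file."""
    if cli_value:
        return cli_value
    env_value = os.environ.get("CYBERBARA_API_KEY")
    if env_value:
        return env_value
    if CACHE_PATH.exists():
        return CACHE_PATH.read_text().strip()
    return None
```

The flag always wins, so a one-off --api-key can override a cached key without touching the file.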
Reference commands:
# 1) List video models
python3 scripts/cyberbara_api.py models --media-type video
# 2) Upload local reference images
python3 scripts/cyberbara_api.py upload-images ./frame.png ./style.jpg
# 3) Estimate credits
python3 scripts/cyberbara_api.py quote --json '{
"model":"sora-2",
"media_type":"video",
"scene":"text-to-video",
"options":{"duration":"10"}
}'
# 4) Create a video task (default behavior: wait for success, save outputs to ./media_outputs, auto-open)
python3 scripts/cyberbara_api.py generate-video --json '{
"model":"sora-2",
"prompt":"A calm drone shot over snowy mountains at sunrise",
"scene":"text-to-video",
"options":{"duration":"10","resolution":"standard"}
}'
# 5) Existing task: wait + save/open outputs
python3 scripts/cyberbara_api.py wait --task-id <TASK_ID> --interval 5 --timeout 900
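The wait flow amounts to polling the task endpoint until a final status is reached. A minimal sketch, where fetch_task is a stand-in for the real GET /api/v1/tasks/{taskId} call:

```python
import time

# Only these statuses end the polling loop; only "success" guarantees output URLs.
FINAL_STATUSES = {"success", "failed", "canceled"}

def wait_for_task(fetch_task, task_id, interval=5, timeout=900):
    """Poll until the task reaches a final status or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        snapshot = fetch_task(task_id)
        if snapshot["status"] in FINAL_STATUSES:
            return snapshot
        time.sleep(interval)
    raise TimeoutError(f"task {task_id} did not finish within {timeout}s")
```

The --interval and --timeout flags map directly onto the two parameters here.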
Image and video generation are confirmation-gated by default:
# Single image request: script auto-quotes, then asks you to type CONFIRM
python3 scripts/cyberbara_api.py generate-image --json '{
"model":"nano-banana-pro",
"prompt":"A cinematic portrait under neon rain",
"scene":"text-to-image",
"options":{"resolution":"1k"}
}'
# Batch image requests (JSON array): script auto-quotes each request and prints total estimated credits
python3 scripts/cyberbara_api.py generate-image --file ./image-requests.json
image-requests.json format:
[
{
"model": "nano-banana-pro",
"prompt": "A cinematic portrait under neon rain",
"scene": "text-to-image",
"options": { "resolution": "1k" }
},
{
"model": "nano-banana-pro",
"prompt": "A product still life with dramatic side light",
"scene": "text-to-image",
"options": { "resolution": "1k" }
}
]
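For batch files like this, the script sums the per-request estimates before asking for confirmation. A sketch of that total computation, where quote_credits stands in for the real quote endpoint and the flat 20-credit estimate is illustrative only:

```python
import json

def total_estimated_credits(requests, quote_credits):
    """Sum the per-request credit estimates for a batch of generation requests."""
    return sum(quote_credits(req) for req in requests)

batch = json.loads("""[
  {"model": "nano-banana-pro", "scene": "text-to-image", "options": {"resolution": "1k"}},
  {"model": "nano-banana-pro", "scene": "text-to-image", "options": {"resolution": "1k"}}
]""")
print(total_estimated_credits(batch, lambda req: 20))  # prints 40
```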
Only use --yes after explicit user approval has been obtained:
python3 scripts/cyberbara_api.py generate-image --file ./image-requests.json --yes
python3 scripts/cyberbara_api.py generate-video --json '{
"model":"sora-2",
"prompt":"A calm drone shot over snowy mountains at sunrise",
"scene":"text-to-video",
"options":{"duration":"10","resolution":"standard"}
}' --yes
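The confirmation gate itself is simple: the user must type CONFIRM unless --yes was passed. A hypothetical sketch of that rule (the injectable read_input parameter is for testability, not a documented flag):

```python
def confirm_or_abort(total_credits, yes_flag, read_input=input):
    """Return True if the spend is approved: --yes skips the prompt entirely."""
    if yes_flag:
        return True
    answer = read_input(
        f"This will spend ~{total_credits} credits. Type CONFIRM to proceed: "
    )
    return answer.strip() == "CONFIRM"
```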
Control auto-save and open behavior:
# keep waiting but do not auto-open media
python3 scripts/cyberbara_api.py generate-image --file ./image-requests.json --yes --no-open
# custom output directory
python3 scripts/cyberbara_api.py generate-video --json '{...}' --yes --output-dir ./downloads
# submit only (no wait/save/open)
python3 scripts/cyberbara_api.py generate-video --json '{...}' --yes --async
scripts/cyberbara_api.py supports:
- setup-api-key: persist the API key into the local cache
- models: list public models (--media-type image|video optional)
- upload-images: upload local image files to /api/v1/uploads/images
- quote: estimate credit cost from a JSON request body
- generate-image: auto-quote credits, compute the total for batch requests, require formal confirmation, create task(s), wait, then save/open outputs
- generate-video: auto-quote credits, compute the total for batch requests, require formal confirmation, create task(s), wait, then save/open outputs
- task: fetch a task snapshot by task ID
- wait: poll a task until success, failed, or canceled, then save/open outputs
- balance and usage: inspect credits
- raw: direct custom endpoint calls

Use --file request.json instead of --json for long payloads.
- Authenticate with Authorization: Bearer <key> (or x-api-key).
- Put generation parameters under options.* only.
- Set scene explicitly to avoid inference ambiguity.
- Use options.image_input for image-to-image and image-to-video.
- Use options.video_input for video-to-video.
- Poll /api/v1/tasks/{taskId} until a final status; only success guarantees output URLs.
- Outputs are saved to media_outputs/ by default and auto-opened unless disabled.

Use the reference file for full model matrices and examples:
references/cyberbara-api-reference.mdx

For fast lookup in the large reference:
rg '^## |^### ' references/cyberbara-api-reference.mdx
rg 'kling-2.6|sora-2|veo-3.1|seedance' references/cyberbara-api-reference.mdx
- invalid_api_key or api_key_required: verify the key and headers.
- insufficient_credits: quote first or recharge credits.
- invalid_scene or scene_not_supported: choose a scene supported by the model.
- invalid_request: verify the prompt and options requirements for the model.
- task_not_found: verify the task ID and environment domain.
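These troubleshooting rules can be expressed as a small lookup table. This is an illustrative sketch, not part of the shipped script:

```python
# Map documented API error codes to the remediation hints above.
ERROR_HINTS = {
    "invalid_api_key": "Verify the key and the Authorization / x-api-key headers.",
    "api_key_required": "Verify the key and the Authorization / x-api-key headers.",
    "insufficient_credits": "Run quote first or recharge credits.",
    "invalid_scene": "Choose a scene supported by the model.",
    "scene_not_supported": "Choose a scene supported by the model.",
    "invalid_request": "Verify the prompt and options requirements for the model.",
    "task_not_found": "Verify the task ID and environment domain.",
}

def hint_for(error_code):
    """Return a remediation hint for a known error code, or a fallback message."""
    return ERROR_HINTS.get(error_code, "Unrecognized error; see the API reference.")
```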