Install
```shell
openclaw skills install weryai-video-generator
```

Generate WeryAI videos with the official base skill for text-to-video, image-to-video, video from image, storyboard-to-video, and first-frame/last-frame transition workflows. In agent environments, default to result-first execution: submit and poll until playable videos are ready or the timeout is reached. Use a bounded wait (short tasks: 10 minutes; long tasks: 30 minutes) and avoid unbounded polling loops.
Example requests:
- Generate a WeryAI text-to-video clip and return the final playable video if it is ready within the timeout.
- Turn this image into a video with subtle motion and wait for the final output with bounded polling.
- Generate a transition video from this first frame to this last frame.
- Turn these storyboard frames into one coherent product reveal video.
- Check which WeryAI video model supports 10 seconds, 16:9, and audio before submitting.

Covered workflows: text-to-video, image-to-video, video from image, first-frame to last-frame video, storyboard-to-video, task status.

Defaults: model SEEDANCE_2_0, duration=5, resolution=720p, aspect_ratio=9:16, generate_audio=true (for audio-capable models).

Before the first real generation run:
Create an API key at https://www.weryai.com/api/keys and expose it as WERYAI_API_KEY. The skill declares WERYAI_API_KEY in metadata.openclaw.requires.env and primaryEnv.

```shell
export WERYAI_API_KEY="your_api_key_here"
```
Run a safe check before the first paid run:

```shell
node {baseDir}/scripts/models-video.js --mode text_to_video
node {baseDir}/scripts/wait-video.js --json '{"prompt":"A glowing koi swims through ink clouds","duration":5}' --dry-run
```
- models-video.js confirms that the key is configured and the models endpoint is reachable.
- --dry-run confirms the request shape locally without spending credits.
- wait or submit-* commands still require available WeryAI balance.
- WERYAI_API_KEY must be set before paid runs.
- Node.js >=18 is required because the runtime uses built-in fetch.
- Media inputs (image, images, videos, audios) can be http/https URLs or local/file sources. Local/non-http(s) sources are uploaded first via /v1/generation/upload-file.
- submit and wait commands consume WeryAI credits.
- Keep the WERYAI_API_KEY secret and never write it into the repository.
- WERYAI_BASE_URL and WERYAI_MODELS_BASE_URL default to https://api.weryai.com and https://api-growth-agent.weryai.com. Only override them with trusted hosts.
- Audit the code in scripts/ before production use if you need higher assurance.
- Query models first, then submit.

Unless the user explicitly changes them, prefer:
- model: SEEDANCE_2_0 (Seedance 2.0)
- duration: 5
- resolution: 720p
- aspect_ratio: 9:16
- generate_audio: true (for audio-capable models)

Always allow the user to override model, duration, resolution, aspect_ratio, and generate_audio. When the user asks for unsupported settings, check models-video.js and keep only values supported by the chosen model.
Guide the user progressively instead of explaining every parameter up front.
Start from the default SEEDANCE_2_0 configuration. Use short operator-style guidance like this:
- "I can start with the default setup: Seedance 2.0, 5s, 720p, 9:16, audio on (for audio-capable models). If you want, I can also switch the model or adjust the duration, aspect ratio, resolution, or audio before submission."
- "If you want a different model, tell me whether you care more about image quality, motion performance, start/end-frame control, multi-image support, or cost/speed, and I will check the supported models first."
- "I can map your request into video settings. For example: vertical short video -> 9:16, landscape -> 16:9, longer clip -> 10s or 15s, add ambience -> generate_audio=true."
- "Before I submit a paid task, I will show the final model, parameters, and prompt so you can confirm them."

Ask only for the smallest missing detail needed to submit safely.
- Set aspect_ratio when the user implies platform intent such as TikTok, Reels, YouTube Shorts, or a landscape trailer.
- Increase duration when the user asks for a longer clip or a slower beat.
- Keep generate_audio enabled by default for audio-capable models unless the user asks to mute.

Use these common mappings:
- vertical, portrait, short video, TikTok, Reels, Shorts -> aspect_ratio: 9:16
- square -> aspect_ratio: 1:1
- landscape, widescreen, YouTube, cinematic frame -> aspect_ratio: 16:9
- make it longer, slower pacing -> increase duration to a supported value such as 10 or 15
- make it clearer, higher quality -> use the highest supported resolution for the chosen model
- add ambient sound, with audio -> generate_audio: true
- use another model, not Seedance, check supported models -> run models-video.js before submission

When the user asks to change the model or requests parameters that may be unsupported:
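The mappings above could be encoded as a small helper. This is an illustrative sketch, not part of the skill; the function name and keyword regexes are assumptions.

```javascript
// Hypothetical sketch: map casual user phrasing onto request parameters.
// Keyword lists mirror the documented mappings; names are illustrative.
function mapIntentToParams(text) {
  const t = text.toLowerCase();
  const params = {};
  if (/(vertical|portrait|tiktok|reels|shorts)/.test(t)) params.aspect_ratio = "9:16";
  else if (/square/.test(t)) params.aspect_ratio = "1:1";
  else if (/(landscape|widescreen|youtube|cinematic)/.test(t)) params.aspect_ratio = "16:9";
  if (/(longer|slower)/.test(t)) params.duration = 10; // next supported step up from 5
  if (/(ambient|with audio|sound)/.test(t)) params.generate_audio = true;
  return params;
}
```

A real agent would still confirm the mapped values with the user before a paid submission.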
- Identify the requested mode: text_to_video, image_to_video, or multi_image_to_video.
- Run the models-video.js command first.
- Confirm duration, aspect_ratio, resolution, and audio support.
- Use --dry-run or a safe model query before the paid call.

Before a paid run, show a concise confirmation block with the final payload choices.
Ready to generate
- mode: `image-to-video`
- model: `SEEDANCE_2_0` (`Seedance 2.0`)
- duration: `5`
- resolution: `720p`
- aspect_ratio: `9:16`
- generate_audio: `true` (for audio-capable models)
- image: `https://example.com/input.png`
- prompt: `Animate this portrait with subtle hair and fabric motion, preserve identity, keep the composition stable, soft side lighting, gentle camera drift, clean final hold on the face.`
Wait for confirmation or requested edits before running a paid submission.
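An agent could assemble the confirmation block above mechanically from the final payload. A minimal sketch, assuming the payload is a flat object of the fields shown; the helper name is hypothetical:

```javascript
// Hypothetical helper: render the pre-submission confirmation block
// from the final payload the agent is about to send.
function confirmationBlock(payload) {
  const lines = ["Ready to generate"];
  for (const [key, value] of Object.entries(payload)) {
    lines.push(`- ${key}: \`${value}\``);
  }
  return lines.join("\n");
}
```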
Use result-first execution as the default path so users receive final playable output whenever possible.
Route by input:
- prompt only: route to text-to-video.
- image: route to image-to-video.
- first_frame + last_frame, or image + last_image: normalize them into ordered images and route to the guided multi-image flow.
- images: route to multi-image-to-video.

By default, run wait-video.js (submit + bounded polling) and return the final video when ready. On timeout, return the taskId/batchId plus a follow-up status command. Use submit-* only when the user explicitly asks to create a task without waiting.

```shell
# Default bounded wait (result-first)
node {baseDir}/scripts/wait-video.js --json '{"prompt":"A neon city flythrough at night","duration":5}'

# Poll an existing task
node {baseDir}/scripts/status-video.js --task-id <task-id>
```
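The routing rules can be sketched as a small mode-detection helper. The field names match the documented inputs, but the function is illustrative; the real scripts' internals may differ.

```javascript
// Hypothetical sketch of mode detection from the submission payload.
// Start/end-frame aliases are normalized first; `images` wins over `image`.
function detectMode(input) {
  const first = input.first_frame ?? input.image;
  const last = input.last_frame ?? input.last_image;
  if (first && last) {
    // first_frame+last_frame or image+last_image -> guided multi-image flow
    return { mode: "multi_image_to_video", images: [first, last] };
  }
  if (input.images) return { mode: "multi_image_to_video", images: input.images };
  if (input.image) return { mode: "image_to_video" };
  return { mode: "text_to_video" };
}
```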
Operating rules:
- Collect a prompt and, if needed, ordered public https image URLs.
- Default to SEEDANCE_2_0 (Seedance 2.0), 5s, 720p, 9:16, and generate_audio=true for audio-capable models, unless the user asks otherwise.
- Run models-video.js first when support is uncertain.
- Use --dry-run when you need to preview the final payload before a paid submission.
- Use wait-video.js (submit + polling) for generation requests.
- On timeout, return the taskId to the user and ask if they want you to check the status again. Do NOT show the raw node status command to the user; use it internally.
- Use submit-* only when the user explicitly asks for task creation without waiting.
- Use status-video.js to re-check an existing task or batch safely.

Timeout classes:
- short task class: text_to_video, default timeout 10 minutes (600000 ms).
- long task class: image_to_video, multi_image_to_video, almighty_reference_to_video, default timeout 30 minutes (1800000 ms).
- The auto task class (default) maps from the effective submission mode.
- WERYAI_POLL_TIMEOUT_MS remains the highest-priority explicit override for compatibility.
- Class-specific overrides: WERYAI_SHORT_TASK_TIMEOUT_MS and WERYAI_LONG_TASK_TIMEOUT_MS.

Input rules:
- prompt and duration are required for every generation mode.
- image, images, first_frame, last_frame, and last_image must be public https URLs.
- If both images and image are provided, images wins during mode detection.
- first_frame + last_frame and image + last_image are accepted aliases for start/end-frame intent.
- Optionally set aspect_ratio, resolution, negative_prompt, or generate_audio when supported by the selected model.

All commands print JSON to stdout. Successful results can include:
- taskId, taskIds, batchId
- taskStatus
- videos
- requestSummary
- balance
- errorCode, errorMessage

User-facing delivery requirement:
- Render the final video as a markdown link (e.g. [Video](https://...)). If multiple videos are generated, render all of them using markdown links consecutively.
- Report the final model, duration, aspect_ratio, resolution, and generate_audio.
- On timeout, return the taskId to the user and ask if they want you to check the status again. Do NOT show the raw node status command to the user; use it internally.

See references/error-codes.md for common failure classes and recovery hints.
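The timeout precedence described earlier (explicit WERYAI_POLL_TIMEOUT_MS override, then the class-specific variable, then the class default) might look like this sketch; resolveTimeout is a hypothetical name, not a function exported by the scripts.

```javascript
// Sketch of timeout resolution; defaults mirror the documented
// 10-minute short class and 30-minute long class.
const DEFAULT_TIMEOUT_MS = { short: 600000, long: 1800000 };

function resolveTimeout(taskClass, env) {
  // Explicit override wins for compatibility.
  if (env.WERYAI_POLL_TIMEOUT_MS) return Number(env.WERYAI_POLL_TIMEOUT_MS);
  const classKey = taskClass === "short"
    ? "WERYAI_SHORT_TASK_TIMEOUT_MS"
    : "WERYAI_LONG_TASK_TIMEOUT_MS";
  if (env[classKey]) return Number(env[classKey]);
  return DEFAULT_TIMEOUT_MS[taskClass];
}
```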
The task is done when:
- wait-video.js reaches a terminal state with at least one playable video URL,
- or the bounded wait times out and you return the taskId plus an offer to check status again later,
- or status-video.js returns a clear in-progress or terminal state.

Do not re-run submit or wait casually because each run can create a new paid task. Prefer wait-video.js in agent environments for long-running generations.

Idempotency:
- submit-text-video.js, submit-image-video.js, and submit-multi-image-video.js are not idempotent.
- wait-video.js is also not idempotent because it submits first and then polls.
- status-video.js, models-video.js, and balance-video.js are safe to re-run.
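The bounded-wait contract (poll until a terminal state or the deadline, never loop forever) can be sketched like this; checkStatus stands in for a status-video.js call, and all names here are illustrative assumptions.

```javascript
// Hypothetical bounded-polling loop: returns the terminal status if reached
// before the deadline, otherwise signals a timeout so the caller can hand
// back the taskId instead of polling indefinitely.
async function waitBounded(checkStatus, { timeoutMs, intervalMs = 5000 }) {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    const status = await checkStatus();
    if (status.terminal) return status; // playable video or terminal failure
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  return { terminal: false, timedOut: true };
}
```

Because it only submits once outside the loop, a wrapper like this keeps the non-idempotent submission separate from the safe, repeatable status checks.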