Install
openclaw skills install phosor-ai

Generate AI videos (text-to-video, image-to-video) with optional custom LoRA styles via the Phosor AI platform. Supports importing images and LoRA models from URLs.
For detailed API endpoints, parameters, pricing, and limits, see references/api.md.
export PHOSOR_API_KEY="your-api-key-here"
Keep your API key secret. Do not commit it to version control or share it publicly. All API calls are authenticated and billed through this key.
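Once the key is exported, you can verify it is accepted before running any billable commands. A minimal check (whether a bad key causes a non-zero exit code is an assumption; inspect the JSON output either way):

# Validate the configured PHOSOR_API_KEY before submitting paid jobs
python3 scripts/phosor_client.py check-key || echo "API key was rejected"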
The CLI script is at scripts/phosor_client.py. All commands output JSON to stdout.
Local files used by the CLI:
phosor-pending.json — tracks pending job states locally

# Submit T2V job (480p, 81 frames, 16fps)
python3 scripts/phosor_client.py submit "A cat walking on a beach at sunset" \
--width 854 --height 480 --num-frames 81 --fps 16
# Check status
python3 scripts/phosor_client.py status <request_id>
# Get result (video URL)
python3 scripts/phosor_client.py result <request_id>
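Since every command prints JSON, the whole flow can be scripted end to end. A rough sketch, assuming jq is installed and that the responses expose request_id, status, and video_url fields (the field names are assumptions; check the actual output or references/api.md):

# Submit and capture the request id
REQUEST_ID=$(python3 scripts/phosor_client.py submit "A cat walking on a beach at sunset" \
  --width 854 --height 480 --num-frames 81 --fps 16 | jq -r '.request_id')

# Poll until the job leaves the PENDING/PROCESSING states
while true; do
  STATUS=$(python3 scripts/phosor_client.py status "$REQUEST_ID" | jq -r '.status')
  echo "status: $STATUS"
  if [ "$STATUS" = "COMPLETED" ] || [ "$STATUS" = "FAILED" ]; then break; fi
  sleep 10
done

# Fetch the video URL once the job has completed
python3 scripts/phosor_client.py result "$REQUEST_ID" | jq -r '.video_url'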
Two-step flow: upload image first, then submit with the returned S3 key.
# Step 1: Upload image
python3 scripts/phosor_client.py upload-image /path/to/photo.jpg
# Returns: {"file_id": "img-xxx", "s3_key": "images/img-xxx.jpg", ...}
# Step 2: Submit I2V job using the s3_key as image_url
python3 scripts/phosor_client.py submit "The person in the photo starts dancing" \
--image-url "images/img-xxx.jpg" --width 854 --height 480
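The two steps can be chained by extracting s3_key from the upload response (assuming jq is available; the s3_key field matches the example response above):

# Upload the image and capture the returned S3 key
S3_KEY=$(python3 scripts/phosor_client.py upload-image /path/to/photo.jpg | jq -r '.s3_key')

# Submit the I2V job using the captured key
python3 scripts/phosor_client.py submit "The person in the photo starts dancing" \
  --image-url "$S3_KEY" --width 854 --height 480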
Upload your own LoRA model, then use it in video generation.
# Upload LoRA (two .safetensors: high_noise + low_noise)
python3 scripts/phosor_client.py upload-lora high_noise.safetensors low_noise.safetensors --name "My Style"
# Check upload status
python3 scripts/phosor_client.py lora-status <lora_id>
# Use in inference
python3 scripts/phosor_client.py submit "A person walking" --lora-id <lora_id> --lora-scale 1.0
# Or import LoRA from URLs
python3 scripts/phosor_client.py import-lora \
"https://example.com/high_noise.safetensors" \
"https://example.com/low_noise.safetensors" \
--name "My Style"
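As with images, the LoRA upload can be chained into inference once processing finishes. A sketch that waits for the LoRA to become ready before submitting (the lora_id field and the COMPLETED status value are assumptions; see references/api.md for the exact response shape):

# Upload the LoRA pair and capture its id
LORA_ID=$(python3 scripts/phosor_client.py upload-lora high_noise.safetensors low_noise.safetensors \
  --name "My Style" | jq -r '.lora_id')

# Wait until the LoRA upload has been processed
until python3 scripts/phosor_client.py lora-status "$LORA_ID" | jq -e '.status == "COMPLETED"' > /dev/null; do
  sleep 10
done

# Use the LoRA in a generation job
python3 scripts/phosor_client.py submit "A person walking" --lora-id "$LORA_ID" --lora-scale 1.0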
submit <prompt> — Submit inference job (T2V/I2V). Options: --width, --height, --num-frames, --fps, --steps, --guidance, --image-url, --lora-id, --lora-scale, --seed, --negative-prompt, --model
status <request_id> — Get job status
result <request_id> — Get job result (video URL)
poll — Poll all pending jobs
list — List locally tracked pending jobs
history — Get job history. Options: --limit
upload-image <file> — Upload image for I2V
import-image <url> — Import image from URL. Options: --filename
upload-lora <high_noise_file> <low_noise_file> — Upload LoRA (two .safetensors). Options: --name
import-lora <high_noise_url> <low_noise_url> — Import LoRA from URLs. Options: --name
loras — List LoRA models. Options: --limit, --offset
lora-status <lora_id> — Get LoRA upload status
delete-lora <lora_id> — Delete a LoRA model
check-key — Validate API key
models — List available models
quotas — Get quota usage/limits

Frames must follow 1 + 4*k where k >= 1 (e.g. 5, 9, 13, ..., 81, 85, ...). The server auto-aligns down (see the sketch after the parameter list below).
frames_per_second — default: 16, range: 4–30
num_inference_steps — default: 4, range: 4
guidance_scale — default: 1.0, range: 1.0–3.5
lora_scale — default: 1.0, range: 0.0–2.0
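The num-frames rule can be checked locally before submitting: take the largest value of the form 1 + 4*k that does not exceed the requested count, matching the server's align-down behaviour. A quick sketch in shell arithmetic:

# Align a requested frame count down to the nearest valid 1 + 4*k value (minimum valid value is 5)
REQUESTED=80
ALIGNED=$(( (REQUESTED - 1) / 4 * 4 + 1 ))   # 80 -> 77; 81 stays 81
echo "$ALIGNED"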
Files must be uploaded before use:
upload-image / import-image → returns s3_key → use as --image-url
upload-lora / import-lora → returns lora_id → use as --lora-id

Job lifecycle: PENDING → PROCESSING → COMPLETED / FAILED
The poll command checks all locally-tracked pending jobs and removes completed/failed ones.
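For batch workflows, a simple loop can keep calling poll until no tracked jobs remain. A sketch, assuming the list command prints a JSON array of pending jobs (the exact output shape may differ):

# Re-poll every 30 seconds until the local pending list is empty
while [ "$(python3 scripts/phosor_client.py list | jq 'length')" -gt 0 ]; do
  python3 scripts/phosor_client.py poll
  sleep 30
done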
Exchange rate: 10 credits = $1 USD. Credits are pre-deducted when a job is submitted and automatically refunded if it fails.
Available models (from the GET /api/v1/models endpoint): wan/2.2-14b/text-to-video, wan/2.2-14b/image-to-video
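To see which model ids can be passed to --model and how much quota remains, the models and quotas commands cover both; their exact response fields are documented in references/api.md:

# List available model ids (e.g. wan/2.2-14b/text-to-video)
python3 scripts/phosor_client.py models

# Check quota usage and limits for the current API key
python3 scripts/phosor_client.py quotas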