Install
```
openclaw skills install mulerouter
```

Generate images and videos using MuleRouter or MuleRun multimodal APIs: Text-to-Image, Image-to-Image, Text-to-Video, Image-to-Video, and video editing (VACE, k...).
This skill requires the following environment variables to be set before use:
| Variable | Required | Description |
|---|---|---|
| `MULEROUTER_API_KEY` | Yes | API key for authentication (get one here) |
| `MULEROUTER_BASE_URL` | Yes* | Custom API base URL (e.g., `https://api.mulerouter.ai`). Takes priority over `MULEROUTER_SITE`. |
| `MULEROUTER_SITE` | Yes* | API site: `mulerouter` or `mulerun`. Used if `MULEROUTER_BASE_URL` is not set. |
*At least one of MULEROUTER_BASE_URL or MULEROUTER_SITE must be set.
The API key is sent in an `Authorization: Bearer` header on every network call to the configured API endpoint.
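As a rough sketch of what that means in practice (this helper is illustrative, not the skill's actual client code), the header is just the key prefixed with `Bearer `:

```python
import os

def build_headers():
    """Build request headers with the API key as a Bearer token.
    Illustrative sketch only, not the skill's real HTTP client."""
    api_key = os.environ["MULEROUTER_API_KEY"]
    return {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }

os.environ["MULEROUTER_API_KEY"] = "demo-key"  # placeholder value for demonstration
print(build_headers()["Authorization"])  # Bearer demo-key
```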
If any of these variables are missing, the scripts will fail with a configuration error. Check the Configuration section below to set them up.
Before running any commands, verify the environment is configured:
Run the built-in config check script:

```
uv run python -c "from core.config import load_config; load_config(); print('Configuration OK')"
```
If this prints `Configuration OK`, skip to Step 3. If it raises a `ValueError`, proceed to Step 2.
If the variables above are not set, ask the user to provide their API key and preferred endpoint.
Create a `.env` file in the skill's working directory:

```
# Option 1: Use custom base URL (takes priority over SITE)
MULEROUTER_BASE_URL=https://api.mulerouter.ai
MULEROUTER_API_KEY=your-api-key

# Option 2: Use site (if BASE_URL not set)
# MULEROUTER_SITE=mulerun
# MULEROUTER_API_KEY=your-api-key
```
Note: `MULEROUTER_BASE_URL` takes priority over `MULEROUTER_SITE`. If both are set, `MULEROUTER_BASE_URL` is used.
Note: The skill only loads variables prefixed with MULEROUTER_ from the .env file. Other variables in the file are ignored.
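The two notes above can be sketched in a few lines of Python. This is a minimal illustration of the documented behavior (prefix-filtered `.env` loading and BASE_URL-over-SITE priority), not the skill's real loader:

```python
def load_mulerouter_env(path=".env"):
    """Load only MULEROUTER_-prefixed variables from a .env file.
    Sketch of the documented behavior, not the skill's actual code."""
    loaded = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blank lines, comments, and malformed entries
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            key = key.strip()
            if key.startswith("MULEROUTER_"):
                loaded[key] = value.strip()
    return loaded

def resolve_endpoint(env):
    """MULEROUTER_BASE_URL wins when both are set, per the note above."""
    if env.get("MULEROUTER_BASE_URL"):
        return env["MULEROUTER_BASE_URL"]
    if env.get("MULEROUTER_SITE"):
        # The skill maps the site name to its endpoint internally;
        # details are omitted here.
        return env["MULEROUTER_SITE"]
    raise ValueError("Set MULEROUTER_BASE_URL or MULEROUTER_SITE")
```

For example, a `.env` containing `MULEROUTER_BASE_URL=https://api.mulerouter.ai` alongside an unrelated `OTHER=x` line would load only the `MULEROUTER_`-prefixed entry.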
Important: Do NOT use `export` shell commands to set credentials. Use a `.env` file or ensure the variables are already present in your shell environment before invoking the skill.
The skill uses `uv` to run scripts and manage dependencies. Make sure `uv` is installed and available in your PATH.
Run `uv sync` to install dependencies. Then list the available models and inspect a model's parameters:

```
uv run python scripts/list_models.py
uv run python models/alibaba/wan2.6-t2v/generation.py --list-params
```
Text-to-Video:

```
uv run python models/alibaba/wan2.6-t2v/generation.py --prompt "A cat walking through a garden"
```

Text-to-Image:

```
uv run python models/alibaba/wan2.6-t2i/generation.py --prompt "A serene mountain lake"
```

Image-to-Video:

```
uv run python models/alibaba/wan2.6-i2v/generation.py --prompt "Gentle zoom in" --image "https://example.com/photo.jpg"  # remote image URL
uv run python models/alibaba/wan2.6-i2v/generation.py --prompt "Gentle zoom in" --image "/path/to/local/image.png"  # local image path
```
For image parameters (`--image`, `--images`, etc.), prefer local file paths over base64:

```
# Preferred: local file path (auto-converted to base64)
--image /tmp/photo.png
--images ["/tmp/photo.png"]
```
Local file paths are validated before reading: only files with recognized image extensions (.png, .jpg, .jpeg, .gif, .bmp, .webp, .tiff, .tif, .svg, .ico, .heic, .heif, .avif) are accepted. Paths pointing to sensitive system directories or non-image files are rejected. Valid image files are converted to base64 and sent to the API, avoiding command-line length limits that occur with raw base64 strings.
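The extension check and base64 conversion described above can be sketched as follows. This is a simplified illustration of the documented behavior (it omits the sensitive-directory rejection the skill also performs, and is not the skill's actual code):

```python
import base64
from pathlib import Path

# Extensions the skill documents as accepted
IMAGE_EXTENSIONS = {
    ".png", ".jpg", ".jpeg", ".gif", ".bmp", ".webp",
    ".tiff", ".tif", ".svg", ".ico", ".heic", ".heif", ".avif",
}

def image_path_to_base64(path_str):
    """Validate the file extension, then read and base64-encode the file.
    Sketch only: the real skill also rejects sensitive system paths."""
    path = Path(path_str)
    if path.suffix.lower() not in IMAGE_EXTENSIONS:
        raise ValueError(f"Not a recognized image extension: {path.suffix!r}")
    return base64.b64encode(path.read_bytes()).decode("ascii")
```

Passing the encoded result to the API instead of a raw base64 string on the command line is what avoids shell argument-length limits.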
Quick recap before generating:

- `MULEROUTER_API_KEY` and either `MULEROUTER_BASE_URL` or `MULEROUTER_SITE` are set
- `uv sync`
- `uv run python scripts/list_models.py` to discover available models
- `uv run python models/<path>/<action>.py --list-params` to see parameters

When listing models, each model's tags (e.g., `[SOTA]`) are displayed by default next to its name. Tags help identify model characteristics at a glance; for example, SOTA indicates a state-of-the-art model.
You can also filter models by tag using `--tag`:

```
uv run python scripts/list_models.py --tag SOTA
```
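Conceptually, tag filtering just matches the requested tag against each model's tag list. A hypothetical sketch (the model entries below are made up for illustration; the real catalog comes from `scripts/list_models.py`):

```python
def filter_models_by_tag(models, tag):
    """Return models whose tag list contains the given tag (case-insensitive)."""
    tag = tag.upper()
    return [m for m in models if tag in (t.upper() for t in m["tags"])]

models = [  # hypothetical entries for illustration only
    {"name": "alibaba/wan2.6-t2v", "tags": ["SOTA"]},
    {"name": "alibaba/wan2.6-t2i", "tags": []},
]
print([m["name"] for m in filter_models_by_tag(models, "SOTA")])  # ['alibaba/wan2.6-t2v']
```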
If you are unsure which model to use, present the available options to the user and let them choose. Use the AskUserQuestion tool (or equivalent interactive prompt) to ask the user which model they prefer. For example, if the user asks to "generate an image" without specifying a model, list the relevant image generation models with their tags and descriptions, and ask the user to pick one.