## Install

```
openclaw skills install hidream-api-gen
```

OpenClaw AIGC models (video + image) client. **REQUIRES CREDENTIALS**: You must set the `HIDREAM_AUTHORIZATION` environment variable or use `scripts/configure.py`.
This skill provides per-model scripts that validate parameters and call shared request runners.
## Configuration

The token is read from the `HIDREAM_AUTHORIZATION` environment variable or from a secure local config file (`~/.config/openclaw/hidream_config.json`, permission 0600). The API endpoint is taken from `HIDREAM_ENDPOINT` (default: `https://vivago.ai`).

## Workflow

When a user wants to use this skill, follow these steps:
1. **Check for Token**: The system automatically checks for the API key in the following order:
   1. `~/.openclaw/.env` (reads `HIDREAM_AUTHORIZATION=sk-...`)
   2. The `HIDREAM_AUTHORIZATION` environment variable
   3. `~/.config/openclaw/hidream_config.json`
2. **Request Token (If Missing or 401)**: Instruct the user to update their credentials. They can either:
   - Provide the token (starting with `sk-`) and ask the agent to save it: `echo "HIDREAM_AUTHORIZATION=sk-..." > ~/.openclaw/.env`
   - Run `python3 scripts/configure.py` to interactively save the token.
3. **Handle Missing Prompts**: If the user asks to generate an image or video but does not provide a specific prompt, DO NOT generate a random test image. Instead, politely ask the user what they would like to generate.
4. **Generate**: Use the Python interface to generate content based on user requests.
5. **Save Output**: When generation is complete and returns a media URL (image or video), ALWAYS download and save the file to the `assets/` directory within this skill's folder (e.g., `assets/generated_image.png`). Do NOT use `~/.openclaw/workspace/output/`, as it may have permission issues. Create the `assets/` directory if it does not exist.
6. **Present Results**: When showing generated images or videos to the user, ALWAYS embed them with Markdown to ensure proper rendering in the Claw interface, using the local path if downloaded, or the remote URL otherwise.
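The **Save Output** step above can be sketched as follows. `save_to_assets` and its signature are hypothetical illustrations of the documented behavior, not part of the skill's scripts:

```python
import os
import urllib.request
from urllib.parse import urlparse


def save_to_assets(media_url: str, skill_dir: str = ".") -> str:
    """Download a generated media URL into the skill's assets/ directory.

    Hypothetical helper for the 'Save Output' step; the skill's actual
    scripts may handle this differently.
    """
    assets_dir = os.path.join(skill_dir, "assets")
    os.makedirs(assets_dir, exist_ok=True)  # create assets/ if it does not exist
    # Derive a filename from the URL path, with a fallback name.
    filename = os.path.basename(urlparse(media_url).path) or "generated_media"
    dest = os.path.join(assets_dir, filename)
    urllib.request.urlretrieve(media_url, dest)
    return dest
```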
## Troubleshooting

- **Resolution errors**: Some requests fail with a resolution-related error; if you get this error, increase the resolution parameter (e.g., to `2560*1440` or `2048*2048`).
- **Authentication errors (401)**: Update the token in `~/.openclaw/.env`, e.g. `echo "HIDREAM_AUTHORIZATION=sk-..." > ~/.openclaw/.env`
- **Insufficient credits**: Visit https://vivago.ai/platform/info to recharge credits.

## Python Usage

You can call the scripts directly from Python code. This is the preferred way for AI agents to interact.
```python
from scripts.seedream import run as run_seedream

# Example: Generate an image
try:
    result = run_seedream(
        version="M2",
        prompt="A cyberpunk cat on the moon",
        resolution="2048*2048",
        authorization="sk-..."  # Optional if env var is set
    )
    print(result)
except Exception as e:
    print(f"Error: {e}")
```
```python
from scripts.kling import run as run_kling

# Example: Generate a video
try:
    result = run_kling(
        version="Q2.5T-std",
        prompt="A cyberpunk cat running on neon streets",
        duration=5,
        authorization="sk-..."  # Optional if env var is set
    )
    print(result)
except Exception as e:
    print(f"Error: {e}")
```
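Steps 3 and 4 of the workflow can be combined into a small agent-side dispatcher. This is a hypothetical sketch, not shipped with the skill; the `run` functions are passed in as callables so the same guard works for any model script:

```python
def generate(kind, prompt, runners, **kwargs):
    """Route a generation request to a per-model `run` function.

    Hypothetical sketch: `runners` maps a media kind (e.g. "image",
    "video") to a run function such as run_seedream or run_kling.
    """
    # Workflow rule: never generate from an empty prompt; ask the user instead.
    if not prompt or not prompt.strip():
        raise ValueError("No prompt provided; ask the user what to generate.")
    if kind not in runners:
        raise ValueError(f"Unknown media kind: {kind!r}")
    return runners[kind](prompt=prompt, **kwargs)
```

For example, an agent could call `generate("image", user_prompt, {"image": run_seedream, "video": run_kling}, version="M2")`.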
## File Structure

- `scripts/common/base_image.py`: shared OpenClaw image request runner
- `scripts/common/base_video.py`: shared OpenClaw video request runner
- `scripts/common/task_client.py`: HTTP request runner
- `scripts/*.py`: per-model scripts (parameter parsing + payload only)

## Environment Variables

Set one of the following environment variables, or use `scripts/configure.py`:

- `HIDREAM_AUTHORIZATION`: Bearer token value only
- `HIDREAM_ENDPOINT`: API endpoint (default: `https://vivago.ai`)
- `OPENCLAW_AUTHORIZATION` (Legacy): alternative to `HIDREAM_AUTHORIZATION`
- `OPENCLAW_ENDPOINT` (Legacy): alternative to `HIDREAM_ENDPOINT`

## Requirements

- `requests` (Python library) - see `requirements.txt`

## Per-Model Scripts

- `scripts/sora_2_pro.py`
- `scripts/seedance_1_0_pro.py`
- `scripts/seedance_1_5_pro.py`
- `scripts/minimax_hailuo_02.py`
- `scripts/kling.py` (Refactored for Python access)
- `scripts/seedream.py` (Refactored for Python access)
- `scripts/nano_banana.py`
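The credential lookup order described in the workflow can be sketched as below. `resolve_token`, its `home` parameter, and the JSON key `"authorization"` are assumptions for illustration, not the skill's actual implementation:

```python
import json
import os
from pathlib import Path
from typing import Optional


def resolve_token(home: Path = Path.home()) -> Optional[str]:
    """Illustrative sketch of the documented credential lookup order."""
    # 1. ~/.openclaw/.env, read as simple KEY=value lines
    env_file = home / ".openclaw" / ".env"
    if env_file.exists():
        for line in env_file.read_text().splitlines():
            key, sep, value = line.partition("=")
            if sep and key.strip() == "HIDREAM_AUTHORIZATION":
                return value.strip()
    # 2. Environment variables, trying the legacy name as a fallback
    for var in ("HIDREAM_AUTHORIZATION", "OPENCLAW_AUTHORIZATION"):
        if os.environ.get(var):
            return os.environ[var]
    # 3. Secure local config file (the "authorization" key is an assumption)
    cfg = home / ".config" / "openclaw" / "hidream_config.json"
    if cfg.exists():
        return json.loads(cfg.read_text()).get("authorization")
    return None
```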