Install
```
openclaw skills install higgsfield-generate
```

Generate images and videos via Higgsfield AI through 30+ models including Nano Banana 2, Soul V2, Veo 3.1, Kling 3.0, Seedance 2.0, Flux 2, GPT Image 2, plus Marketing Studio for branded ad video/image with curated avatars and imported products. Use when: "generate an image", "make a picture", "create artwork", "make a video", "animate this photo", "image-to-video", "img2vid", "edit this image with AI", "stylize a photo", "remix this image", "produce a clip", "render a scene", "create an ad", "make a UGC video", "generate marketing video", "make a product demo", "create unboxing", "TV spot", "virtual try-on", "product showcase", "brand video", "presenter video for product", "import product from URL", "create avatar for ad". Supports text-to-image, image-to-image, image-to-video, reference-based generation, and Marketing Studio (avatars + products + ad modes). Auto-detects whether passed IDs are uploads or previous jobs. Chain with higgsfield-soul-id when the user wants their face in the output. NOT for: training Soul Character (use higgsfield-soul-id), professional product photoshoots with mode-specific prompt enhancement (use higgsfield-product-photoshoot), text-only / chat / TTS tasks.
Submit jobs to any Higgsfield model. Wraps the higgsfield CLI. Covers generic image/video gen and Marketing Studio (branded ads, avatars, products).
Before any other command, make sure the CLI is installed and authenticated:

- If `higgsfield` is not on `$PATH`, install it:
  `curl -fsSL https://raw.githubusercontent.com/higgsfield-ai/cli/main/install.sh | sh`
- If `higgsfield account status` fails with "Session expired" / "Not authenticated", ask the user to run `higgsfield auth login` (interactive, opens a browser) and wait for them to confirm before continuing.
- Skip both checks if `higgsfield account status` already prints account info.
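The checks above can be sketched as one preflight function. This is a sketch only: the command names come from this doc, and the `install:`/`auth:`/`ready` markers are illustrative choices, not CLI output.

```shell
# Preflight sketch: verify the CLI is installed and authenticated
# before running anything else. Markers are illustrative.
preflight() {
  if ! command -v higgsfield >/dev/null 2>&1; then
    echo "install: curl -fsSL https://raw.githubusercontent.com/higgsfield-ai/cli/main/install.sh | sh"
    return 1
  fi
  if ! higgsfield account status >/dev/null 2>&1; then
    echo "auth: ask the user to run 'higgsfield auth login', then re-check"
    return 1
  fi
  echo "ready"
}
preflight || true
```

Running it prints exactly one line: the install hint, the auth hint, or `ready`, matching the skip rule above.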
Parameter values (e.g. `--aspect_ratio 16:9`) stay English. Add `--wait` to `generate create` so the command blocks until done and prints the result URL itself; avoid the two-step create → wait pattern.

Pick a model. Practical defaults from production use:
Image:
- product photoshoots with mode-specific prompt enhancement → use `higgsfield-product-photoshoot` instead. NOT this skill.
- the user's trained Soul character (via `higgsfield-soul-id`) → Soul 2.0 for stills, Soul Cinema for cinematic.

Video: see references/model-catalog.md.
For the actual --model ID to pass to higgsfield generate create, run higgsfield model list --json | jq to map display names to IDs. See references/model-catalog.md for the full table.
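The name-to-ID mapping can be pulled out with a one-line jq filter. The JSON shape below is a guess at what `higgsfield model list --json` returns — verify the real field names first:

```shell
# Hypothetical `model list --json` shape; shows the jq mapping
# from display name to the model ID you pass on the command line.
cat <<'EOF' | jq -r '.[] | "\(.name)\t\(.id)"'
[
  {"name": "Nano Banana 2", "id": "nano_banana_2"},
  {"name": "Seedance 2.0",  "id": "seedance_2_0"}
]
EOF
```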
Pass media inputs straight to flags. Media flags accept a local file path or a UUID. The CLI auto-uploads paths and auto-detects job vs upload for UUIDs. No need to pre-upload. Each model declares accepted roles (image, start_image, end_image, video, audio) — see references/media-inputs.md.

Validate quickly. If unsure of params, run `higgsfield model get <jst> --json` once and pass only what's needed. Use schema defaults otherwise. The server returns adjustments for non-fatal coercions (e.g. `aspect_ratio=99:99` → closest match) and a structured error for invalid declared-param values.

Submit and wait in one shot. `higgsfield generate create <jst> --prompt "..." [media flags] [param flags] --wait` blocks until terminal status and prints the result URL on stdout. Tunables: `--wait-timeout 20m` (default 10m), `--wait-interval 5s` (default 3s).

Deliver. Send the URL plus a one-line summary (model, duration if video).

To inspect or rerun later, `higgsfield generate list --json` and `higgsfield generate get <id> --json` work for retrospection. `higgsfield generate wait <id>` is still available if you ever need to rejoin a job started without `--wait`.
| Flag | Use for | Models that accept it |
|---|---|---|
| `--image <path-or-id>` | reference image | most image models, seedance_2_0, veo3, marketing_studio_video |
| `--start-image <path-or-id>` | first frame for image-to-video transitions | kling3_0, kling2_6, veo3_1, seedance_2_0, marketing_studio_video |
| `--end-image <path-or-id>` | last frame for transitions | kling3_0, seedance_2_0, marketing_studio_video |
| `--video <path-or-id>` | reference video | seedance_2_0 |
| `--audio <path-or-id>` | reference audio (lipsync, soundtrack match) | seedance_2_0 (use this, NOT `--generate-audio`) |
Each flag accepts either a local file path (auto-uploaded) or a UUID (upload id from higgsfield upload create, or a previous job id). Each model declares its own role set via MEDIA_ROLES. See references/media-inputs.md for the full table.
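The path-vs-UUID distinction these flags rely on can be illustrated with a simple check. The CLI's actual detection logic is not documented here; this only shows why the two argument forms are unambiguous:

```shell
# A value matching the canonical UUID format is treated as an
# upload/job id; anything else is treated as a local file path.
is_uuid() {
  printf '%s\n' "$1" | grep -Eiq '^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$'
}
is_uuid "3f2504e0-4f89-11d3-9a0c-0305e82c3301" && echo "treated as upload/job id"
is_uuid "./ref.png" || echo "treated as local path (auto-uploaded)"
```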
Flags pass through to the model schema. Use `higgsfield model get <jst>` to discover them.
```
higgsfield generate create gpt_image_2 --prompt "neon city at dusk" --aspect_ratio 16:9 --resolution 2k --wait
higgsfield generate create nano_banana_2 --prompt "anime character concept, expressive pose" --image ./ref.png --wait
higgsfield generate create seedance_2_0 --prompt "camera dollies in" --start-image ./first.png --duration 8 --wait
higgsfield generate create text2image_soul_v2 --prompt "..." --soul-id <soul_ref_id> --wait
```
For machine-readable output (chained pipelines, agent context), add --json. With --wait --json you get the final job object array. Without --wait, you get the job IDs.
Stdin prompt: `echo "..." | higgsfield generate create z_image --wait`.
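For scripting against `--wait --json`, you typically want just the result URL out of the final job object array. The field names below (`status`, `result_url`) are assumptions for illustration — check the real output shape before relying on them:

```shell
# Stand-in for what `--wait --json` might print; extract the URL
# of the first (only) job with jq. Field names are assumed.
job_json='[{"id":"0b1c","status":"completed","result_url":"https://example.com/out.png"}]'
echo "$job_json" | jq -r '.[0].result_url'
```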
Branded image/video gen: avatars + products + ad-style modes. Use models marketing_studio_video and marketing_studio_image.
- Avatars: preset (browse `higgsfield marketing-studio avatars list`) or custom (uploaded photos via `higgsfield marketing-studio avatars create`).
- Products: fetched from a URL (`higgsfield marketing-studio products fetch --url ...`) or created from uploaded images (`higgsfield marketing-studio products create`).
  - From URL: `higgsfield marketing-studio products fetch --url <url> --wait` (polls until import done)
  - From images: `higgsfield upload create <photo>...` then `higgsfield marketing-studio products create --title "..." --image <id>...`
1. Capture the product id.
2. Run `higgsfield marketing-studio avatars list` and pick a preset matching the brand voice, or create a custom avatar with `higgsfield marketing-studio avatars create --name "..." --image <upload_id>`.
3. Default mode is `ugc`. Other slugs (canonical from MCP): ugc_how_to, ugc_unboxing, product_showcase, product_review, tv_spot, wild_card, ugc_virtual_try_on, virtual_try_on. See references/marketing-modes.md.
4. Submit:

```
higgsfield generate create marketing_studio_video \
  --prompt "..." \
  --avatars '[{"id":"<avatar_id>","type":"preset"}]' \
  --product_ids '[<product_id>]' \
  --mode ugc \
  --duration 15 \
  --resolution 720p \
  --aspect_ratio 9:16 \
  --wait
```
Resolution is 480p or 720p. Aspect ratio is one of auto/21:9/16:9/4:3/1:1/3:4/9:16. `--generate-audio true` is supported here (unlike seedance_2_0). `--wait` blocks until done; bump `--wait-timeout 30m` for longer ad runs.

When the user gives a product URL and wants a marketing video in one go:
```
# 1. Trigger fetch (returns the product id and starts background scrape)
higgsfield marketing-studio products fetch --url https://shop.example.com/sneakers --wait

# 2. Generate the marketing video against the same URL — backend reuses the entity
higgsfield generate create marketing_studio_video \
  --url https://shop.example.com/sneakers \
  --mode ugc \
  --duration 15 \
  --aspect_ratio 9:16 \
  --wait
```
Backend dedupes by URL, so repeated runs reuse the existing entity instead of re-fetching.
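The marketing_studio_video constraints listed earlier (resolution 480p/720p; aspect ratio auto/21:9/16:9/4:3/1:1/3:4/9:16) can be guarded client-side before submitting. A sketch only — the server remains the source of truth and coerces or rejects on its own:

```shell
# Validate resolution and aspect_ratio against the documented
# marketing_studio_video enums before spending a job submission.
check_msv_params() {
  case "$1" in 480p|720p) : ;; *) echo "bad resolution: $1"; return 1 ;; esac
  case "$2" in auto|21:9|16:9|4:3|1:1|3:4|9:16) : ;; *) echo "bad aspect_ratio: $2"; return 1 ;; esac
  echo ok
}
check_msv_params 720p 9:16
```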
Same as above but with the marketing_studio_image model:

```
higgsfield generate create marketing_studio_image \
  --prompt "..." \
  --aspect_ratio 1:1 \
  --resolution 2k \
  --wait
```
- Missing required params: `prompt` → user gave no prompt; ask for it.
- Invalid values: `aspect_ratio=99:99 (allowed: ...)` → bad enum; pick from the allowed list.
- Unknown params: `foo` → schema doesn't accept that flag; check `higgsfield model get <jst>`.
- `Session expired` → `higgsfield auth login`.

See references/troubleshooting.md for more.
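The error classes above lend themselves to a simple triage dispatch. The matched substrings here are illustrative, not verbatim CLI output — align them with what the CLI actually prints:

```shell
# Map an error message to the recovery action described above.
triage() {
  case "$1" in
    *"Missing required"*) echo "ask the user for the missing param" ;;
    *"allowed:"*)         echo "pick a value from the allowed list" ;;
    *"Unknown param"*)    echo "check: higgsfield model get <model> --json" ;;
    *"Session expired"*)  echo "run: higgsfield auth login" ;;
    *)                    echo "see references/troubleshooting.md" ;;
  esac
}
triage "Session expired"
```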
Load on demand:
- references/model-catalog.md — picking the right model for the task
- references/prompt-engineering.md — writing prompts that work
- references/media-inputs.md — image/video reference flows
- references/troubleshooting.md — common errors and fixes
- references/marketing-avatars.md — preset vs custom avatars
- references/marketing-products.md — URL fetch vs manual product create
- references/marketing-modes.md — every Marketing Studio mode