AI Video Maker From Image

v1.0.0

Convert images into animated videos with this skill. Works with JPG, PNG, WEBP, and HEIC files up to 200MB. Marketers and social media creators use it for tur...

by peand-rover · adam@peand-rover

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for peand-rover/ai-video-maker-from-image.

Prompt preview: Install & Setup
Install the skill "Ai Video Maker From Image" (peand-rover/ai-video-maker-from-image) from ClawHub.
Skill page: https://clawhub.ai/peand-rover/ai-video-maker-from-image
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required env vars: NEMO_TOKEN
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install ai-video-maker-from-image

ClawHub CLI


npx clawhub@latest install ai-video-maker-from-image
Security Scan

VirusTotal: Benign (view report →)
OpenClaw: Benign (high confidence)
Purpose & Capability
Name/description match the runtime actions: the SKILL.md documents uploading images, creating a session, submitting render jobs, and returning MP4s. Requiring a NEMO_TOKEN and calling nemovideo.ai endpoints is consistent with a cloud rendering service.
Instruction Scope
Instructions are explicit about API calls, session lifecycle, SSE handling, uploads, polling, and required headers. One notable behavior: if NEMO_TOKEN is not present the skill instructs the agent to obtain an anonymous token via POST to the provider (anonymous-token) and use it (100 free credits, 7-day expiry). That is coherent for a convenience fallback but means the agent will contact an external service and obtain credentials automatically (the skill tells the agent not to expose raw tokens).
Install Mechanism
Instruction-only skill with no install spec and no code files — lowest-risk install footprint. All runtime behavior is network/API calls described in SKILL.md.
Credentials
Declared primary credential NEMO_TOKEN is appropriate for a cloud rendering backend. Minor inconsistency: registry metadata reported no required config paths, while the skill frontmatter lists a config path (~/.config/nemovideo/). This is a small metadata mismatch but does not materially expand privileges. The skill will upload user images to the external service (expected) so the token grants access to those uploads and renders.
Persistence & Privilege
always is false and the skill does not request persistent system-wide privileges. Autonomous invocation (model-invocation enabled) is the platform default and acceptable given the skill's purpose.
Assessment
This skill appears to be what it says: a cloud-based image→video converter that calls nemovideo.ai and uses a NEMO_TOKEN. Before installing, consider: (1) privacy: images are uploaded to an external service, so avoid sending sensitive images; (2) token handling: the skill can use an existing NEMO_TOKEN or automatically obtain an anonymous token (100 free credits, 7-day expiry), which means the agent will contact the provider on first run; (3) metadata mismatch: the frontmatter mentions a local config path (~/.config/nemovideo/) that the registry metadata did not, which is minor but worth noting. If you trust the nemo provider and are comfortable uploading media, this skill is coherent; otherwise, either skip the install or review the network calls and the provider's privacy policy and terms before use.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🖼️ Clawdis
Env: NEMO_TOKEN
Primary env: NEMO_TOKEN
Latest: vk97f9qgtrey7x2wrcqd2252bps84t1gt
76 downloads · 0 stars · 1 version
Updated 1w ago
v1.0.0 · MIT-0

Getting Started

Share your images and I'll get started on AI video creation. Or just tell me what you're thinking.

Try saying:

  • "convert my images"
  • "export 1080p MP4"
  • "turn these images into a 30-second"

Quick Start Setup

This skill connects to a cloud processing backend. On first use, set up the connection automatically and let the user know ("Connecting...").

Token check: Look for NEMO_TOKEN in the environment. If found, skip to session creation. Otherwise:

  • Generate a UUID as client identifier
  • POST https://mega-api-prod.nemovideo.ai/api/auth/anonymous-token with X-Client-Id header
  • Extract data.token from the response — this is your NEMO_TOKEN (100 free credits, 7-day expiry)

Session: POST https://mega-api-prod.nemovideo.ai/api/tasks/me/with-session/nemo_agent with Bearer auth and body {"task_name":"project"}. Keep the returned session_id for all operations.

Let the user know with a brief "Ready!" when setup is complete. Don't expose tokens or raw API output.
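For concreteness, here is a minimal Python sketch of the setup flow above. The endpoint paths, the X-Client-Id header, and the data.token response field come from this document; the exact shape of the session response is an assumption, and the attribution headers described later are omitted for brevity.

```python
import os
import uuid

import requests

BASE = "https://mega-api-prod.nemovideo.ai"

def get_token() -> str:
    """Use NEMO_TOKEN if set; otherwise request an anonymous token."""
    token = os.environ.get("NEMO_TOKEN")
    if token:
        return token
    client_id = str(uuid.uuid4())  # UUID client identifier
    resp = requests.post(
        f"{BASE}/api/auth/anonymous-token",
        headers={"X-Client-Id": client_id},
        timeout=30,
    )
    resp.raise_for_status()
    # data.token is the anonymous NEMO_TOKEN (100 free credits, 7-day expiry)
    return resp.json()["data"]["token"]

def create_session(token: str) -> str:
    """Create a session; keep the returned session_id for all operations."""
    resp = requests.post(
        f"{BASE}/api/tasks/me/with-session/nemo_agent",
        headers={"Authorization": f"Bearer {token}"},
        json={"task_name": "project"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["session_id"]  # response shape assumed
```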

AI Video Maker from Image — Convert Images into Videos

Drop your images in the chat and tell me what you need. I'll handle the AI video creation on cloud GPUs — you don't need anything installed locally.

Here's a typical use: you send three product photos in JPG format, ask to "turn these images into a 30-second video with transitions and background music", and about 30-60 seconds later you've got an MP4 file ready to download. The whole thing runs at 1080p by default.

One thing worth knowing — using images with similar aspect ratios produces smoother transitions.

Matching Input to Actions

User prompts referencing ai video maker from image, aspect ratio, text overlays, or audio tracks get routed to the corresponding action via keyword and intent classification.

User says → Action (skip SSE?)
  • "export" / "导出" / "download" / "send me the video" → §3.5 Export (skips SSE)
  • "credits" / "积分" / "balance" / "余额" → §3.3 Credits (skips SSE)
  • "status" / "状态" / "show tracks" → §3.4 State (skips SSE)
  • "upload" / "上传" / user sends a file → §3.2 Upload (skips SSE)
  • Everything else (generate, edit, add BGM…) → §3.1 SSE (uses SSE)
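If you want to mirror this routing in code, a minimal keyword matcher could look like the following sketch. The real skill uses intent classification, not just substring matching, so treat this as a simplified illustration of the table.

```python
# Simplified keyword router for the table above; the keywords are the
# trigger strings from the table, matched case-insensitively.
ROUTES = [
    (("export", "导出", "download", "send me the video"), "export"),  # §3.5
    (("credits", "积分", "balance", "余额"), "credits"),              # §3.3
    (("status", "状态", "show tracks"), "state"),                     # §3.4
    (("upload", "上传"), "upload"),                                   # §3.2
]

def route(message: str) -> str:
    text = message.lower()
    for keywords, action in ROUTES:
        if any(k in text for k in keywords):
            return action
    return "sse"  # everything else (generate, edit, add BGM, ...): §3.1
```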

Cloud Render Pipeline Details

Each export job queues on a cloud GPU node that composites video layers, applies platform-spec compression (H.264, up to 1080x1920), and returns a download URL within 30-90 seconds. The session token carries render job IDs, so closing the tab before completion orphans the job.

All calls go to https://mega-api-prod.nemovideo.ai. The main endpoints:

  1. Session: POST /api/tasks/me/with-session/nemo_agent with {"task_name":"project","language":"<lang>"}. Gives you a session_id.
  2. Chat (SSE): POST /run_sse with session_id and your message in new_message.parts[0].text. Set Accept: text/event-stream. Up to 15 min.
  3. Upload: POST /api/upload-video/nemo_agent/me/<sid> with a multipart file or JSON with URLs.
  4. Credits: GET /api/credits/balance/simple returns available, frozen, total.
  5. State: GET /api/state/nemo_agent/me/<sid>/latest returns the current draft and media info.
  6. Export: POST /api/render/proxy/lambda with render ID and draft JSON. Poll GET /api/render/proxy/lambda/<id> every 30s for completed status and download URL.
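As a sketch of step 6, the export-and-poll loop might look like this in Python. The paths and the 30-second poll interval come from the list above; the request body shape and the status/download_url field names are assumptions, and headers is the attribution header set described in the next section.

```python
import time

import requests

BASE = "https://mega-api-prod.nemovideo.ai"

def export_and_wait(headers: dict, render_id: str, draft: dict,
                    max_wait: int = 600) -> str:
    """Submit a render job, then poll every 30 s until it completes."""
    resp = requests.post(
        f"{BASE}/api/render/proxy/lambda",
        headers=headers,
        json={"render_id": render_id, "draft": draft},  # body shape assumed
        timeout=30,
    )
    resp.raise_for_status()
    deadline = time.time() + max_wait
    while time.time() < deadline:
        time.sleep(30)  # poll interval from the endpoint notes above
        status = requests.get(
            f"{BASE}/api/render/proxy/lambda/{render_id}",
            headers=headers,
            timeout=30,
        ).json()
        if status.get("status") == "completed":  # field names assumed
            return status["download_url"]
    raise TimeoutError("render did not complete before the deadline")
```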

Formats: mp4, mov, avi, webm, mkv, jpg, png, gif, webp, mp3, wav, m4a, aac.

Three attribution headers are required on every request and must match this file's frontmatter:

Header → Value
  • X-Skill-Source: ai-video-maker-from-image
  • X-Skill-Version: frontmatter version
  • X-Skill-Platform: auto-detect (clawhub / cursor / unknown) from install path

All requests must include: Authorization: Bearer <NEMO_TOKEN>, X-Skill-Source, X-Skill-Version, X-Skill-Platform. Missing attribution headers will cause export to fail with 402.
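A small helper for assembling that header set might look like this. The platform value is a plain parameter here because the auto-detection logic depends on the install path and is not specified in this document.

```python
def build_headers(token: str, version: str,
                  platform: str = "unknown") -> dict:
    """All four headers required on every request (see table above)."""
    return {
        "Authorization": f"Bearer {token}",
        "X-Skill-Source": "ai-video-maker-from-image",
        "X-Skill-Version": version,    # must match the frontmatter version
        "X-Skill-Platform": platform,  # clawhub / cursor / unknown
    }
```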

Draft field mapping: t=tracks, tt=track type (0=video, 1=audio, 7=text), sg=segments, d=duration(ms), m=metadata.

Timeline (3 tracks):
  1. Video: city timelapse (0-10s)
  2. BGM: Lo-fi (0-10s, 35%)
  3. Title: "Urban Dreams" (0-3s)
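Put together, that example timeline might serialize to a compact draft like the following. The t/tt/sg/d/m keys are from the mapping above; every field name and value inside the metadata objects is hypothetical.

```python
# Hypothetical draft for the 3-track timeline above, using the compact
# field mapping (t=tracks, tt=track type, sg=segments, d=duration in ms,
# m=metadata). Segment fields inside "m" are illustrative only.
draft = {
    "t": [
        {"tt": 0, "sg": [  # video track: city timelapse, 0-10 s
            {"d": 10000, "m": {"clip": "city-timelapse"}}]},
        {"tt": 1, "sg": [  # audio track: Lo-fi BGM at 35% volume
            {"d": 10000, "m": {"clip": "lofi", "volume": 0.35}}]},
        {"tt": 7, "sg": [  # text track: title card, 0-3 s
            {"d": 3000, "m": {"text": "Urban Dreams"}}]},
    ]
}
```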

Backend Response Translation

The backend assumes a GUI exists. Translate these into API actions:

Backend says → You do
  • "click [button]" / "点击" → Execute via API
  • "open [panel]" / "打开" → Query session state
  • "drag/drop" / "拖拽" → Send edit via SSE
  • "preview in timeline" → Show track summary
  • "Export button" / "导出" → Execute export workflow

Reading the SSE Stream

Text events go straight to the user (after GUI translation). Tool calls stay internal. Heartbeats and empty data: lines mean the backend is still working — show "⏳ Still working..." every 2 minutes.

About 30% of edit operations close the stream without any text. When that happens, poll /api/state to confirm the timeline changed, then tell the user what was updated.
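Under those assumptions, a rough reading loop might look like this in Python. SSE framing is simplified to data: lines, and the event payload shape is an assumption; only the path, the Accept header, the message structure, and the 15-minute ceiling come from this document.

```python
import json

import requests

BASE = "https://mega-api-prod.nemovideo.ai"

def chat_sse(headers: dict, session_id: str, message: str) -> bool:
    """Stream one chat turn; returns False if the stream closed silently."""
    body = {
        "session_id": session_id,
        "new_message": {"parts": [{"text": message}]},
    }
    got_text = False
    with requests.post(
        f"{BASE}/run_sse",
        headers={**headers, "Accept": "text/event-stream"},
        json=body,
        stream=True,
        timeout=900,  # streams can run up to 15 min
    ) as resp:
        for raw in resp.iter_lines(decode_unicode=True):
            if not raw or not raw.startswith("data:"):
                continue
            payload = raw[len("data:"):].strip()
            if not payload:
                continue  # heartbeat / empty data: line, still working
            event = json.loads(payload)  # payload shape assumed
            if event.get("text"):
                got_text = True
                print(event["text"])  # surface after GUI translation
    return got_text  # False covers the ~30% silent close: poll /api/state
```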

Error Handling

Code → Meaning → Action
  • 0: Success → Continue
  • 1001: Bad/expired token → Re-auth via anonymous-token (tokens expire after 7 days)
  • 1002: Session not found → Create a new session (§3.0)
  • 2001: No credits → Anonymous: show registration URL with ?bind=<id> (get <id> from the create-session or state response when needed). Registered: "Top up credits in your account"
  • 4001: Unsupported file → Show supported formats
  • 4002: File too large → Suggest compressing or trimming
  • 400: Missing X-Client-Id → Generate a Client-Id and retry (see §1)
  • 402: Free plan export blocked → Subscription tier issue, NOT credits. "Register or upgrade your plan to unlock export."
  • 429: Rate limit (1 token/client/7 days) → Retry once in 30s
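One way to centralize this table in code is a simple lookup, sketched below; the action strings are shorthand for the fuller guidance above, not prescribed by the skill.

```python
# Maps backend error codes to the recovery actions in the table above.
ERROR_ACTIONS = {
    0:    "continue",
    1001: "re-auth via anonymous-token (tokens expire after 7 days)",
    1002: "create a new session (§3.0)",
    2001: "no credits: registration URL (anonymous) or top-up (registered)",
    4001: "show supported formats",
    4002: "suggest compressing or trimming the file",
    400:  "generate an X-Client-Id and retry (§1)",
    402:  "plan issue, not credits: register or upgrade to unlock export",
    429:  "rate limited (1 token/client/7 days): retry once in 30 s",
}

def describe_error(code: int) -> str:
    return ERROR_ACTIONS.get(code, "unknown code: surface the raw error")
```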

Tips and Tricks

The backend processes faster when you're specific. Instead of "make it look better", try "turn these images into a 30-second video with transitions and background music" — concrete instructions get better results.

Max file size is 200MB. Stick to JPG, PNG, WEBP, HEIC for the smoothest experience.

Export as MP4 for widest compatibility across social platforms.

Common Workflows

Quick edit: Upload → "turn these images into a 30-second video with transitions and background music" → Download MP4. Takes 30-60 seconds for a 30-second clip.

Batch style: Upload multiple files in one session. Process them one by one with different instructions. Each gets its own render.

Iterative: Start with a rough cut, preview the result, then refine. The session keeps your timeline state so you can keep tweaking.
