Image To Video Make

v1.0.0

Skip the learning curve of professional editing software. Describe what you want — turn these photos into a 15-second video with smooth transitions — and get...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for mhogan2013-9/image-to-video-make.

Prompt Preview: Install & Setup
Install the skill "Image To Video Make" (mhogan2013-9/image-to-video-make) from ClawHub.
Skill page: https://clawhub.ai/mhogan2013-9/image-to-video-make
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required env vars: NEMO_TOKEN
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install image-to-video-make

ClawHub CLI


npx clawhub@latest install image-to-video-make
Security Scan
VirusTotal: Benign. View report →
OpenClaw: Benign (medium confidence)
Purpose & Capability
Name/description, endpoints, and the single required credential (NEMO_TOKEN) are coherent with a cloud image→video rendering service. No unrelated binaries or third‑party credentials are requested.
Instruction Scope
The SKILL.md fully describes contacting mega-api-prod.nemovideo.ai, creating sessions, uploading images, reading SSE, polling render status, and returning download URLs — all within the stated purpose. Two points to watch: (1) it instructs the agent to read this file's frontmatter and detect install path to set attribution headers (this requires filesystem access to the agent runtime), and (2) it tells the agent to 'store' session_id and token but doesn’t specify where or how (in-memory, env, or disk). Those are reasonable for a session-based client but are underspecified and grant the agent discretion about persistence and storage location.
Install Mechanism
Instruction-only skill with no install spec and no code files — nothing is written to disk by an installer. This is the lowest-risk install mechanism.
Credentials
The only required environment credential is NEMO_TOKEN (declared as primaryEnv), which matches the described API usage. The SKILL.md also includes metadata requiring a config path (~/.config/nemovideo/) even though the registry metadata listed none; this mismatch is minor but worth clarifying. The skill can also create an anonymous token if NEMO_TOKEN is absent, which is consistent with the described flow.
Persistence & Privilege
always:false and autonomous invocation are normal. The skill asks to persist session tokens and session_id for subsequent requests; it does not request system-wide changes or other skills' configs. Clarify where tokens/session IDs are stored and how long they persist (disk, agent config, or memory) before installing.
Assessment
This skill appears to do what it says: it connects to nemovideo's API to create/render videos from uploaded images. Before installing, confirm two things: (1) where the skill will store the NEMO_TOKEN and session_id (in-memory vs written to disk and which path), and whether you’re comfortable with that persistence; (2) whether the agent runtime will be allowed to read install paths or files (the skill reads its frontmatter and examines install path to set attribution headers). If you need stricter guarantees, set NEMO_TOKEN yourself and ask the skill/runtime not to persist the token to disk. If you want higher assurance, test in a sandboxed agent environment first.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🖼️ Clawdis
Env: NEMO_TOKEN
Primary env: NEMO_TOKEN
Latest: vk97ck6qhmyq77makbrfag0dg9h859vdy
82 downloads
0 stars
1 version
Updated 6d ago
v1.0.0
MIT-0

Getting Started

Share your images and I'll get started on AI video creation. Or just tell me what you're thinking.

Try saying:

  • "convert my images"
  • "export 1080p MP4"
  • "turn these photos into a 15-second"

First-Time Connection

When a user first opens this skill, connect to the processing backend automatically. Briefly let them know (e.g. "Setting up...").

Authentication: Check if NEMO_TOKEN is set in the environment. If it is, skip to step 2.

  1. Obtain a free token: Generate a random UUID as a client identifier. POST to https://mega-api-prod.nemovideo.ai/api/auth/anonymous-token with the header X-Client-Id set to that UUID. The response's data.token is your NEMO_TOKEN — 100 free credits, valid for 7 days.
  2. Create a session: POST to https://mega-api-prod.nemovideo.ai/api/tasks/me/with-session/nemo_agent with Authorization: Bearer <token>, Content-Type: application/json, and body {"task_name":"project","language":"<detected>"}. Store the returned session_id for all subsequent requests.
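
A minimal sketch of this two-step flow in TypeScript (Node 18+ fetch), assuming the session response exposes session_id at the top level; verify the field paths against live responses before relying on them:

```typescript
import { randomUUID } from "node:crypto";

const BASE = "https://mega-api-prod.nemovideo.ai";

// Step 1: reuse NEMO_TOKEN if set, otherwise request an anonymous token.
async function getToken(): Promise<string> {
  if (process.env.NEMO_TOKEN) return process.env.NEMO_TOKEN;
  const res = await fetch(`${BASE}/api/auth/anonymous-token`, {
    method: "POST",
    headers: { "X-Client-Id": randomUUID() },
  });
  const body = await res.json();
  return body.data.token; // 100 free credits, valid 7 days (per this page)
}

// Step 2: create a session and keep session_id for all later requests.
async function createSession(token: string, language = "en"): Promise<string> {
  const res = await fetch(`${BASE}/api/tasks/me/with-session/nemo_agent`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${token}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ task_name: "project", language }),
  });
  const body = await res.json();
  return body.session_id; // field path assumed; may be nested under data
}
```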

Keep setup communication brief. Don't display raw API responses or token values to the user.

Image to Video Maker — Convert Photos Into Video Clips

Send me your images and describe the result you want. The AI video creation runs on remote GPU nodes — nothing to install on your machine.

A quick example: upload three product photos in JPG format, type "turn these photos into a 15-second video with smooth transitions", and you'll get a 1080p MP4 back in roughly 30-60 seconds. All rendering happens server-side.

Worth noting: using fewer images with longer durations per image produces smoother results.

Matching Input to Actions

User prompts referencing image-to-video conversion, aspect ratio, text overlays, or audio tracks are routed to the corresponding action via keyword and intent classification.

| User says... | Action | Skip SSE? |
| --- | --- | --- |
| "export" / "导出" / "download" / "send me the video" | §3.5 Export | Yes |
| "credits" / "积分" / "balance" / "余额" | §3.3 Credits | Yes |
| "status" / "状态" / "show tracks" | §3.4 State | Yes |
| "upload" / "上传" / user sends file | §3.2 Upload | Yes |
| Everything else (generate, edit, add BGM…) | §3.1 SSE | No |
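
For illustration, a keyword-first router along the lines of this table; the regex lists and the hasAttachment check are one possible encoding, not the skill's actual classifier:

```typescript
type Action = "export" | "credits" | "state" | "upload" | "sse";

// Keyword lists mirror the routing table above (English and Chinese triggers).
const ROUTES: Array<[RegExp, Action]> = [
  [/export|导出|download|send me the video/i, "export"],
  [/credits|积分|balance|余额/i, "credits"],
  [/status|状态|show tracks/i, "state"],
  [/upload|上传/i, "upload"],
];

function route(message: string, hasAttachment: boolean): Action {
  if (hasAttachment) return "upload"; // the "user sends file" row
  for (const [pattern, action] of ROUTES) {
    if (pattern.test(message)) return action;
  }
  return "sse"; // everything else: generate, edit, add BGM, ...
}
```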

Cloud Render Pipeline Details

Each export job queues on a cloud GPU node that composites video layers, applies platform-spec compression (H.264, up to 1080x1920), and returns a download URL within 30-90 seconds. The session token carries render job IDs, so closing the tab before completion orphans the job.

All calls go to https://mega-api-prod.nemovideo.ai. The main endpoints:

  1. Session: POST /api/tasks/me/with-session/nemo_agent with {"task_name":"project","language":"<lang>"}. Gives you a session_id.
  2. Chat (SSE): POST /run_sse with session_id and your message in new_message.parts[0].text. Set Accept: text/event-stream. Up to 15 min.
  3. Upload: POST /api/upload-video/nemo_agent/me/<sid> — multipart file or JSON with URLs.
  4. Credits: GET /api/credits/balance/simple — returns available, frozen, total.
  5. State: GET /api/state/nemo_agent/me/<sid>/latest — current draft and media info.
  6. Export: POST /api/render/proxy/lambda with render ID and draft JSON. Poll GET /api/render/proxy/lambda/<id> every 30s for completed status and download URL.
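
As a rough sketch of item 6, submission plus the 30-second poll loop might look like this; the request body and response field names (render_id, status, download_url) are assumptions, since only the endpoints and the polling cadence are documented:

```typescript
const BASE = "https://mega-api-prod.nemovideo.ai";

// headers should include Authorization plus the three attribution headers
// described below (exports return 402 without them).
async function exportDraft(
  headers: Record<string, string>,
  renderId: string,
  draft: unknown,
): Promise<string> {
  // Submit the render job with the draft JSON.
  await fetch(`${BASE}/api/render/proxy/lambda`, {
    method: "POST",
    headers: { ...headers, "Content-Type": "application/json" },
    body: JSON.stringify({ render_id: renderId, draft }), // body shape assumed
  });
  // Poll every 30s until the job reports completion.
  for (;;) {
    await new Promise((resolve) => setTimeout(resolve, 30_000));
    const res = await fetch(`${BASE}/api/render/proxy/lambda/${renderId}`, { headers });
    const body = await res.json();
    if (body.status === "completed") return body.download_url; // fields assumed
  }
}
```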

Formats: mp4, mov, avi, webm, mkv, jpg, png, gif, webp, mp3, wav, m4a, aac.

Skill attribution — read from this file's YAML frontmatter at runtime:

  • X-Skill-Source: image-to-video-make
  • X-Skill-Version: from frontmatter version
  • X-Skill-Platform: detect from the install path (~/.clawhub → clawhub, ~/.cursor/skills → cursor, else unknown)

Every API call needs Authorization: Bearer <NEMO_TOKEN> plus the three attribution headers above. If any header is missing, exports return 402.
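
A sketch of assembling that header set; reading the version out of the YAML frontmatter is elided here, so skillVersion is passed in as a plain argument:

```typescript
// Map the install path to the platform value, per the list above.
function detectPlatform(installPath: string): string {
  if (installPath.includes(".clawhub")) return "clawhub";
  if (installPath.includes(".cursor/skills")) return "cursor";
  return "unknown";
}

// Build the full header set required on every API call.
function apiHeaders(token: string, skillVersion: string, installPath: string) {
  return {
    Authorization: `Bearer ${token}`,
    "X-Skill-Source": "image-to-video-make",
    "X-Skill-Version": skillVersion, // from the skill's frontmatter
    "X-Skill-Platform": detectPlatform(installPath),
  };
}
```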

Draft field mapping: t=tracks, tt=track type (0=video, 1=audio, 7=text), sg=segments, d=duration(ms), m=metadata.

Example timeline (3 tracks):

  1. Video: city timelapse (0-10s)
  2. BGM: Lo-fi (0-10s, 35%)
  3. Title: "Urban Dreams" (0-3s)
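
Using the abbreviations above, that timeline might serialize roughly like this; only the field abbreviations are documented, so the nesting and the metadata keys (name, volume, text) are guesses:

```typescript
// Hypothetical draft encoding of the three-track example timeline.
const draft = {
  t: [ // tracks
    { tt: 0, sg: [{ d: 10_000, m: { name: "city timelapse" } }] },      // video track
    { tt: 1, sg: [{ d: 10_000, m: { name: "Lo-fi", volume: 0.35 } }] }, // audio at 35%
    { tt: 7, sg: [{ d: 3_000, m: { text: "Urban Dreams" } }] },         // text title
  ],
};
```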

Translating GUI Instructions

The backend responds as if there's a visual interface. Map its instructions to API calls:

  • "click" or "点击" → execute the action via the relevant endpoint
  • "open" or "打开" → query session state to get the data
  • "drag/drop" or "拖拽" → send the edit command through SSE
  • "preview in timeline" → show a text summary of current tracks
  • "Export" or "导出" → run the export workflow

Reading the SSE Stream

Text events go straight to the user (after GUI translation). Tool calls stay internal. Heartbeats and empty "data:" lines mean the backend is still working; show "⏳ Still working..." every 2 minutes.

About 30% of edit operations close the stream without any text. When that happens, poll /api/state to confirm the timeline changed, then tell the user what was updated.
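
A minimal stream-reading sketch (Node 18+ fetch) applying the two rules above; cross-chunk line buffering is elided for brevity, and the payload shape separating text events from tool calls is an assumption:

```typescript
const BASE = "https://mega-api-prod.nemovideo.ai";

// Returns true if the stream produced any text; false is the silent-close
// case, where the caller should poll /api/state to confirm the edit landed.
async function runSse(token: string, sessionId: string, text: string): Promise<boolean> {
  const res = await fetch(`${BASE}/run_sse`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${token}`,
      "Content-Type": "application/json",
      Accept: "text/event-stream",
    },
    body: JSON.stringify({
      session_id: sessionId,
      new_message: { parts: [{ text }] },
    }),
  });
  let sawText = false;
  const reader = res.body!.pipeThrough(new TextDecoderStream()).getReader();
  for (;;) {
    const { value, done } = await reader.read();
    if (done) break;
    for (const line of value.split("\n")) {
      if (!line.startsWith("data:")) continue;
      const payload = line.slice(5).trim();
      if (payload === "") continue; // heartbeat: backend is still working
      sawText = true; // forward text events to the user after GUI translation
    }
  }
  return sawText;
}
```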

Error Handling

| Code | Meaning | Action |
| --- | --- | --- |
| 0 | Success | Continue |
| 1001 | Bad/expired token | Re-auth via anonymous-token (tokens expire after 7 days) |
| 1002 | Session not found | New session (§3.0) |
| 2001 | No credits | Anonymous: show the registration URL with ?bind=<id> (get <id> from the create-session or state response when needed). Registered: "Top up credits in your account" |
| 4001 | Unsupported file | Show supported formats |
| 4002 | File too large | Suggest compressing or trimming |
| 400 | Missing X-Client-Id | Generate a Client-Id and retry (see §1) |
| 402 | Free-plan export blocked | Subscription-tier issue, NOT credits. "Register or upgrade your plan to unlock export." |
| 429 | Rate limit (1 token/client/7 days) | Retry once after 30s |
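
For reference, the table collapses to a simple dispatch; the returned strings are paraphrases of the actions above:

```typescript
// Map a response code to the recovery action from the error table.
function recoveryFor(code: number): string {
  switch (code) {
    case 0: return "continue";
    case 1001: return "re-auth via anonymous-token (token expired)";
    case 1002: return "create a new session (§3.0)";
    case 2001: return "out of credits: show registration URL or top-up guidance";
    case 4001: return "show supported formats";
    case 4002: return "suggest compressing or trimming the file";
    case 400: return "generate an X-Client-Id and retry (§1)";
    case 402: return "plan-tier block, not credits: register or upgrade";
    case 429: return "rate limited: retry once after 30s";
    default: return "unknown code: surface the raw error";
  }
}
```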

Tips and Tricks

The backend processes faster when you're specific. Instead of "make it look better", try "turn these photos into a 15-second video with smooth transitions" — concrete instructions get better results.

Max file size is 200MB. Stick to JPG, PNG, WEBP, HEIC for the smoothest experience.

Export as MP4 for widest compatibility across all platforms.

Common Workflows

Quick edit: Upload → "turn these photos into a 15-second video with smooth transitions" → Download MP4. Takes 30-60 seconds for a 30-second clip.

Batch style: Upload multiple files in one session. Process them one by one with different instructions. Each gets its own render.

Iterative: Start with a rough cut, preview the result, then refine. The session keeps your timeline state so you can keep tweaking.
